Related Researcher

Kim, Mijung (김미정)


Full metadata record

DC Field Value Language
dc.citation.conferencePlace CN -
dc.citation.title IEEE International Conference on Software Analysis, Evolution and Reengineering -
dc.contributor.author Xie, Danning -
dc.contributor.author Yoo, Byoungwoo -
dc.contributor.author Jiang, Nan -
dc.contributor.author Kim, Mijung -
dc.contributor.author Tan, Lin -
dc.contributor.author Zhang, Xiangyu -
dc.contributor.author Lee, Judy -
dc.date.accessioned 2024-12-06T17:35:05Z -
dc.date.available 2024-12-06T17:35:05Z -
dc.date.created 2024-12-01 -
dc.date.issued 2025-03-04 -
dc.description.abstract Software specifications are essential for many Software Engineering (SE) tasks such as bug detection and test generation. Many approaches have been proposed to extract specifications expressed in natural language (e.g., comments) into machine-readable formal forms (e.g., first-order logic). However, existing approaches suffer from limited generalizability and require manual effort. The recent emergence of Large Language Models (LLMs), which have been successfully applied to numerous software engineering tasks, offers a promising avenue for automating this process. In this paper, we conduct the first empirical study to evaluate the capabilities of LLMs for generating software specifications from software comments or documentation. We evaluate LLMs' performance with Few-Shot Learning (FSL) and compare the performance of 13 state-of-the-art LLMs with that of traditional approaches. In addition, we conduct a comparative diagnosis of the failure cases from both LLMs and traditional methods, identifying their unique strengths and weaknesses. Our study offers valuable insights for future research to improve specification generation. -
dc.identifier.bibliographicCitation IEEE International Conference on Software Analysis, Evolution and Reengineering -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/84715 -
dc.language English -
dc.publisher IEEE -
dc.title How Effective are Large Language Models in Generating Software Specifications? -
dc.type Conference Paper -
dc.date.conferenceDate 2025-03-04 -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.