
How Effective are Large Language Models in Generating Software Specifications?

Author(s)
Xie, Danning; Yoo, Byoungwoo; Jiang, Nan; Kim, Mijung; Tan, Lin; Zhang, Xiangyu; Lee, Judy
Issued Date
2025-03-04
URI
https://scholarworks.unist.ac.kr/handle/201301/84715
Citation
IEEE International Conference on Software Analysis, Evolution and Reengineering
Abstract
Software specifications are essential for many Software Engineering (SE) tasks such as bug detection and test generation. Many approaches have been proposed to extract specifications written in natural language (e.g., comments) into machine-readable formal forms (e.g., first-order logic), but they suffer from limited generalizability and require manual effort. The recent emergence of Large Language Models (LLMs), which have been successfully applied to numerous software engineering tasks, offers a promising avenue for automating this process. In this paper, we conduct the first empirical study to evaluate the capabilities of LLMs for generating software specifications from software comments or documentation. We evaluate LLMs' performance with Few-Shot Learning (FSL) and compare 13 state-of-the-art LLMs against traditional approaches. In addition, we conduct a comparative diagnosis of the failure cases from both LLMs and traditional methods, identifying their unique strengths and weaknesses. Our study offers valuable insights for future research on improving specification generation.
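To illustrate the Few-Shot Learning setup the abstract describes, here is a minimal sketch (not taken from the paper) of how an FSL prompt for translating code comments into formal specifications might be assembled. The example comment/specification pairs, the prompt template, and the `build_fsl_prompt` helper are all illustrative assumptions, not the authors' actual prompts.

```python
# Illustrative few-shot example pairs: (natural-language comment,
# machine-readable specification). These are assumed, not from the paper.
FEW_SHOT_EXAMPLES = [
    ("@throws NullPointerException if key is null",
     "key == null => throws(NullPointerException)"),
    ("@return true if the list is empty",
     "returns(true) <=> isEmpty(list)"),
]

def build_fsl_prompt(comment: str) -> str:
    """Assemble a few-shot prompt: an instruction, the example pairs,
    then the query comment whose specification the LLM should complete."""
    lines = ["Translate each Javadoc comment into a formal specification."]
    for c, spec in FEW_SHOT_EXAMPLES:
        lines.append(f"Comment: {c}")
        lines.append(f"Specification: {spec}")
    lines.append(f"Comment: {comment}")
    lines.append("Specification:")  # the model continues from here
    return "\n".join(lines)

prompt = build_fsl_prompt("@throws IndexOutOfBoundsException if index < 0")
print(prompt)
```

The resulting string would then be sent to any of the evaluated LLMs; varying the number and choice of example pairs is what the few-shot comparison in the study revolves around.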
Publisher
IEEE

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.