Related Researcher

Yi, Jooyong (이주용)
Programming Languages and Software Engineering Lab.

Detailed Information

Codeflaws: A programming competition benchmark for evaluating automated program repair tools

Author(s)
Tan, Shin Hwei; Yi, Jooyong; Yulis; Mechtaev, Sergey; Roychoudhury, Abhik
Issued Date
2017-05-20
DOI
10.1109/ICSE-C.2017.76
URI
https://scholarworks.unist.ac.kr/handle/201301/35329
Fulltext
https://ieeexplore.ieee.org/document/7965296
Citation
39th IEEE/ACM International Conference on Software Engineering Companion, ICSE-C 2017, pp. 180-182
Abstract
Several automated program repair techniques have been proposed to reduce the time and effort spent on bug-fixing. While these repair tools are designed to be generic so that they can address many kinds of software faults, different repair tools may fix certain types of faults more effectively than others. It is therefore important to objectively compare the effectiveness of different repair tools across fault types. However, existing benchmarks for automated program repair do not allow a thorough investigation of the relationship between fault types and the effectiveness of repair tools. We present Codeflaws, a set of 3902 defects from 7436 programs, automatically classified into 39 defect classes (we refer to different types of faults as defect classes, derived from the syntactic differences between a buggy program and its patched program). © 2017 IEEE.
Publisher
IEEE
