ACM International Symposium on Software Testing and Analysis
Abstract
Input constraints are useful for many software development tasks. For example, the input constraints of a function enable the generation of valid inputs, i.e., inputs that satisfy these constraints, to test the function more deeply. API functions of deep learning (DL) libraries have DL-specific input constraints, which are described informally in free-form API documentation. Existing constraint-extraction techniques are ineffective at extracting DL-specific input constraints.
To fill this gap, we design and implement a new technique, Ddocter, that analyzes API documentation to extract DL-specific input constraints for DL API functions. Ddocter features a novel algorithm that automatically constructs rules for extracting API parameter constraints from syntactic subtree patterns of API descriptions. These rules are then applied to a large volume of API documents of popular DL libraries to extract their input parameter constraints. To demonstrate the effectiveness of the extracted constraints, Ddocter uses them to guide the testing of DL API functions; specifically, the constraints enable the automatic generation of both valid and invalid inputs.
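To give a flavor of constraint-guided input generation, the sketch below hand-codes two toy extraction patterns (dimensionality and dtype) and uses the resulting constraints to produce one valid and one invalid input. This is a hypothetical simplification for illustration only; Ddocter's actual rules are constructed automatically from syntactic subtree patterns, not hand-written regexes, and the function and constraint names here are our own.

```python
import re
import numpy as np

def extract_constraints(description):
    """Toy extractor: map free-form parameter text to a constraint dict.
    (Ddocter derives such rules from syntactic subtrees; these two
    regexes are illustrative stand-ins.)"""
    constraints = {}
    m = re.search(r"(\d)-D", description)
    if m:
        constraints["ndim"] = int(m.group(1))
    m = re.search(r"of type (\w+)", description)
    if m:
        constraints["dtype"] = m.group(1)
    return constraints

def generate_valid(constraints):
    """Generate an input that satisfies the extracted constraints."""
    ndim = constraints.get("ndim", 1)
    dtype = constraints.get("dtype", "float32")
    shape = tuple(2 for _ in range(ndim))  # arbitrary small shape
    return np.zeros(shape, dtype=dtype)

def generate_invalid(constraints):
    """Deliberately violate one constraint (here: the dimensionality)."""
    valid = generate_valid(constraints)
    return np.expand_dims(valid, axis=0)  # one dimension too many

# Example: a parameter description as it might appear in API docs.
doc = "input: a 2-D tensor of type int32"
c = extract_constraints(doc)
```

Valid inputs exercise the function's core logic past its input-validation layer, while invalid inputs probe whether the function rejects ill-formed arguments gracefully instead of crashing.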
Our evaluation on three popular DL libraries (TensorFlow, PyTorch, and MXNet) shows that Ddocter extracts input constraints with a precision of 85.8%. Ddocter detects 96 bugs, including one previously unknown security vulnerability that is now documented in the CVE database, whereas a baseline technique without input constraints detects only 69 bugs. Most (67) of the 96 bugs were previously unknown, 43 of which have been fixed or confirmed by developers after we reported them. In addition, Ddocter detects 38 inconsistencies within the documents, 28 of which have been fixed or confirmed after we reported them.