Fourteen years later, Pasareanu’s automated software-testing work awarded for retrospective impact

Daniel Tkacik

Apr 19, 2018


Fourteen years ago, Corina Pasareanu, CyLab associate research professor, and two of her colleagues published a paper outlining three automated techniques for checking software for bugs and vulnerabilities. At the time, software failures cost the United States about $60 billion every year, and today that cost is nearly $1 trillion worldwide.

“These three techniques are seminal,” Pasareanu says. “People are still actively investigating this kind of work. The problem is very difficult.”

This month, Pasareanu and her colleagues are receiving the 2018 Retrospective Impact Award from the International Symposium on Software Testing and Analysis (ISSTA). Pasareanu will accept the award alongside her two co-authors, University of Texas at Austin professor Sarfraz Khurshid and Stellenbosch University professor Willem Visser, at ISSTA 2018 in Amsterdam, July 16-21.

In their keynote talk, “Test input generation with Java PathFinder: Then and Now,” Pasareanu, Khurshid and Visser will review the 14 years of research that have built on the work and discuss future directions.

“The idea for the work is that you have a complex piece of software, and you want to test it—run it on inputs and see if it performs the desired functionality. The difficulty is that often software takes in complex, structured inputs, and it is difficult to automate the input generation,” Pasareanu says.


Java PathFinder is an open-source tool that automatically finds bugs and vulnerabilities in programs written in Java, one of the most popular programming languages in use, with more than nine million developers worldwide. It was originally developed at the NASA Ames Research Center as part of an effort to build tools and methods for finding and fixing bugs in complex, mission-critical software systems.
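
The article doesn't walk through a run, but a minimal sketch can make this concrete. The toy class below (my own illustration, not from the story) contains a check-then-act race: a checker like Java PathFinder, which systematically explores thread interleavings and program states, can drive execution to the interleaving that violates the assertion, whereas ordinary testing would rarely hit it.

```java
// Toy example (not from the article): a check-then-act race of the
// kind a model checker like Java PathFinder can expose by exploring
// all thread interleavings rather than relying on luck.
public class Racer {
    static int balance = 100;

    static void withdraw(int amount) {
        if (balance >= amount) {   // check ...
            Thread.yield();        // the other thread may run here
            balance -= amount;     // ... then act on a stale check
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> withdraw(100));
        Thread t2 = new Thread(() -> withdraw(100));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Under one interleaving both checks pass before either
        // withdrawal, driving the balance to -100.
        assert balance >= 0 : "balance is negative: " + balance;
    }
}
```

In typical JPF usage, the tool is pointed at a class like this through a small .jpf properties file (e.g., target=Racer) and reports the exact interleaving that reaches the violation.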

Testing software for bugs involves generating mock inputs and seeing how the software responds to them. Some inputs are simple, like single integers or strings, while others are very complex, involving long, multi-dimensional tables of data or linked structures such as lists or trees.
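
To make that concrete, consider a hypothetical example: even a tiny method over binary search trees can only be exercised by first hand-building a well-formed tree for every case.

```java
// Hypothetical example of a complex, structured input: each test of
// a tree-searching method requires constructing a well-formed tree.
class Node {
    int value;
    Node left, right;
    Node(int value) { this.value = value; }
}

class Bst {
    static boolean contains(Node root, int key) {
        if (root == null) return false;
        if (key == root.value) return true;
        return key < root.value ? contains(root.left, key)
                                : contains(root.right, key);
    }
}

class ManualTest {
    public static void main(String[] args) {
        // One hand-built input; covering every tree shape and key
        // position this way quickly becomes impractical.
        Node root = new Node(5);
        root.left = new Node(2);
        root.right = new Node(8);
        assert Bst.contains(root, 8);
        assert !Bst.contains(root, 3);
    }
}
```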

“There are many applications that require complex data structures, and it’s very hard to test for them,” says Pasareanu. “Manual generation is very time-consuming and it’s very hard to make sure you cover all of the cases, so in this paper we present three techniques to do these systematically and automatically.”
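
As one illustration of what generating structured inputs "systematically and automatically" can look like (a generic bounded-exhaustive sketch, not the paper's three techniques), a generator can enumerate every binary-tree shape up to a small size bound and hand each one to the code under test:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of systematic input generation: enumerate every
// binary-tree shape up to a size bound and run the code under test on
// each. This shows the general idea, not the 2004 paper's algorithms.
public class TreeGen {
    static class Node {
        Node left, right;
    }

    // All tree shapes with exactly n nodes, built by splitting the
    // remaining n-1 nodes between the left and right subtrees.
    static List<Node> shapes(int n) {
        List<Node> result = new ArrayList<>();
        if (n == 0) {
            result.add(null);     // the empty tree
            return result;
        }
        for (int k = 0; k < n; k++) {
            for (Node left : shapes(k)) {
                for (Node right : shapes(n - 1 - k)) {
                    Node root = new Node();
                    root.left = left;
                    root.right = right;
                    result.add(root);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 4; n++) {
            // 1, 1, 2, 5, 14 shapes: the Catalan numbers.
            System.out.println(n + " nodes: " + shapes(n).size() + " shapes");
            // Each generated tree would be passed to the method under
            // test here, with an assertion serving as the oracle.
        }
    }
}
```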