Concordia University

http://www.concordia.ca/content/shared/en/news/encs/computer-science/2019/07/29/Master-Thesis-Defense-Zishuo-Ding.html


Master Thesis Defense: Zishuo Ding

July 29, 2019

Speaker: Zishuo Ding

Supervisor: Dr. W. Shang

Examining Committee: Drs. J. Rilling, J. Yang, T.-H. Chen (Chair)

Title: Towards the Use of Readily Available Functional Tests in Verifying Performance Issues Fixes

Date: Monday, July 29, 2019

Time: 10:00am

Place: EV 2.260

ABSTRACT

Performance is an important aspect of software quality. Performance issues exist widely in software systems, and fixing them is an essential step in the release cycle. Although different types of performance testing, such as load testing, stress testing, and micro-benchmarking, are adopted in practice, verifying fixes to performance issues remains challenging, as such performance testing can be expensive and time-consuming. In particular, load testing and stress testing are usually conducted after a system is deployed in the field or in a dedicated performance testing environment. On the other hand, a large number of functional tests are readily available and are executed frequently during software development. In this thesis, we perform an exploratory study to determine whether readily available functional tests are capable of verifying fixes to performance issues. We collect 127 performance issues from Hadoop and Cassandra and evaluate the performance of the functional tests on the commits that fix these issues. In particular, we leverage statistical analysis to identify performance improvements between the code before and after the issue-fixing commits. We find that most fixes to performance issues can be verified using readily available functional tests; however, only a very small portion of the functional tests can be used to verify the fixes. By manually examining the functional tests, we identify eight reasons why functional tests are unable to verify issue fixes. In addition, we build classifiers to determine the important metrics that influence whether functional tests are able to verify fixes to performance issues.
We find that the test code itself and the source code covered by the test are important, while factors related to the code changes in the performance issue fixes have low importance. Practitioners should therefore focus on designing and improving the tests themselves, rather than optimizing tests for individual performance issue fixes. Our findings can serve as a guideline for practitioners to reduce the effort spent designing tests to verify fixes to performance issues.
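To illustrate the kind of statistical comparison described above, the following is a minimal, hypothetical sketch of how one might compare functional-test execution times before and after a performance-fix commit using Cliff's delta as an effect size. The timing data and function names here are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: does a functional test run measurably faster after
# a performance-fix commit? Cliff's delta compares two samples of timings.
# All numbers below are made up for illustration.

def cliffs_delta(before, after):
    """Fraction of (before, after) pairs where the post-fix time is lower,
    minus the fraction where it is higher; ranges from -1 to 1."""
    faster = sum(1 for b in before for a in after if a < b)
    slower = sum(1 for b in before for a in after if a > b)
    pairs = len(before) * len(after)
    return (faster - slower) / pairs

# Illustrative timings (seconds) from repeated runs of one functional test.
before_fix = [1.20, 1.15, 1.22, 1.18, 1.25]
after_fix = [0.95, 0.98, 0.93, 0.97, 1.00]

delta = cliffs_delta(before_fix, after_fix)
# |delta| close to 1 indicates a large, consistent difference between runs.
print(f"Cliff's delta: {delta:.2f}")
```

In practice, a study like this would combine an effect-size measure with a statistical significance test over many repeated test executions, since single timing measurements are noisy.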




