Graduation Year


Document Type




Degree Granting Department

Computer Science

Major Professor

Dewey Rundus, Ph.D.

Co-Major Professor

Alan Hevner, Ph.D.

Committee Member

Rafael Perez, Ph.D.


Keywords

Return on investment metrics, Automated testing feasibility, Manual vs. automated testing, Software quality assurance, Regression testing, Repetitive testing


Abstract

Software test automation is widely accepted as an efficient software testing technique. However, automation has failed to deliver the expected productivity gains more often than not. The goal of this research was to identify the reasons for these failures by collecting and analyzing the metrics that affect software test automation, and to provide recommendations on how to adopt automation successfully with a positive return on investment (ROI). The metrics of concern were schedule, cost, and effectiveness. The research employed an experimental study in which subjects worked on individual manual and automated testing projects. The data collected were cross-verified and supplemented with additional data from a feedback survey administered at the end of the experiment. The results of this study suggest that automation involves a heavy initial investment in schedule and cost, which must be amortized over subsequent test cycles or even subsequent test projects. A positive ROI takes time, and any decision to automate should take into consideration the profit margin per cycle and the number of cycles required to break even. In this regard, automation was found to be effective for testing that is highly repetitive in nature, such as smoke testing, regression testing, load testing, and stress testing.
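The break-even reasoning in the abstract can be sketched as a small calculation. This is a minimal illustration, not the study's actual model: the function name and all cost figures below are hypothetical, and it assumes the per-cycle costs stay constant across cycles.

```python
import math

def break_even_cycles(setup_cost, automated_cost_per_cycle, manual_cost_per_cycle):
    """Hypothetical sketch: number of test cycles needed before the
    up-front automation investment is recovered.

    The "profit margin per cycle" is the saving of automated over
    manual execution; dividing the initial setup cost by it gives
    the break-even point, rounded up to whole cycles.
    """
    savings_per_cycle = manual_cost_per_cycle - automated_cost_per_cycle
    if savings_per_cycle <= 0:
        return None  # automation never breaks even at these costs
    return math.ceil(setup_cost / savings_per_cycle)

# Illustrative figures only: a $1000 setup cost, $50 per automated
# cycle vs. $150 per manual cycle, breaks even after 10 cycles.
print(break_even_cycles(1000, 50, 150))
```

A calculation like this makes the abstract's point concrete: if the number of cycles a project will actually run is below the break-even point, automation yields a negative ROI regardless of per-cycle savings.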