How (Much) Do Developers Test?

by Moritz Beller, Georgios Gousios, and Andy Zaidman

You can get a pre-print version from here.
You can view the publisher's page here.

Abstract

What do we know about software testing in the real world? It seems we know from Fred Brooks’ seminal work “The Mythical Man-Month” that 50% of project effort is spent on testing. However, due to the enormous advances in software engineering in the past 40 years, the question stands: Is this observation still true? In fact, was it ever true? The vision for our research is to settle the discussion about Brooks’ estimation once and for all: How much do developers test? Does developers’ estimation of how much they test match reality? How frequently do they execute their tests, and is there a relationship between test runtime and execution frequency? What are the typical reactions to failing tests? Do developers solve actual defects in the production code, or do they merely relax their test assertions? Emerging results from 40 software engineering students show that students overestimate their testing time threefold, and 50% of them test as little as 4% of their time, or less. Having proven the scalability of our infrastructure, we are now extending our case study with professional software engineers from open-source and industrial organizations.

BibTeX record

@inproceedings{BGZ15,
  author = {Beller, Moritz and Gousios, Georgios and Zaidman, Andy},
  title = {How (Much) Do Developers Test?},
  booktitle = {Proceedings of the 37th International Conference on Software Engineering -- New Ideas and Emerging Results Track},
  year = {2015},
  month = may,
  volume = {2},
  pages = {559--562},
  doi = {10.1109/ICSE.2015.193},
  url = {/pub/test-time-nier.pdf}
}
