Off-Topic: Thoughts on Code Coverage


Continuing the discussion from Confirming best practices when running packages specs (ie tests):

I got around to writing a blog post on the reasons why I’m not bullish on code coverage tools:

To be clear, code coverage has a use and I prefer to have it available rather than not. But I do strongly disagree that code coverage is a proof of security any more than lack of it is a proof of insecurity.


Tests have their use, but chasing a metric doesn’t get you anywhere. You make some smart observations there, thanks for sharing!


Makes a lot of sense to me. I’ve only recently learned about the new hype around code coverage, and I get the impression that those who love chasing this metric are not primarily developers but their managers or team leaders, who constantly have to think about management reporting.

Code coverage does seem like a highly potent smoke screen.
I come from a Support background and I have been guilty of exploiting the same fallacy in the past:
You give them a ratio of opened versus closed support tickets, for example, which says very little when the type and length of support efforts vary widely, to the point where some tickets are open for minutes and others for months (more detailed metrics can alleviate this, but business managers don’t know that unless you tell them).

Senior management wants KPIs to evaluate your team’s performance, so you come up with any number of nicely formatted percentages that are both easy to grasp for a non- or semi-technical audience and easy to produce.

However, the reality is that technical complexity, as hard as it may be to understand for your managers, oftentimes just can’t be dumbed down enough without becoming inaccurate or outright misleading.

I’d prefer the test-driven developer who is fanatical about not writing a single function without having a strong test for it already in place, and then trusts that his coverage is sufficient.


Totally agree with the general feeling here. I was once quite interested in coverage, and after a few pet projects with it I didn’t find that it provided the quality it was supposed to bring. It also pushed me toward figuring out how to improve the coverage number rather than improving the product, its features, its simplicity, etc.
You can have 100% test coverage and still have a poor product in the end.
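To make that concrete, here’s a minimal sketch (the function and test are made up for illustration) of how a test suite can hit 100% line coverage while letting an obvious bug straight through:

```python
# Hypothetical example: a buggy function plus a test that executes
# every line of it (100% line coverage) yet never catches the bug.

def apply_discount(price, rate):
    # Bug: the discount is added instead of subtracted.
    return price + price * rate

def test_apply_discount():
    # This runs the only line of apply_discount, so a coverage tool
    # reports 100%, but the assertion is far too weak to expose the
    # sign error (the "discounted" price 110 is still > 0).
    assert apply_discount(100, 0.1) > 0

test_apply_discount()  # passes despite the bug
```

The coverage report and the test runner are both green here; only a meaningful assertion would have surfaced the defect.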


And a poorly tested one on top of that, as pointed out by @leedohm.


I had a meeting with someone I’m mentoring yesterday, and code coverage was just one of the things he and I discussed (though the code coverage part was pretty short, because I gave him the short answer and then told him to read my blog post :laughing:). I’m working on a new blog post from the ideas we talked about, “Things Every Developer Should Know About Testing”.


As a tester, I can add that the same is true of test coverage. I can write a bunch of simple tests to achieve 100% test coverage of the application, but they won’t tell you much more than ‘does the application exist’. If the aim is to improve quality, it’s almost always better to focus on the more complex parts of the application than to try to achieve 100% coverage, especially when it comes to unit tests.

One of the first things a tester learns is that it’s impossible to test everything unless you have infinite time, people and resources. This means that you need to pick your battles, so as a tester you usually do a risk analysis (this can range from a full formal risk analysis to simply thinking it through in your head), and based on that you decide where you will start testing and how deep you will dive into each aspect.


I maintain some open source projects that have test coverage on them, but as has been mentioned, it in no way reflects the quality of the code. There can still be plenty of bugs even when all your tests pass with 100% coverage.

I will say that one thing test coverage has done is help me identify logical paths that weren’t tested and reduce logical complexity. At first it was about the statistic, but then it became about simplifying the code. Since then, my tests have become much more comprehensive and my code has felt much cleaner, without a whole lot of technical debt.
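For what it’s worth, here’s a small sketch of the kind of thing that shows up as an untested logical path (the function and tests are invented for illustration):

```python
# Made-up example: one branch of classify() is never exercised.

def classify(n):
    if n < 0:
        return "negative"      # this path is missed by the first test
    return "non-negative"

def test_classify_positive():
    assert classify(5) == "non-negative"

test_classify_positive()

# A branch-aware coverage run (e.g. coverage.py's `coverage run --branch`)
# would report the `n < 0` branch as never taken, prompting a second test:

def test_classify_negative():
    assert classify(-1) == "negative"

test_classify_negative()
```

It’s exactly that “branch never taken” signal, rather than the overall percentage, that points at missing tests and at conditionals that might be simplified away.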

Has anyone else found any other benefits to test coverage besides the testing aspect of it?


Code/test coverage can be a loose indication of the testability of your system. In my experience, testability is in direct proportion to maintainability.


Sure, and it’s quite common when practicing TDD with code coverage to quickly reach 9x% coverage. But the extra effort to reach 100% doesn’t provide any noticeable value IMO.


Just to try to clarify my point, I didn’t say that high code coverage meant that an app was secure.

My point was that a low value of code coverage (and in my book that is anything below 90%) means that an app is very likely to have security vulnerabilities, the main reason being that a significant part of the code is not being tested at all. By vulnerabilities, I’m assuming that the application/code has assets to protect that can be exploited due to code issues.

And of course code coverage can be abused (just like any other metric). In fact, it says a lot about a manager’s (or developer’s) understanding of code coverage if they are measuring it just for the sake of it.

What I found is that to achieve a high degree of code coverage, one needs a very good testing infrastructure, one that allows the developers to test their code from all sorts of angles.

And btw, I actually use code coverage as a technique to perform all sorts of analyses. For example, I have cases where I will map an application’s attack surface (i.e. its exposed pages/APIs) and use code coverage to understand how much of the application is actually being covered by those tests.
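A rough sketch of that idea (the endpoint handlers are hypothetical stand-ins, and the stdlib `trace` module stands in here for a real coverage tool): drive each exposed entry point the way the surface tests would, then look at which lines were ever hit.

```python
import trace

# Hypothetical stand-ins for an application's exposed endpoints.
def handle_login(user):
    return f"welcome {user}"

def handle_admin(user):
    if user != "root":
        return "forbidden"
    return "admin panel"   # never reached by the surface test below

# Count executed lines while exercising the surface as a plain user.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(handle_login, "alice")
tracer.runfunc(handle_admin, "alice")

# results().counts maps (filename, lineno) -> hit count. The line
# returning "admin panel" never appears in it, i.e. that code was
# never covered by the attack-surface tests.
hit_lines = {lineno for (_filename, lineno) in tracer.results().counts}
```

Any gap between the lines hit here and the application’s full source is code the mapped attack surface never touches, which is exactly the signal this kind of analysis is after.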