Makes a lot of sense to me. I’ve only recently become aware of the hype around code coverage, and I get the impression that those who love chasing this metric are not primarily developers, but their managers or team leaders, who constantly have to think about management reporting.
Code coverage does seem like a highly potent smoke screen.
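To make that concrete, here is a minimal pytest-style sketch (function and test names are made up) of why coverage can mislead: a test that merely executes code, without asserting anything, still drives line coverage to 100% while catching nothing.

```python
# A buggy function: meant to apply a 10% discount, but subtracts a flat 10.
def discounted_price(price: float) -> float:
    return price - 10  # bug: should be price * 0.9

# This "test" executes every line, so a coverage report shows 100%,
# but it asserts nothing and therefore passes despite the bug.
def test_discounted_price_runs():
    discounted_price(100.0)

# A test with a real assertion would fail and expose the bug:
def test_discounted_price_correct():
    assert discounted_price(100.0) == 90.0
```

Run something like `pytest --cov` (via the pytest-cov plugin) on a file containing only the first test and the metric looks perfect.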
I come from a Support background and I have been guilty of exploiting the same fallacy in the past:
You give them a ratio of opened versus closed support tickets, for example, which says very little when the type and length of support efforts vary widely, to the point where some tickets are open for minutes and others for months (more detailed metrics can alleviate this, but business managers won’t know that unless you tell them).
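A toy example with made-up numbers shows how the same tidy ratio can hide wildly different realities:

```python
from statistics import median

# Hypothetical resolution times (in hours) for closed tickets of two teams.
team_a = [0.2, 0.5, 1.0, 2.0]      # everything closed within hours
team_b = [0.2, 0.5, 1.0, 2000.0]   # one ticket dragged on for ~3 months

open_tickets = 1  # both teams happen to have one ticket still open

for name, closed in [("Team A", team_a), ("Team B", team_b)]:
    ratio = open_tickets / len(closed)
    print(f"{name}: open/closed ratio = {ratio:.2f}, "
          f"median resolution = {median(closed):.1f}h, "
          f"worst case = {max(closed):.1f}h")

# Both teams report the same 0.25 ratio, yet their worst cases
# differ by three orders of magnitude.
```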
Senior management wants KPIs to evaluate your team’s performance, so you come up with any number of nicely formatted percentages that are both easy to grasp for a non- or semi-technical audience and easy to produce.
However, the reality is that technical complexity, as hard as it may be for your managers to understand, often just can’t be dumbed down far enough without becoming inaccurate or outright misleading.
I’d prefer the test-driven developer who is fanatical about never writing a single function without a strong test for it already in place, and who then trusts that their coverage is sufficient.
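For anyone unfamiliar with that workflow, here is a minimal sketch of the red/green cycle, using a hypothetical `slugify` function purely for illustration:

```python
import re

# Step 1 (red): write the test before the function exists.
# Running pytest at this point fails -- that's the point.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text: str) -> str:
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
    return text.strip("-")
```

Coverage then falls out as a byproduct of tests that actually assert behavior, rather than being a target chased for its own sake.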