Coverage "categories", exceptional code blocks

First, thanks for such a great tool - we use it regularly and it is an invaluable part of our build process!

I'd like to propose an enhancement I've been thinking about for a while. Although this feature request would likely require changes to both the GUI and the underlying NCover engine, I wanted to start from the top down, describing what I'd like to see in the GUI and then working down to the details of what might be required in NCover itself.

In practice, when I am reviewing coverage results for my team's code, I find myself going through a manual process: pinpointing areas of low coverage, drilling down into the specific classes with low coverage but many lines of code, looking at which sections of each class have or have not been covered, and then trying to "categorize" those uncovered sections in order to prioritize them. In particular, I tend to bunch code inside exception blocks into an "exceptional" category, which would have a different priority than the normal code.

That's not to say that I don't want exception blocks covered by some testing - of course they should be covered. However, there are often exception blocks that are hard to reach in a repeatable way, and in some cases the effort involved in setting up a framework to test them is not worth the gain in covering what might be very trivial code. Given the choice, I'd rather have developers working out test cases to cover code in the normal execution path than covering the occasional OutOfMemoryException handler.
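To make this concrete, here's a contrived C# sketch (the class and method names are purely illustrative, not taken from our code): the catch block is only a couple of lines, but forcing an OutOfMemoryException in a repeatable test would take far more effort than the coverage is worth.

using System;
using System.IO;

// Illustrative only - names are made up for this example.
public class SnapshotReader
{
    public byte[] LoadSnapshot(string path)
    {
        try
        {
            // Normal execution path - easy to cover with ordinary unit tests.
            return File.ReadAllBytes(path);
        }
        catch (OutOfMemoryException)
        {
            // "Exceptional" path - reaching this repeatably would require an
            // elaborate test harness, for very little real-world benefit.
            return new byte[0];
        }
    }
}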

This idea could also be extended to other categories. For example, I would probably want to categorize code inside switch case statements slightly differently, as those could also be as hard to reach as exception blocks in some cases. I might also want to categorize higher-than-normal-priority items, such as critical lock/mutex sections. If I were to spot something like that in our code and it was not covered, I would be much more likely to prioritize covering that critical section than I might be for exceptional or even normal (non-critical-section) code.
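Again as a contrived sketch (names made up for illustration), this is the kind of code I have in mind: the lock-protected block is something I'd hold to a stricter threshold, while the defensive default case in the switch is often as hard to reach as an exception block.

using System;

// Illustrative only - names are made up for this example.
public class HitCounter
{
    private readonly object _sync = new object();
    private int _hits;

    public void RecordHit(int kind)
    {
        lock (_sync)
        {
            // Critical section - if this were uncovered, I'd want it flagged
            // against a higher threshold (say 90%) than normal code.
            _hits++;
        }

        switch (kind)
        {
            case 0:
                Console.WriteLine("read hit");
                break;
            case 1:
                Console.WriteLine("write hit");
                break;
            default:
                // Defensive case - often as hard to reach as an exception block,
                // so I'd categorize it alongside "exceptional" code.
                break;
        }
    }
}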

Without the means to categorize and prioritize, I'm concerned that making coverage results too visible to the team can lead to developers becoming preoccupied with generic coverage levels. That can be great for competitive motivation among the team, but it could also lead developers to alter their coding style in potentially negative ways. For example, because of the difficulty of testing exceptional blocks, I'm concerned that developers will start handling fewer exceptions, or will write more generic exception-handling code instead of drilling down with more specific exception handlers. I would prefer that developers have visibility into all aspects of their coverage, but that the high-level "acceptability: 75%" threshold not be affected by things like exception blocks. Perhaps that category should have a completely different acceptability threshold - maybe 50% or less. Similarly, perhaps critical sections should have a 90% threshold.

So my thought is to break coverage down along another dimension - "categories" - aggregate it separately for each category, and then apply a separate threshold to each category. The key value here is in having the high-level rolled-up aggregation categorized, so that instead of this:

MyAssembly.dll : coverage = 65%, threshold = 75%, UNACCEPTABLE

I would like to see this:

MyAssembly.dll:
[normal code] : coverage = 85%, threshold = 75%, ACCEPTABLE
[exception blocks] : coverage = 35%, threshold = 20%, ACCEPTABLE
[critical sections] : coverage = 99%, threshold = 90%, ACCEPTABLE

I'm sure I can't be the first person who has thought about this - so if there is already a better approach to these issues, please let me know, or point me at some links. Otherwise, I'd be interested to know your thoughts.

Thanks!