The Perfect Code Coverage Score

You manage what you measure – but what if you are looking at the wrong thing? The metrics we define influence our process and our end result. For example, trying to gauge your speed by the sound of your radio would lead to noise dampening in the car and volume controls on the dashboard. Those features might make for an interesting experiment, but volume tells you very little about how fast you are going. Is volume the best metric to look at? Probably not. Actually – no. It is definitely not. Please practice responsible driving.

Back to the topic at hand, we see the same confusion when we talk to customers who are trying to define their code coverage goals and looking for the perfect score. It is very important to select the right combination of metrics to measure the effectiveness of your testing strategy and the quality of your code base, and to let those metrics guide your development and quality efforts going forward. But striving for a perfect 100% on a single basic metric may be guiding you down the wrong path.

We have talked previously about some of the best practices we have found in our years of covering code. Recently, we came across a post by Anders Abel discussing some of the same things we see every day. He discusses the difference between line coverage and functional coverage, and he shows some pretty strong examples of how bad code can sneak through line-coverage tests.
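To make the distinction concrete, here is a minimal sketch of the kind of gap he is talking about (the function and values are our own illustration, not Anders Abel's actual example): a single test can execute every line of a function, so a line-coverage tool reports 100%, while an entire branch of behavior goes untested.

```python
def shipping_cost(weight_kg, express):
    """Flat fee plus per-kilogram rate, with an express surcharge."""
    cost = 5.0 + 2.0 * weight_kg
    if express:
        cost += 10.0  # surcharge branch
    return cost

# This one test executes every line of shipping_cost,
# so line coverage reads a "perfect" 100% ...
assert shipping_cost(2, express=True) == 19.0

# ... yet the express=False branch was never exercised.
# A regression in the non-express path would sail through
# this suite unnoticed; branch coverage would flag the gap.
```

Branch coverage for the same test would report only 50%, because just one of the `if` statement's two outcomes was taken.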

Our quest for what seems like a good measure – 100% seems pretty perfect – may not be telling us the whole story. Code coverage metrics such as branch coverage, sequence-point coverage, and change-risk-anti-patterns score help you and your team build quality code and let you know that it is good. There is no one perfect score; each team is different. The important piece is setting the foundation for developing meaningful metrics that influence your code in meaningful ways.
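All of these metrics reduce to the same simple shape – items exercised divided by items that exist – they just count different things (lines, branches, sequence points). A tiny sketch, with illustrative numbers of our own choosing, shows why the denominator matters:

```python
def coverage_pct(exercised, total):
    """Generic coverage ratio: items exercised / items present, as a percent."""
    return 100.0 * exercised / total

# A function with one `if` and no `else` might have, say, 4 lines
# but 2 branches. One test that walks straight through it:
line_cov = coverage_pct(4, 4)      # every line executed -> 100.0
branch_cov = coverage_pct(1, 2)    # only one branch taken -> 50.0

# Same test suite, two very different scores -- which is why a
# single "perfect" number can hide real gaps.
print(line_cov, branch_cov)
```

The point is not that one number is right and the other wrong, but that each denominator answers a different question about your tests.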
