Metrics and craftsmanship

Ever since we have had access to increasingly comprehensive and easy-to-comprehend metrics, there has been a tension between the artisan and craftsmanship side of software development and the data-driven viewpoint.

Things like code quality are seen as difficult to express in terms of user-affecting metrics. I suspect that is because most of the craft concerns of software development do not affect the overall value of a product. That said, I do not align myself fully with the metrics-driven camp.

There are lots of situations where two approaches result in the same metrics outcome. It is tempting to give in to the utilitarian argument that in such cases you should simply choose the lowest-cost option.

That is too reductionist, though: while it may lead to an optimised, margin-generating product, it seems to me just as likely to create a spiral of compromise that jeopardises the ability to make further improvements.

It is here, at the point where metrics are silent, that we are put to the test of making good decisions. Our routes forward are neutral from a data point of view, but good decisions will unlock better possibilities in the future. It is at this moment that our preferences for things like craft and aesthetics, and our understanding of things like cost and consequence, matter. Someone who understands how to achieve beauty and simplicity in software for the same cost as the compromise will achieve very different outcomes.

So we need to be metrics-first so that we know we are being honest with ourselves, but once we are operating in that truthful environment, our experience and discretion can make all the difference.

Programming, Work

Code coverage: 90% meaningless

I have always been quite sceptical about the value of code coverage metrics. Most of the teams I have seen that were productive and produced code with few defects in production have not used code coverage as a metric. Where code coverage is obsessively tracked, it tends to be as a management reporting metric (often linked to “Quality”) and rarely seems to correlate with fewer defects or more malleable software; instead, it often appears in low-collaboration or low-trust environments.

Code coverage is most beneficial in an immature unit-testing environment or in a “test-after” team. With test-after you need coverage to confirm that people remembered to test all the possible execution paths. My personal preference is to push TDD as a practice rather than code coverage, because a side-effect of TDD is that you get 100% coverage anyway.
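To illustrate the test-after gap that coverage catches, here is a hypothetical sketch (the names classify and test_classify_* are mine, purely illustrative):

```python
# A small function with three execution paths.
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# A test-after suite written from memory often exercises only the
# "happy path", leaving two branches silently untested:
def test_classify_happy_path():
    assert classify(5) == "positive"

# A coverage tool (e.g. coverage.py) would report the n < 0 and n == 0
# branches as unexecuted, prompting the missing cases:
def test_classify_edge_cases():
    assert classify(-1) == "negative"
    assert classify(0) == "zero"
```

Writing the tests first, one behaviour at a time, never leaves those branches behind in the first place, which is why TDD makes the coverage number redundant.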

Code coverage is also quite a different beast to static or complexity analysis of code bases. Static analysis is a useful tool, and some complexity measures actually make good indicators of the “quality” of a code base. It is also not the same as instrumented code, which is invaluable when dealing with inherited code or for discovering how much of the codebase actually gets used in production.
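As a rough sketch of the kind of complexity measure I mean, cyclomatic complexity can be approximated by counting decision points with the standard library's ast module. This is my own simplification (real tools count boolean operators and exception handlers more precisely):

```python
import ast

# Node types that introduce an extra execution path. Counting a chained
# BoolOp (a and b and c) as one decision is a simplification.
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, _DECISION_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

# A function with a single if has two paths, so complexity 2:
print(cyclomatic_complexity("def f(x):\n    if x:\n        return 1\n    return 0"))
```

Unlike a coverage percentage, a number like this says something about the code itself rather than about the test suite that happens to surround it.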