Work

Learning to love the Capability Maturity Model

I had a job where management were enamoured of the Capability Maturity Model (CMM) and all future planning had to be mapped onto the stages of the model. I didn’t enjoy the exercise very much because, in addition to the five documented stages, there was generally a sixth: stagnation and decay, since in my experience the “continually improving” part of the Optimising stage was generally forgotten about.

Instead, budgets for ongoing maintenance and iteration were cut to the bone so that the greatest amount of money could be extracted from the customers paying for the product.

Some government departments I have had dealings with took a similar approach: they would budget capital investment for the initial development of software or services and then allocate nothing for their upkeep except fixed costs such as on-premise hosting for 20 years (because why would you want to do anything other than run your own racks?).

This meant that five years into this allegedly ongoing-cost-free paradise the services were breaking down, no budget was available to address security problems, none of the original development team were available to discuss the issues, and the bit rot of the codebase was making a rewrite the only feasible response, which undercut the entire budgetary argument for amortisation.

A helpful model misapplied

So generally I’ve not had a good experience with people who use the model. And that’s a shame, because recently I’ve been appreciating it more and more. If you bring an Agile mindset to the application of CMM, seeing it as a way of describing the lifecycle of a digital product within a wider cycle of renewal and a growing understanding of your problem space, then it is a very powerful tool.

In particular, some product delivery practices make an assumption about the underlying maturity of the business process. Let’s take one of the classics: the product owner or subject matter expert. Both Scrum and Domain-Driven Design assume that there is someone who understands how the business is meant to work and can explain it clearly in a way that can be modelled or turned into clear requirements.

However, this can only be true at Level 2 (Repeatable) at the earliest, and a lot of Agile delivery methods generally assume that the business is at Level 4 (Managed). Any time a method asks for clear requirements or for the ability to quantify the value returned through metrics, you are in the later stages of the maturity model.

Lean Startup is one of the few that actually addresses the problems and uncertainty of a Level 1 (Initial) business. It focuses on learning and on trying to lay down foundations that are demonstrated to be consistent and repeatable. In the past I’ve heard a lot of argument about the failings of the Minimum Viable Product and the need for a Minimum Loveable, Marketable or otherwise more developed Product. Often the people who make these arguments seem confused about where they are in terms of business maturity.

The Loveable Product often tries to jump to Level 3 (Defined), enshrining a particular view of the business or process based on the initial results. Sometimes this works, but it is just as likely to lead you into a dangerous cul-de-sac where the product is too tailored to a small initial audience and needs to be reworked if it is to meet the needs of the larger potential target audience.

John Cutler talks about making bets in product strategy, and this seems a much more accurate way to describe product delivery in the early maturity levels. Committing more effort without validation is a bigger bet; in an early-stage business you often can’t do much validation, so if you want to manage risk it has to be through the size of the commitment you’re making.

Go-to-market phases are tough partly because they explicitly sit at these low levels of capability maturity: often you as an organisation and your customers are trying to put together a way of working with few historic touchpoints to reference. It’s natural that this situation is going to be a bit chaotic and ad hoc. That’s why techniques that focus on generating understanding and learning are so valuable at this stage.

The rewards of maturity

Even techniques like Key Performance Indicators are highly dependent on the underlying maturity. When people talk about the need to instrument a business process, they often have an unspoken assumption that one already exists and just needs to be translated into a digital product strategy of some kind. That assumption can often be badly wrong, and it turns out the first task is actually traditional business analysis to standardise what should be happening, and only then to instrument it.

In small businesses in particular there is often no process other than the mental models of a few key staff members. The key task is to surface those mental models (which might be very successful and profitable; don’t think immature means not valuable) into external artefacts that are robust enough to go through continuous improvement processes.

A lot of businesses jump into Objectives and Key Results (OKRs), and as an alignment tool they can be really powerful, but if you are not at that Level 4 (Managed) stage the Key Results often boil down to activities completed rather than outcomes. In fairness, at Level 5 (Optimising) the two can often be the same: Intel’s original OKRs seem very prescriptive compared to what I’ve encountered in most businesses, but Intel had a level of insight into what was required to deliver their product that most businesses don’t.

If you do get to that Level 5 (Optimising) stage then you can start to apply a lot of buzzy processes with great results. You can genuinely be data-driven, you can do multivariate testing, you can apply RICE, and you can drive KPIs with confidence that small gains are sustainable and real.
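To make RICE concrete, here is a minimal sketch of the scoring in Python; the feature names and figures are invented purely for illustration. The score is reach multiplied by impact and confidence, divided by effort.

    # A minimal sketch of RICE prioritisation; the features and figures
    # are invented. RICE score = (reach * impact * confidence) / effort.
    def rice_score(reach, impact, confidence, effort):
        """reach: people per quarter; impact: 0.25 to 3;
        confidence: 0 to 1; effort: person-months."""
        return (reach * impact * confidence) / effort

    backlog = {
        "self-serve onboarding": rice_score(reach=4000, impact=2, confidence=0.8, effort=4),
        "dark mode": rice_score(reach=1500, impact=0.5, confidence=0.5, effort=2),
    }

    # The inputs only mean something at higher maturity levels, when you
    # can actually measure reach and have validated estimates of impact.
    for feature, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
        print(f"{feature}: {score:.0f}")

The arithmetic is trivial; the point is that every input assumes a measured, validated business process underneath it.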

Before you’re there, though, you need to look at how to split your effort between maturing processes, enabling consistency and doing digital product delivery, rather than focusing on delivery alone.

Things that work across maturity stages

Some basic techniques work at every stage of maturity: continual improvement (particularly as expressed through methods like total quality management), basic business intelligence that quantifies what is happening without necessarily being able to analyse or compare it, and creating focus.

However, until you get to Level 2 (Repeatable) the value of most techniques based on value return or performance improvement is going to be almost impossible to assess. To some extent the value of a digital product at Level 1 (Initial) is to offer a formal definition of a process and subject it to analysis and revision. Expressing a process in code and seeing what doesn’t work in the real world is a modelling exercise in itself (but sadly a potentially expensive one).

Learning to love the model

The CMM is a valuable way of understanding a business, and used as a tool for understanding rather than cost-saving it can help you see whether certain Agile techniques are going to work or not. It also helps you understand when you should be relying more on your understanding and expertise than on data.

But please see it as a circle rather than a purely linear progression. As soon as your technology or business context changes, you may be experiencing a disruptive change that means rethinking your processes rather than patching and adapting your current ones. Make sure to reassess your maturity against your actual outputs.

And please always challenge people who argue that product or process maturity is an excuse to strip away the capacity to continually optimise, because that simply isn’t a valid implementation of the model.

Standard
Software

In praise of fungible developers

The “fungibility” of developers is a bit of a hot topic at the moment. Fungibility means the ability to substitute one thing for another to the same effect; money, for example, is fungible for goods in modern economies.

In software development that means taking a developer from one part of the organisation and substituting them elsewhere without impacting the productivity of either developer involved in the exchange.

This is linked to the mythical “full-stack” developer by the emergence of different “disciplines” within web software development, usually devops, client-side development (in the browser) and backend development (services).

It is entirely possible for developers to enter one of these niches and spend all their time in it. In fact, sub-specialisations in things like responsive CSS and single-page apps (SPAs) are opening up.

Now, my view has always been that a developer should aspire to have as broad a knowledge base as possible and to be able to turn their hand to anything. I believe problems occur when you don’t really understand what is going on beyond your own foxhole. Ultimately we are all pushing electric pulse-waves over wires and chips, and it is worth remembering that.

However, my working history was pretty badly scarred by the massive wave of Indian outsourcing that happened after 2000 and by the consequent move up the value chain that the remaining onshore developers made. Chad Fowler’s book is a pretty good summary of what happened and how people reacted to it.

For people getting specialist pay for niche work, full-stack development doesn’t hold much attraction. Management sees fungibility as a convenient way of pushing paper resources around projects and then blaming developers for not delivering. There are also some well-written defences of specialisation.

In defence of broad skills

But I still believe that we need full-stack developers, and if you don’t like that title then let’s call them holistic developers.

Organisations do need fungibility. Organisations without predictable demand, or those experiencing disruption in how they do business, need to be flexible and able to respond to unexpected situations.

You also need to fire-drill the situations where people leave, fall ill or have a family crisis. Does the group fall apart, or can it readjust and continue to deliver value? In any organisation you never know when you will need to move people around at short notice.

Developers with a limited skill set are likely to make mistakes that someone with a broader set of experiences wouldn’t. It is also easier for a generalist developer to acquire specialist knowledge when needed than it is to broaden a specialist.

Encouraging specialism is the same as creating knowledge silos in your organisation. There are times when this might be acceptable, but unless you are doing it consciously and accompanying it with a risk assessment, it is dangerous.

Creating holistic developers

Most organisations have an absurd reward structure that massively benefits specialists rather than generalists. You can see that in iOS developer and mobile responsive web CSS salaries. The fact that someone is less broadly capable than their colleagues means they are rewarded more. This is absurd and it needs to end.

Specialists should be treated like contractors and consultants. They have special skills, but you should be codifying their knowledge and having them train their generalist colleagues. A specialist should be seen as a short-term investment in an area where you lack institutional memory and knowledge.

All software delivery organisations should practise rotation. Consider it a Chaos Monkey for your human processes.

Rotation puts things like onboarding processes to the test. It also brings new eyes to the team’s solution and software design. If something is simple it should make sense, and be simple, to a newcomer, not just to someone who has been on the team for months.

Rotation applies within teams too. Don’t give functionality to the person who can deliver it the fastest; give it to the person who would struggle to deliver it. Then force the rest of the team to support that person. Make them see the weaknesses in what they’ve created.

Value generalists and go out of your way to create them.

Standard
Software

The sly return of Waterfall

No-one does Waterfall any more, of course; we’re all Agile incrementalists. It is just that a lot of things are difficult to tackle in increments. You can’t get a great design, for example, or a visual style guide, without a lot of user testing and workshopping. From a technical perspective you need to make sure your scaling strategy is baked in from the start, and to support that you will also want a performance testing framework in place. You’ll also want to be running those test suites in a continuous deployment process, because it’s hard to create that after the fact.

In short, apart from the actual software, you want to do everything else up front.

Waterfall existed for a reason: it tried to fix certain issues with software development. It made sure that when you finished a step in the process you didn’t have to go back and revisit it. It made you think about all the issues you would encounter in creating complex software and come up with a plan for dealing with them.

Therefore I can see the enticement of making sure you do things “up front because it will be difficult to solve later”.

However, Waterfall had to change because it didn’t work, and dressing up the same process in the guise of sensible forethought doesn’t make it any more viable than its predecessors.

It can be really frustrating to have a product that looks ugly, is slow or constantly falls over. It is far more frustrating to have a stable, beautiful and irrelevant product.

Occasionally you know that a product is going to take off like a rocket, as with fast-follow products for example, and that it is going to fully pay back a big investment in its creation. In all other cases, however, you have no idea whether a product is going to work and therefore recoup its investment.

Even with an existing successful product, major changes are just as likely to have unexpected consequences as they are to deliver the expected benefits. Sometimes they do both.

What always matters is the ability to change direction and respond to the circumstances you find yourself in. Some parts of software development have seen this and genuinely try to implement it. Other parts of the discipline are sidling back into the comfortable past under the guise of “responsibility”.

Standard
Work

Agile: are scrummasters the masters?

One of the fault lines in modern Agile development remains the purpose and application of process. For me the fundamental conflict between a developer and a “scrummaster” comes down to what the main purpose of that role is. Scrummasters often profess a servant-manager role for themselves while actually enacting the traditional hierarchical role of a master.

The following is the acid test for me. The servant manager takes the work I am doing and expresses it in a form that allows people outside the team to understand what I am doing, see the progress I have made and make predictions about when my work will be complete.

The traditional manager instead tries to control my work so that it fits neatly into the reporting tools they want to use. They don’t hesitate to interfere, manipulate and control to make their own lives easier with their superiors.

Calling yourself a servant manager but then telling people how to structure their work is paying lip service to a popular slogan while continuing a strand of managerial behaviour that has been proven to fail for decades.

Standard
Work

Agile software development defers business issues

My colleague Michael Brunton-Spall makes an interesting mistake in his latest blog post:

much of our time as developers is being completely wasted writing software that someone has told us is important.  Agile Development is supposed to help with this, ensuring that we are more connected with the business owners and therefore only writing software that is important.

Most Agile methodologies actually don’t do what Michael says here. Every one I’ve encountered in the wild treats it as almost axiomatic that there exists someone who knows what the correct business decision is. That person is given a title, “product owner” for example, and is usually assigned responsibility for three things: deciding the order in which work is done, judging whether the work has been done correctly, and clarifying requirements until they can be reduced to a programming exercise.

That’s why it was liberating to come across Systems Thinking, which does try to take a holistic approach and says that any organisation is only really as good as its worst-performing element. That does not eliminate the process improvements in development that Agile can provide, but it does illustrate that a great development team doing the wrong thing is a worse outcome than a poor development team doing the right thing.

The invention of the always-correct product owner was a neat simplification of a complex problem, probably designed to avoid having multiple people giving a development team different requirements. Essentially, by assigning the right to direct the team’s work to one person, the problem of detail- and analysis-orientated developers being blown off course by differing opinions was replaced by squabbling outside the team to persuade the decision maker. Instead of developer versus business, the problem was now business versus business.

Such a gross simplification has grave consequences, as the “product owner” is now a massive single point of failure, and few software delivery teams can effectively isolate themselves from the effects of such a failure. I have heard the excuse “we’re working on the prioritised backlog” several times, but I’ve never seen it protect a team from a collective failure to deliver what was really needed.

Most Agile methodologies essentially just punt and pray over the issue of business requirements and priorities, deferring the realities of the environment in the hope of tackling an engineering issue. Success, however, means doing what Michael suggests: trying to deal with the messy reality of a situation and providing an engineering solution that can cope with it.

Standard
Web Applications

Good magic, bad magic

Philip Potter pinged me his post on Sinatra magic during the week. Mark Needham’s comment and code on solving the mocking problem are good advice for the problem as posed.

At Wazoku, where we use the often equally magical Bottle framework, we don’t use top-down TDD but instead outside-in functional tests (with no funky runners, as we don’t need CI). This sidesteps the whole magic issue by shifting attention to the public interactions of the application. That is one of the massive benefits of a microapp HTTP/JSON/REST-like architecture: I could flip the API from Bottle to Django or Compojure or Sinatra and my test suite would keep on rocking, telling me whether the behaviour my consumers rely on is correct.
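To sketch what I mean (the /ideas endpoint, the port and the use of the requests library are my assumptions here, purely for illustration), an outside-in test only speaks HTTP and JSON, so it neither knows nor cares which framework is answering:

    # A minimal sketch of an outside-in functional test, assuming the
    # `requests` library and a service running locally on port 8080.
    # Nothing here knows whether Bottle, Django or Sinatra serves it.
    import requests

    BASE_URL = "http://localhost:8080"

    def test_existing_idea_is_returned_as_json():
        response = requests.get(BASE_URL + "/ideas/1")
        assert response.status_code == 200
        assert response.json()["id"] == "1"

    def test_missing_idea_returns_404():
        assert requests.get(BASE_URL + "/ideas/999").status_code == 404

Swap the implementation underneath and these tests carry on describing the behaviour the consumers actually depend on.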

The major thing I felt when reading through Philip’s post was the massive amount of effort that was going into testing relatively simple behaviour. This is a bit of an anti-pattern with Agile developers (or perhaps it is part of the mastery thing, where rote “correct” behaviour is modified by experience and judgement). One of the massive advantages of using something like Sinatra is that you can get a whole web app with rich behaviour into fewer than 200 lines. If you then create thousands of lines of test code and battle with the magic for hours on end, you’ve completely destroyed your productivity.
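For a sense of scale, the whole of the kind of microapp those functional tests would exercise can be sketched in a dozen lines of Bottle; the routes and in-memory data below are invented for illustration:

    # A hedged sketch of a tiny Bottle microapp; the routes and data are
    # invented. Bottle serialises a returned dict to JSON automatically.
    from bottle import Bottle, abort, run

    app = Bottle()
    IDEAS = {"1": {"id": "1", "title": "Reusable packaging"}}

    @app.get("/ideas/<idea_id>")
    def get_idea(idea_id):
        idea = IDEAS.get(idea_id)
        if idea is None:
            abort(404, "No such idea")
        return idea

    if __name__ == "__main__":
        run(app, host="localhost", port=8080)

Replace this with a Django view or a Sinatra route and the functional tests above don’t change at all.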

If you have a codebase that you expect to be large and highly contested by a big development team, you need good, layered testing and frameworks that support it. If you have an app that is small, and when it’s done it is done, then there is no need to agonise over whether it was done “right”.

The idea that top-down TDD is the only correct way to write software is corrosive. When faced with a generally poorly skilled and educated workforce it is good to have rules. I have imposed a certain style of TDD on a group myself because it gives a good framework for work and achieves very consistent output.

However, with skilled people on small-scale projects you can kill yourself by imposing arbitrary rules. I love Sinatra, and while I might be equivocal about magic, I think it is ridiculous to moan about it if you are using something as unicorn-packed as Ruby. For example, Philip was trying to use RSpec mocks and stubs to do his TDD. The result amounts to saying that you’re disappointed that your “good” magic for testing didn’t work with the “bad” magic of a DSL for web applications. Even if your RSpec code passed its tests, you still haven’t said anything particularly deep about the production behaviour of your application, because your unit testing environment was severely compromised by the manipulations of your mocking framework.

So my rule of thumb is: if it’s simple, do it; if it was simple, functionally test it; if it was never really simple, then test-drive it with suitable tools.

Standard
Software, Work

Agile must be destroyed

Let’s stop talking about the post-Agile world and start creating it.

Agile must be destroyed not because it has failed, and not because we want to return to Waterfall or to the unstructured, failing, flailing projects of the past, but because its maxims, principles and fixed ideas have started to strangle new ideas. All we need to take with us are the four values of the Agile Manifesto. Everything else is just a set of techniques that were better than what preceded them but are not the final answer to our problems.

There are new ideas that help us to think about our problems: a philosophy of flow and pull (but not the training courses and books that purport to solve our problems), the principles of software craftsmanship, “constellation” architecture, and principles of Value that put customers first, not our clients. There are many more that I don’t know yet.

We need to adopt them, experiment with them, change and adapt them. And when they too ossify into unchallengeable truisms, we will need to destroy them as well.

Standard
Programming, Work

Code Coverage 90% meaningless

I have always been quite sceptical about the value of code coverage metrics. Most of the teams I have seen that were productive and produced code with few defects in production did not use code coverage as a metric. Where code coverage is obsessively tracked, it tends to be a management reporting metric (often linked to “Quality”) and rarely seems to correlate with fewer defects or more malleable software; instead it often appears in low-collaboration or low-trust environments.

Code coverage has most benefit in an immature unit testing environment or in a “test-after” team. With test-after you have to have code coverage to ensure that people remembered to test all the possible execution paths. My personal preference is to push TDD as a practice rather than code coverage, because a side-effect of TDD is that you get 100% code coverage.

Code coverage is also quite a different beast from static or complexity analysis of code bases. Static analysis is a useful tool, and some complexity measures actually make good indicators of the “quality” of a code base. It is also not the same as instrumented code, which is invaluable when dealing with code you’ve inherited or for discovering how much of the codebase actually gets used in production.
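As a minimal sketch of that last idea (the decorator, registry and legacy function below are invented for illustration), instrumentation can be as simple as counting calls on the code you suspect is dead:

    # A minimal sketch of instrumenting inherited code to discover what
    # actually runs in production. All names here are invented.
    from collections import Counter
    from functools import wraps

    CALL_COUNTS = Counter()

    def instrumented(fn):
        """Record calls so we can see which functions production exercises."""
        @wraps(fn)
        def wrapper(*args, **kwargs):
            CALL_COUNTS[fn.__qualname__] += 1
            return fn(*args, **kwargs)
        return wrapper

    @instrumented
    def legacy_discount_rule(order_total):
        # Inherited logic we suspect is no longer exercised.
        return order_total * 0.9 if order_total > 100 else order_total

    legacy_discount_rule(50)
    print(CALL_COUNTS.most_common())  # which code paths actually ran?

In real life you would ship the counts to your logging or metrics system rather than printing them, but the principle is the same: measure what runs, not what is merely covered by tests.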

Standard