
Defining the idea of “software engineering”

I have been reading Dave Farley’s Modern Software Engineering. Overall it’s a great read and thoroughly recommended (I’m still reading through it but I’ve read enough to know it is really interesting and a well-considered approach to common problems in development).

One of the challenges Dave tackles is trying to provide a definition of what software engineering actually is. This is a pretty profound challenge in my view. I’ve often felt that developers have usurped the title of engineer to provide a patina of respectability to their hacky habits. Even in Dave’s telling of the origin of the term, it was used to try to provide parity of esteem with hardware engineers (by no less a figure than Margaret Hamilton).

In large organisations that have actual engineers it is often important to avoid confusion between what Dave categorises as Design and Production engineering. Software engineering sits in the world of design engineering. Software is malleable and easy to change, unlike a supply chain or a partially completed bridge. Where the end result of the engineering process is an expensive material object, Dave points out that it is common to spend a lot of time modelling the end result and refining the delivery process for the material output as a result of the predictions of the model. For software, to some extent, our model is the product and we can often iterate and refine it at very low cost.

Dave proposes the following definition of engineering:

Engineering is the application of an empirical, scientific approach to finding efficient, economical solutions to practical problems.

Dave Farley, Modern Software Engineering

This definition is one I can live with and marries my experience of creating software to the wider principles of engineering. It also bridges across the two realms of engineering, leaving the differences to practices rather than principles.

It is grounded in practicality rather than aloof theories and it emphasises that capacities drive effective solutions as much as needs. This definition is a huge step forward in being able to build consensus around the purpose of a software engineer.


PR Reviews: not the messiah, not a naughty boy

On Tech Twitter there seems to be a lot of chat recently about how the GitHub Pull Request (PR) review process is “broken” or underserves internal teams (an example) that have strong collaboration practices such as pairing or mob coding.

Another objection raised is the idea of internal gatekeeping within development groups. I’m not sure I fully follow the argument, but I think it runs along the lines that the PR review process allows powerful, influential members of the group to enforce their views on the others.

This is definitely a problem but frankly writing AST tools linked to the “merge to master” checks is probably a more controlling tool than spending your time policing PRs.
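For illustration, such a check might be a few lines over Python’s ast module, wired into CI so a merge cannot complete until it passes. The no-print rule here is invented purely as an example of the kind of house rule this enforces:

```python
import ast
import sys

# Illustrative merge-gate check: fail the build if anyone calls print(),
# the kind of house rule an AST tool wired into CI can enforce.
# The no-print rule itself is invented purely for illustration.
def find_print_calls(source: str) -> list[int]:
    """Return the line numbers of print() calls in the given source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "print"
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    # Typical CI usage: exit non-zero so the merge-to-master check fails.
    offending = find_print_calls(open(sys.argv[1]).read())
    sys.exit(1 if offending else 0)
```

A rule like this runs on every merge without anyone having to police it in a review thread, which is exactly why it is the more controlling mechanism.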

With open source projects often all the action happens at the pull request, because the contributors often have no other interaction. Proponents are right to point out that if this works for open source projects, do you really need to put effort into other practices upstream of the PR? Opponents are also right to point out that adopting the work practice of an entirely different context into your salaried, work context is crazy. They are both right; individual organisations need to make deliberate decisions about the way they are working, using the challenges posed by both sides.

I’m not sure how much of this is a fightback by pairing advocates (that’s how I interpret this thread). There is a general feeling that pairing as a practice has declined.

In my work the practice is optional. As a manager I’ve always known that in terms of output delivery (which, before people object, may be important, particularly if you’re dealing with runway to meet salary) pairing is best when you’re raising the average rather than facilitating the experienced.

I think even with pairing you’d want to do a code review step. Pairs are maybe better at avoiding weird approaches to solutions than an individual, but they aren’t magical, and if you are worried about dominant views and personalities, pairing definitely doesn’t solve that.

So I’d like to stick up for a few virtues of the Pull Request Review without making the argument that Pull Requests are all you need in your delivery process.

As an administrator of policy I often get asked about Software Development Lifecycles (SDLCs) as a required part of good software governance. It is handy to have a well-documented, automated review process before code goes into production. It ticks a box with minimum disruption, and there isn’t an alternative world where you’re just streaming changes into production on the basis of automated testing and production observability anyway.

As a maintainer of a codebase I rely on PRs a lot as documentation. Typically in support you handle issues on much more software than you have personally created. Pairing or mob programming isn’t going to work in terms of allowing me to support a wide ranging codebase. Instead it’s going to create demand for Level 3 support.

Well-structured PRs often allow me to understand how errors or behaviour in production relate to changes in code and record the intent of the people making the change. It makes it easier to see situations that were unanticipated or people’s conception of the software in use and how that varies from actual use.

PR review is also a chance for people outside the core developers to see what is happening and learn and contribute outside of their day to day work. Many eyes is not perfect but it is a real thing and people who praise teaching or mentoring as a way to improve technique and knowledge should be able to see that answering review questions (in a suitable form of genuine inquiry) is part of the same idea.

PRs also form a handy trigger for automated review and processes. Simple things like spelling checks allow a wider range of people to contribute to codebases fearlessly. Sure you don’t need a PR to use these tools but in my experience they seem more effective with the use of a stopping point for consideration and review of the work done to date.

Like a lot of things in the fashion-orientated, pendulum-swinging world of software development, good things are abandoned when no longer novel and are exaggerated to the point that they become harmful. It’s not a world known for thoughtful reflection and consideration. But saying that pull request reviews undermine trust and cohesion in teams, or are a formulaic practice without underlying benefit, seems unhelpfully controversial and doctrinaire.


Recommendation: use explicit type names

I attended an excellent talk recently at Lambdale about teaching children to code (not just Scratch but actual computer languages). One of the points of feedback was that the children found the use of names in type declarations and their corresponding implementations in languages such as Haskell confusing, because names are frequently reused despite the two things being completely different categories of thing.

The suggestion was that instead of f[A] -> B it was simpler to say f[AType] -> BType.

Since I am in the process of introducing Python type checking at work, this immediately sparked my interest. When I returned I ran a quick survey with the team, which revealed that they too preferred an explicit type name with the suffix Type to the package qualifier I had been using.

Instead of kitchen_types.Kettle the preference was for KettleType.

The corresponding function type annotations would therefore read approximately:

def boil(water: WaterType, kettle: KettleType) -> WaterType

So that’s the format we’re going to be adopting.


Tackley’s law of caching

Tackley’s law of caching is that the right cache time is the time it takes someone to come to your desk to complain about a problem.

If you obey this law then when you open your browser to check the issue the person is complaining about it will already have resolved itself.
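As a sketch of the law in code, assume the walk to your desk takes about five minutes; everything here (the names, the five-minute figure) is illustrative:

```python
import time

# Tackley's law as a TTL: the right cache time is roughly how long it
# takes someone to walk over and complain. Call it five minutes.
COMPLAINT_WALK_SECONDS = 5 * 60

_cache: dict = {}

def cached(key, compute):
    """Return a cached value, recomputing once an entry outlives the walk."""
    now = time.monotonic()
    entry = _cache.get(key)
    if entry is None or now - entry[1] > COMPLAINT_WALK_SECONDS:
        entry = (compute(), now)
        _cache[key] = entry
    return entry[0]
```

By the time the complainant arrives and you refresh the page together, the stale entry has expired and the problem has gone.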

Tackers may not have invented this rule, but I heard it from him first and it is one of the soundest pieces of advice in software development I’ve ever had.


No-one loves bad ideas

Charles Arthur has an interesting piece of post-Guardian vented frustration on his blog. His argument about developers and journalists sitting together is part-bonkers opinion and partly correct. Coders and journalists are generally working on different timeframes and newsroom developers generally don’t focus enough on friction in the tools that they are creating for journalists.

Journalists however focus too much on the deadline and the frenzy of the news cycle. I often think newsroom developers are a lot like the street sweepers who clean up after a particularly exuberant street market. Everything has to be tidied up and put neatly away before the next day’s controlled riot takes place.

The piece of the article I found most interesting was something very personal though. The central assumption that runs through Arthur’s narrative is that it is valuable to let readers pre-order computer games via Amazon. One of the pieces of work I’ve done at the Guardian is to study the value of the Amazon links in the previous generation of the Guardian website. I can’t talk numbers but the outcome was that the expense of me looking at how much money was earned resulted in all the “profits” being eaten up by the cost of my time. You open the box but the cat is always dead.

Similarly, Arthur’s quixotic quest meant that he spent more money in developers’ time than the project could ever possibly earn. Amazon referrals require huge volumes to be anything other than a supplement to an individual’s income.

His doomed attempt to get people to really engage with his idea reflected the doomed nature of the idea itself. British journalism favours action and instinct and sometimes that combination generates results. Mostly however it just fails, and regardless of who is sitting next to whom, who can get inspired by a muddle-minded last-minute joyride on the Titanic except deadline-loving action junkies?


Programming as Pop Culture

The “programming is pop culture” quote, from a 2004 interview with Alan Kay, has been doing the rounds in the debate on the use of craft as a metaphor for development. Here’s a recap:

…as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

On the face of it this is a snotty quote from someone who feels overlooked; but then I am exactly one of those unsophisticated people who entered a democratised medium and am now rediscovering the past!

As a metaphor, though, in trying to analyse how programmers talk and think about their work, and the way that development organisations organise what they do, the idea that programming is pop culture is powerful, relevant and useful.

I don’t think that pop culture is necessarily derogatory. However, in applying the label to programming, I think you have to accept that it is negative. Regular engineering, for example, doesn’t discuss the nature of its culture; it is much more grounded in reality, the concrete and stolen sky as Locke puts it.

Architecture though, while serious, ancient and storied is equally engaged in its own pop culture. After all this is the discipline that created post-modernism.

There are two elements to programming pop culture that I think are worth discussing initially: fashion and justification by existence.

A lot of things exist in the world of programming purely because they are possible: Brainfuck, CSS versions of the Simpsons, obfuscated C. These are the programming equivalent of the Ig Nobel Prizes, weird fringe activities that contribute little to the general practice of programming.

However ever since the earliest demoscene there has been a tendency to push programming to extremes and from there to playful absurdity. These artifacts are justified through existence alone. If they have an audience then they deserve to exist, like all pop culture.

Fashion though is the more interesting pop culture prism through which we can investigate programming. Programming is extremely faddish in the way it adopts and rejects ideas and concepts. Ideas start out as scrappy insurgents, gain wider acceptance and then are co-opted by the mainstream, losing the support of their initial advocates.

Whether it is punk rock or TDD the patterns of invention, adoption and rejection are the same. Little in the qualitative nature of ideas such as NoSQL and SOA changes but the idea falls out of favour usually at a rate proportional to the fervour with which it was initially adopted.

Alpha geeks are inherently questors for the new and obscure; the difference between them and regular programmers is their guru-like ability to ferret out the new and exciting before anyone else. However their status and support creates enthusiasm for things. They are tastemakers, not critics.

Computing in general has such fast cycles of obsolescence that I think its adoption of pop culture mores is inevitable. It is difficult to articulate a consistent philosophical position and maintain it for years when the field is in constant churn and turmoil. Programmers tend to attach to concrete behaviour and tools rather than abstract approaches. In this I have sympathy for Alan Kay’s roar of pain.

I see all manner of effort invested in CSS spriting that is entirely predicated on the behaviour of HTTP 1.1 and which will all have to be changed and undone when the new version of HTTP changes file downloading. Some who didn’t need the micro-optimisation in initial download time would have been better off ignoring the advice and waiting for the technology to improve.

When I started programming professionally we were at the start of a golden period of Moore’s Law, where writing performant mainframe-style code was becoming irrelevant. Now at the end of that period we still don’t need to write performant code; we just want easy ways to execute it in parallel.

For someone who loves beautiful, efficient code the whole last decade and a half is just painful.

But just as in music, technical excellence doesn’t fill stadiums. People go crazy for terrible but useful products and to a lesser degree for beautiful but useless products.

We rediscover the wisdom of the computer science of the Sixties and Seventies only when we are forced to in the quest to find some new way to solve our existing problems.

Understanding programming as pop culture actually makes it easier to work with developers and software communities than trying to apply an inappropriate intellectual, academic, industrial or engineering paradigm. If we see the adoption, or fetishism, of the new as a vital and necessary part of deciding on a solution then we will not be frustrated with change for its own sake. Rather than scorning over-engineering and product ivory towers we can celebrate them as the self-justifying necessities of excess that are the practical way that we move forward in pop culture.

We will not be disappointed in the waste involved in recreating systems that have the same functionality implemented in different ways. We will see it as a contemporary revitalisation of the past that makes it more relevant to programmers now.

We will also stop decrying things as the “new Spring” or “new Ruby on Rails”. We can still say that something is clearly referencing a predecessor but we see the capacity for the homage to actually put right the flaws in its ancestor.

Pop culture isn’t a bad thing. Pop culture in programming isn’t a bad thing. It is a very different vision of our profession than the one we have been trying to sell ourselves, but as Kay says maybe it better captures who we really are.


In praise of fungible developers

The “fungibility” of developers is a bit of a hot topic at the moment. Fungibility means the ability to substitute one thing for another for the same effect; so money is fungible for goods in modern economies.

In software development that means taking a developer in one part of the organisation and substituting them elsewhere and not impacting the productivity of either developer involved in the exchange.

This is linked to the mythical “full-stack” developer by the emergence of different “disciplines” within web software development; usually these are devops, client-side development (browser-based) and backend development (services).

It is entirely possible for developers to enter one of these niches and spend all their time in it. In fact sub-specialisations in things like responsive CSS and single-page apps (SPA) are opening up.

Now my view has always been that a developer should always aspire to have as broad a knowledge base as possible and to be able to turn their hand to anything. I believe when you don’t really understand what is going on around your foxhole then problems occur. Ultimately we are all pushing electric pulse-waves over wires and chips and it is worth remembering that.

However my working history was pretty badly scarred by the massive wave of Indian outsourcing that happened after the year 2000, and by the consequent move up the value chain that all the remaining onshore developers made. Chad Fowler’s book is a pretty good summary of what happened and how people reacted to it.

For people getting specialist pay for niche work, full-stack development doesn’t contain much attraction. Management sees fungibility as a convenient way of pushing paper resources around projects and then blaming developers for not delivering. There are also some well-written defences of specialisation.

In defence of broad skills

But I still believe that we need full-stack developers and if you don’t like that title then let’s call them holistic developers.

Organisations do need fungibility. Organisations without predictable demand or who are experiencing disruption in their business methodology need to be flexible and they need to respond to situations that are unexpected.

You also need to fire drill those situations where people leave, fall ill or have a family crisis. Does the group fall apart or can it readjust and continue to deliver value? In any organisation you never know when you need to change people round at short notice.

Developers with a limited skill set are likely to make mistakes that someone with a broader set of experiences wouldn’t. It is also easier for a generalist developer to acquire specialist knowledge when needed than to broaden a specialist.

Encouraging specialism is the same as creating knowledge silos in your organisation. There are times when this might be acceptable but if you aren’t doing it in a conscious way and accompanying it with a risk assessment then it is dangerous.

Creating holistic developers

Most organisations have an absurd reward structure that massively benefits specialists rather than generalists. You can see that in iOS developer and mobile responsive web CSS salaries. The fact that someone has a narrower set of capabilities than their colleagues means they are rewarded more. This is absurd and it needs to end.

Specialists should be treated like contractors and consultants. They have special skills but you should be codifying their knowledge and having them train their generalist colleagues. A specialist should be seen as a short-term investment in an area where you lack institutional memory and knowledge.

All software delivery organisations should practice rotation. Consider it a Chaos Monkey for your human processes.

Rotation puts things like onboarding processes to the test. It also brings new eyes to the solution and software design of the team. If something is simple it should make sense and be simple to a newcomer, not just to someone who has been on the team for months.

Rotation applies within teams too. Don’t give functionality to the person who can deliver it the fastest, give it to the person who would struggle to deliver it. Then force the rest of the team to support that person. Make them see the weaknesses in what they’ve created.

Value generalists and go out of your way to create them.


The gold-plated donkey cart

I'm not sure if he came up with the term but I'm going to credit this idea to James Lewis, who used it in relation to a very stuck client we were working with at ThoughtWorks.

The gold-plated donkey cart is a software delivery anti-pattern where a team ceases to make step changes to their product and instead undergoes cycles of redevelopment of the same product, making ever more complex and rich iterations of the same feature set.

So a team creates a product, the donkey cart, and it's great because you can move big heavy things around in it. After a while though you're used to the donkey cart and you're wondering if things couldn't be better. So the team gets together and realise that they could add suspension so the ride is not so bumpy and you can get some padded seats and maybe the cart could have an awning and some posts with rings and hooks to make it easier to lash on loads. So donkey cart v2 is definitely better and it is easier and more comfortable to use so you wonder, could it be better yet.

So the team gets back together and decides that this time they are going to build the ultimate donkey cart. It's going to be leather and velvet trim, carbon fibre to reduce weight, a modular racking system with extendible plates for cargo. The reins are going to have gold medallions, it's going to be awesome.

But it is still going to be a donkey cart and not the small crappy diesel truck that is going to be the first step on making donkey carts irrelevant.

The gold-plated donkey cart is actually a variant on both the Iron Law of Oligarchy and the Innovator's Dilemma.

The donkey cart is part of the valuable series of incremental improvements that consists of most business as usual. Making a better donkey cart makes real improvements for customers and users.

The donkey cart team is also there to create donkey carts. That's what they think and talk about all the time. It is almost churlish to say that they are really the cargo transport team, because probably no-one has ever expressed their purpose or mission in those terms; no-one else has thought of the diesel truck either.

Finally any group of people brought together for a purpose will never voluntarily disband itself. They will instead find new avenues of donkey cart research that they need to pursue and there will be the donkey cart conference circuit to be part of. The need for new donkey cart requirements will be self-evident to the team as well as the need for more people and time to make the next donkey cart breakthrough, before one of their peers does.


The sly return of Waterfall

No-one does Waterfall any more of course, we’re all Agile incrementalists. It is just that a lot of things are difficult to tackle in increments. You can’t get a great design, for example, or a visual style guide without a lot of user testing and workshopping. From a technical perspective you need to make sure your scaling strategy is baked in from the start, and to help support that you will also want to have a performance testing framework in place. You’ll also want to be running those test suites in a continuous deployment process, because it’s hard to create that after the fact.

In short apart from the actual software you want to do everything else up front.

Waterfall existed for a reason: it tried to fix certain issues with software development. It made sure that when you finished a step in the process you didn’t have to go back and revisit it, and it made you think about all the issues that you would encounter in creating complex software and come up with a plan for dealing with them.

Therefore I can see all the enticements to make sure you do something “up front because it will be difficult to solve later”.

However Waterfall had to change because it didn’t work and dressing up the same process in the guise of sensible forethought doesn’t make it more viable than its predecessors.

It can be really frustrating to have a product that looks ugly, or is slow or constantly falls over. It is far more frustrating to have a stable, beautiful and irrelevant product.

Occasionally you know that a product is going to take off like a rocket, as with fast-follow products for example, and it is going to fully pay back a big investment in its creation. However in all other cases you have no idea whether a product is going to work and therefore recoup its investment.

Even with an existing successful product major changes to the product are just as likely to have unexpected consequences as they are to deliver expected benefits. Sometimes they do both.

What always matters is the ability to change direction and respond to the circumstances that you find yourself in. Some aspects of software development have seen this and genuinely try to implement it. Other parts of the discipline are engaged in sidling back into the comfortable past under the guise of “responsibility”.


Up-front quality

There has been a great exchange on the London Clojurians mailing list recently talking about the impact of a good REPL on development cycles. The conversation kicks into high gear with this post from Malcolm Sparks, although it is worth reading from the start (membership might be required, I can’t remember). In his post Malcolm talks about the cost of up-front quality. This, broadly speaking, is the cost of the testing required to put a feature live; it is essentially a way of looking at the cost that automated testing adds to the development process. As Malcolm says later: “I’m a strong proponent of testing, but only when testing has the effect of driving down the cost of change.”

Once upon a time we had to fight to introduce unit testing and automated integration builds and tests. Now it is a given that this is a good thing; rather like a pendulum, the issue now is going too far in the opposite direction. If you’ve ever had to scrap more than one feature because it failed to perform, then the up-front quality cost is something you consider as closely as the cost of up-front design and production failure.

Now the London Clojurians list is at that perfect time in its lifespan where it is full of engaged and knowledgeable technologists so Steve Freeman drops into the thread and sensibly points out that Malcolm is also guilty of excess by valuing feature mutability to the point of wanting to be able to change a feature in-flight in production, something that is cool but is probably in excess of any actual requirements. Steve adds that there are other benefits to automated testing, particularly unit testing, beyond guaranteeing quality.

However Steve mentions the Forward approach, which I also subscribe to, of creating very small codebases. So then Paul Ingles gets involved and posts the best description I’ve read of how you can use solution structure, monitoring and restrained codebases to avoid dealing with a lot of the issues of software complexity. It’s hard to boil the argument down because the post deserves reading in full, but I would try to summarise it as: the external contact points of a service are what matter, and if you fulfil the contract of the service you can write a replacement in any technology or stack and put the replacement alongside the original service.
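As a sketch of what “fulfilling the contract” can mean, here is a contract check that exercises only a service’s external contact point, so it would pass equally against the original implementation or a rewrite. The lookup contract and both implementations are invented for illustration:

```python
# A contract expressed as checks over the service's external surface only.
# Any implementation -- the old codebase, or a rewrite in another stack
# behind an adapter -- can be dropped in if it honours the same contract.

def check_contract(lookup):
    """lookup(key) is the service's sole external contact point.

    Returns True if the implementation honours the (illustrative)
    contract: known keys resolve, unknown keys return None.
    """
    if lookup("known-key") != "known-value":
        return False
    if lookup("missing-key") is not None:
        return False
    return True

# Original implementation, standing in for the legacy codebase.
def legacy_lookup(key):
    return {"known-key": "known-value"}.get(key)

# A rewrite: different internals, same external contract.
def rewrite_lookup(key):
    store = {"known-key": "known-value", "extra": "data"}
    return store.get(key)
```

Because the check never peers inside either implementation, the replacement can run alongside the original until you are confident enough to switch over.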

One of the powerful aspects of this approach is that it generalises the “throw one away” rule and allows you to say that the current codebase can be discarded whenever your knowledge of the domain or your available tools change sufficiently to make it possible to write an improved version of the service.

Steve then points out some of the other rules that make this work: being able to track, and ideally change, consumers as well. It’s an argument for always using keys on API services, even internal ones, to help see what is calling your service; something that is moving towards being a standard at the Guardian.

So to summarise, a little thread of pure gold and the kind of thing that can only happen when the right people have the time to talk and share experiences. And when it comes to testing, ask whether your tests are making it cheaper to change the software when the real functionality is discovered in production.
