Programming

Scale Summit 2014

Scale Summit is the new Scale Camp, an unconference covering the same kinds of topics you might expect at Velocity.

This was the first Scale Summit; the venue was excellent, as was the food (especially the bacon rolls, from Eden apparently) and the supply of drink. Scale Summit happens under the Chatham House Rule, so nothing said is attributed, which allows attendees to be really frank and to share what they actually know rather than hedging and trying to stay "on message". It makes for a fascinating gathering.

The sessions varied in their organisation but all focussed on discussion between the participants. I managed to get to the Elasticsearch session, which was interesting for the practical boundaries people were finding and for the operational knowledge in the room. On the subject of using ES as the primary application store, the feeling seemed to be "not yet", but there were also some words of wisdom about separating document storage from search functionality rather than finding a superficial unity in the two purposes.

The microservices session was a fast and furious fishbowl, easily the liveliest event and one that is going to require a post in its own right. It was interesting to see that the room split into practitioners and people who were sceptical that microservices were a thing or held value over conventional service development.

After lunch I sat in on what can be done to get frontend testing off the critical path to production (not much now but clearly more effort needs to be made), distributed DOS attacks on transactional sites (not as scary as I imagined but again we have to be thinking about how this works), distributed data stores (good war stories, felt better informed for going), getting ops and developers to work together and Linux containers (definitely going to try Docker now).

I had quite a few questions going into the event and while I didn't get all the answers I hoped for I did at least establish that smart people don't have simple answers to them either, which is reassuring. It's hard to tell in the heat of it all whether you're out at the edge pushing the boundaries or simply over-complicating your situation.

The attendees were nicely mixed and came from a range of backgrounds; ops, architecture and development were all well-represented, so you felt you were getting a rounded picture.

The unconference format left me wanting more rather than feeling I had had enough. The openness was amazing and I am planning on being there next year.

Programming

Authenticity and appropriation

I know I'm not a hacker, I don't describe myself that way and I know it's not what I do. I may indulge in hacks of both the programming and other kinds, but hacks do not make the hacker.

"Hack" and its derivatives are very popular though: "hackdays", prototypes described as "hacks", people self-identifying as "hackers". Learning something in 24 hours is old hat now; why not just "hack" it instead?

This kind of wholesale appropriation of a sub-culture is nothing new. Look at punk, hip-hop or skateboarding. In a way this theft of technologists' jargon is a backhanded compliment, a validation of its worth.

Appropriation brings with it the question of authenticity. Authenticity brings with it the whole field of identity politics. It is a cascade that brings us to the point where arguments erupt over who gets to determine who is truly a "hacker".

Until recently I didn't feel this argument had affected me very much. Since I have an instinct for the people who meet the archetype of the hacker (by trade), and I don't seek the title for myself, it has felt like a fight I don't have a dog in.

The latest wave of hacker appropriation renders a useful concept useless. As a good post-modernist I don't weep for the hacker. The thing about all appropriated sub-cultures is that if they are going to thrive they are going to evolve: change, renew and protect themselves. Witness the rise of the "brogrammer" as a way of delineating those inside and outside the tent.

However when I hear the accusation that authenticity in technology is a matter of white male privilege rather than an attempt for a community to express and recognise an identity, I think we have an argument that seems to serve no-one very well.

I’m not saying that technology communities aren’t sexist or male-dominated. They self-evidently are. Unlike a lot of communities, though, technology is something where a meritocracy can function. While meritocracies are clearly shaped by peer pressure and conventional wisdom the simple fact is that a programmer is going to evaluate the utility of a piece of code entirely on how well it serves their own needs and not the gender of the person who wrote it. In fact in an internet world of handles and shared code the real identity of the person you collaborate with is often unknown to you, an irrelevance.

“Good code” is a cultural artefact, shaped by the constitution of the community. Useful code is not.

Appropriating hacking may seem a good idea. But when you do it, dismissing criticism of the authenticity of the result is self-defeating. Anything appropriated is devalued.

Attempting to liberate or seize control of the language of technologists might seem a good idea in the name of a diverse community. But anything done without consent will result in resistance.

Let’s tackle sexism and exclusion by all means, particularly in user groups and conferences where identity is concrete.

But let’s not think that cultural politics can substitute for code contributed to and valued by the community. Let the work speak for the individual, let’s value utility, humility and modesty more than any one disputed signifier.

Programming

Writing code without tests

This post is aimed at people who have mastered test-driven development, and ideally also behaviour-driven development, and who are familiar with XCheck testing. If you haven't got those basics down then trying to jump straight to some of these techniques is likely to backfire on you, as you will probably struggle to assess the risks correctly.

There is a reason TDD was invented: it represents the refinement of good testing practice and the philosophy of good software design. TDD is a relatively simple practice to describe that requires effort to implement. Writing code driven by tests is safer than straight coding.

Writing untested code is a kind of mastery technique. It is high-risk and relies on the skills and knowledge of the programmer. I don't think it is ever responsible unless the programmer is going to be the person supporting the result in production. Without that condition the programmer's interests are not properly aligned with those of the consumers of their code.

So with all those caveats in place what if we want to create code faster because we don’t have to write tests?

Well we have to understand where bugs come from and we will have to write code that doesn’t allow those situations to arise.

There are two important principles to start with. First, if you can rely on well-tested library code, you can lean on its quality and leverage it in your own application. Second, the code you don't write will not have bugs.

Therefore we should be aiming to write the smallest amount of code possible and we should never try to code what others have coded for us.

The next point is about where bugs occur. I think we're now at a consensus that most bugs occur in the way we change and maintain state. In both procedural and functional languages it is rare, for example, to get the order of steps wrong. Those kinds of problems tend to be misunderstandings of the domain (which get written into the test suite as well, so testing doesn't catch them) rather than genuinely unexpected consequences of the programmer's code. Object-orientated code is really hard to reason about from this point of view because objects don't have an implied order of execution.

This is why quick scripts of less than 200 lines tend to do stable sterling service for years whereas larger applications are more tortured in their existence.

Therefore whatever language we are coding in we need to adopt the functional principle of operating only on our parameters and returning values that can be consumed by the caller.
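
As an illustration, here is a minimal Clojure sketch (the names and data are invented) of the difference between a function that reaches out to shared state and one that only transforms its arguments:

```clojure
;; Harder to reason about: the function reaches out to state that lives
;; outside it and mutates that state as a side effect.
(def totals (atom {}))

(defn record-sale! [customer amount]
  (swap! totals update customer (fnil + 0) amount))

;; Easier to reason about: everything the function touches arrives as a
;; parameter and everything it produces is in the return value.
(defn record-sale [totals customer amount]
  (update totals customer (fnil + 0) amount))

(record-sale {} "alice" 10) ;=> {"alice" 10}
```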

Size matters, a lot. If the whole program can fit into a single file, and you can pretty much hold the whole thing in your head, then it will be easy to reason about what the program is doing and to see flaws in its logic. A single complex line of code is better than many lines, and much better than many lines split across many files.

One way to bring down the size of code files is to be ruthless about concerns. For example recently in my Python programming I have been assigning only one purpose to each module: this module renders reports, this one provides JSON endpoints.

Another technique is to not persist any state. This is actually surprisingly easy in web programming, since each request is a completely separate event and by default you can trade CPU time for isolation.

If you are doing batch or server-side programming then it is worth considering using something like parallel to create many separate bubbles of execution rather than trying to write code yourself to distribute work.

Another aspect of state that causes issues is making global modifications, whether to a database or a filesystem. Try to defer all global changes to the final moment of a program and do all the manipulation in memory instead. If you never change the world you can run a program over and over again, refining what it does.
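
A hedged sketch of that shape in Clojure (the file names are invented for illustration): do all the work on plain data, and make the only change to the world in one place at the very end.

```clojure
(require '[clojure.string :as str])

;; A pure pipeline: everything is transformed in memory.
(defn summarise [text]
  (->> (str/split-lines text)
       (map str/trim)
       (remove str/blank?)
       frequencies))

;; The only change to the world, kept to the very last moment.
(defn -main [& _]
  (spit "summary.edn" (pr-str (summarise (slurp "input.txt")))))
```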

Assertions are more powerful than logging when writing test-less code; it is better to kill a thread of execution than to let it do something you weren't expecting. Logging is really about building your intuition about what a program does and how it works.

Assertions allow you to create strong pre and post-conditions on the operation of the program. Essentially they allow you to guarantee the “happy path” execution of your code and avoid having to test all the negative situations that might occur.
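
Clojure, for example, builds this straight into the language with :pre and :post conditions. A minimal sketch (the function is invented for illustration):

```clojure
(defn apply-discount
  "Returns price reduced by percent, which must be between 0 and 100."
  [price percent]
  {:pre  [(number? price) (pos? price) (<= 0 percent 100)]
   :post [(<= 0 % price)]}
  (* price (- 1 (/ percent 100))))

(apply-discount 200 25)  ;=> 150
(apply-discount 200 150) ;; throws AssertionError rather than doing something unexpected
```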

Despite this you always want to code for failure, use short-circuit logic to abort code flow early and therefore simplify the context of the code in the rest of the function.

Remember the basic rules of cyclomatic complexity: don't nest, avoid conditionals, and try to express your looping as list comprehensions.
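
For instance, a nested loop with a condition and an accumulator usually flattens into a single comprehension. A small Clojure sketch (the data is invented):

```clojure
(def orders [{:id 1 :items [{:sku "a" :qty 2} {:sku "b" :qty 0}]}
             {:id 2 :items [{:sku "c" :qty 5}]}])

;; One comprehension replaces a nested loop, an if and an accumulator.
(def shipped-lines
  (for [order orders
        item  (:items order)
        :when (pos? (:qty item))]
    {:order (:id order) :sku (:sku item)}))
;; shipped-lines => ({:order 1, :sku "a"} {:order 2, :sku "c"})
```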

Don’t write generic code, ever. The more potential inputs a function has, the more you end up needing unit-tests to verify the interactions. If something is meant to work on strings don’t try to make it work on strings and integers. Your detection code ends up being a potential source of bugs that needs testing.
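
A small sketch of the same point (the functions are invented): the string-only version has nothing to detect, while the "flexible" one grows branching that is itself a source of bugs.

```clojure
(require '[clojure.string :as str])

;; Does one thing; nothing to detect, nothing extra to test.
(defn initials [full-name]
  (apply str (map first (str/split full-name #"\s+"))))

;; "Helpfully" generic: the branching is now a source of bugs in its own right.
(defn initials-of-anything [x]
  (cond
    (string? x)     (initials x)
    (keyword? x)    (initials (name x))
    (sequential? x) (map initials-of-anything x)
    :else           (str x)))
```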

If you write in dynamic interpreted languages then you are going to have to do some manual testing, unless you can remember the names and argument orders of your functions exactly. Don't forget to dive into the shell or REPL and play around with the code in isolation. If you can verify the behaviour of individual parts of your program without having to wire together multiple components then you have the right level of granularity for your code.

Re-use code that is already working. Code re-use is generally best achieved by copying and pasting files and then importing the functions you need. Don't try to keep your copies synchronised: updating shared library code ultimately means you need to know whether the new library code still works as you expect with your functionality, and that means you'll need a test suite.

Don’t refactor your code, rewrite it. Refactoring requires unit tests. Don’t be afraid of things like myfunction2 (although once you have the new functionality you need to delete the old unused stuff). Re-writing allows you to ditch all your assumptions about the code and attempt to express your new understanding of the problem and the requirements as simply as possible.

Don't work with large numbers of people on the same code base. The more people trying to modify and change the code, the more you need tests to clarify your different intents for it. Again, try divide and conquer on the problem: rather than six people working on the same code, can you get three pairs collaborating on three smaller codebases?

Finally don’t be afraid to write a test. Writing the right unit-test to prove you can rely on a base piece of functionality means that you then don’t have to write tests for all the pieces of code that use that underlying function. I like to try and write code without tests to maximise the flexibility of the code base when I’m tackling problems with unclear solutions. It is not an ideological thing to have no tests whatsoever, it is rather that when tempted to write a test I think “Could I do this in a way that is trivial and doesn’t require a test?”. Simplicity is the cornerstone of test-free code.

Programming

Clojurescript at London FunctionalJS

At January's London FunctionalJS meetup the technology under discussion and in use was Clojurescript. There was an introduction to the language from Thomas Kristensen of Forward, which was really much more about the basics of the syntax. We then went into the dojo exercises: the choices were implementing the Todo list SPA (the Javascript world's Pet Store), using Clojurescript with an existing Javascript framework people were already used to working with, or doing some 4Clojure.

Everyone ended up doing the Todo list which is interesting in its own way. Clearly the SPA is seen as the benchmark for evaluating these kinds of technologies.

Most of the teams were able to get the basic Todo functionality done in terms of adding and removing things from the list and re-rendering it. Most teams abstracted the rendering, but most also put the list management straight into the event callback.

Again I was interested to see that most people grasped the idea of an atom and were able to manipulate its value. Because that kind of thing is second nature to me now, I was wondering whether it would cause problems in terms of creating a modifying function rather than directly manipulating the value. The example in the setup functions of the dojo code using conj seemed to be straightforward enough for everyone.
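
For anyone who missed it, the pattern under discussion looks roughly like this (a sketch rather than the actual dojo code):

```clojure
;; The list lives in an atom; changes are expressed as pure functions
;; handed to swap!, rather than by assigning a new value directly.
(def todos (atom []))

(defn add-todo [todo-list text]
  (conj todo-list {:text text :done? false}))

(swap! todos add-todo "buy milk")
(swap! todos add-todo "write up the dojo")

@todos
;;=> [{:text "buy milk", :done? false} {:text "write up the dojo", :done? false}]
```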

Identifying and deleting items seemed more problematic. Some people wanted to do it by index, but for the most part matching the text of the todo item seemed to be popular. Probably the sensible way to actually manage the items is to give each one a UUID, allowing its underlying state to change independently of its identity.

Laziness definitely caught people out, including myself! I've moaned before about the fact that using map purely for side-effects means those side-effects never actually run if nothing consumes the sequence. Despite this I fell into the trap again; fortunately, having encountered it before, I could reverse out with a quick doall.
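
Roughly the trap and the fix, sketched from memory rather than lifted from our dojo code:

```clojure
;; map is lazy: if nothing consumes the result, the side effects never run.
(defn render-todos! [todos]
  (map #(println "rendering" (:text %)) todos))

;; Forcing the sequence makes the side effects happen...
(defn render-todos-forced! [todos]
  (doall (map #(println "rendering" (:text %)) todos)))

;; ...though doseq states the side-effecting intent more honestly.
(defn render-todos-properly! [todos]
  (doseq [todo todos]
    (println "rendering" (:text todo))))
```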

Other teams imaginatively re-implemented doall using loop. Which I guess is testament to how easy it is to do things in a LISP.

One thing that was hellish in our team’s code and which I think cropped up in the other teams as well was the amount of set! we were applying to build up very low-level DOM calls. Right at the end I remembered that Google Closure was available to abstract some of that work away. However it still means that your knowledge of Clojure needs to be heavily supplemented by low-level DOM APIs as well as what is available in the Google Closure library (which is not the best known of libraries).

I was also wondering whether doto might not have cleaned up our code a lot. It’s an issue that a lot of Javascript mutable state is not easy to wrangle with things like threading macros that normally ease the pain. I’ve seen this in the WebGL dojos as well.
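
What I had in mind was something along these lines, a Clojurescript-style sketch I haven't actually run, assuming a list-el element already looked up from the page:

```clojure
(def list-el (.getElementById js/document "todo-list"))

;; What we actually wrote: one set! per property, repeated for every element.
(let [el (.createElement js/document "li")]
  (set! (.-textContent el) "buy milk")
  (set! (.-className el) "todo-item")
  (.appendChild list-el el))

;; With doto the element is threaded through each form, so the DOM wiring
;; reads as one block per element.
(.appendChild list-el
  (doto (.createElement js/document "li")
    (-> .-textContent (set! "buy milk"))
    (-> .-className (set! "todo-item"))))
```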

The final ugly issue of the evening was the project template, which managed to both run on my machine and not run on my machine. The template was more complex than the standard SPA template as it used Compojure as well as Clojurescript (presumably using the former to serve static assets on localhost). Leiningen skeleton projects have to work and be reliable, otherwise potential adopters just get frustrated and quit.

The reactions were interesting: at the end a guy on our team asked why he would want to use Clojurescript. Good question. People who were doing things like building HTML5 games seemed to see the potential and advantage much more. This is an area I hadn't really considered before but it does make a lot of sense, as regular Clojure has already had a lot of success in implementing animation and complex state machines.

For me the alternating between high-level Clojure and low-level DOM APIs was painful. I’m going to be more interested in having wrappers that allow high-level programming consistently in a project. And I am going to be thinking about games more!

Clojure

Clojurescript: is it any better yet?

A while ago I wrote about how Clojurescript was a square peg being hammered into a non-existent hole to complete indifference by everyone.

So has anything changed? Well, Clojurescript has continued to be developed and it seems to have lost some of the insane rough edges it launched with; it seems possible to use it with OpenJDK now, for example.

Some people have made some very cool things by using Clojurescript as a dialect for writing NodeJS code. I was particularly struck by Philip Potter's talk on Marconi.

The appearance of core.async for Clojurescript is in my view the first genuine point of interest for non-Lisp fans. It provides a genuinely compelling model for handling events in an elegant way.
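
For anyone who hasn't seen it, the shape of the model is roughly this (a minimal sketch in the spirit of David Nolen's examples; the namespace and handler names are invented): events go onto a channel and a go block consumes them as if it were straight-line code.

```clojure
(ns example.events
  (:require [cljs.core.async :refer [chan put! <!]])
  (:require-macros [cljs.core.async.macros :refer [go]]))

(def clicks (chan))

;; Somewhere in the UI layer, events are simply pushed onto the channel.
(defn on-click [e]
  (put! clicks e))

;; Elsewhere, a go block consumes them as though it were ordinary
;; sequential code, with no callback nesting.
(go
  (loop [n 1]
    (<! clicks)
    (.log js/console "clicks so far:" n)
    (recur (inc n))))
```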

David Nolen, while being modest about his contribution, has also contributed some excellent blog posts about the virtues of Clojurescript. The most essential of which is the 101 on how to get a basic Clojurescript application going with core.async. This is an excellent tutorial but also is underpinned by hard work on getting the "out of the box" developer experience slick via the Mies Leiningen template.

Let's be clear about what a difference this makes. At the London Clojure dojo we once had a dojo about using Clojurescript, after a few hours all the teams limped in with various war stories about what bits of Clojurescript they had working and what was failing and why they thought it was failing. The experience was so bad I really didn't want to do Clojurescript as a general exercise again.

In the last dojo we did Clojurescript, we used Mies and David's blogpost as a template and all the teams were able to reproduce the blogpost and some of the teams were creating enhancements based on the basic idea of asynchronous service calls.

When someone pitches a Clojurescript idea in 2014 I'm no longer in fear of a travesty. That is a massive step forward.

And that's the good news.

Clojurescript! Who is it good for?

After the dojo there were some serious discussions about using Clojurescript in anger. The conversation turned eventually to GWT. In case you don't remember or have never met it GWT is essentially a browser client kit for Java developers. In London it gets used a lot by financial institutions that need rich UIs for small numbers of people. Javascript developers are unlikely to be hired by those organisations so just like Google they end up with a need but the wrong kind of skills and GWT bridges the gap. A Java developer can use a familiar language and off-the-shelf components and will end up with a perfectly serviceable rich client-side app.

There is no chance in hell that a Javascript developer is going to use GWT to build their applications.

Clojurescript feels like the same thing. Clojure developers and LISP aficionados don't know a great deal of Javascript, they can program in Clojurescript and Google Closure and it is probably going to be okay. Better in fact than if they tried to create something in an unfamiliar language with all manner of gotchas.

But there is no chance in hell that a Javascript developer is going to use Clojurescript to build their applications.

Why Coffeescript failed

The reason I say this is that Coffeescript, a great rationalisation of Javascript programming, is still viewed with suspicion by Javascript developers.

I asked some people at a recent Javascript tech meeting why they didn't use it, and the interesting answer was that they couldn't really understand its syntax and were effectively translating the forms back into Javascript. The terseness of the language was actually off-putting because it made it harder to mentally translate what the Coffeescript program was doing.

Adding LISP and Google Closure into that mix isn't going to make that mental disconnect any easier. The truth would appear to be that Javascript developers are simply not that disenchanted with their language. Clojurescript is going to have to offer something major to get over the disadvantage of a non-curly brace language and machine-optimised generated code.

At the Clojure dojo post-mortem people talked about the fact that Clojurescript helps you avoid pitfalls and unusual behaviour in Javascript. That might seem a rational argument to a Clojurian. However it was never an argument that worked on the JVM: "Hey, Java just has all these edge cases, why not use a LISP variant instead?"

In both languages practitioners of the language are deeply aware of the corner cases of their language. Since they are constantly working with it they are also familiar with the best practices required to make sure you don't encounter those corner cases.

Clojure on the JVM brought power, simplicity and a model of programming that made reasoning about code paths simple.

Clojurescript has the same advantages of code structure but doesn't really give more functionality over Javascript and still has a painful inter-operation story.

Javascript: the amazing evolving language

Javascript has a strong Scheme inheritance, meaning that it already carries a lot of LISP heritage.

Also, unlike Java, which ended up with a specification that was in the wilderness for years, Javascript has managed to keep its language definition moving. It has sorted out its split, and with aggressive language implementers in the form of the competing browsers it is rapidly adding features to the core of the language and standards for extensions.

Javascript is almost unique in that when lots of people wrote languages that compiled into Javascript the community weren't stuffy about it but actually created a specification, Source Maps, that made it easier to support generated code.

Javascript has a lot of problem areas but it is also rapidly evolving; syntactic sugar and new language constructs are adding power without necessarily creating new problems or complexity. It is expansionist and ruthlessly pragmatic, just like Clojure on the JVM in many ways.

Better the devil you know?

Visual Basic isn't the best language in the world, it's certainly not the best language for creating apps on Windows. However for organisations and programmers who have invested a lot of time in a language and a platform it normally takes a lot to get them to change.

Usually, it will take an inability to hire people to replace those leaving or the desertion of major clients before change can really be countenanced.

Javascript developers are in the same sunk-cost quandary, but there is nothing on the horizon that is going to force an external change. There may be better alternatives but Javascript is one of the easiest languages to learn. It's highly interactive and it's right there in the console window of this very browser!

There's no lack of demand for good looking websites and browser hackery to differentiate one web product from the next.

Regardless of the technical merits any alternative solution offers (and we are not just talking about Clojurescript; Elm offers a similar set of advantages), the herding effect is powerful. Your investment in Javascript is far more likely to pay off than putting time into one of the alternatives, none of which have momentum.

Sometimes there are real advantages to sticking close to the devil.

Node.js

For me the most interesting area of Clojurescript is being able to write Clojure and treat V8 as an alternative runtime.

People are already noticing some odd performance characteristics where some things run better on Node than they do on JVM, most particularly around Async.

Polyglot developers are a familiar sight on the server side; knowing a variety of languages is an advantage for the general programmer. Server-side Javascript is really only for those who are a one-trick pony.

It is still going to be a niche area, but it is much more likely to happen there than on the client side.

Programming

Clojure Exchange 2013: Tommy Hall on Concurrency versus Parallelism

One of the really interesting talks at Clojure Exchange 2013 was one by Tommy Hall with the (not good) title "You came for the concurrency, right?".

The talk had two main threads. It served as a helpful review of Clojure’s state-handling, concurrency and parallel processing features. Useful for beginners but also a helpful recap for the more experienced.

The other was a discussion of what we mean by concurrency and parallelism. Something that I hadn’t really thought about before (although I was aware there was a difference). Tommy referenced this talk Concurrency is not Parallelism (it’s better) by Rob Pike.

In the talk Rob gives the following definitions:

Concurrency: programming as the composition of independently executing processes

Parallelism: programming as the simultaneous execution of (possibly related) computations

In the talk Tommy gives the future function as an example of programming to indicate concurrency boundaries and the reducers library as an example of parallelism.
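
To make the distinction concrete, here is a small Clojure sketch of both (mine, not from the talk): future marks a boundary at which work may proceed concurrently with its caller, while the reducers library's fold asks for the reduction itself to be executed in parallel.

```clojure
(require '[clojure.core.reducers :as r])

;; Concurrency: future marks a boundary beyond which this work runs
;; independently of the caller; deref blocks only when the value is needed.
(def slow-sum (future (Thread/sleep 2000) (reduce + (range 10000000))))
;; ...carry on with other work...
@slow-sum ;=> 49999995000000

;; Parallelism: fold splits the vector into chunks and reduces them
;; simultaneously, combining the partial results with +.
(r/fold + (r/map inc (vec (range 1000000))))
```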

Rob’s presentation is worth reading in full but his conclusion is that concurrency is not a guarantee of parallel execution but that achieving parallelism without concurrency is very hard or impossible. Tommy’s talk uses an Erlang example to make the same point.

Don’t fear the monoid

While discussing reducers, Tommy finally explained something I had struggled with before. A lot of type champions point at reducers and shout "Monoids!" as if that were some kind of argument.

During his talk Tommy explained that the parallel combination function needs to be able to return an identity, so the function always returns a value, and that because the ordering of values is not guaranteed (unlike the implied order in a regular reduce) it needs to be associative.

That makes sense. Turns out that those are also the properties of a monoid.
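
You can see both requirements in how fold uses its combining function: called with no arguments it must produce an identity to seed each chunk, and because chunks may be combined in any grouping it must be associative. + happens to satisfy both, which is all "monoid" is naming here (a small sketch, not from the talk):

```clojure
(+)            ;=> 0, the identity fold uses to seed each chunk
(+ (+ 1 2) 3)  ;=> 6
(+ 1 (+ 2 3))  ;=> 6, grouping doesn't matter, so chunk boundaries don't either

(require '[clojure.core.reducers :as r])
(r/fold + (vec (range 10))) ;=> 45, however the chunks were combined
```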

Fans of type systems throw around terms like Monad, Dual and Monoid not to add understanding to a discussion but to use them as shibboleths. It was far more enlightening to see an example where the needs of a problem drive you towards a category of function with certain properties. If that is common enough to deserve a shorthand name, fair enough, but the name itself is not magical, and knowing the various function category names is a feat of learning rather akin to memorising all those software pattern names from the Gang of Four's book.

Blogging, Programming

CSS learnings from 2013

I don't do masses of CSS in my work but this year was an interesting one, because there were acres of CSS being created due to the general acceptance that we just need to get on with implementing the responsive web. Leaving the fixed-width grid behind also meant having to rethink the structure and architecture of CSS and modernise the legacy styling, which, if we're all honest, is actually created organically and piecemeal as requirements drift in.

CSS Architecture

I struggled and fought with SMACSS but I’m now over it and think that the result in practice has shown that it is the most sensible way to do things today.

BEM felt overly complex and fussy while covering pretty much the same ideas as SMACSS. OOCSS was off-putting due to its weird appropriation of object-orientated programming concepts.

I did like its idea of using multiple classes to allow for composition of styles. But using the idea of multiple inheritance as a metaphor for this seemed to be bizarre. Don’t these people know how painful inheritance is?

Isolation

One of the big things I took from SMACSS is that isolation is more valuable than re-use. Using a system that guarantees that your styling rules are not going to have side-effects is massively powerful.

When you then try to abstract components and share rules between them you lose some of that isolation and you begin to recouple components.

Seeing CSS as a conflict between isolation and duplication was a powerful metaphor for making decisions as to how to structure things.

Pre-compilers

I’m pretty sure that when we look back at 2013 the invention of Turing-complete CSS pre-compilers is not going to be a high-point. The CSS generation languages have been important as a way of pushing the CSS specification forward and of proving ideas in practice. I’ve certainly been grateful to be able to use Less in prototyping for example.

The agreement on the need for CSS variables and for being able to do calculations with those variables is a credit to the pre-compilers. The problem they create is the complexity of their abstraction. Turning a declarative language into something more programmatic is problematic for all the reasons that programming languages have problems. DRY is good, but compilation errors in your CSS don't feel like progress to me.

Just as with user agent extensions I think that pre-compilers are a great test bed for innovation but what they need to lead to is a better syntax for declaring intention rather than a new language.

Migration

Having new ideas for organising and structuring CSS is great but we also need to reflect our new ideas in our old work. Well, that’s easy right? Now we have new ideas we’ll just call all our existing assets legacy and re-write the whole thing. It’ll probably only take three months or more…

For the project I worked most on I applied a rule of thumb that seemed to serve pretty well. All the new CSS architectures organise themselves around a concept of components or modules. If your existing CSS is more or less built around the same concept you should be able to adapt whatever structure you choose and retroactively apply it to your existing code.

If your existing code, though, is designed more around concepts of pages or tag styling, then you need to rebuild it piecemeal. Introduce new components with their styling built on the new standard, and then go back and rework the old styling component by component as you touch it.

Hacks and shame

I was quite taken by the idea of having the file of shame. However in practice developers preferred to have commented sections of the files with the hacks in. Before long we had people accidentally adding non-hack rules after the hack comment line, and we also had hacks liberally sprinkled all over the place.

I lost the battle but the result has convinced me that you want one place for the hackery and you want it right at the bottom of the cascade.

The incentive should be to one day delete the file of shame.

Simplicity

Clearly this year the only person talking any sense about CSS (apart from Jonathan Snook) was Harry Roberts; I also liked his presentation pointing out that design needs to be regularised in the name of sanity. If we have some components with a margin of 1rem and some with a margin of 1.1rem because it looks better, we need to take into account the cognitive burden of having those additional rules.

Creating independent components encouraged me to allow every one of them to be a special snowflake. It was good to have a counter-balancing principle to make sure the principle of least surprise applied.

Harry also made the sensible suggestion (in 2012!) that padding and margins be defined in just one vertical and one horizontal direction. Thinking about flow from top to bottom and left to right also helped me stop making special cases and look for a more general declaration of what I was trying to describe.

Software

The sly return of Waterfall

No-one does Waterfall any more of course, we're all Agile incrementalists. It is just that a lot of things are difficult to tackle in increments. You can't get a great design, for example, or a visual style guide without a lot of user testing and workshopping. From a technical perspective you need to make sure your scaling strategy is baked in from the start, and to help support that you will also want to have a performance testing framework in place. You'll also want to be running those test suites in a continuous deployment process, because it's hard to create that after the fact.

In short apart from the actual software you want to do everything else up front.

Waterfall existed for a reason: it tried to fix certain issues with software development. It made sure that when you finished a step in the process you didn't have to go back and revisit it. It made you think about all the issues that you would encounter in creating complex software and come up with a plan for dealing with them.

Therefore I can see the enticement of making sure you do something "up front because it will be difficult to solve later".

However Waterfall had to change because it didn’t work and dressing up the same process in the guise of sensible forethought doesn’t make it more viable than its predecessors.

It can be really frustrating to have a product that looks ugly, or is slow or constantly falls over. It is far more frustrating to have a stable, beautiful and irrelevant product.

Occasionally you know that a product is going to take off like a rocket, as with fast-follow products for example, and that it is going to fully pay back a big investment in its creation. In all other cases, however, you have no idea whether a product is going to work and therefore recoup its investment.

Even with an existing successful product major changes to the product are just as likely to have unexpected consequences as they are to deliver expected benefits. Sometimes they do both.

What always matters is the ability to change direction and respond to the circumstances that you find yourself in. Some aspects of software development have seen this and genuinely try to implement it. Other parts of the discipline are engaged in sidling back into the comfortable past under the guise of “responsibility”.

Programming

Metrics and craftsmanship

Ever since we’ve had access to increasingly more comprehensive and easy to comprehend metrics there has been a conflict between the artisan and craftsmanship elements of software development and the data-driven viewpoints.

Things like code quality are seen as being difficult to express in terms of user-affecting metrics. I suspect that is because most of the craft concerns of software development do not affect the overall value of a product. This is not to align myself fully with those driven by metrics.

There are lots of situations where two approaches result in the same metrics outcome. It is tempting to give in to the utilitarian argument that in this case what you should do is simply choose the lowest cost option.

That is too reductionist though and while it may lead to an optimised margin-generating product it feels to me that it is just as likely to create a spiral of compromise that jeopardises the ability to make further improvements.

It is here, at the point where metrics are silent, that we are put to the test of making good decisions. Our routes forward are neutral from a data point of view, but good decisions will unlock better possibilities in the future. It is at this moment that I feel our preferences for things like craft and aesthetics, and our understanding of things like cost and consequence, matter. Someone who understands how to achieve beauty and simplicity in software for the same cost as the compromise will achieve very different outcomes.

So we need to be metrics-first so we know we are being honest with ourselves but once we are doing things in a truthful environment, our experience and discretion can make all the difference.
