Programming, Software, Web Applications, Work

Prettier in anger

I’ve generally found linting to be a pretty horrible experience, and JavaScript/ES hasn’t been any exception to the rule. One thing I do agree with the Prettier project on is that historically linters have tried to perform two tasks with mixed success: formatting code to conventions and performing static analysis.

Really only the latter is useful; the former is mostly wasted cycles, except when dealing with language beginners and eccentrics.

Recently at work we adopted Prettier to avoid having to deal with things like line lengths and space-based indent sizes. Running Prettier over the codebase left us with terrible-looking, cramped, two-space-indented code, but at least it was consistent.

However, having started to live with Prettier, I’ve become less satisfied with the way it works, and prettier-ignore comments have been creeping into my code.

The biggest problem I have is that Prettier has managed its own specific type of scope creep out of the formatting space. It rewrites far too much code based on line-length limits and weird things like precedence rules in boolean expressions. For example, if you have a list with only one entry and you place that single entry on a separate line to make it clear where you intend developers to extend the list, Prettier will put the whole thing on a single line if it fits.
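A minimal sketch of the kind of thing I mean (the names are invented, and the exact output depends on your Prettier version and print width):

```javascript
// How I want the list to read: one entry per line, so it is obvious
// where the next entry should go.
const supportedLocales = [
  "en-GB",
];

// Prettier will rewrite the literal above as a one-liner if it fits
// within the print width, i.e. `const supportedLocales = ["en-GB"];`.
// The escape hatch is an ignore comment on the statement:
// prettier-ignore
const legacyLocales = [
  "fr-FR",
];
```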

If you bracket a logical expression to help humans parse the meaning of the statement, but the precedence rules mean the brackets are superfluous, then Prettier removes them.
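Something like this (the names are invented, and exactly what gets rewritten varies by Prettier version, but this is the shape of the complaint):

```javascript
const isEditor = true;
const hasDraft = false;
const isAdmin = false;

// The brackets are redundant to the parser (&& binds tighter than ||),
// but they spell out the intended grouping for the next reader.
const canPublish = (isEditor && hasDraft) || isAdmin;

// The precedence-based rewrite is the same logic, minus the signposting.
const canPublishRewritten = isEditor && hasDraft || isAdmin;

console.log(canPublish, canPublishRewritten); // false false
```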

High-level code is primarily written for humans. I understand that the code is then transformed to make it run efficiently, and all kinds of layers of indirection are stripped out at that point. Prettier isn’t a compiler though; it’s a formatter with ideas beyond its station.

Prettier has also benefited from the Facebook/React hype cycle, so we, like others I suspect, are using it before it’s really ready. It hides behind the brand of being “opinionated” to avoid giving users control over some of its behaviour.

This makes using Prettier a take-it-or-leave-it proposition. I’m personally in a leave-it place, but I don’t feel strongly enough to argue for removing it from the work codebase. For now, telling Prettier to ignore code, while an inaccurate expression of what I actually want it to do, is fine while another generation of JavaScript tooling is produced.

Software

Passive-aggressive collaboration

One interesting (and depressing) aspect of post-GitHub open source development is the use of the pull request as a passive-aggressive way of putting off potential contributors, users and testers.

I have had the experience of discovering a problem with a piece of open source software and raising an issue for it, only to be “invited” to contribute a reliably failing test case, along with a fix, all to the project’s contribution standards.

Now, the joy of open source is being able to scratch your own itch and prioritise your own problems by contributing solutions.

However there are often very good reasons why the maintainers should be the ones fixing the issues.

Firstly, the maintainers should be the ones who gain most from fixing issues. One cool thing about Git-based development is that forking allows you to use an existing codebase without sharing the views and priorities of the original project. If I disagree with the direction or design of a project I can fork it and completely change the code to match my own aesthetics and priorities.

However, in most cases I agree with the direction of the maintainers and am simply pointing out an issue or problem that perhaps they haven’t encountered in their context. I could scratch my own itch, but it is often more effective if the maintainer takes my use case into consideration and reworks the codebase to include it.

Maintainers:

  • have more context on the code base
  • know more about the problems the code tackles
  • know their conventions and coding standards better than I do
  • have more invested in having an effective solution than I do

Look at something like Guava, which, while open, is effectively not open to contribution. This is a more honest approach than inviting non-trivial contributions.

Trying to take the perspective of the maintainer, I know it is tedious when people do things outside your area of interest (e.g. IE fixes when you’re not targeting that version of the browser, or Windows fixes in a project targeted at UNIXes). However, telling people to “fix it themselves” is not as honest as saying “that’s not our focus”. People can then decide whether they want to fork or not.

Software

In praise of fungible developers

The “fungibility” of developers is a bit of a hot topic at the moment. Fungibility means the ability to substitute one thing for another to the same effect; money, for example, is fungible for goods in modern economies.

In software development that means taking a developer from one part of the organisation and substituting them elsewhere without impacting the productivity of either developer involved in the exchange.

This is linked to the mythical “full-stack” developer by the emergence of different “disciplines” within web software development; usually these are devops, client-side (browser-based) development and backend (services) development.

It is entirely possible for developers to enter one of these niches and spend all their time in it. In fact, sub-specialisations in things like responsive CSS and single-page apps (SPAs) are opening up.

Now, my view has always been that a developer should aspire to have as broad a knowledge base as possible and be able to turn their hand to anything. I believe that problems occur when you don’t really understand what is going on around your foxhole. Ultimately we are all pushing electric pulse-waves over wires and chips, and it is worth remembering that.

However, my working history was pretty badly scarred by the massive wave of Indian outsourcing that happened after the year 2000 and, as a consequence, by the move up the value chain that all the remaining onshore developers made. Chad Fowler’s book is a pretty good summary of what happened and how people reacted to it.

For people getting specialist pay for niche work, full-stack development doesn’t hold much attraction. Management sees fungibility as a convenient way of pushing paper resources around projects and then blaming developers for not delivering. There are also some well-written defences of specialisation.

In defence of broad skills

But I still believe that we need full-stack developers and if you don’t like that title then let’s call them holistic developers.

Organisations do need fungibility. Organisations without predictable demand, or that are experiencing disruption in how they do business, need to be flexible and able to respond to unexpected situations.

You also need to fire-drill those situations where people leave, fall ill or have a family crisis. Does the group fall apart, or can it readjust and continue to deliver value? In any organisation you never know when you will need to move people around at short notice.

Developers with a limited skill set are likely to make mistakes that someone with a broader set of experiences wouldn’t. It is also easier for a generalist developer to acquire specialist knowledge when needed than to broaden a specialist.

Encouraging specialism is the same as creating knowledge silos in your organisation. There are times when this might be acceptable but if you aren’t doing it in a conscious way and accompanying it with a risk assessment then it is dangerous.

Creating holistic developers

Most organisations have an absurd reward structure that massively benefits specialists rather than generalists. You can see it in iOS developer and mobile/responsive-CSS salaries. The fact that someone is capable of less than their colleagues means they are rewarded more. This is absurd and it needs to end.

Specialists should be treated like contractors and consultants. They have special skills but you should be codifying their knowledge and having them train their generalist colleagues. A specialist should be seen as a short-term investment in an area where you lack institutional memory and knowledge.

All software delivery organisations should practice rotation. Consider it a Chaos Monkey for your human processes.

Rotation puts things like onboarding processes to the test. It also brings new eyes to the team’s solution and software design. If something is simple, it should make sense to, and be simple for, a newcomer, not just someone who has been on the team for months.

Rotation applies within teams too. Don’t give functionality to the person who can deliver it the fastest; give it to the person who would struggle to deliver it. Then force the rest of the team to support that person. Make them see the weaknesses in what they’ve created.

Value generalists and go out of your way to create them.

Software

The sly return of Waterfall

No-one does Waterfall any more, of course; we’re all Agile incrementalists. It is just that a lot of things are difficult to tackle in increments. You can’t get a great design, for example, or a visual style guide, without a lot of user testing and workshopping. From a technical perspective you need to make sure your scaling strategy is baked in from the start, and to support that you will also want a performance testing framework in place. You’ll also want to be running those test suites in a continuous deployment process, because it’s hard to create that after the fact.

In short, apart from the actual software, you want to do everything else up front.

Waterfall existed for a reason: it tried to fix certain issues with software development. It made sure that when you finished a step in the process you didn’t have to go back and revisit it, and it made you think about all the issues you would encounter in creating complex software and come up with a plan for dealing with them.

Therefore I can see the enticement of making sure you do something “up front because it will be difficult to solve later”.

However, Waterfall had to change because it didn’t work, and dressing up the same process in the guise of sensible forethought doesn’t make it any more viable than its predecessors.

It can be really frustrating to have a product that looks ugly, or is slow or constantly falls over. It is far more frustrating to have a stable, beautiful and irrelevant product.

Occasionally you know that a product is going to take off like a rocket, as with fast-follow products for example, and that it is going to fully pay back a big investment in its creation. In all other cases, however, you have no idea whether a product is going to work and therefore recoup its investment.

Even with an existing successful product, major changes are just as likely to have unexpected consequences as they are to deliver the expected benefits. Sometimes they do both.

What always matters is the ability to change direction and respond to the circumstances that you find yourself in. Some aspects of software development have seen this and genuinely try to implement it. Other parts of the discipline are engaged in sidling back into the comfortable past under the guise of “responsibility”.

Software

Switching Nvidia drivers from the command-line

The Steam client informed me today that there were more recent Nvidia drivers for Ubuntu available and that I should upgrade for stability, etc. It seemed a fairly innocuous change compared to the beta drivers I was using, so I pressed the button and then restarted, resulting in a failure to boot Unity and a chance to rediscover the joys of the command line. I don’t know why Unity and the new drivers fail to mix so spectacularly; however, the simplest thing to do seemed to be to revert to the earlier drivers.

The problem with doing that is that I’ve only ever done it via the GUI tools. This AskUbuntu answer told me about Jockey, the software that underpins the proprietary driver control tool. Running Jockey at the command line was very, very slow, but it did indeed allow me to select the earlier drivers, and after a restart the GUI was booting again. Much easier than hand-editing an X config file.

Software, Work

Up-front quality

There has been a great exchange on the London Clojurians mailing list recently about the impact of a good REPL on development cycles. The conversation kicks into high gear with this post from Malcolm Sparks, although it is worth reading from the start (membership might be required, I can’t remember). In his post Malcolm talks about the cost of up-front quality. This, broadly speaking, is the cost of the testing required to put a feature live; it is essentially a way of looking at the cost that automated testing adds to the development process. As Malcolm says later: “I’m a strong proponent of testing, but only when testing has the effect of driving down the cost of change.”

Once upon a time we had to fight to introduce unit testing and automated integration builds and tests. Now it is a given that these are good things; rather like a pendulum, the issue is now swinging too far in the opposite direction. If you’ve ever had to scrap more than one feature because it failed to perform, then the cost of up-front quality is something you consider as closely as the cost of up-front design and the cost of production failure.

Now, the London Clojurians list is at that perfect point in its lifespan where it is full of engaged and knowledgeable technologists, so Steve Freeman drops into the thread and sensibly points out that Malcolm is also guilty of excess by valuing feature mutability to the point of wanting to be able to change a feature in-flight in production, something that is cool but probably in excess of any actual requirement. Steve adds that there are other benefits to automated testing, particularly unit testing, beyond guaranteeing quality.

However, Steve mentions the Forward approach, which I also subscribe to, of creating very small codebases. Then Paul Ingles gets involved and posts the best description I’ve read of how you can use solution structure, monitoring and restrained codebases to avoid dealing with a lot of the issues of software complexity. It’s hard to boil the argument down because the post deserves reading in full, but I would summarise it as: the external contact points of a service are what matter, and if you fulfil the contract of the service you can write a replacement in any technology or stack and put that replacement alongside the original service.

One of the powerful aspects of this approach is that it generalises the “throw one away” rule and allows you to say that the current codebase can be discarded whenever your knowledge of the domain or your available tools change sufficiently to make it possible to write an improved version of the service.

Steve then points out some of the other rules that make this work: being able to track, and ideally change, consumers as well. It’s an argument for always using keys on API services, even internal ones, to help you see what is calling your service, something that is moving towards being a standard at the Guardian.
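As a sketch of the idea (Express-flavoured JavaScript; the header name and client registry here are invented for illustration, not anyone’s actual convention):

```javascript
const express = require("express");

const app = express();

// Known consumers, keyed by their API key. In reality these would live
// in configuration or a secrets store, not in the source.
const consumers = {
  "key-frontend": "article-frontend",
  "key-ingest": "content-ingest",
};

// Every request has to identify its caller, so the access logs tell you
// who still depends on the service before you change or replace it.
app.use((req, res, next) => {
  const caller = consumers[req.get("X-Api-Key")];
  if (!caller) {
    return res.status(401).json({ error: "unknown or missing API key" });
  }
  console.log(`${new Date().toISOString()} ${caller} ${req.method} ${req.path}`);
  next();
});

app.get("/healthcheck", (req, res) => res.json({ ok: true }));

app.listen(3000);
```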

So, to summarise: a little thread of pure gold, and the kind of thing that can only happen when the right people have the time to talk and share experiences. And when it comes to testing, ask whether your tests are making it cheaper to change the software when the real functionality is discovered in production.

Software, Work

Generating corporate welfare through enterprise software

It is always good to have someone on the inside, and therefore service software companies often go to great lengths to woo potential champions within large organisations. That’s the way things are, but there is an interesting phenomenon that takes this too far, which I call “corporate welfare”.

Companies often like to tout how configurable and adaptable their software is. With just a few web screens, or maybe a set of configuration files, you can make the software do whatever you want. How convenient! Or rather, how convenient for the suppliers. How many of you have ever had a burning desire to tinker with your email system setup, your bug tracker’s workflow or the permissions of your project management software?

Probably no-one except the product champion who argued for the software to be introduced in the first place. In fact, the champion’s role in the company is now predicated on their expertise with the existing solution. What incentive do they have to replace or review “their” section of the infrastructure? Their salary is now based on how effective their relationship with the supplier is.

In fact, I don’t think it is uncommon for changes of people to precede changes of software provider. Someone has to take over the champion’s job of massaging the product and, without the massive personal commitment to it, finds the job cumbersome and undesirable, sparking the search for alternatives.

My argument would be that if you cannot use a solution mostly out of the box then you are better off not using it. If you have a business process that requires a lot of gnarly configuration and bespoke software work, then the greater value is in simplifying the business process rather than recreating it in software.

In my view, complex or white-box products are more about capturing customers than serving them, and that goes all the way from SAP down to JIRA.
