Work

Interviewing in software 2023

With the waves of layoffs and the large numbers of developers looking for work there is a lot of frustration and venting about interviewing and hiring processes. I have my own horror stories, and that is part of the problem of writing about this topic. Interviewing in general, and especially in tech, is broken, but writing about it while you're going through it looks like the complaints of someone inadequate to the requirements of the role. And once you have a job, where is the wisdom in criticising the process that brought you that job?

The recruitment process for developers has been pretty terrible for years, but at least during the hiring boom the process tended to err on the side of taking a chance on people. Now employers seem confident that whatever bar they set they will find a suitable candidate, and that whatever conditions they apply will be accepted. That means the reasons you get back for not proceeding after an interview are often pretty flimsy. The interviewers are the gatekeepers to the roles and they don't really have to justify themselves much.

The fundamental problem

At its heart, though, the problem has always been, and remains, that most people are really bad at interviewing. Often people spend more time interviewing others than going through interviews themselves. When conducting interviews they are mostly isolated from feedback unless another interviewer objects to what they are doing.

Therefore virtually every developer I've known who does interviews thinks they are really good at interviewing (including me, I'm really good at conducting interviews (I've also had feedback from agents that I'm really terrible at interviewing; who are you going to believe?)). However most of them are really bad. They don't know how to frame open questions, they either don't stick to the script or stick to it too literally, they don't use any scoring criteria or objective marking, and they often freestyle some genuinely awful questions.

One of my favourite pieces of recent interview feedback was that I didn't have a lot of experience in a particular area. Now while it might be true that I didn't exhibit much evidence of that experience in the interview, I would have found it easier to do so if I had been asked questions about it. If an area of expertise is vital to the role then you need to have formulated some questions about it and, just as importantly, to allocate enough time in the interview to ask them. Flirting may require mastery of the tease but interviewing usually benefits from a very direct approach.

People who do interviews need to be trained in doing interviews. In an ideal world that would also mean doing some mock interviews where it is known whether the candidate has the skills to do the job. Their interviewing needs to be reviewed by managers from time to time and the easiest way to do that review is by having managers and managers of managers in the actual interviews.

In a previous role some engineering managers who reported to me did a little live roleplay of what they thought a good interview would look like, one taking the part of the interviewer and one the interviewee. Naturally the stakes were low but the exercise gave a template for the rest of the interviewers to set their expectations and give them a sense of where we thought good was.

Interviews, interviews, interviews

Employers' confidence in being able to pick and choose is nowhere better exemplified than by the sheer number of interview rounds. For more responsible roles I get that you often have to meet people up and down the chain along with peers and stakeholders. In recent processes, though, I wouldn't have been surprised to be interviewed by the office cat, probably just to see if I was desperate enough to put up with that kind of treatment. A personality fit with key stakeholders is important, but previously this was done in a single call with multiple people at a time.

Candidate experience surveys

How can you try to improve the interviewing process and allow candidates to provide the feedback the interviewers so desperately need? Some places have used candidate surveys. I've tried using these myself and occasionally you got some good feedback, in particular on how someone felt about the way you communicated with them as a corporate body. However, as a candidate (and in the current economy) I would never fill one out. It doesn't help you secure an offer and in most cases seems positively risky: give a high rating and you look like a kiss-ass; give a low rating and you automatically put the organisation, and especially the people who interviewed you, on the defensive.

Even after accepting a job I find it really hard to talk to the people who interviewed me about the interviewing experience. In some ways the only safe time to give feedback on the interview process is after you’ve received an offer and have decided not to accept it. At that point it truly does depend on how willing to learn an organisation is.

At a previous role I added a closing question to our script: "What question do you think we should have asked you?". This was originally intended to be a way for candidates to draw attention to experience that they thought was relevant (even if our scoring system did not take it into account). For a few candidates, though, it became an opening into discussing the interview process and their thoughts on it. It is the closest thing to an effective feedback mechanism I have been able to find.

To sum it up

Interviewing generally sucks, and right now it sucks even more because, without the benefit of the doubt, bad interviewing practices make it difficult to succeed as a candidate or enjoy the experience. A negative candidate experience causes brand issues for an employer, and while they may not care about that now, if the market tightens or visas stop being so easy to acquire then it might start to matter again. As an industry we should do better: genuinely try to find ways to improve how we find a fit between people and roles, and make the hiring process less hateful.

Standard
Programming

Enterprise programming 2023 edition

Back in the Noughties there were Enterprise Java Beans, Java Server Pages, Enterprise Edition clustered servers, Oracle databases and, shortly thereafter, the Spring framework with its dependency injection wiring. It was all complicated, expensive and, to be honest, not much fun to work with. One of the appeals of Ruby on Rails was that you could just get on and start writing a web application rather than staring at initialisation messages.

During this period I feel there was a big gap between the code that you wrote for a living and the code you wrote for fun. Even if you were writing on the JVM you might be fooling around with Jython or Groovy rather than a full Enterprise Java Bean. After this period, and in particular once Spring was in everything, I feel the gap between hobby and work languages collapsed. Python, Ruby, Scala, Clojure: all of these languages were fun and were equally applicable to work and small-scale home projects. Then with Node gaining traction in the server space the gap between the two worlds collapsed pretty dramatically. There was a spectrum that started with an inline script in an HTML page and ran through to a server-side API framework with pretty good performance characteristics.

Recently though I feel the pendulum has been swinging back towards a more enterprisey setup that doesn't have a lot of appeal for small project work. It often feels that a software delivery team can't even begin to create a web application without deploying on a Kubernetes cluster with Go microservices being orchestrated with a self-signing certificate system and a log shipping system with Prometheus and Grafana on top.

On the frontend we need an automatically finger-printing statically deployed React single-page app, ideally with some kind of complex state management system like sagas or maybe everything written using time-travelable reactive streams.

Of course on top of that we'll need a design system with every component described in Storybook, using either a modular class-based CSS system like Tailwind or a heavyweight styled component library based on Material design. Bonus points for adding React Native to this, and a CI/CD system that ideally mixes a task server with a small but passionate community and a home-grown pipeline system. We should also probably use a generic build tool like Bazel.

And naturally our laptop of choice will be Apple's macOS with a dependency on Xcode and Homebrew. We may use GitHub but we'll probably use a monorepo along with a tool to make it workable like Lerna.

All of this isn’t much fun to work on unless you’re being paid for it and it is a lot of effort that only really pays off if you hit the growth jackpot. Most of the time this massive investment in complex development procedures and tooling simply throws grit into the gears of producing software.

I hope that soon the wheel turns again and a new generation of simplicity is discovered and adopted and that working on software commercially can be fun again.

Standard
Programming, Software, Work

Defining the idea of “software engineering”

I have been reading Dave Farley’s Modern Software Engineering. Overall it’s a great read and thoroughly recommended (I’m still reading through it but I’ve read enough to know it is really interesting and a well-considered approach to common problems in development).

One of the challenges Dave tackles is to try and provide a definition of what software engineering actually is. This is a pretty profound challenge in my view. I've often felt that developers have usurped the title of engineer to provide a patina of respectability to their hacky habits. Even in Dave's telling of the origin of the term, it was used to try and provide parity of esteem with hardware engineers (by no lesser figure than Margaret Hamilton).

In large organisations that have actual engineers it is often important to avoid confusion between what Dave categorises as Design and Production engineering. Software engineering sits in the world of design engineering. Software is malleable and easy to change, unlike a supply chain or a partially completed bridge. Where the end result of the engineering process is an expensive material object, Dave points out that it is common to spend a lot of time modelling the end result and refining the delivery process for the material output as a result of the predictions of the model. For software, to some extent, our model is the product, and we can often iterate and refine it at very low cost.

Dave proposes the following definition of engineering:

Engineering is the application of an empirical, scientific approach to finding efficient, economical solutions to practical problems.

Dave Farley, Modern Software Engineering

This definition is one I can live with and marries my experience of creating software to the wider principles of engineering. It also bridges across the two realms of engineering, leaving the differences to practices rather than principles.

It is grounded in practicality rather than aloof theories and it emphasises that capacities drive effective solutions as much as needs. This definition is a huge step forward in being able to build consensus around the purpose of a software engineer.

Standard
Programming

PR Reviews: not the messiah, not a naughty boy

On Tech Twitter there seems to be a lot of chat recently about how the Github Pull Request (PR) review process is “broken” or underserves internal teams (an example) that have strong collaboration practices such as pairing or mob coding.

Another objection raised is the idea of internal gatekeeping within development groups. I'm not sure I fully follow the argument, but I think it runs along the lines that the PR review process allows powerful, influential members of the group to enforce their views over the others.

This is definitely a problem but frankly writing AST tools linked to the “merge to master” checks is probably a more controlling tool than spending your time policing PRs.

With open source projects often all the action happens at the pull request, because the contributors often have no other interaction. Proponents are right to point out that if this works for open source projects, do you really need to put effort into other practices upstream of the PR? Opponents are also right to point out that adopting the work practices of an entirely different context into your salaried work context is crazy. They are both right: individual organisations need to make deliberate decisions, using the challenges raised by both sides, about the way they are working.

I’m not sure how much of this is a fightback by pairing advocates (that’s how I interpret this thread). There is a general feeling that pairing as a practice has declined.

In my work the practice is optional. As a manager I've always known that in terms of output delivery (which, before people object, may be important, particularly if you're dealing with runway to meet salary) pairing is at its best when you're raising the average rather than facilitating the experienced.

I think even with pairing you'd want a code review step. Pairs are maybe better than an individual at avoiding weird approaches to solutions, but they aren't magical, and if you are worried about dominant views and personalities, pairing definitely doesn't solve that.

So I’d like to stick up for a few virtues of the Pull Request Review without making the argument that Pull Requests are all you need in your delivery process.

As an administrator of policy, I often get asked about Software Development Lifecycles (SDLC) as a required part of good software governance. It is handy to have a well-documented, automated review process before code goes into production. It ticks a box with minimal disruption, and there isn't an alternative world where you're just streaming changes into production on the basis of automated testing and production observability anyway.

As a maintainer of a codebase I rely on PRs a lot as documentation. Typically in support you handle issues on much more software than you have personally created. Pairing or mob programming isn’t going to work in terms of allowing me to support a wide ranging codebase. Instead it’s going to create demand for Level 3 support.

Well-structured PRs often allow me to understand how errors or behaviour in production relate to changes in code and record the intent of the people making the change. It makes it easier to see situations that were unanticipated or people’s conception of the software in use and how that varies from actual use.

PR review is also a chance for people outside the core developers to see what is happening and learn and contribute outside of their day to day work. Many eyes is not perfect but it is a real thing and people who praise teaching or mentoring as a way to improve technique and knowledge should be able to see that answering review questions (in a suitable form of genuine inquiry) is part of the same idea.

PRs also form a handy trigger for automated review and processes. Simple things like spelling checks allow a wider range of people to contribute to codebases fearlessly. Sure you don’t need a PR to use these tools but in my experience they seem more effective with the use of a stopping point for consideration and review of the work done to date.

Like a lot of things in the fashion-orientated, pendulum-swinging world of software development, good things are abandoned when no longer novel and are exaggerated to the point that they become harmful. It's not a world known for thoughtful reflection and consideration. But saying that pull request reviews undermine trust and cohesion in teams, or are a formulaic practice without underlying benefit, seems unhelpfully controversial and doctrinal.

Standard
Programming

Recommendation: use explicit type names

I attended an excellent talk recently at Lambdale about teaching children to code (not just Scratch but actual computer languages). One of the points of feedback was that the children found the use of names in type declarations and their corresponding implementations in languages such as Haskell confusing, because names are frequently reused despite the two things being completely different categories of thing.

The suggestion was that instead of f[A] -> B it was simpler to say f[AType] -> BType.

Since I am in the process of introducing Python type checking at work, this immediately sparked my interest. When I returned to work I ran a quick survey with the team, which revealed that they too preferred an explicit type name with the suffix Type to the package qualifier for types that I had been using.

Instead of kitchen_types.Kettle the preference was for KettleType.

The corresponding function type annotations would therefore read approximately:

def boil(water: WaterType, kettle: KettleType) -> WaterType: ...

So that’s the format we’re going to be adopting.

Standard
Programming

Tackley’s law of caching

Tackley’s law of caching is that the right cache time is the time it takes someone to come to your desk to complain about a problem.

If you obey this law then when you open your browser to check the issue the person is complaining about it will already have resolved itself.

Tackers may not have invented this rule but I heard it from him first and it is one of the soundest pieces of advice in software development I've ever had.
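A tiny sketch of the law in code (the class and the 60-second default are my own illustration; the right TTL is however long the walk to your desk takes):

```python
import time


class DeskWalkCache:
    """A minimal TTL cache whose expiry defaults to roughly the time
    it takes a colleague to walk over and complain (here, 60 seconds)."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            # Expired: by the time the complaint arrives at your desk,
            # the stale value has already been evicted.
            del self._store[key]
            return default
        return value
```

If the law holds, opening your browser to investigate always hits the post-expiry path and the problem has resolved itself.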

Standard
Programming

No-one loves bad ideas

Charles Arthur has an interesting piece of post-Guardian vented frustration on his blog. His argument about developers and journalists sitting together is part-bonkers opinion and partly correct. Coders and journalists are generally working on different timeframes and newsroom developers generally don’t focus enough on friction in the tools that they are creating for journalists.

Journalists however focus too much on the deadline and the frenzy of the news cycle. I often think newsroom developers are a lot like the street sweepers who clean up after a particularly exuberant street market. Everything has to be tidied up and put neatly away before the next day’s controlled riot takes place.

The piece of the article I found most interesting was something very personal though. The central assumption that runs through Arthur’s narrative is that it is valuable to let readers pre-order computer games via Amazon. One of the pieces of work I’ve done at the Guardian is to study the value of the Amazon links in the previous generation of the Guardian website. I can’t talk numbers but the outcome was that the expense of me looking at how much money was earned resulted in all the “profits” being eaten up by cost of my time. You open the box but the cat is always dead.

Similarly Arthur's quixotic quest meant that he spent more money in developers' time than the project could ever possibly earn. Amazon referrals require huge volumes to be anything other than a supplement to an individual's income.

His doomed attempt to get people to really engage with his idea reflected the doomed nature of the idea itself. British journalism favours action and instinct, and sometimes that combination generates results. Mostly, however, it just fails, and regardless of who is sitting next to whom, who can get inspired by a muddle-minded last-minute joyride on the Titanic except deadline-loving action junkies?

Standard
Programming

Programming as Pop Culture

The “programming is pop culture” quote has been doing the rounds from a 2004 interview with Alan Kay in terms of the debate on the use of craft as a metaphor for development. Here’s a recap:

…as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

On the face of it this is a snotty quote from someone who feels overlooked; but then I am exactly one of those unsophisticated people who entered a democratised medium and is now rediscovering the past!

As a metaphor, though, for trying to analyse how programmers talk and think about their work and the way that development organisations organise what they do, the idea that programming is pop culture is powerful, relevant and useful.

I don't think that pop culture is necessarily derogatory. However, in applying it to programming, I think you have to accept that it is negative. Regular engineering, for example, doesn't discuss the nature of its culture; it is much more grounded in reality, the concrete and stolen sky, as Locke puts it.

Architecture though, while serious, ancient and storied is equally engaged in its own pop culture. After all this is the discipline that created post-modernism.

There are two elements to programming pop culture that I think are worth discussing initially: fashion and justification by existence.

A lot of things exist in the world of programming purely because they are possible: Brainfuck, CSS versions of the Simpsons, obfuscated C. These are the programming equivalent of the Ig Nobels, weird fringe activities that contribute little to the general practice of programming.

However ever since the earliest demoscene there has been a tendency to push programming to extremes and from there to playful absurdity. These artifacts are justified through existence alone. If they have an audience then they deserve to exist, like all pop culture.

Fashion though is the more interesting pop culture prism through which we can investigate programming. Programming is extremely faddish in the way it adopts and rejects ideas and concepts. Ideas start out as scrappy insurgents, gain wider acceptance and then are co-opted by the mainstream, losing the support of their initial advocates.

Whether it is punk rock or TDD the patterns of invention, adoption and rejection are the same. Little in the qualitative nature of ideas such as NoSQL and SOA changes but the idea falls out of favour usually at a rate proportional to the fervour with which it was initially adopted.

Alpha geeks are inherently questors for the new and obscure, the difference between them and regular programmers is their guru-like ability to ferret out the new and exciting before anyone else. However their status and support creates an enthusiasm for things. They are tastemakers, not critics.

Computing in general has such fast cycles of obsolescence that I think its adoption of pop culture mores is inevitable. It is difficult to articulate a consistent philosophical position and maintain it for years when the field is in constant churn and turmoil. Programmers tend to attach to concrete behaviour and tools rather than abstract approaches. In this I have sympathy for Alan Kay’s roar of pain.

I see all manner of effort invested in CSS spriting that is entirely predicated on the behaviour of HTTP 1.1 and which will all have to be changed and undone when the new version of HTTP changes file downloading. Some who didn't need the micro-optimisation in initial download time would have been better off ignoring the advice and waiting for the technology to improve.

When I started programming professionally we were at the start of a golden period of Moore's Law where writing performant code mainframe-style was becoming irrelevant. Now, at the end of that period, we still don't need to write performant code; we just want easy ways to execute it in parallel.

For someone who loves beautiful, efficient code the whole last decade and a half is just painful.

But just as in music, technical excellence doesn’t fill stadiums. People go crazy for terrible but useful products and to a lesser degree for beautiful but useless products.

We rediscover the wisdom of the computer science of the Sixties and Seventies only when we are forced to in the quest to find some new way to solve our existing problems.

Understanding programming as pop culture actually makes it easier to work with developers and software communities than trying to apply inappropriate intellectual, academic, industrial or engineering paradigms. If we see the adoption, or fetishism, of the new as a vital and necessary part of deciding on a solution then we will not be frustrated with change for its own sake. Rather than scorning over-engineering and product ivory towers, we can celebrate them as the self-justifying necessities of excess that are the practical way we move forward in pop culture.

We will not be disappointed in the waste involved in recreating systems that have the same functionality implemented in different ways. We will see it as a contemporary revitalisation of the past that makes it more relevant to programmers now.

We will also stop decrying things as the “new Spring” or “new Ruby on Rails”. We can still say that something is clearly referencing a predecessor but we see the capacity for the homage to actually put right the flaws in its ancestor.

Pop culture isn't a bad thing. Pop culture in programming isn't a bad thing. But it is a very different vision of our profession than the one we have been trying to sell ourselves; as Kay says, maybe it better captures who we really are.

Standard
Software

In praise of fungible developers

The “fungibility” of developers is a bit of hot topic at the moment. Fungibility means the ability to substitute one thing for another for the same effect; so money is fungible for goods in modern economies.

In software development that means taking a developer in one part of the organisation and substituting them elsewhere and not impacting the productivity of either developer involved in the exchange.

This is linked to the mythical “full-stack” developer by the emergence of different “disciplines” within web software development, usually these are: devops, client-side (browser-based development) and backend development (services).

It is entirely possible for developers to enter one of these niches and spend all their time in it. In fact sub-specialisations in things like responsive CSS and single-page apps (SPA) are opening up.

Now my view has always been that a developer should always aspire to have as broad a knowledge base as possible and to be able to turn their hand to anything. I believe when you don’t really understand what is going on around your foxhole then problems occur. Ultimately we are all pushing electric pulse-waves over wires and chips and it is worth remembering that.

However my working history was pretty badly scarred by the massive wave of Indian outsourcing that happened post the year 2000 and as a consequence the move up the value-chain that all the remaining onshore developers made. Chad Fowler’s book is a pretty good summary of what happened and how people reacted to it.

For people getting specialist pay for niche work, full-stack development doesn’t contain much attraction. Management sees fungibility as a convenient way of pushing paper resources around projects and then blaming developers for not delivering. There are also some well-written defences of specialisation.

In defence of broad skills

But I still believe that we need full-stack developers and if you don’t like that title then let’s call them holistic developers.

Organisations do need fungibility. Organisations without predictable demand or who are experiencing disruption in their business methodology need to be flexible and they need to respond to situations that are unexpected.

You also need to fire drill those situations where people leave, fall ill or have a family crisis. Does the group fall apart or can it readjust and continue to deliver value? In any organisation you never know when you need to change people round at short notice.

Developers with a limited skill set are likely to make mistakes that someone with a broader set of experiences wouldn’t. It is also easier for a generalist developer to acquire specialist knowledge when needed than to broaden a specialist.

Encouraging specialism is the same as creating knowledge silos in your organisation. There are times when this might be acceptable but if you aren’t doing it in a conscious way and accompanying it with a risk assessment then it is dangerous.

Creating holistic developers

Most organisations have an absurd reward structure that massively benefits specialists rather than generalists. You can see that in iOS developer and mobile responsive web CSS salaries. The fact that someone is less capable than their colleagues means they are rewarded more. This is absurd and it needs to end.

Specialists should be treated like contractors and consultants. They have special skills but you should be codifying their knowledge and having them train their generalist colleagues. A specialist should be seen as a short-term investment in an area where you lack institutional memory and knowledge.

All software delivery organisations should practice rotation. Consider it a Chaos Monkey for your human processes.

Rotation puts things like onboarding processes to the test. It also brings new eyes to the solution and software design of the team. If something is simple it should make sense and be simple to a newcomer, not just to someone who has been on the team for months.

Rotation applies within teams too. Don’t give functionality to the person who can deliver it the fastest, give it to the person who would struggle to deliver it. Then force the rest of the team to support that person. Make them see the weaknesses in what they’ve created.

Value generalists and go out of your way to create them.

Standard
Work

The gold-plated donkey cart

I'm not sure if he came up with the term but I'm going to credit this idea to James Lewis, who used it in relation to a very stuck client we were working with at ThoughtWorks.

The gold-plated donkey cart is a software delivery anti-pattern where a team ceases to make step changes to its product and instead undergoes cycles of redevelopment of the same product, making ever more complex and rich iterations of the same feature set.

So a team creates a product, the donkey cart, and it's great because you can move big heavy things around in it. After a while though you're used to the donkey cart and you're wondering if things couldn't be better. So the team gets together and realise that they could add suspension so the ride is not so bumpy and you can get some padded seats and maybe the cart could have an awning and some posts with rings and hooks to make it easier to lash on loads. So donkey cart v2 is definitely better and it is easier and more comfortable to use so you wonder, could it be better yet.

So the team gets back together and decides that this time they are going to build the ultimate donkey cart. It's going to be leather and velvet trim, carbon fibre to reduce weight, a modular racking system with extendible plates for cargo. The reins are going to have gold medallions, it's going to be awesome.

But it is still going to be a donkey cart, and not the small crappy diesel truck that is going to be the first step in making donkey carts irrelevant.

The gold-plated donkey cart is actually a variant on both the Iron Law of Oligarchy and the Innovator's Dilemma.

The donkey cart is part of the valuable series of incremental improvements that consists of most business as usual. Making a better donkey cart makes real improvements for customers and users.

The donkey cart team is also there to create donkey carts. That's what they think and talk about all the time. It is almost churlish to say that they are really the cargo transport team because probably no-one has ever expressed their purpose or mission in those terms because no-one else has thought of the diesel truck either.

Finally any group of people brought together for a purpose will never voluntarily disband itself. They will instead find new avenues of donkey cart research that they need to pursue and there will be the donkey cart conference circuit to be part of. The need for new donkey cart requirements will be self-evident to the team as well as the need for more people and time to make the next donkey cart breakthrough, before one of their peers does.

Standard