
Juxt 24

Juxt hold an occasional conference and this edition was focused on Fintech, which isn’t an area I know well, though I have dabbled in it a bit.

Opening keynote

Fortunately the opening talk, by Fran Bennett of the Ada Lovelace Institute, was on AI and drew a parallel between the Post Office/Fujitsu debacle and the current level of credulity around the potential of generative AI. I particularly liked this (paraphrased) quote:

Computer systems operate within existing systems of power

If we choose to believe the myth of infallible machines over fallible humans then injustices like the Horizon scandal will just occur again and again.

Eliminating non-determinism

Allen Rohner of Griffin Bank offered a talk on improving testing systems, taking firm aim at “flaky” tests and attributing them to non-deterministic and side-effecting behaviour, either in the system under test or in the testing code itself.

He used the example of the FoundationDB testing strategy, with its focus on invariant behaviour to facilitate automated generative testing. The practical twist he offered on this was Griffin’s use of stateful proxies, which can also be part of the generated testing, to provide something stronger than mocks or stubs in integration testing.

I think the key takeaway, though, was to change the way you think about unreliable tests: consider changing the system to solve the problem rather than hacking around the tests.
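To make the invariant idea concrete, here is a minimal sketch (my own illustration, not Griffin’s actual code) of generative testing in Java: random operation sequences are applied from a fixed seed and an invariant that must hold for every sequence is checked, so any failure is deterministic and reproducible rather than flaky.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// A sketch of invariant-driven generative testing: apply a random sequence
// of operations, then check an invariant that must hold for every sequence.
// Seeding the generator makes any failure deterministic and reproducible.
public class LedgerInvariantTest {

    static class Ledger {
        private final List<Long> entries = new ArrayList<>();
        private long balance;

        void post(long amount) {
            entries.add(amount);
            balance += amount;
        }

        long balance() { return balance; }
        List<Long> entries() { return entries; }
    }

    public static void main(String[] args) {
        long seed = args.length > 0 ? Long.parseLong(args[0]) : System.nanoTime();
        Random random = new Random(seed);

        for (int run = 0; run < 1_000; run++) {
            Ledger ledger = new Ledger();
            for (int op = 0; op < 100; op++) {
                ledger.post(random.nextInt(2_001) - 1_000); // random credit or debit
            }
            // Invariant: the running balance always equals the sum of the entries.
            long recomputed = ledger.entries().stream().mapToLong(Long::longValue).sum();
            if (ledger.balance() != recomputed) {
                throw new AssertionError("invariant violated; rerun with seed=" + seed);
            }
        }
        System.out.println("1000 generated runs passed (seed=" + seed + ")");
    }
}
```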

Workflows in service clothing

Phill Barber’s talk on workflows was one of my favourites of the day, partly because I wasn’t expecting to enjoy it and partly because his argument in favour of workflows and orchestrated workflows (over choreographed events) was persuasive. He also didn’t try to deny the problems there can be with workflows, like only being able to design them visually and then export them to source control, or never delivering the promised ability for non-technical users to change the system.

He tackled the key issue of the “workflow black hole effect” head on by putting the workflows inside the service boundaries. This approach also minimises the complexity and rigidity that can come from orchestration, as you are talking about a few dedicated flows within a service. The orchestration rules are hidden from the service’s callers and therefore remain malleable.

He also suggested something interesting: when a collaborating service becomes too anaemic and the balance of functionality ends up on the workflow side, you can eliminate the service entirely and allow the workflow to access the datastore associated with it. In the example given, this eliminated a feature-light microservice and instead brought the data ownership into a service with broader responsibilities. I would be interested to know whether the idea would extend to multiple data ownerships, but the thought only occurred to me well after the event.

He mentioned nflow as an embeddable (JVM-based) open source workflow engine that allows configuration in code.
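As a flavour of what workflow-as-code can look like, here is an illustrative plain-Java state machine for a flow living inside a service boundary. To be clear, this is not nflow’s actual API; the states and steps are invented for the example.

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Function;

// A plain-Java state machine illustrating "workflow as code" inside a
// service boundary. The states and steps are invented for the example;
// this is not nflow's actual API.
public class PaymentWorkflow {

    enum State { CREATED, FUNDS_RESERVED, SETTLED }

    // Each non-terminal state maps to a step that does some work and
    // returns the next state. Callers never see these states: they only
    // see the service's API, so the flow stays malleable.
    private final Map<State, Function<String, State>> steps = new EnumMap<>(State.class);

    PaymentWorkflow() {
        steps.put(State.CREATED, paymentId -> {
            System.out.println("reserving funds for " + paymentId);
            return State.FUNDS_RESERVED;
        });
        steps.put(State.FUNDS_RESERVED, paymentId -> {
            System.out.println("settling " + paymentId);
            return State.SETTLED;
        });
    }

    State run(String paymentId) {
        State state = State.CREATED;
        while (steps.containsKey(state)) {
            state = steps.get(state).apply(paymentId);
        }
        return state; // terminal state, e.g. SETTLED
    }

    public static void main(String[] args) {
        System.out.println("final state: " + new PaymentWorkflow().run("pay-42"));
    }
}
```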

Monoliths, monoliths, monoliths!

Everyone was of one mind that you should start each development with a monolith; as Vlad Yatsenko, CTO of Revolut, put it, the service should be just one box on the system diagram. No-one was fundamentally against microservices, but there was a clear preference for “right-sized” services divided by organisational or operational properties, and for decomposing the monolith into services rather than trying to jump straight to a distributed system.

Magic versus abstraction

In the questions section of the final talk, by Zohor Melamed, Harry Percival asked what the difference was between a great abstraction and the “magic” behaviour that Zohor had railed against in his talk. Again paraphrasing the response:

The difference between magic and a good abstraction is that the abstraction doesn’t shape the solution.

Bad abstractions are like async and await; good abstractions are like Docker, which genuinely does not leak the details of the running container.

Conclusion

Thanks to Malcolm and Jon for the invite; it was an interesting line-up, even for someone for whom the “buy side” is a mystery.


The state of microservices

One of the liveliest sessions at Scale Summit was the one on microservices where opinions flowed free and fast in a rapidly rotating fishbowl.

There were several points of interest that I took away and I thought I would write them up here as some of the key questions. In another post I’ll talk about the problems with microservices that haven’t been solved yet.

Are we doing microservices yet?

Some people in the room had already adopted microservices. The reasons given included breaking down (or trying to change the functionality of) monolithic codebases, trying to scale up existing applications or architectures, or simply applying what they saw as existing best practice more generally.

What is a microservice?

A few people wanted to get back to this, showing that while it is a handy term it isn’t universally understood or agreed upon.

I personally think a microservice is a body of code whose purpose and implementation are easy to understand when studied, which adheres to the Unix philosophy of doing one thing well, and which can be re-implemented quickly if needed.

Are microservices just good practice or righteous SOA?

Well, naturally you can regard every new concept as a distillation of previous good practice. However, the point of naming something is to make it easy to triangulate on a meaning in conversation.

Won’t microservices just be corrupted by consultants and architects?

Yes, everything popular gets corrupted in time. I’m okay with that because in the interval we have a handy term to describe some useful patterns of solution design.

Don’t microservices complicate operations?

One attendee put it well: microservices are recursive so if the operations team are going to support them then they should be in the business of creating a service that deploys services.

Some people felt that for microservices to work, developers and teams had to have full-stack ownership and responsibility, but I felt that was trying to smuggle devops in under the microservices banner.

I think microservices are best deployed on a platform: the platform defines what a deployable service is and can be responsible for shutting down misbehaving services.

Such a scheme allows for other aspects of the Unix way to be applied, such as man pages, responding to --help and other useful conventions.

The platform can check whether these conventions have been met before deploying the service.
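A minimal sketch of what such a pre-deployment check might look like, assuming a hypothetical service binary path; it simply refuses to deploy anything that doesn’t respond cleanly to --help:

```java
import java.util.concurrent.TimeUnit;

// A sketch of a pre-deployment convention check: run the candidate service
// with --help and refuse to deploy it unless the command exits cleanly.
// The binary path is hypothetical.
public class ConventionCheck {
    public static void main(String[] args) throws Exception {
        String serviceBinary = args.length > 0 ? args[0] : "./my-service";

        Process process = new ProcessBuilder(serviceBinary, "--help")
                .redirectErrorStream(true)
                .start();

        boolean finished = process.waitFor(5, TimeUnit.SECONDS);
        if (!finished) {
            process.destroyForcibly(); // hung on --help: also a convention failure
        }
        if (!finished || process.exitValue() != 0) {
            System.err.println("rejected: service does not respond to --help");
            System.exit(1);
        }
        System.out.println("ok: convention met, safe to deploy");
    }
}
```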

Aren’t microservices just a way to discuss the granularity of a service?

Yes, in a way. Although there are a few other practices that make up a successful application of microservices, you can think of it as a way of checking the responsibility boundaries of your service and how easy it would be to replace.

If your service has multiple responsibilities and is difficult to replace easily then it might have problems.

A lot of people wanted to use AWS as an example of good services without microservices. I think AWS is a good example of service implementation: each part has good boundaries and there are a few implementations of the key APIs. Things like the AWS security functionality are a good example of how hard you have to work to avoid having services rely on other services, and the result isn’t elegant.

I would argue, though, that public-facing APIs are probably where you want to compose microservices to provide a consistent facade onto the functionality.

As other delegates pointed out, isolating services makes them easier to scale. If starting a server via the EC2 API requires more resources than shutting one down, you might prefer to scale up just the creation service rather than run many instances of the whole API that are unlikely to be used or that simply consume resources.

As ever, horses for courses, you’re going to know your domain better than I do.

Don’t microservices cause problems as well as solve them?

Absolutely, choosing the wrong solution at the wrong time is going to cause problems. Zealously over-applying microservices or applying the idea to absurd levels is not going to have a happy outcome.

I guess a good point is that we know the problems with our existing service implementations. We don’t know what problems there are with microservices or whether they have logical and simple solutions. However, they are helping us solve some of our known problems.

Aren’t microservices simply REST-ful services done right?

The most common form of microservice today is probably one implemented via HTTP and JSON. However, this form isn’t prescriptive. Protocol Buffers might be a better choice for an internal exchange format and ZeroMQ might be a better choice for a transport.

I also think that message queues are a good basis for microservices, with micro consumers and producers focussing on tight message types.
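As a sketch of that idea (the message type is invented and an in-memory queue stands in for a real broker), a micro consumer bound to a single tight message type might look like this:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A micro consumer bound to a single, tight message type. An in-memory
// queue stands in for a real broker, and the message type is invented
// for the example.
public class PaymentReceivedConsumer {

    record PaymentReceived(String paymentId, long amountPence) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<PaymentReceived> queue = new LinkedBlockingQueue<>();
        queue.put(new PaymentReceived("pay-42", 1250));

        // The consumer does one thing well: handle PaymentReceived, nothing else.
        PaymentReceived message = queue.take();
        System.out.printf("crediting %dp for %s%n", message.amountPence(), message.paymentId());
    }
}
```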

See also my mini-list of microservice myths which has more on this subject.

Should we be doing microservices?

I suspect that doing microservices just to tick a solution buzzword is bad. However, microservices seem a pretty good solution to a problem I know I’ve seen: trying to innovate in a fast-moving domain without creating a maintenance burden or a large legacy.


Up-front quality

There has been a great exchange on the London Clojurians mailing list recently about the impact of a good REPL on development cycles. The conversation kicks into high gear with this post from Malcolm Sparks, although it is worth reading from the start (membership might be required, I can’t remember). In his post Malcolm talks about the cost of up-front quality. This, broadly speaking, is the cost of the testing required to put a feature live; it is essentially a way of looking at the cost that automated testing adds to the development process. As Malcolm says later: “I’m a strong proponent of testing, but only when testing has the effect of driving down the cost of change.”

Once upon a time we had to fight to introduce unit testing and automated integration builds and tests. Now that these are taken as a given, the issue, rather like a pendulum, is going too far in the opposite direction. If you’ve ever had to scrap more than one feature because it failed to perform, then up-front quality is a cost you weigh as closely as the costs of up-front design and production failure.

Now, the London Clojurians list is at that perfect point in its lifespan where it is full of engaged and knowledgeable technologists, so Steve Freeman drops into the thread and sensibly points out that Malcolm is also guilty of excess in valuing feature mutability to the point of wanting to be able to change a feature in flight in production, something that is cool but probably in excess of any actual requirement. Steve adds that there are other benefits to automated testing, particularly unit testing, beyond guaranteeing quality.

However, Steve mentions the Forward approach, which I also subscribe to, of creating very small codebases. Paul Ingles then gets involved and posts the best description I’ve read of how you can use solution structure, monitoring and restrained codebases to avoid a lot of the issues of software complexity. It’s hard to boil the argument down because the post deserves reading in full, but I would summarise it as: the external contact points of a service are what matter, and if you fulfil the contract of the service you can write a replacement in any technology or stack and put the replacement alongside the original service.

One of the powerful aspects of this approach is that it generalises the “throw one away” rule and allows you to say that the current codebase can be discarded whenever your knowledge of the domain or your available tools change sufficiently to make it possible to write an improved version of the service.

Steve then points out some of the other rules that make this work, such as being able to track, and ideally change, consumers as well. It’s an argument for always using keys on API services, even internal ones, to help see what is calling your service, something that is moving towards being a standard at the Guardian.
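As an illustration of the API-key idea (the header name and logging are my own assumptions, not the Guardian’s actual scheme), an internal service can refuse anonymous calls and log the key so you can see which consumers depend on it:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A sketch of keyed access to an internal API: refuse anonymous calls and
// log the key on each request so you can see which consumers depend on
// the service. The header name is an assumption.
public class KeyedService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String key = exchange.getRequestHeaders().getFirst("X-Api-Key");
            if (key == null) {
                exchange.sendResponseHeaders(401, -1); // no key, no service
                return;
            }
            System.out.println("request from consumer key=" + key); // who is calling?
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}
```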

So, to summarise: a little thread of pure gold, and the kind of thing that can only happen when the right people have the time to talk and share experiences. And when it comes to testing, ask whether your tests are making it cheaper to change the software when the real functionality is discovered in production.
