Programming, Software

Preferring Microservices to Unified Services

So I want to stage an argument between two philosophies in service-oriented design: microservices and what I am calling the Unified Service. I am a fan of microservices, so I am worried about presenting a straw man argument for the other side; originally I was going to call the unified service the One True Service, for example, but that seemed too snide.

Now there is an XKCD for everything, and in this case the relevant one is the cartoon on standards. However, if argument via XKCD doesn't float your boat, let's expand on the appeal of the unified service.

The desire for a comprehensive service is completely understandable; indeed, the first wave of service orientation was based around the benefits of centralising services and providing consistency to many clients. However, it was during that first wave of implementations that I became suspicious of the viability of comprehensive services.

If you create a unified service then you end up taking on the complexity of all your clients and concentrating it in one huge, uber-complex place. Every requirement and need ends up in the central service, which then becomes a slew of conditional code and special cases (unless you have a very brilliant team of coders).

I have ended up preferring the exact opposite approach, heavily influenced by the UNIX philosophy, with lots and lots of microservices. Recently this reached an apogee or nadir (depending on how you view it) of two whole webapps that differ only in that they expose different time periods of data.
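Neither app is shown here, so this is a minimal sketch of what one of that pair could look like, assuming Flask and invented data; its sibling would be identical apart from the WINDOW constant:

```python
# A minimal sketch (invented data, Flask assumed): one of a pair of
# near-identical services. The sibling differs only in WINDOW.
from datetime import datetime, timedelta, timezone

from flask import Flask, jsonify

app = Flask(__name__)

WINDOW = timedelta(days=7)  # the sibling service says days=30

# Stand-in data; in reality both services would read a shared source.
READINGS = [
    {"at": datetime.now(timezone.utc) - timedelta(days=d), "value": d * 10}
    for d in range(60)
]

@app.route("/readings")
def readings():
    since = datetime.now(timezone.utc) - WINDOW
    recent = [r for r in READINGS if r["at"] >= since]
    return jsonify(
        [{"at": r["at"].isoformat(), "value": r["value"]} for r in recent]
    )

if __name__ == "__main__":
    app.run(port=8001)  # the sibling runs on its own port
```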

This approach winds up many of my colleagues, who regard it as absurdly purist and argue that it simply reintroduces the complexity in the form of many services, each with its own undocumented JSON formats and endpoints.

The reason I think the approach has merit is that I probably think more about maintaining code than creating it. I like the fact that while I have many services, I only have to worry about the ones that are causing problems, and when I am trying to fix them I have a very small code surface area to explore.

When I want to change a service I don't have to worry about taking out five endpoints with one dodgy piece of shared code. The unified service is a terrifying thing to deploy because when you push out new code you need to verify that everything is still working.

No problem, right? We use automated testing to sort all this out. Well, I think having to have a test suite for services is a bit of an anti-pattern, something I will blog about later.

Okay, so now I have a test suite and I am no longer worried about breaking something when I push features out. The trouble is that I am now stuck in test land trying to figure out where the wires are crossed in the shared code. Meanwhile the bug is still in production, and time ticks away while I work out how things relate and how my new requirement conflicts with all the other requirements on the unified service.

My view is that it is okay to have massive code duplication and functionality overlap if you also have strong vertical separation and the ability to change small parts of a collaborating system. Systems are harder to manage than codebases, and while you want both to be as good as they can be, savings in the codebase are wiped out if the resulting system is more complex and harder to change.

Programming, Software, Web Applications, Work

Names are like genders

One thing I slightly regret in the data modelling we do for users in Wazoku is that I bowed to marketing pressure and "conventional wisdom" and created a pair of first and last name fields. If gender is best modelled as a text field, then how much more so is a name, that unique indicator of identity?

The primary driver for the split was so that email communications could start "Hey Joe" rather than "Hey Joe Porridge Oats McGyvarri-Billy-Spaulding". Interestingly, this turns out to be very much the minority use case: 95% of the time we actually put the fields back together to form a single string, because we are displaying the name to someone other than the user. It would have been much easier to have a single name field and then extract the first "word" from the string for the rare case where we want to greet the user informally.
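The extraction really is that small; here is a sketch with a hypothetical helper, not Wazoku's actual code:

```python
def informal_greeting(full_name: str) -> str:
    """Greet using the first "word" of a single free-text name field."""
    words = full_name.strip().split()
    return f"Hey {words[0]}" if words else "Hey there"

assert informal_greeting("Joe Porridge Oats McGyvarri-Billy-Spaulding") == "Hey Joe"
```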

My more general lesson is that wherever I (or we more generally as a business) have tried to pre-empt the structure of a data entity, we have generally gotten it wrong; so far, however, we have never had to turn a free text field into a stricter structure.

Software, Work

Comparing Jabber servers

I recently had a trawl around the available Jabber servers looking for something suitable for use as a messaging system for a website. My first job was to do a quick review of what is available out there. The first thing that became quite clear is that ejabberd has massive mindshare. There was a definite feeling of "why would you want to try other servers when you could just be running ejabberd?".

Well, there are two answers of a kind. First, there is the inevitable Erlang objection, this time from sysops who felt uncomfortable with monitoring and support. It is a fair point; I feel Erlang can be particularly obtuse when it is failing. The second was that ejabberd stubbornly refused to start up on my Fedora test box: neither the yum copy nor a hand-built version cut the mustard. Ironically, I was able to get an instance running in minutes on my own Ubuntu-based virtual machine (hand-built Erlang and ejabberd).

Looking at JVM-based alternatives, I tried OpenFire and Tigase. Tigase has lovely imagery but also seemed to have spam all over its comments, and the installation was a pig that I gave up on quickly. OpenFire is one of those old-school Java webapps where you are meant to manage everything via a web GUI. This makes some kinds of task easy, but you have to write your own plugins to get programmatic access to the server. I didn't want to set up an external database (why on earth would I want to do that?), but the internal HSQL store left me with no way of easily tweaking the server's setup. A command-line tool would have been ultra-helpful because… OpenFire is annoyingly buggy. By this I mean it doesn't have a lot of bugs, but the ones it has are really annoying. After running through the GUI setup process I tried to log in. This was the wrong thing to do: what I needed to do was stop the server and restart it. Now the server was fucked, and I had to manually delete data and tweak config files before I could run the setup again.

Once it was running there were also some fun and games getting the BOSH endpoint to work. Do you need a trailing slash or not? I don't remember, but get it wrong and it doesn't work. If you HTTP GET the endpoint you get an error; this is technically correct but leaves you wondering whether your config is right or not (if you know the error message you are looking for, perhaps it is helpful, but I didn't, so it wasn't). ejabberd (when I got it working) is more helpful, giving an OHAI message that at least confirms you have something to point the client at.
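A quick probe along these lines at least tells you whether anything is answering at all; the ports and paths below are the conventional defaults for each server, assumed rather than checked against my notes:

```python
# Probe candidate BOSH endpoints. BOSH servers reject a plain GET, so any
# HTTP response, even an error, means something is listening; only a
# connection failure means the URL or the server config is wrong.
import urllib.error
import urllib.request

CANDIDATES = (
    "http://localhost:7070/http-bind/",  # OpenFire's conventional default
    "http://localhost:5280/http-bind/",  # ejabberd/Prosody convention
)

for url in CANDIDATES:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(url, "->", resp.status)
    except urllib.error.HTTPError as err:
        print(url, "-> HTTP", err.code, "(endpoint exists)")
    except OSError as err:
        print(url, "-> no response:", err)
```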

OpenFire did what I needed, but it seems to make an implicit assumption that there will be an admin sitting at a screen to configure and monitor everything. It feels more like a workgroup tool than a workhorse piece of infrastructure.

One oddity I found was Prosody: it sensibly uses the same defaults, URLs and conventions as ejabberd, but is written in Lua and is gloriously lightweight. It absolutely hit the spot for development work and actually felt fun. I was able to script everything I needed to do via its ctl script. Of course, if Erlang (a big, serious infrastructure language) is a bit of an issue, then the hipster scripting language might be too.

What the hell: if it were about doing what I need quickly and without fuss, I would use Prosody and worry about any infrastructure issues later.

Software, Work

The Joy of Acceptance Testing: Is my bug fixed yet?

Here’s a question that should be a blast from the past: “Is my bug fixed yet?”.

I don’t know, is your acceptance test for the bug passing yet?

Acceptance tests are often sold as the way stakeholders know that their signed-off features will not regress during iterative development. That's true; they probably do that. What they also do, though, is dramatically improve communication between developers and testers. Instead of faffing around with bug tracking, commit-comment buggery and build artifact change lists, you can have your test runner of choice tell you the current status of all the bugs as well as the features.
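As a sketch of what that looks like (the bug, the code under test and the pytest convention are my invention, not from any project mentioned here), the bug report simply becomes a test whose name carries the ticket number:

```python
# Sketch: a bug report recast as an executable acceptance test.
# "Is BUG-1234 fixed yet?" becomes "does test_bug_1234 pass yet?"

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the real code under test."""
    return max(price * (1 - percent / 100), 0.0)

def test_bug_1234_discount_never_goes_negative():
    # As reported: a 110% voucher produced a negative total at checkout.
    # While the bug is open this test is red, and the test runner, not
    # the bug tracker, answers "is it fixed yet?".
    assert apply_discount(price=10.00, percent=110) == 0.0
```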

Running acceptance tests is one example where keeping the build green is not mandatory. This creates the need for a slightly more complicated build result visualisation: I like to see a simple bar split into green and red, labelled with the number of failing tests. There may be the odd day or two when that bar is totally green, but in general your test team should be way ahead of the developers, so the red portion represents the technical deficit you are running at any given moment.
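Rendering that bar is trivial; a toy sketch of the idea, entirely invented:

```python
# Toy rendering of the split bar: '#' for passing, '-' for failing.
def deficit_bar(passed: int, failed: int, width: int = 40) -> str:
    total = passed + failed
    green = round(width * passed / total) if total else width
    return "#" * green + "-" * (width - green) + f"  {failed} failing"

print(deficit_bar(passed=172, failed=9))
```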

If the red portion starts to grow, you have a prompt to decide whether you need to change your priorities or your developer quality bar. Asking whether a bug has been fixed, or when the fix will be delivered, is the wrong question. For me the right questions are: should we fix it, and how important is it?

If we are going to fix a bug we should have an acceptance test for it, and its importance indicates the time frame in which that test should pass.

Software, Work

Agile must be destroyed

Let’s stop talking about the post-Agile world and start creating it.

Agile must be destroyed not because it has failed, and not because we want to return to Waterfall or to the unstructured, failing, flailing projects of the past, but because its maxims, principles and fixed ideas have started to strangle new ideas. All we need to take with us are the four values of the Agile Manifesto. Everything else is just a set of techniques that were better than what preceded them but are not the final answer to our problems.

There are new ideas that help us to think about our problems: a philosophy of flowing and pulling (but not the training courses and books that purport to solve our problems), the principles of software craftsmanship, "constellation" architecture, principles of Value that put customers, not our clients, first. There are many more that I don't know yet.

We need to adopt them, experiment with them, change and adapt them. And when they in turn ossify into unchallengeable truisms, we will need to destroy them as well.

Software

Fine grained access control is a waste of time

One of the things I hate developing most in the world (there are many others) is fine-grained access control systems: the kind of thing where you have to set option customer.view_customer.customer_delivery_options.changes.change_customer_home_delivery_flag to true if you want a role to be able to click a checkbox on and off.

There are two main reasons for this:

  • First, early in my career I helped implement a fine-grained system; it took a lot of effort and time. It was never used, because configuring the options was too difficult and time-consuming. Essentially the system was switched to be always on.
  • Secondly, when working in a small company I discovered that the people who deal with customers, buy stock or arrange short-term finance actually did a better job when the IT department didn't control how they worked. Having IT implement "controls" on their systems is like selling a screwdriver that only turns in one direction.

Therefore I was very happy to hear Cyndi Mitchell on Thursday talking about the decision not to implement fine-grained ACLs in Mingle. If you record who did what on the system and make it possible to recover previous revisions of data, then you do not need control at a level much finer than user and superuser.
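As a sketch of that coarse-grained alternative (the names and the data model are my invention, not Mingle's), all you need is a couple of roles, attribution of every change, and the ability to roll one back:

```python
# Sketch: two coarse roles plus an audit trail, instead of a thicket of
# per-checkbox permission flags. Invented model, not Mingle's.
from dataclasses import dataclass, field

ROLES = {"user", "superuser"}

@dataclass
class Record:
    value: str
    history: list = field(default_factory=list)  # (who, previous value)

    def update(self, who: str, role: str, new_value: str) -> None:
        assert role in ROLES
        # Anyone may edit: we rely on attribution, not prohibition.
        self.history.append((who, self.value))
        self.value = new_value

    def revert(self, who: str, role: str) -> None:
        # The one coarse distinction: only a superuser rolls back others.
        if role != "superuser":
            raise PermissionError("only a superuser can revert")
        _, previous = self.history.pop()
        self.value = previous

record = Record("home delivery: off")
record.update("joe", "user", "home delivery: on")
record.revert("admin", "superuser")
assert record.value == "home delivery: off"
```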

Instead you can encourage people to use your system as a business tool, and if they decide to use that screwdriver to open paint tins or jam doors open, then good on them.

Programming, Software

The One True Layer Anti-Pattern

A common SQL database anti-pattern is the One True Lookup Table (OTLT). Though laughable, the same anti-pattern often occurs at the application development layer, commonly as part of an application's mid-life crisis.

Initially all objects and representations are coded as needed and to fit the circumstances at hand. Of course, the dynamics of the Big Ball of Mud anti-pattern are such that you will soon have many varying descriptions of the same concept and data. Before long comes the desire to clean up and rationalise all these repetitions, which is a good example of refactoring for simplicity. However, at this point danger looms.

Eventually someone will point out that one clean data model works so well that perhaps there should be a single shared data model that all applications use. This is superficially appealing, and it is almost inevitably implemented with a lot of fighting and fussing to ensure that everyone is using the one true model (incidentally, I am talking about data models here, but it might be services or anything else where several applications are meant to drive through a single component).

How happy are we then? We have created a consistent component that is used across all our applications in a great horizontal band. The people who proposed it get promoted, and everyone is using the One True Way.

What we have actually done is recreate the n-tier application architecture. Hurrah! Now what is the problem with that? Why does no one talk about n-tier application architecture anymore? Well, the issue is Middleware, and the One True Layer will inevitably hit the same rocks that Middleware did and be dashed to pieces.

The problem with the One True Layer is the fundamental fact that you cannot be all things to all men. From the moment it is introduced, the OTL must either bloat and expand to cover all possible use cases or else hideously hamstring development of the application. If there were a happy medium between the two, someone would have written a library to do the job by now.

There is no predicting which of the two choices will be made; I have seen both, and neither has a happy outcome. Either way, from this point on the layer is doomed: it becomes unusable, and before long the developers will be working around the OTL as much as possible, using it only when threatened with dismissal.

If the codebase lives long enough, what usually happens is that the OTL sprouts a number of wrappers around its objects that allow the various consumers of its data to do what they need to. When the original creators of the OTL eventually lose the power to force teams to use the layer, the wrappers tend to suck up all the functionality of the OTL and the library dependency is removed.

In some ways this may seem regressive: we are back at the anarchy of objects. In fact, what has been created is a set of vertical slices that represent the data in the way that makes sense for the context they appear in. These slices then collaborate via external APIs, usually presented via platform-neutral data transfer standards (HTTP/JSON, for example) rather than via binary compatibility.
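A sketch of what that collaboration looks like, with invented names and shapes: two slices each keep a customer representation that suits their own context, and the only shared contract is the JSON they exchange:

```python
# Sketch: two vertical slices model "customer" in their own terms; the
# only shared contract is the JSON they exchange, not a binary type.
import json

# The billing slice's representation.
def billing_customer_as_json(account_id: int, balance_pence: int) -> str:
    return json.dumps({"account_id": account_id, "balance_pence": balance_pence})

# The marketing slice consumes the JSON and reshapes it for its context.
def marketing_view(payload: str) -> dict:
    data = json.loads(payload)
    return {
        "customer_ref": str(data["account_id"]),
        "owes_money": data["balance_pence"] > 0,
    }

print(marketing_view(billing_customer_as_json(42, 1250)))
# {'customer_ref': '42', 'owes_money': True}
```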

My advice is to try to avoid binary-dependent interactions between components and to avoid creating very broad layers of software. Tiers are fine, but keep them narrow, and try to avoid any tier reaching across more than a few slices (this particularly applies to databases).
