Programming, Software

The One True Layer Anti-Pattern

A common SQL database anti-pattern is the One True Lookup Table (OTLT). Laughable as it is, the same anti-pattern often occurs at the application development layer, typically as part of the mid-life crisis phase of an application.

Initially all objects and representations are coded as needed and to fit the circumstances at hand. Of course, the dynamics of the Big Ball of Mud anti-pattern are such that soon you will have many varying descriptions of the same concept and data. Before long the desire arises to clean up and rationalise all these repetitions, which is a good example of refactoring for simplicity. However, at this point danger looms.

Someone will eventually point out that having one clean data model works so well that perhaps there should be one shared data model that all applications will use. This is superficially appealing, and it is almost inevitably implemented with a lot of fighting and fussing to ensure that everyone is using the one true data model (incidentally, I'm talking about data models here, but it might be services or anything else where several applications are meant to drive through a single component).

How happy are we then? We have created a consistent component that is used across all our applications in a great horizontal band. The people who proposed it get promoted and everyone is using the One True Way.

What we have actually done is recreate the n-tier application architecture. Hurrah! Now what is the problem with that? Why does no-one talk about n-tier application architecture anymore? Well, the issue is Middleware, and the One True Layer will inevitably hit the same rocks that Middleware did and be dashed to pieces.

The problem with the One True Layer is the fundamental fact that you cannot be all things to all men. From the moment it is introduced, the OTL must either bloat and expand to cover all possible Use Cases or hideously hamstring development of the application. If there were a happy medium between the two, someone would have written a library to do the job by now.

There is no telling which of the two choices will be made; I have seen both, and neither has a happy outcome. Either way, from this point on the layer is doomed: it becomes unusable, and before long the developers will be trying to work their way around the OTL as much as possible, using it only when threatened with dismissal.

If the codebase lives long enough, what usually happens is that the OTL sprouts a number of wrappers around its objects that allow the various consumers of its data to do what they need to. When the original creators of the OTL are eventually unable to force the teams to use the layer, the wrappers tend to suck up all the functionality of the OTL and the library dependency is removed.
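
For illustration, a consumer-side wrapper of this kind might look like the following sketch (the class and field names are invented, not taken from any real codebase); the wrapper starts as a thin adapter and gradually absorbs the behaviour its consumer actually needs:

```java
import java.time.LocalDate;

// Hypothetical sketch: a billing slice wrapping a shared "one true" customer object.
// The shared class and its fields are assumed names for illustration only.
class SharedCustomer {
    String postalAddress;
    LocalDate lastPaymentDate;
    // ...dozens of other fields that every other application also has to carry around
}

class BillingCustomer {
    private final SharedCustomer otlCustomer;

    BillingCustomer(SharedCustomer otlCustomer) {
        this.otlCustomer = otlCustomer;
    }

    // Billing only needs a handful of the fields the shared model exposes.
    String invoiceAddress() {
        return otlCustomer.postalAddress;
    }

    // Behaviour the shared layer would never accept ends up living in the wrapper,
    // until eventually the wrapper does everything and the shared dependency goes.
    boolean isOverdue(LocalDate today) {
        return otlCustomer.lastPaymentDate.plusDays(30).isBefore(today);
    }
}
```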

In some ways this may seem regressive: we are back at the anarchy of objects. In fact what has been created is a set of vertical slices that represent the data in the way that makes sense for the context they appear in. These slices then collaborate via external APIs that are usually presented via platform-neutral data transfer standards (HTTP/JSON, for example) rather than via binary compatibility.
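
As a rough sketch of what those slice-local representations and the JSON hand-off might look like (all the names are invented, and Jackson is assumed as the JSON library purely for illustration):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical sketch: two slices agree only on a JSON document, not on a shared class.
public class OrderSlices {

    // The dispatch slice's own representation, the shape it serves over HTTP.
    record DispatchOrder(String id, String currentDepot, String nextStop) {}

    // The billing slice's own representation; it takes what it needs from the JSON
    // and ignores the rest, so dispatch can change its model freely.
    record BillableOrder(String id, long weightKg) {}

    static BillableOrder fromJson(String json) throws Exception {
        JsonNode node = new ObjectMapper().readTree(json);
        return new BillableOrder(node.get("id").asText(), node.get("weightKg").asLong());
    }

    public static void main(String[] args) throws Exception {
        String wire = "{\"id\":\"O-17\",\"currentDepot\":\"Leeds\",\"nextStop\":\"York\",\"weightKg\":120}";
        System.out.println(fromJson(wire)); // BillableOrder[id=O-17, weightKg=120]
    }
}
```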

My advice is to try to avoid binary-dependent interactions between components and to avoid creating very broad layers of software. Tiers are fine, but keep them narrow and try to avoid any tier reaching across more than a few slices (this particularly applies to databases).

Standard
Programming

Why Repositories and not Services?

This is a good question. Why do people like ThoughtWorks make a lot of fuss about things like Services but then want to use things like the Repository pattern when writing code?

The short answer is that Service Orientation and Domain Driven Design have two slightly different concerns.

For example, in a transportation domain you don't get a Truck from a Truck Service; you get a Truck from the Garage or the Truck Manufacturer, depending on whether you own it or are buying it. The point is that a Truck Service is meaningless in Domain Driven Design terms; it is just something that programmers introduce to make their code easier for them to use.
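
As a minimal sketch of the distinction (the types here are invented for illustration), the Garage reads as part of the domain language in a way a generic Truck Service never would:

```java
import java.util.Optional;

// Hypothetical sketch: fetching a Truck through something that exists in the domain.
record Truck(String registration) {}

// A repository named after the place trucks actually come from when you own them.
interface Garage {
    Optional<Truck> findByRegistration(String registration);
}

// Contrast: an invented "TruckService" that means nothing to anyone who works
// with trucks and exists only to make the programmer's code easier to call.
// interface TruckService { Truck getTruck(String registration); }
```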

If, however, you want to track where a Consignment is, then it makes sense to offer this as a service. For a start, it has different audiences: I might want to offer tracking to customers via the web and a slightly more detailed version of the service to the Customer Service Department.

In this sense the Tracking Service is a genuine Domain item: people actually talk about the Tracking Service, and the Service has been organised around the transactions and expectations that arise in the course of transporting goods.
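
A sketch of what those two audiences might look like at the contract level (again, the names and return types are invented for illustration, not a prescribed design):

```java
import java.time.Instant;
import java.util.List;

// Hypothetical sketch of a Tracking Service shaped around the people who consume it.
record PublicTrackingEvent(Instant when, String description) {}
record DetailedTrackingEvent(Instant when, String description, String depot, String driverId) {}

interface TrackingService {
    // What a customer sees on the website.
    List<PublicTrackingEvent> trackForCustomer(String consignmentId);

    // The slightly more detailed view offered to the Customer Service Department.
    List<DetailedTrackingEvent> trackForCustomerService(String consignmentId);
}
```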

I am not sure it makes any sense to talk about a Service in your codebase if it does not have an external consumer for its functionality. Usually, Service objects that only interact with your own code can be broken up and have their concerns divided in a different way so as to eliminate them. A Truck Finder, for example, might make sense: it would handle finding out whether a Truck was on the Road, in the Depot or in the Repair Shop. The Depot might then tell you whether the Truck was being loaded, while the Truck could tell you how full it currently was.
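
To make that division of concerns concrete (invented types again, sketched under the assumptions in the paragraph above):

```java
// Hypothetical sketch of splitting up the concerns a catch-all "TruckService" would hoard.
enum TruckLocation { ON_THE_ROAD, IN_THE_DEPOT, IN_THE_REPAIR_SHOP }

interface TruckFinder {
    // Only knows where a truck currently is.
    TruckLocation locate(String registration);
}

interface Depot {
    // Only the Depot knows whether a truck is being loaded.
    boolean isBeingLoaded(String registration);
}

interface Truck {
    // The Truck itself knows how full it currently is (0.0 empty to 1.0 full).
    double loadFactor();
}
```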

Once you have identified external consumers for a service then you get into the question of Service Contracts and a lot of the good things about Service Architectures begin to apply. Limiting concerns, platform neutrality and service composition for example; but this involves a lot more than just tacking the word “Service” on the end of a class.

Standard
Java, Programming, Software

Programming to Interfaces Anti-Pattern

Here’s a personal bête noire. Interfaces are good, m’kay? They allow you to separate function and implementation, you can mock them, inject them. You use them to indicate roles and interactions between objects. That’s all smashing and super.

The problem comes when you don’t really have a role that you are describing; you have an implementation. A good example is a Persister type of class that saves data to a store. In production you want to save to a database, while during test you save to an in-memory store.

So you have a Persister interface with the method store, and you implement a DatabaseStorePersister class and an InMemoryPersister class, both implementing Persister. You inject the appropriate Persister instance and you’re rolling.
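
In code, the arrangement being described looks roughly like this sketch (the class and method names follow the wording above; the rest is assumed detail):

```java
import java.util.HashMap;
import java.util.Map;

// One interface, one production implementation, and one implementation that
// exists only so the tests have somewhere to put data.
interface Persister {
    void store(String key, String value);
}

class DatabaseStorePersister implements Persister {
    @Override
    public void store(String key, String value) {
        // write to the database; details omitted in this sketch
    }
}

class InMemoryPersister implements Persister {
    private final Map<String, String> data = new HashMap<>();

    @Override
    public void store(String key, String value) {
        data.put(key, value);
    }
}
```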

Or are you? Because to my mind there’s an interface too many in this implementation. In the production code there is only one implementation of the interface: the data only ever gets stored to a DatabaseStorePersister. The InMemory version only appears in the test code and has no purpose other than to test the interaction between the rest of the code base and the Persister.

It would probably be more honest to create a single DatabaseStorePersister and then sub-class it to create the InMemory version by overriding store.
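
That alternative would look something like this sketch (same invented names as above):

```java
import java.util.HashMap;
import java.util.Map;

// No interface: the in-memory version is a test-only subclass that overrides store.
class DatabaseStorePersister {
    public void store(String key, String value) {
        // write to the database; details omitted in this sketch
    }
}

// Lives in the test sources only.
class InMemoryPersister extends DatabaseStorePersister {
    final Map<String, String> data = new HashMap<>();

    @Override
    public void store(String key, String value) {
        data.put(key, value);
    }
}
```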

On the other hand, if your data can be stored in both a graph database and a relational database then you can legitimately say there are two implementations that share the same role. At this point it is fair to create an interface. I would therefore prefer to refactor to interfaces rather than program to them. Once the interaction has been revealed, create the interface to capture the discovery.

A typical expression of this anti-pattern in Java is a package full of interfaces that carry the logical Domain Language name for the object, each with a single accompanying implementation for which there is no valid name and which instead shares the name of the interface with “Impl” added on. So, for example, Pointless and PointlessImpl.

If something cannot be given a meaningful name then it is a sure sign that it isn’t carrying its weight in an application. Its purpose and meaning are unclear, as are its concerns.

Creating interfaces purely because it is convenient to work with them (perhaps because your mock or injection framework can only work with interfaces) is a weak reason for indulging this anti-pattern. Generally, if you reunite an interface with its single implementation you can see how to work with it. Often, if there is only a single implementation of something, there is no need to inject it; you can define the instance directly in the class that makes use of it. In terms of mocking, there are mock tools that can mock concrete classes, and there is an argument that mocking is not appropriate here anyway: the concrete result of the call should be asserted and tested instead.
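
For the single-implementation case, defining the instance directly looks like this sketch (OrderProcessor is an invented example; DatabaseStorePersister is the concrete class from the sketches above):

```java
// Sketch: with only one real implementation there is nothing to inject;
// the collaborator can simply be created where it is used.
class OrderProcessor {
    private final DatabaseStorePersister persister = new DatabaseStorePersister();

    void process(String orderId, String payload) {
        // ...domain work elided...
        persister.store(orderId, payload);
    }
}
```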

Do the right thing; kill the Impl.

Standard
Web Applications

Rounded Corners: Die! Die! Die!

You may have noticed that we don’t seem to have a lot of websites designed in screaming primary colours any more. We also don’t have a lot of pastel coloured websites anymore (I liked them, though). That is because every three years the design schools kick out a new wave of designers who spend five years regurgitating the received wisdom they have just learned.

What is the most insidious recent web trend? Rounded corners. What started as a kind of joke and an impressive hack to differentiate websites is now the stifling, boring norm.

Rounded corners are boring. If you add them to your website they make it look just like all the other websites in the world. If you create an image hack to implement rounded corners so they work on IE then you are actually mad: you are forcing users to download at least two images for the privilege of looking just the same as everyone else.

Currently the most exciting web design is Twistori, which is an exercise in retro colours, but take a close look at it.

Where are the underlined words to show me I can click on things? How can it create an entirely different way to read text using font size alone? What is it doing?! I think, I think… it might be the future.

I also quite like the look of GitHub: look at those panels, look at that negative space. Even the tab bar is square.

Tab bars are at the end of their lives too; there are too many and they all look too alike. What you want to do is take that tab bar and turn it into massive words that are clickable. That would be awesome.

At least for a few years.

Standard
Computer Games

Final Fantasy: War of the Lions, War Against the Interface

This game is one of the highest rated PSP games on Metacritic, but I have just found it unbelievably frustrating. The amount of key pressing is unbelievable. If I press attack and there is only one target I can attack, then you know what, computer machine? Just select it for me. In fact, wherever I have only one option, why don’t you automatically set it up so I can just confirm it? If I am going to be selecting Wait from a menu loads of times, why not bind the common options to the symbol buttons so I can press a button series rather than using the arrow keys and X all the time?

There is also a lot of Game Over in War of the Lions. Fail a battle: Game Over. Your character dies in a battle: Game Over (is there a way to save a dying character? If not, then why make it a game ender?). And what happens when you get Game Over? The game reboots… from the start sequence. Sorry, but when did I switch on Iron Man mode?

I know this is a port, but that’s no excuse for making no effort to fit the user interface to the device. Even D&D Tactics is better than this. The excellent cut scenes really don’t compensate for the cruddy design decisions.

Standard