Our tools are doing us a disservice

Do you like using IntelliJ or a similar IDE that allows you to navigate your codebase easily and restructure freely? Do you like the fact that your code has a huge test suite that allows you to make changes with confidence?

These things seem like good things. Why would anyone have a problem with them?

Recently though, at conferences and in discussions at work, it has started to seem to me and others that powerful tools have a dark and dangerous side to them. The more powerful the tools at your disposal, the longer you can work on a codebase without facing up to the issues that you have.

A powerful IDE allows you to have insanely complex projects with hundreds, possibly thousands, of files in them. I’m not sure that the Java love of abstraction across multiple classes would have happened if you had had to navigate the resulting package structure with Vi.

Rather than working to simplify your codebase, you can continue to add in each special case and niche requirement; everything can find a home with a Strategy pattern here and a class hierarchy there. Our test suite grows and grows to make sure that each overlapping requirement can be added safely and without consideration of its worth. We are perhaps proud that 60 to 80% of our codebase is test code that, in itself, adds no value to our business.

Our rich dependency managers encourage us to add in libraries or, even worse, extract and share code across multiple projects. Until, of course, we start to burn in a transitive dependency hell of our own making.

We all love powerful tools, and we all love powerful, feature-rich languages. But the more powerful our tools are, the more they should help us find the simplicity in what we do and ensure that we deliver measurable value more quickly, rather than just providing a longer noose.

Software, Work

Don’t hate the RDBMS; hate the implementation

I read through this post with the usual sinking feeling of despair. How can people get so muddled in their thinking? I am not sure I can even bear to go through the arguments again.

More programmers treat the database as a dumb store than there are situations where such treatment is appropriate. No-one is saying that Twitter data is deep, relational and worthy of data mining. However, not all data is like Twitter microblog posts.

The comments on the post were for the most part very good and say a lot of what I would have said. However, looking at the CouchDB documentation, I noticed that the authors make far less dramatic claims for their product than the blog post does. A buggy alpha release of a hashtable datastore is not going to bring the enterprise RDBMS to its knees.

I actually set up and ran CouchDB, but I will save my thoughts for another day; it’s an interesting application. What I actually want to talk about is how we can get more sophisticated with our datastores. It is becoming apparent to me that ORM technologies are making a dreadful hash of data. The object representation gets shafted because inheritance is never properly translated into a relational schema. The relational data gets screwed because the rules for object attributes are at odds with the normal forms.
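To make that mismatch concrete, here is a minimal sketch (all table, column and class names are hypothetical, not from any particular ORM) of the common "single table inheritance" mapping, where a class hierarchy is flattened into one table and every subclass-specific column becomes nullable:

```python
import sqlite3

# Hypothetical hierarchy: Payment -> CardPayment, BankTransfer.
# A typical ORM single-table strategy flattens both subclasses into one
# table with a discriminator column. The normal forms lose (columns that
# depend on 'kind', rows full of NULLs) and the object model is only
# half-represented in the schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payment (
        id INTEGER PRIMARY KEY,
        kind TEXT NOT NULL,       -- discriminator: 'card' or 'transfer'
        amount REAL NOT NULL,
        card_number TEXT,         -- only meaningful when kind = 'card'
        bank_sort_code TEXT       -- only meaningful when kind = 'transfer'
    )
""")
conn.execute("INSERT INTO payment VALUES (1, 'card', 9.99, '4111-xxxx', NULL)")
conn.execute("INSERT INTO payment VALUES (2, 'transfer', 50.0, NULL, '20-00-00')")

# Half the columns are NULL in any given row.
rows = conn.execute(
    "SELECT kind, card_number, bank_sort_code FROM payment ORDER BY id"
).fetchall()
print(rows)
```

The alternative "joined table" strategy fixes the NULLs but pays for it with a join per level of the hierarchy, which is the other half of the dreadful hash.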

The outcome is that you end up with a bastard hybrid, worst-of-all-worlds solution. What’s the answer?

Well, the first thing is to admit that application programmers think the database is a big dumb datastore and will never stop thinking that. The second is that relational data is not the one true way to represent all data. Relational databases are the best tool we have at the moment for representing rich data sets that contain a lot of relational aspects. Customer orders in a supply system are the classic example. From a data mining point of view you are going to be dead on your feet if you do not have customers and their orders in a relational datastore. You cannot operate if you cannot say who is buying how much of what.
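With the orders held relationally, "who is buying how much of what" is a single query. A minimal sketch (table and column names are mine, purely illustrative):

```python
import sqlite3

# Classic customers/orders schema: once the data is relational,
# the basic data-mining question is one GROUP BY away.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        product TEXT NOT NULL,
        quantity INTEGER NOT NULL
    );
    INSERT INTO customer VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES
        (1, 1, 'widget', 10),
        (2, 1, 'widget', 5),
        (3, 2, 'sprocket', 3);
""")

# Who is buying how much of what?
totals = conn.execute("""
    SELECT c.name, o.product, SUM(o.quantity)
    FROM orders o JOIN customer c ON c.id = o.customer_id
    GROUP BY c.name, o.product
    ORDER BY c.name
""").fetchall()
print(totals)  # [('Acme', 'widget', 15), ('Globex', 'sprocket', 3)]
```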

If you let developers reimplement a data mining solution in their application for anything other than your very edge and niche interests, then you are going to be wasting a lot of time and money for no good reason. You simply want a relational datastore, a metadata overlay to reinterpret the normalised data in terms of domain models, and a standard piece of charting and reporting software.

However the application programmers have a point. The system that takes the order should not really have to decompose an order taken at a store level into its component parts. What the front end needs to do is take and confirm the order as quickly as possible. From this point of view the database is just a dumb datastore. Or rather what we need is a simple datastore that can do what is needed now and defer and delegate the processing into a richer data set in the future. From this point of view the application may store the data in something as transient as a message queue (although realistically we are talking about something like an object cache so the customer can view and adjust their order).
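That split can be sketched in a few lines (everything here is hypothetical: the queue stands in for a message queue or object cache, the row list for the eventual relational store). The front end confirms fast; a deferred step does the decomposition:

```python
from collections import deque

# Simple transient store for the front end: fast, dumb, no decomposition.
order_queue = deque()

def take_order(customer, lines):
    """Front end: accept and confirm the order as quickly as possible."""
    order_queue.append({"customer": customer, "lines": lines})
    return "confirmed"

def drain_to_warehouse(db_rows):
    """Deferred step: decompose queued orders into relational rows
    (customer, product, quantity) for the rich datastore."""
    while order_queue:
        order = order_queue.popleft()
        for product, qty in order["lines"]:
            db_rows.append((order["customer"], product, qty))

rows = []
take_order("Acme", [("widget", 10), ("sprocket", 2)])
drain_to_warehouse(rows)
print(rows)  # [('Acme', 'widget', 10), ('Acme', 'sprocket', 2)]
```

The point of the sketch is the separation of responsibilities, not the data structures: the order-taking path never touches the relational schema.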

Having data distributed in different forms across different systems creates something of a headache, as it is hard to get an overall picture of what is happening on the system at any given moment. However, creating a single datastore (implemented by an enterprise RDBMS) as a single point of reference is something of an anti-pattern. It makes one thing easier: the big picture. But to provide it, data is being bashed by layering technologies into all kinds of inappropriate shapes, and the various groups within the IT department are frequently in bitter conflict.

There needs to be a step back. IT people need to accept the complexity and start talking about the whole system as comprising many components, all of which need to be synced and queried if you want the total information picture. Instead of wasting effort fitting however many square pegs into round holes, we need to be thinking about how to use the best persistence solution for a given problem, and how to report on and coordinate these many systems.

It is the only way we can move forward.