Java

How do you change or remove a Heroku buildpack?

A weird edge case came up today around a buildpack that turned out to be unneeded for my application. How do you change the buildpack once you’ve associated it with your application? It turns out that a buildpack is essentially just a piece of config, so you can see it and change it using the heroku config commands and the config variable BUILDPACK_URL.

In my case I just needed to remove the configured buildpack with a simple heroku config:unset command:

heroku config:unset BUILDPACK_URL
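
If you want to see the current value or point the application at a different buildpack rather than removing it, something along these lines should work (the buildpack URL here is just an example):

heroku config:get BUILDPACK_URL
heroku config:set BUILDPACK_URL=https://github.com/heroku/heroku-buildpack-java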
Software, Work

Up-front quality

There has been a great exchange on the London Clojurians mailing list recently about the impact of a good REPL on development cycles. The conversation kicks into high gear with this post from Malcolm Sparks, although it is worth reading it from the start (membership might be required, I can’t remember). In his post Malcolm talks about the cost of up-front quality. This, broadly speaking, is the cost of the testing required to put a feature live; essentially it is a way of looking at the cost that automated testing adds to the development process. As Malcolm says later: “I’m a strong proponent of testing, but only when testing has the effect of driving down the cost of change.”

Once upon a time we had to fight to introduce unit-testing and automated integration builds and tests. Now it is more or less a given that these are good things and, rather like a pendulum, the issue is swinging too far in the opposite direction. If you’ve ever had to scrap more than one feature because it failed to perform then the up-front quality cost is something you should consider as closely as the cost of up-front design and production failure.

Now the London Clojurians list is at that perfect time in its lifespan where it is full of engaged and knowledgeable technologists, so Steve Freeman drops into the thread and sensibly points out that Malcolm is also guilty of excess by valuing feature mutability to the point of wanting to be able to change a feature in-flight in production, something that is cool but probably in excess of any actual requirements. Steve adds that there are other benefits to automated testing, particularly unit testing, beyond guaranteeing quality.

However Steve mentions the Forward approach, which I also subscribe to, of creating very small codebases. So then Paul Ingles gets involved and posts the best description I’ve read of how you can use solution structure, monitoring and restrained codebases to avoid dealing with a lot of the issues of software complexity. It’s hard to boil the argument down because the post deserves reading in full, but I would try to summarise it as: the external contact points of a service are what matter, and if you fulfil the contract of the service you can write a replacement in any technology or stack and put the replacement alongside the original service.

One of the powerful aspects of this approach is that it generalises the “throw one away” rule and allows you to say that the current codebase can be discarded whenever your knowledge of the domain or your available tools change sufficiently to make it possible to write an improved version of the service.

Steve then points out some of the other rules that make this work, such as being able to track and ideally change consumers as well. It’s an argument for always using keys on API services, even internal ones, to help see what is calling your service, something that is moving towards being a standard at the Guardian.

So to summarise, a little thread of pure gold and the kind of thing that can only happen when the right people have the time to talk and share experiences. And when it comes to testing, ask whether your tests are making it cheaper to change the software when the real functionality is discovered in production.

Gadgets

Wifi extension using Solwise power adaptors

I have a problem in my building with many competing wifi networks. Having tried upgrading my router and scanning the networks to try and find the least used channels, I decided that the only way to improve things was to try a wifi extender. My initial plan was to try a range extender but, figuring that my problem was congested airspace, I thought I might give powerline transmission a go instead since at least my electrical cabling ought to be mine alone.

I’ve been kind of aware of powerline transmission from former colleagues and the tech publications but I have to say that I don’t know how it works and it feels kind of magical right now. After looking at some recommendations on Amazon and BE forums I selected a package by Solwise and bought it direct from them but via Amazon Marketplace for convenience.

Setup wasn’t complicated but wasn’t exactly plug and play either. Contrary to what I had read it was perfectly possible to use the adaptors in gang plugs, which was helpful in getting the pairing and initial connectivity to work. There is a helpful leaflet in the package that shows you how to pair the adaptors but I struggled to get the individual adaptors into pairing mode rather than resetting them. In the end I think I put the extender into pairing mode first and then the adaptor that was going to connect to the router.

Once they were paired, getting access going was as simple as connecting an ethernet cable to the router. LEDs indicate whether the connection is being shared or not but I would recommend connecting to the default unprotected wifi network first to make sure everything is working.

Setting up the SSID and password was again a little more involved than I expected. There is another leaflet that explains the whole process, and the easiest way to do the setup is by connecting a laptop directly via an Ethernet cable. However at one stage I was modifying my laptop’s IP address to get it to connect to the in-built webserver, which is fine for me but I wouldn’t want to be explaining that process over the phone while doing parent support.

With the setup done the whole thing works really well: the adaptors give a great signal and fast connections, transforming my previously flaky connections for the PS3 and AppleTV. The extending adaptor runs too hot for my liking but you can just choose to unplug it. A pretty cheap solution to my problem.

Work

Agile software development defers business issues

My colleague Michael Brunton-Spall makes an interesting mistake in his latest blog post:

much of our time as developers is being completely wasted writing software that someone has told us is important.  Agile Development is supposed to help with this, ensuring that we are more connected with the business owners and therefore only writing software that is important.

Most Agile methodologies actually don’t do what Michael says here. Every one I’ve encountered in the wild treats it as almost axiomatic that there exists someone who knows what the correct business decision is. That person is then given a title, “product owner” for example, and is usually assigned responsibility for three things: deciding what order work is to be done in, judging whether the work has been done correctly and clarifying requirements until they can be reduced to a programming exercise.

That’s why it was liberating to come across Systems Thinking, which does try to take a holistic approach and says that any organisation is only really as good as its worst performing element. Taking that view does not eliminate the process improvements in development that Agile can provide, but it does illustrate that a great development team doing the wrong thing is a worse outcome than a poor development team doing the right thing.

The invention of the always-correct product owner was a neat simplification of a complex problem that I think was probably designed to avoid having multiple people telling a development team different requirements. Essentially, by assigning the right to direct the work of the development team to one person, the problem of detail- and analysis-orientated developers getting blown off-course by differing opinions was replaced by squabbling outside the team to try and persuade the decision maker. Instead of developer versus business the problem was now business versus business.

Such a gross simplification has grave consequences as the “product owner” is now a massive point of failure and few software delivery teams can effectively isolate themselves from the effects of such a failure. I have heard the excuse “we’re working on the prioritised backlog” several times but I’ve never seen it protect a team from a collectivised failure to deliver what was really needed.

Most Agile methodologies essentially just punt and pray over the issue of business requirements and priorities, deferring the realities of the environment in the hope of tackling an engineering issue. Success however means doing what Michael suggests: trying to deal with the messy reality of a situation and providing an engineering solution that can cope with it.

Work

Breaking the two-week release cycle

I gave a lightning talk about some of the work I did last year at the Guardian to help break the website out of the two-week release cycle and make it possible to switch to a feature-release based process. It’s the first time I’ve given a public talk about it although I have discussed it with friends and obviously within the Guardian as well, where we are still talking about how best to adopt this.

I definitely think that feature-releasing is the only viable basis for effective software delivery, whether you are doing continuous delivery or not.

In a short talk there’s a lot you have to leave out but the questions in the pub afterwards were actually relatively straight-forward. The only thing I felt I didn’t necessarily get across (despite saying it explicitly) was that this work was done on the big Enterprise Java monolith at the Guardian. We aren’t talking about microapps or our new mobile platform (although they too are released on a feature basis rather than on a cycle), we are talking about the application that is sometimes referred to as the “Monolith”. It was really about changing the world to make it better rather than avoiding difficulty and accepting the status quo.

Feature-releasing has real benefits for supporting and maintaining software. On top of this, if you want to achieve collective team effort then focussing on a feature is going to work better than doing a swathe of work in a mini-waterfall “sprint”. The team stands a better chance of building up a release momentum and cadence, and from that building up stakeholder confidence and a reputation for responsive delivery.

Web Applications

Give Draft a go

Draft is a terrific new service that I’ve been using for a while. Imagine Dillinger but with documents stored in the cloud, the clutter-free aesthetic influence of Svbtle and a lot of additional helpful utilities such as a dynamic word count. It is a really simple idea that in some ways has you kicking yourself for not having thought of it yourself.

I’m using it for a mix of purposes: partly replacing Google Docs where what I ultimately want is to generate clean HTML, partly to provide a drafting facility for products that don’t include one (Posthaven and Google Sites for example). It is also handy simply as a document drafter rather than having to install an app like Markdown Editor or UberWriter on various machines.

The service also offers the ability to collaborate with others on draft documents, which is something I’d like to give a go as having to discuss other people’s writing by passing emails of drafts back and forth is painful. So I’m encouraging people to jump on the service and give it a go.

Clojure

Getting started with Riemann stream processing

Riemann is a great application for dealing with event processing but it doesn’t have a lot of documentation or newbie-friendly tutorials. There are some cool pictures that explain the principles of the app but nothing beyond that. At some point I want to try and contribute some better documentation to the official project but in the meantime here are a few points that I think are useful for getting started.

I’m assuming that you’ve followed these instructions to get a working Riemann installation and you’ve followed the instructions on how to submit events to Riemann via the Ruby Riemann client interface.

At this point you want to start making your own processing rules and it is not clear how to start.

Well, the starting point is the idea of streams: when an event arrives in Riemann it is passed to each stream that has been registered with what is called the core. Essentially a stream is a function that takes an event and some child streams, and these functions are stored in a list in the core atom under the key :streams.

Okay let’s look at an example. The first obvious thing you want to do is print out the events that you are sending to Riemann. If you’ve got the standard download, open the etc/riemann.config file and set your editor’s syntax for the file to Clojure, as it is read into the Clojure environment in the riemann.config namespace and you can use full Clojure syntax in it. Add the following at the end of the config file, then either start the server or, if it is already running, reload the config with kill -HUP <Riemann PID>.

(streams prn)

prn is the standard Clojure print function, which works perfectly well as a stream: it simply prints out any event it is given.

In irb let’s issue an event:

r << {host: "rrees.me", service: "posts", metric: 5}

You should see some output in the Riemann log along the following lines.


#riemann.codec.Event{:host "rrees.me", :service "posts", :state nil, :description nil, :metric 5, :tags nil, :time 1366450306, :ttl nil}

I’m going to assume this has worked for you. So now let’s see how events get passed on further down the processing chain. Let’s change our streams function to the following and reload it.

(streams prn prn)

Now when we send the event it should get printed twice! Simples!

Okay now let’s look at how you can have multiple processing streams working off the same event. If we add a second print stream we should get three prints of the event.

(streams prn prn)

(streams prn)

Each stream that is registered can effectively process the event in parallel so some streams can process an event and send it to another system while another can write it to the index.
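
As a rough sketch of that idea, assuming the (index) helper that appears in the stock riemann.config, one registered stream could write events into the index while another simply prints them:

(let [index (index)]
  (streams index   ; one stream writes each event into the index
           prn))   ; a second stream prints it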

Let’s change one of our prints slightly so we can see this happen.

(streams (with :state "normal" prn) prn)

(streams prn)

We should now get three prints of the event and in one we should see that the event has the state of “normal”. Okay great! Let’s break this down a bit.

Every parameter of streams is a stream, and a stream can take an event and child streams. So when an event occurs it is passed to each stream, and each stream might specify more streams that the transformed event should be passed to. That’s why we pass prn as the final parameter of the with statement: we’re saying add the key-value pair to the event and pass the modified event on to the prn stream.

Let’s try implementing this ourselves. There is a bit of magic left here: call-rescue is an in-built function that will send our event on to other streams; you can think of it as a variant of map:

(defn change-event [& children]
  ;; a stream: takes child streams and returns a function that handles an event
  (fn [event]
    ;; add a key-value pair to the event and pass the result on to the children
    (let [transformed-event (assoc event :hello :world)]
      (call-rescue transformed-event children))))

(streams (change-event prn))

If this works then we should see an event printed out that has the “hello world” key-value pair in it. change-event is a stream handler that takes a list of “children” streams and returns a function that handles an event. If the function does not pass the event on to the children streams then processing of that event stops, which is a bit like a filter. The event is really just a map of data, like all good Clojure.
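
To make the filtering point concrete, here is a sketch along the same lines (only-high is a name I have made up for illustration) that only passes events whose metric is above a threshold on to its children:

(defn only-high [threshold & children]
  (fn [event]
    ;; only pass the event on when its metric exceeds the threshold,
    ;; otherwise processing of this event simply stops here
    (when (> (:metric event 0) threshold)
      (call-rescue event children))))

(streams (only-high 3 prn))

Sending the event from irb again should see it printed by this stream, since a metric of 5 is above the threshold of 3.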

At this point you actually have a good handle on how to construct your own streams. Everything else is going to be a variation on this pattern of creating a stream function that returns an event handler. The next thing to do now is go and have a look at the source code for things like with, prn and call-rescue. Peeking behind the curtain will take a certain amount of Clojure experience but it really won’t be too painful, I promise; the code is reasonable and the magic is minimal. Most of the functions are entirely self-contained so everything you need to know is in the function code itself.

Clojure, Programming

Leiningen doesn’t compile Protocols and Records

I don’t generally use records or protocols in my Clojure code, so the fact that the Clojure compiler doesn’t seem to detect changes in the function bodies of either took me by surprise recently. Googling turned up this issue for Leiningen. Reading through the issue I ended up specifying all the namespaces containing these structures in the :aot definition in the lein project.clj. This meant that those namespaces were re-compiled every time but that seemed the lesser of two evils compared to the clean-and-build approach.
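
For reference, this is roughly what I mean; the project name and namespaces below are just placeholders for whichever of your namespaces define records or protocols:

(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.5.1"]]
  ;; AOT-compile the namespaces that define records/protocols on every build
  ;; so that changes to their function bodies are picked up
  :aot [my-app.records my-app.protocols])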

Where this issue really stung was in the method-like function specifications in the records, and as usual it felt that structure and behaviour were getting muddled up again when ideally you want to keep them separate.

Programming

Keeping NPM local

I’m pretty sure that at one point NPM might not have required sudo privileges to do things but by default it now seems that people are sudo installing all over the place. It’s the equivalent of just clicking “yes” on every Windows permission dialog.

There is no particularly good reason to use sudo with a package manager. Virtualenv is tres local by default and even gem (via rbenv and rvm) can now run happily in the user space. Even if you want to share package downloads to minimise network calls then you still don’t need to install everything globally as root.

NPM is special in almost every sense of the word and its author recommends that you switch permissions on /usr/local so that it is owned by your login user. That seems kind of crazy and the kind of thing that would probably only work out for OSX users.

In fact if you are willing to build your own NodeJS it really isn’t hard to get Node and NPM working locally while still retaining all the advantages of the “global” NPM install. Just use the standard build option --prefix to set Node to use your home directory and add $HOME/bin to your PATH in .bashrc or the equivalent.
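
As a rough sketch of the sort of build I mean, from a Node source checkout (the paths are just an example):

./configure --prefix=$HOME
make
make install
# then, in .bashrc or the equivalent
export PATH="$HOME/bin:$PATH"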

Then you should be able to use npm without ever having to sudo.

My view though is that you shouldn’t have to force everything into the user space just to make sure a package manager doesn’t need sudo privileges. It would be better for everyone if NPM used a directory in home for each user; since it seems to be aimed at single-user installs anyway, there is not a massive saving in having packages installed in /usr/local. For the few use-cases where that would be useful it should be an override option.
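
If you would rather not rebuild Node at all, I believe you can get a similar effect by pointing npm’s own prefix at a directory in your home (the directory name here is just an example):

npm config set prefix "$HOME/.npm-packages"
# then add its bin directory to your PATH in .bashrc or the equivalent
export PATH="$HOME/.npm-packages/bin:$PATH"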

Programming

Cognitive bias and the difficulty of evolving strongly typed solutions

Functional Exchange 2013 featured an interesting talk by Paul Dale that had been mostly gutted but had a helpful introduction to cognitive bias. That reminded me of something I was trying to articulate in my own talk when I was discussing the difficulty of evolving typed solutions. I used the analogy of the big ball of mud, where small incremental changes to the model of the system result in an increasingly warped codebase. Ultimately you need someone to come along and rationalise the changes or you end up with something that is too difficult to work with and gets re-built.

This of course is an example of anchoring where the initial type design for the system tends to remain no matter how far the domain and problem have moved from the initial circumstances of their creation.

Redesigning the type definition of a program is also expensive in terms of having to create a new mental model of what is happening rather than just adapting the existing one. Generally it is easier to adapt and incorporate an exception or set of exceptions to the existing model rather than recreating an entire system.

Dynamic or data-driven systems are more sympathetic to the adaptive approach. However they too have their limits, special cases need to be abstracted periodically and holistic data objects can bloat out of control into documents that are dragging the whole world along with them.

Type-based solutions on the other hand need to be periodically re-engineered and the difficulty is that the whole set of type definitions need to be worked on at the same time. Refactoring patterns of object-orientated code often focus on reorganisation that is easy, such as pulling out traits or extracting new structures. This is still anchoring on the original solution though.

If a type system is to be a help rather than a hindrance you need to be able to rework the overall structure. I think with most type-systems this is actually impossible, hence the pattern of recreating the application if you want to take a different approach.

The best typed solution currently seems to be type unions, where you can make substantial changes and abstractions but do not have to worry about things like type hierarchies or accidentally polluting the edge cases of the system.

Where these aren’t available, good strongly-typed solutions actually rely heavily on proactive technical leaders to regularly drive good change through the system and manage the consequences.
