Internet

The inevitability of ad-blocking

As I work in the content industry I’ve always felt bad about installing ad-blocking software; adverts seemed to me part of the deal of having free content.

Recently I have started using ad-blockers in some of my browser sessions, and the reason is almost purely technical: adverts were wrecking my power consumption and hogging my CPU.

The issue is naturally acute on smartphones, which is why Apple is starting to allow ad-blocking in iOS Safari, but my recent problems have actually been on laptops. I have an aging Chromebook, which you might expect to struggle, but in the last six months my pretty powerful dev laptop has also been going into full-fan power-drain mode, often leaving less than two hours of battery life.

At first I thought the issue was simply that I am a total tab monster, keeping open loads of pages and referring to them while coding or researching things.

However, digging into the developer tools and the OS monitors made it apparent that just a few of my tabs were causing all these problems (swap-file paging I still have to put my hands up to), and all of them were running visually innocuous ads that were consuming vast quantities of CPU and memory.

With no way of telling whether any given webpage is going to kill my computer or not, the only sane response is to not take the risk and install an ad-blocker.

Since installing one (I’ve been using uBlock) I have indeed seen longer battery life and fewer memory crashes on my Chromebook.

While I am still worried about how we can pay for high-quality open web content in a world without ads, there is no tenable future for an open web that clients cannot viably run.

In my personal web usage I prefer to pay for the services I use and rely on. For those that I’m uncertain of I’m happy to trial and therefore to be the product rather than the customer.

In these situations though I am really dealing with the web as an app delivery platform. For content production there needs to be something better than the annual fundraising drive.

Frustratingly, there is also a place for ads. Without advertising, everything becomes (online) word of mouth. There’s a positive case to be made for awareness-based advertising; I want to do it myself around recruitment as part of my work.

These adverts though are really nothing more than pictures and words. They shouldn’t be things that are taxing the capabilities of your hardware.

Advertisers are bringing this change on themselves. If they can’t find a way to square their needs with those of the people they are trying to reach, then there isn’t going to be an online advertising market in nine months’ time, and that might mean some big changes to the way the web works for everyone.

Programming

Trading performance for asynchronicity

An unusual conversation came up at one of the discussion groups in the day job recently. One of the interesting things the Javascript language specification provides is a very good description of asynchronous execution, which is then embodied in execution environments like NodeJS. Asynchronicity on the JVM is emulated by an event loop mechanism on top of the usual threaded execution environment. In general, if you run JVM code in a single-threaded environment bad things will happen; I would prefer to run it on at least two cores.

So I made the argument that if you want asynchronous code you would be better off executing code on NodeJS rather than emulating via something like Akka.

Some of my colleagues shot back that execution on NodeJS would be inferior, and I didn’t disagree. Just as with Erlang, sometimes you want to trade raw execution performance to get something more useful out of the execution environment.

However, people felt that you would never really want to trade performance for a pure asynchronous environment, which I found very odd. Most of the apps we write at the Guardian are not that performant because they don’t really need to be. The majority of our volume is actually handled by caching, and a lot of the internal workloads are handled by frameworks like Elasticsearch that we haven’t written.

In follow-up discussion I realised that people hadn’t understood the fundamental advantage of asynchronous execution: it is easier to reason about than concurrent code. Asynchronous execution contexts on NodeJS provide a guarantee that only one scope is executing at a time, so whenever you come to look at an individual function you know that its scope is limited entirely to the block you are looking at.
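That guarantee can be sketched in a few lines of NodeJS. The counter below is shared mutable state touched by a thousand queued callbacks, exactly the kind of thing that needs locks in a threaded environment; here the event loop runs one callback at a time, so no update is ever lost. The function names are my own invention for illustration, only setImmediate is real.

```javascript
// A minimal sketch of the single-scope guarantee on NodeJS.
// Many asynchronous callbacks mutate shared state, but the event loop
// runs only one callback at a time, so no locking is needed.
let counter = 0;

function increment(times, done) {
  for (let i = 0; i < times; i++) {
    // setImmediate queues the callback; callbacks never interleave
    // mid-execution, so counter++ is effectively atomic here.
    setImmediate(() => {
      counter++;
      if (counter === times) done(counter);
    });
  }
}

increment(1000, (finalCount) => {
  console.log(finalCount); // always 1000 — no lost updates
});
```

The equivalent threaded program would need a mutex or an atomic integer; here the correctness is a property of the execution model, not of the code.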

Not many programmers are good at parsing and understanding concurrent code. Having used things like Clojure I have come to the conclusion that I don’t want to do concurrency without excellent language support. In this context switching to asynchronous code can be massively helpful.

Another common situation is where you want to achieve data locality. With concurrent code it is really easy to end up with code that performs poorly overall due to contention on contexts. Performing a logical and cohesive unit of work is arguably a lot easier in asynchronous code blocks. It should be easier to establish a context, complete a set of operations and then throw away the whole context, knowing that you won’t need to reload that context again as the task will now be complete.
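The establish-use-discard pattern might look something like the sketch below. processOrder, fetchRecord and saveRecord are hypothetical names of my own; the point is only the shape: the whole context lives and dies inside one async function, so nothing ever has to be reloaded or shared.

```javascript
// Sketch of a cohesive unit of work inside one async block: build the
// context, complete every operation on it, then let it go out of scope.
// fetchRecord and saveRecord are hypothetical stand-ins for I/O calls,
// injected here so the sketch stays self-contained.
async function processOrder(orderId, fetchRecord, saveRecord) {
  // Establish the context for this task only.
  const order = await fetchRecord(orderId);

  // Complete the whole set of operations while the context is local.
  order.total = order.items.reduce((sum, item) => sum + item.price, 0);
  order.status = 'processed';
  await saveRecord(order);

  // The context is discarded here; no other code ever needs to reload it.
  return order.total;
}
```

Contrast this with a threaded design where the order record might be handed between workers, each paying to fetch it back into cache (or from the store) before doing its piece.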

It is hard to make definite statements about which solutions are appropriate in particular situations. I do know, though, that performance is a poor place to start in terms of solution design. Understanding the pros and cons of execution modes matters considerably more.

Programming

Concurrency means performance, yes?

One thing I heard a lot at the Mostly Functional conference last week was that concurrency is required for performance on multicore processors. Since Moore’s Law ended it is certainly true that the old trick of not writing performant code and letting hardware advances pick up the slack has been flagging (although things like SSDs have still had their impact).

However, equating concurrent code with performance is subtly wrong. If there were a direct relationship then we would have seen concurrent programming adopted swiftly by games programmers. And yet there we still see an emphasis on ordered, predictable execution, cache structure and algorithmic efficiency.

Performance is one of those vague computing terms, like scale, that has many dimensions. Concurrency has no direct relation to performance as anyone who has managed to write a concurrent program with global resource contention can attest.

There are two relevant axes to considering performance and concurrency: throughput and capacity. Concurrency, through parallelism, allows you to greatly increase your utilisation of the available resources to provide a greater capacity for work.

However that work is not inherently performed faster and may actually result in lowered throughput due to the need to read data that is not in memory and the inability to predict the order of execution.
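The capacity/throughput distinction can be put in toy numbers (the figures below are assumed purely for illustration, not measurements). Four workers quadruple the items in flight per second, but each individual item still takes at least as long as before, and any coordination cost makes it take longer.

```javascript
// Toy numbers, assumed for illustration only: capacity scales with
// workers, but per-item latency (throughput for a single unit of work)
// does not improve and can get worse.
const perItemMs = 10;               // time to process one item
const workers = 4;                  // parallel workers available
const coordinationOverheadMs = 2;   // assumed cost of distributing work

// Capacity: items completed per second across the whole system.
const serialCapacityPerSec = 1000 / perItemMs;            // 100 items/s
const parallelCapacityPerSec = workers * (1000 / perItemMs); // 400 items/s

// Throughput for one item: parallelism adds overhead, it never subtracts.
const serialItemLatencyMs = perItemMs;                            // 10ms
const parallelItemLatencyMs = perItemMs + coordinationOverheadMs; // 12ms

console.log({ serialCapacityPerSec, parallelCapacityPerSec,
              serialItemLatencyMs, parallelItemLatencyMs });
```

So "four times the performance" is true only on the capacity axis; on the latency axis the parallel system is, in this sketch, 20% worse per item.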

For things like webservices that are inherently stateless, concurrency often does massively increase performance, because the capacity to serve requests goes up and there is no need to coordinate work. If the webservice is accessing a shared store where essentially all of the key data is in memory, and what we need to do is read rather than mutate that data, then concurrency becomes even more desirable.
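A sketch of why the stateless, read-only case is the happy path: if the shared data is immutable, concurrent handlers cannot interfere with each other no matter how they are scheduled. The catalogue and handler names below are hypothetical, invented for the example.

```javascript
// Sketch: a stateless handler over shared, immutable in-memory data.
// Object.freeze makes the read-only intent explicit; because nothing is
// ever mutated, any number of concurrent calls need no coordination and
// capacity scales with however many workers you can throw at it.
const catalogue = Object.freeze({
  widget: Object.freeze({ price: 5 }),
  gadget: Object.freeze({ price: 9 }),
});

// Each call depends only on its own input and the frozen shared store,
// so its result is the same under any interleaving of requests.
function handlePriceRequest(productName) {
  const product = catalogue[productName];
  return product ? product.price : null;
}
```

The moment a handler starts writing to that shared store, the no-coordination property evaporates and you are back to the contention problems described above.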

On the other hand, if what we want to do is process work as quickly as possible, i.e. maximise throughput, then concurrency can be a very poor choice.

If we cannot predict the order in which work will need to be executed, due to things like having to distribute work across threads and retry work after temporary errors, then we may have to create the entire context for the work repeatedly and load it into local memory.

In circumstances like these concurrency hurts performance. After all, the fastest processing is probably still pointer manipulation of a memory-mapped file, if you want to go really fast.

So concurrency means performance, and beating Moore’s Law, if you can be stateless and value volume of processing over unit throughput.
