Web Applications

The changing landscape of UK Energy

In the last year I’ve been building up a list of websites that help me understand how electrical energy is produced in the UK and how it feeds into the grid. Building this understanding seems vital for grasping both the nature of the investment we need to make in the UK’s energy infrastructure and the massive potential we are still failing to tap.

But the other thing I’ve learned is that a lot of the ideas I grew up with around energy are probably no longer true. In particular the nature of solar energy, which, while quiet and passive, is steadily becoming a key part of the country’s energy infrastructure. This means there is often more cheap renewable electricity in the middle of the day, so it makes sense to run things like washing machines in the afternoon. This is a totally different paradigm from the one I grew up with, where the cheapest prices were always at night when demand was lowest.

The demand curve still holds, but I think it now illustrates the problem of storage and release. If wind energy is available all through the night when demand is low, we need to be able to store it more effectively than we do now (if we store it at all, which is something I’m still trying to understand).

I’m really grateful to the creators of the following tools for such helpful visualisations and utilities, and to those who created the underlying APIs that allow such projects to exist.

Python

London Django Meetup April 2023

I’m not sure whether I’ve ever been to this Meetup before, but it is definitely my first since 2020. It was hosted by Kraken Energy in their offices, which have a plywood-style auditorium with a nice AV setup for presentations, and pizza and drinks (soft and hard) for attendees.

There were two talks: one on carbon estimates for websites built using Django and Wagtail; the other on import times when loading a Django app into a shell (or, more generally, expensive behaviour in Python module imports).

Sustainable or low-impact computing is a topic that is slowly gaining traction in the wider development community, and in the case of the web there are some immediate quick wins to be had in the form of content negotiation on image formats, lazy loading, and caching.
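To make the first of those concrete, here is a minimal sketch of content negotiation on image formats (my own illustration, not from the talk): inspect the client’s `Accept` header and serve the lightest format it advertises. Real servers usually delegate this to a CDN or middleware and parse `q=` quality values properly.

```python
def negotiate_image_format(accept_header: str) -> str:
    """Pick the lightest image format the client says it supports.

    Illustrative sketch only: a naive substring check, no q-value parsing.
    """
    preferred = ["image/avif", "image/webp"]  # smaller formats first
    for fmt in preferred:
        if fmt in accept_header:
            return fmt
    return "image/jpeg"  # safe fallback every browser can decode


print(negotiate_image_format("image/avif,image/webp,image/*"))  # image/avif
print(negotiate_image_format("image/*"))                        # image/jpeg
```

Serving AVIF or WebP instead of JPEG to browsers that support them cuts bytes transferred, which is exactly the kind of saving that gets multiplied across an audience.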

One key takeaway from the talk is that the end-user space is where most savings are possible. Using large-scale cloud hosting means you already benefit from energy efficiencies, so things like the power required for a mobile phone screen matter: the impact of inefficient choices in content delivery is multiplied by the size of your audience.

There was a mention in passing that if a web application can be split into a Functions as a Service (FaaS) deployable then, for frameworks like Django that have admin paths and end-user paths, you can scale routes independently and save on overprovisioning. If this could be done automatically in the deployment build it would be seamless from the developer’s point of view. I think you can do this via configuration in the Serverless framework. It seems an interesting avenue for more efficient deployments, but at a cost in complexity for the framework builders.
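As a hypothetical sketch of the idea (the names and structure here are my own, not from the talk), the split amounts to routing path prefixes to handlers that a FaaS platform could deploy and scale as separate functions:

```python
# Hypothetical sketch: route admin and end-user paths to separate
# handlers that a FaaS platform could scale independently.

def admin_handler(path: str) -> str:
    # Low-traffic, rarely warm: an occasional cold start is acceptable.
    return f"admin served {path}"


def user_handler(path: str) -> str:
    # High-traffic: concentrate provisioned capacity here.
    return f"user served {path}"


# First matching prefix wins, so list the most specific prefix first.
ROUTES = [
    ("/admin", admin_handler),  # could be deployed as its own function
    ("/", user_handler),        # everything else
]


def dispatch(path: str) -> str:
    """Send a request path to the handler for its route group."""
    for prefix, handler in ROUTES:
        if path.startswith(prefix):
            return handler(path)
    raise ValueError(f"no route for {path}")
```

In a real Serverless framework deployment this prefix table would live in configuration (each prefix mapped to its own function and HTTP event) rather than in application code, which is what would make the split invisible to the developer.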

There was quite an interesting research opportunity mentioned in the talk around serverless-style databases. For sites with intermittent or cyclical usage, moving away from an “always on” database represents a potentially big saving in cost and carbon. There was mention of the service neon.tech, which seems to have a free personal tier that might be perfect for hobby sites where usage is very infrequent and a spin-up delay would be acceptable.

The import time talk was interesting; it focused on the developer experience of the Django shell boot time (although, to be honest, the Python shell for any major framework has the same issue). There were some practical tips on avoiding libraries that do too much during the import phase, but the issue of Python code doing expensive eager work at import time has been a live one for a long time.

I attended a talk about cold starts with Python AWS Lambdas in 2019 that boiled down to many of the same issues (something addressed, though not very well, in the AWS documentation on imports). Little seems to have improved since. Assumptions about whether a process is going to be short- or long-lived ultimately come down to the library implementer, and the web/data-science split in Python means code runs in very different contexts, making it hard to share libraries across the two use cases.

The core language implementation is getting faster, but a conversation about good practice in import-time behaviour does not seem to be happening between the major library maintainers.

The performance enhancements to core Python actually linked the two talks, because getting existing code onto more efficient runtimes helps reduce compute demands across all usage.
