Web Applications

The changing landscape of UK Energy

In the last year I’ve been building up a list of websites that help me understand how electrical energy is produced in the UK and how it feeds into the grid. Building this understanding seems vital to grasping the nature of the investment we need to make in the UK’s energy infrastructure, and also the massive potential that we are still failing to tap into.

But the other thing I’ve learned is that a lot of the ideas I grew up with around energy are probably no longer true. In particular the nature of solar energy, which, while quiet and passive, is steadily becoming a key part of the country’s energy infrastructure. It means there is often more cheap renewable electricity in the middle of the day, so it makes sense to run things like washing machines in the afternoon. This is a totally different paradigm from the one I grew up with, where the cheapest prices were always at night when demand was lowest.

The demand curve still holds, but I think this now illustrates the problem of storage and release. If wind energy is available all through the night when demand is low, we need to be able to store it more effectively than we do now (if we store it at all, which is something I’m still trying to understand).

I’m really grateful to the creators of the following tools for their efforts in creating such helpful visualisations and utilities and for the creation of the underlying APIs that allow such projects to exist.

Standard
Web Applications

Alternative Mastodon frontends

Mastodon servers provide a CORS-based API that allows people to develop completely local alternative frontends for it that you can freely try with your existing accounts.

This means that you actually have a lot of options if you don’t like the default Mastodon web experience (and I suspect quite a few people don’t). I’ve highlighted a few that I’ve been using in this post.

With these frontends you sign in using OAuth, but the token is stored locally, so you may need to authenticate separately on each device. To stop using a frontend you can simply clear local storage; no server-side accounts should be involved.
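As a sketch of how simple this model is, here is roughly what such a frontend’s session handling might look like. The endpoint path is from the standard Mastodon REST API, but the storage wrapper, function names and storage key are my own illustration:

```javascript
// A sketch of the local-only session model, assuming browser globals
// (localStorage, fetch). The endpoint path is the standard Mastodon REST
// API; the function names and storage key are invented for illustration.

const TOKEN_KEY = "mastodon_access_token";

function saveToken(storage, token) {
  storage.setItem(TOKEN_KEY, token);
}

function clearToken(storage) {
  // No server-side account to delete: removing the local token is enough.
  storage.removeItem(TOKEN_KEY);
}

function authHeaders(token) {
  return { Authorization: `Bearer ${token}` };
}

// Fetch the signed-in user's home timeline directly from the instance,
// relying on the server's CORS headers to allow the cross-origin call.
async function fetchHomeTimeline(instanceUrl, token) {
  const res = await fetch(`${instanceUrl}/api/v1/timelines/home`, {
    headers: authHeaders(token),
  });
  if (!res.ok) throw new Error(`Mastodon API error: ${res.status}`);
  return res.json();
}
```

In a browser you would pass window.localStorage as the storage argument; taking it as a parameter just keeps the sketch self-contained.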

Pinafore

Pinafore (Github) has been one of my favourite interfaces, being very simple and clear with an uncluttered central column.

Sadly it has been discontinued as far as active development goes, but it still works pretty well in practice and I continue to prefer it for posting. It’s worth reading the article to see how stressful maintaining open-source projects can be, and also how easy it is to end up in a dead end when choosing frontend technologies.

Phanpy

Phanpy (Github) does a really good job of rendering threads and also periodically highlights posts based on Boosts in the timeline allowing you to pick up on conversations that you might have missed out on.

I’m not sure I’m getting the best out of it currently, but I have started using it more on the weekends to try and catch up on accounts I don’t check that frequently.

Phanpy seems to have a lot of positive buzz, but it hasn’t been an immediate hit for me and I can’t quite articulate why that is. It definitely makes it easier to follow conversations between people you’re following, but there is maybe something in the post layout of the alternatives that I prefer.

Elk

Elk (Github) is a kind of eternal alpha that I’ve dipped in and out of a little. From my perspective it has a clearer design than the default Mastodon experience, but it really shines with images: it does a much better job of displaying pictures in the timeline, getting heights right and highlighting multiple pictures in a post.

It’s definitely my preferred way of looking at nature and travel photography posts.

Standard
Web Applications

Searching for the perfect calendar

I have multiple calendars with different providers and of course my work calendar. I really love the schedule view in Google Calendar but I would also love not to be sending all my data to Google just to get one UI feature.

Calendar.com is US-based and therefore not much better than Google for privacy, they are also more focused on groups than individuals. Calendar.online seems to have the schedule view, is based in Germany and says it is not interested in collecting and selling customer data but sadly it doesn’t sync with Google Calendar.

Tutanota has an agenda view but again doesn’t allow you to sync calendars due to the way it secures information.

Proton Calendar can sync with other calendars, but its agenda view only applies to a single day, which isn’t great but will probably get the job done. There is a feature request for a schedule view, but nothing like it is currently in the UI. I’ve downloaded the Proton Calendar app for Android and it does seem to be a reasonable, offline-capable way of viewing multiple calendars and keeping them in sync.

I haven’t been able to find the perfect solution to my problem so far, but Proton seems to be the best option I currently have and I would love to see that feature request move forward. The calendar was good enough that I upgraded my plan to cover its functionality, so I guess it really is good enough. I’d be interested in hearing about alternatives though.

Standard
Web Applications

Migrating to Fly Apps v2

So, having been distracted by other things, I completely missed that Fly are deprecating their previous offering (now known as Fly Apps v1). An automated migration happened without me being any the wiser, and it was only when a database connection broke down that I found out what was happening. It was a bit frustrating, but one of the good things about Fly is that I’m currently paying zero dollars for my apps, just like my old Heroku setup, which makes it perfect for hobby experimentation.

The basics of migrating are not complicated: the configuration file for deployment is slightly different and you now need to associate a Fly Machine (a virtual machine) with the application. Running the migration command flyctl migrate-to-v2 did that successfully for me for all my applications.

The use of Machines is a little different from other Platform as a Service (PaaS) offerings that I’ve used before. They are lightweight virtual machines using Firecracker, the system used in AWS Lambda and, later, Fargate. You need to assign at least one Machine to your application for it to run, and Fly recommends at least two.

Since your app is already virtualised in a Docker container, you would normally leave the scheduling of the machines to the service based on the demand you have, but this setup gives you a lot more control over the resources that are available for the app to run on. The basics of the technology are already proven in Lambda.

One of the nice features of Fly Apps v1 was that apps were “always on” at no extra cost. Now you need to think a bit more about how you want to allocate Machines to the application. Fortunately for most hobby projects it is straightforward: you can set the auto-stop-start configuration, and you can probably just use one Machine as you’re never really going to need to fall back to another instance. I’ve set a few of my apps to have two Machines and kept the others at the default migrated value of one (because that was what was running before).

For the cost-conscious hobbyist, one of the nice aspects of Firecracker VMs is that they are relatively quick to start on demand. So while 99% of the time you’re not using anything, when you do want to use the app the spin-up time for the single-machine apps is about what you’d experience with something like a cold AWS Lambda. The two-machine apps seem quicker to start, but that might be a subjective coincidence.

New applications are now configured with two Machines by default, plus all the logic to leave managing machine time to the service. I haven’t had enough time with the new default to say whether it’s better or worse than the previous setup, but it does seem better than other spin-down free tiers like Render’s.

Moving to Fly had its ups and downs, but now I’ve gotten over the learning curve Fly provides everything I wanted from Heroku, and it also feels like a platform you could grow with if you wanted to do something more serious.

V2 has also brought some changes to the deployment configuration file, mostly to simplify it for the common case of HTTP-based apps, which seems a good call. You also control from the config whether the new autoscaling functionality applies to your application; this defaults to the most cost-effective options, which seems right to me. However, one oddity is that while you can specify the minimum number of machines you want active, I’m not sure you can specify a maximum. Instead you need to apply that configuration via the command line.
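To make this concrete, a minimal v2 configuration for a hobby HTTP app might look something like the following. The option names come from Fly’s documented fly.toml format, but the app name, region and port are placeholders:

```toml
# Hypothetical fly.toml for a small hobby app on Apps v2
app = "my-hobby-app"        # placeholder name
primary_region = "lhr"      # placeholder region

[http_service]
  internal_port = 8080      # the port your container listens on
  force_https = true
  # The cost-effective defaults: stop machines when idle,
  # start them again on demand, keep none running.
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0
```

As far as I can tell, the maximum machine count has to be set from the CLI instead, with something like fly scale count 2.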

This seems a bit inconsistent, but there are a ton of other options for scaling across regions, so maybe the possibilities are not easy to boil down to a simple configuration format. Again though, the common hobbyist’s case can probably be catered for, with more sophisticated setups being configured via the CLI or Terraform.

Having gotten through the migration, I remain happy with the service. When spinning up new projects there is more to understand than there was before, but the overall service is probably now better and less magical.

Standard
London, Programming, Web Applications, Work

Halfstack on the Shore(ditch) 2022

This is the first time the conference has been back at Cafe 1001 since the start of the Pandemic and my first HalfStack since 2021’s on the Shore event.

In some ways Halfstack can seem like a bit of an outlandish conference but generally things that are highly experimental or flaky here turn up in refined mainstream forms three to five years later. Part of the point of the event is to question what is possible with the technologies we have and what might be possible with changes that are due in the future. Novelty, niche or pushing the envelope talks are about expanding the conversation about what is possible.

The first standout talk this year was by Stephanie Shaw about Design Systems. It made the absurdist argument that visual memes meet all the criteria to be a design system, before looking at the properties of a good design system that would disqualify memes. The first major point that resonated with me was that design systems are hot and lots of people say they have them, when what they actually have are design principles, a component library or an illustration of UI variant behaviour.

I was also impressed that the talk had a slide dedicated to when a design system would be inappropriate. Context always matters in terms of implementing ideas in organisations and it is important to understand what the organisation needs and capabilities that are required to get value from an idea. Good design systems provide a strong foundation for rapid, consistent development and should demonstrate a clear return on the investment in them.

One of the talks that has stayed with me the longest was about things that can be done right now. I’ve seen Chris Heilmann talk about dev tools at previous conferences, but this time the frame was different: using the browser dev tools to make the web sane again. He reminded me that you can use the dev tools to edit the page. Annoying pop-up? Delete it! Right-click hijacked? Go into the handler bindings and unbind the custom listener. Auto-playing video? Change its attributes or, again, just delete the whole thing. He also explained some new things that I wasn’t aware of, such as the ability to take a screenshot of a specific node from within the DOM inspector. I’ve actually used that a few times since in my work.

There was an impromptu talk that was grounded in a context that was a little hard to follow (maintaining peer-to-peer memes in a centralised-internet apocalypse, I think) but was about encoding images into QR codes, and it included an explanation of how QR codes actually work and encode information (something I didn’t know). The speaker took the image data, transformed it into a series of QR codes, then had a website that displayed the QR codes in sequence and a web app that used a phone camera to scan the codes and reassemble the image locally. The scanning app was also able to understand where in the sequence each QR code was, which created a kind of scanning-line effect as it built up the image; it was very cool to watch.
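The chunk-and-reassemble idea can be sketched in a few lines. This is my guess at the shape of the scheme rather than the speaker’s actual code, and the chunk header fields are invented for illustration:

```javascript
// Split the image data into sequence-numbered chunks (each of which would
// be rendered as one QR code) and reassemble them even if the codes are
// scanned out of order.

function toChunks(data, chunkSize) {
  const total = Math.ceil(data.length / chunkSize);
  const chunks = [];
  for (let i = 0; i < total; i++) {
    chunks.push({
      index: i, // position in the sequence (drives the scanning-line effect)
      total,    // lets the scanner know when it has everything
      payload: data.slice(i * chunkSize, (i + 1) * chunkSize),
    });
  }
  return chunks;
}

function reassemble(chunks) {
  const sorted = [...chunks].sort((a, b) => a.index - b.index);
  if (sorted.length === 0 || sorted.length !== sorted[0].total) {
    return null; // still waiting for more codes to be scanned
  }
  return sorted.map((c) => c.payload).join("");
}
```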

There were three talks that involved a significant amount of simultaneous interaction, each using slightly different methods, but clearly the theme was having many people together on a webpage interacting in near real time.

The first thing to say is that I took a decent but relatively low-powered Pinebook laptop to the conference, as I thought I would just need something simple to take notes and look things up on the internet, maybe code along with some Javascript. All of the interactive demos barely worked on it, and the time for them to become active was significantly longer than for, say, the attendees with the latest Macs. I think the issue was a combination of really substantial downloads (which appeared not to be cached, so refreshing the browser was fatal) and massive CPU requirements in the local synchronisation code.

The first was by a professional developer relations person, Jo Franchetti, who works for Ably and used the Ably API. Predictably this was the best-working (and best-looking) demo, with a fun Halloween theme around the idea of a ouija board or, more technically, trying to spell out messages by averaging all the subscribers’ mouse movements to create a single movement over the screen. However, even with a commercial API, probably no more than 25 connections and a single-screen UI, my laptop still ground to a halt and had significant lag on the animations. It did look great projected on the big screen though.
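The aggregation at the heart of the demo is simple to sketch. This leaves out the Ably pub/sub transport entirely and just shows the averaging step, with hypothetical position objects:

```javascript
// Each client publishes its cursor position; the shared "planchette" is
// drawn at the average of everyone's positions. The transport (Ably
// pub/sub in the talk) is omitted; this is just the aggregation step.
function averagePosition(positions) {
  if (positions.length === 0) return null;
  const sum = positions.reduce(
    (acc, p) => ({ x: acc.x + p.x, y: acc.y + p.y }),
    { x: 0, y: 0 }
  );
  return { x: sum.x / positions.length, y: sum.y / positions.length };
}
```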

Jo’s talk introduced me to an API I hadn’t heard of before: scrollTo (part of a family of scrolling APIs). This is an example of how talks about things on the edge of the possible often come back to things that are more practical day to day.

James Allardice and Ross Greenhalf had the least successful take on the multiuser extension and in terms of presentation style seemed to be continuing an offstage squabble in front of everyone. I get the impression that they were very down on what they had been able to achieve and were perhaps hoping for a showcase example to promote their business.

Primarily they didn’t get this because they were bizarrely committed to AWS Lambda as the deployment platform. Their idea was to do a multiplayer version of Pong, and it kind of worked, except the performance was terrible (for everyone this time, not just me). This in turn actually created a more fun experience than what they had intended to build: the lag meant you needed to be quite judicious about when you sent your command (up or down) to the server, as there was a tendency to overshoot, with too many people sending commands as the ball approached and then more as they waited for the first one to take effect. You needed to slow down your reaction cycle and try to anticipate what other people would be doing.

The game also only lasted for the duration of a single Lambda execution run, as the whole thing ran in the execution memory of a single Lambda instance. This was a consequence of the flawed design, but again it wasn’t hard to imagine how Lambda could be quite effective here as long as you’re not using web sockets as the push channel. It feels like this kind of thing would probably be pretty trivial in something like Elixir in a managed container, but it was a bit of an uphill battle in a Javascript monolith Function as a Service.

The most creative multi-user demo was by Mynah Marie (aka Earth to Abigail, who has performed at previous Halfstacks), who used Estuary to create a 15-person online jam session. It was surprisingly harmonious for a large group with little ability to monitor your own sound (I immediately had more empathy for any musician who has asked the desk for less drums in their monitor). However, synchronisation was again a big problem: not only did other people paste over my loops, but after I left the session one of my loops remained stubbornly playing until killed by the admin. I was given a new user identity on rejoining and no-one seemed able to reconnect with the orphan session.

Probably the most mind-blowing technical talk was by Ulysses Popple about his tool Nodessey, which is both a graph editor (or notebook) and a way to feed values into nodes that can then visualise the input they receive from their parent nodes. It reminded me a bit of PureData. I found the talk, which was a mixture of notes and live-coded examples, a bit tricky to follow: it’s an unusual design, and trying to follow how the data structure was working while also following the implementation was hard for me.

One thing I found personally interesting is that Nodessey is built on top of a minimal framework called Hyperapp which I love but have never seen anyone else use. I now see that I have very much underestimated the power of the framework and I want to start trying to use it more again.

Michele Riva did a talk about the use of English in programming languages, which included a helpful introduction to programming languages created in non-English languages. As an English speaker you tend never to need to leave the US-led universe of English-based languages, but it was interesting to see how other language communities had approached making programming accessible to non-English speakers. There was a light touch on non-alphabetic languages and symbolic languages like J (and of course brainfuck).

Perhaps the most practical talk of the conference was by Ante Barić, about browser extensions. I’ve found these really valuable for creating internal organisation tooling in a very lightweight way, but, as Chris Heilmann reminded us in his talk, too many extensions end up hammering browser performance as they all attempt to intercept the network requests and render cycle. The talk used a version of Clippy to add annoying commentary to the websites you were visiting, but it had some useful insight into what is happening with browser extensions, future plans from both the Google and Mozilla teams, and practical ways to build and use them.

Ante mentioned a tool I was previously unaware of called web-ext, a Mozilla project which might be able to build Chrome extensions in the future and which gives you a simplified framework for putting extensions together.

General notes

Food and drink are available whenever you want them, just by showing the staff your conference lanyard. Personally I think it is great when conferences are able to be so flexible around letting people eat when they want to, avoiding the massive queues that typically happen when you try and cram an entire conference into a buffet in 90 minutes. I think it also helps include people who may have particular eating patterns that might not easily fit into scheduled tea and lunch breaks. It also makes it feel less like school.

In terms of COVID risk, the conference was mostly unmasked and since part of the appeal is the food and drink I felt like I wasn’t going to be changing my risk very much by wearing a mask during the talk sections. The ventilation seemed good (the room could be a bit cold if you were sitting in the wrong place) and there was plenty of room so I never had to sit right next to someone. This is probably going to remain a conference that focuses on in-person socialising and therefore isn’t going to appeal to everyone. Having a mask mandate in the current environment would take courage. The open air “beach” version of the conference on the banks of the Thames would probably be more suitable for someone looking to avoid indoor spaces.

Going back?

Halfstack is a lot of fun and I’ve booked my super early-bird ticket for this year. I think it offers a different balance of material compared to most web and Javascript conferences. This year I learnt practical things I could bring to my day job and was impressed by what other people have been able to achieve in theirs.

Standard
Web Applications

Email services in 2021

I read this article about switching away from GMail and it struck a bit of a chord in terms of my own attempts to find a satisfactory replacement.

At the moment I feel like I’m using all of the services, and at some point I should pick the one or two that actually meet my needs.

I have ProtonMail and Tutanota accounts for security. In truth I’ve ended up using neither (unless I see someone else using a ProtonMail address).

Day to day I’m probably still using GMail for most things and Fastmail for things where I don’t want to embarrass myself with my GMail address which is based on a gaming handle. Therefore over time my Fastmail address has become my address for financial things, communication with tradespeople and professionals and the odd real-world email invoice and so on.

It may sound strange, but the biggest reason I don’t use Tutanota more is that the address is hard to communicate verbally to other people. Fastmail still needs the first word to be spelled out, but people expect the XMail.com format and seem to have a lot less trouble with it.

I was on the verge of unsubscribing from Hey when it had its massive wobble over handling political issues at work (or alternatively white privilege, whichever way you see it). Rightly or wrongly, I’ve kept on using and paying for it.

The strange niche I’ve found for it is communicating with my extended family and occasionally a bit of business communication. The mail-handling features just seem to work really well when I don’t want to respond immediately but want something better than “star” or “pin”.

When I started with Hey I was very excited about the “Feed” feature for managing email newsletters, but after a while I found myself not using it very much and have started using Feedbin for newsletters instead.

The Hey Paper Trail function is also good, but when it comes to things like online orders I find delivery updates easier to handle in GMail.

However, exactly like the author of the article, I find Fastmail the most complete replacement for GMail, having a similar feature set (including a calendar). While Hey might be better for keeping a well-managed, near-zero mailbox, Fastmail is better for the pragmatic keep-it-all-and-search-it-when-you-need-it approach to email.

Standard
Web Applications

Roam Research: initial thoughts

Roam Research not only justified subscribing pretty much up front but has also made it onto my pinned tabs in virtually no time flat. It’s basically a web-based knowledge-management system. I’m already a fan of Workflowy, so I’m comfortable with putting information into trees and hierarchies; in fact there’s a lot of overlap between the two applications, as you can just use Roam as a kind of org-mode bulleted list organiser.

The thing that makes it different is the ability to overlay a wiki-like ability to turn any piece of text into a link which creates another list page to store other notes.

The resulting page highlights the linked portions of the trees in other pages as well as containing its own content.

The links then form a graph that can be explored, but I haven’t generated enough content for it to yield any useful insight yet.

The pages are searchable so you can either take wiki-like journeys of discovery through your notes or just search and jump to anything relevant in your knowledge graph.
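A toy sketch of the linking model as I understand it: any double-bracketed phrase in a note becomes a page, and the pages plus their backlinks form the graph you can traverse or search. Roam’s real data model is far richer; the parsing here is deliberately naive and the page names are invented:

```javascript
// Extract [[wiki-style]] links from a note's text.
function extractLinks(text) {
  const links = [];
  const re = /\[\[([^\]]+)\]\]/g;
  let m;
  while ((m = re.exec(text)) !== null) links.push(m[1]);
  return links;
}

// Build a backlink index: page title -> titles of pages that link to it.
function backlinks(pages) {
  const index = {};
  for (const [title, body] of Object.entries(pages)) {
    for (const target of extractLinks(body)) {
      (index[target] ||= []).push(title);
    }
  }
  return index;
}
```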

By default the system creates a daily “diary” page for you to record notes in an initially unstructured way organically as you roll through the day. I’m still primarily in my todo lists in a Getting Things Done mode during the day but I have found it a useful end of day technique for reflecting or summarising ideas to follow up on.

Roam is very much influenced by and part of the new wave of knowledge management tools based on Zettelkasten. If you’re unfamiliar it’s worth reading up on it (I don’t know it well enough to create a pithy summary).

To date, though, everything I’ve tried in this space has been a bit formal and tricky to get going or to fit into my existing ways of working. Roam, on the other hand, is web-based, relatively quick and usable, and uses enough metaphors from existing systems to feel accessible.

Weirdly the first use that convinced me I needed this service was actually recipes. You can have a hierarchy of different types of recipes but use a link and you can have a vertical slice across ingredients or techniques.

The second was while doing some genuine market research on Javascript enhancement frameworks, where I wanted one page for my overall thoughts (“Is this something to pursue?”) and was able to break the list of all the frameworks I was looking at into their own pages, with links to the frameworks and any thoughts I had as I was playing around with them.

The mobile experience isn’t quite as good; it’s a kind of fast noting system, and I’m not sure how I can quickly attach a thought to an existing page. There it’s still easier to use a note-taking app and consolidate thoughts later.

Overall though this is still the most exciting web app I’ve used this year.

Standard
Programming, Software, Web Applications, Work

Prettier in anger

I’ve generally found linting to be a pretty horrible experience and Javascript/ES hasn’t been any exception to the rule. One thing I do agree with the Prettier project about is that historically linters have tried to perform two tasks with mixed success: formatting code to conventions and performing static analysis.

Really only the latter is useful and the former is mostly wasted cycles except for dealing with language beginners and eccentrics.

Recently at work we adopted Prettier to avoid having to deal with things like line-lengths and space-based tab sizes. Running Prettier over the codebase left us with terrible-looking cramped two-space tabbed code but at least it was consistent.

However having started to live with Prettier I’ve been getting less satisfied with the way it works and Prettier ignore statements have been creeping into my code.

The biggest problem I have is that Prettier has managed its own specific type of scope creep out of the formatting space. It rewrites way too much code based on line-length limits and weird things like precedence rules in boolean statements. For example, if you have a list with only one entry and you want to place that single entry on a separate line to make it clear where you intend developers to extend the list, Prettier will put the whole thing on a single line if it fits.

If you bracket a logical expression to help humans parse the meaning of the statement, but precedence rules mean the brackets are superfluous, then Prettier removes them.
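Both cases can be worked around with Prettier’s escape hatch, the // prettier-ignore comment, which tells it to leave the next statement alone. The sample values here are hypothetical, and exact formatting behaviour varies by Prettier version:

```javascript
// Hypothetical values, just to make the snippet runnable.
const user = { isAdmin: false, isOwner: true };
const record = { isLocked: false };

// Without the comment Prettier would collapse this one-entry array onto a
// single line; the ignore comment preserves the "extend me here" layout.
// prettier-ignore
const SUPPORTED_LOCALES = [
  "en-GB",
];

// These brackets are redundant by precedence (&& binds tighter than ||)
// but help a human parse the condition; the ignore comment stops the
// formatter rewriting the expression.
// prettier-ignore
const isEditable = (user.isAdmin && !record.isLocked) || user.isOwner;
```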

High-level code is primarily written for humans; I understand that the code is then transformed to make it run efficiently, and all kinds of layers of indirection are stripped out at that point. Prettier isn’t a compiler though: it’s a formatter with ideas beyond its station.

Prettier has also benefited from the Facebook/React hype cycle so we, like others I suspect, are using it before it’s really ready. It hides behind the brand of being “opinionated” to avoid giving control over some of its behaviour to the user.

This makes using Prettier a kind of take-it-or-leave-it proposition. I’m personally in a leave-it place, but I don’t feel strongly enough to argue for removing it from the work codebase. For now, telling Prettier to ignore code, while an inaccurate expression of what I want it to do, is fine while another generation of Javascript tooling is produced.

Standard
Programming, Web Applications, Work

Why can’t Forms PUT?

HTML forms can declare a method, the HTTP verb that is used when the form is submitted; the value of this method can be GET or POST.

The HTML5 spec briefly had PUT and DELETE as valid values for the form method but has now removed them. Firefox also added support and subsequently removed it.

Recently over the course of Brexit night at The Guardian we got into a discussion about why this was the case and what the “right” way to map a form into a REST-like resource system would be.

The first piece of research was to dig into why the additional methods had been added and then removed. The answer (via Ian Hickson) was simple: PUT and DELETE have implied idempotency, and the nature of form submission is that it is inherently uncacheable, so it cannot be properly mapped onto those verbs.

So, basic problem solved, and it also implies the solution for the URL design of a form. A form submission represents a user submitting an untrusted data payload to a resource; this resource may in turn choose to make PUT or DELETE requests, but it would be dangerous to have the form do this directly.

The resource therefore is one that represents the form submission. In terms of modelling the URL I would be tempted to say that it takes the form :entity/form/submission, so for example: contact/form/submission.

There may be an argument that POSTing to the form resource represents submission, so the submission part of the structure is unnecessary. In my imagination, though, the form resource itself represents the metadata of the form, while the submission is the resource that essentially models a valid submission and represents the outcome of that submission.
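A sketch of the pattern in Javascript might look like this. The URL shapes follow the :entity/form/submission idea above, while the validation rule, entity URL and function names are placeholders of my own:

```javascript
// The browser can only POST the untrusted form payload to a submission
// resource; it is the server that decides whether to turn a valid
// submission into an idempotent PUT against the real entity.

function submissionUrl(entity) {
  return `/${entity}/form/submission`;
}

// Handle a POST to the submission resource: validate, then describe the
// PUT the server itself would make against the underlying entity.
function handleSubmission(entity, payload) {
  if (!payload || !payload.email) {
    return { status: 422, error: "invalid submission" };
  }
  return {
    status: 201,
    upstream: {
      method: "PUT", // safe here because the server, not the form, issues it
      url: `/${entity}/${payload.email}`,
      body: payload,
    },
  };
}
```

A real server-side handler for POST /contact/form/submission would call something like handleSubmission and then perform the upstream PUT itself.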

Standard