Month notes

August 2024 month notes

Copilot

Ever late to the party, I’ve finally been using AI-assisted coding on a work project. It’s been a really interesting experience, sometimes helpful and sometimes maddening.

Among the positives: it was easy to get the LLM to translate between different number systems, like RGB and hex, or pixels, rems and Tailwind units.

It was pretty good at organising code according to simple rules like lexical sorting, but it was defeated by organising imports according to linting rules. Within those limits it’s a great tool for tidying crufty code that hasn’t been cared for in a while, and it has often been more powerful than pure AST-based refactoring.

At one point it correctly auto-populated stub airport code data into a test data structure, which felt like something I hadn’t seen from code assistance before.

It also helped me write a bash script in a fraction of the time it would normally take. The interesting thing here was that I know a reasonable amount of bash but can never remember the proper bracketing and spacing. Although I tweaked every line that was produced, it was much quicker than Googling the correct syntax or repeatedly running and fixing the script.

What wasn’t so great was that Copilot and IntelliSense suggestions aren’t really differentiated in the UI, so it was unclear which completions were the result of reflection or inference from the code and which were based on probability. If a field name is being suggested then that should only come via reflection, in my view. All too often the completion resulted in an immediate check error because the field had a slightly different name or didn’t exist at all.

I’m almost at the point of switching off Copilot suggestions because they aren’t accurate enough right now.

Would I pay for this myself right now? No, I don’t think this iteration has the right UX or enough ability to understand the context of the code. However, there will be a price point in the future that is right for things like the script writing.

Atuin

I started a new job recently and probably the most useful tool I’ve used since starting is Atuin which gives you a searchable shell history. I’ll probably write up more about my new shell setup but I think being able to pull back commands quickly has made it massively easier to cope with a new workflow and associated commands and tools.

Form Data

This little web standards built-in was the best thing to happen to my hobby coding this month. I can’t believe I’ve gone this long without ever having used it. You can pass it a DOM reference and access the contents of the form programmatically, or you can construct an instance and pass it along to a fetch call.

It’s incredibly useful and great for using in small frontends.
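
As a minimal sketch of why it is so handy (the field names here are invented for illustration):

```javascript
// FormData is a web standard, also available in Node 18+.
// In the browser you can wrap an existing form: new FormData(formElement).
// Here we build an instance directly; the field names are made up.
const data = new FormData();
data.set('title', 'Month notes');
data.set('month', '2024-08');

// Read individual values programmatically
console.log(data.get('title')); // "Month notes"

// Or turn the whole form into a plain object
const asObject = Object.fromEntries(data.entries());
console.log(asObject);

// The same instance can be passed straight to fetch as a request body:
// fetch('/posts', { method: 'POST', body: data });
```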

Reading list

Gotchas in using SQLite in production: https://blog.pecar.me/sqlite-prod

Practical SVG has been published for free on the internet after publisher A Book Apart stopped distributing its catalogue.

Let’s bring about the end of countless hand-rolled debounce functions: https://github.com/whatwg/dom/issues/1298
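
For context, the kind of hand-rolled helper the proposal would make unnecessary looks something like this sketch:

```javascript
// A typical hand-rolled debounce: fn only runs after `wait` ms have
// passed without another call, so rapid bursts collapse into one call.
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Example: of three rapid calls, only the last one fires
const seen = [];
const record = debounce((msg) => seen.push(msg), 50);
record('first');
record('second');
record('third'); // after ~50ms of quiet, seen becomes ['third']
```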

Python packaging tool uv had a major release this month. Simon Willison shared a number of interesting observations over at the Lobsters thread on the release. I’m still uncertain about the wisdom of trying to fund developer tooling with venture capital (I don’t believe the returns are there), but I did come round to people’s arguments that the tools could be brought into community stewardship if needed. Thinking of recent licensing forks, the argument seems persuasive.

I’m currently happily mimbling along with pipenv, but I need to update some hobby apps to Python 3.12/3.13 soon, so I think I’m going to give uv a go and see what happens.

I also started a small posts blog this month so I’m probably going to post these items there in the future.


July 2024 month notes

Dockerising Python

Fly have changed their default application support to avoid buildpacks and provide a default Dockerfile when starting new projects. I’ve been meaning to upgrade my projects to Python 3.12 as well, so when one of my buildpack projects stopped deploying I ended up spending some time working out how best to package Python applications for a PaaS deployment.

I read about which distribution to use as your base image, but I haven’t personally encountered the problems described (often attributed to Alpine’s musl libc) and my image sizes are definitely smaller with Alpine.

Docker’s official documentation is a nightmare, with no two Dockerfiles consistent in approach. This page has some commented example files under the manual tabs, but there doesn’t seem to be an easy way to generate a direct link to them, which is actually typical of my experience of the documentation.

There also doesn’t seem to be a consistent view as to whether an application should use the system Python or a virtual environment within the container. The latter seems more logical to me and is what I was doing previously but the default Fly configuration isn’t set up that way.
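
For what it’s worth, the virtual-environment approach I’d been using looks roughly like this sketch (the entry point and requirements file names are assumptions):

```dockerfile
FROM python:3.12-alpine

# Create a virtual environment inside the image rather than
# installing into the system Python
RUN python -m venv /opt/venv
# Putting the venv first on PATH makes python and pip resolve to it
ENV PATH="/opt/venv/bin:$PATH"

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```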

Services

I have quite a few single-user hobby web projects and I’ve been wondering if they wouldn’t work a lot better with a local SQLite datastore, but it is often easier to use a cloud Postgres service than it is to have a secure read-write directory available to an app and to manage backups and so on yourself.

Turso is taking this idea one step further to try and solve the multi-tenancy issue by providing every client with a lightweight database.

I gave Proton Docs a whirl this month and they are pretty usable with the caveat that I haven’t tried sharing and collaboratively editing them yet. The one thing that is missing for me at the moment is keyboard shortcuts which seem pretty necessary when you’re typing.

I had previously tried de-Googling with Cryptpad, which is reasonable for spreadsheets but has a really clunky document interface compared to Google Docs, and which I ended up using more out of principle than because it was an equivalent product.

Reading list

It’s possible to get hung up on what a good image description looks like, but this WAI guide to writing alt text for images is straightforward and breaks down the most common cases with examples.

Smolweb is a manifesto for a smaller, lighter web which aligns for me with the Sustainable Web initiatives. There are a few interesting ideas in the manifesto such as using a Content Security Policy to stop you from including content from other sites (such as CDNs).

Following up on this theme is a W3C standard for an Ethical Web, which also felt very inspiring. Or maybe depressing that some of these things need to be formulated in a common set of principles.

I also found out about the hobby Spartan protocol this month, which seems like it would be a fun thing to implement and is closer to the original HTTP spec, which was reasonably easy for people to follow and implement.


June 2024 month notes

Meetups

I went to the Django monthly meeting (which clashed with the first England football match and the Scala meetup) where my former colleague Leo Giordani talked about writing his own markup language Mau for which he’d even hand-rolled his own parser so that he could switch lexing modes between block and inline elements.

Browsers

The Ladybird browser got a non-profit organisation to support its development, and the discussion about it reminded me that the Servo project also exists.

In the past we’ve understood that it is important to have a choice of different browser implementations, so I think it is good to have these community-based browsers to complement the commercial and foundation-backed ones.

I also used Lynx for the first time in many years as I wanted to test a redirect issue with a site and it is still probably the easiest way to check if public facing sites are routing as they should.

Alternative search engines

I started giving Perplexity a go this month after seeing it recommended by Seth Godin. That was before the row with content creators kicked off in earnest. I’m going to let that settle out before continuing to explore it.

I was using it not for straight queries but instead to ask for alternatives to various products or methods. It successfully understood what I was talking about and successfully offered alternatives along with some pros and cons (which, to be honest, felt quite close to the original material rather than being a synthesis). Queries that benefit from synthesis are definitely one area where LLM-based queries are better than conventional searching by topic.

I’ve also tried this on Gemini but the answers didn’t feel as good, as the referenced sources were not as helpful. I would have thought the Google offering would be better at this, but having said that, a lot of Google’s first-page search widgets and answer summaries are often not great either.

CSS Units

I learnt about the ex CSS unit this month as well as some interesting facts about how em is actually calculated. I might take up the article’s suggestion of using it for line-height in future.

The calculation of em seems to be the root cause of the problems leading to this recommendation to use rem for line width rather than ch (I’ve started using ch after reading Every Layout, but I don’t use a strict width for my own projects, judging for myself what feels appropriate).

The environmental impact of LLMs

Both Google and Microsoft (Register’s article, Guardian article) announced that they have massively increased their emissions as a result of increased usage and training of AI models.

The competition to demonstrate that a company has a leading model is intense and there is a lot of money being driven through venture capital and share prices that provides the incentive. This profligacy of energy doesn’t feel like a great use of resources though.

I’ve also read that Google has relied on buying offsets rather than switching to genuinely sustainable fossil-fuel-free energy. Which, if true, is completely mad.

Reading list

I learnt this month that Javascript has an Atomics object, which is quite intriguing as I think atomics are some of the easiest concurrency primitives to work with. The Javascript version is quite specific and limited (it works only on integer typed arrays, typically backed by a SharedArrayBuffer) but it had completely passed me by.
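
A small sketch of the API, which runs in Node as well as in browsers that allow SharedArrayBuffer:

```javascript
// Atomics operates on integer typed arrays, typically backed by a
// SharedArrayBuffer so that multiple worker threads can share memory.
const buffer = new SharedArrayBuffer(4); // one 32-bit slot
const counter = new Int32Array(buffer);

// Atomically increment and read; in a real program these calls could
// come from different workers without a data race.
Atomics.add(counter, 0, 1);
Atomics.add(counter, 0, 41);
console.log(Atomics.load(counter, 0)); // 42
```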

I also really enjoyed reading through bits of this series on writing minimal Django projects which really helps explain how the framework works and how the bits hang together.


May 2024 month notes

Updating CSS

My muscle memory on CSS is full of left and right, top and bottom. The newer -inline and -block properties use start and end qualifiers to avoid confusion with right-to-left languages. This month I made an effort to convert my older hobby code over to the new format to try and get the new names ingrained in my memory.

Another example of things in web development that now have to be unlearnt is that target="_blank" is now safe by default (modern browsers imply rel="noopener"). This used to be drilled into web developers.

Learning with LLMs

I had my first positive experience using a LLM-based model to learn to code something this month. It was an interesting set of circumstances that led to it really working for me where it hadn’t before.

  • I didn’t know much about the topic, therefore I didn’t know how to formulate search queries that gave me good results
  • The official documentation was complete but poorly written and organised, exploring text can be the perfect task for an LLM
  • Information was scattered over several sites, including Medium. There wasn’t one article or site that really had a definitive answer so synthesising across several sources really helped. I wanted the text of the official documentation combined with the working code from a real person’s blog post.

I used a couple of different systems, but Codemate was the most helpful, followed by Google’s Gemini.

Previously I’d been searching for information that I already know quite well, so instead of the value outweighing any hallucinated misses, the mistakes just irritated me. Summarising data from multiple sources is genuinely an LLM superpower, so this consolidation of several not-great sources was probably right in its sweet spot.

URL exploring and saving

I needed to build up some queries on a system’s API this month. I decided to give Slumber a go after trying some local Postman-style clones.

The tool is a TUI and uses a YAML file as its store and dynamically syncs the UI when the file is saved. There were a couple of issues; for example it would be helpful to be able to save the content of a response to file and if something is marked sensitive (like the bearer token) then I would prefer to see it masked in the UI.

Overall though I got what I needed done, and the system was a lot easier than most web-based GUI tools I’ve used, as the underlying storage and its relation to the interface is really clear.

Also a shout out to chains: initially these seemed to be an example of making simple things complicated, but as I understood them more I found they are amazingly powerful for coordinating the setup of calls.

Community events

I went to the May Day Data Science event for the first time. It seems the best talks were in rooms that had the least capacity and there was a strict no standing rule. Despite this I did pick up some useful bits and pieces, in particular around prompt design.

I also went to the Django Meetup held at the Kraken offices and was really struck by what a great engineering team they have built up there. Dave Seddon gave a great introduction to the “native library escape hatch” that exists in Python, this time showing how to bring in Rust code to improve execution time.

I also went to the Python Meetup this month and spent a day in Milton Keynes at the Juxt 24 conference which had a lot of interesting talks and where I could have spent a lot more time at the afterparty.


April 2024 month notes

I’ve read quite a few people complaining about the continuing degradation of Google search results but this month I genuinely started to notice issues with search results about programming and system design. There’s always been a bit of game playing in the top position but the problem I noticed was that the later results feature a lot of recycling of the same information (and in my case incorrect or irrelevant information) so that there was really only one result on the front page.

There were also a lot of Medium links; Medium itself is getting increasingly unusable if you don’t want to have an account or engage in whatever pop-up activity Medium thinks is going to boost its monthly active users.

Search alternatives

I’ve started using Ecosia for its green credentials and because it seems to have results that are less gamed (although W3 Schools is still too prominent). I also gave Codemate Bot a go, which is essentially a tailored LLM. It seemed a bit better than Gemini and a few times gave the right answer faster than Google searching. However follow up questions were pretty terrible and conventional LLMs seemed to be better at refining.

This is going to be a bit of a painful ongoing task I think.

Online learning

I’ve been revisiting some Javascript and Typescript basics recently because both languages have changed since I originally encountered them and some new features have replaced previous conventions. I prefer text-based learning because I find it much easier to skim over areas that I know than it is to fast-forward through a video. I have therefore been using Educative and Lean Web Club.

Lean Web Club is primarily web-standards Javascript and a bit of CSS; its small projects and bite-sized explanations are pretty handy, but it lacks an internal search for when you can’t quite remember where something is located. It has been handy for seeing examples of how low-level ES Modules work, for Web Components, and for getting an overview of the different storage APIs that exist (and which ones haven’t been deprecated!).

Educative is broader in its content and works with different content providers to adapt their material to the platform. Therefore the style is a bit more variable particularly in the granularity of the course topics. It features mini-quizzes and again the quality is a bit variable but it does try to use different means to consolidate learning.

Like everything today, Educative has an LLM element which means it can ask open-ended questions that you reply to with free text and then your answer is evaluated. This seems pretty handy for things like interviewing and testing how clear your explanations are. However just like interviewing it can suffer from unclear questions.

For example in one question about distributed systems it wanted more detail on handling distribution across geographic regions but was unclear about whether there was meant to be a global identity service for all regions or the service was meant to be independently distributed so regions were compatible but still globally unique. There wasn’t really a way to tease that out of the LLM and even the “ideal” answer wasn’t very clear on the preferred approach.

What is awesome in Educative (and credit to MDN because it also has this feature in its documentation and I use it a lot there too) is that it has interactive code examples inline that you can edit and play around with. This allows you to see the effect of the code which is often easier than reading about what it is meant to do and you can play around to confirm your understanding of what is happening.

There were lots of Typescript modules I wish I had read before encountering the topics in the real world: interface membership and its associated type checks, and when basic type inference fails, for example.

Rust Nation pre-conference talks

I went to a community meetup of preview talks from the Rust Nation conference that was held last month. The most interesting talk was this one about the culture of purity in Rust around the use of unsafe, and how, if the desire for memory safety is to be realised, there needs to be work in some of the core libraries the language community uses. I thought Tim did a good job of combining practical research with a plea for a more tolerant community.

Work

Learning to love the Capability Maturity Model

I had a job where the management were enamoured of the Capability Maturity Model (CMM) and all future planning had to be mapped onto the stages of the maturity model. I didn’t enjoy the exercise very much because, in addition to the five documented stages, there was generally a sixth: stagnation and decay, as the “continually improving” part of the Optimising stage was generally forgotten in my experience.

Instead budgets for ongoing maintenance and iteration were cut to the bone so that the greatest amount of money could be extracted from the customers paying for the product.

Some government departments I have had dealings with had a similar approach where they would budget capital investment for the initial development of software or services and then allocate nothing for the upkeep of them except fixed costs such as on-premise hosting for 20 years (because why would you want to do anything other than run your own racks?).

This meant that five years into this allegedly ongoing-cost-free paradise, services were breaking down, no budget was available to address security problems, none of the original development team were available to discuss the issues, and the bit rot of the codebase made a rewrite the only feasible response, undercutting the entire budgetary argument for amortisation.

A helpful model misapplied

So generally I’ve not had a good experience with people who use the model. And that’s a shame, because recently I’ve been appreciating it more and more. If you bring an Agile mindset to the application of CMM, seeing it as a way of describing the lifecycle of a digital product within a wider concept of cyclical renewal and growing understanding of your problem space, then it is a very powerful tool.

In particular, some product delivery practices make an assumption about the underlying maturity of the business process. Let’s take one of the classics: the product owner or subject matter expert. Both Scrum and Domain-Driven Design assume that there is someone who understands how the business is meant to work and can explain it clearly, in a way that can be modelled or turned into clear requirements.

However this can only be true at Level 2 (Repeatable) at the earliest and generally the assumption of a lot of Agile delivery methods is that the business is at Level 4 (Managed). Any time a method asks for clear requirements or the ability to quantify the value returned through metrics you are in the later stages of the maturity model.

Lean Startup is one of the few that actually addresses the problems and uncertainty of a Level 1 (Initial) business. It focuses on learning and trying to lay down foundations that are demonstrated to be consistent and repeatable. In the past I’ve heard a lot of argument about the failings of the Minimum Viable Product and the need for a Minimum Loveable, Marketable or otherwise more developed Product. Often the people who make these arguments seem confused about where they are in terms of business maturity.

The Loveable Product often tries to jump to Level 3 (Defined), enshrining a particular view of the business or process based on the initial results. Sometimes this works, but it is just as likely to get you to a dangerous cul-de-sac where the product is too tailored to a small initial audience and needs to be reworked if it is to meet the needs of the larger potential target audience.

John Cutler talks about making bets in product strategy and this seems a much more accurate way to describe product delivery in the early maturity levels. Committing more effort without validation is a bigger bet, often in an early stage business you can’t do that much validation, therefore if you want to manage risk it has to be through the commitment you’re making.

Go-to-market phases are tough partly because they explicitly exist in these low levels of capability maturity; often you as an organisation and your customers are in the process of trying to put together a way of working with few historic touchpoints to reference. It’s natural that this situation is going to be a bit chaotic and ad hoc. That’s why techniques that focus on generating understanding and learning are so valuable at this stage.

The rewards of maturity

Even techniques like Key Performance Indicators are highly dependent on the underlying maturity. When people talk about the need to instrument a business process they often have an unspoken assumption that there is one that just needs to be translated into a digital product strategy of some kind. That assumption can often be badly wrong, and it turns out the first task is actually traditional business analysis to standardise what should be happening, and only then to instrument it.

In small businesses in particular there is often no process other than the mental models of a few key staff members. The key task is to try and surface those mental models (which might be very successful and profitable; immature doesn’t mean not valuable) into external artefacts that are robust enough to go through continuous improvement processes.

A lot of businesses jump into Objectives and Key Results, and as an alignment tool that can be really powerful. But when it comes to Key Results, if you are not at that Level 4 (Managed) stage then the Key Results often seem to boil down to activities completed rather than outcomes. In fairness, at Level 5 (Optimising) the two can often be the same. Intel’s original OKRs seem very prescriptive compared to what I’ve encountered in most businesses, but Intel had a level of insight into what was required to deliver their product that most businesses don’t.

If you do get to that Level 5 (Optimising) space then you can start to apply a lot of buzzy processes with great results. You can genuinely be data-driven, you can do multi-variant testing, you can apply RICE, you can drive KPIs with confidence that small gains are sustainable and real.

Before you’re there though you need to look at how to split your efforts between maturing process, enabling consistency and not just doing digital product delivery.

Things that work across maturity stages

Some basic techniques work at every stage of maturity: continual improvement (particularly as expressed through methods like total quality), basic business intelligence that quantifies what is happening without necessarily being able to analyse or compare it, and creating focus.

However, until you get to Level 2 (Repeatable), the value of most techniques based on value return or performance improvement is going to be almost impossible to assess. To some extent the value of a digital product at Level 1 (Initial) is to offer a formal definition of a process and subject it to analysis and revision. Expressing a process in code and seeing what doesn’t work in the real world is a modelling exercise in itself (but sadly a potentially expensive one).

Learning to love the model

The CMM is a valuable way of understanding a business, and used as a tool for understanding rather than cost-saving it can help you work out whether certain agile techniques are going to work or not. It also helps you understand when you should be relying more on your own understanding and expertise rather than data.

But please see it as a circle rather than a purely linear progression. As soon as your technology or business context changes you may be experiencing a disruptive change that might mean rethinking your processes rather than patching and adapting your current ones. Make sure to reassess your maturity against your actual outputs.

And please always challenge people who argue that product or process maturity is an excuse to strip away the capacity to continually optimise because that simply isn’t a valid implementation of the model.


March 2024 month notes

Dependabot under the hood

I spent a lot more time this month than I was expecting with one of my favourite tools, Github’s Dependabot. It started when I noticed that some projects were not getting security updates that others were. I know it is possible for updates to be suspended on projects that neglect them for too long (I should really archive some of my old projects), but checking the project settings confirmed that everything was set up correctly and there was nothing that needed enabling.

Digging in, I wondered how you are meant to view what Dependabot is doing; you might think it is implemented as an Action or something similar, but in fact you access the information through the Insights tab.

Once I found it, I discovered that the jobs had indeed been failing silently (I’m still not sure if there’s a way to get alerted about this) because we had upgraded our Node version to 20 but had switched the engine-strict option on. It turns out that Dependabot runs on its own images, and those were running Node 18. It may seem tempting to insist that your CI uses the same version as your production app, but in the case of CI actions there’s no need to be that strict; after all, they are just performing actions in your repository management that aren’t going to hit your build chain directly.
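
The failing combination was essentially this pair of fragments from a standard npm setup:

```
# .npmrc
engine-strict=true
```

```json
{
  "engines": {
    "node": ">=20"
  }
}
```

With engine-strict on, npm refuses to install when the running Node (18 on Dependabot’s image at the time) doesn’t satisfy the engines range in package.json.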

Some old dependencies also caused problems in trying to reconcile their target version, the package.json Node engine and the runtime Node version. Fortunately these just highlighted some dependency cruft and deprecated projects that we needed to cut out of the project.

It took a surprising amount of time to work through the emergent issues but it was gratifying to see the dependency bundles flowing again.

Rust

I started doing the Rustlings tutorial again after maybe a year in which I’d forgotten about it (having spent more time with Typescript recently). This is a brilliant structured tutorial of bite-sized introductions to various Rust concepts. Rust isn’t that complicated as a language (apart from its memory management) but I’ve found the need to have everything right for the code to compile means that you tend to need to devote dedicated time to learning it and it is easy to hit some hard walls that can be discouraging.

Rustlings allows you to focus on just one concept and scaffolds all the rest of the code for you, so you’re not battling a general lack of understanding of the language structure and can just focus on one thing, like data structures or library code.

Replacing JSX

Whatever the merits of JSX, it introduces a lot of complexity and magic into your frontend tooling, and I’ve seen a lot of recommendations that it simply isn’t necessary given the availability of tagged template literals. I came back to an old Preact project this month that I had built with Parcel. The installation had a load of associated security alerts, so on a whim I tried it with ViteJS, which mostly worked except for the JSX compilation.

Sensing a yak to shave I started to look at adding in the required JSX plugin but then decided to see if I really needed it. The Preact website mentioned htm as an alternative that had no dependencies. It took me a few hours to understand and convert my code and I can’t help but feel that eliminating a dependency like this is probably just generally a good idea.

The weirdest thing about htm is how faithful it is to the JSX structure; I was expecting something a bit more, well, HTML-y, but props and components work pretty much exactly how they do in JSX.
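
htm is built on tagged template literals. As a toy illustration of the mechanism (not htm’s actual implementation, which parses the strings into virtual DOM nodes), a tag function receives the static strings and the interpolated values and can build whatever structure it likes:

```javascript
// A toy tag function showing how htm-style templates work: the tag
// receives the literal's static parts and the interpolated values.
// Here we just interleave them back into a string.
function html(strings, ...values) {
  return strings.reduce(
    (out, str, i) => out + str + (i < values.length ? String(values[i]) : ''),
    ''
  );
}

const name = 'world';
const markup = html`<h1 class="title">Hello, ${name}!</h1>`;
console.log(markup); // <h1 class="title">Hello, world!</h1>
```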

Postgres news

A Postgres contributor found a backdoor targeting SSH that had required an extensive amount of social engineering to put in place. If you read his analysis of how he discovered it, it seems improbable that it would have been found. Some people have said this is a counterpoint to “many eyes make bugs shallow”, but the real problem seems to be how we should maintain mature opensource projects that are essentially “done” and just need care and oversight rather than investment. Without wanting to centralise open source, it feels like foundations actually do a good job here by allowing these kinds of projects to be brought together and have consistent oversight and change management applied to them.

I read the announcement of pgroll which claims to distil best practice for Postgres migrations regarding locks, interim compatibility and continuous deployment. That all sounds great but the custom definition format made me feel that I wanted to understand it a little better and as above, who is going to maintain this if it is a single company’s tool?

Postgres was also compiled into WASM and made available as an in-memory database in the browser, which feels a bit crazy but is also awesome for things like testing. It is also a reminder of how Web Assembly opens up the horizons of what browsers can do.

Hamstack

Another year, another stack. I felt Hamstack was tongue in cheek, but the rediscovery of hypermedia does feel real. There’s always going to be a wedge of React developers, just as there will be Spring developers, Angular developers or adherents of anything else that had a hot moment at some point in tech history. However, it feels like there is more space to explore web-native solutions now than there was in the late 2010s.

This article also introduced me to the delightful term “modulith”, which perfectly describes the pattern that I think most software teams should follow until they hit the problems that lead to other solution designs.


2023: Year in review

2023 felt like a very chaotic year, with big changes in what investors were looking for, layoffs that often felt one step away from panic, a push from businesses to return to the office (often without thinking about what that would look like) and a re-evaluation of the technical truisms of the last decade. So much happened, I think that’s why it’s taken so long to process: it feels like lots of mini-years packed into one.

A few themes for the year…

Typescript/Javascript

So I think 2023 might have been the year of Peak React, and of Facebook frontend in general. With Yarn finally quiet-quitting and a confused React roadmap that can’t seem to pose a meaningful answer to its critics, we’re finally getting to a place where we can start to reconsider what frontend development should look like.

The core Node/NPM combination seems to have responded to the challenges better than the alternative runtimes and also seem to be sorting out their community governance at a better clip.

Of course while we might have got to the point that not everyone should be copying Facebook we do seem to have a major problem with getting too excited about tooling provided by companies backed by VC money and with unclear goals and benefits. If developers had genuinely learned anything then they might be more critical of Vercel and Bun.

I tried Deno and I quite liked it; I’d be happy to use it. But if you’re deploying Javascript to NodeJS servers then Typescript is a complex type hinter, transpiling to a convention that is increasingly out of step with vanilla Javascript. The trick of using JSDoc’s ts-check seems like it could provide the checking benefits of Typescript, along with the Intellisense experience in VSCode that developers love, without the need to actually transpile between languages and all the pain that brings.
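As a sketch of what that looks like (the function and its names are invented for illustration), the annotations live in comments, so the file stays plain Javascript that Node runs directly while the editor, or `tsc --noEmit --checkJs`, still flags type errors:

```javascript
// @ts-check
// Plain .js file: the annotations below are comments, so there is no
// transpile step, but a checker will type-check calls against them.

/**
 * @param {number} px - a pixel value
 * @param {number} [base] - root font size, defaults to 16
 * @returns {string} the equivalent rem value, e.g. "1.5rem"
 */
function pxToRem(px, base = 16) {
  return `${px / base}rem`;
}

// pxToRem("24") would be flagged by the checker: string is not a number.
console.log(pxToRem(24)); // prints "1.5rem"
```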

It’s also good news that Javascript is evolving and moving forwards. Things seem to have significantly improved in terms of practical development for server-side Javascript this year, and the competition in the ecosystem is actually driving improvement in the core, which is very healthy for a language community.

Ever improving web standards

I attended State of the Browser again this year and was struck by how much the adoption of new standards like Web Components has improved, how incremental improvements in CSS mean that more and more functionality is now better achieved with standards-based approaches, and how many historic hacks are now counter-productive.

It is easy to get used to the ubiquity of things like Grid or the enhanced Flexbox model but these are huge achievements and the work going on to allow slot use in both your own templates and the default HTML elements is really impressive and thoughtful.

Maybe the darker side of this was the steady erosion of browser choice but even here the Open Web Advocacy group has been doing excellent, often thankless work to keep Google and Apple accountable and pushing to provide greater choice to consumers in both the UK and EU.

Overall I feel very optimistic that people understand the value of the open web and that the work going on in the foundations of it is better than ever.

Go

The aphorism about chess that says the game is easy to learn but hard to master applies equally well to Go in my view. It is easy to start writing code and the breadth of the language is comparatively small. However the lack of batteries included means that you are often left having to implement relatively straightforward things like sets yourself, or having to navigate what the approved third parties are for the codebase you’re working on.

The fact that everyone builds their web services from very low-level primitives and then each shop has their own conventions about middleware and cross-cutting concerns is really wearisome if you are used to language communities with more mature conventions.

The type system is also really anaemic; it feels barely there. A million types of int and float, string and “thing”. Some of the actual type signatures in the codebases have felt like “takes a thing and a thing and returns a thing”. Structs are basically the same as their C counterparts, except there’s a more explicit syntax for pointers and references.

I have concerns that the language doesn’t have good community leadership and guidance, it still looks to Google and Google do not feel like good stewards of the project. The fact that Google is funding Rust for its critical work (such as Android’s operating layer) and hasn’t managed to retire C++ from its blessed languages is not a good look.

That said, most projects that might have been done in Java are probably going to be easier and quicker in Go, and most of the teams I know that have made the transition seem to have been pretty effective compared to the classic Spring web app.

It is also an easier language to work with than C, so it’s not all bad.

The economy

I’m not sure the economy is necessarily in that bad a shape, particularly compared to 2008 or 2001 but what is definitely true is that we had gotten very used to near-zero interest rates and we did not adapt to 5% interest rates very well at all.

It feels like a whole bunch of common-place practices are in the process of being re-evaluated. Can’t get by without your Borg clone? Maybe you can get by with FTP-ing the PHP files to the server.

Salaries were under pressure due to the layoffs, but inflation was in double digits so people’s ability to take a pay cut wasn’t huge. I think the net result is that fewer people are now responsible for a lot more than they were, and organisations with limited capacity tend to be more fragile when situations change. There’s the old saw about being just one sick day from disaster, and it will be interesting to see whether outages become more frequent and more acceptable given the associated cost savings.

Smaller teams and smaller budgets are the things that feel like they are most profoundly going to reshape the development world in the next five years. Historically there’s been a bit of an attitude of “more with less” but I feel that this time it is about setting realistic goals for the capacity you have but trying to have more certainty about achieving them.

Month notes

I started experimenting with month notes in 2023. I first saw week notes be really effective when I was working in government, but it was really hard to write them when working at a small company where lots of things were commercially sensitive. It is still a bit of a balance to try and focus on things that you’re personally learning rather than work, when often the two can easily be conflated, but I think it’s been worth the effort.

If nothing else then the act of either noting things down as they seem relevant and then the separate act of distillation helps reflect on the month and what you’ve been doing and why.


February 2024 month notes

Postgres

Cool thing of the month is pg-mem, a NodeJS in-memory database with a Postgres-compatible API. It makes it easy to create very complete integration or unit tests covering both statement testing and object definitions. So far everything that has worked with pg-mem has also worked flawlessly against both Docker-ised Postgres instances and CloudSQL Postgres.

The library readme says that containers for testing are overkill and it has delivered on that claim for me. Highly recommended.

Less good has been my adventures in CloudSQL’s IAM world. A set of overlapping work requirements means that the conventional practices of using roles and superuser permissions are effectively impossible, so I’ve been diving deeper than I ever expected to go into the world of Postgres’s permission model.

My least favourite discovery this month has been that it is possible to successfully grant a set of permissions to a set of users that generates no errors (admittedly via a Terraform module; I need to check whether Postgres complains directly about this) but also gets denied by the permission system.

The heart of the problem seems to be that the owner of the database objects defines the superset of permissions that can be accessed by other users but that you can happily grant other users permissions outside of that superset without error except when you try to use that permission.

The error was thrown on a table providing a foreign key constraint, so there were more than a few hours spent wondering why the user could read that table but still get permission denied on it. The answer seems to be that the insert into the child table is what triggers the violation: validating the constraint against the constraining table exercises the permission system.

I’m not sure any of this knowledge will ever be useful again because this setup is so atypical. I might try and write a DevTo article to provide something for a future me to Google but I’m not quite sure how to phrase it to match the query.

Eager initialisation

I learnt something very strange about FakerJS, the Javascript test-data generation library, this month, but it is just a specific example of libraries that don’t make an effort to lazy load their functionality. I’ve come across this issue in Python, where it affected start times in on-demand code; in Java, where the assumption that initialisation is a one-time cost meant that with multiple deployments a day the price was never amortised; and now I’ve encountered it in Javascript.
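The antidote on the library side is the standard lazy-initialisation pattern: defer the expensive setup until first use and cache the result. A generic sketch (the names are mine, not FakerJS’s):

```javascript
// Wrap an expensive init so it runs on first call, not at module load time.
function lazy(init) {
  let value;
  let initialised = false;
  return () => {
    if (!initialised) {
      value = init(); // cost is paid here, once
      initialised = true;
    }
    return value;
  };
}

// Imagine this stands in for building a large locale table at import time.
const getLocaleData = lazy(() => Array.from({ length: 1000 }, (_, i) => i * i));

// Module load is now cheap; the first caller pays for initialisation.
console.log(getLocaleData()[3]); // prints 9
```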

My takeaway is that it is important to [set aggressive timeouts](https://nodejs.org/api/cli.html#--test-timeout) on your testing suite rather than take the default of no timeouts. This only surfaced because some fairly trivial tests using the Faker data couldn’t run in under a second, which seemed very odd behaviour.

Setting timeouts also helps surface broken asynchronous testing and makes it less tedious to wait for the test suite to fail or hang.
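Conceptually a per-test timeout is just a race against a deadline; a minimal sketch of the idea (not the test runner’s actual implementation):

```javascript
// Race a promise against a deadline so a hung await fails fast
// instead of stalling the whole suite indefinitely.
function withTimeout(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// A promise that never settles — like a test with a missing await or an
// unresolved callback — now surfaces as a timeout error instead of hanging.
withTimeout(new Promise(() => {}), 50).catch((err) => console.error(err.message));
```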


January 2024 month notes

Water CSS

I started giving this minimal element template a go after years of using various versions of Bootstrap. It is substantially lighter in terms of the components it offers with probably the navigation bar being the one component that I definitely miss. The basic forms and typography are proving fine for prototyping basic applications though.

Node test runner

Node now has a default test runner and testing framework. I’ve been eager to give it a go as I’ve heard that it is both fast and lightweight, avoiding the need to select and include libraries for testing, mocking and assertions. I got the chance to introduce it in a project that didn’t have any tests and I thought it was pretty good, although its default text output felt a little unusual and the alternative dot notation might be a bit more familiar.

It’s interesting to see that the basic unit of testing is the assertion, something it shares with Go. It also doesn’t support parameterised tests, which again is like Go, which has a pattern of table-driven tests implemented with for loops, except that Go allows more control over dynamic test case naming.

I’d previously moved to the Ava library and I’m not sure there is a good reason not to use the built-in alternative.

Flask blueprints

In my personal projects I’ve tended to use quite a few cut-and-paste modules, and over the years they tend to drift and get out of sync, so I’ve been making a conscious effort to learn about and start adopting Flask Blueprints. Ultimately I want to try and turn these into personal module dependencies that I can update once and use in all the projects. For the moment though, it is interesting how the blueprint format is pushing me to do some things better, like logging (to understand what is happening in the blueprint), and to structure the different areas of the application so that they are quite close to Django apps: various pieces of functionality are now associated with a URL prefix, which makes it a bit easier to create middleware that is registered as part of the Blueprint rather than relying on imports and decorators.

Web components

I’ve been making a bit of progress with learning about web components. I realised that I was trying to do too much initially which is why they were proving complicated. Breaking things down a bit has helped with an initial focus on event listeners within the component. I’m also not bringing in external libraries at the moment but have got as far as breaking things up into [ESM modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) which has mostly worked out so far.
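The “start small” version looks something like this (the element name and behaviour are invented for illustration); the whole job of the component is to wire up one listener in `connectedCallback`:

```javascript
// Guard so this sketch also loads under Node; in a browser, HTMLElement
// and customElements come from the DOM.
const Base = globalThis.HTMLElement ?? class {};

class ClickCounter extends Base {
  count = 0;

  // Called when the element is attached to the document — the right place
  // to add listeners; disconnectedCallback is the place to remove them.
  connectedCallback() {
    this.addEventListener?.('click', this.increment);
    this.textContent = `clicked ${this.count} times`;
  }

  disconnectedCallback() {
    this.removeEventListener?.('click', this.increment);
  }

  // Arrow-function property keeps `this` bound without an explicit bind().
  increment = () => {
    this.count += 1;
    this.textContent = `clicked ${this.count} times`;
  };
}

// Register under a hyphenated tag name, then use <click-counter></click-counter>.
globalThis.customElements?.define('click-counter', ClickCounter);
```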
