Month notes

March 2025 month notes

Community

I came across this really interesting project that aims to ensure that JavaScript libraries keep their dependencies up to date and use the features available in the LTS versions of Node. JavaScript is infamous for fragmented and poorly maintained libraries so this is a great initiative.

But even better I think is Jazzband, a Python initiative that allows for collaborative maintenance but also can provide a longer term home for participating projects.

Digital sovereignty

With the US deciding to switch off defence systems there has been an upswing in interest in how and where digital services are provided. One key problem is that all the major cloud services are American; every other Western country has let US companies dominate the space, and many of the services they offer are now unique to them.

This is probably impossible to solve now but the article linked to above is a useful starting point on how more commodity services could be provided by local providers.

There was also an announcement of an open source alternative to Notion funded by the French (and other) governments. Strategic investment in open source that can be run within the service capacity European states do have seems a key part of a potential solution, and it helps share costs and benefits across countries.

Editors

I have been trying out the Fleet editor again. This is JetBrains’ take on a VSCode-style editor that has the power of their IntelliJ IDE but without a lot of the complexity. It’s obviously not as powerful or fully featured as VSCode with all its plugins.

I liked the live Markdown preview but couldn’t get the soft-wrapping that was meant to be enabled by default to work. It was also frustrating that some actions do not seem to be available via the Command Palette and that the Actions aren’t the default tab when hitting Ctrl-Shift-P.

LLMs

I experimented with AWS’s Bedrock this month (via Python scripts). The service has a decent selection of models available, with a clear statement of how interactions with them are used. If you’re already an AWS user then casual access to them is effectively free (although how viable that is in the long run is an interesting question), making it a great way to experiment.

I thought having the AWS Nova models might help me write code to interact with Bedrock, but they turned out not to be able to add much more than the documentation and a crafty print statement told me.

The Mistral model seemed quite capable though, and I’ve used it a couple of times since then for creating Python and JavaScript code. I haven’t had any real problems with it, although it predictably does much better on problems that have been solved a lot.
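For anyone curious what interacting with Bedrock actually involves, a minimal sketch using the Converse API looks something like the following (my own scripts were Python; this is the JavaScript SDK equivalent, and the model ID and region are just illustrative):

```ts
// Minimal Bedrock call via the Converse API (AWS SDK for JavaScript v3).
// The model ID and region are illustrative; use whichever model you have access to.
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "eu-west-1" });

async function ask(prompt: string): Promise<string> {
  const response = await client.send(
    new ConverseCommand({
      modelId: "mistral.mistral-large-2402-v1:0", // assumed model ID
      messages: [{ role: "user", content: [{ text: prompt }] }],
      inferenceConfig: { maxTokens: 512 },
    })
  );
  // The Converse API returns a message containing a list of content blocks.
  return response.output?.message?.content?.[0]?.text ?? "";
}

ask("Write a Python function that paginates a DynamoDB scan").then(console.log);
```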

An alternative to Step Functions

Yan Cui wrote a really interesting post about Restate and about writing your own checkpointing system for repeatable AWS Lambda invocations. Step Functions are very cool but their lack of portability and platform-specific elements have been off-putting in the past. Both of these approaches seem worth exploring.

When it comes to Lambdas, Yan’s newsletter is worth subscribing to.

Reading links

  • Some helpful pieces of good practice in Postgres, although some things (soft deleting in particular, and view use) will be context-specific
  • Defra’s Playbook on AI coding offers a pragmatic view on how to get the best out of AI-led coding
  • Revenge of the Junior Developer: a man with a vested interest in AI software development is still angry at people struggling with it or not wanting to use it
  • AI Ambivalence: a more sober assessment of the impact and utility of the current state of AI-assisted coding, and a piece that resonated with me for asking whether this iteration of coding retains the same appeal

There was an interesting exchange of views on LLM coding this month; Simon Willison wrote about his frustration with developers giving up on LLM-based coding when he has been having a lot of success with it. Chris Krycho wrote a response pointing out that developers’ responses weren’t unreasonable and that Simon was treating all his defensive techniques for getting the best out of the approach as implicit knowledge he was assuming everyone possessed. It genuinely is a problem that LLMs suggest invalid code and libraries that don’t exist.

Soon after that post Simon wrote another post summarising all his development practices and techniques related to using LLMs for coding that makes useful reading for anyone who is curious but struggling to get the same results that people like Simon have had. This post is absolutely packed with interesting observations and it is very much worth a read as a counterbalance to the drive-by opinions of people using Cursor and not being able to tell whether it is doing a good job or not.

Month notes

December 2024 month notes

Not a whole lot to report on due to this being the holiday season.

Colab

I started using Google’s Colab for quick Python notebooks. It’s pretty good and the notebook files integrate into regular Drive. Of course there is always the fear that Google will cancel it at a moment’s notice so I might look at the independent alternatives as well.

I’ve been looking at simulation code recently and it has been handy to run things outside a local setup and across laptops.

tRPC

You can’t spend much time in TypeScript world without using Zod somewhere in your codebase. Zod was created by Colin McDonnell, and this month I read an old blog post of his introducing the ideas behind tRPC. The post, or essay, is really interesting as it identifies a lot of problems that I’ve seen with GraphQL usage in projects (and, to be fair, with some OpenAPI-generated code as well).
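For anyone who hasn’t come across it, Zod gives you one schema definition that acts as both a runtime validator and the source of the static type. A tiny illustrative example:

```ts
import { z } from "zod";

// One definition gives you both the runtime validator and the static type.
const User = z.object({
  id: z.string(),
  name: z.string(),
  age: z.number().int().nonnegative(),
});

type User = z.infer<typeof User>; // { id: string; name: string; age: number }

// parse() throws on bad input; safeParse() returns a success/error union instead.
const user: User = User.parse(JSON.parse('{"id":"u1","name":"Ada","age":36}'));
```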

It is quite rare to see a genuine REST API in the commercial world; it is more typical to see a REST- and HTTP-influenced one. GraphQL insists on a separation of the concepts of read (query) and write (mutation), which makes it more consistent than most REST-like interfaces, but it completely fails to make use of HTTP’s rich semantics, which leaves things like error handling as a bit of a joke.

Remote Procedure Calls (RPC) preceded both REST and GraphQL and, while the custom protocols and stub generators were dreadful, the mental model associated with RPC is pretty close to what most developers actually do with both REST and GraphQL: they execute a procedure and get a return result.

Most commercial-world REST APIs are actually a kind of RPC over HTTP using JSON. See the aside in the post about GraphQL being RPC with a schema.

Therefore the fundamental proposition of the post seems pretty sound.

The second strong insight is that sharing type definitions is far preferable and less painful than sharing generated code or creating interface code from external API definitions (I shudder when I see a comment in a codebase that says something like “the API must be running before you build this code”). This is a powerful insight but one that doesn’t have a totally clean answer in the framework.

Instead the client code imports the type definition from the server by having the server codebase available locally in some agreed location. I do think this is better than scraping a live API, and sharing types is clearly less coupled than sharing data structures, but I’m not sure it is quite the panacea being claimed.
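A minimal sketch of the pattern as I understand it (tRPC v10-style APIs, names illustrative): the server exports only the type of its router and the client imports that type, giving end-to-end type checking with no generated code in sight.

```ts
// server.ts: the router is plain TypeScript, so its type can be exported directly
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

export const appRouter = t.router({
  userById: t.procedure
    .input(z.object({ id: z.string() }))
    .query(({ input }) => ({ id: input.id, name: "Ada" })),
});

// Only the *type* crosses the boundary, not the implementation.
export type AppRouter = typeof appRouter;

// client.ts: importing the type gives typed calls and typed return values
import { createTRPCProxyClient, httpBatchLink } from "@trpc/client";
import type { AppRouter } from "./server";

const trpc = createTRPCProxyClient<AppRouter>({
  links: [httpBatchLink({ url: "http://localhost:3000/trpc" })],
});

// Inferred as { id: string; name: string }; a breaking change on the server fails at compile time.
const user = await trpc.userById.query({ id: "u1" });
```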

What it undoubtedly improves on is generated code. Generated code is notoriously hard to read, leads to arguments about whether it should be version controlled or not, and when it goes wrong there is almost inevitably the comparison dance between developers who have working generated code and those who don’t. Having a type definition that is version controlled and located in one place is clearly a big improvement.

I’ve only seen a few mentions of commercial use of tRPC and I haven’t used it myself. It is a relatively small, obscure project, but I’d be interested in reading production experience reports because, on the face of it, it does seem a considered improvement over pseudo-REST and GraphQL interfaces.

God-interfaces

The article also reminded me of a practice that I feel might be an anti-pattern, but which I haven’t had enough experience with so far to say for sure: taking a generated type of an API output and using it as the data type throughout the client app. This is superficially appealing: it is one consistent definition shared across all the code!

There are generally two problems I see with this approach. The first is protocol cruft (which seems to be more of a problem with GraphQL and automagic serialisation tools), which is really just a form of leaky abstraction. The second is that if a data type is the response structure from a query-style endpoint then it often has a mass of optional fields that continuously accrue as new requirements arrive.

You might be working on a simple component to do a nicely formatted presentation of a numeric value, but what you’re being passed is twenty-plus fields, none of which might exist and which have complex dependencies between one another.

What I’ve started doing, and obviously prefer, is to try to isolate the “full fat” API response at the root component or in a companion service object. Every other component in the client should use a domain-typed definition of its interface.

Ideally the naming of the structures in the API response and the client components would allow each domain interface to be a subset of the full response (or responses) if the component is used across different endpoints.

In TypeScript terms this means components effectively define interfaces for their parameters; passing the full response object to the component still works, but the code only needs to describe the data actually being used.
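A small made-up example of what I mean: the component’s props are a structural subset of the response type, so a full response value still satisfies them.

```ts
// Wide, query-style API response type (illustrative); in real code this keeps accruing fields.
interface AccountResponse {
  id: string;
  displayName: string;
  balanceMinorUnits: number;
  currencyCode: string;
  lastLoginAt?: string;
  marketingOptIn?: boolean;
}

// The component declares only the subset of data it actually renders.
interface BalanceProps {
  balanceMinorUnits: number;
  currencyCode: string;
}

function formatBalance({ balanceMinorUnits, currencyCode }: BalanceProps): string {
  return new Intl.NumberFormat("en-GB", {
    style: "currency",
    currency: currencyCode,
  }).format(balanceMinorUnits / 100);
}

// Structural typing means a full response satisfies the narrower interface,
// so the root component can pass it straight through...
const response: AccountResponse = {
  id: "u1",
  displayName: "Ada",
  balanceMinorUnits: 12345,
  currencyCode: "GBP",
};
console.log(formatBalance(response)); // £123.45

// ...but formatBalance never depends on the rest of AccountResponse, so changes
// to the response type outside this subset cannot break it.
```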

My experience is that this has led to code that is easier to understand, easier to modify and less prone to breaking if the definition of the API response changes.

The death of the developer

I’ve been reading this Steve Yegge post a lot as well: The Death of the Stubborn Developer. Steve’s historical analysis has generally been right, which gives me a lot of pause for thought with this post. He’s obviously quite invested in the technology that underpins this style of development though, and I worry that it is the same kind of sales hustle that was involved in crypto. If people don’t adopt this, how is the investment in this kind of assisted coding going to be recouped?

Part of what I enjoy about coding is the element of craft involved in putting together a program, and I’m not sure that the kind of programming described in the post is something I would enjoy doing. That’s quite a big thing given that it has been how I’ve made a living up until now.
