
September 2025 month notes

Loop

I’ve been doing more technical leadership work recently and apart from spreadsheets that means more documents. Loop is Microsoft’s version of Notion, focusing on wiki-like pages with block content and the ever-present slash command to insert specialised blocks.

As a note-taking experience it is much more pleasant than OneNote, which seems to have gone to a very weird place since I last used it, with an odd block UI rather than the Evernote style of note-taking. Loop is pretty much type inline and use slash to embed or nest another page.

Sharing is a bit more complex than in Notion-style products as you seem to only be able to share the entire Loop Workspace or nothing. It is hard to understand the organisational visibility of content.

Mermaid diagrams and to-do lists all embed as you’d expect and action lists integrate with the Microsoft 365 to-do and notification systems. You can also embed Loop components into Teams and other applications and it mostly seems not just to work but to be dynamically bi-directional, so you can edit a component in the embed present in a chat rather than having to move to an edit mode.

Compared to most of the Office 365 suite it feels bracingly online and dynamic.

I’ve never been much of a macro or scripting person in these products so I don’t know if you can do some of the page and list magic that you can with Notion but all the core content features seem present and correct.

This seems like a great addition to the O365 suite and replaces the need for a bunch of ad hoc hackery like the endless Word doc.

Ruby drama

Ruby (and Rails) has a problem in that it has never developed proper community governance. I realise now what a major step it was for Guido van Rossum to step away from the Benevolent Dictator For Life (BDFL) role and force the community to step up.

This month Ruby Central took over the responsibility for managing key elements of the Ruby ecosystem (RubyGems, Bundler) and alienated most of the external open source community contributors. This just doesn’t happen in well-run communities.

At the heart of the problem is really the Rails BDFL, DHH. Now the core selling point of Rails as a framework is that it is opinionated and maintained by contributors whose livings depend on it. By contrast, frameworks that have more community governance, like Django, are seen as slow moving and unresponsive. However at moments like these, when a major leader in a project is going off the rails (pun intended), it is often a virtue to move slowly and make considered moves.

Rails is still one of the smartest web frameworks for quick, responsible development and its contrarian technology choices have been an excellent counterbalance to the groupthink that has prevailed in, say, the Node community, which has ultimately burned itself out on the endless upgrade trail of re-architecture after failed re-architecture.

The cost has been an over-dependence on a few key individuals with too much power and a few companies using their money and influence to run things to their benefit.

It will be interesting to see how this develops coming on the heels of the WordPress debacle (and I’m conscious of the irony of me still posting here after all that has happened in that space). I suspect that the answer may be just a lot of quiet quitting to other projects and communities.

The more promising signs are an open letter asking for a Rails fork and better governance and the emergence of the Gem Cooperative. Maybe this will be the trigger to get to something much better than it replaces.

Reading list

Fun tools


June 2025 month notes

Foreign keys and link tables in SQLAlchemy

Foreign keys are surprisingly difficult to define: whereas a basic foreign key is normally unidirectional, with a parent and a child relationship, SQLAlchemy often needs you to define the attribute on both models, creating a circular dependency that it then resolves by using strings to define the object relationships, since the referenced class may not exist yet at the point where the relationship is declared.
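A minimal sketch of what that looks like in SQLAlchemy’s current declarative style (the Parent/Child models are invented for illustration):

```python
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class Parent(Base):
    __tablename__ = "parent"

    id: Mapped[int] = mapped_column(primary_key=True)
    # "Child" is a string because the class is only defined further down the module;
    # SQLAlchemy resolves it when the mappers are configured
    children: Mapped[list["Child"]] = relationship(back_populates="parent")


class Child(Base):
    __tablename__ = "child"

    id: Mapped[int] = mapped_column(primary_key=True)
    parent_id: Mapped[int] = mapped_column(ForeignKey("parent.id"))
    parent: Mapped["Parent"] = relationship(back_populates="children")
```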

Indexed Constraints in Postgres

Primary keys and unique constraints both generate associated indexes automatically, but foreign keys, while they need to reference indexed columns, do not automatically get an associated index and potentially don’t need one until your query planner tells you that lookup is the bottleneck. I found this last idea a bit counter-intuitive but on reflection I think it must make sense given the lookup times of the parent rows. I guess the index may matter more if the relationship is one-to-many with potentially large numbers of children.
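If the planner does flag that lookup, adding the index in SQLAlchemy is a small change on the column definition; a sketch, again with invented model names:

```python
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Child(Base):
    __tablename__ = "child"

    id: Mapped[int] = mapped_column(primary_key=True)
    # Postgres will not create this index for the foreign key; index=True adds it explicitly
    parent_id: Mapped[int] = mapped_column(ForeignKey("parent.id"), index=True)
```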

Thoughts on FastAPI

I finally did a lot of work on a FastAPI codebase recently, my first use of that framework. It is a lot like Flask but its support for Web Sockets and Async routes means that depending on what you’re working with it might be the only practical choice for your application.
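For flavour, a minimal sketch of those two features, an async route and a WebSocket endpoint (the paths and handlers are invented):

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()


@app.get("/items/{item_id}")
async def read_item(item_id: int) -> dict:
    # An async route: the event loop can serve other requests while this one awaits I/O
    return {"item_id": item_id}


@app.websocket("/ws/echo")
async def echo(websocket: WebSocket) -> None:
    # WebSocket support comes from Starlette underneath FastAPI
    await websocket.accept()
    while True:
        message = await websocket.receive_text()
        await websocket.send_text(f"echo: {message}")
```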

FastAPI is actually an aggregate over other libraries, in particular Starlette, and as you dig through the layers you see that the key foundations are maintained by a few individuals with a very opaque governance structure.

I’ve used more obscure frameworks before but I didn’t really think it was a smart idea then and I don’t think it is great now. In theory the FastAPI layer could switch between implementations without breaking consumers but it all seems a bit more fragile than I had realised, especially when you add on the various complications of the Python async ecosystem.

It’s made me wonder whether you’re better off sticking to synchronous until you have a situation where that can’t possibly work.

Coding with LLMs

LLM-generated code means that some relatively uncommon idioms in programming languages come up more and more often when looking at colleagues’ code. The effect is quite incongruous when you have relatively inexperienced programmers using quite sophisticated or nuanced constructions.

This month in Python I noticed use of the else clause on loops, which runs only when the loop completes without hitting a break but isn’t widely used because (I think) the else clause can be hard to relate to the enclosing loop and is easily visually confused with conditional clauses inside the loop.
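A small invented example of the construct:

```python
users = ["alice", "bob", "carol"]
target = "dave"

for user in users:
    if user == target:
        print(f"found {user}")
        break
else:
    # Runs only if the loop finished without hitting break
    print(f"{target} not found")
```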

Use of sum instead of len or count: Python’s sum allows you to pass a generator expression directly to it instead of having to build an intermediate list comprehension or similar. This means you can save a bit of memory; some programmers use it habitually but I’ve only ever really seen it where people are trying to get a bit of extra performance in special cases. Most of the iterables in code contain too few items for it to matter much and, compared to the performance gain of moving to a more recent version of Python, I’m not sure the gain is really noticeable.
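For example (invented data), counting matching items with and without an intermediate list:

```python
orders = [{"status": "paid"}, {"status": "pending"}, {"status": "paid"}]

# Builds a throwaway list just to measure its length
paid = len([o for o in orders if o["status"] == "paid"])

# Passes a generator expression straight to sum, so no intermediate list is built
paid = sum(1 for o in orders if o["status"] == "paid")

print(paid)  # 2
```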

Reading list


May 2025 month notes

Computer vision

A colleague at work has been experimenting with feeding streamed camera and screen sharing video into Gemini 2 Flash and the results are really good. The model can accurately describe things in near real-time and is able to make an overall assessment on what activity was being carried out during the screenshare.

Computer vision is one of the more mature areas of machine learning anyway but the speed and ability to describe a higher-level set of actions rather than just matching individual items or pictures was brand-new to me. The functionality can even work in near realtime using the standard browser camera API.

Lessons learned

Although I kind of knew this it managed to catch me out again: Alembic, the Python migration manager for the SQLAlchemy ORM, doesn’t really have a dry-run mode, and the history command doesn’t use the env.py file, instead working (I think) by simply reading through the migration files.

People have said that switching to the SQL generation mode is an alternative but I was working in the CI pipeline and probably would also have needed to come up with a minor migration file to get any useful feedback.
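For reference, a sketch of driving that SQL generation (offline) mode from Python rather than the CLI, assuming a standard alembic.ini is available to the pipeline:

```python
from alembic import command
from alembic.config import Config

# Path to the project's Alembic configuration; adjust for your layout
config = Config("alembic.ini")

# Offline mode emits the SQL that would run without touching the database,
# which is the closest thing to a dry run that Alembic offers
command.upgrade(config, "head", sql=True)
```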


March 2025 month notes

Community

I came across this really interesting project that aims to ensure that JavaScript libraries keep their dependencies up to date and that features available in the LTS versions of Node are being used. JavaScript is infamous for fragmented and poorly maintained libraries so this is a great initiative.

But even better I think is Jazzband, a Python initiative that allows for collaborative maintenance but also can provide a longer term home for participating projects.

Digital sovereignty

With the US deciding to switch off defence systems there has been an upswing in interest in how and where digital services are provided. One key problem is that all the major cloud services are American: every other Western country has let US companies dominate the space and many of the services they offer are now unique to them.

This is probably impossible to solve now but the article linked to above is a useful starting point on how more commodity services could be provided by local providers.

There was also an announcement of an open source alternative to Notion funded by the French (and other) governments. The strategic investment in open source that can be run in the service capacity that European states do have seems a key part of a potential solution and helps share costs and benefits across countries.

Editors

I have been trying out the Fleet editor again; this is JetBrains’ take on a VSCode-style editor that has the power of their IntelliJ IDE but without a lot of the complexity. It’s obviously not as powerful or fully featured as VSCode with all its plugins.

I liked the live Markdown preview but couldn’t get the soft-wrapping that was meant to be enabled by default to work. It was also frustrating that some actions do not seem to be available via the Command Palette and that the Actions aren’t the default tab when hitting Ctrl-Shift-P.

LLMs

I experimented with AWS’s Bedrock this month (via Python scripts); the service has a decent selection of models available with a clear statement of how interactions with them are used. If you’re already an AWS user then casual access to them is effectively free (although how viable that is in the long run is an interesting question), making it a great way to experiment.

I thought the AWS Nova models might help me write code to interact with Bedrock but they turned out to not really be able to add more than the documentation and a crafty print statement told me.

The Mistral model seemed quite capable though and I’ve used it a couple of times since then for creating Python and JavaScript code and haven’t had any real problems with it, although it predictably does much better on problems that have been solved a lot.
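As a rough illustration of the shape of those scripts, this is a Bedrock call via boto3’s model-agnostic Converse API; the region and model ID are assumptions and need to match what is enabled in your account:

```python
import boto3

# Assumes AWS credentials are already configured and Bedrock is available in the region
client = boto3.client("bedrock-runtime", region_name="eu-west-2")

response = client.converse(
    # Hypothetical model choice; use any model ID you have access to
    modelId="mistral.mistral-large-2402-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Write a Python function that reverses a string."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```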

An alternative to Step Functions

Yan Cui wrote a really interesting post about Restate and writing your own checkpointing system for repeatable AWS Lambda invocations. Step Functions are very cool but the lack of portability and platform-specific elements have been off-putting in the past. These approaches seem very interesting.

When it comes to Lambdas Yan’s newsletter is worth subscribing to.

Reading links

  • Some helpful pieces of good practice in Postgres although some things (soft deleting in particular, view use) will be context specific
  • Defra’s Playbook on AI coding offers a pragmatic view on how to get the best out of AI-led coding
  • Revenge of the Junior Developer, a man with a vested interest in AI software development is still angry at people struggling with it or not wanting to use it
  • AI Ambivalence, a more sober assessment of the impact and utility of the current state of AI-assisted coding and a piece that resonated with me for asking the question of whether this iteration of coding retains the same appeal

There was an interesting exchange of views on LLM coding this month; Simon Willison wrote about his frustration with developers giving up on LLM-based coding when he has been having a lot of success with it. Chris Krycho wrote a response pointing out that developers’ responses weren’t unreasonable and that Simon was treating all his defensive techniques for getting the best out of the approach as implicit knowledge that he assumed everyone possessed. It is actually a problem that LLMs suggest invalid code and libraries that don’t exist.

Soon after that post Simon wrote another post summarising all his development practices and techniques related to using LLMs for coding that makes useful reading for anyone who is curious but struggling to get the same results that people like Simon have had. This post is absolutely packed with interesting observations and it is very much worth a read as a counterbalance to the drive-by opinions of people using Cursor and not being able to tell whether it is doing a good job or not.


February 2025 month notes

Winter viruses knocked me about a bit this month so I wasn’t able to get out to all the tech events I had hoped to and there were a few bed-bound days which were pretty disappointing.

I also have a bit of a backlog on writing up the things that I did attend this month.

Synchronising

While I pay for a few synchronisation services (SpiderOak and pCloud) their Linux integration is a bit cumbersome so I’ve been looking for simpler ways to share files between my local machines. I read a great tutorial about SyncThing. The project’s documentation on getting started with SyncThing was also pretty great.

It took less than an hour to get everything going and now I have two of my laptops sharing content in a single folder so it’s possible to move things like credential files around between them simply and hopefully securely. It doesn’t seem to be taking up any meaningful system resources so far.

I also want to spend more time with LocalSend which looks like an app version of the PWA PairDrop (cute name, based on Snapdrop). All the functionality looks good and it seems to be lightweight. I’m not quite sure why the app makes a difference over the PWA version.

Zettelkasten

This month I had a bunch of headaches with Roam Research failing logins on Linux and an AppImage bug which meant that Obsidian and Logseq had to be run outside the sandbox. Getting things working properly was frustrating and while Roam is web-based it has no real mobile web version.

So instead I’d like to stop subscribing to Roam and figure out what I’m using it for. The knowledge connecting is still the most valuable thing compared to pure outliners or personal wikis. Both Logseq and Obsidian are good for this and currently my preference is for Logseq but I think Obsidian is better maintained and has a bigger community.

The other thing I was doing was dropping links into the daily journal for later sorting and processing. I’ve created a little web app to make this easier, currently I’m just building a backlog but it will be interesting to see what I find useful when I do want to dig up a link.

I also started using Flatnotes deployed via PikaPods to have an indie web way of taking a note on the phone but editing and refining it on laptops.

It’s interesting that it has taken so many different services to replace Roam, maybe that’s a sign of value but I think that I was overloading it with different functionality and I’m refining things into different workloads now.

Eleventy

Eleventy is a very cool static website builder that I would like to use as my main website generator in the long run. For now though I am still trying to learn the 11ty way (I currently use Jekyll); this month I was trying to figure out how to use data files and tags, things that power a lot of tricks in my current site.

Eleventy is ridiculously powerful because data files can be executable JavaScript files that read URLs or the filesystem and generate data that is then passed into the page generation context. As an example you can read the directory where the data file is located, read the contents, filter out the directories and then generate a derived value from the directory name and use that as a data value in the rendered page.

In the past I’ve tended to use templates and front matter in Markdown posts but with Eleventy you can use a mix of shared templates, including inheritance, and Nunjucks pages using these powerful data files and not really need to use Markdown or front matter so much. You can also share snippets between the Nunjucks pages to get consistency where you need it but have a lot more flexibility about the way a page renders.

It is amazing how flexible the system is but it also means that as there are multiple ways to do things there can be a lot of reading to do to figure out what the best way to do something is for your context. Documentation of the basics is good but examples of approaches are spread out across example repos and people’s blogs.

Power is great but so is one obvious way of doing things.

Interesting links

It’s not a fun subject but my former colleague Matt Andrew’s post about coping with redundancy was a good read with good advice for any kind of job seeking regardless of the cause.

Ofcom is making a dog’s dinner of applying the Online Safety Act (OSA) to small communities and it seems to be down to community members to try and engage them in the problems; this writeup gives examples of the problems and pointers on how the regulator can improve.


Will humans still create software?

Recently I attended the London CTOs Unconference: an event where senior technical leaders discuss various topics of the day.

There were several proposed sessions on AI and its impact on software delivery and I was part of a group discussing the evolution of AI-assisted development, looking at how this would change and ultimately what role people thought there would be for humans in software delivery (we put the impacts of artificial general intelligence to one side to focus on what happens with the technologies we currently have).

The session was conducted under Chatham House-equivalent rules so I’m just going to record some of the discussion and key points in this post.

Session notes

Currently we are seeing the automation of existing processes within the delivery lifecycle, but there are opportunities to rethink how we deliver software in ways that make better use of the current generative AI tools and perhaps set us up to take advantage of better options in future.

Rejecting a faster horse

Thinking about the delivery of the whole system rather than just modules of code, configuration or infrastructure allows us to think about setting a bigger task than simply augmenting a human. We can start to take business requirements in natural language, generate product requirement documents from these and then use formal methods to specify the behaviour of the system and verify that the resulting software that generative AI creates meets these requirements. Some members of the group had already been generating such systems and felt it was a more promising approach than automating different roles in the current delivery processes.

Although these methods have existed for a while they are not widely used currently and therefore it seems likely that reskilling will be required for senior technical leaders. Defining the technical outcomes they are looking for through more formal structures that work better with machines requires both knowledge and skill. Debugging is likely to move from the operation of code to the process of generation within the model, leading to an iterative cycle of refinement of both prompts and specifications. In doing this, the recent move to expose more chain-of-thought information to users is helpful and allows the user to refine their prompt when they can see flaws in the reasoning process of the model.

The fate of code

We discussed whether code would remain the main artefact of software production and we didn’t come to a definite conclusion. The existing state of the codebase can be given as a context to the generation process, potentially refined as an input in the same way as retrieval augmented generation works.

However if the construction of the codebase is fast and cheap then the value of code retention was not clear, particularly if requirements or specifications are changing and therefore an alternative solution might be better than the existing one.

People experimenting with whole solution generation do see major changes between iterations; for example where the model selects different dependencies. For things like UIs this matters in terms of UX but maybe it doesn’t matter so much for non-user facing things. If there is a choice of database mappers for example perhaps we only care that the performance is good and that SQL injection is not possible.

Specifications as artefacts

Specifications and requirements need to be versioned and change controlled exactly as source code is today. We need to ensure that requirements are consistent and coherent, which formal methods should provide, but analysis and resolution of differing viewpoints as to the way the system works will remain an important technical skill.

Some participants felt that conflicting requirements would be inevitable and that it would be unclear how the code generation would respond to this. It is certainly clear that currently models do not seem to be able to identify the conflict and will most probably favour one of the requirements over the others. If the testing suite is independent then behavioural tests may reveal the resulting inconsistencies.

It was seen as important to control your own foundation model rather than using external services. Being able to keep the model consistent across builds and retain a working version of it decouples you from vendor dependencies and should be considered as part of the build and deployment infrastructure. Different models have different strengths (although some research contradicts this anecdotal observation). We didn’t discuss supplementation techniques but we did talk about priming the code generation process with coding standards or guidelines but this did not seem to be a technique currently in use.

For some participants using generative AI was synonymous with choosing a vendor but this is risky as one doesn’t control the lifespan of such API-based interactions or how a given model might be presented by the vendor. Having the skills to manage your own model is important.

In public repositories it has been noted that the volume of code produced has risen but quality has fallen and that there is definitely a trade-off being made between productivity and quality.

This might be different in private codebases where different techniques are used to ensure the quality of the output. People in the session trying these techniques say they are getting better results than what is observed in public reporting. Without any way of verifying this though people will just have to experiment for themselves and see if they can improve on the issues seen in the publicly observed code.

When will this happen in banks?

We talked a little bit about the rate of adoption of these ideas. Conservative organisations are unlikely to move on this until there is plenty of information available publicly. However if an automated whole system creation process works there are significant cost savings associated with it and projects that would previously have been ruled out as too costly become more viable.

What do cheap, quick codebases imply?

We may be able to retire older systems with a lot of embedded knowledge in them far more quickly than if humans had to analyse, extract and re-implement that embedded knowledge. It may even be possible to recreate a mainframe’s functionality on modern software and hardware and make it cheaper than training people in COBOL.

If system generation is cheap enough then we could also ask the process to create implementations with different constraints and compare different approaches to the same problem optimising for cost, efficiency or maintenance. We can write and throw away many times.

What about the humans?

The question of how humans are involved in the software delivery lifecycle, what they are doing and therefore what skills they need was unclear to us. However no one felt that humans would have no role in software development, just that it was likely to be different to the skill set that made people successful today.

It also seemed unlikely that a human would be “managing” a team of agents if the system of specification and constraints was adopted. Instead humans would be working at a higher level of abstraction with a suite of tools to deliver the implementation. Virtual junior developers seemed to belong to the faster horse school of thinking.

Wrapping up the session

The session lasted for pretty much the whole of the unconference and the topic often went broad which meant there were many threads and ideas that were not fully resolved. It was clear that there are currently at least two streams of experimentation: supplementing existing human roles in the software delivery cycle with AI assistance and reinventing the delivery process based on the possibilities offered by cheap large language models.

As we were looking to the future we mostly discussed the second option and this seems to be what people have in mind when they talk about not needing experienced software developers in future.

In some ways this is the technical architect’s dream in that you can start to work with pure expressions of solutions to problems that are faithfully adhered to by the implementer. However the solution designer now needs to understand how and why the solution generation process can go wrong and needs to verify the correctness and adherence of the final system. The non-deterministic nature of large language models is not going away and therefore solution designers need to think carefully about their invariants to ensure consistency and correctness.

There was a bit of an undertow in our discussions about whether it is a positive that a good specification almost leads to a single possible solution or whether we need to allow the AI to confound our expectations of the solution and create unexpected things that meet our specifications.

The future could be a perfect worker realising the architect’s dream or it could be more like a partnership where the human is providing feedback on a range of potential solutions provided by a generation process, perhaps with automated benchmarking for both performance and cost.

It was a really interesting discussion and an interesting snapshot of what is happening in what feels like an area of frenzied activity from early adopters; later adopters can probably afford to give this area more time to mature as long as they keep an eye on whether the cost of delivery is genuinely dropping in the early production projects.


January 2025 month notes

Another relatively quiet month. I’ve mostly been consolidating my tooling around just, uv and Node 23 (see my post on type stripping).

I did try to start switching from Prettier to Biome but it hasn’t all been smooth. I think I need to remove the Prettier plugins from my editors and I had to do some manual setting tweaking to get auto-formatting on save. I’ve increasingly been preferring Ruff to Black and uv to pipenv, and I suspect it won’t be much different once I get Biome working.

I also tried using Claude for creating specific bits of code and it was a lot better than the GitHub Copilot experience. I think it was the first time that I was getting better code than I could have written, as interestingly some of the problems I was asking for solutions to had a wider range of inputs than I was considering. In my particular use case those categories of input wouldn’t have occurred but I didn’t have any good reason as to why they would be excluded, so my own code would probably have failed in some reasonably common scenarios.

Still I feel that so far I’ve been asking for solutions to problems that I know have been solved by someone, I just don’t know where to find the code. I haven’t tried to do anything properly hard or novel yet.


December 2024 month notes

Not a whole lot to report on due to this being holiday season.

Colab

I started using Google’s Colab for quick Python notebooks. It’s pretty good and the notebook files integrate into regular Drive. Of course there is always the fear that Google will cancel it at a moment’s notice so I might look at the independent alternatives as well.

I’ve been looking at simulation code recently and it has been handy to run things outside a local setup and across laptops.

tRPC

You can’t spend much time in Typescript world without using Zod somewhere in your codebase. Zod was created by Colin McDonnell and this month I read an old blog post of his introducing the ideas behind tRPC. The post or essay is really interesting as it identifies a lot of problems that I’ve seen with GraphQL usage in projects (and to be fair some OpenAPI generated code as well).

It is quite rare to see a genuine REST API in the commercial world; it is more typical to see a REST- and HTTP-influenced one. GraphQL insists on a separation of the concepts of read (query) and write (mutation), which makes it more consistent than most REST-like interfaces, but it completely fails to make use of HTTP’s rich semantics, which leaves things like error handling as a bit of a joke.

Remote Procedure Calls (RPC) preceded both REST and GraphQL and while the custom protocols and stub generators were dreadful the mental model associated with RPC is actually pretty close to what most developers actually do with both REST and GraphQL. They execute a procedure and get a return result.

Most commercial-world REST APIs are actually a kind of RPC over HTTP using JSON. See the aside in the post about GraphQL being RPC with a schema.

Therefore the fundamental proposition of the post seems pretty sound.

The second strong insight is that sharing type definitions is far preferable and less painful than sharing generated code or creating interface code from external API definitions (I shudder when I see a comment in a codebase that says something like “the API must be running before you build this code”). This is a powerful insight but one that doesn’t have a totally clean answer in the framework.

Instead the client imports the type definition from the server by having the server codebase available locally in some agreed location. I do think this is better than scraping a live API and type-sharing code is clearly less coupled than sharing data structures but I’m not sure it is quite the panacea being claimed.

What it undoubtedly does improve on is generated code: generated code is notoriously hard to read, leads to arguments about whether it should be version controlled or not, and when it goes wrong there is almost inevitably the comparison dance between developers who have working generated code and those who don’t. Having a type definition that is version controlled and located in one place is clearly a big improvement.

I’ve only seen a few mentions of commercial use of tRPC and I haven’t used it myself. It is a relatively small, obscure project but I’d be interested in reading production experience reports because on the face of it it does seem to be a considered improvement over pseudo-REST and GraphQL interfaces.

God-interfaces

The article also reminded me of a practice that I feel might be an anti-pattern but which I haven’t had enough experience with so far to say for sure: taking a generated type of an API output and using it as the data type throughout the client app. This is superficially appealing: it is one consistent definition shared across all the code!

There are generally two problems I see with this approach. The first is protocol cruft (which seems to be more of a problem with GraphQL and automagic serialisation tools), which is really just a form of leaky abstraction; the second is that if a data type is the response structure of a query-type endpoint then the response often has a mass of optional fields that continuously accrue as new requirements arrive.

You might be working on a simple component to do a nicely formatted presentation of a numeric value but what you’re being passed are twenty plus fields, none of which might exist or have complex dependencies between one another.

What I’ve started doing, and obviously prefer, is to try and isolate the “full fat” API response at the root component or a companion service object. Every other component in the client should use a domain typed definition of its interface.

Ideally the naming of the structures in the API response and the client components would allow each domain interface to be a subset of the full response (or responses) if the component is used across different endpoints.

In Typescript terms this means components effectively define interfaces for their parameters and passing the full response object to the component works but the code only needs to describe the data actually being used.

My experience is that this has led to code that is easier to understand, is easier to modify and is less prone to breaking if the definition of the API response changes.

The death of the developer

I’ve been reading this Steve Yegge post a lot as well: The Death of the Stubborn Developer. Steve’s historical analysis has generally been right, which gives me a lot of pause for thought with this post. He’s obviously quite invested in the technology that underpins this style of development though, and I worry that it is the same kind of sales hustle that was involved in crypto: if people don’t adopt this then how is the investment in this kind of assisted coding going to be recouped?

Part of what I enjoy about coding is the element of craft involved in putting together a program and I’m not sure that the kind of programming described in the post is the kind of thing I would enjoy doing and that’s quite a big thing given that it has been how I’ve made a living up until now.


November 2024 month notes

Rust tools

Rust seems to be becoming the de facto standard for tooling, regardless of the language being used at the domain level. This month I’ve talked to people from Deno who build their CLI with it, and switched to the just command runner and the ruff code formatter.

It’s an interesting trend, both in terms of language communities becoming more comfortable with their tooling being written in a different language and in terms of why Rust has such a strong showing in this area.

Gitlab pipelines

I have been working a lot with Gitlab CI/CD this month, my first real exposure to it. Some aspects are similar to Github Actions, you’re writing shell script in YAML and debugging is hard.

Some of the choices in the Gitlab job environments seem to make things harder than they need to be. By default the job checks out the commit hash of the push that triggered the build in a detached (fetch) mode. Depending on the nature of the commit (in a merge request, to a branch, to the default (main) branch) you seem to get different sets of environment variables populated. Choose the wrong type and things just don’t work, hurrah!

I’ve started using yq as a tool for helping validate YAML files but I’m not sure if there is a better structural tool or linter for the specific Gitlab syntax.

Poetry

I’ve also been doing some work with Poetry. As everyone has said, the resolution and download process is quite slow and there doesn’t seem to be a huge community around it as a tool. Its partial integration with pyproject.toml makes it feel more standard than it actually is, with things under the Poetry key requiring a bit of fiddling to be accessible to other tools. Full integration with the standard is expected in v2.

Nothing I’ve seen so far is convincing me that it can really make it in its current form. The fragmentation between the pure Python tools seems to have taken its toll and each one (I’ve typically used pipenv) has problems that it struggles to solve.

RSS Feeds

One of the best pieces of advice I was given about the Fediverse was that you need to keep following people until your timeline fills up with interesting things. I’ve been trying to apply that advice to programmers. Every time I read an interesting post I’m now trying to subscribe. Despite probably tripling the number of feeds I have subscribed to my unread view is improved but still dominated by “tech journalism”. I guess real developers probably don’t post that frequently.

Lobsters has been really useful for highlighting some really good writers.

CSS

Things continue to be exciting in the CSS world with more and more new modules entering into mainstream distribution (although only having three browsers in the world is probably helping). I had a little play around with Nested Selectors and while I don’t do lots of pseudo-selectors it is 100% a nice syntax for them. In terms of scoping rules, these actually seem a bit complex but at least they are providing some modularity. I think I’m going to need to play more to get an opinion.

The Chrome developer relations team have posted their review of 2024.

Not only is CSS improving but Tailwind v4 is actually going to support (or improve support for) some of these new features such as containers. And of course its underlying CSS tool is going to be Rust-powered, natch.


October 2024 month notes

For small notes, links and thoughts see my Prose blog.

Web Components versus frameworks

Internet drama erupted over Web Components in what felt like a needless way. Out of what often felt like wasted effort there were some good insights: Lea Verou had a good overview of the situation, along with an excellent line about standards work being “product work on hard mode”.

Chris Ferdinandi had a good response talking about how web components and reactive frameworks can be used together in a way that emphasises their strengths.

One of my favourite takes on the situation was by Cory LaViska who pointed out that framework designers are perhaps not the best people to declare the future of the platform.

Web Components are a threat to the peaceful, proprietary way of life for frameworks that have amassed millions of users — the majority of web developers.

His call to iterate on the standard and try to have common parts to today’s competing implementations was echoed in Lea’s post.

The huge benefit of Web Components is interoperability: you write it once, it works forever, and you can use it with any framework (or none at all). It makes no sense to fragment efforts to reimplement e.g. tabs or a rating widget separately for each framework-specific silo, it is simply duplicated busywork.

The current Balkanisation of component frameworks is really annoying and it is developers’ fear and tribalism that has allowed it to happen and sustained it.

Postgres generated UUIDs

In my work I’ve often seen UUIDs generated in the application layer and pushed into the database. I tried this in a hobby project this month and rapidly came to the conclusion that it is very tedious when you can just have the database handle it. In Postgres a generated UUID can just be the column default and I don’t think I’m going to do anything else in future if I have a choice about it.
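A sketch of what that looks like with SQLAlchemy against Postgres (gen_random_uuid() is built in from Postgres 13; older versions need the pgcrypto or uuid-ossp extension; the model is invented):

```python
import uuid

from sqlalchemy import text
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Account(Base):
    __tablename__ = "account"

    # The database fills this in; the application never generates the UUID itself
    id: Mapped[uuid.UUID] = mapped_column(
        UUID(as_uuid=True),
        primary_key=True,
        server_default=text("gen_random_uuid()"),
    )
```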

Python 3.13

I’ve started converting my projects to the new Python version and it seems really fast and snappy even on projects that have a lazy container spin-up. I haven’t done any objective benchmarking but things just feel more responsive than 3.11.

I’m going to have a push to set this as the baseline for all my Python projects. For my Fly projects extracting out the Python version number as a Docker variable has meant migrating has been as simple as switching the version number so far.

For the local projects I’ve also been trying to use asdf for tool versioning more consistently and it has made upgrading easier where I’ve adopted it but it seems I have quite a few places where I still need to convert from either language specific tools or nothing.

uvx

uvx is part of the uv project and I started using it this month; it’s rapidly becoming my default way to run Python CLIs. The first thing I used it with was pg-cli but I found myself using it to quickly run pytest over some quick scripting code I’d done, as well as running ad-hoc formatters and tools. It’s quick and really handy.

There’s still the debate about whether the Python community should go all-in on uv; looking at the messy situation in Node, where all manner of build and packaging tools could potentially be used (despite the ubiquity of npm), the argument for having a single way to package and run things is strong.
