Programming

Type stripping in Node

I tried the new type stripping feature, which is enabled by default in Node 23.6.0. Essentially the type information is stripped from the source file, and syntax that requires actual transformation is not supported by default yet. In essence, you’re writing Typescript and running Javascript. This is the essence of the Typescript promise: that there’s nothing in the language that isn’t Javascript (ignoring enums).
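
As a minimal sketch (assuming Node 23.6+; syntax that needs actual transformation, such as enums, still sits behind a separate experimental flag as I understand it), a file like this runs directly with no build step:

```typescript
// greet.ts — run with `node greet.ts` on Node 23.6+.
// The type annotations below are simply erased before execution.
type Greeting = {
  name: string;
};

function greet(greeting: Greeting): string {
  return `Hello, ${greeting.name}`;
}

console.log(greet({ name: "world" }));
```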

On the surface of it this might seem pointless, but the genius of it is that all the source files are Typescript and all the tooling works as normal. There’s just no complicated build or transformation step. I have some personal projects that were written in Javascript but which were using type information via JSDoc. Type stripping is much less complicated, and the type-checking and tooling integration is much richer.

I’m planning to convert my projects and see whether I hit any issues, but I feel this should be the default way to use Typescript now, and it is probably worth trying to upgrade older projects too. The gulf in functionality between Node 18 and 23 is massive now.

Month notes

December 2024 month notes

Not a whole lot to report due to this being the holiday season.

Colab

I started using Google’s Colab for quick Python notebooks. It’s pretty good, and the notebook files integrate into regular Drive. Of course there is always the fear that Google will cancel it at a moment’s notice, so I might look at the independent alternatives as well.

I’ve been looking at simulation code recently and it has been handy to run things outside a local setup and across laptops.

tRPC

You can’t spend much time in the Typescript world without using Zod somewhere in your codebase. Zod was created by Colin McDonnell, and this month I read an old blog post of his introducing the ideas behind tRPC. The post, really more of an essay, is genuinely interesting as it identifies a lot of problems that I’ve seen with GraphQL usage in projects (and, to be fair, with some OpenAPI-generated code as well).

It is quite rare to see a genuine REST API in the commercial world; it is more typical to see a REST- and HTTP-influenced one. GraphQL insists on a separation of the concepts of read (query) and write (mutation), which makes it more consistent than most REST-like interfaces, but it completely fails to make use of HTTP’s rich semantics, which leaves things like error handling as a bit of a joke.

Remote Procedure Calls (RPC) preceded both REST and GraphQL, and while the custom protocols and stub generators were dreadful, the mental model associated with RPC is actually pretty close to what most developers do with both REST and GraphQL: they execute a procedure and get a result back.

Most commercial-world REST APIs are actually a kind of RPC over HTTP using JSON. See the aside in the post about GraphQL being RPC with a schema.

Therefore the fundamental proposition of the post seems pretty sound.

The second strong insight is that sharing type definitions is far preferable to, and less painful than, sharing generated code or creating interface code from external API definitions (I shudder when I see a comment in a codebase that says something like “the API must be running before you build this code”). This is a powerful insight, but one that doesn’t have a totally clean answer in the framework.

Instead the client code reaches out and imports the type definition from the server, relying on the server codebase being available locally at some agreed location. I do think this is better than scraping a live API, and sharing types is clearly less coupled than sharing data structures, but I’m not sure it is quite the panacea being claimed.
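
A hedged sketch of the pattern, based on tRPC’s v10-style API as I understand it (the URL and procedure names are illustrative):

```typescript
// server.ts — the router is ordinary server code; only its *type* is exported.
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

export const appRouter = t.router({
  greet: t.procedure
    .input(z.object({ name: z.string() }))
    .query(({ input }) => `Hello, ${input.name}`),
});

export type AppRouter = typeof appRouter;

// client.ts — a type-only import, so no server code ends up in the client.
import { createTRPCProxyClient, httpBatchLink } from '@trpc/client';
import type { AppRouter } from './server';

const client = createTRPCProxyClient<AppRouter>({
  links: [httpBatchLink({ url: 'http://localhost:3000/trpc' })],
});

// The compiler knows this returns a string, with no generated code involved.
const greeting = await client.greet.query({ name: 'world' });
```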

What it undoubtedly does improve on is generated code. Generated code is notoriously hard to read, leads to arguments about whether it should be version controlled or not, and when it goes wrong there is almost inevitably the comparison dance between developers who have working generated code and those who don’t. Having a type definition that is version controlled and located in one place is clearly a big improvement.

I’ve only seen a few mentions of commercial use of tRPC and I haven’t used it myself. It is a relatively small, obscure project, but I’d be interested in reading production experience reports because, on the face of it, it does seem to be a considered improvement over pseudo-REST and GraphQL interfaces.

God-interfaces

The article also reminded me of a practice that I feel might be an anti-pattern, but which I haven’t had enough experience with so far to say for sure: taking the generated type of an API response and using it as the data type throughout the client app. This is superficially appealing: it is one consistent definition shared across all the code!

There are generally two problems I see with this approach. The first is protocol cruft (which seems to be more of a problem with GraphQL and automagic serialisation tools), which is really just a form of leaky abstraction. The second is that if the data type is the response structure of a query-style endpoint, then the response often has a mass of optional fields that continuously accrue as new requirements arrive.

You might be working on a simple component to do a nicely formatted presentation of a numeric value, but what you’re being passed is twenty-plus fields, any of which might be absent or have complex dependencies on one another.

What I’ve started doing, and now definitely prefer, is to isolate the “full fat” API response in the root component or a companion service object. Every other component in the client should use a domain-typed definition of its interface.

Ideally the naming of the structures in the API response and in the client components would allow each domain interface to be a subset of the full response (or responses, if the component is used across different endpoints).

In Typescript terms this means components effectively define interfaces for their own parameters; passing the full response object to the component still works, thanks to structural typing, but the code only needs to describe the data actually being used.
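
A sketch of what I mean, with a hypothetical generated response type:

```typescript
// A hypothetical "full fat" generated response type, accreting optional fields.
type AccountResponse = {
  id: string;
  balance?: number;
  currency?: string;
  overdraftLimit?: number;
  // ...twenty-odd more optional fields
};

// The component declares only the data it actually uses.
interface BalanceProps {
  balance: number;
  currency: string;
}

function formatBalance({ balance, currency }: BalanceProps): string {
  return new Intl.NumberFormat('en-GB', { style: 'currency', currency }).format(balance);
}

// The root component unwraps the API response once and narrows it.
function renderAccount(response: AccountResponse): string {
  if (response.balance === undefined || response.currency === undefined) {
    return 'Balance unavailable';
  }
  return formatBalance({ balance: response.balance, currency: response.currency });
}
```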

My experience is that this has led to code that is easier to understand, is easier to modify and is less prone to breaking if the definition of the API response changes.

The death of the developer

I’ve also been reading this Steve Yegge post a lot: The Death of the Stubborn Developer. Steve’s historical analysis has generally been right, which gives me a lot of pause for thought here. He’s obviously quite invested in the technology that underpins this style of development, though, and I worry that it is the same kind of sales hustle that was involved in crypto. If people don’t adopt this, then how is the investment in this kind of assisted coding going to be recouped?

Part of what I enjoy about coding is the element of craft involved in putting together a program, and I’m not sure the kind of programming described in the post is something I would enjoy doing. That’s quite a big thing, given that it has been how I’ve made a living up until now.

Work

2023: Year in review

2023 felt like a very chaotic year, with big changes in what investors were looking for, layoffs that often felt one step away from panic, a push from business to return to the office but often without thinking about what that would look like, and a re-evaluation of the technical truisms of the last decade. So much happened that I think that’s why it’s taken so long to process it: it feels like lots of mini-years packed into one.

A few themes for the year…

Typescript/Javascript

So I think 2023 might be the year of Peak React, and of Facebook frontend in general. With Yarn finally quiet-quitting and a confused React roadmap that can’t seem to pose a meaningful answer to its critics, we’re finally getting to a place where we can start to reconsider what frontend development should look like.

The core Node/NPM combination seems to have responded to the challenges better than the alternative runtimes, and also seems to be sorting out its community governance at a better clip.

Of course, while we might have got to the point where not everyone should be copying Facebook, we do seem to have a major problem with getting too excited about tooling provided by companies backed by VC money with unclear goals and benefits. If developers had genuinely learned anything then they might be more critical of Vercel and Bun.

I tried Deno, and I quite liked it. I’d be happy to use it. But if you’re deploying Javascript to NodeJS servers, then Typescript is a complex type hinter transpiling to conventions that are increasingly out of step with vanilla Javascript. The trick of using JSDoc with @ts-check seems like it could provide the checking benefits of Typescript, along with the Intellisense experience in VSCode that developers love, but without the need to actually transpile between languages and all the pain that brings.
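
A minimal sketch of the approach (checkable with tsc using the allowJs/checkJs options, or just by VSCode’s built-in checking):

```javascript
// @ts-check
// prices.js — plain Javascript that runs as-is in Node, but the JSDoc
// annotations give the Typescript checker enough to validate call sites.

/**
 * @param {number} amount
 * @param {string} currency
 * @returns {string}
 */
function formatPrice(amount, currency) {
  return new Intl.NumberFormat('en-GB', { style: 'currency', currency }).format(amount);
}

console.log(formatPrice(9.99, 'GBP'));
// formatPrice('9.99', 'GBP'); // this line would be flagged by the checker
```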

It’s also good news that Javascript is evolving and moving forwards. Things seem to have significantly improved in terms of practical development for server-side Javascript this year, and the competition in the ecosystem is actually driving improvement in the core, which is very healthy for a language community.

Ever improving web standards

I attended State of the Browser again this year and was struck by how much adoption there has been of new standards like Web Components, by the incremental improvements in CSS that mean more and more functionality is now better achieved with standards-based approaches, and by how many historic hacks are now counter-productive.

It is easy to get used to the ubiquity of things like Grid or the enhanced Flexbox model, but these are huge achievements, and the work going on to allow slot use both in your own templates and in the default HTML elements is really impressive and thoughtful.
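
A minimal sketch of slots in a custom element (the element name and markup are illustrative):

```typescript
// Runs in any modern browser: light DOM content is projected into slots.
class FancyCard extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <slot name="title">Untitled</slot>
      <hr>
      <slot></slot>
    `;
  }
}
customElements.define('fancy-card', FancyCard);

// Usage in HTML:
// <fancy-card>
//   <h2 slot="title">Hello</h2>
//   <p>This lands in the default slot.</p>
// </fancy-card>
```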

Maybe the darker side of this was the steady erosion of browser choice, but even here the Open Web Advocacy group has been doing excellent, often thankless work to keep Google and Apple accountable and to push for greater choice for consumers in both the UK and EU.

Overall I feel very optimistic that people understand the value of the open web and that the work going on in its foundations is better than ever.

Go

The aphorism about chess, that the game is easy to learn but hard to master, applies equally well to Go in my view. It is easy to start writing code and the breadth of the language is comparatively small. However, the lack of batteries included means that you are often left implementing relatively straightforward things like sets yourself, or having to navigate what the approved third-party libraries are for the codebase you’re working on.

The fact that everyone builds their web services from very low-level primitives, and that each shop then has its own conventions about middleware and cross-cutting concerns, is really wearisome if you are used to language communities with more mature conventions.

The type system is also really anaemic; it feels barely there. A million types of int and float, string and “thing”. Some of the actual type signatures in the codebases have felt like “takes a thing and a thing and returns a thing”. Structs are basically the same as their C counterparts, except there’s a more explicit syntax around pointers and references.

I have concerns that the language doesn’t have good community leadership and guidance; it still looks to Google, and Google do not feel like good stewards of the project. The fact that Google is funding Rust for its critical work (such as Android’s operating system layer) yet hasn’t managed to retire C++ from its blessed languages is not a good look.

That said, most projects that might have been done in Java are probably going to be easier and quicker in Go, and most of the teams I know that have made the transition seem to have been pretty effective compared to the classic Spring web app.

It is also an easier language to work with than C, so it’s not all bad.

The economy

I’m not sure the economy is necessarily in that bad a shape, particularly compared to 2008 or 2001, but what is definitely true is that we had gotten very used to near-zero interest rates, and we did not adapt to 5% interest rates very well at all.

It feels like a whole bunch of common-place practices are in the process of being re-evaluated. Can’t get by without your Borg clone? Maybe you can get by with FTP-ing the PHP files to the server.

Salaries were under pressure due to the layoffs, but inflation was in double digits, so people’s capacity to take a pay cut wasn’t huge. I think the net result is that fewer people are now responsible for a lot more than they were, and organisations with limited capacity tend to be more fragile when situations change. There’s the old saw about being just one sick day from disaster, and it will be interesting to see whether outages become more frequent and more accepted as a trade-off for the associated cost savings.

Smaller teams and smaller budgets are the things that feel like they will most profoundly reshape the development world in the next five years. Historically there’s been a bit of an attitude of “more with less”, but I feel that this time it is about setting realistic goals for the capacity you have while trying to have more certainty about achieving them.

Month notes

I started experimenting with month notes in 2023. I first saw week notes be really effective when I was working in government, but it was really hard to write them when working at a small company where lots of things were commercially sensitive. It is still a bit of a balance to try to focus on things that you’re personally learning rather than work itself, when often the two can easily be conflated, but I think it’s been worth the effort.

If nothing else, the act of noting things down as they seem relevant, followed by the separate act of distillation, helps you reflect on the month and on what you’ve been doing and why.
