Web Applications

Searching for the perfect calendar

I have multiple calendars with different providers and of course my work calendar. I really love the schedule view in Google Calendar but I would also love not to be sending all my data to Google just to get one UI feature.

Calendar.com is US-based and therefore not much better than Google for privacy, and it is also more focused on groups than individuals. Calendar.online seems to have the schedule view, is based in Germany and says it is not interested in collecting and selling customer data, but sadly it doesn’t sync with Google Calendar.

Tutanota has an agenda view but again doesn’t allow you to sync calendars due to the way it secures information.

Proton Calendar has the ability to sync with other calendars, but its agenda view only applies to a single day, which isn’t great but will probably get the job done; there is a feature request for a schedule view but nothing like it is currently in the UI. I’ve downloaded the Proton Calendar app for Android and it does seem to be a reasonable offline-capable way of viewing multiple calendars and keeping them in sync.

I haven’t been able to find the perfect solution to my problem so far, but Proton seems to be the best option I have currently and I would love to see that feature request move forward. The calendar feature was good enough that I upgraded my plan to cover its functionality, so I guess it really is good enough. I’d be interested in hearing about alternatives though.

Standard
Programming, Python

Transcribing podcasts with Google’s Speech to Text API

I don’t really listen to podcasts, even now when I have quite a long commute. I generally read faster than I can listen and prefer to read through transcripts rather than listen, even when the playback speed is increased. Some shows have transcripts and I generally skim read those when available to see if it would be worth listening to segments of the podcast. But what about the podcasts without transcripts? Well, Google has a handy Speech to Text API, so why not turn the audio into a text file and then turn it into an HTML format I can read on the phone on the tube?

tl;dr: the API is pretty much the same one that generates YouTube’s automatic subtitling and transcripts. It can just about create something that is understandable to a human, but its handling of vernacular voices is awful. If YouTube transcripts don’t work for you then this isn’t a route worth pursuing.

Streaming pods

I’m not very familiar with Google Cloud services. I used to do a lot of App Engine development but that way of working was phased out in favour of something a bit more enterprise-friendly. I have the feeling that Google Cloud’s biggest consumers are data science and analysis teams, and the control systems intersect with Google Workspace, which probably makes administration easier in organisations but less so for individual developers.

So I set up a new project, enabled billing, associated the billing account with a service account, associated the service account with the project and wished I’d read the documentation to know what I should have been doing. And after all that I created a bucket to hold my target files in.

You can use the API to transcribe local audio files but only if they are less than 60 seconds long. I needed to be using the long-running asynchronous version of the API. I should also have realised that I needed to write the transcription to a bucket too; I ended up using the input file name with “.json” attached, and until I started doing that I didn’t realise that my transcription was failing to recognise my input.

Learning the API

One really nice feature Google Cloud has is the ability to run guided tutorials in your account via CloudShell. You get a step-by-step guide that can paste the relevant commands straight into your shell. Authorising the shell to access the various services was also easier than generating credentials locally for what I wanted to do.

Within 10 minutes I had processed my first piece of audio and had a basic Python file set up. However the test file was in quite an unusual format and the example used the synchronous version of the API.

I downloaded a copy of the Gettysburg address and switched the API version, this time having my CloudShell script await the outcome of the transcription.
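
For anyone curious what this looks like in code, here is roughly the shape of the script I ended up with (a sketch using the google-cloud-speech and google-cloud-storage client libraries; the bucket and file names are placeholders rather than my real ones):

```python
"""A sketch of the asynchronous transcription flow; names are placeholders."""
import json

from google.cloud import speech, storage

BUCKET = "my-transcription-bucket"  # placeholder
SOURCE = "gettysburg.flac"          # placeholder; FLAC is what I settled on (see below)

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    language_code="en-US",  # switched to "en-GB" for the podcast episodes later
)
audio = speech.RecognitionAudio(uri=f"gs://{BUCKET}/{SOURCE}")

# The long-running (asynchronous) version of the API; the synchronous
# recognize() call only accepts audio of less than 60 seconds.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result()  # blocks until the job finishes

# Write the transcription back to the bucket as <input>.json
results = [
    {
        "transcript": result.alternatives[0].transcript,
        "confidence": result.alternatives[0].confidence,
    }
    for result in response.results
]
blob = storage.Client().bucket(BUCKET).blob(f"{SOURCE}.json")
blob.upload_from_string(json.dumps(results, indent=2))
```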

Can you transcribe MP3?

The documentation said yes (given a specific version of the API) and while the client code accepted the encoding type, I never got MP3 to work; instead I ended up using ffmpeg to create FLAC copies of my MP3 files. I might have been doing something wrong, but I’m not clear what it was: the job was accepted but it returned an empty JSON object (this is where writing the output to files is much more useful than trying to print an empty response).

FLAC worked fine, the transcript seemed pretty on the money and converting the files didn’t seem much of a big deal. I could maybe do an automatic conversion later when a file hits the bucket if I needed to.
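
The conversion itself is a one-liner; wrapped up in Python it is something like this (file names are placeholders, and the mono/16 kHz settings are just a common choice for speech audio rather than anything the API requires):

```python
"""Convert an MP3 to FLAC with ffmpeg before uploading (a sketch)."""
import subprocess

subprocess.run(
    # -ac 1 downmixes to mono, -ar 16000 resamples to 16 kHz; ffmpeg picks
    # the FLAC codec from the output file extension.
    ["ffmpeg", "-i", "episode.mp3", "-ac", "1", "-ar", "16000", "episode.flac"],
    check=True,
)
```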

However, after my initial small files I found that waiting for the result of the API call meant hitting a timeout on the execution duration within the shell. I’ve hit something like this before when running scripts over Google Drive that copied directories. I didn’t have a smart solution then (I just skipped files that already existed and re-ran the jobs a lot) and I didn’t have one now.

Despite the interactive session timing out, the job completed fine and the file appeared in the storage bucket. Presumably this is where it would have been easier to be running the script locally or on some kind of temporary VM. Or perhaps I should have grabbed the operation identifier and just checked the job using that. The whole asynchronous execution of jobs in Google Cloud is another area where what you are meant to do is unclear to me, and working on this problem didn’t require me to resolve my confusion.
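
Something along these lines would probably have done it, although I haven’t gone back and tried: the operation name comes back when you submit the job (operation.operation.name in the Python client) and you can poll the operations endpoint with it later. The operation name below is made up.

```python
"""Sketch of checking a long-running transcription job by its operation name."""
import google.auth
from google.auth.transport.requests import AuthorizedSession

OPERATION_NAME = "1234567890"  # placeholder; captured when the job was submitted

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# The response includes "done": true and the transcription results once finished.
status = session.get(
    f"https://speech.googleapis.com/v1/operations/{OPERATION_NAME}"
).json()
print(status.get("done", False))
```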

Real audio is bobbins

So armed with a script that had successfully rendered the Gettysburg address I switched the language code to British English, converted my first podcast file to FLAC and set the conversion running.

The output is pretty hilarious and while you can follow what was probably being said, it feels like reading a phonetic version of Elizabethan English. I hadn’t listened to this particular episode (because I really don’t listen to podcasts, even when I’m experimenting on them) but I did know that the presenters are excessively Northern, and therefore when I read the text “we talk Bob” I realised that it probably meant “we are talking bobbins”. Other gems: “threw” had been rendered as “flu” and “loathsome” as “lord some”. Phonetically, if you know the accent, you can get the sense of what was being talked about, and the more mundane the speech the better the transcription was. However it was in no way an easy read.

I realised that I was probably overly ambitious going from a US thespian performing a classic of political speechwriting to colloquial Northern and London voices. So next I chose a US episode, more or less the first thing I could get an MP3 download of (loads of the shows are actually shared on services that don’t allow you access to the raw material).

This was even worse because I lacked the cultural context but even if I had, I have no idea how to interpret “what I’m doing ceiling is yucky okay so are energy low-energy hi”.

The US transcript was even worse than the British one, partly I think because the show I had chosen seems to have the presenters talking over one another or speaking back and forth very rapidly. One of them also seems to repeat himself when losing his train of thought or wanting to emphasise something.

My next thought was to try and find an NPR-style podcast with a single professional presenter, but at this point I was losing interest. The technology was driving what content I was considering rather than bringing the content I wanted to engage with to a different medium.

YouTube audio

If you’ve ever switched on automatic captioning in YouTube then you’ve actually seen this API in action: the text and timestamps in the JSON output are pretty much the same as what you see in both the text transcript and the in-video captioning. My experience is that the captioning is handy in conjunction with the audio, but if I were fully deaf I’m not sure I would understand much about what was going on in the video from the auto-generated captions.

Similarly here, the more you understand the podcast you want to transcribe the more legible the transcription is. For producing a readable text that would reasonably represent the content of the podcasts at a skim reading level the technology doesn’t work yet. The unnatural construction of the text means you have to quite actively read it and put together the meaning yourself.

I had a follow-up idea of using speech to text and then automated translation to be able to read podcasts in other languages but that is obviously a non-starter as the native language context is vital for understanding the transcript.

Overall then, a noble failure: given certain kinds of content you can actually create pretty good text transcriptions, but as a way of keeping tabs on informal, casual audio material, particularly with multiple participants, this doesn’t work.

Costs

I managed to blow through a whole £7 for this experiment, which actually seemed like a lot for two podcasts of less than an hour and a seven-minute piece of audio. In absolute terms though it is less than the proverbial avocado on toast.

Future exploration

Meeting transcription technology is meant to be pretty effective, including at identifying multiple participants. I haven’t personally used any, and most of the services I looked at seemed aimed at business and enterprise use and didn’t seem very pay-as-you-go. These might, however, be a more viable path as there is clearly a level of specialisation needed on top of the off-the-shelf solutions to get workable text.

Standard
Programming, Work

August 2023 month notes

I have been doing a GraphQL course that is driven by email. I can definitely see the joy of having autocompletion on the types and fields of the API interface. GraphQL seems to have been deployed way beyond its initial use case and it will be interesting to see whether it’s a golden hammer or genuinely works better than REST-based services outside its original role as an abstraction layer for frontends. It is definitely a complete pain in the ass compared to HTTP/JSON for hobby projects, as having to ship a query executor and client is just way too much effort compared to REST, and more so again if you’re not building a JavaScript app interface at all.

I quite enjoyed the course, and would recommend it, but it mostly covered creating queries so I’ll probably need to implement my own service to understand how to bind data to the query language. I will also admit that while it is meant to be easy to do a little each day, I ended up falling behind and then going through half of it at the weekend.
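
When I do get round to that, I expect the binding side to look something like this minimal sketch (graphene is just my example choice of Python library, and the schema and data are made up):

```python
"""A minimal sketch of binding data to a GraphQL schema via resolvers."""
import graphene

BOOKS = [  # stand-in for a real data source
    {"title": "Domain-Driven Design", "author": "Eric Evans"},
]

class Book(graphene.ObjectType):
    title = graphene.String()
    author = graphene.String()

class Query(graphene.ObjectType):
    books = graphene.List(Book, author=graphene.String())

    # The resolver is where data gets bound to the query language
    def resolve_books(root, info, author=None):
        if author is None:
            return BOOKS
        return [book for book in BOOKS if book["author"] == author]

schema = graphene.Schema(query=Query)
result = schema.execute('{ books(author: "Eric Evans") { title } }')
print(result.data)  # {'books': [{'title': 'Domain-Driven Design'}]}
```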

HashiCorp’s decision to change the license on Terraform has caused a lot of anguish on my social feeds. The OpenTerraform group has already announced that they will be creating a fork and are also promising to have more maintainers than HashiCorp. To some extent the whole controversy seems like a parade of bastards and it is hard to pick anyone as being in the right, but it makes most sense to use the most open execution of the platform (see also Docker and Podman).

In the past I’ve used CloudFormation and Terraform. If I was just using AWS I would probably be feeling smug with the security of my vendor lock-in, but Terraform’s extensibility via its provider mechanism means you can control a lot of services via the same configuration language. My current work uses it inconsistently, which is probably the worst of all worlds, but for the most part it is the standard for configuring services and does have some automation around its application. Probably the biggest advantage of Terraform was to people switching clouds (like myself), as you don’t have to learn a completely new configuration process, just the differences with the provider and the format of the stanzas.

The discussion of the change made me wonder if I should look at Pulumi again, as one of the least attractive things about Terraform is its bizarre status as not quite a programming language, not quite Go and not quite a declarative configuration. I also found out about Digger, which is attempting to avoid having two CI infrastructures for infrastructure changes. I’ve only ever seen Atlantis used for this so I’m curious to find out more (although it is such an enterprise-level thing I’m not sure I’ll do much more than have an opinion for a while).

I also spent some time this month moving my hobby projects from Dataset to basic Psycopg. I’ve generally loved using Dataset as it hides away the details of persistence in favour of passing dictionaries around. However it is a layer over SQLAlchemy, which is itself going through some major version revisions, so the library in its current form is stuck with older versions of both the data interaction layer and the driver itself. I had noticed that for one of my projects queries were running quite slowly, and comparing the query time directly against the database with the time through the interface, it was notable that some queries were taking seconds rather than microseconds.

The new version of Psycopg comes with a reasonably elegant set of query primitives that work via context managers and also allow results to be returned in a dictionary format that is very easy to combine with NamedTuples, which makes it quite easy to keep my repository code consistent with the existing application code while completely revamping the persistence layer. Currently I have replaced a lot of the inserts and selects, but the partial updates are proving a bit trickier as Dataset is a bit magical in the way it builds up the update code. I think my best option would be to create an SQL builder library or adapt something like PyPika, which I’ve used in another of my projects.
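
To give a flavour of the new style, this is roughly what the reads now look like (table, column and connection details are made up):

```python
"""A sketch of the psycopg 3 patterns I'm describing; names are placeholders."""
from typing import NamedTuple

import psycopg
from psycopg.rows import dict_row

class Note(NamedTuple):
    id: int
    title: str
    body: str

with psycopg.connect("dbname=hobby") as conn:            # connection string is a placeholder
    with conn.cursor(row_factory=dict_row) as cur:       # rows come back as dicts
        cur.execute(
            "SELECT id, title, body FROM notes WHERE title = %s",
            ("month notes",),
        )
        notes = [Note(**row) for row in cur.fetchall()]  # dicts map straight onto NamedTuples
```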

One of the things that has surprised me in this effort is how rarely the official Python documentation appears in Google search results. Tutorial-style content farms have started to dominate the first page of results and you have to add a search term like “documentation” to surface it now. People have been complaining about Google’s losing battle with content farms but this is the first personal evidence I have of it. Although I always add “MDN” to my JavaScript and CSS searches, so maybe this is just the way of the world now: you have to know what the good sites are to find them…

Standard
Web Applications

Migrating to Fly Apps v2

So, having been distracted by other things, I completely missed that Fly are deprecating their previous offering (now known as Fly Apps v1). An automated migration happened while I was none the wiser and it was only when a database connection broke down that I found out what was happening. It was a bit frustrating, but one of the good things about Fly is that I’m currently paying zero dollars for my apps, just like my old Heroku setup, which makes it perfect for hobby experimentation.

The basics of migrating are not complicated: the configuration file for deployment is slightly different and you now need to associate a Fly Machine (a virtual machine) with the application. Running the migration command flyctl migrate-to-v2 did that for me successfully with all my applications.

The use of Machines is a little different from other Platform as a Service (PaaS) offerings that I’ve used before. They are lightweight virtual machines built on Firecracker, the system used in AWS Lambda and, later, Fargate. You need to assign at least one Machine to your application for it to run and Fly recommends at least two.

Since your app is already virtualised in a Docker container, normally you would leave the scheduling of the machines to the service based on the demand you have, but this setup gives you a lot more control over the resources that are available for the app to run on. The basics of the technology are already proven in Lambda.

One of the nice features of Fly Apps v1 was that they were “always on” at no extra cost. Now you need to think a bit more about how you want to allocate the Machines to the application. Fortunately for most hobby projects it is straightforward: you can set the auto-stop/start configuration and you can probably just use one Machine, as you’re never really going to need to fall back to another instance. I’ve set a few of my apps to have two Machines and kept the others at the default migrated value of one (because that is what you were running before).
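
For reference, the relevant part of the v2 fly.toml looks something like this, as far as I understand the current format (the app name, region and port are placeholders, and the key names may shift as Fly evolve the config):

```toml
app = "my-hobby-app"      # placeholder
primary_region = "lhr"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true   # stop Machines when there is no traffic
  auto_start_machines = true  # start one again on the next request
  min_machines_running = 0    # fine for a hobby app that can tolerate a cold start
```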

For the cost-conscious hobbyist, one of the nice aspects of Firecracker VMs is that they are relatively quick to start on demand. So while 99% of the time you’re not running anything, when you do want to use the app the spin-up time for the single-machine apps is about what you’d experience with something like a cold AWS Lambda; the two-machine apps seem to be quicker to start, but that might be a subjective coincidence.

New applications are now configured with two Machines by default, plus all the logic to leave managing the machine time to the service. I haven’t had enough time with the new default to say whether it’s better or worse than the previous setup, but it does seem better than other spin-down free tiers like Render’s.

Moving to Fly had its ups and downs, but now I’ve gotten over the learning curve Fly provides everything I wanted from Heroku and also feels like a platform you could grow with if you wanted to do something more serious.

V2 has also brought some changes to the deployment configuration file, mostly to simplify it for the common case of HTTP-based apps, which seems a good call. You also control from the config whether the new autoscaling functionality applies to your application; this defaults to the most cost-effective options, which seems right to me. However, one oddity is that while you can specify the minimum number of machines you want active, I’m not sure you can specify a maximum. Instead you need to apply that configuration via the command line.

This seems a bit inconsistent, but there are a ton of other options to allow scaling across regions, so maybe the possibilities are not easy to boil down to a simple configuration format. Again though, the common hobbyist case can probably be catered for, with more sophisticated setups being configured via the CLI or Terraform.

Having gotten through the migration, I remain happy with the service, and while there is more to understand when spinning up new projects than there was before, the overall service is probably now better and less magical.

Standard
Blogging

RSS Readers for the mobile web

After every social media convulsion there is always a view that we’re heading back to blogs again. Regardless of whether this is true or not, there is always an uptick in posting, and blogs are definitely better for any kind of long-form content compared to a 32-post “thread” on any kind of microblogging social platform. So I’ve been revising my line-up of RSS readers (like email, I use a few) and I wanted to post my notes on what I’ve tried and what I’ve ended up using.

My first key point of frustration is viewing content on a phone browser. My primary reader (which I migrated to from Google Reader) is Newsblur, but the design of the site is not responsive and is large-screen focused. My second issue is specifically around Blogger sites: while these do have a mobile view, most of the themes for Blogger feel unreadable and harsh on smaller screens. Not to mention the cookie banner that is always floating around.

I have been using Feedbin, whose main feature is that it can consolidate content from Twitter, RSS and email newsletters into a single web interface. It does deliver on this promise, but while its small-screen experience and touch interface have been considered, the resulting UI is quite fiddly, with a side-swipe scheme for drilling in and out of content, and I often need to switch out of its default rendering mode to get something that is easy to read. I’m still using Feedbin to follow news sources on Twitter but have mostly given up on RSS there except indirectly through topic subscriptions.

I want to give an honourable mention here to Bubo RSS. This is essentially a static site builder that reads your subscriptions and builds a set of very lightweight pages that list out all the recent posts, using the visited-link CSS property to indicate the unread items. In the end this didn’t really solve my reading issues as you just link through to the original site rather than getting a cleaned-up, small-screen-friendly view. However its idea of building a mini-site from your RSS feeds and then publishing it statically would solve a lot of my problems. I was almost tempted to see if I could add a pull of the content and a Readability parse, but I could sense the size of the rabbit hole I was heading into.

Another great solution I found was Nom, a terminal RSS reader written in Go. You put your subscriptions into a config file and then read the content via the terminal. If I had any feedback for Nom it would be that the screen line length is not adjustable and the default feels a bit short. The pure text experience was the best reading experience for the Blogger subscriptions I have, but ultimately I wanted something that I could read in a mobile phone web browser.

In the end the thing that has been working for me is Miniflux. You can self-host it, but the hosted option seemed cheaper to me than the cost of the required hosting. I had only one issue with Miniflux’s reading mode out of the box, which was to do with margins on small screens. I thought I might have to get a PR organised, but helpfully you can save a custom CSS snippet in the settings, and with a few lines of customisation I was entirely happy with the result. This is now what I’m using to read RSS-based content on my phone.

Standard
Work

July 2023 month notes

I’ve been playing around with the V language, which describes itself as an evolution of Go. This means letting go of some unnecessary devotion to imperative programming by allowing higher-order map and filter as well as an option syntax for handling errors. The result is quite an interesting language that feels more modern and less quirky than Go but isn’t quite as full-on as Rust. I’ve enjoyed my initial experience but I haven’t been doing that much with it so far.

I’ve been continuing to experiment with Deno as well and I’m still enjoying it as a development experience, but I’m going to have to start doing some web development with it soon because, while it’s fine for exploratory programming, using JavaScript for command-line and IO work is not great, even with async/await.

I’ve been re-reading Domain Driven Design by Eric Evans. I’d forgotten how radical this book was. The strict tiering and separation of the domain model from other kinds of code is quite inspiring. I wanted to have an abstracted business logic implementation in my last business, where I was leading development, but we never really got there as it was hard to go back and remove the historical complecting.

I’ve been doing some shell scripting recently and using some new (to me) commands in addition to old faithfuls like sed; tr translates characters in one set to the corresponding characters in another, making it easy to replace full stops or spaces with hyphens (e.g. tr ' .' '--').

I’ve been trying a new terminal emulator, wezterm, after years of using Terminator. The appeal of wezterm is that it is cross-platform, so I can use the same keystrokes across OSX and Linux. Learning new keybindings is always difficult but I’ve had no complaints about reliability and performance so far.

It was OKR time this month, something I haven’t done in a while. OKRs are far more popular than they are useful. They seem to work best in mature, profitable businesses that are seeking to create ambitious plans around sustaining innovation. Smaller, early-stage businesses still benefit from the objective alignment process but should probably remain focused on learning and experimenting in the Lean Startup model. As part of this process I was also introduced to Opportunity Solution Trees, which in theory should have squared the circle on this problem, but in practice the two systems didn’t mesh. I think that was because the company OKRs were generated separately from the Solution Tree, so the activity in support of the objectives wasn’t driven by the solutions and experiments but was generated in response to the company objectives.

Standard
Programming

London Django Meetup May 2023

Just one talk this time, and it was more of a discussion of the cool things you can do with Postgres JSON fields. These are indeed very cool! Everything I wanted to do with NoSQL historically is now present in a relational database without compromise on performance or functionality; that is an amazing achievement by the Postgres team.

The one thing I did learn is that all the coercion and encoding information is held in the Django model and query logic, which means you only have basic JSON types in the column. I previously worked on a codebase that used SQLAlchemy with a custom encoder and decoder, which split custom types into a string field containing the Python type hint (e.g. Decimal, UUID) alongside the underlying value. Compared with the Django implementation, which appears to just use strings, this is a leaky abstraction where the structure of the data is compromised by the type hint.
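
To illustrate what I mean by the Django approach (the model and field names here are invented, and the encoder choice is just an example): the encoder is declared on the model, anything like a Decimal or UUID goes into the column as a plain string, and the ORM handles the JSON path lookups.

```python
"""A sketch of a Django JSONField as I understand the approach discussed."""
from django.core.serializers.json import DjangoJSONEncoder
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=200)
    # DjangoJSONEncoder serialises Decimal, UUID, datetime etc. to plain strings,
    # so the column itself only ever contains basic JSON types.
    attributes = models.JSONField(encoder=DjangoJSONEncoder, default=dict)

def red_products():
    # Key lookups use the ORM's JSON path syntax (assumes a configured Django project)
    return Product.objects.filter(attributes__colour="red")
```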

Using the Django approach would have made working with direct SQL on the database easier and would have followed the principle of least surprise.

The speaker was trying to make a case for performing aggregate calculations in the database but via the Django ORM query language, which wasn’t entirely convincing. Perhaps it works if you have a small team, but the resulting query-language code was more complex than the underlying query and was quite tied to the Postgres implementation, so it felt that maybe a view would have been a better approach unless you have very dynamic calculations that are only applied for a fixed timespan.
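
As an illustration of the kind of thing I mean (model and key names invented), an aggregate over a value stored in a JSON field ends up looking something like this, which is rather more machinery than the equivalent SQL:

```python
"""A sketch of an ORM aggregate over a JSON key, to show the relative complexity."""
from django.db.models import DecimalField, Sum
from django.db.models.fields.json import KeyTextTransform
from django.db.models.functions import Cast

def total_amount(orders):
    # Pull "amount" out of the JSON payload, cast it to a numeric type,
    # then sum it in the database.
    return orders.aggregate(
        total=Sum(
            Cast(
                KeyTextTransform("amount", "payload"),
                DecimalField(max_digits=10, decimal_places=2),
            )
        )
    )
```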

It was based on an experience report, so it clearly worked for the implementing group, but it felt like the approach strongly coupled the database, the web framework and the query language.

Standard
Work

How I have been using knowledge graphs

Within a week of using Roam Research’s implementation of a knowledge graph, or Zettelkasten, I decided to sign up because there was something special in this way of organising information. My initial excitement was actually around cooking: the ability to organise recipes along multiple dimensions (a list of ingredients, the recipe author, the cuisine) meant you could both search and browse by the ingredients that you had or the kind of food you wanted to eat.

Since then I’ve started to rely on it more for organising information for work purposes. Again the ability to have multiple dimensions to things is helpful. If you want to keep some notes about a library for handling fine-grained authorisation, you might want to come back to them via the topic of authorisation, the implementation language or the authorisation model used.

But is this massively different from a wiki? Well a private wiki with a search function would probably do all this too. For me personally though I never did actually set up something similar despite experiments with things like Tiddlywiki. So I think there are some additional things that make the Zettelkasten actually work.

The two distinctive elements missing from the wiki setup are the outliner UI and the concept of daily notes. Of the two, daily notes is the simplest: by default these systems direct you to a diary page, giving you a simple context for all your notes to exist in. The emphasis is on getting things out of your head and into the system. If you want to cross-link or re-organise you can do so at your leisure, and the automatic back-referencing (showing you other pages that reference the content on the page you are viewing) makes it easy to be reminded of daily notes that you haven’t consciously remembered you want to re-organise. This takes a good practice and delivers a UI that makes it simple. Roam also creates an infinite page of daily notes that allows you to scroll back without navigating explicitly to another page. Again nothing complicated, but a supportive UI feature to simplify doing the right thing.

The outliner element is more interesting and a bit more nuanced. I already use (and continue to use) an outliner in the form of Workflowy. More specifically, I find it helpful for outlining talks and presentations, keeping meeting notes and documenting one-to-ones (where the action functionality is really helpful to differentiate items that need to be actioned from notes of the discussion). These are the kind of things where you want to keep a light record with a bit of hierarchical structure and some light audit trail on the entries. I do search Workflowy for references, but I tend to access it in a pretty linear way and rarely access it without a task-based intention.

Roam and Logseq work in exactly the same way; indeed many of the things I describe above are also use-cases for those products. If I wanted to I could probably consolidate all my Workflowy usage into Roam, except for Roam’s terrible mobile web experience. However there is a slight difference, and that is due to the linking and wiki-like functionality. This means you can have a more open discovery journey within the knowledge graph. Creating it and reading it, I have found, are two different experiences. I think I add content in much the same way as in an outliner, but I don’t consume it the same way. I am often less task-orientated when reviewing my knowledge graph notes, and as they have grown in size I have had some serendipitous connection-making between notes, concepts and ideas.

What the outliner format does within the context of the knowledge graph is provide a light way of structuring content so that it doesn’t end up a massive wall of text in the way that a wiki page sometimes can. In fact it doesn’t really suit a plain narrative set of information that well and I use my own tool to manage that need and then link to the content in the knowledge graph if relevant.

In the past I have often found myself vaguely remembering something that a colleague mentioned, a link from a news aggregator site or a newsletter, or a GitHub repo that seemed interesting. Rediscovering it can be very hard in Google if it is neither recent nor well-established; often I have ended up reviewing and searching my browser history in an almost archaeological attempt to find the relevant content. Dumping interesting things into the knowledge graph has made them more discoverable as individual items, but it also adds value to them as you gain a big-picture understanding of how things fit together.

It is possible to achieve any outcome through misuse of a given set of tools, but personal wikis, knowledge graphs and outliners all have strengths that are best when combined as much as possible into a single source of data, with dedicated UIs for specific, thoughtful task flows over the top. At the moment there’s not one tool that does it all, but the knowledge graph is the strongest data structure, even if the current tools lack the UI to bring out the best from it.

Standard
Software

Great software delivery newsletters

I currently subscribe to a number of great newsletters around technology and software delivery. While the Fediverse is also a great place to pick up news and gossip I have found that there is something really valuable in having a regular curated round up of interesting articles. It may be no surprise that the consistently great newsletters are produced by people who are engaged in consultancy. I think they inevitably get exposed to trends and concerns in the industry and also can commit the time to writing up their thoughts and reflecting on their chosen content.

Pat Kua’s Level Up focuses on technical leadership and tends to have good pieces around human factors, managing yourself and creating good systems for delivery. It also often has advice pieces for people coming into technical management or leadership.

John Cutler’s The Beautiful Mess focuses on Product but is also great on strategy and importantly is always focused on getting to a better product process by emphasising collaboration and breaking down barriers between functional silos. I also enjoy reading how he approaches putting together alternatives to roadmaps and strategy documents. I think he has the best sense on how to use things like metrics and North Stars.

Emily Weber’s Posts from Awesome Folk has a focus on management, leadership, consensus building and healthy organisation cultures. As the title suggests, it offers a carefully curated selection of posts that are often longer form and are generally from expert practitioners.

Michael Brunton-Spall’s Cyber Weekly is your one-stop shop for news on security and analysis of the key issues of the day.

Simon Willison’s newsletter is more recent and feels more like a very long blog post that is being pushed into the newsletter format. Despite this, Simon is one of the most creative and independent developers you could read; he was early into LLMs and generative AI and has lots of interesting insight into what you can do with these models, what works and what doesn’t. He’s also an (intimidating) role model for what independent, solo devs can achieve.

I have a lot of other subscriptions (and indeed a lot of people seem to be starting newsletters currently) so I will probably need to do a follow-up to this post in a couple of months if I see that people are posting consistently useful things. One general thing to point out is that if I’m working with a particular technology (like Django, Go or React) I’ll often subscribe to the weekly community news round-ups to get a feel for what’s happening. However, I find the volume of links and items overwhelming if you don’t have a specific interest or purpose in reading through them, so I relegate them to RSS when I’m not actively working with the technology and have a more occasional catch-up.

Standard
Programming

Version management with asdf

I typically use languages that are unmanageable without being able to pin the language release you are dealing with (Python and JavaScript). I have also historically been bad at keeping up to date with releases, and have therefore ended up with code that sometimes doesn’t run at all (Rust and Scala).

asdf is a version manager to rule them all. It provides a common set of commands to manage language dependencies (and the installation of different language versions) but has a plugin interface that different languages can use to bring in language-specific concerns.

As a user you just need to learn one set of commands to manage all languages; implementations can build on a stable core system and simply focus on their requirements. Everyone is a winner.

On top of that, instead of having multiple hidden files for multi-language projects (usually JavaScript and some other language) you now have one file with all the language definitions in it.
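
That file is .tool-versions, which sits at the project root and just lists one tool and version per line; the versions here are only examples:

```
python 3.12.1
nodejs 20.10.0
```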

The only complication I’ve found is retraining myself to the new command set and remembering which commands work on asdf itself (things like updating the tool, setting specific versions in different scopes and managing the language plugins) and which work on the plugins (installing new versions). The plugins also have no requirement to be consistent amongst themselves, so in some you can specify “lts” or “latest” as a target, while others require the full three-part semantic version. These conventions seem to have come from the tools the plugins are replacing.

Overall though, I think retraining myself to learn a single tool is probably going to be easier than having an increasing number of per-language systems.

Standard