Programming, Work

December 2023 month notes

Web Components

I really want to try and understand these better as I think they offer a standards-based, no-build solution for components, combined with a better way of dropping lightweight vanilla JS interactivity into a page where I might previously have used AlpineJS.

I’m still at the basic learning stage but I’ve been hopping around the Lean Web Club tutorials to get a sense of the basics. One of the things that is already interesting is that Web Components wrap their child HTML in quite a clean and scoped way, so you can use them quite easily to mix server-rendered content with runtime dynamic content. I haven’t found an elegant way to do that with other frameworks.
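
A minimal sketch of the idea (hypothetical element name): the component’s children are whatever the server rendered, and the class just layers behaviour on top at runtime.

```js
// Hypothetical <reading-time> element: the server renders the article inside
// it, and the component decorates that server-rendered content on load.
class ReadingTime extends HTMLElement {
  connectedCallback() {
    // The child HTML is already in place when this runs
    const words = this.textContent.trim().split(/\s+/).length;
    const note = document.createElement('p');
    note.textContent = `About ${Math.max(1, Math.round(words / 200))} minute read`;
    this.prepend(note);
  }
}
customElements.define('reading-time', ReadingTime);
```

Used as `<reading-time><article>…server-rendered content…</article></reading-time>`, the markup still works if the script never loads, which is what makes the mixing feel so natural.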

Scoping and Shaping

I attended an online course by John Cutler which was a pretty good introduction to the idea of enabling constraints. Most times I attend courses and classes to learn something, but every now and then it feels good to calibrate on what seems obvious and easy and to understand other people’s struggles with what seems like basic stuff.

A few takeaways: being a good stakeholder is an underrated skill, and being clear about the boundaries of what you’re willing to accept is important to allow teams working on problems to be successful. If someone says they can’t work with your constraints then it’s not a good fit; if no-one can work with your constraints then you either need to do the work yourself or give up on it.

The most insightful piece of the meeting for me came from the discussion of the psychology of leaders in the new economy, where profits are more important than growth and experimentation. John’s theory is that this pressure makes it harder for executive teams to sign off on decisions or to give teams a lot of leeway in approaching the problem. To provide meaningful feedback to executing teams, senior stakeholders feel they need more information and understanding about the decisions they are making, and the more hierarchical an organisation the more information needs to go up the chain before decisions can come back down.

Before zero interest rates there used to be a principle that it wasn’t worth discussing something that wouldn’t make back the cost of discussing it. Maybe rather than doing more with less we should be trying to get back to simply not doing things unless they offer a strong and obvious return.

How I learned to love JS classes

I have never really liked or seen the point of Javascript’s class functionality. Javascript is still a prototype-based language, so the class syntax is essentially elaborate syntactic sugar. React’s class-based implementation was complex in terms of how the class lifecycle and scope interacted with the component equivalent, so I was glad to see it replaced by function components. However classes are pretty much the only way that you can work with Web Components, so I’ve been doing a lot more with them recently than previously.

I’ve also been dropping them into work projects, although this raises some interesting questions when you’re using Typescript, where the difference between a class and an interface is quite blurry. Presumably a class should either have static members or encapsulate behaviour that makes inheritance meaningful; otherwise it’s simply an interface that the implementing class needs to satisfy.
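
A small sketch of what I mean (illustrative names): a class with no behaviour is barely distinguishable from an interface, while one that encapsulates behaviour makes inheritance earn its keep.

```ts
// A behaviour-free "class" would add little over this interface...
interface Point {
  x: number;
  y: number;
}

// ...whereas a class that encapsulates behaviour gives subclasses
// something real to inherit.
class Vector implements Point {
  constructor(public x: number, public y: number) {}

  length(): number {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  }
}

class UnitVector extends Vector {
  constructor(x: number, y: number) {
    const len = Math.sqrt(x * x + y * y);
    super(x / len, y / len); // inherits length(), which is now always ~1
  }
}
```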

Work

Interviewing in software 2023

With the waves of layoffs and the large numbers of developers looking for work there is a lot of frustration and venting about interviewing and hiring processes. I have my own horror stories, and that is part of the problem of writing about this topic. Interviewing, generally and especially in tech, is broken, but writing about it while you’re going through it looks like the complaints of someone inadequate to the requirements of the role. And when you have a job, where is the wisdom in criticising the process that brought you that job?

The recruitment process for developers has been pretty terrible for years, but at least during the hiring boom the process tended to err on the side of taking a chance on people. Now employers seem to feel pretty confident that whatever bar they set they will find a suitable candidate, and whatever conditions they apply will be accepted. That means that the reasons you get back for not proceeding after an interview are often pretty flimsy. The interviewers are the gatekeepers to the roles and they don’t really have to justify themselves too much.

The fundamental problem

At its heart though the problem has always been, and remains, that most people are really bad at interviewing. People often spend more time interviewing others than being interviewed themselves. When conducting interviews they are mostly isolated from feedback unless another interviewer objects to what they are doing.

Therefore virtually every developer I’ve known who does interviews thinks they are really good at interviewing (including me, I’m really good at conducting interviews (I’ve also had some feedback from agents that I’m really terrible at interviewing, who are you going to believe?)). However most of them are really bad. They don’t know how to frame open questions, they don’t stick to the scripts or they stick too literally to the scripts, they don’t use any scoring criteria or objective marking, and they often freestyle some genuinely awful questions.

One of my favourite pieces of recent interview feedback was that I didn’t have a lot of experience in a particular area. While it might be true that I didn’t exhibit much evidence of that experience in the interview, I would have found it easier to do so if I had been asked questions about it. If an area of expertise is vital to the role then you need to have formulated some questions about it and, just as importantly, make sure you allocate enough time in the interview to ask them. Flirting may require mastery of the tease but interviewing usually benefits from a very direct approach.

People who conduct interviews need to be trained in doing so. In an ideal world that would also mean doing some mock interviews where it is known whether the candidate has the skills to do the job. Their interviewing needs to be reviewed by managers from time to time, and the easiest way to do that review is by having managers and managers of managers sit in on actual interviews.

In a previous role some engineering managers who reported to me did a little live roleplay of what they thought a good interview would look like, one taking the part of the interviewer and one the interviewee. Naturally the stakes were low, but the exercise gave the rest of the interviewers a template for their expectations and a sense of what we thought good looked like.

Interviews, interviews, interviews

Employers’ confidence in being able to pick and choose is nowhere better exemplified than by having loads of interview rounds. For more responsible roles I get that you often have to meet people up and down the chain along with peers and stakeholders. In recent processes though I wouldn’t have been surprised to be interviewed by the office cat, probably just to see if I was desperate enough to put up with this kind of treatment. A personality fit with key stakeholders is important, but previously this was done in a single call with multiple people at a time.

Candidate experience surveys

How can you try to improve the interviewing process and allow candidates to provide the feedback that interviewers so desperately need? Some places have used candidate surveys. I’ve tried using these myself and occasionally you get some good feedback, in particular on how someone felt about the way you communicated with them as a corporate body. However as a candidate (and in the current economy) I would never fill one out. It doesn’t help you secure an offer and in most cases seems positively risky: you can either give a high rating and look like a kiss-ass, or a low rating that will automatically put the organisation, especially the people who interviewed you, on the defensive.

Even after accepting a job I find it really hard to talk to the people who interviewed me about the interviewing experience. In some ways the only safe time to give feedback on the interview process is after you’ve received an offer and have decided not to accept it. At that point it truly does depend on how willing to learn an organisation is.

At a previous role I added a closing question to our script: “What question do you think we should have asked you?”. This was originally intended as a way for candidates to draw attention to experience that they thought was relevant (even if our scoring system did not take it into account). For a few candidates though it became an opening into discussing the interview process and their thoughts on it. It is the closest thing to an effective feedback mechanism I have found.

To sum it up

Interviewing generally sucks; right now it sucks even more because, without the benefit of the doubt, bad interviewing practices make it difficult to succeed as a candidate and to enjoy the experience. A negative candidate experience causes brand issues for an employer, and while they may not care about it now, if the market tightens or visas stop being so easy to acquire then it might start to matter again. As an industry we should do better and genuinely try to find ways to improve how we find a fit between people and roles and make the hiring process less hateful.


Work

November 2023 month notes

The end of November marks the start of the Christmas corporate social hospitality season. It is easy to be cynical but it is nice to catch up with people and find out what has been happening with them.

Bun

We started using Bun at work for a project, more as a CLI build tool than as a framework and runtime. It seems reasonably effective and has quite a few of the features that were interesting in Deno. Deno has a bit more ambition and thought in its overall project, whereas Bun seems much more focused on trying to get itself embedded in projects. It reminds me quite a lot of Yarn and I think we may want to move to something more open in the future.

In the meantime though I have to admit that having a fast test runner is a joy compared to Jest. I attended Halfstack London this month and one of the talks gave an illustration of just how slow Jest is and recommended Node’s native test runner, an interesting alternative that I might try for my own projects.
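
For reference, the native runner needs no dependencies at all; a minimal sketch, run with `node --test`:

```js
// example.test.mjs — run with: node --test
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('doubles every element', () => {
  assert.deepEqual([1, 2, 3].map((n) => n * 2), [2, 4, 6]);
});
```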

AssemblyScript

I’ve been doing the Exercism 12 in 23 challenge (the standard “work with twelve languages in a year”, but using Exercism’s problems as proof of progress). It has thrown up a few interesting things already. I was surprised at how much I liked working with Raku (Perl was one of the first languages I learnt) and I should probably write something up about it. This month was assembly, however, and unlike most of the other languages this was an area I had never really ventured into. My first language was BASIC and I might have POKE’d and PEEK’d, but I had never written any assembler.

I chose to tackle WebAssembly, which seemed like it might have some work advantages if I knew more about it. WebAssembly comes with a textual representation called WAT that is made up of s-expressions and looks quite elegant (especially if you are a LISP fan). However, writing raw assembler felt too challenging, so I chose to try AssemblyScript, a Typescript-style language that compiles to WASM and WAT. It also allows you to write tests in Javascript which import from the compiled output, which is quite neat (I much prefer writing tests in dynamic rather than static languages).

It made doing the number-based exercises relatively straightforward. For a few of the problems I did some hand-tweaking of things like parameter calling, and while AssemblyScript offers native Math for things like square roots, I ended up manually implementing the calculation of the hypotenuse of a triangle to avoid library calls, which seemed tricky to match between the two execution environments.
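
This isn’t the exact code I wrote, but the shape of the approach was something like this (AssemblyScript): Newton’s method converges on the square root, so both execution environments run the same plain arithmetic.

```ts
// Hypotenuse without calling a sqrt library function
function hypotenuse(a: f64, b: f64): f64 {
  const s = a * a + b * b;
  if (s == 0) return 0;
  let x = s; // initial guess
  for (let i = 0; i < 32; i++) {
    x = 0.5 * (x + s / x); // Newton iteration for sqrt(s)
  }
  return x;
}
```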

While doing this I did start to develop a sense of how assembly and the stack works but I feel I could probably do with a bit more of a structured introduction than trying to solve quite high-level problems with low-level tools. Overall I found it a good stretching exercise.

MDN’s documentation for WebAssembly is excellent and I probably learnt most about the way assembler works by messing around with their executable examples. Not only is this a great documentation format, I don’t think I would have completed the exercises without the explanations in the documentation.

Dependabot bundling

The thing that changed my work life this month was grouping dependencies. Javascript projects tend to have a lot of dependencies, and often changes in build-step dependencies are pretty meaningless (type files or compilation edge-cases) but take the same effort to apply as security updates.

You can group dependency updates by expressions but, more usefully, you can group development dependencies (where supported by the dependency configuration) into a single update. Generally if you have a test suite and the build passes you can apply these all together and have the effort of a single release for multiple changes.
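
The configuration is a small addition to dependabot.yml; a sketch for an npm project (the group name is arbitrary):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      dev-dependencies: # arbitrary label, becomes part of the PR title
        dependency-type: "development"
```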

There’s sometimes an argument that grouping too many changes together means that one breaking change blocks all the changes. So far I haven’t seen that in practice because the volume of small changes in Javascript is high but the change impact is very low.

The grouped PR is also managed sensibly and automatically, with the group being added to as needed. Security updates are always broken out into their own PRs so it is much easier to see priorities when looking at the PR list.

Books, Work

Book review: The Logic of Failure

This book was originally published in German at the end of the 80s. It describes the results of computer-based simulations of situations such as running a town or a sub-Saharan country. All the situations were fictional but based on real-world scenarios and with a rich simulation model. The book does not try to describe how to succeed but instead focuses on patterns of behaviour that were frequently seen when people experienced failure, often catastrophic failure, in the simulations.

Misunderstanding complex systems with exponential behaviour

The book offers a succinct and insightful picture of factors that are better understood today, though often not in combination and not in relation to leadership and management. In no particular order, these include the very real problem of differentiating linear and exponential processes. The human mind seems biased towards linear models and struggles to accurately predict the outcomes of changes in the rate of change itself. This is even harder at the start of a process because the two look the same, and if you don’t have a clear understanding of the underlying mechanism there is no way to predict whether something will be linear or exponential.
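
The early indistinguishability is easy to demonstrate with a couple of lines of arithmetic; a quick sketch:

```python
# At 5% per step, linear and exponential growth look almost identical early on
rate = 0.05
for t in range(0, 25, 6):
    print(t, round(1 + rate * t, 3), round((1 + rate) ** t, 3))
# t=0: 1.0 vs 1.0; t=6: 1.3 vs 1.34; ... t=24: 2.2 vs 3.225
```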

Failure to understand exponential growth is one challenge, but exponential collapse is even harder for our minds to predict and model. The chapter on predator and prey models was particularly fascinating: often there is massive growth in the population of the predators before a huge collapse in their numbers. If a metric has been exponential and then becomes linear, without a deep understanding of the processes at work you can’t tell whether you have encountered a plateau or a precipice.

The book also suggests that individual decision makers can only rarely hold a complex model in their minds; participants in the study would sometimes deny information given to them in the briefing once they had developed their own incorrect theories of how the simulation worked.

On the difficulty of being successful

One of the reasons the book can’t draw definite conclusions about which strategies are successful is that there is no universal approach that works in all circumstances. For example, generally people who asked more questions after each step in the simulation were more successful than those who didn’t; however at some point all the successful participants asked fewer questions and acted decisively in ways that advanced their goals. They seemed to better balance their need for information against the need to act and observe, and were able to tune the mix of activities in an optimal way.

Experience generally seemed helpful, but there is a warning about what the book calls “methodism”, which I think might have other names now. What it describes is the misapplication of prior knowledge or tactics: people look for a few identifying characteristics in the situation that match their experience and then apply techniques or solutions that have worked for them in the past. In doing so they can ignore information in the current situation that contradicts the likelihood that the previous solution is appropriate.

The book uses “elaboration” as a way to measure whether someone’s proposed solution is based on the situation they are presented with rather than one they have encountered before. Elaborated solutions include principles guiding the attempted solution and potential compromises in executing it as well as mitigations against the failure of the attempted solution.

Essentially people who are more likely to be successful use their previous experience to inform their approach to a new problem but are rigorous in their analysis of the new situation and prepared to adapt previously successful approaches to the new situation.

Unsafety buffers

One very practical takeaway was around the use of buffers in safety procedures. Typically when designing a robust procedure you want to allow for issues in following the procedure, the timing of its execution and so on. This means that most safety procedures tell you to act early, at a point when the system is quite far from failure and its remaining capacity is quite high. Ironically this means that if you perform the procedure late or incompletely it will quite often still work.

The book gives the example of Chernobyl as a place where safety procedures were routinely ignored, abbreviated or circumvented because nothing bad ever happened when they were. If you conclude that the safety procedures are unnecessary, or that their buffer values are too high and you can use your own heuristically determined values instead, then you start down the path to disaster.

It is important to remember that any conservative safety procedure is conservative to give it the maximum likelihood of working in a range of circumstances. One that has a narrow range of applicability is less likely to result in a safe outcome.

As the book points out, it is impossible for an individual human to learn from catastrophic failures. Collectively though we should be studying and drawing conclusions from the worst outcomes that we have not personally experienced.

Defining success and avoiding failure

One key takeaway from the book is that while it talks about failure and success, even the successful outcomes involved trial and error and contained points where things were not as good as they could have been. Most of the outcomes described as successful involved the participant having an idea of a new stable situation that improved aspects of the current one and working methodically towards it. This is quite a modest definition of success compared to the way the word is commonly used in business, for example.

The terrifying thing about the book is that in most of the simulations the virtual people involved would probably have been better off if nothing had been done. The scenario usually starts in a stable situation that is sub-optimal, and on my reading the majority of participants took that situation and turned it into a hellscape of unsustainable growth or development followed by disaster and a collapse of society to levels below the starting point.

In many ways the book is a justification of small ‘c’ conservatism: sustainable improvements are hard to achieve, and the advantage of time-tested solutions is that they have been validated under real-world conditions. The counter-argument, though, is that improvements are possible, and not to seek them out of fear is also an unhappy situation.

This is a small book and you can read the essence of its content in this paper. Like all the best books, its ideas have an impact out of proportion with the amount of time it takes to explain them.

I think I first found out about this book via a post from Tim Harford, which has buying links if you’re interested (or details to order from your local bookshop).

Work

October 2023 month notes

I’ve been learning more about Postgres as I have been moving things from Dataset to Psycopg3. It is kind of ridiculous what you can do with it when you strip away the homogenising translation layer of things like ORMs. Return a set of columns from your update? No problem. Upsert? Straightforward.
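
A sketch with an illustrative link table shows both features; note that the upsert depends on a unique constraint over the conflict columns, which is exactly what tripped me up below.

```sql
-- Return the affected columns straight from the update
UPDATE post_tag
   SET tag_id = 2
 WHERE post_id = 1
RETURNING post_id, tag_id;

-- Upsert in a single statement (requires a unique constraint on the pair)
INSERT INTO post_tag (post_id, tag_id)
VALUES (1, 2)
ON CONFLICT (post_id, tag_id) DO NOTHING;
```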

However after completing an ON CONFLICT clause I received a message that no conflict was possible on the columns I was checking, and I discovered that I had failed to add a primary key to the table when I created it. It probably didn’t matter to the performance of the table, as it was a link table with indexes on each lookup column, but I loved that the query parsing was able to do that level of checking on my structure.

Interestingly I had a conflict clause in the ORM statement I was replacing and it had never had an issue, so presumably it was doing an update-then-insert pattern in a transaction rather than using native features. For me this shows how native solutions are often better than emulations.

Most of the apps I’ve converted to direct use of queries are feeling more responsive now (including the one I use to draft these posts) but I’m not 100% certain whether this is because of the switch to lower-level SQL or because I’ve been fixing problems in the underlying relational model that were previously hidden from me.

We’re going to need a faster skateboard

I have been thinking a lot about the Gold-plated Donkey Cart this month. When you challenge existing solutions you often first have a struggle to get people to admit that there is a problem, and even once it is admitted the first response is often to try and patch or amend the existing solution rather than consider what the right response might be.

We have additive minds so this tendency to patch what is existing is natural but sometimes people aggressively defend the status quo, even when it is counter-productive to their overall success.

Weakly typed

I’ve had some interesting experiences with Typescript this month, most notably an issue with a duplicated package which resulted in code that had been running in production for months but which had either not been correctly typed or had been behind the intended version by maybe four major versions. Typescript is interesting amongst type-hinted languages in that its typing files are often supplied separately from the code and in some cases exist independently of it. My previous experience of Python typing, for example, stopped the checker at the boundaries of third-party code and therefore only applied to the code you were writing yourself.
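
A contrived example of the failure mode (hypothetical module): the declaration file keeps the compiler happy while the runtime code has moved on.

```ts
// greet.d.ts — hand-maintained declarations, shipped separately
export declare function greet(name: string): string;

// greet.js — the runtime code, several versions ahead of the declarations
export function greet(options) {
  return `Hello, ${options.name}`;
}

// caller.ts — greet('Ada') type-checks against greet.d.ts, but at runtime
// options.name is undefined, so the behaviour silently breaks
```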

I’m uncertain of the value of providing type files for Javascript libraries as the compile-time and runtime contexts seem totally different. I found a Javascript dependency that had a completely broken unit test file, and on trying to correct it I found that the code couldn’t have the behaviour that the tests were trying to verify. Again I wondered how this code was working in production, and predictably it turned out that the executed code path never included the incorrectly specified behaviour. Dynamic code can be very resilient and at the same time a time bomb waiting to happen, no matter what your type definitions claim.

I think Typescript code would be better off if it were clearer that any guarantees of correctness can only be provided for code that is totally under your control and that is being compiled and checked by you.

Frozen in time

I’ve also been thinking a lot about a line from this talk by Killian Valkhof where he mentions that our knowledge of how to do things often gets frozen at the point when we initially learnt to do them. Developers who learnt React for frontend work will be the future equivalents of the people who learnt to do frontend via jQuery. I’ve been looking at Web Components, which I thought were pretty terrible when they first came out but which now look delightfully free of complex build chains and component models.

But more fundamentally it has made me think about whether, when I choose or reject things, I am doing so based on their inherent qualities in the present moment or based on the moment in time when I first learnt and exercised those skills. With CSS, for example, I’m relatively old-fashioned and have never been a fan of the CSS-in-JS idea. I think this approach, while maybe outside contemporary preferences, is sound: sound CSS applies across any number of frontend component models and frameworks, and the work that goes into the CSS standards is excellent, whereas (ironically) the limitations of Javascript frameworks in expressing CSS concepts mean that often only a frozen subset is usable.

I’ve never been entirely comfortable with Docker or Kubernetes, though, and generally prefer PaaS or “serverless” solutions. Is that because I enjoyed the Heroku developer experience and as a result never really understood the advantages of containerisation?

Technology is fashion, and therefore discernment is a critical quality for developers. For most developers though it is not judgement that they manifest but a toxic self-belief in the truth of whatever milieu they entered the industry in. As I slog through my third decade in the profession I feel strong doubt about my opinions, and trying to frame my judgements in the evidence and reasoning available now seems a valuable technique.

Work

September 2023 month notes

I tried the Kitten framework and was quite surprised to see it request permission to change my privileged port permissions on install. I had to read through the post and its related posts before I realised that the nature of restricted ports is so ingrained in me that I had never asked whether it was genuinely a security risk to have them accessible from userspace. I would recommend taking a look through the posts linked from the above post; I realised that my answer to port restrictions is often to sudo onto them, which is a weird way of not actually being secure.

I haven’t done much with Kitten, just working my way through the tutorial. The static serving is fine, the dynamic pages are a bit odd and the default ability to retain server state feels very odd.

I’ve also been continuing to try and learn Koa, although if this wasn’t related to work I wouldn’t bother. A look at the State of Javascript indicates that Express is the runaway winner and all the other frameworks are pretty esoteric.

As an aside, the state of in-page linking in the State of Javascript site is embarrassing; the page structure is really complicated and doesn’t seem able to assign a simple id to a section tag.

Koa is from the school of “everything is a plugin”, so out of the box it is completely anaemic and you have the zero-joy experience of trying to figure out which libraries and plugins you should use. Most of the core plugins haven’t been updated in years, which is good in terms of stability but makes it hard to understand which libraries are actually unmaintained and which are simply fundamental. I much prefer the Python approach of having batteries included but being able to swap things out if you have particular needs.

One thing that Koa does differently from Express is to use a combined Context object instead of explicit request and response objects. I don’t think that is really very helpful, and I did manage to mix the concept up with Go Contexts. Koa contexts are just a big ol’ object that includes some default values, and the response kind of magically fires after all the middleware has run. I feel it is a bit of a step backwards in terms of clarity. My guess is that it makes it easier for plugins to add functions to the context object rather than having to explicitly import them and use them within the handler code.
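
A minimal sketch of the contrast (the handler logic is illustrative):

```js
import Koa from 'koa';

const app = new Koa();

// Express would hand you (req, res); Koa hands each middleware one ctx
app.use(async (ctx, next) => {
  ctx.state.startedAt = Date.now(); // ctx.state is the conventional scratch space
  await next();
});

app.use(async (ctx) => {
  // Setting ctx.body queues the response; it fires when the chain unwinds
  ctx.body = `hello after ${Date.now() - ctx.state.startedAt}ms`;
});

app.listen(3000);
```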

I’m building a basic old-school webapp so I needed some templating, and that was a bit of a journey in terms of what is popular, but Nunjucks is work-friendly and based on jinja2 so it feels very familiar.

I’ve been slowly continuing to replace my various Python database libraries with a simpler and faster set of string queries executed through psycopg3. Next on the chopping block is Pony which, while relatively enjoyable as an ORM, is needlessly clever in its use of generators and lambdas to do queries. I found a broken query and despite reading through the documentation I couldn’t fix it. If you already know SQL, an abstraction has to be pretty powerful to be worth the overhead on queries, which are fundamentally a string and a map of data bindings and not much more.

I attended the State of the Browser conference this month and it was a good edition that balanced the input of practitioners and browser makers and had practical technical advice and reminders. It also managed to limit itself to only one non-technical talk. I’ll write up a few notes in a separate post, but this felt like a great return on the time invested.

I also discovered the Rosie Pattern Language this month, a parser-based alternative to regular expressions. I was intrigued, but it lacks a helpful tutorial or introductory article so it has gone into the backlog to investigate later.

I started reading the book The Logic of Failure this month and I’m about halfway through it. It is a fascinating read and describes a series of experiments done with computer simulations of various situations, from a town and a sub-Saharan ecosystem to a fridge with a broken thermometer. The outcomes are then mapped to the participants’ voiced thoughts to try and identify patterns of behaviour and the underlying rationales that drive them. Obviously the goal of reading such books is to try and temper the causes of failure in yourself, but some of the problems the book highlights, such as the behaviour of complex inter-related components and exponential processes, are just things that all humans are bad at.

Programming, Work

August 2023 month notes

I have been doing a GraphQL course that is driven by email. I can definitely see the joy of having autocompletion on the types and fields of the API interface. GraphQL seems to have been deployed way beyond its initial use case, and it will be interesting to see whether it’s a golden hammer or genuinely works better than REST-based services outside its original role as an abstraction in front of frontend services. It is definitely a complete pain in the ass compared to HTTP/JSON for hobby projects, as having to ship a query executor and client is just way too much effort compared to REST, and more again compared to not building a Javascript app interface at all.

I quite enjoyed the course, and would recommend it, but it mostly covered creating queries so I’ll probably need to implement my own service to understand how to bind data to the query language. I will also admit that while it is meant to be easy to do each day’s exercise, I ended up falling behind and then going through half of it on the weekend.
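
The appeal is easiest to see in the query language itself; a sketch against a hypothetical blog schema:

```graphql
# Every type and field here can be autocompleted from the schema,
# and the response contains exactly the fields named, nothing more.
query RecentPosts {
  posts(first: 5) {
    title
    publishedAt
    author {
      name
    }
  }
}
```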

Hashicorp’s decision to change the license on Terraform has caused a lot of anguish on my social feeds. The OpenTerraform group has already announced that it will create a fork and is promising to have more maintainers than Hashicorp. To some extent the whole controversy seems like a parade of bastards, and it is hard to pick anyone as being in the right, but it makes most sense to use the most open execution of the platform (see also Docker and Podman).

In the past I’ve used CloudFormation and Terraform. If I was just using AWS I would probably be feeling smug within the security of my vendor lock-in, but Terraform’s extensibility via its provider mechanism means you can control a lot of services via the same configuration language. My current work uses it inconsistently, which is probably the worst of all worlds, but for the most part it is the standard for configuring services and does have some automation around its application. Probably the biggest advantage of Terraform is for people switching clouds (like myself), as you don’t have to learn a completely new configuration process, just the differences with the provider and the format of the stanzas.

The discussion of the change made me wonder if I should look at Pulumi again, as one of the least attractive things about Terraform is its bizarre status as not quite a programming language, not quite Go and not quite a declarative configuration. I also found out about Digger, which is attempting to avoid having two CI infrastructures for infrastructure changes. I’ve only ever seen Atlantis used for this so I’m curious to find out more (although it is such an enterprise-level thing I’m not sure I’ll do much more than have an opinion for a while).

I also spent some time this month moving my hobby projects from Dataset to basic Psycopg. I’ve generally loved using Dataset as it hides away the details of persistence in favour of passing dictionaries around. However it is a layer over SQLAlchemy, which is itself going through some major point revisions, so the library in its current form is stuck with older versions of both the data interaction layer and the driver itself. I had noticed that for one of my projects queries were running quite slowly, and comparing query times directly against the database with those arriving through the interface, it was notable that some queries were taking seconds rather than microseconds.

The new version of Psycopg comes with a reasonably elegant set of query primitives that work via context managers, and also allows results to be returned in a dictionary format that is very easy to combine with NamedTuples, which makes it quite easy to keep my repository code consistent with the existing application code while completely revamping the persistence layer. Currently I have replaced a lot of the inserts and selects, but the partial updates are proving a bit trickier, as Dataset is a bit magical in the way it builds up the update code. I think my best option would be to create an SQL builder library or adapt something like PyPika, which I’ve used in another of my projects.
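
The shape of the new code is roughly this (table and query are illustrative):

```python
import psycopg
from psycopg.rows import dict_row

# Context managers handle commit/rollback and cleanup
with psycopg.connect("dbname=notes") as conn:
    with conn.cursor(row_factory=dict_row) as cur:
        cur.execute(
            "SELECT id, title FROM posts WHERE created > %(since)s",
            {"since": "2023-01-01"},
        )
        rows = cur.fetchall()  # list of dicts, easy to map onto NamedTuples
```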

One of the things that has surprised me in this effort is how rarely the official Python documentation appears in Google search results. Tutorial-style content farms have started to dominate the first page of results and you have to add a search term like “documentation” to surface it now. People have been complaining about Google’s losing battle with content farms, but this is the first personal evidence I have of it. Although I always add “MDN” to my Javascript and CSS searches, so maybe this is just the way of the world now: you have to know what the good sites are to find them…

Work

July 2023 month notes

I’ve been playing around with the V language, which describes itself as an evolution of Go. This means letting go of some unnecessary devotion to imperative programming by allowing first-class map and filter, as well as using an option syntax for handling errors. The result is quite an interesting language that feels more modern and less quirky than Go but isn’t quite as full-on as Rust. I’ve enjoyed my initial experience but I haven’t been doing that much with it so far.

I’ve been continuing to experiment with Deno as well and I’m still enjoying it as a development experience, but I’m going to have to start doing some web development with it soon because, while it’s fine for exploratory programming, using Javascript for command-line and IO stuff is not great, even with async/await.

I’ve been re-reading Domain Driven Design by Eric Evans. I’d forgotten how radical this book was. The tiering and separation of the domain model from other kinds of code is inspiringly strict. I wanted to have an abstracted business logic implementation in my last business, where I was leading development, but we never really got there as it was hard to go back and remove the historical complecting.

I’ve been doing some shell scripting recently and using some commands that are new to me in addition to old faithfuls like sed; tr maps the characters in its first argument to the corresponding characters in its second, making it easy to replace full stops or spaces with hyphens.
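
For example, slugifying a title:

```sh
# map spaces and full stops to hyphens
echo "december 2023 month notes.txt" | tr ' .' '--'
# -> december-2023-month-notes-txt
```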

I’ve been trying a new terminal emulator, wezterm, after years of using Terminator. The appeal of wezterm is that it is cross-platform, so I can use the same key-strokes across OSX and Linux. Learning new keybindings is always difficult, but I’ve had no complaints about reliability and performance so far.

It was OKR time this month, something I haven’t done in a while. OKRs are far more popular than they are useful. They seem to work best in mature, profitable businesses that are seeking to create ambitious plans around sustaining innovation. Smaller, early-stage businesses still benefit from the objective alignment process but should probably be focused on learning and experimenting in the Lean Startup model. As part of this process I was also introduced to Opportunity Solution Trees, which in theory should have squared the circle on this problem, but in practice the two systems didn’t mesh. I think that was because the company OKRs were generated separately from the Solution Tree, so the activity in support of the objectives wasn’t driven by the solutions and experiments but was generated in response to the company objectives.

Work

How I have been using knowledge graphs

Within a week of using Roam Research’s implementation of a knowledge graph or Zettelkasten I decided to sign up, because there was something special in this way of organising information. My initial excitement was actually around cooking: the ability to organise recipes along multiple dimensions (a list of ingredients, the recipe author, the cuisine) meant you could both search and browse by the ingredients that you had or the kind of food you wanted to eat.

Since then I’ve started to rely on it more for organising information for work purposes. Again the ability to have multiple dimensions to things is helpful. If you keep some notes about a library for handling fine-grained authorisation, you might want to come back to them via the topic of authorisation, the implementation language or the authorisation model used.

But is this massively different from a wiki? Well, a private wiki with a search function would probably do all this too. Personally though, I never did set up something similar, despite experiments with things like Tiddlywiki. So I think there are some additional things that make the Zettelkasten actually work.

The two distinctive elements missing from the wiki setup are the outliner UI and the concept of daily notes. Of the two, daily notes is the simpler: these systems direct you to a diary page by default, giving you a simple context for all your notes to exist in. The emphasis is on getting things out of your head and into the system. If you want to cross-link or re-organise you can do so at your leisure, and the automatic back-referencing (showing you other pages that reference the content on the page you are viewing) makes it easy to surface daily notes that you haven’t consciously remembered you want to re-organise. This takes a good practice and delivers a UI that makes it simple. Roam also creates an infinite page of daily notes that allows you to scroll back without explicitly navigating to another page. Again nothing complicated, but a supportive UI feature that simplifies doing the right thing.

The outliner element is more interesting and a bit more nuanced. I already use (and continue to use) an outliner in the form of Workflowy. More specifically, I find it helpful for outlining talks and presentations, keeping meeting notes and documenting one-to-ones (where the action functionality is really helpful to differentiate items that need to be actioned from notes of the discussion): the kind of things where you want to keep a light record with a bit of hierarchical structure and some light audit trail on the entries. I do search Workflowy for references, but I tend to access it in a pretty linear way and rarely without a task-based intention.

Roam and Logseq work in exactly the same way; indeed many of the things I describe above are also use-cases for those products. If I wanted to I could probably consolidate all my Workflowy usage into Roam, except for Roam’s terrible mobile web experience. However there is a slight difference, and that is due to the linking and wiki-like functionality, which allows a more open discovery journey within the knowledge graph. Creating it and reading it, I have found, are two different experiences. I think I add content in much the same way as in an outliner, but I don’t consume it the same way. I am often less task-orientated when reviewing my knowledge graph notes, and as they have grown in size I have had some serendipitous connection-making between notes, concepts and ideas.

What the outliner format does within the context of the knowledge graph is provide a light way of structuring content so that it doesn’t end up as the massive wall of text that a wiki page sometimes can. In fact it doesn’t really suit a plain narrative set of information that well, and I use my own tool to manage that need, linking to the content in the knowledge graph where relevant.

In the past I have often found myself vaguely remembering something that a colleague mentioned, a link from a news aggregator site or a newsletter, or a Github repo that seemed interesting. Rediscovering it can be very hard in Google if it is neither recent nor well-established; often I have ended up reviewing and searching my browser history in an almost archaeological attempt to find the relevant content. Dumping interesting things into the knowledge graph has made them more discoverable as individual items, but it also adds value to them as you gain a big-picture understanding of how things fit together.

It is possible to achieve any outcome through any misuse of a given set of tools, but personal wikis, knowledge graphs and outliners all have strengths that are best when combined as much as possible into a single source of data, with dedicated UIs for specific, thoughtful task flows over the top. At the moment there’s no one tool that does it all, but the knowledge graph is the strongest data structure, even if the current tools lack the UI to bring out the best from it.

London, Programming, Web Applications, Work

Halfstack on the Shore(ditch) 2022

This is the first time the conference has been back at Cafe 1001 since the start of the Pandemic and my first HalfStack since 2021’s on the Shore event.

In some ways Halfstack can seem like a bit of an outlandish conference but generally things that are highly experimental or flaky here turn up in refined mainstream forms three to five years later. Part of the point of the event is to question what is possible with the technologies we have and what might be possible with changes that are due in the future. Novelty, niche or pushing the envelope talks are about expanding the conversation about what is possible.

The first standout talk this year was by Stephanie Shaw about Design Systems. It makes the absurdist argument that visual memes meet all the criteria to be a design system before looking at the properties of a good design system that would disqualify memes. The first major point that resonated with me was that design systems are hot, and lots of people say they have one when what they actually have are design principles, a component library or an illustration of UI variant behaviour.

I was also impressed that the talk had a slide dedicated to when a design system would be inappropriate. Context always matters when implementing ideas in organisations, and it is important to understand what the organisation needs and what capabilities are required to get value from an idea. Good design systems provide a strong foundation for rapid, consistent development and should demonstrate a clear return on the investment in them.

One of the talks that has stayed with me longest was about things that can be done now. I’ve seen Chris Heilmann talk about dev tools at previous conferences, but this time the frame of the talk was different: using the browser’s dev tools to make the web sane again. He reminded me that you can use the dev tools to edit the page. Annoying pop-up? Delete it! Right-click hijacked? Go into the handler bindings and unbind the custom listener. Auto-playing video? Change its attributes or, again, just delete the whole thing. He also explained some new things that I wasn’t aware of, such as the ability to take a screenshot of a specific node from within the DOM inspector. I’ve used that a few times since in my work.

There was an impromptu talk whose grounding context was a little hard to follow (maintaining peer-to-peer memes in a centralised internet apocalypse, I think) but which was about encoding images into QR codes, and it included an explanation of how QR codes actually work and encode information (something I didn’t know). The speaker took the image data, transformed it into a series of QR codes, then had a website that displayed the QR codes in sequence and a web app that used a phone camera to scan the codes and reassemble the image locally. The scanning app was also able to understand where in the sequence each QR code was, which created a kind of scanning-line effect as it built up the image and was very cool to watch.

There were three talks that involved a significant amount of simultaneous interaction, each using slightly different methods; clearly the theme was having many people together on a webpage interacting in near real time.

The first thing to say is that I took a decent but relatively low-powered Pinebook laptop to the conference, as I thought I would just need something simple to take notes and look things up on the internet, maybe code along with some Javascript. All of the interactive demos barely worked on it, and the time to become active was significantly longer than for attendees with the latest Macs. I think the issue was a combination of really substantial downloads (which appeared not to be cached, so refreshing the browser was fatal) and just massive CPU requirements in the local synchronisation code.

The first was by a pro developer relations person, Jo Franchetti, who works for Ably and who used the Ably API. Predictably this was the best working (and looking) demo, with a fun Halloween theme around the idea of an ouija board or, more technically, trying to spell out messages by averaging all the subscribers’ mouse movements to create a single movement across the screen. However, even using a commercial API, with probably no more than 25 connections and a single-screen UI, my laptop still ground to a halt and had significant lag on the animations. It did look great projected on the big screen though.

Jo’s talk introduced me to an API I hadn’t heard of before: scrollTo (part of a family of scrolling APIs). This is an example of how talks about things on the edge of the possible often come back to things that are more practical day to day.
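
For reference, the call itself is tiny (the selector is illustrative):

```js
// Smoothly scroll the viewport to a given element's position
const target = document.querySelector('#speakers');
window.scrollTo({ top: target.offsetTop, behavior: 'smooth' });
```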

James Allardice and Ross Greenhalf had the least successful take on the multiuser theme, and in terms of presentation style seemed to be continuing an offstage squabble in front of everyone. I got the impression that they were very down on what they had been able to achieve and had perhaps been hoping for a showcase example to promote their business.

Primarily they didn’t get this because they were bizarrely committed to AWS Lambda as the deployment platform. Their idea was a multiplayer version of Pong, and it kind of worked, except the performance was terrible (for everyone this time, not just me). This in turn actually created a more fun experience than the one they had intended to build, as the lag meant you needed to be quite judicious about when you sent your command (up or down) to the server: there was a tendency to overshoot, with too many people sending commands as the ball approached and then more as they waited for the first one to take effect. You needed to slow down your reaction cycle and try to anticipate what other people would do.

The game also only lasted for the duration of a single Lambda execution run, as the whole thing ran in the execution memory of a single Lambda instance. This was a consequence of the flawed design, but again it wasn’t hard to imagine how Lambda could be quite effective here as long as you’re not using web sockets for the push channel. It feels like this kind of thing would probably be pretty trivial in something like Elixir in a managed container but was a bit of an uphill battle in a Javascript monolith Function as a Service.

The most creative multi-user demo was by Mynah Marie (aka Earth to Abigail, who has performed at previous Halfstacks), who used Estuary to create a 15-person online jam session. It was surprisingly harmonious for a large group with little ability to monitor your own sound (I immediately had more empathy for any musician who has asked the desk for less drums in their monitor). However synchronisation was again a big problem: not only did other people paste over my loops, but after I left the session one of my loops remained stubbornly playing until killed by the admin, despite my not being able to access the session again; I was given a new user identity and no-one seemed able to reconnect with the orphan session.

Probably the most mindblowing technical talk was by Ulysses Popple about his tool Nodessey, which is both a graph editor or notebook and a way to feed values into nodes that can then visualise the input they receive from their parent nodes. It reminded me a bit of PureData. I found following the talk, which was a mixture of notes and live-coded examples, a bit tricky; it’s an unusual design, and trying to follow the data structure while also following the implementation was difficult for me.

One thing I found personally interesting is that Nodessey is built on top of a minimal framework called Hyperapp, which I love but have never seen anyone else use. I now see that I have very much underestimated the power of the framework and I want to start trying to use it more again.

Michele Riva gave a talk about the use of English in programming languages which had a helpful introduction to programming languages created in languages other than English. As an English speaker you never really need to leave the US-led universe of English-based languages, so it was interesting to see how other language communities have approached making programming accessible to non-English speakers. There was a light touch on non-alphabetic and symbolic languages like J (and of course brainfuck).

Perhaps the most practical talk of the conference was by Ante Barić, around browser extensions. I’ve found these really valuable for creating internal organisation tooling in a very lightweight way, but as Chris Heilmann reminded us in his talk, too many extensions end up hammering browser performance as they all attempt to intercept the network requests and the render cycle. The talk used a version of Clippy to create annoying commentary on the websites you were visiting, but it had useful insight into what is happening with browser extensions and future plans from both the Google and Mozilla teams, as well as practical ways to build and use them.

Ante mentioned a tool that I was previously unaware of called web-ext, a Mozilla project which might be able to build out Chrome extensions in the future and which gives a simplified framework for putting together extensions.

General notes

Food and drink are available whenever you want, just by showing the staff your conference lanyard. Personally I think it is great when conferences are flexible about letting people eat when they want, avoiding the massive queues for food that typically happen when you try and cram an entire conference into a buffet in 90 minutes. I think it also helps include people whose eating patterns might not easily fit into scheduled tea and lunch breaks. It also makes it feel less like school.

In terms of COVID risk, the conference was mostly unmasked and since part of the appeal is the food and drink I felt like I wasn’t going to be changing my risk very much by wearing a mask during the talk sections. The ventilation seemed good (the room could be a bit cold if you were sitting in the wrong place) and there was plenty of room so I never had to sit right next to someone. This is probably going to remain a conference that focuses on in-person socialising and therefore isn’t going to appeal to everyone. Having a mask mandate in the current environment would take courage. The open air “beach” version of the conference on the banks of the Thames would probably be more suitable for someone looking to avoid indoor spaces.

Going back?

Halfstack is a lot of fun and I’ve booked my super early-bird ticket for this year. I think it offers a different balance of material compared to most web and Javascript conferences. This year I learnt practical things I can bring to my day job and was impressed by what other people have been able to achieve in theirs.
