Programming

How to call instance methods by name on a class in Typescript

I recently wanted to parameterise a test so that it included the method to test as a parameter.

This is easy in Javascript:

const myClass = new MyClass();

['methodA', 'methodB'].forEach((methodName) => myClass[methodName]());

But when you try this naively in Typescript it fails with a message that the class cannot be indexed by the type string.

The class’s method names actually form a type that the index expression needs to satisfy, and this led me to the keyof operator, which produces exactly that type.

As I was working on a test I didn’t need a strict type check, so I could simply declare my string as keyof MyClass and this resolved the type complaint.
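Sketched out, with an illustrative class (the real class and method names were different):

```typescript
// Hypothetical class standing in for the one under test.
class MyClass {
  methodA(): string { return "A"; }
  methodB(): string { return "B"; }
}

const myClass = new MyClass();

// Typing the array as (keyof MyClass)[] satisfies the index check
// that a plain string[] fails.
const methodNames: (keyof MyClass)[] = ["methodA", "methodB"];
const results = methodNames.map((name) => myClass[name]());
```

Because every key of this class is a method, the indexed access is itself callable and the compiler is happy.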

If the code was actually in the production paths then I would be a bit warier of simply casting, and would probably try to avoid dynamic programming because it feels like working around the type-checking that I wanted from using Typescript in the first place.

I’m not sure how I expected this to work, but I was kind of expecting the type-checker to be able to use the class definition to make the check, rather than a more generic reflection mechanism that works for plain objects too but at the cost of more annotation of your intent.

Standard
Work

January 2024 month notes

Water CSS

I started giving this minimal element template a go after years of using various versions of Bootstrap. It is substantially lighter in terms of the components it offers; the navigation bar is probably the one component that I definitely miss. The basic forms and typography are proving fine for prototyping basic applications though.

Node test runner

Node now has a default test runner and testing framework. I’d been eager to give it a go as I’ve heard that it is both fast and lightweight, avoiding the need to select and include libraries for testing, mocking and assertions. I got the chance to introduce it in a project that didn’t have any tests and I thought it was pretty good, although its default text output felt a little unusual and the alternative dot reporter might be a bit more familiar.

It’s interesting to see that the basic unit of testing is the assertion, something it shares with Go. It also doesn’t support parameterised tests, which again is like Go, which has a pattern of table-driven tests implemented with for loops, except that Go allows more control of the dynamic test case naming.

I’d previously moved to the Ava library and I’m not sure there is a good reason not to use the built-in alternative.

Flask blueprints

In my personal projects I’ve tended to use quite a few cut-and-paste modules, and over the years they drift and get out of sync, so I’ve been making a conscious effort to learn about and start adopting Flask Blueprints. Ultimately I want to turn these into personal module dependencies that I can update once and use in all the projects. For the moment though it is interesting how the blueprints format is pushing me to do some things better, like logging (to understand what is happening in the blueprint), and to structure the different areas of the application so that they are quite close to Django apps. Various pieces of functionality are now starting to be associated with a URL prefix, which makes it a bit easier to create middleware that is registered as part of the Blueprint rather than relying on imports and decorators.

Web components

I’ve been making a bit of progress with learning about web components. I realised that I was trying to do too much initially which is why they were proving complicated. Breaking things down a bit has helped with an initial focus on event listeners within the component. I’m also not bringing in external libraries at the moment but have got as far as breaking things up into [ESM modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) which has mostly worked out so far.

Standard
Programming, Work

December 2023 month notes

Web Components

I really want to try and understand these better as I think they are offering a standards-based, no-build solution for components combined with a better way of dropping in lightweight vanilla JS interactivity to a page where I might have used AlpineJS before now.

I’m still at the basic learning stage but I’ve been hopping around the Lean Web Club tutorials to get a sense of the basics. One of the things that is already interesting is that Web Components wrap their child HTML in quite a clear and scoped way, so you can use them quite easily to mix server-rendered content with runtime dynamic content. I haven’t found an elegant way to do that with other frameworks.

Scoping and Shaping

I attended an online course by John Cutler which was a pretty good introduction to the idea of enabling constraints. Most times I like to attend courses and classes to learn something, but every now and then it feels good to calibrate on what seems obvious and easy and understand other people’s struggles with what seems like basic stuff.

A few takeaways: being a good stakeholder is an underrated skill and being clear about the boundaries of what you’re willing to accept is important to allow teams working on problems to be successful. If someone says they can’t work with your constraints then it’s not a good fit; if no-one can work with your constraints then you either need to do the work yourself or give up on it.

The most insightful piece of the meeting for me came around the psychology of leaders in the new economy, where profits are more important than growth and experimentation. John’s theory is that this pressure makes it harder for executive teams to sign off on decisions or to give teams a lot of leeway in approaching the problem. To provide meaningful feedback to executing teams, senior stakeholders feel they need more information and understanding about the decisions they are making, and the more hierarchical an organisation the more information needs to go up the chain before decisions can come back down.

Before zero interest rates there used to be a principle that it wasn’t worth discussing something that wouldn’t make back the cost of discussing it. Maybe rather than doing more with less we should be trying to get back to simple not doing things unless they offer a strong and obvious return.

How I learned to love JS classes

I have never really liked or seen the point in Javascript’s class functionality. Javascript is still a prototype-based language so the class syntax is basically complex syntactic sugar. React’s class-based implementation was complex in terms of how the class lifecycle and scope interacted with the component equivalents, so I was glad to see it replaced by function components. However classes are pretty much the only way that you can work with Web Components, so I’ve been doing a lot more with them recently than previously.

I’ve also been dropping them into work projects, although it raises some interesting questions when you’re using Typescript, as the difference between a class and an interface is quite blurry there. Presumably classes should either have static elements or encapsulate behaviour to make the inheritance meaningful; otherwise it’s simply an interface that the implementing class needs to provide.

Standard
Programming

Halfstack on the Shore(ditch) 2023

Self-describing as an “anti-conference”, or the conference that you get when you take all the annoying things about conferences away, it is probably one of the most enjoyable conferences I attend on a regular basis. This year it was in a new venue quite close to the previous base at Cafe 1001, which was probably one of my favourite locations for a conference.

The new venue is a small music venue and the iron pillars that fill the room were awkward for sightlines until I grabbed a seat at the front. The bar opened at midday and was entirely reasonable, but the food was not as easily available as before; you were still able to walk to the nearby cafe and show your conference badge if you wanted.

Practical learnings

Normally I would say that HalfStack is about the crazy emergent stuff so I was surprised to actually learn a few things that are relevant to the day job (admittedly I have been doing a lot more backend Javascript than I was previously). I was quite intrigued to see some real-world stats showing that Node’s in-built test runner is massively faster than Jest (which maybe should not be so surprising as Jest does some crazy things). I’ve been using Bun recently, which does have a faster runner, and it makes TDD a lot more fun than with the normal Jest test runner.

I also learnt that NODE_ENV is used by library code to conditionally switch on paths in their code. This is obviously not a sound practice but the practical advice was to drop variables that map to environments completely and instead set parameters individually as per standard 12 factor app practice. I think you can refine that with things like dotenv but I’m basically in agreement. Two days later I saw a bunch of environment-based conditional code in my own workplace source code.
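The shape of the fix is roughly this (a sketch; the variable names are mine, not from any real project) — each behaviour reads its own setting with a sensible default, rather than being inferred from a catch-all environment name:

```javascript
// Each concern gets its own variable instead of branching on NODE_ENV.
function loadConfig(env = process.env) {
  return {
    logLevel: env.LOG_LEVEL ?? "info",
    prettyErrors: env.PRETTY_ERRORS === "true",
    cacheTtlSeconds: Number(env.CACHE_TTL_SECONDS ?? 60),
  };
}

const config = loadConfig();
```

The nice property is that any combination of settings can be exercised in any environment, including locally, without pretending to "be" production.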

It was also interesting to see how people are tackling their dependency testing. It felt like the message is that your web framework should come with mocks or stubs for testing routing and requests as standard and that if it doesn’t then maybe you should change your framework. That feels a bit bold but that’s only because Javascript is notorious for having anaemic frameworks that offer choice but instead deliver complexity and non-trivial decisions. On reflection it seems like having a built-in unit testing strategy for your web framework seems like a must-have feature.

Crazy stuff

There was definitely less crazy stuff than in previous years. A working point of sale system including till management based on browser APIs was all quite practical and quite a good example of why you might want USB and serial port access within the browser.

There was also a good talk about converting ActionScript/Flash to Javascript and running emulation of old web games, although that ultimately turned out to be a way of making a living, as commercial games companies wanted to convert their historic libraries into something that people could continue to use rather than being locked away in an obsolete technology.

The impact of AI

One of the speakers talked about using ChatGPT for designing pitches (the generated art included some interesting interpretations of how cat claws work and how many claws they have) and I realised listening to it that for some younger people the distilled advice and recommendations that the model has been fed is exactly the kind of mentoring that they have desired. From a negative perspective this means an endless supply of non-critical ideas and suggestions that require little effort on the user’s part; just another way to avoid having to do some of the hard work of deliberate practice. On the positive side, a wealth of knowledge is now available to the young in minutes.

While I might find the LLMs trite, for people starting their careers the advice offered is probably more sound than their own instincts. There also seems to be some evidence appearing that LLMs can put a floor under poor performance by correctly picking up common mistakes and errors. At a basic level they are much better at spelling and grammar than non-native speakers, for example. I don’t think they have been around long enough for us to have reliable information though, and we need to decide what basic performance of tasks looks like.

I wonder what the impact will be on future conference talks as ChatGPT refines people to a common set of ideas, aesthetics and structures. Probably it will feel very samey and there will be a desire to have more quirky individual ideas. It feels like a classic pendulum swing.

Big tech, big failings

Christian Heilmann’s talk was excoriating about the failures of big tech during the acute phase of the COVID pandemic and, more generally, about being unable to tackle the big problems facing humanity, instead preferring to focus on fighting for the attention economy and hockey stick growth that isn’t sustained. He also talked about trying to persuade people that they don’t have to work at FAANGs to be valid people in technology.

His notes for this talk are on his blog.

Final thoughts

ChatGPT might need me to title this section as a conclusion to avoid it recommending that I add one. HalfStack this year happened at a strange time for programming and the industry. There wasn’t much discussion of some topics that would have been interesting around the NodeJS ecosystem, such as alternative runtimes and the role of companies, consultancy and investment money in the evolution of that ecosystem. The impact of a changed economic environment was clear and in some cases searing, but it was a helpful reminder that it is possible to find your niche and make a living from it. You don’t necessarily need to hustle and try to make it big unless that is what you really want to do.

The relaxed anti-conference vibe felt like a welcome break from the churn, chaos and hamster wheel turning that 2023 has felt like. I’ve already picked up my tickets for next year.


Standard
Work

Interviewing in software 2023

With the waves of layoffs and the large numbers of developers looking for work there is a lot of frustration and venting about interviewing and hiring processes. I have my own horror stories and that is part of the problem of writing about this topic. Interviewing generally, and especially in tech, is broken, but writing about it when you’re going through it looks like the complaints of someone inadequate to the requirements of the role. Even when you have a job, where is the wisdom in criticising the process that brought you that job?

The recruitment process for developers has been pretty terrible for years, but at least during the hiring boom the process tended to err on the side of taking a chance on people. Now employers seem to feel pretty confident that whatever bar they set they will find a suitable candidate, or whatever conditions they apply will be accepted. That means that the reasons you get back for not proceeding after an interview are often pretty flimsy. The interviewers are the gatekeepers into the roles and they don’t really have to justify themselves too much.

The fundamental problem

At its heart though the problem has always been, and remains, that most people are really bad at interviewing. People often spend far more time interviewing others than going through interviews themselves. When conducting interviews they are mostly isolated from feedback unless another interviewer objects to what they are doing.

Therefore virtually every developer I’ve known who does interviews thinks they are really good at interviewing (including me, I’m really good at conducting interviews (I’ve also had some feedback from agents that I’m really terrible at interviewing, who are you going to believe?)). However most of them are really bad. They don’t really know how to frame open questions, they don’t stick to the scripts or they stick too literally to them, they don’t use any scoring criteria or objective marking, and they often freestyle some genuinely awful questions.

One of my favourite pieces of recent interview feedback was that I didn’t have a lot of experience in a particular area. Now while it might be true that I didn’t exhibit much evidence of that experience in the interview I would also have found it easier to do that if I had been asked questions about it. If an area of expertise is vital to the role then you need to have formulated some questions about it and also importantly make sure you allocate enough time in the interview to ask them. Flirting may require mastery of the tease but interviewing usually benefits from a very direct approach.

People who do interviews need to be trained in doing interviews. In an ideal world that would also mean doing some mock interviews where it is known whether the candidate has the skills to do the job. Their interviewing needs to be reviewed by managers from time to time and the easiest way to do that review is by having managers and managers of managers in the actual interviews.

In a previous role some engineering managers who reported to me did a little live roleplay of what they thought a good interview would look like, one taking the part of the interviewer and one the interviewee. Naturally the stakes were low but the exercise gave a template for the rest of the interviewers to set their expectations and give them a sense of where we thought good was.

Interviews, interviews, interviews

Employers’ confidence in being able to pick and choose is nowhere more evident than in having loads of interview rounds. For more responsible roles I get that you often have to meet up and down the chain along with peers and stakeholders. In recent processes though I wouldn’t have been surprised to be interviewed by the office cat, probably just to see if I was desperate enough to put up with this kind of treatment. A personality fit with key stakeholders is important, but I feel that previously this was done in a single call with multiple people at a time.

Candidate experience surveys

How can you try to improve the interviewing process and allow candidates to provide the feedback to the interviewers that they so desperately need? Some places have used candidate surveys. I’ve tried using these myself and occasionally got some good feedback, in particular on how someone felt about the way we communicated with them as a corporate body. However as a candidate (and in the current economy) I would never fill one out, since it doesn’t help you secure an offer and in most cases seems actively risky: you can either give a high rating and look like a kiss-ass, or a low rating that will automatically put the organisation on the defensive, especially the people who interviewed you.

Even after accepting a job I find it really hard to talk to the people who interviewed me about the interviewing experience. In some ways the only safe time to give feedback on the interview process is after you’ve received an offer and have decided not to accept it. At that point it truly does depend on how willing to learn an organisation is.

At a previous role I added a closing question to our script: “What question do you think we should have asked you?”. This was originally intended as a way for candidates to draw attention to experience that they thought was relevant (even if maybe our scoring system did not take it into account). For a few candidates though it became an opening into discussing the interview process and their thoughts on it. It is the closest thing to an effective feedback mechanism I’ve been able to find.

To sum it up

Interviewing generally sucks; right now it sucks even more because, without the benefit of the doubt, bad interviewing practices make it difficult to succeed as a candidate and enjoy the experience. A negative candidate experience causes brand issues for an employer, and while they may not care about it now, if the market tightens or visas stop being so easy to acquire then it might start to matter again. As an industry we should do better and genuinely try to find ways to improve how we find a fit between people and roles and make the hiring process less hateful.


Standard
Work

November 2023 month notes

The end of November marks the start of the Christmas corporate social hospitality season. It is easy to be cynical but it is nice to catch up with people and find out what has been happening with them.

Bun

We started using Bun at work for a project, more as a CLI build tool than a framework and runtime. It seems reasonably effective and has quite a few of the features that were interesting in Deno. Deno has a bit more ambition and thought in its overall project whereas Bun seems much more focused on trying to get itself embedded in projects. It reminds me quite a lot of Yarn and I think we may want to move to something more open in the future.

In the meantime though I have to admit that having a fast test runner is a joy compared to Jest. I attended Halfstack London this month and one of the talks there gave an illustration of how very slow Jest is and made the recommendation to use Node’s native runner which is an interesting alternative that I might try for my own projects.

AssemblyScript

I’ve been doing the Exercism 12 in 23 challenge (the standard “work with twelve languages in a year” but using Exercism’s problems as a proof of progress). It has thrown up a few interesting things already. I was surprised at how much I liked working with Raku (Perl was one of the first languages I learnt) and I should probably write up something about it. This month was assembly however and unlike most of the other languages this was an area I’ve never really ventured into. My first language was BASIC and I might have POKE’d and PEEK’d but I’ve never written any assembler.

I chose to tackle WebAssembly, which seemed like it might have some work advantages if I knew more about it. WebAssembly comes with a textual representation called WAT that is made up of s-expressions, which looks quite elegant (especially if you are a LISP fan). However trying to write raw assembler felt too challenging, so I chose to try AssemblyScript instead, a Typescript-style language which compiles to WASM and WAT. It also allows you to write tests in Javascript which import from the compiled output, which is quite neat (I much prefer writing tests in dynamic rather than static languages).

It made doing the number-based exercises relatively straightforward. For a few of the problems I did some hand-tweaking of things like parameter calling, and while AssemblyScript uses native Math for things like square roots, I ended up manually creating a sequence to calculate the hypotenuse of a triangle to avoid library calls, which seemed tricky to match between the two execution environments.
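The hand-rolled approach looks something like this (shown here in plain Typescript rather than AssemblyScript, and with an illustrative Newton-Raphson loop rather than my exact code):

```typescript
// Newton-Raphson square root, so the hypotenuse needs no Math.sqrt call.
function sqrtApprox(n: number): number {
  if (n === 0) return 0;
  let guess = n;
  // Each iteration roughly doubles the number of correct digits.
  for (let i = 0; i < 50; i++) {
    guess = (guess + n / guess) / 2;
  }
  return guess;
}

function hypotenuse(a: number, b: number): number {
  return sqrtApprox(a * a + b * b);
}
```

Fifty iterations is overkill for convergence, but it keeps the loop structure identical regardless of the input, which is easier to reason about at the assembly level.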

While doing this I did start to develop a sense of how assembly and the stack works but I feel I could probably do with a bit more of a structured introduction than trying to solve quite high-level problems with low-level tools. Overall I found it a good stretching exercise.

MDN’s documentation for WebAssembly is excellent and I probably learnt most about the way assembler works by messing around with their executable examples. Not only is this a great documentation format, but I don’t think I would have completed the exercises without the explanations in the documentation.

Dependabot bundling

The thing that changed my work life this month was grouping dependencies. Javascript projects tend to have a lot of dependencies and often in the build step changes in these dependencies are pretty meaningless (type files or compilation edge-cases) but of equal effort to apply as security updates.

You can group dependency updates by expressions but more usefully you can group development dependencies (where supported by the dependency configuration) into a single update. Generally if you have a test suite and the build passes you can apply these altogether and have the effort of a single release for multiple changes.
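In dependabot.yml the grouping is a small addition; this sketch assumes an npm project and an arbitrary group name:

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      # All development dependencies land in one combined PR.
      dev-dependencies:
        dependency-type: "development"
```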

There’s sometimes an argument that grouping too many changes together means that one breaking change blocks all the changes. So far I haven’t seen that in practice because the volume of small changes in Javascript is high but the change impact is very low.

The grouped PR is also sensibly automatically managed, with the group being added to as needed. Security updates are always broken out into their own PR so it is much easier to see priorities when looking at the PR list.

Standard
Books, Work

Book review: The Logic of Failure

This book was originally published in German at the end of the 80s. It describes the results of computer-based simulations of situations such as running a town or a sub-Saharan country. All the situations were fictional but based on real-world scenarios and backed by a rich simulation model. The book is unable to describe how to succeed, but instead focuses on patterns of behaviour that were frequently seen when people experienced failure, often catastrophic failure, in the simulations.

Misunderstanding complex systems with exponential behaviour

The book offers a succinct and insightful picture of factors that are better understood today but often not in combination and not relating to issues of leadership and management. In no particular order these include the very real problem of differentiating linear and exponential processes. The human mind seems to bias towards linear models and struggles to accurately predict the outcomes of changes in the rate of change itself. Of course this situation is even harder at the start of the processes because the two look the same and therefore if you don’t have a clear understanding of the underlying processes there is no way to predict whether something will be linear or exponential.
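A tiny worked example of why the two are so hard to tell apart early on: a series that adds 0.1 per step and one that grows 10% per step are identical after one step and wildly different after fifty.

```typescript
// Both series start at 1 and look the same at step 1.
const linear = (t: number): number => 1 + 0.1 * t; // adds 0.1 each step
const exponential = (t: number): number => 1.1 ** t; // multiplies by 1.1 each step

// step 1:  both are 1.1
// step 50: linear is 6, exponential is roughly 117
```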

Failure to understand exponential growth is one challenge, but exponential collapse is even harder for our minds to predict and model. The chapter on predator and prey models was particularly fascinating, as often there is massive growth in the population size of the predators before a huge collapse in their numbers. If a metric has been exponential and then becomes linear, without a deep understanding of the processes at work you can’t tell whether you have encountered a plateau or a precipice.

The book also suggests that individual decision makers can only rarely hold a complex model in their minds; participants in the study would sometimes deny the information given to them in the briefing once they had developed their own incorrect theories of how the simulation worked.

On the difficulty of being successful

One of the reasons the book can’t draw definite conclusions about what strategies are successful is that there is no universal strategy that succeeds in all circumstances. For example, generally people who asked more questions after each step in the simulation were more successful than those who didn’t; however at some point all the successful participants asked fewer questions and acted decisively in ways that advanced their goals. They seemed to better balance their need for information against the need to act and observe, and were able to tune the mix of activities in an optimal way.

Experience generally seemed helpful but there is a warning about what the book calls “methodism” which I think might have other names now. What it describes though is the misapplication of prior knowledge or tactics. People look for a few identifying characteristics in the situation that match their experience and then they apply techniques or solutions that have worked for them in the past. In doing so they can ignore information in the current situation that contradicts the likelihood that the previous solution is appropriate.

The book uses “elaboration” as a way to measure whether someone’s proposed solution is based on the situation they are presented with rather than one they have encountered before. Elaborated solutions include principles guiding the attempted solution and potential compromises in executing it as well as mitigations against the failure of the attempted solution.

Essentially people who are more likely to be successful use their previous experience to inform their approach to a new problem but are rigorous in their analysis of the new situation and prepared to adapt previously successful approaches to the new situation.

Unsafety buffers

One very practical takeaway was around the use of buffers in safety procedures. Typically when designing a robust procedure you want to allow for issues in following the procedure, or in the timing of its execution, and so on. This means that most safety procedures tell you to act early, at a point when the system is quite far from failure and the capacity of the system is quite high. Ironically this means that if you perform the procedure late or incompletely then quite often it will still work.

The book gives the example of Chernobyl as a place where safety procedures were routinely ignored, abbreviated or circumvented because nothing bad ever happened when they were. If you draw the conclusion that the safety procedures are unnecessary or their buffer values are too high and you can use your own heuristically determined values instead then you start down the path to disaster.

It is important to remember that any conservative safety procedure is conservative to give it the maximum likelihood of working in a range of circumstances. One that has a narrow range of applicability is less likely to result in a safe outcome.

As the book points out it is impossible for individual humans to learn from catastrophic failures. Collectively though we should be studying and drawing conclusions from the worst outcomes that we have not personally experienced.

Defining success and avoiding failure

One key point I took away from the book is that while it talks about failure and success, even the successful outcomes involved trial and error and contained points where things were not as good as they could have been. Most of the outcomes described as successful involved the participant having an idea of some new stable situation that improved aspects of the current one and working methodically towards it. This is quite a modest definition of success compared to the way the word is commonly used in business, for example.

The terrifying thing about the book is that in most of the simulations the virtual people involved would probably have been better off if nothing had been done. The scenario usually starts in a stable situation that is sub-optimal and on my reading it seems the majority of participants took that situation and turned it into a hellscape of unsustainable growth or development followed by disaster and a collapse of society to levels below the starting point.

In many ways the book is a justification of small ‘c’ conservatism: sustainable improvements are hard to achieve and the advantage of time-tested solutions is that they have been validated under real-world conditions. The counter-argument though is that improvements are possible, and not to seek them out of fear is also an unhappy situation.

This is a small book and you can read the essence of its content in this paper. Like all the best books, its ideas have an impact out of proportion with the amount of time it takes to explain them.

I think I first found out about this book via a post from Tim Harford, which has buying links if you’re interested (or details to order from your local bookshop).

Standard
Programming

Enterprise programming 2023 edition

Back in the Naughties there were Enterprise Java Beans, Java Server Pages, Enterprise Edition clustered servers, Oracle databases and, shortly thereafter, the Spring framework with its dependency injection wiring. It was all complicated, expensive and, to be honest, not much fun to work with. One of the appeals of Ruby on Rails was that you could just get on and start writing a web application rather than staring at initialisation messages.

During this period I feel there was a big gap between the code that you wrote for a living and the code you wrote for fun. Even if you were writing on the JVM you might be fooling around with Jython or Groovy rather than a full Enterprise Java Bean. After this period, and in particular post-Spring-in-everything, I feel the gap between hobby and work languages collapsed. Python, Ruby, Scala, Clojure: all of these languages were fun and were equally applicable to work and small-scale home projects. Then, with Node gaining traction in the server space, the gap between the two worlds collapsed pretty dramatically. There was a spectrum that started with an inline script in an HTML page and ran through to a server-side API framework with pretty good performance characteristics.

Recently though I feel the pendulum has been swinging back towards a more enterprisey setup that doesn’t have a lot of appeal for small project work. It often feels that a software delivery team can’t even begin to create a web application without deploying on a Kubernetes cluster with Go microservices being orchestrated with a self-signing certificate system and a log shipping system with Prometheus and Grafana on top.

On the frontend we need an automatically finger-printing statically deployed React single-page app, ideally with some kind of complex state management system like sagas or maybe everything written using time-travelable reactive streams.

Of course on top of that we’ll need a design system with every component described in Storybook and using a modular class-based CSS system like Tailwind, or otherwise a heavyweight styled component library based on Material design. Bonus points for adding React Native into this, plus a CI/CD system that ideally mixes a task server with a small but passionate community and a home-grown pipeline system. We should also probably use a generic build tool like Bazel.

And naturally our laptop of choice will be Apple’s OSX with a dependency on XCode and Homebrew. We may use Github but we’ll probably use a monorepo along with a tool to make it workable like Lerna.

All of this isn’t much fun to work on unless you’re being paid for it and it is a lot of effort that only really pays off if you hit the growth jackpot. Most of the time this massive investment in complex development procedures and tooling simply throws grit into the gears of producing software.

I hope that soon the wheel turns again and a new generation of simplicity is discovered and adopted and that working on software commercially can be fun again.

Standard
Work

October 2023 month notes

I’ve been learning more about Postgres as I have been moving things from Dataset to Psycopg3. It is kind of ridiculous what you can do with it when you strip away the homogenising translation layer of things like ORMs. Return a set of columns from your update? No problem. Upsert? Straightforward.

However, after writing an ON CONFLICT clause I received a message that no conflict was possible on the columns I was checking, and I discovered that I had failed to add a primary key to the table when I created it. It probably didn’t matter to the performance of the table, as it was a link table with indexes on each lookup column, but I loved that the query parsing was able to do that level of checking on my structure.
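As a sketch of the features involved (the table and column names here are invented for illustration, not from my actual schema), the constraint that ON CONFLICT depends on, the native upsert and the RETURNING clause look like this:

```sql
-- ON CONFLICT needs a unique constraint (or primary key) it can target;
-- this is the piece I had forgotten on my link table.
ALTER TABLE post_tags ADD PRIMARY KEY (post_id, tag_id);

-- Native upsert: insert, or update on conflict, and hand back the
-- resulting row in the same statement.
INSERT INTO post_tags (post_id, tag_id, added_at)
VALUES (1, 2, now())
ON CONFLICT (post_id, tag_id)
DO UPDATE SET added_at = EXCLUDED.added_at
RETURNING post_id, tag_id, added_at;
```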

Interestingly, the previous ORM statement I was replacing had a conflict clause and had never had an issue, so presumably it was doing an update-then-insert pattern in a transaction rather than using native features. For me this shows how native solutions are often better than emulation.

Most of the apps I’ve converted to direct use of queries are feeling more responsive now (including the one I use to draft these posts) but I’m not 100% certain whether this is because of the switch to lower-level SQL or because I’ve been fixing the problems in the underlying relational model that were previously being hidden from me.

We’re going to need a faster skateboard

I have been thinking a lot about the Gold-plated Donkey Cart this month. When you challenge problems with solutions, you often first have to struggle to get people to admit that there is a problem, and even once it is admitted the first response is often to try to patch or amend the existing solution rather than consider what the right response might be.

We have additive minds, so this tendency to patch what exists is natural, but sometimes people aggressively defend the status quo even when it is counter-productive to their overall success.

Weakly typed

I’ve had some interesting experiences with Typescript this month, most notably an issue with a duplicated package, which resulted in code that had been running in production for months but which had either not been correctly typed or had been behind the intended version by maybe four major versions. Typescript is interesting amongst type-hinted languages in that its typing files are often supplied separately from the code and in some cases exist independently of it. My previous experience of Python typing, for example, stopped the checker at the boundaries of third parties and therefore only applied to the code you were writing yourself.
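For illustration, a standalone typing file can describe a plain Javascript module without the module’s author being involved at all. The module and names below are made up, not a real package; the point is just the shape of a DefinitelyTyped-style declaration:

```typescript
// hypothetical-lib.d.ts — a declaration file that exists independently
// of the Javascript it claims to describe.
declare module "hypothetical-lib" {
  export interface Widget {
    id: string;
    render(target: HTMLElement): void;
  }
  // If the real Javascript drifts (a new major version, a changed
  // signature), nothing forces this file to drift with it — which is
  // exactly how the duplicated-package confusion above can arise.
  export function createWidget(id: string): Widget;
}
```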

I’m uncertain of the value of providing type files for Javascript libraries, as the compile-time and runtime contexts seem totally different. I found a Javascript dependency that had a completely broken unit test file, and on trying to correct it I found that the code couldn’t have the behaviour that the tests were trying to verify. Again I wondered about how this code was working in production, and predictably it turned out that the executed code path never included the incorrectly specified behaviour. Dynamic code can be very resilient and at the same time a time bomb waiting to happen, no matter what your type checker says.

I think Typescript code would be better off if it was clearer that any guarantees of correctness can only be provided for code that is totally under your control and which is being compiled and checked by you.

Frozen in time

I’ve been thinking a lot as well about a line from this talk by Killian Valkhof where he mentions that our knowledge of how to do things often gets frozen based on how we initially learnt to do them. Developers who learnt React for the frontend will be the future version of the people who learnt to do frontend via jQuery. I’ve been looking at Web Components, which I thought were pretty terrible when they first came out but which now look delightfully free of complex build chains and component models.

But more fundamentally it has made me think about whether, when I choose or reject things, I am doing so based on their inherent qualities in the present moment or based on the moment in time when I first learnt and exercised those skills. For CSS, for example, I’m relatively old-fashioned and have never been a fan of the CSS-in-JS idea. However I think this approach, while maybe outside contemporary preferences, is sound. Sound CSS applies across any number of frontend component models and frameworks, and the work that goes into the CSS standards is excellent, whereas (ironically) the limitations of Javascript frameworks in expressing CSS concepts mean that often only a frozen subset is usable.

I’ve never been entirely comfortable with Docker or Kubernetes, though, and generally prefer PaaS or “serverless” solutions. Is that because I enjoyed the Heroku developer experience and never really understood the advantages of containerisation as a result?

Technology is fashion, and therefore discernment is a critical quality for developers. For most developers, though, it is not judgement that they manifest but a toxic self-belief in the truth of whatever milieu they entered the industry in. As I slog through my third decade in the profession, doubt is something I feel strongly about my own opinions, and trying to frame my judgements in the evidence and reasoning available now seems a valuable technique.

Standard
Work

September 2023 month notes

I tried the Kitten framework and was quite surprised to see it request permission to change my privileged port permissions on install. I had to read through the post and its related posts before I realised that the nature of restricted ports is so ingrained in me that I never asked whether it was genuinely a security risk to have them accessible from userspace. I would recommend taking a look through the posts linked from the post above, because I realised that my usual answer to port restrictions is to sudo onto them, which is a weird way of not actually being secure.

I haven’t done much with Kitten, just working my way through the tutorial. The static serving is fine, the dynamic pages are a bit odd and the default ability to retain server state feels very odd.

I’ve also been continuing to try and learn Koa although if this wasn’t related to work I wouldn’t be bothering. Taking a look at the State of Javascript indicates that Express is the runaway winner and all other frameworks are pretty esoteric.

As an aside, the state of in-page linking in the State of Javascript is embarrassing; the page structure is really complicated and doesn’t seem able to assign a simple id to a section tag.

Koa is from the school of “everything is a plugin”, so out of the box it is completely anaemic and you have the zero-joy experience of trying to figure out what libraries and plugins you should use. Most of the core plugins haven’t been updated in years, which is good in terms of stability but makes it hard to understand which libraries are actually unmaintained and which are fundamental. I much prefer the Python approach of having batteries included but being able to swap things out if you have particular needs.

One thing that Koa does differently to Express is to use a combined Context object instead of explicit request and response objects. I don’t think that is really very helpful, and I did manage to mix the concept up with Go Contexts. Koa contexts are just a big ol’ object that includes some default values, and the response kind of magically fires after all the middleware has fired. I feel it is a bit of a step backwards in terms of clarity. My guess is that it makes it easier for plugins to add functions into the context object rather than having to explicitly import them and use them within the handler code.
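A toy sketch of that combined-context pattern may make the trade-off clearer. This is not Koa’s actual implementation, just the shape of it: one mutable object shared by the whole middleware stack, with the response assembled after everything has run.

```javascript
// A minimal middleware runner in the Koa style: every middleware gets the
// same mutable ctx object plus a next() function, instead of (req, res).
async function runMiddleware(middleware, request) {
  const ctx = { request, status: 404, body: undefined };

  // Compose the stack so each middleware can run code before and after next().
  const dispatch = (i) => async () => {
    if (i < middleware.length) await middleware[i](ctx, dispatch(i + 1));
  };
  await dispatch(0)();

  // The response only "fires" once the whole stack has completed.
  return { status: ctx.status, body: ctx.body };
}

// Usage: an upstream middleware can hang a helper off ctx, and the handler
// uses it without importing anything — the plugin-friendliness, and the
// loss of clarity, in one place.
const app = [
  async (ctx, next) => { ctx.greet = (name) => `hello ${name}`; await next(); },
  async (ctx) => { ctx.status = 200; ctx.body = ctx.greet(ctx.request.user); },
];

runMiddleware(app, { user: "world" }).then((res) => console.log(res.status, res.body));
// → 200 hello world
```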

I’m building a basic old-school webapp so I needed some templating, and that was a bit of a journey in terms of what is popular, but Nunjucks is work-friendly and based on jinja2 so it feels very familiar.

I’ve been slowly continuing to replace my various Python database libraries with a simpler and faster set of string queries executed through psycopg3. Next on the chopping block is Pony, which, while relatively enjoyable as an ORM, is needlessly clever in its use of generators and lambdas to do queries. I found a broken query and despite reading through the documentation I couldn’t fix it. If you already know SQL, an abstraction has to be pretty powerful to be worth the overhead on queries, which are fundamentally a string and a map of data bindings and not much more.
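As a sketch of that “string and a map of data bindings” shape (the table, columns and values here are invented for illustration, not from my actual code):

```python
# The whole abstraction: a query string with named placeholders and a
# dict of bindings. psycopg3 executes this as conn.execute(query, params).
query = """
    UPDATE posts
       SET title = %(title)s
     WHERE id = %(post_id)s
 RETURNING id, title
"""
params = {"title": "Month notes", "post_id": 42}

# With a live connection (commented out so the sketch stands alone):
# import psycopg
# with psycopg.connect("dbname=blog") as conn:
#     row = conn.execute(query, params).fetchone()
```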

I attended the State of the Browser conference this month and it was a good edition that balanced the input of practitioners and browser makers, and had practical technical advice and reminders. It also managed to limit itself to only one non-technical talk. I’ll write up a few notes in a separate post, but this felt like a great return on the time invested.

I also discovered Rosie Pattern Language this month, a parser-based alternative to regular expressions. I was intrigued, but it lacks a helpful tutorial or introduction article, so it has gone into the backlog to investigate later.

I started reading the book The Logic of Failure this month and I’m about halfway through it. It is a fascinating read and describes a series of experiments done with computer simulations of various situations, from a town and a sub-Saharan ecosystem to a fridge with a broken thermometer. The outcomes are then mapped to the participants’ voiced thoughts to try and identify patterns of behaviour and the underlying rationales that drive them. Obviously the goal of reading such books is to try and temper the causes of failure in yourself, but some of the problems the book highlights, such as the behaviour of complex inter-related components and exponential growth, are just things that all humans are bad at.

Standard