Programming

London Django Meetup May 2023

Just one talk this time and it was more of a discussion of the cool things you can do with Postgres JSON fields. These are indeed very cool! Everything I wanted to do with NoSQL historically is now present in a relational database, without compromise on performance or functionality; that is an amazing achievement by the Postgres team.

The one thing I did learn is that all the coercion and encoding information is held in the Django model and query logic, which means you only have basic types in the column. I previously worked on a codebase that used SQLAlchemy with a custom encoder and decoder, which split custom types into a string field holding the Python type hint (e.g. Decimal, UUID) and the underlying value. Compared with the Django implementation, which appears to just use strings, that is a leaky abstraction: the structure of the data is compromised by the type hint.
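
As a rough sketch of the difference (the model and field names are hypothetical): Django’s JSONField takes an encoder that flattens rich types down to plain JSON values, whereas the tagged scheme stored the type name alongside the value.

    from django.core.serializers.json import DjangoJSONEncoder
    from django.db import models

    class Order(models.Model):
        # DjangoJSONEncoder serialises Decimal, UUID, datetime, etc. as strings,
        # so the stored JSON only ever contains basic types
        attributes = models.JSONField(encoder=DjangoJSONEncoder, default=dict)

    # Django-style storage:  {"total": "12.50"}
    # Tagged storage (the SQLAlchemy codebase):
    #                        {"total": {"type": "Decimal", "value": "12.50"}}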

Using the Django approach would have been easier when running direct SQL against the database and would have followed the principle of least surprise.

The speaker was trying to make a case for performing aggregate calculations in the database via the Django ORM query language, which wasn’t entirely convincing. Perhaps it makes sense if you have a small team, but the resulting query-language code was more complex than the underlying query and quite tied to the Postgres implementation, so it felt that a view would have been a better approach unless you have very dynamic calculations that are only applied for a fixed timespan.
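
For context, the style of ORM code being advocated looks something like this hedged sketch (reusing the hypothetical Order model above); the equivalent SQL is arguably simpler:

    from django.db.models import DecimalField, Sum
    from django.db.models.fields.json import KeyTextTransform
    from django.db.models.functions import Cast

    # sum a numeric value stored inside a JSONField, entirely in Postgres
    totals = Order.objects.annotate(
        total=Cast(
            KeyTextTransform("total", "attributes"),
            DecimalField(max_digits=10, decimal_places=2),
        )
    ).aggregate(grand_total=Sum("total"))

    # roughly: SELECT SUM((attributes ->> 'total')::numeric) FROM app_order;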

It was based on an experience report so it clearly worked for the implementing group, but it felt like the approach strongly coupled the database, the web framework and the query language.

Work

How I have been using knowledge graphs

Within a week of using Roam Research’s implementation of a knowledge graph, or Zettelkasten, I decided to sign up because there was something special in this way of organising information. My initial excitement was actually around cooking: the ability to organise recipes along multiple dimensions (a list of ingredients, the recipe author, the cuisine) meant you could both search and browse by the ingredients that you had or the kind of food you wanted to eat.

Since then I’ve started to rely on it more for organising information for work purposes. Again, the ability to have multiple dimensions to things is helpful. If you want to keep some notes about a library for handling fine-grained authorisation, you might want to come back to that via the topic of authorisation, the implementation language or the authorisation model used.

But is this massively different from a wiki? Well, a private wiki with a search function would probably do all this too. For me personally, though, I never actually set up something similar, despite experiments with things like Tiddlywiki. So I think there are some additional things that make the Zettelkasten actually work.

The two distinctive elements missing from the wiki setup are the outliner UI and the concept of daily notes. Of the two, daily notes is the simpler: these systems direct you to a diary page by default, giving you a simple context for all your notes to exist in. The emphasis is on getting things out of your head and into the system. If you want to cross-link or re-organise you can do so at your leisure, and the automatic back-referencing (showing you other pages that reference the content on the page you are viewing) makes it easy to rediscover daily notes that you hadn’t consciously remembered you wanted to re-organise. This takes a good practice and delivers a UI that makes it simple. Roam also creates an infinite page of daily notes that allows you to scroll back without navigating explicitly to another page. Again, nothing complicated, but a supportive UI feature that simplifies doing the right thing.

The outliner element is more interesting and a bit more nuanced. I already use (and continue to use) an outliner in the form of Workflowy. More specifically, I find it helpful for outlining talks and presentations, keeping meeting notes and documenting one-to-ones (where the action functionality is really helpful for differentiating items that need to be actioned from notes of the discussion): the kind of things where you want to keep a light record with a bit of hierarchical structure and a light audit trail on the entries. I do search Workflowy for references, but I tend to access it in a pretty linear way and rarely without a task-based intention.

Roam and Logseq work in exactly the same way; indeed many of the things I describe above are also use-cases for those products. If I wanted to I could probably consolidate all my Workflowy usage into Roam, except for Roam’s terrible mobile web experience. However there is a slight difference, and that is due to the linking and wiki-like functionality, which allows a more open discovery journey within the knowledge graph. Creating it and reading it, I have found, are two different experiences. I add content in much the same way as in an outliner, but I don’t consume it the same way. I am often less task-orientated when reviewing my knowledge graph notes and, as they have grown in size, I have had some serendipitous connection-making between notes, concepts and ideas.

What the outliner format does within the context of the knowledge graph is provide a light way of structuring content so that it doesn’t end up as the massive wall of text that a wiki page sometimes can. In fact it doesn’t really suit a plain narrative set of information that well, so I use my own tool to manage that need and then link to the content in the knowledge graph if relevant.

In the past I have often found myself vaguely remembering something that a colleague mentioned, a link from a news aggregator site or a newsletter, or a Github repo that seemed interesting. Rediscovering it via Google can be very hard if it is neither recent nor well-established; often I have ended up reviewing and searching my browser history in an almost archaeological attempt to find the relevant content. Dumping interesting things into the knowledge graph has made them more discoverable as individual items, but it also adds value to them as you gain a big-picture understanding of how things fit together.

It is possible to achieve almost any outcome by misusing a given set of tools, but personal wikis, knowledge graphs and outliners all have strengths that work best when combined as much as possible into a single source of data, with dedicated UIs for specific, thoughtful task flows over the top. At the moment there isn’t one tool that does it all, but the knowledge graph is the strongest data structure, even if the current tools lack the UI to bring out the best in it.

Software

Great software delivery newsletters

I currently subscribe to a number of great newsletters around technology and software delivery. While the Fediverse is also a great place to pick up news and gossip, I have found that there is something really valuable in having a regular curated round-up of interesting articles. It may be no surprise that the consistently great newsletters are produced by people who are engaged in consultancy: they inevitably get exposed to trends and concerns in the industry and can also commit the time to writing up their thoughts and reflecting on their chosen content.

Pat Kua’s Level Up focuses on technical leadership and tends to have good pieces around human factors, managing yourself and creating good systems for delivery. It also often has advice pieces for people coming into technical management or leadership.

John Cutler’s The Beautiful Mess focuses on Product but is also great on strategy and importantly is always focused on getting to a better product process by emphasising collaboration and breaking down barriers between functional silos. I also enjoy reading how he approaches putting together alternatives to roadmaps and strategy documents. I think he has the best sense on how to use things like metrics and North Stars.

Emily Weber’s Posts from Awesome Folk has a focus on management, leadership, consensus building and healthy organisational cultures. As the title suggests, it offers a carefully curated selection of posts that are often longer form and generally from expert practitioners.

Michael Brunton-Spall’s Cyber Weekly is your one-stop shop for news on security and analysis of the key issues of the day.

Simon Willison’s newsletter is more recent and feels more like a very long blog post pushed into the newsletter format. Despite this, Simon is one of the most creative and independent developers you could read; he was early into LLMs and generative AI and has lots of interesting insight into what you can do with these models, what works and what doesn’t. He’s also an (intimidating) role model for what independent, solo devs can achieve.

I have a lot of other subscriptions (and indeed a lot of people seem to be starting newsletters at the moment) so I will probably need to do a follow-up to this post in a couple of months if people keep posting consistently useful things. One general thing to point out: if I’m working with a particular technology (like Django, Go or React) I’ll often subscribe to the weekly community news round-ups to get a feel for what’s happening. However, the volume of links and items is overwhelming if you don’t have a specific interest or purpose in reading through them, so I relegate them to RSS when I’m not actively working with the technology and catch up more occasionally.

Programming

Version management with asdf

I typically use languages that are unmanageable without a way to pin the language release you are dealing with (Python and Javascript). I have also historically been bad at keeping up to date with releases, and have therefore ended up with code that sometimes doesn’t run at all (Rust and Scala).

asdf is a version manager to rule them all. It provides a common set of commands to manage language dependencies (and the installation of different language versions) but has a plugin interface that different languages can use to bring in language specific concerns.

As a user you just need to learn one set of commands to manage all languages; implementations can build on a stable core system and simply focus on their requirements. Everyone is a winner.

On top of that, instead of having multiple hidden files for multi-language projects (usually Javascript plus some other language) you now have one file with all the language definitions in it.
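
That file is .tool-versions, one tool per line; the versions below are purely illustrative:

    python 3.11.4
    nodejs 20.5.0
    golang 1.21.0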

The only complication I’ve found is retraining myself to the new command set and remembering which commands work on asdf itself (things like updating the tool, setting specific versions in different scopes and managing the language plugins) and which are delegated to the plugins (installing new versions). The plugins also have no requirement to be consistent amongst themselves, so some accept a target like “lts” or “latest” while others require the full three-part semantic version. These conventions seem to have come from the tools the plugins are replacing.
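
A quick, hedged illustration of the split (the plugin names are real, the versions illustrative; exact behaviour of “latest” depends on the plugin):

    # commands that operate on asdf itself and its plugin list
    asdf plugin add nodejs
    asdf plugin update --all

    # commands that delegate to a plugin
    asdf install nodejs latest
    asdf install python 3.11.4
    asdf global python 3.11.4   # writes ~/.tool-versions
    asdf local nodejs 20.5.0    # writes ./.tool-versions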

Overall though, I think retraining myself to learn a single tool is probably going to be easier than maintaining an increasing number of per-language systems.

Python

London Django Meetup April 2023

I’m not sure whether I’ve ever been to this Meetup before but it is definitely the first since 2020. It was hosted by Kraken Energy in their offices, which have a plywood-style auditorium with a nice AV setup for presentations, and pizza and drinks (soft and hard) for attendees.

There were two talks: one on carbon estimates for websites built using Django and Wagtail; the other about import load times when loading a Django app into a shell (or more generally expensive behaviour in Python module imports).

Sustainable or low-impact computing is a topic that is slowly gaining traction in the wider development community, and in the case of the web there are some immediate quick wins to be had in the form of content negotiation on image formats, lazy loading and caching.

One key takeaway from the talk is that the end-user space is the area where most savings are possible. Using large-scale cloud hosting means that you are already benefiting from energy efficiencies, so things like the power required for a mobile phone screen matter, because the impact of inefficient choices in content delivery is multiplied by the size of your audience.

There was a mention in passing that if a web application could be split into a Functions as a Service (FaaS) deployable then, for things like Django that have admin paths and end user paths, you can scale routes independently and save on overprovisioning. If this could be done automatically in the deployment build it would be seamless from the developer’s point of view. I think you can do this via configuration in the Serverless framework. It seems an interesting avenue for making more efficient deployments but at a cost in complexity for the framework builders.
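
As a hedged sketch of what that Serverless framework configuration might look like (the service, handler and route names are all hypothetical, and I haven’t verified this exact setup):

    service: django-split

    provider:
      name: aws
      runtime: python3.11

    functions:
      admin:                      # low-traffic admin routes scale independently
        handler: wsgi_handler.handler
        events:
          - httpApi:
              method: '*'
              path: /admin/{proxy+}
      public:                     # end-user routes get their own sizing
        handler: wsgi_handler.handler
        memorySize: 512
        events:
          - httpApi: '*'          # catch-all for everything else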

There was quite an interesting research opportunity mentioned in the talk around serverless-style databases. For sites with intermittent or cyclical usage, not having an “always on” database represents a potentially big saving in cost and carbon. There was mention of the service neon.tech, which seems to have a free personal tier that might be perfect for hobby sites where usage is very infrequent and a spin-up delay would be acceptable.

The import time talk was interesting; it focused on the developer experience of the Django shell boot time (although to be honest a Python shell for any major framework has the same issues). There were some practical tips on avoiding libraries with way too much going on during the import phase, but really the issue of Python code doing expensive eager activity during import has been a live one for a long time.
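
The standard mitigation (my sketch, not something shown in the talk) is to defer the expensive work until first use rather than doing it at import time:

    import time

    class ExpensiveClient:
        """Stand-in for something costly to construct (models, connections, config)."""
        def __init__(self):
            time.sleep(2)  # simulate expensive setup work

    # Eager style: this would run on *every* import, including `manage.py shell`
    # client = ExpensiveClient()

    # Lazy style: pay the cost only when the client is actually needed
    _client = None

    def get_client():
        global _client
        if _client is None:
            _client = ExpensiveClient()
        return _client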

I attended a talk about cold starts with Python AWS Lambdas in 2019 that essentially boiled down to much the same issues (something addressed, though not very well, in this AWS documentation on imports). Little seems to have improved since. Assumptions about whether a process is going to be short- or long-lived ultimately come down to the library implementer, and the web/data science split in Python means that code is run in very different contexts, making sharing libraries across these two use cases hard.

The core language implementation is getting faster, but a consensus on good import-time behaviour is not a conversation that seems to be happening between the major library maintainers.

The performance enhancements for core Python actually linked the two talks because getting existing code onto more efficient runtimes helps reduce compute demands across all usage.

Web Applications

RSS Reader Review (2023)

After every social media convulsion there is always a view that we’re heading back to blogs again. Regardless of whether that is true, there is always an uptick in posting, and blogs are definitely better for any kind of long-form content compared to a 32-post “thread” on any kind of microblogging social platform. So I’ve been revising my line-up of RSS readers (like email, I use a few) and I wanted to post my notes on what I’ve tried and what I’ve ended up using.

My first key point of frustration is viewing content in a phone browser; my primary reader (which I migrated to from Google Reader) is Newsblur, but the design of the site is not responsive and is large-screen focused. My second issue is specifically around Blogger sites; while these do have a mobile view, most Blogger themes feel unreadable and harsh on smaller screens. Not to mention the cookie banner that is always floating around.

I have been using Feedbin, whose main feature is that it can consolidate content from Twitter, RSS and email newsletters into a single web interface. It does deliver on this promise, but while its small-screen experience and touch interface have clearly been considered, the resulting UI is quite fiddly, with a side-swipe scheme for drilling in and out of content, and I often need to switch out of its default rendering mode to get something that is easy to read. I’m still using Feedbin to follow news sources on Twitter but have mostly given up on RSS there except indirectly through topic subscriptions.

I want to give an honourable mention here to Bubo RSS. This is essentially a static site builder that reads your subscriptions and builds a set of very lightweight pages listing all the recent posts, using the visited-link CSS property to indicate unread items. In the end this didn’t really solve my reading issues as you just link through to the original site rather than getting a cleaned-up, small-screen-friendly view. However, its idea of building a mini-site from your RSS feeds and publishing it statically would solve a lot of my problems. I was almost tempted to see if I could add a pull of the content and a Readability parse, but I sensed the size of the rabbit hole I was going into.
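
The core of the Bubo-style approach fits in a few lines; this is my hedged sketch using the feedparser library, with a placeholder feed URL:

    import feedparser  # pip install feedparser

    FEEDS = ["https://example.com/feed.xml"]  # placeholder subscriptions

    def build_page(feeds):
        """Render recent posts as static HTML; visited-link CSS marks read items."""
        items = []
        for url in feeds:
            for entry in feedparser.parse(url).entries[:10]:
                items.append(f'<li><a href="{entry.link}">{entry.title}</a></li>')
        return "<ul>\n" + "\n".join(items) + "\n</ul>"

    if __name__ == "__main__":
        print(build_page(FEEDS))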

Another great solution I found was Nom which is a terminal RSS reader written in Go. You put your subscriptions into a config file and then read the content via the terminal. If I had any feedback for Nom it would be that the screen line length is not adjustable and the default feels a bit short. The pure text experience was the best reading experience for the Blogger subscriptions I have but ultimately I wanted something that I could read on a mobile phone web browser.

In the end the thing that has been working for me is Miniflux. You can self-host it, but the hosted option seemed cheaper to me than the cost of the hosting required to run it myself. Out of the box I had only one issue with Miniflux’s reading mode, which was to do with margins on small screens.

I thought I might have to try and get a PR organised, but helpfully you can save a custom CSS snippet in the settings, and with a few lines of customisation I was entirely happy with the reading experience. This is now what I’m using to read RSS-based content on my phone.
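
For illustration, my fix was a few lines in this spirit (the selector and values here are illustrative rather than copied from my settings; inspect the elements on your own instance to confirm):

    @media (max-width: 600px) {
      .entry-content {
        margin: 0;
        padding: 0 0.75rem;
      }
    }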

London, Programming, Web Applications, Work

Halfstack on the Shore(ditch) 2022

This is the first time the conference has been back at Cafe 1001 since the start of the Pandemic and my first HalfStack since 2021’s on the Shore event.

In some ways Halfstack can seem like a bit of an outlandish conference but generally things that are highly experimental or flaky here turn up in refined mainstream forms three to five years later. Part of the point of the event is to question what is possible with the technologies we have and what might be possible with changes that are due in the future. Novelty, niche or pushing the envelope talks are about expanding the conversation about what is possible.

The first standout talk this year was by Stephanie Shaw about Design Systems. It made the absurdist argument that visual memes meet all the criteria to be a design system, before looking at the properties of a good design system that would disqualify memes. The first major point that resonated with me was that design systems are hot and lots of people say they have them, when what they actually have are design principles, a component library or an illustration of UI variant behaviour.

I was also impressed that the talk had a slide dedicated to when a design system would be inappropriate. Context always matters in terms of implementing ideas in organisations and it is important to understand what the organisation needs and capabilities that are required to get value from an idea. Good design systems provide a strong foundation for rapid, consistent development and should demonstrate a clear return on the investment in them.

One of the talks that has stayed with me the longest was about things that can be done now. I’ve seen Chris Heilmann talk about dev tools at previous conferences, but this time the frame was different: using the dev tools in the browser to make the web sane again. He reminded me that you can use the dev tools to edit the page. Annoying pop-up? Delete it! Right-click hijacked? Go into the handler bindings and unbind the custom listener. Auto-playing video? Change its attributes or, again, just delete the whole thing. He also explained some new things that I wasn’t aware of, such as the ability to take a screenshot of a specific node from within the DOM inspector. I’ve actually used that a few times since in my work.

There was an impromptu talk that was grounded in a context that was a little hard to follow (maintaining peer-to-peer memes in a centralised internet apocalypse, I think) but was about encoding images into QR codes, and it included an explanation of how QR codes actually encode information (something I didn’t know). The speaker took the image data, transformed it into a series of QR codes, then had a website that displayed the QR codes in sequence and a web app that used a phone camera to scan the codes and reassemble the image locally. The scanning app was also able to understand where in the sequence each QR code was, which created a kind of scanning-line effect as it built up the image; it was very cool to watch.
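
My hedged reconstruction of the sending side, using the Python qrcode library (the chunk size and sequence header format are invented for illustration):

    import base64
    import qrcode  # pip install qrcode[pil]

    CHUNK = 800  # payload characters per code; real capacity depends on QR version

    def image_to_qr_sequence(path):
        """Split an image file into numbered QR code frames for scanning."""
        data = base64.b64encode(open(path, "rb").read()).decode()
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        for index, chunk in enumerate(chunks):
            # invented header: position/total lets the scanner track progress
            payload = f"{index}/{len(chunks)}:{chunk}"
            qrcode.make(payload).save(f"frame-{index:04d}.png")

    image_to_qr_sequence("meme.png")  # hypothetical input file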

There were three talks that all involved a significant amount of simultaneous interaction and each using slightly different methods but clearly the theme was having many people together on a webpage interacting in near real time.

The first thing to say is that I took a decent but relatively low-powered Pinebook laptop to the conference, as I thought I would just need something simple to take notes and look things up on the internet, maybe code along with some Javascript. All of the interactive demos barely worked on it, and the time to become active was significantly longer than for attendees with the latest Macs. I think the issue was a combination of really substantial downloads (which appeared not to be cached, so refreshing the browser was fatal) and just massive CPU requirements in the local synchronisation code.

The first was by a pro developer relations person, Jo Franchetti, who works for Ably and who used the Ably API. Predictably this was the best working (and looking) demo, with a fun Halloween theme around the idea of an ouija board or, more technically, trying to spell out messages by averaging all the subscribers’ mouse movements to create a single movement over the screen. However, even using a commercial API, with probably no more than 25 connections and a single-screen UI, my laptop still ground to a halt and the animations lagged significantly. It did look great projected on the big screen though.

Jo’s talk introduced me to an API I hadn’t heard of before: scrollTo (part of a family of scrolling APIs). This is an example of how talks about things on the edge of the possible often come back to things that are more practical day to day.

James Allardice and Ross Greenhalf had the least successful take on the multiuser extension and in terms of presentation style seemed to be continuing an offstage squabble in front of everyone. I get the impression that they were very down on what they had been able to achieve and were perhaps hoping for a showcase example to promote their business.

Primarily they didn’t get this because they were bizarrely committed to AWS Lambda as the deployment platform. Their idea was to do a multiplayer version of Pong, and it kind of worked, except the performance was terrible (for everyone this time, not just me). This in turn created a more fun experience than what they had intended to build, as the lag meant you needed to be quite judicious about when you sent your command (up or down) to the server: there was a tendency to overshoot, with too many people sending a command as the ball approached and then another while waiting for the first one to take effect. You needed to slow down your reaction cycle and try to anticipate what other people would be doing.

The game also only lasted for the duration of a single Lambda execution timeout, as the whole thing ran in the execution memory of a single Lambda instance. This was a consequence of the flawed design, but again it wasn’t hard to imagine how Lambda could be quite effective here as long as you’re not using web sockets for the push channel. It feels like this kind of thing would probably be pretty trivial in something like Elixir in a managed container, but it was a bit of an uphill battle as a Javascript monolith in a Function as a Service.

The most creative multi-user demo was by Mynah Marie (aka Earth to Abigail, who has performed at previous Halfstacks), who used Estuary to create a 15-person online jam session. It was surprisingly harmonious for a large group with little ability to monitor your own sound (I immediately had more empathy for any musician who has asked the desk for less drums in their monitor). However, synchronisation was again a big problem: not only did other people paste over my loops, but after I left the session one of my loops remained stubbornly playing until killed by the admin. I couldn’t access the session again myself as I was given a new user identity, and no-one seemed able to reconnect with the orphan session.

Probably the most mindblowing technical talk was by Ulysses Popple about his tool Nodessey, which is both a graph editor (or notebook) and a way to feed values into nodes that can then visualise the input they receive from their parent nodes. It reminded me a bit of PureData. I found the talk, a mixture of notes and live-coded examples, a bit hard to follow: it’s an unusual design, and tracking how the data structure worked while also tracking the implementation was tricky for me.

One thing I found personally interesting is that Nodessey is built on top of a minimal framework called Hyperapp, which I love but have never seen anyone else use. I now see that I have very much underestimated the power of the framework and I want to start using it more again.

Michele Riva gave a talk about the use of English in programming languages, with a helpful introduction to programming languages created in non-English languages. As an English speaker you never really need to leave the US-led universe of English-based languages, but it was interesting to see how other language communities had approached making programming accessible to non-English speakers. There was a light touch on non-alphabetic languages and symbolic languages like J (and of course brainfuck).

Perhaps the most practical talk of the conference was by Ante Barić, around browser extensions. I’ve found these really valuable for creating internal organisation tooling in a very lightweight way, but, as Chris Heilmann reminded us in his talk, too many extensions end up hammering browser performance as they all attempt to intercept the network requests and render cycle. The talk used a version of Clippy to create annoying commentary on the websites you were visiting, but it had some useful insight into what is happening with browser extensions, future plans from both the Google and Mozilla teams, and practical ways to build and use them.

Ante mentioned a tool that I was previously unaware of called web-ext; it is a Mozilla project but might be able to build Chrome extensions in the future, and it gives you a simplified framework for putting extensions together.
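
For reference, the basic workflow looks like this (commands from the web-ext documentation; the install step assumes npm):

    npm install --global web-ext
    web-ext run    # load the extension in a temporary Firefox profile, with reload
    web-ext lint   # check the manifest and code for common problems
    web-ext build  # package the extension for submission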

General notes

Food and drink is available when you want it just by showing the staff your conference lanyard. Personally I think it is great when conferences are able to be so flexible around letting people eat when they want to and avoiding the massive queues for food that typically happen when you try and cram an entire conference into a buffet in 90 minutes. I think it also helps include people who may have particular eating patterns that might not easily fit into scheduled tea and lunch breaks. It also makes it feel less like school.

In terms of COVID risk, the conference was mostly unmasked and since part of the appeal is the food and drink I felt like I wasn’t going to be changing my risk very much by wearing a mask during the talk sections. The ventilation seemed good (the room could be a bit cold if you were sitting in the wrong place) and there was plenty of room so I never had to sit right next to someone. This is probably going to remain a conference that focuses on in-person socialising and therefore isn’t going to appeal to everyone. Having a mask mandate in the current environment would take courage. The open air “beach” version of the conference on the banks of the Thames would probably be more suitable for someone looking to avoid indoor spaces.

Going back?

Halfstack is a lot of fun and I’ve booked my super early-bird ticket for this year. I think it offers a different balance of material compared to most web and Javascript conferences. This year I learnt practical things I could bring to my day job and was impressed by what other people have been able to achieve in theirs.

Software

Futurespectives

Futurespectives seem to be a much rarer practice than retrospectives. I learnt about them when I was working with ThoughtWorks and I’ve used them a few times, but I can’t seem to find a great online resource for introducing people to the idea. Liz Keogh’s advice on futurespectives is probably the best I’ve found (beyond a lot of retro-as-a-service companies writing marketing blog material about them).

One reason for their lack of adoption is that they require a certain amount of speculative imagination and that sometimes doesn’t come easily to developers who are very rooted in the realities of their work and sometimes think it is fanciful to speculate about the future.

However, if you can persuade people to engage then I find this exercise to be very valuable for surfacing concerns and getting delivery teams to align on the broad shape of their approach to the work. It often sparks conversations that are being suppressed, particularly if people are being pressed for “commitments” on the upcoming work.

As with retrospectives, generating responses to the initial questions is best done independently and the consolidation of the individual answers and the discussion of what they reveal is best done collectively.

I ask the following questions but as with all practices it is often worth investing some time in trying to figure out what the purpose of the exercise is and what questions would best elicit responses that drive the conversation forward.

As a facilitator, the frame for these questions is: “Imagine that we have completed our project. It has been a success even if at times it may have been hard work. The project meets the requirements and is working well in production. Our solution may be different from what we imagine today but we were able to adopt new ideas successfully. The team is happy and satisfied with how the work has gone and we didn’t need to make any excessive requests on their time and skills.”

  • How were we successful?
  • What problems did we have to overcome?
  • What are we proud of in what we’ve done?

I ask people to generate responses from their perspective alone although they are free to speculate about how other teams and people will have helped or contributed along the way.

If people are struggling with the exercise I sometimes try to provide some starting questions. How do you feel about the project being complete? What do you feel satisfied to have done? What went better than you were expecting? How do other people feel about the work when they are talking to you about it?

Again, the frame for all these questions is that the project has been successful (despite any doubts the participant may have now); the engineering mindset needs to accept that as a definite thing, so that the problem to be solved becomes: how was it successful despite these doubts? How were the problems solved or mitigated?

This last part is the critical step because it typically allows people to apply unconventional problem-solving ideas. People who are worried about a future problem typically cannot get past it if they feel it is insurmountable; however, if you tell them that someone else has already solved it, just knowing that a solution exists allows them to reframe the problem and overcome their block on what the answer may be.

During the consolidation phase of the exercise, you bring the individual answers together and play them back to the group as a whole. This element is exactly the same as facilitating a regular retrospective. Try to ensure that any explanation of people’s ideas that the group needs happens during this phase. Often people are more aligned than they think, but if there are any sharp disagreements in approach they will typically come out now, and it’s important that participants don’t reject any ideas at this stage, because they will just return to their existing mindset.

Pay particular attention to similar ideas expressed in different language; this can indicate that people are approaching the problems in a similar way but aren’t yet communicating enough to have shared ideas or a collaborative design approach. If there’s a lot of this it may be worth setting up a follow-up session just to review and consolidate the current state of play in the project. It may be that preparation is being rushed and the team isn’t having enough time to work together.

After creating and consolidating the initial input we now look at the three questions in a different way to help us generate actions from the futurespective. I sum up how we move from our imagined future to actions today in the following way for each question:

  • How do we realise our expected paths to success? What needs to happen to start towards that outcome? (Make true)
  • It is likely that we will encounter our anticipated problems, how can we minimise the impact they will have on us? (De-risk)
  • How can we ensure we have pride in our work? (Achieve)

At this point the session is more of a facilitated free-for-all, with the initial phase being open to all ideas and suggestions. A really common action is that technical leaders realise they need to share more information on their vision and ideas with the rest of the team. It is also really common that, when several people anticipate the same problem, prototyping, testing or training can be done very early in the project plan to remove the problem or shift it to a better-understood class of problem.

The pride question often produces actions associated with process, quality and shared standards and beliefs. Often, though, ideas about collaboration and the “team contract” come into play. Leaders can explain what others can rely on them for and what they want from the rest of the team. People can share fears in a way that allows people in authority to acknowledge, safely, that they share the same fears. The format encourages not just the expression of fear but also a discussion of how we will manage our anxiety about the upcoming work.

In many ways, if you’ve facilitated a retrospective you have all the skills required to run a futurespective; the tricky part is getting the participants into the right frame of mind.

In terms of measuring the impact of a successful futurespective you should be able to see a move from analysis to action and a growth of shared language and outcomes. Perceptions between the key participants of the project should be positive as they are already imagining a successful partnership ahead.

Programming, Software, Work

Defining the idea of “software engineering”

I have been reading Dave Farley’s Modern Software Engineering. Overall it’s a great read and thoroughly recommended (I’m still reading through it but I’ve read enough to know it is really interesting and a well-considered approach to common problems in development).

One of the challenges Dave tackles is to try and provide a definition of what software engineering actually is. This is actually a pretty profound challenge in my view. I’ve often felt that developers have usurped the title of engineer to provide a patina of respectability to their hacky habits. Even in Dave’s telling of the origin of the term it was used to try and provide parity of esteem with hardware engineers (by no lesser figure than Margaret Hamilton).

In large organisations that have actual engineers it is often important to avoid confusion between what Dave categorises as Design and Production engineering. Software engineering sits in the world of design engineering. Software is malleable and easy to change, unlike a supply chain or a partially completed bridge. Where the end result of the engineering process is an expensive material object, Dave points out that it is common to spend a lot of time modelling the end result and refining the delivery process for the material output based on the predictions of the model. For software, to some extent, our model is the product, and we can often iterate and refine it at very low cost.

Dave proposes the following definition of engineering:

Engineering is the application of an empirical, scientific approach to finding efficient, economical solutions to practical problems.

Dave Farley, Modern Software Engineering

This definition is one I can live with and marries my experience of creating software to the wider principles of engineering. It also bridges across the two realms of engineering, leaving the differences to practices rather than principles.

It is grounded in practicality rather than aloof theories and it emphasises that capacities drive effective solutions as much as needs. This definition is a huge step forward in being able to build consensus around the purpose of a software engineer.

Programming

State of the Browser 2022

I’ve attended a few of these conferences and have always found them helpful. This year it had relocated to the Barbican Centre with the food and drink area overlooking the beautiful Conservatory there, great choice as a venue.

The conference was a hybrid in-person/online event that I think could serve as a model for other conferences that seem to be focusing only on their return to in-person events. Due to other commitments I wasn’t able to be at the venue all day, so at lunchtime I headed home and picked up a few of the remaining talks on the livestream. It was great to have the flexibility and it made the whole conference more accessible.

Talks-wise it was interesting as ever, and a little less inward-looking or niche-interest than it has been in the past. There was the usual mix of upcoming standards and the challenges of implementing them, how to apply techniques to the current broad mainstream of browsers, and a little bit of evangelism for playfulness and environmental impact.

One of my key takeaways was on this last point: using an image CDN that can do automatic content negotiation to serve an efficient modern image format has a huge carbon saving. It feels a bit crazy that so many companies are still serving fixed sizes and formats off things like Cloudfront and S3.
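
Where a negotiating CDN isn’t available, the markup-level equivalent is the standard picture element, which lets the browser pick the most efficient format it supports (the file names here are placeholders):

    <picture>
      <source srcset="hero.avif" type="image/avif">
      <source srcset="hero.webp" type="image/webp">
      <img src="hero.jpg" alt="A placeholder hero image" loading="lazy">
    </picture>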

Bruce Lawson kicked off the event with a good historical perspective on standards (and the struggle to create and maintain them) and brought the issues of standardisation through the search for technical solutions to the world of regulation and better digital policy. Engaging with lawmakers is a more realistic way to improve the online world than the search for technical solutions to social problems.

More practically, we can hope that Apple will be compelled, as a digital gatekeeper, to allow competition between browser implementations on its platform, and maybe even fund its Safari team properly to achieve better compatibility with general web standards on iOS. It was nice to see the recognition that government organisations can be engaged and willing to listen, and that progress can be made by working together rather than outside of regular power structures.

Probably the best talk I heard was “Be the browser’s mentor not its micromanager” by Andy Bell. The talk neatly encompassed two major ideas: first, that layout systems in CSS have advanced to the point where you describe structure and allow the layout engine to decide the rendering; and second, that digital design approaches have managed to fall between the abstractions of the grid system and the precise layout of magazine-style design.

By leaning on the layout engines, the amount of CSS we have to write is much more minimal than the micromanaging fussiness typical of component design systems. It is also more powerful and expressive, avoiding the overly complex muddle often associated with component style systems while also not going too far down the class frenzy of utility-class systems.

Sophie Koonin taught me how to use the prefers-reduced-motion preference via the medium of late 90s website chaos. A good example of the mixture of fun and practical content.
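
For anyone who hasn’t met it, the pattern is a media query that tones animation down for users who have asked the OS for less motion; a common (if blunt) version looks like this:

    @media (prefers-reduced-motion: reduce) {
      *, *::before, *::after {
        animation-duration: 0.01ms !important;
        transition-duration: 0.01ms !important;
      }
    }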

I also enjoyed Alistair Shepherd’s talk, which had a few technical bits and pieces but managed to bridge the themes of the conference: he wanted to create a personal website that first and foremost reflected his personal interests, and then used the tech to deliver the vision he had for himself. Having a website vary according to the time of day is quite an interesting idea.

I didn’t catch the last few talks so I’m hoping to be able to watch them when they come to YouTube (or maybe some federated alternative!).

Overall still one of the necessary conferences to catch for web technology and now easier to engage with than ever before.
