Programming, Python

London Python Coding Dojo February 2025

The Python coding dojo is back, and this time AI-assisted coding is allowed. That makes some of the standard katas trivial, so the challenges have to be different: either combine problems in an interesting way or pick a very hard problem that doesn’t have a standard solution.

The team I was in worked on converting image files to ASCII art (with a secondary goal of trying to create an image that would work with the character limit of old-school Twitter).

We used ChatGPT and ran the code in Jupyter notebooks. To be honest, ChatGPT one-shotted the answer; clearly this is a problem with many existing implementations. Much of the solution was as you would expect: reading the image and converting it to greyscale. The magic code is this line (regenerated from Mistral rather than the original version).

ascii_chars = "@%#*+=-:. "

This string maps the value of each pixel to a character and it is really the key to a good solution in terms of the representation of the image. It was also the bit that went wrong when we tried to refine the solution by adding more characters: the generated code tends not to understand that the pixel mapping depends on the length of this string. A couple of versions of the code had an indexing bug because they kept the original mapping calculation but changed the size of the string.
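For illustration, here is a minimal sketch of the kind of program that was generated (my reconstruction rather than the original; it assumes Pillow is installed and the file name is a placeholder). Note that the index calculation uses len(ascii_chars), which is exactly what the refined versions got wrong.

from PIL import Image

# Darkest to lightest characters.
ascii_chars = "@%#*+=-:. "

def image_to_ascii(path, width=80):
    img = Image.open(path).convert("L")  # load and convert to greyscale
    # Halve the height because characters are taller than they are wide
    height = max(1, int(img.height * width / img.width / 2))
    img = img.resize((width, height))

    lines = []
    for y in range(height):
        row = ""
        for x in range(width):
            pixel = img.getpixel((x, y))  # brightness 0-255
            # Scale by the length of the string, never a hard-coded count
            index = pixel * (len(ascii_chars) - 1) // 255
            row += ascii_chars[index]
        lines.append(row)
    return "\n".join(lines)

# "cat.jpg" is a placeholder file name
print(image_to_ascii("cat.jpg"))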

On the one hand the experience was massively deflating: we were probably done in 15 or 20 minutes. On the other, some of the team hadn’t used code assistance this way before, so they got something out of it. Overall though I’m not sure what kind of learning experience we were having, or whether the dojo format really helps build learning if you allow AI assistance.

If the problems have to become harder because anything trivial is already in the AI’s training data, then the step up to understanding the problem as well as the output is going to be difficult for beginners.

There’s lots to think about here and I’m not sure there are any easy answers.

Software

Volunteering at State of Open 2025

I volunteered at the State of Open conference this month; the conference is put on by OpenUK, the open source technology advocacy group.

Volunteering allowed me to sit in on a few sessions. AI was obviously a hot topic. There is naturally a lot of unhappiness about what counts as an “open” foundation model: it isn’t just the code and the model weights; there’s also an interest in the training corpus and any human refinement process that might be used.

It is reasonable to assume that a lack of transparency in training data is because a lot of it has been illegally obtained. The conference did have a discussion group on the UK government’s consultation on copyright and training material, a consultation that critics have said represents a transfer of wealth from creators to technologists.

Overall though it felt that there was more unhappiness than solutions. The expectation seems to be that companies will be able to train their models on whatever material they want and can obtain.

The same unhappiness rang through the other topic I heard a lot about: maintainer well-being and open source community health. Maintainers feel stretched and undervalued, companies have been withdrawing financial support, and informal, volunteer-run organisations handle conflict within their own pool of collaborators poorly, leading people to leave projects where they feel criticised and unappreciated.

The good news is that people’s belief in the importance and value of openness, transparency and collaboration is still strong. The speakers at the conference were there to share because they want to help others and believe in the power of shared efforts and knowledge.

Becoming a volunteer

Someone asked me how you volunteer for the conference and, to be honest, it was pretty straightforward: I saw an invitation on LinkedIn, filled out a Google Form and then just turned up to the briefings and did the jobs I was asked to do. If I have the time I think it is always worth volunteering to help out at these kinds of events: you might not get to see everything you want, but you also have something meaningful to be doing if the schedule is a bit ropey.

You also get to interact with your fellow volunteers, which is much more fun than going to a conference alone.

Links

  • Astronomer, Apache Airflow as a service
  • dbt, a tool for transforming data
  • Tessl, a start-up looking to switch from coding as we know it today to specification-driven development

Talk recommendations

This is purely based on what I was able to see.

London, Work

Will humans still create software?

Recently I attended the London CTOs Unconference: an event where senior technical leaders discuss various topics of the day.

There were several proposed sessions on AI and its impact on software delivery. I was part of a group discussing the evolution of AI-assisted development, looking at how this would change and, ultimately, what role people thought there would be for humans in software delivery (we put the impact of artificial general intelligence to one side to focus on what happens with the technologies we currently have).

The session was conducted under Chatham House-equivalent rules, so I’m just going to record some of the discussion and key points in this post.

Session notes

Currently we are seeing the automation of existing processes within the delivery lifecycle, but there are opportunities to rethink how we deliver software to make better use of the current generative AI tools and perhaps set ourselves up to take advantage of better options in future.

Rejecting a faster horse

Thinking about the delivery of the whole system rather than just modules of code, configuration or infrastructure allows us to set a bigger task than simply augmenting a human. We can start to take business requirements in natural language, generate product requirement documents from these and then use formal methods to specify the behaviour of the system and verify that the software generative AI creates meets those requirements. Some members of the group had already been generating such systems and felt it was a more promising approach than automating different roles in the current delivery processes.

Although these methods have existed for a while they are not widely used, so it seems likely that senior technical leaders will need to reskill. Defining the technical outcomes they are looking for through more formal structures that work better with machines requires both knowledge and skill. Debugging is likely to move from the operation of code to the process of generation within the model, leading to an iterative cycle of refinement of both prompts and specifications. Here the recent move to expose more chain-of-thought information to the user is helpful, allowing the user to refine their prompt when they can see flaws in the model’s reasoning.

The fate of code

We discussed whether code would remain the main artefact of software production and we didn’t come to a definite conclusion. The existing state of the codebase can be given as a context to the generation process, potentially refined as an input in the same way as retrieval augmented generation works.

However, if constructing the codebase is fast and cheap then the value of retaining code is not clear, particularly if requirements or specifications are changing and an alternative solution might therefore be better than the existing one.

People experimenting with whole solution generation do see major changes between iterations; for example where the model selects different dependencies. For things like UIs this matters in terms of UX but maybe it doesn’t matter so much for non-user facing things. If there is a choice of database mappers for example perhaps we only care that the performance is good and that SQL injection is not possible.

Specifications as artefacts

Specifications and requirements need to be versioned and change controlled exactly as source code is today. We need to ensure that requirements are consistent and coherent, which formal methods should provide, but analysis and resolution of differing viewpoints as to the way the system works will remain an important technical skill.

Some participants felt that conflicting requirements would be inevitable and that it would be unclear how the generated code would respond to this. It is certainly clear that current models do not seem to be able to identify the conflict and will most probably favour one of the requirements over the others. If the testing suite is independent then behavioural tests may reveal the resulting inconsistencies.
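As a toy illustration of that last point (the module, function and requirements below are invented), two independently written behavioural tests that encode conflicting requirements cannot both pass, so whichever requirement the generated code happened to favour, the conflict surfaces as a test failure:

# Requirement A: dormant accounts are deleted after 30 days.
# Requirement B (conflicting): accounts are retained for 90 days.
# generated_system is a hypothetical module produced by the generation process.
from generated_system import days_until_deletion

def test_dormant_accounts_deleted_after_30_days():
    assert days_until_deletion(account_type="dormant") == 30

def test_accounts_retained_for_90_days():
    assert days_until_deletion(account_type="dormant") == 90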

It was seen as important to control your own foundation model rather than using external services. Being able to keep the model consistent across builds and retain a working version of it decouples you from vendor dependencies and should be considered part of the build and deployment infrastructure. Different models have different strengths (although some research contradicts this anecdotal observation). We didn’t discuss supplementation techniques, but we did talk about priming the code generation process with coding standards or guidelines; this did not seem to be a technique currently in use.

For some participants using generative AI was synonymous with choosing a vendor but this is risky as one doesn’t control the lifespan of such API-based interactions or how a given model might be presented by the vendor. Having the skills to manage your own model is important.

In public repositories it has been noted that the volume of code produced has risen but quality has fallen and that there is definitely a trade-off being made between productivity and quality.

This might be different in private codebases where different techniques are used to ensure the quality of the output. People in the session trying these techniques say they are getting better results than what is observed in public reporting. Without any way of verifying this though people will just have to experiment for themselves and see if they can improve on the issues seen in the publicly observed code.

When will this happen in banks?

We talked a little bit about the rate of adoption of these ideas. Conservative organisations are unlikely to move on this until there is plenty of information available publicly. However if an automated whole system creation process works there are significant cost savings associated with it and projects that would previously have been ruled out as too costly become more viable.

What do cheap, quick codebases imply?

We may be able to retire older systems with a lot of embedded knowledge in them far more quickly than if humans had to analyse, extract and re-implement that embedded knowledge. It may even be possible to recreate a mainframe’s functionality on modern software and hardware and make it cheaper than training people in COBOL.

If system generation is cheap enough then we could also ask the process to create implementations with different constraints and compare different approaches to the same problem optimising for cost, efficiency or maintenance. We can write and throw away many times.

What about the humans?

The question of how humans are involved in the software delivery lifecycle, what they are doing and therefore what skills they need was unclear to us. However, no-one felt that humans would have no role in software development, only that it was likely to be different to the skill set that made people successful today.

It also seemed unlikely that a human would be “managing” a team of agents if the system of specification and constraints was adopted. Instead humans would be working at a higher level of abstraction with a suite of tools to deliver the implementation. Virtual junior developers seemed to belong to the faster horse school of thinking.

Wrapping up the session

The session lasted for pretty much the whole of the unconference and the topic often went broad which meant there were many threads and ideas that were not fully resolved. It was clear that there are currently at least two streams of experimentation: supplementing existing human roles in the software delivery cycle with AI assistance and reinventing the delivery process based on the possibilities offered by cheap large language models.

As we were looking to the future we mostly discussed the second option, and this seems to be what people have in mind when they talk about not needing experienced software developers in future.

In some ways this is the technical architect’s dream in that you can start to work with pure expressions of solutions to problems that are faithfully adhered to by the implementer. However, the solution designer now needs to understand how and why the solution generation process can go wrong and needs to verify the correctness and adherence of the final system. The non-deterministic nature of large language models is not going away, and therefore solution designers need to think carefully about their invariants to ensure consistency and correctness.

There was a bit of an undertow in our discussions about whether it was a positive that a good specification almost leads to a single possible solution, or whether we need to allow the AI to confound our expectations of the solution and create unexpected things that meet our specifications.

The future could be a perfect worker realising the architect’s dream or it could be more like a partnership where the human is providing feedback on a range of potential solutions provided by a generation process, perhaps with automated benchmarking for both performance and cost.

It was a really interesting discussion and an interesting snapshot of what feels like an area of frenzied activity among early adopters. Later adopters can probably afford to give this area more time to mature, as long as they keep an eye on whether the cost of delivery is genuinely dropping in the early production projects.

Events

Barcamp 13

Barcamp 13 is a general tech and nerdery unconference in London. In its current incarnation it is a one day event at an academy school next to the Tottenham Hotspur stadium.

Although most of the topics were technology related, the analogue sessions were amongst the most memorable; in particular the Cèilidh session was really fun and a total change in energy. I wasn’t expecting to dance when I arrived. I also enjoyed the Minimal Viable Zine session, which was about making single-sheet zines for communicating urgent information indirectly but person to person.

The work-relevant sessions included one on nested CSS, which I’ve now started to adopt for my CSS work and am looking at applying retrospectively to my hobby projects.

I also went to a session about tackling polarisation, which was quite interesting as it had both self-proclaimed leftists and Hungarian fans of Orban. I was curious as to why the convener felt that polarisation was new and a problem. The answer the group came up with was that if polarisation results in shrinking the envelope of who is considered a person, or in a reduction of people’s rights in society, then you are potentially talking about life and death. We’ve seen this in the treatment of both refugees and trans rights.

One thing that came through in this session was a strong belief in the power of media (traditional and social) to change social attitudes. I think that would be something interesting to follow up on.

There was also a session on AI-generated music which was interesting but also worrying and which I think probably deserves a post in its own right.

I learned some things, I had fun, I enjoyed chatting to the other attendees and I managed to not be on fire. It was a super interesting day and I will definitely make an effort to get to the next one.

Software

Halfstack London 2024

Halfstack is a really interesting conference which is increasingly heading into a space that is unrepentantly about hobby projects, the love of technology and amateurism, leavened with a few talks by professional developer advocates or relationship managers.

It happens on Brick Lane and has an open bar in the afternoon. You might think that this is either hip or insufferable. Both opinions might be right.

In terms of work-relevant material there was an interesting if often incoherent talk on the phases of the event loop (which seems to be a popular topic but was full of new information for me, being pretty basic I suppose); a sales pitch for the developer experience in Deno; and a talk on using TensorFlow.js to do image recognition in Node.

Christian Heilmann has switched from giving talks on developer tools in the browser to the state of developer employment, and this time highlighted the dilemma facing junior developer roles. While demand for developers has fallen back (compared to the incredible growth in demand over the previous five to ten years), it has fallen dramatically for entry-level roles and less so for experienced developers.

When the industry turns off the pipeline like this the effects tend to take years to feed through: as experienced people retire or switch to other roles there are fewer people taking their place, because entrants have responded to the market signal and are doing something else.

The industry gamble here is that AI is going to make up the gap but the risk is whether the people using the AI are going to have a deep enough understanding of what is being created that they can support and maintain the result.

Maintaining codebases was in a way the theme of the talk: with all the emphasis on producing more code with the help of AIs, is anyone thinking about what will happen to these digital products in five years’ time?

The real highlight of the day was a talk that combined the history of the 808 and 909 with a reminder of how crazy some of the browser API support is. Did you know that your browser probably supports MIDI?

According to the talk (you can read the slides online), the 808 and 909 were both flops on release that became classics after hitting the bargain bin, so that a different kind of musician could access them and apply a different aesthetic sense to their capabilities.

The talk then used web APIs to recreate the 808 sound with samples via the ToneJS library and to trigger them with a USB connected device (less well supported). That was followed up with a mini-sequencer that was good enough to do a little live performance.

The day ended with a talk on using technology in murder mystery parties, which was a bit crazy and obsessional and interesting in the way that people who have Gone Too Far can be. There was a bit where a trunk was being wired up to the mains, and I thought the biggest danger might be the risk of death by homemade electronics.

Tickets for 2025 are available, and it has recently been confirmed that enough pre-sales have been made to book the venue, so next year’s conference is definite. In a period of declining sponsorship and stretched personal budgets that’s a vote of confidence in the conference from its audience.

Python

Django October 2024 Hackfest

This session was a little more informal than I thought it was going to be, but it wasn’t time wasted as it provided an incentive to switch some of my projects over to Python 3.13 (which has been a great idea so far, by the way).

As part of the suggested activities at the session I tried testing a Django template formatting tool called djade (pronounced just “Jade”) (introductory post). It worked and seemed pretty good to me, although I don’t really have any complicated projects to work on and had to use some from the internet for the testing.

I used uvx to run the formatter and felt that there was something strange going on: I was running a Rust tool to run a Rust tool, and the only Python elements were a PyPI listing and the fact that it formats Django templates.

The suggestions also included helping out on Narwhals, which I hadn’t heard of before but which aims to be a compatibility layer between different dataframe implementations. It seemed an interesting project but not one I have the right background to help with.

Python

London Python meetup May 2024

The meetup was held at Microsoft’s Reactor offices near Paddington which have a great view down the canal towards Maida Vale. Attendees got an email with a QR code to get in through the gate which all felt very high-tech.

The first talk was not particularly Python related but was an introduction to vector databases. These are having a hot moment due to the way that machine learning categorisation maps easily into flat vectors that can then be stored and compared through vector stores.

These can then be used to complement LLMs through Retrieval Augmented Generation (RAG), which combines the LLM’s ability to synthesise and summarise content with more conventional search index information.

It was fine as far as it went and helped demystify the way that RAG works, but this langchain tutorial is probably just as helpful for the practical application.
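The retrieval half is simple enough to sketch in a few lines of Python (a toy example, not from the talk: a real system would use an embedding model and a vector database, and the embeddings below are made up):

import numpy as np

# Toy "vector store": each document has a pre-computed embedding.
documents = {
    "invoices are stored for seven years": np.array([0.9, 0.1, 0.0]),
    "refunds take five working days": np.array([0.1, 0.8, 0.2]),
    "the office dog is called Biscuit": np.array([0.0, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_embedding, top_k=2):
    # Rank documents by similarity to the query and keep the closest ones.
    scored = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:top_k]]

# The retrieved snippets are then pasted into the LLM prompt alongside
# the user's question, which is the "augmented generation" half of RAG.
print(retrieve(np.array([0.85, 0.2, 0.05])))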

The second talk was about langchain but was from a Microsoft employee who was demonstrating how to use Bing as an agent augmentation in the Azure-hosted environment. It was practical, but the agent clearly spun out of control in the demo, and while the output was in the right ballpark I think it illustrated the trickiness of getting these things to work reliably and to generate consistent output when the whole process is essentially random and different on each run.

It was a good shop window into the hosted langchain offering but could have done more to explore the agent definition.

The final talk was by Nathan Matthews, CTO of Retrace Software. Retrace allows you to capture replay logs from production and then reproduce issues in other environments. Sadly there wasn’t a demo, but it is due to be released as open source soon. The talk went through some of the approaches that had been taken to get to the release. Apparently there is a “goldilocks zone” for data capture that avoids excessive log size and performance overhead. This occurs at the library interface level, with a proxy capture system for C integration (and presumably all native integration). Not only is lower-level capture chatty, but capturing events at a higher level of abstraction makes the replay process more robust and easier to interact with.

The idea is that you can take the replay of an issue or event in production and replay it in a controlled environment with a debugger attached, to try and find the cause of the issue without ever having to go onto a production environment. Data masking for sensitive data is promised, which means that the replay logs can have different data handling rules applied to them.
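There was no demo, so I can’t show how Retrace itself does this, but the general idea of capturing at a library boundary is easy to sketch in Python: wrap the calls an application makes into a dependency, record arguments and results, and feed the log to a stub later so the same behaviour can be replayed without the real dependency (the names below are invented).

import functools
import json

capture_log = []

def record(fn):
    # Wrap a library-boundary function and log each call for later replay.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        capture_log.append({
            "function": fn.__name__,
            "args": args,
            "kwargs": kwargs,
            "result": result,
        })
        return result
    return wrapper

# Stand-in for a call into an external library (for example a database driver).
@record
def fetch_price(product_id):
    return {"product_id": product_id, "price": 9.99}

fetch_price("A123")

# In another environment the log can drive a stub so the application
# sees the same responses it saw in production.
print(json.dumps(capture_log, indent=2))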

Nathan pointed out that our current way of dealing with unusual and intermittent events in production is to invest heavily in observability (which often just means shipping a lot of low-level logging to a search system). The replay approach seems to promise a much simpler way of analysing and understanding unusual behaviour in environments with access controls.

It was interesting to hear about poking into the internals of the interpreter (and the OS) as it is not often that people get a chance to do it. However, the question of what level of access developers should have to production is the bigger problem to solve, and it would be great to see some evidence of how this works in a real environment.

Programming

Halfstack on the Shore(ditch) 2023

Halfstack describes itself as an “anti-conference”, or the conference you get when you take all the annoying things about conferences away. It is probably one of the most enjoyable conferences I attend on a regular basis. This year it was in a new venue quite close to the previous base at Cafe 1001, which was probably one of my favourite locations for a conference.

The new venue is a small music venue and the iron pillars that fill the room were awkward for sightlines until I grabbed a seat at the front. The bar opened at midday and was entirely reasonable, but the food was not as easily available as before; you could still walk to the nearby cafe and show your conference badge if you wanted.

Practical learnings

Normally I would say that HalfStack is about the crazy emergent stuff, so I was surprised to actually learn a few things that are relevant to the day job (admittedly I have been doing a lot more backend Javascript than I was previously). I was quite intrigued to see some real-world stats showing that Node’s built-in test runner is massively faster than Jest (which maybe should not be so surprising as Jest does some crazy things). I’ve been using Bun recently, which does have a faster runner, and it makes TDD a lot more fun than with the normal Jest test runner.

I also learnt that NODE_ENV is used by library code to conditionally switch on paths in their code. This is obviously not a sound practice, but the practical advice was to drop variables that map to environments completely and instead set parameters individually, as per standard 12-factor app practice. I think you can refine that with things like dotenv but I’m basically in agreement. Two days later I saw a bunch of environment-based conditional code in my own workplace source code.
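The same advice applies outside Node. A minimal sketch of the idea in Python (the variable names are invented): rather than branching on the name of the environment, read each behaviour as its own explicit setting so any combination can be reproduced anywhere.

import os

# Discouraged: behaviour branches on the environment's name, so code
# grows hidden environment-specific paths that are hard to reproduce.
if os.environ.get("APP_ENV") == "production":
    cache_enabled = True
    log_level = "WARNING"
else:
    cache_enabled = False
    log_level = "DEBUG"

# 12-factor style: each behaviour is its own explicitly set parameter.
cache_enabled = os.environ.get("CACHE_ENABLED", "false").lower() == "true"
log_level = os.environ.get("LOG_LEVEL", "INFO")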

It was also interesting to see how people are tackling their dependency testing. The message seemed to be that your web framework should come with mocks or stubs for testing routing and requests as standard, and that if it doesn’t then maybe you should change your framework. That feels a bit bold, but only because Javascript is notorious for anaemic frameworks that offer choice but actually deliver complexity and non-trivial decisions. On reflection, a built-in unit testing strategy for your web framework does seem like a must-have feature.
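Django is an example of a framework that ships this as standard: its test client fakes the request/response cycle so routing, middleware and views can be exercised without a running server. A minimal sketch (the route is a placeholder):

from django.test import TestCase

class HomePageTests(TestCase):
    def test_home_page_returns_ok(self):
        # The built-in test client exercises URL routing, middleware and
        # the view without starting a server or mocking HTTP by hand.
        response = self.client.get("/")  # "/" is a placeholder route
        self.assertEqual(response.status_code, 200)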

Crazy stuff

There was definitely less crazy stuff than in previous years. A working point of sale system including till management based on browser APIs was all quite practical and quite a good example of why you might want USB and serial port access within the browser.

There was also a good talk about converting ActionScript/Flash to Javascript and running emulation of old web games, although that ultimately turned out to be a way of making a living, as commercial games companies wanted to convert their historic libraries into something that people could continue to use rather than being locked away in an obsolete technology.

The impact of AI

One of the speakers talked about using ChatGPT for designing pitches (the generated art included some interesting interpretations of how cat claws work and how many claws they have) and I realised, listening to it, that for some younger people the distilled advice and recommendations the model has been fed is exactly the kind of mentoring they have wanted. From a negative perspective this means an endless supply of uncritical ideas and suggestions that require little effort on the user’s part; just another way to avoid having to do some of the hard work of deliberate practice. On the positive side, it is a wealth of knowledge now available to the young in minutes.

While I might find the LLMs trite, for people starting their careers the advice offered is probably more sound than their own instincts. There also seems to be some evidence appearing that LLMs can put a floor under poor performance by correctly picking up common mistakes and errors. At a basic level they are much better at spelling and grammar than non-native speakers, for example. I don’t think they have been around long enough for us to have reliable information though, and we need to decide what basic performance of tasks looks like.

I wonder what the impact will be on future conference talks as ChatGPT refines people to a common set of ideas, aesthetics and structures. Probably it will feel very samey and there will be a desire to have more quirky individual ideas. It feels like a classic pendulum swing.

Big tech, big failings

Christian Heilmann’s talk was excoriating about the failures of big tech during the acute phase of the COVID pandemic, and more generally about its inability to tackle the big problems facing humanity, preferring instead to focus on fighting for the attention economy and hockey-stick growth that isn’t sustained. He also talked about trying to persuade people that they don’t have to work at FAANGs to be valid people in technology.

His notes for this talk are on his blog.

Final thoughts

Chat GPT might need me to title this section as a conclusion to avoid it recommending that I add a conclusion. HalfStack this year is happening at a strange time for programming and the industry. There wasn’t much discussion of some topics that would have been interesting around the NodeJS ecosystem such as alternative runtimes and the role of companies, consultancy and investment money in the evolution of that ecosystem. The impact of a changed economic environment was clear and in some cases searing but it was a helpful reminder that it is possible to find your niche and make a living from it. You don’t necessarily need to hustle and try and make it big unless that is what you really want to do.

The relaxed anti-conference vibe felt like a welcome break from the churn, chaos and hamster wheel turning that 2023 has felt like. I’ve already picked up my tickets for next year.


Python

London Django Meetup April 2023

I’m not sure whether I’ve ever been to this Meetup before but it is definitely the first since 2020. It was hosted by Kraken Energy in their offices which have a plywood style auditorium with a nice AV setup for presentations and pizza and drinks (soft and hard) for attendees.

There were two talks: one on carbon estimates for websites built using Django and Wagtail; the other about import load times when loading a Django app into a shell (or more generally expensive behaviour in Python module imports).

Sustainable or low impact computing is a topic that is slowly gaining some traction in the wider development community and in the case of the web there are some immediate quick wins in the form of content negotiation on image formats, lazy loading and caching to be had.
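As a rough illustration of the content negotiation quick win (my own sketch rather than anything from the talk; the file paths are placeholders), a Django view can check the Accept header and serve a smaller modern format when the browser supports it:

from django.http import FileResponse

def hero_image(request):
    # Browsers advertise the image formats they support in the Accept
    # header; serving AVIF or WebP when possible cuts bytes transferred.
    accepts = request.headers.get("Accept", "")
    if "image/avif" in accepts:
        return FileResponse(open("static/hero.avif", "rb"), content_type="image/avif")
    if "image/webp" in accepts:
        return FileResponse(open("static/hero.webp", "rb"), content_type="image/webp")
    return FileResponse(open("static/hero.jpg", "rb"), content_type="image/jpeg")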

One key takeaway from the talk is that the end-user side is where most savings are possible. Using large-scale cloud hosting means you are already benefiting from energy efficiencies, so things like the power required for a mobile phone screen matter more, because the impact of inefficient choices in content delivery is multiplied by the size of your audience.

There was a mention in passing that if a web application could be split into a Functions as a Service (FaaS) deployable then, for things like Django that have admin paths and end user paths, you can scale routes independently and save on overprovisioning. If this could be done automatically in the deployment build it would be seamless from the developer’s point of view. I think you can do this via configuration in the Serverless framework. It seems an interesting avenue for making more efficient deployments but at a cost in complexity for the framework builders.

There was quite an interesting research opportunity mentioned in the talk around serverless-style databases. For sites with intermittent or cyclical usage, not having an “always on” database represents a potentially big saving in cost and carbon. There was mention of the service neon.tech, which seems to have a free personal tier that might be perfect for hobby sites where usage is very infrequent and a spin-up time would be acceptable.

The import time talk was interesting; it focused on the developer experience of the Django shell boot time (although, to be honest, the Python shell for any major framework has the same issues). There were some practical tips on avoiding libraries with way too much going on during the import phase, but really the issue of Python code doing expensive eager work during import has been a live one for a long time.

I attended a talk about cold starts with Python AWS Lambdas in 2019 that essentially boiled down to much the same issues (something addressed, but not very well, in this AWS documentation on imports). Little seems to have improved since; assumptions about whether a process is going to be short- or long-lived ultimately come down to the library implementer, and the web/data science split in Python means that code is run in very different contexts, making it hard to share libraries across the two use cases.
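The usual mitigation is to defer the expensive work until first use rather than doing it at import time. A minimal sketch of the pattern (the module and names below are invented):

# config.py (hypothetical module)
import json

# Eager version: this would run the moment anything imports the module,
# so every shell start-up or Lambda cold start pays for it.
# SETTINGS = load_settings("settings.json")

_settings = None

def load_settings(path):
    # Stand-in for expensive work: parsing, network calls, building
    # large in-memory structures and so on.
    with open(path) as handle:
        return json.load(handle)

def get_settings():
    # Lazy version: the cost is paid on first use, not at import time.
    global _settings
    if _settings is None:
        _settings = load_settings("settings.json")
    return _settings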

The core language implementation is getting faster, but a conversation about good practice in import-time behaviour does not seem to be happening between the major library maintainers.

The performance enhancements for core Python actually linked the two talks because getting existing code onto more efficient runtimes helps reduce compute demands across all usage.

Software

State of the Browser 2021

This is the first in-person conference I’ve been to since the pandemic, and since it normally clashes with PyCon UK it is also the first State of the Browser I’ve been to in a while.

As a high-level pitch for the conference, it is a chance to hear from standards makers and browser developers about their thoughts on the web, web standards and issues in web development.

The conference had an audience of probably a third of what I felt it had the last time I attended in person. There was no issue with distancing, and you could add stickers to your attendee badge to nix photography and to ask people to keep their distance.

Usually the chance to socialise and network is a major part of the conference experience, but once I was there I realised that I didn’t really want to spend the time required to get to know someone new while COVID is as prevalent as it is, nor attend the generous post-conference drinks.

Which made me wonder why I was there at all. The answer, on reflection, is that being physically present meant that I was actually present for the talks as well. I bought tickets for virtual events earlier in the year and I still haven’t watched the videos.

By physically turning up I did pay more attention and I did engage and learn more than I did virtually.

I found a few things about the conference frustrating though. Firstly, a number of the speakers weren’t there and had instead recorded a talk, so being at the conference ended up being a collective video watch without being able to control the video and skip the boring bits. Also, there were no questions from the audience because that was being handled on Discord. Most of my Discord is taken up with gaming because, y’know, that’s what Discord pretty much is for the most part, so I wasn’t able to see that side of things because I didn’t have time to set up some kind of work account. But generally, whether it is Slack or something else, having the questions on the conference chat meant that the talks were really lectures, and where the speakers weren’t that proficient with their delivery it made the talks more boring.

So at the end of the experience I have no idea whether my attendance was a good idea or not. I probably would have been distracted at home, but at least I could have sorted out Discord and watched the pre-recorded videos in a more comfortable environment (I certainly could have dodged the morning’s torrential rain).

But when there was a good in-person speaker it was great. Rachel Andrew was the standout, managing a review of the history of layout systems while also previewing the thinking of the standards groups, in particular drawing a fascinating line from the necessity of the CSS contain property to being able to look forward to container queries. Stephanie Stimac shared similar insight into what the future may hold for the development of form elements and their backwards-compatible codification and customisation.

Alex Russell offered a rebuttal of the locked down mobile ecosystems from a capitalist perspective but failed to really offer remedies given that this overall is a capitalist failure.

In a totally different vibe Heydon Pickering did a talk about requiring people to switch off Javascript to read his blog. It was closer to standup and I did laugh out loud several times although trying to explain what made it funny and entertaining has proven highly difficult.

Rachel Andrew is one of the people behind Notist which a few people were using to share slide links. I hadn’t heard of it before and I can see it’s pretty handy compared to trawling Youtube trying to figure out if some talk you half remember has been posted there.

Overall I think it was worth the effort, I felt I got outside my bubble for a while and felt a bit more connected to the efforts that are still ongoing to safeguard and advance the web as a whole.
