
State of the Browser 2023

The State of the Browser conference is an annual staple for me (although it clashes with the excellent PyCon UK) and this year’s edition was another good day out. The logistics of the conference are great, with live captioning on the talks, a live stream, and the videos going online quickly after the event. If you attend in person, the new venue at the Barbican is comfy and well organised; the only glitch was that the breakfast rolls (particularly the vegetarian ones) ran out very quickly.

The keynote was a bit poor and rambly but did make a good point: of the top 100 websites, none has completely valid HTML, which seems crazy but also points to how hard it can be to create a complex website.

Amy Hupe‘s It all means nothing in the end was the only talk about personal experience rather than technology, and I was grateful for that because at some conferences such talks can make up a third of the content these days. The talk was absolutely fine, though I don’t think it gets the definition of burnout quite right: the workplace is an integral part of that definition, so for a self-employed contractor I think you’re not really talking about burnout but about the expectations you put on yourself. I was also interested that maybe people are only being exposed to OKR-style stretch goals at work now, and that incremental ways of working, Getting Things Done techniques and tiny habits are not as well known anymore.

Ian Lloyd‘s talk on UI accessibility horror stories was funny and very thought-provoking about how bad theatre and cinema seat pickers are, in particular that the accessible seating is not identified as such to assistive technology. However, the second half of the talk lacked a bit of clarity about how to break down more complex UIs; in particular I was curious about how detail can be exposed progressively as needed.
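The specific seat-picker failure, as I understood it, is that the accessibility of a seat is conveyed only visually. As a hedged sketch of the kind of fix (the markup and function here are my own invention, not from the talk), the information just needs to reach each seat’s accessible name:

```typescript
// Hypothetical seat-picker rendering: put the accessibility status
// of a seat into its accessible name rather than only its colour,
// so screen readers can announce it.
function renderSeat(row: string, num: number, wheelchair: boolean): string {
  const label =
    `Row ${row}, seat ${num}` + (wheelchair ? ", wheelchair accessible" : "");
  return `<button class="seat" aria-label="${label}">${num}</button>`;
}
```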

Kilian Valkhof’s talk on how common requirements can be met with pure standards-based techniques was the best technical talk of the day for me, but it was absolutely blighted by a technical issue with the monitor connection. I hope he blogs about some of the ideas he shared on the day because they are a lot simpler than many approaches I’ve seen. The major takeaway from his talk for me was that our knowledge gets fossilised at the point we learn how to do something: it is harder to relearn something with new techniques than to learn a new thing altogether.

Diego González was the only representative of a browser maker this year; this input is vital to the conference so it would be great to have something from Mozilla in future years. His talk about PWAs involved an extended pastiche of Blue Planet, which was amusing, but I didn’t take that much away from it on a personal level. It is interesting to know that there is some self-reflection going on about what constitutes a progressive web app and what the right relationship with native apps is. As compere Dave Letorey said during the conference, “the browser is the everything app”; if you’re not attempting to create a platform capitalist business, then how much do apps matter to you compared to access to services and information?

The talk on caching by Harry Roberts was a great overview of cache headers, which are the kind of thing you think you understand. I appreciated the clarity of his advice that you can just delete everything except Cache-Control and ETag, map the behaviour you want onto Cache-Control directives, and treat ETags as a bonus where relevant.
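As a hedged sketch of what that stripped-down policy might look like in practice (a toy Node handler of my own; the five-minute max-age is an arbitrary choice, not Harry’s recommendation):

```typescript
import { createServer } from "node:http";
import { createHash } from "node:crypto";

const body = JSON.stringify({ message: "hello" });
// A hash of the body doubles as the ETag: any change to the
// content changes the tag and defeats conditional requests.
const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

createServer((req, res) => {
  // One Cache-Control header expresses the whole policy:
  // reuse for five minutes, then revalidate before reusing again.
  res.setHeader("Cache-Control", "max-age=300, must-revalidate");
  res.setHeader("ETag", etag);

  // The ETag bonus: if the client's copy is still current,
  // answer 304 and skip sending the body.
  if (req.headers["if-none-match"] === etag) {
    res.statusCode = 304;
    res.end();
    return;
  }

  res.setHeader("Content-Type", "application/json");
  res.end(body);
}).listen(8080);
```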

The final talk was a technical demo using the speech-to-text API that is built into browsers from Apple and Google but isn’t really a web standard. The talk highlighted the huge burden placed on Mozilla to make this something that can genuinely be open and used across implementations. To be honest, I’ve seen talks like this before and they are fun, but I’m not sure this was the right forum if there wasn’t going to be a discussion of the compromise required to share your voice data with tech giants.
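For context, the API in question is SpeechRecognition from the Web Speech API; it still ships behind a webkit prefix and, in Chrome at least, the actual recognition happens on the vendor’s servers. A minimal hedged sketch of how it is used (the handlers are my own illustration, not the demo from the talk):

```typescript
// SpeechRecognition is not in the standard TypeScript DOM types,
// hence the casts; the prefix itself tells you this isn't a
// finished web standard.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-GB";
recognition.interimResults = false;

recognition.onresult = (event: any) => {
  // Log the top transcript of the first result.
  console.log(event.results[0][0].transcript);
};
recognition.onerror = (event: any) => console.error(event.error);

recognition.start(); // prompts the user for microphone access
```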

Chris Ferdinandi provided pre-recorded inserts between talks, which were very professionally done (as you might expect from someone who does it for a living). He reignited my interest in Web Components: I had previously been a bit sceptical, but now they are fully supported it feels silly to use even a microframework when there is a standard available.
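To make the no-framework point concrete, here is a hedged sketch of a minimal custom element (the element name and behaviour are my own invention) using nothing beyond the Custom Elements and Shadow DOM standards:

```typescript
// A self-contained counter component: no framework, no build step,
// just standard browser APIs.
class ClickCounter extends HTMLElement {
  private count = 0;

  connectedCallback() {
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = `<button>Clicked 0 times</button>`;
    const button = root.querySelector("button")!;
    button.addEventListener("click", () => {
      this.count += 1;
      button.textContent = `Clicked ${this.count} times`;
    });
  }
}

customElements.define("click-counter", ClickCounter);
// Usage in HTML: <click-counter></click-counter>
```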

The attendees were very friendly, the environment as attractive as ever, and the tickets are really a bargain, although there was an appeal from the organisers for people to buy tickets early to avoid them being personally liable for the venue deposit. Another Saturday well spent.

Finally, a shout out to the makers of Invidious for making it easier to review the videos from the day in a more logical interface than actual YouTube. I’m sure it can’t be long before it gets shut down somehow.


Tackley’s law of caching

Tackley’s law of caching is that the right cache time is the time it takes someone to come to your desk to complain about a problem.

If you obey this law, then by the time you open your browser to check the issue the person is complaining about, it will already have resolved itself.

Tackers may not have invented this rule but I heard it from him first, and it is one of the soundest pieces of advice in software development I’ve ever had.


The myth of “published” content

Working at the Guardian you often end up having conversations with people about the challenges of scaling to meet the often spiky traffic you get in online media. One thing that comes up again and again is the idea that content, once published, is essentially static. There is a lot to be said for this, as digital journalism sticks pretty close to many of the conventions of print media: copy is often culled from the print version and follows the 24-hour media cycle quite strongly.

However, what is often surprising is the number of edits a piece of content receives, particularly if it is not a print feature article. The initial version of an article is often just the mandatory information and a few paragraphs, sufficient to get across the basic story. It then goes through a number of revisions that often happen while the article is a draft. Often, but not always.

Once the article is published online, though, it triggers a new wave of edits as language gets cleaned up and readers, editors and lawyers all descend on it. Editors now have far more tools to see how the audience is reacting to a piece of content and how it is playing on social media. Articles also get picked up externally, which means making sure each one works as a landing page.

Naturally, stories often develop their own momentum, which requires you to switch from a single piece to a set of stories approaching different aspects of the overall reporting. You then need to link the different pieces of content together to form a logical package.

One interesting thing is looking at how many articles are changed after seven days. It is a surprising number, as new stories often create a need to provide historical context, and older stories can look dusty in the light of breaking events. We have also had strange things happen with social news, where aggregating sites pick up a story that was overlooked at the time.

All of this means that you cannot naively treat content as static; instead you have an interesting decaching problem. It is true that content doesn’t change much, until it does start changing, and then the published version needs to reflect the edits reasonably rapidly if you want to be picked up by things like Google.
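A hedged sketch of one way to express that trade-off (the cache and function names are illustrative, not how the Guardian actually works): treat content as static by default, but let the publishing tools explicitly decache on edit so changes propagate quickly.

```typescript
// Illustrative only: content is cached with no expiry, and an edit
// event is the thing that invalidates it.
type Article = { id: string; html: string };

const cache = new Map<string, Article>();

async function fetchArticleFromOrigin(id: string): Promise<Article> {
  // Placeholder for the real origin lookup (CMS, database, etc.).
  return { id, html: `<article>…</article>` };
}

async function getArticle(id: string): Promise<Article> {
  // Serve from cache indefinitely: most content really doesn't
  // change after its first week.
  const hit = cache.get(id);
  if (hit) return hit;
  const fresh = await fetchArticleFromOrigin(id);
  cache.set(id, fresh);
  return fresh;
}

// The decaching half: the publishing tool calls this on every edit,
// so the next request reflects the change rapidly.
function onArticleEdited(id: string) {
  cache.delete(id);
}
```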
