Programming

/dev/winter 2015

The Dev Sessions are a Cambridge tech conference organised by the same people who do FPDays. The conference was free, held on a Saturday, and based at the Moeller Centre near the Churchill College campus. The only practical way to and from the station was by taxi (befriend those on expenses; thank you John Stevenson).

The talks were on broad topics relating to development. I had pitched a talk on Developer Autonomy, something I'm engaged with in the day job.

Misjudging the train times, I arrived a little late and jumped into the talk on using graph databases in game design. This turned out to be a much more general talk about how the speaker had created tooling to support the game designers in his job. Being a fellow tool provider, my interest was immediately piqued.

The game the team were building was a weird monster-trapping game, something like Pokémon but more complicated. To trap monsters you need a trap and a lure or bait, and you need to craft both, which means acquiring recipes and components. Trapped animals provide you with components for other baits and traps, plus a monetary reward.

The talk was pretty wide-ranging. They were using Neo4j to analyse circular dependencies in the "quests" to capture monsters: when designers changed the game data it would get loaded into the graph, and all the dependencies were checked to confirm they formed a tree (flowing forward) rather than containing inter-dependencies (circular references).
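
Stripped of the Neo4j specifics, that validation step is just cycle detection over a dependency graph. A minimal sketch in TypeScript; the quest names and data shape are invented for illustration, not taken from the talk:

```typescript
// Each quest lists the quests whose rewards it depends on.
// A hypothetical, simplified stand-in for the real game data.
type QuestGraph = Map<string, string[]>;

// Depth-first search with a "currently on the stack" set:
// revisiting a node that is still on the stack means a cycle.
function findCycle(graph: QuestGraph): string[] | null {
  const visited = new Set<string>();
  const onStack = new Set<string>();
  const path: string[] = [];

  function visit(node: string): string[] | null {
    if (onStack.has(node)) return [...path, node]; // cycle found
    if (visited.has(node)) return null;
    visited.add(node);
    onStack.add(node);
    path.push(node);
    for (const dep of graph.get(node) ?? []) {
      const cycle = visit(dep);
      if (cycle) return cycle;
    }
    onStack.delete(node);
    path.pop();
    return null;
  }

  for (const node of graph.keys()) {
    const cycle = visit(node);
    if (cycle) return cycle;
  }
  return null;
}

// Example: trapping a gryphon needs bait from a basilisk, and vice versa.
const quests: QuestGraph = new Map([
  ["gryphon", ["basilisk"]],
  ["basilisk", ["gryphon"]],
]);
console.log(findCycle(quests)); // ["gryphon", "basilisk", "gryphon"]
```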

It was also possible to generate a "map" of everything in the game, showing which elements were central and which were on the periphery (the periphery being where the high-level monsters near the end of the game should sit).
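
A crude version of that map falls out of simple degree counting: heavily referenced items sit at the core and rarely referenced ones at the edge. Another illustrative sketch with invented data (the talk presumably used Neo4j's own graph algorithms):

```typescript
// Degree centrality: count how many edges touch each item.
// The dependency data here is invented for illustration.
const dependsOn: Record<string, string[]> = {
  basicTrap: [],
  shinyLure: ["basicTrap"],
  gryphon: ["shinyLure", "basicTrap"],
  elderDragon: ["gryphon"],
};

const degree = new Map<string, number>();
const bump = (k: string) => degree.set(k, (degree.get(k) ?? 0) + 1);

for (const [item, deps] of Object.entries(dependsOn)) {
  degree.set(item, degree.get(item) ?? 0);
  for (const dep of deps) {
    bump(item);
    bump(dep);
  }
}

// Sort from core (high degree) to periphery (low degree);
// the end-game monsters should show up near the bottom.
const ranked = [...degree.entries()].sort((a, b) => b[1] - a[1]);
console.log(ranked); // e.g. gryphon at the core, elderDragon peripheral
```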

All the game data is in text files stored in Git. The developers had built a tool over the VCS that simplified the presentation of the many JSON files, but it was also possible for designers to edit them directly with whatever editor they favoured.

All the game data then gets built, validated and packed so it can be shipped off to the servers to power the game.
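
The talk didn't detail the build step, but the shape of such a pipeline is familiar: walk the data directory, parse each JSON file, validate it, and pack everything into a single artefact. A hypothetical sketch; the file layout and the "id" requirement are my inventions:

```typescript
import { readdirSync, readFileSync, writeFileSync } from "fs";
import { join } from "path";

// Walk a directory of designer-edited JSON files, validate each one,
// and pack them into a single bundle for the servers.
function buildBundle(dataDir: string, outFile: string): void {
  const bundle: Record<string, unknown> = {};
  const errors: string[] = [];

  for (const name of readdirSync(dataDir).filter((f) => f.endsWith(".json"))) {
    const raw = readFileSync(join(dataDir, name), "utf8");
    try {
      const entry = JSON.parse(raw) as { id?: string };
      // Minimal validation: every entry must carry an id.
      if (!entry.id) throw new Error("missing 'id' field");
      bundle[entry.id] = entry;
    } catch (e) {
      errors.push(`${name}: ${(e as Error).message}`);
    }
  }

  if (errors.length > 0) {
    throw new Error(`validation failed:\n${errors.join("\n")}`);
  }
  writeFileSync(outFile, JSON.stringify(bundle));
}

buildBundle("data/monsters", "build/monsters.bundle.json");
```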

I think, if I understood the talk correctly, that the build also includes the localised text, which is then served from the server rather than by updating a binary datafile on the client.

The final really interesting part of the talk involved the use of genetic algorithms to try to create game data. Data is captured from the game indicating what percentage of players have captured a particular monster. The designer can then enter the capture rate they are targeting, and the program goes off and tries to generate variations on the monster stats and trap requirements that it predicts will be more achievable by players. If any suitable combinations are found, the designer can review them and choose the one they prefer.
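
The talk didn't go into the mechanics, so what follows is only a guess at the shape of the approach, and is closer to random search than a full genetic algorithm (a real one would add crossover and multiple generations). All the names and the toy capture-rate model are mine, not the speaker's:

```typescript
// A monster is just a bag of numeric stats for this sketch.
type Stats = { health: number; speed: number; trapStrength: number };

// Hypothetical stand-in for the real predictive model: assume harder
// stats mean fewer players succeed. The real model would be fitted
// against the capture data harvested from the live game.
function predictedCaptureRate(s: Stats): number {
  const difficulty = s.health * 0.5 + s.speed * 0.3 + s.trapStrength * 0.2;
  return Math.max(0, Math.min(1, 1 - difficulty / 100));
}

function mutate(s: Stats): Stats {
  const jitter = () => 1 + (Math.random() - 0.5) * 0.2; // +/- 10%
  return {
    health: s.health * jitter(),
    speed: s.speed * jitter(),
    trapStrength: s.trapStrength * jitter(),
  };
}

// Generate variations and keep those predicted to land nearest the
// designer's target capture rate, for the designer to review.
function suggestVariants(base: Stats, target: number, n = 200): Stats[] {
  const population = Array.from({ length: n }, () => mutate(base));
  return population
    .sort(
      (a, b) =>
        Math.abs(predictedCaptureRate(a) - target) -
        Math.abs(predictedCaptureRate(b) - target)
    )
    .slice(0, 5);
}

const current: Stats = { health: 80, speed: 60, trapStrength: 40 };
console.log(suggestVariants(current, 0.35)); // 5 candidates to review
```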

Again, having selected some changes, these are applied to the data files via the tool and then packed and shipped.

It was a really interesting talk about how engineers can make a real difference by building tools and was completely undersold by its title.

The Mixcloud talk on scaling on a bootstrap budget was very interesting, as most talks on scaling are about reliability, volume and throughput. It is very rare to get one that focuses purely on building the lowest-cost stack.

One of the key things they do to achieve this is a lot of capacity planning with just-in-time rental: buying capacity just ahead of rising usage, something that is much easier when you have a focused product with a limited scope that all your engineers can concentrate on.

They were also using some interesting hacks, like ruthlessly exercising their right to renew contracts to make sure their applications ran on the newest hardware being brought into the datacentre, instead of staying on the older blades. A few of the other things I'd heard of before: for example, sizing your requirements so that you need whole individual boxes, and therefore don't share your infrastructure with anyone else, instead of building smaller services with numerous deployments.

There were a few blanket statements that I didn't agree with. For example, S3 was condemned as being "expensive" when it's really not; the more nuanced statement is that S3 bandwidth is expensive, and that it is really more of a storage solution than something you use to serve the public directly at scale.

One of the big domain-specific issues was around streaming audio files. Intriguingly, when you serve a file the connection is so fast that you end up sending the whole asset to the browser, when the user is perhaps only going to listen to ten seconds of it to see if they like it.

A lot of the talk was really about building a single-point-of-presence CDN on the cheap. I did wonder if there wasn't something smart to be done with servers that regulated the downloads more evenly, or with a custom player and streaming format.
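
On the "regulate the downloads" idea: one naive version is to drip the file out in paced chunks rather than letting the connection drain it at line speed, so an abandoned listen only costs a few seconds of audio. A hypothetical Node sketch, with made-up chunk size and pacing; a real implementation would need Range support, backpressure handling and much more:

```typescript
import { createServer } from "http";
import { createReadStream } from "fs";

// Serve an audio file in paced chunks so a listener who abandons
// after ten seconds hasn't been sent the whole asset.
const CHUNK_BYTES = 64 * 1024; // roughly 4s of 128kbps audio
const INTERVAL_MS = 2000; // stay ahead of playback, but not too far

createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "audio/mpeg" });
  const stream = createReadStream("track.mp3", {
    highWaterMark: CHUNK_BYTES,
  });

  stream.on("data", (chunk) => {
    res.write(chunk);
    // Pause between chunks instead of writing at line speed.
    stream.pause();
    setTimeout(() => stream.resume(), INTERVAL_MS);
  });
  stream.on("end", () => res.end());
  req.on("close", () => stream.destroy()); // stop reading on abandon
}).listen(8080);
```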

I stopped by the Julia introduction; there were some interesting points but it was very slow. Julia is quite an interesting language, though, and I should spend more time with it.

The final talk of the day was on "smells" in automated testing. I thought this would be an interesting topic because I think automated testing is hard, but a combination of obscure slide illustrations, fairly old testing strategies and dodgy OO code examples, delivered at the end of the day, resulted in a talk that got side-tracked. Testing is hard, and since test code is code, it does not seem worth calling out tests as something special within a codebase. Writing good test code means writing good code, and applying the same scrutiny of solution design to the test code just makes sense.

Two things that were not mentioned in the talk, but which I think matter when you are talking about the subject as a whole, are monitoring and generative testing. I think any talk about testing now needs to cover an approach to generative testing; the old world of example-based tests and specifications might be helpful for illustrating code but should not be considered proper test code.

Things that can be extremely difficult to test might be trivial to monitor. Time spent understanding the performance of code in production can be just as valuable as investing a lot of time in creating complex test code.

The whole day was full of interesting talks and bits and pieces and I'm definitely interested in trying to make the trip to the summer version of the event.

Web Applications

State of the Browser 2014

I hadn't been to State of the Browser before. It is a very cheap, one-day weekend conference on the topic of web standards and the web in general.

Conway Hall, the venue, is a beautiful place and comes highly recommended. However, the grand aura of humanist lectures did remind you how lame most slide-based presentations are. Shut out the light, we can't see the cat gif!

The theme and topics of the conference are vague, and therefore there was a lot of variety in the talks. More than half came from professional vendor advocates, and while slick and enjoyable, there was a palpable sense of yearly objectives being ticked off. Community communication, check; reminder of organisation mission, check. The rest of the talks were pretty crappy though, so it's not all roses in the community either.

I've put down a few immediate-reaction thoughts, but I thought I would try and formulate some general takeaways.

Firstly, the meaning of the web is very vague. There was an attempt to formulate the meaning of a "web platform" but it floundered a bit. The difficulty is not really what the web is, which is fundamentally unchanged since its inception, but rather what all the companies are doing when they try to build and expand on the web.

Essentially, what do browser vendors talk about when they talk about the web? To them the web is the input that the browser will accept. Microsoft, Mozilla, Opera and Google were all represented, along with Telefonica, who are making a big bet on Firefox OS.

One key theme was the belief that affordable smartphones (say, below £50 to buy and presumably close to £10 a month to run) are imminent and will herald a new wave of traffic and content consumption. I feel that broadening on-demand access to the web is a good opportunity, but the value of this audience, beyond hopefully buying data plans that are more expensive than talk minutes and text bundles, was utterly unproven and seemed an issue of no concern to the speakers.

One interesting thing about web development is that it is a place where visual design, technology and content creation collide into one huge grope box orgy where everything gets mixed up with everything else.

The visual design of the web was mentioned more than a few times, and a lot of the standards work was essentially about delivering more fidelity to conceptual designs. It's interesting that this is seen as a fundamentally good thing rather than being interrogated. Perhaps it was discussed in earlier years.

There was also an interesting division in what people saw as their responsibilities. Javascript is now sufficiently complex that there is stratification and specialisation even within this niche. "Glass" people do UX, HTML and CSS; Javascript people do MVC "backend" work and performance; and literally no-one is thinking about how the server could make any of this easier.

There was a dispiriting sense, from a technology perspective, of people hitting everything in sight with a golden hammer made of HTML/CSS/JS. About a fifth of the things discussed on stage boiled down to "a written standard for accessing OS capabilities based on an implementation of that standard". It makes you appreciate things like Linux, where there is pressure to actually tackle root problems and needs rather than layering hack on hack. The acceptance of the diabolical state of touch detection is an example, leading to the suggestion that you should progressively enhance on the detection of mouse events. I mean, after all, why use a filesystem abstraction when you could just iterate over /dev yourself?
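
For reference, the suggested approach boils down to something like the sketch below: assume a touch UI, then enhance when mouse events actually show up. This is an illustration of the suggestion, not an endorsement of it, and the class names are invented:

```typescript
// Assume a touch-first UI, then enhance once a mouse event proves a
// pointing device is actually present.
document.documentElement.classList.add("touch-ui");

window.addEventListener(
  "mousemove",
  () => {
    // e.g. enable hover menus, shrink hit targets, etc.
    document.documentElement.classList.add("has-mouse");
    // (Touch browsers can synthesise mouse events, which is exactly
    // why this kind of detection is so fragile.)
  },
  { once: true }
);
```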

The same paucity of leadership came up on the issue of HTTP/2, where it became clear that the vendors regard it as a way of dealing with the overhead of HTTP connections, not really as a way to create the right kind of networking for the new activity we want to perform online.

It was also nice to see not one but two "standards" for defining viewport-relative sizes: vw in the viewport spec (which seems very sensible and progressive, by the way) and w in the picture/srcset responsive images standard.

There were a few moments when people seemed to touch on a better way of doing things (declarative, programmatic rules for layout, for example) but these were rare. Maybe it's just not that kind of conference.

In terms of talks, the clear standout was Martin Beeby's talk on what the Internet Explorer team have been doing to remove bottlenecks from their rendering. Most of the stuff was sensible and straightforward, but the detail on GPU interaction was fascinating, particularly on picture loading.

One massive problem with the conference was the weird idea that speakers weren't going to take questions after their talks. Martin mentioned that buffers between the browser and the GPU were small, and I would have loved to have known whether that was an intrinsic limitation or not. The lack of ability to follow up on issues diminished the utility of all the talks.

Other than that, the walkthroughs of the viewport, service worker (particularly the caching API) and picture-tag specifications were all helpful. Andreas Bovens's talk also had a helpful review of pixel density and its new related units.
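
The caching API is easy to get a feel for in a few lines: a service worker that pre-caches some assets on install and serves them cache-first. A minimal sketch; the cache name and asset list are placeholders:

```typescript
// sw.ts - compiled to sw.js and registered from the page.
const CACHE = "static-v1"; // placeholder cache name
const ASSETS = ["/", "/styles.css", "/app.js"]; // placeholder assets

self.addEventListener("install", (event: any) => {
  // Pre-cache the core assets before the worker takes over.
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
});

self.addEventListener("fetch", (event: any) => {
  // Cache-first: fall back to the network only on a miss.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```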

The talks were filmed; I have no idea whether they will be posted at some point, but those are the ones I'd recommend.

The ticket was very cheap, but the main issue with the conference was the time it takes. The programming is very baggy; I felt that if all the talks had been halved in length and the panel discussion chopped to make room for post-talk questions, there would have been a really good long afternoon of material.

I'll probably give it another go next year, but be a bit more ruthless about which talks to attend.

Programming

Scale Summit 2014

Scale Summit is the new Scale Camp, an unconference aimed at bringing together the same kinds of topics as you might expect at Velocity.

This was the first Scale Summit. The venue was excellent, as was the food (especially the bacon rolls, from Eden apparently) and the supply of drink. Scale Summit happens under the Chatham House Rule, so there's no attribution of what is said, which allows the attendees to be really frank, and also lets people be free with what they really know rather than hedging and trying to be "on message". It makes for a fascinating gathering.

The sessions varied in their organisation but all focussed on discussion between the participants. I managed to go to the Elasticsearch session, which was interesting for the practical boundaries that people were finding and also for the operational knowledge. On the subject of using ES as the primary application store, the feeling seemed to be "not yet", but there were also some words of wisdom about separating out document storage and search functionality and not finding a superficial unity in the two purposes.

The microservices session was a fast and furious fishbowl, easily the liveliest event and one that is going to require a post in its own right. It was interesting to see that the room split into practitioners and people who were sceptical that microservices were a thing or held value over conventional service development.

After lunch I sat in on sessions about getting frontend testing off the critical path to production (not much can be done now, but clearly more effort needs to be made), distributed denial-of-service attacks on transactional sites (not as scary as I imagined, but again we have to be thinking about how this works), distributed data stores (good war stories; I felt better informed for going), getting ops and developers to work together, and Linux containers (definitely going to try Docker now).

I had quite a few questions going into the event, and while I didn't get all the answers I hoped for, I did at least establish that smart people don't have simple answers to them either, which is reassuring. It's hard to tell in the heat of it all whether you're at the edge, doing things that push the boundaries, or simply over-complicating your situation.

The attendees were nicely mixed and from a range of backgrounds; ops, architecture and development were all well-represented, so you felt you were seeing a rounded picture.

The unconference format left me wanting more rather than feeling I had had enough. The openness was amazing and I am planning on being there next year.
