Games

Feral Vector 2014

I decided to take a break from my regular technology concerns and took a day off to visit the indie games conference Feral Vector. The conference program was packed through the whole day with 20-minute talks (making it a little tricky to judge when to leave for things like lunch). The programming was really good, and the shorter format made it feel more lively than other recent events I’ve been at, like State of the Browser.

The venue was the Crypt on the Green, aka the crypt of St. James in Clerkenwell, which is a pretty great venue and in summer particularly has the advantage of the church grounds/public park. In terms of layout, though, the talks were in the side room while the far larger main room was given over to the game demos. So the talks were packed all day while the main room always felt empty; a swap on the day might have worked a lot better. The small mezzanine used as a tea room also ended up feeling like a sauna.

In terms of the demos, I liked the folk games tutorial of Turtle Wushu, and I really liked Night in the Woods, which felt like a platforming Slackers in the same cultural space as Gone Home. Hohokum was very weird: it definitely has that play feel but lacks enough feedback to make you feel like you’re actually interacting with the world.

I didn’t see all the talks, so I’m going to talk about the ones that I liked. One standout was Tim Hunkin on the arcade game booths he builds; the units are witty takes on conventional games of physical strength and dexterity. Adam Hay gave a great overview of how music and audio design have developed in videogames, and his was the closest the event came to a technical talk. The explanation of the different synth chips versus sampled sound was interesting, along with the way that sound design was initially a technical challenge due to hardware but became a simulation challenge once hardware ceased to be a limit.

There were a few performance pieces (and quite a few journalists and writers among the speakers): Christos Reid oscillated between confession and analysis of autobiographical games, making lots of good points but never really being clear about what any of it meant. Alice O’Connor mixed spoken-word performance of mod readme files with her confession that she was losing interest in gaming and an awkward attempt to contextualise the readme files’ writers; the recital element was the strongest. Hannah Nicklin gave the strongest performance, on the subject of how games break for her, but the strength of the performance robbed the analysis of power: you ended up appreciating the delivery while feeling that the material lacked the depth and reflection it deserved. It felt more about hitting a beat than exploring an idea. Near the end of the day James Parker did a puppet-show Q&A that took a few easy pot shots but was also laugh-out-loud funny; his turn actually did the best job of marrying form and material.

Tammy Nicholls talked about world-building and how it is valuable both to game depth and play and to the commercial aspects of intellectual property, but she never really got into the details, so I felt I was hearing half an argument. I often find that a good narrative and deep background carry you through some poor gameplay, so perhaps world-building is undervalued in game development.

Luke Whittaker talked about working on the game Lumino City and why, for this game and the previous game Lume, the studio has focussed on physical materials translated into a game format. While the details of laser-cutting cardboard to make the city were fascinating, I’m not sure any meaningful justification for the approach was offered beyond the aesthetic, which seems to be partly a nostalgia for a certain era of animation. However there’s no denying that the aesthetic is unique, and it’s worth looking through the screenshots on the site.

There was also an interesting piece on combining art styles by SFB Games and collaborator Catherine Unger which again had a little bit of technical detail as to the issues and the solutions.

Finally there was a talk about physical puzzle rooms, a genre I don’t like even in digital format, but it did mention the interesting intersection between immersive, participative theatre and physical gaming. This was relatively new ground for me (although obviously people have raved about Secret Cinema). I was interested by the idea of things like 2.8 Hours Later and the Heist. Not enough to want to participate yet, but definitely more curious about the possibilities.

I think the talks were all recorded, although the room was often in darkness to make the projection work, so I’m not sure how the recordings turned out.

Standard
Programming

Programming as Pop Culture

A “programming is pop culture” quote from a 2004 interview with Alan Kay has been doing the rounds as part of the debate on the use of craft as a metaphor for development. Here’s the relevant passage:

…as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

On the face of it this is a snotty quote from someone who feels overlooked; but then I am exactly one of those unsophisticated people who entered a democratised medium and is now rediscovering the past!

As a metaphor, though, for analysing how programmers talk and think about their work, and the way that development organisations organise what they do, the idea that programming is pop culture is powerful, relevant and useful.

I don’t think that pop culture is necessarily derogatory. However, in applying the term to programming I think you have to accept that it is meant negatively. Regular engineering, for example, doesn’t discuss the nature of its culture; it is much more grounded in reality, the concrete and stolen sky as Locke puts it.

Architecture though, while serious, ancient and storied is equally engaged in its own pop culture. After all this is the discipline that created post-modernism.

There are two elements to programming pop culture that I think are worth discussing initially: fashion and justification by existence.

A lot of things exist in the world of programming purely because they are possible: Brainfuck, CSS versions of the Simpsons, obfuscated C. These are the programming equivalent of the Ig Nobels: weird fringe activities that contribute little to the general practice of programming.

However ever since the earliest demoscene there has been a tendency to push programming to extremes and from there to playful absurdity. These artifacts are justified through existence alone. If they have an audience then they deserve to exist, like all pop culture.

Fashion though is the more interesting pop culture prism through which we can investigate programming. Programming is extremely faddish in the way it adopts and rejects ideas and concepts. Ideas start out as scrappy insurgents, gain wider acceptance and then are co-opted by the mainstream, losing the support of their initial advocates.

Whether it is punk rock or TDD the patterns of invention, adoption and rejection are the same. Little in the qualitative nature of ideas such as NoSQL and SOA changes but the idea falls out of favour usually at a rate proportional to the fervour with which it was initially adopted.

Alpha geeks are inherently questors for the new and obscure; the difference between them and regular programmers is their guru-like ability to ferret out the new and exciting before anyone else. Their status and support create an enthusiasm for things. They are tastemakers, not critics.

Computing in general has such fast cycles of obsolescence that I think its adoption of pop culture mores is inevitable. It is difficult to articulate a consistent philosophical position and maintain it for years when the field is in constant churn and turmoil. Programmers tend to attach to concrete behaviour and tools rather than abstract approaches. In this I have sympathy for Alan Kay’s roar of pain.

I see all manner of effort invested in CSS spriting that is entirely predicated on the behaviour of HTTP/1.1 and which will all have to be changed and undone when the next version of HTTP changes how files are downloaded. Some of those who didn’t need the micro-optimisation in initial download time would have been better off ignoring the advice and waiting for the technology to improve.

When I started programming professionally we were at the start of a golden period of Moore’s Law, when writing performant mainframe-style code was becoming irrelevant. Now, at the end of that period, we still don’t need to write performant code; we just want easy ways to execute it in parallel.

For someone who loves beautiful, efficient code the whole last decade and a half is just painful.

But just as in music, technical excellence doesn’t fill stadiums. People go crazy for terrible but useful products and to a lesser degree for beautiful but useless products.

We rediscover the wisdom of the computer science of the Sixties and Seventies only when we are forced to in the quest to find some new way to solve our existing problems.

Understanding programming as pop culture actually makes it easier to work with developers and software communities than trying to apply an inappropriate intellectual, academic, industrial or engineering paradigm. If we see the adoption, or fetishism, of the new as a vital and necessary part of deciding on a solution then we will not be frustrated by change for its own sake. Rather than scorning over-engineering and product ivory towers, we can celebrate them as the self-justifying necessities of excess that are the practical way we move forward in pop culture.

We will not be disappointed in the waste involved in recreating systems that have the same functionality implemented in different ways. We will see it as a contemporary revitalisation of the past that makes it more relevant to programmers now.

We will also stop decrying things as the “new Spring” or “new Ruby on Rails”. We can still say that something is clearly referencing a predecessor but we see the capacity for the homage to actually put right the flaws in its ancestor.

Pop culture isn’t a bad thing. Pop culture in programming isn’t a bad thing. It is a very different vision of our profession than the one we have been trying to sell ourselves, but as Kay says, maybe it better captures who we really are.

Standard
culture

Peak Peak

The Guardian recently published a little spattering of articles talking about such frivolous things as peak beard and peak craft beer. While this is a cute way of poking fun at current trends, I worry that it is devaluing a useful term.

Most people are using “peak X” to mean simply “X suffers diminishing returns”. Namely that at some point supply of a product, be it facial hair or small-batch beer, exceeds demand and its over-supply actually diminishes demand.

The original form of the term, peak oil, refers to the point at which extraction of a finite resource reaches its maximum rate. The point about peak oil is that once you’ve hit it you can never achieve the same output again. After the peak, the value of the resource begins to rise due to its scarcity, and the diminishing availability of the resource starts to outweigh any efficiency gains in its extraction.

Although in terms of popular usage the misapplication of “peak” seems to be winning, I think it would be a shame if people started misunderstanding the original meaning of the term through its misapplication to renewable resources.

Standard
Gadgets, Programming, Python

Creating the Guardian’s Glassware

For the last two months on and off I’ve been developing the Guardian’s Glassware in conjunction with my colleague Lindsey Dew.

Dealing with secret pre-alpha hardware has at times been interesting, but the actual process of writing services using the Mirror API is pretty straightforward.

Glass applications are divided into native, using the Glass SDK, and service-based Glassware using Mirror. Mirror is built on web-friendly technologies such as HTTP, JSON and OAuth2. It also follows the Google patterns for APIs so things like authentication, discovery and the client libraries are all as you would expect if you’ve used a modern Google API before.

For our project, which was focussed on trying to create a sensible, useful newsfeed to Glass, we went with Mirror. If you want to do things like geolocation or picture and video upload then you’ll want to go native.

For various reasons we had a very narrow initial window for development. Essentially we had to start and finish in May. Our prototyping was done with a sample app from Google (you can use Mirror without an actual device), the Mirror playground and a lot of imagination.

When we actually got our Glass devices it took about a week to get my head around what the use case was. I had been thinking of it as a very lightweight mobile phone, but it is much more pervasive, with lots of light contact points. I realised that we could be more aggressive about pushing information out and could go for larger sets of stories (although that was dialled back a bit in the final app to emphasise editorially curated content).

Given the tight, fixed deadline for an unknown product, the rest of the application was built using lots of known elements. We used a lot of the standard Glass card templates. We used Python on Google App Engine to simplify the integration service, and because we had been building a number of apps on that same stack. The application has a few concerns:

  • performing Google Authentication flow
  • polling the Guardian’s Content API and our internal Notification platform
  • writing content to Mirror
  • handling webhook callbacks from Mirror
  • storing a user’s saved stories

We use the Content API all the time; normally we are rendering it into widgets or pages, but here we are just transforming JSON into JSON.
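That JSON-to-JSON step can be sketched roughly as follows. The Content API fields shown are a simplified subset, and `to_timeline_card` is a hypothetical helper of ours, though the output fields (`text`, `html`, `canonicalUrl`, `menuItems`) are standard Mirror timeline item properties:

```python
# Sketch: reshape a (simplified) Content API item into a Mirror
# timeline item body. Field names on the input side are illustrative,
# not the Guardian's actual schema.

def to_timeline_card(item):
    """Build a Mirror timeline item body from a content item dict."""
    headline = item.get("webTitle", "")
    url = item.get("webUrl", "")
    return {
        "text": headline,  # plain-text fallback for the card
        "html": "<article><h1>%s</h1></article>" % headline,
        "canonicalUrl": url,
        "menuItems": [{"action": "READ_ALOUD"}, {"action": "DELETE"}],
    }

card = to_timeline_card({
    "webTitle": "Example headline",
    "webUrl": "https://www.theguardian.com/example",
})
```

In the real service the resulting body would be posted to Mirror via the Google client library; the transformation itself is the only interesting part.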

The cards are actually rendered by our application, and the rendered content is packaged into the JSON along with a text representation. However, rendering according to the public Glass stylesheet and rendering on the actual device differed, so checking the actual output was important.

The webhooks are probably best handled using deferred tasks so that you are handing off the processing quickly and limiting the concern to just processing the webhook’s payload.
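The hand-off can be sketched like this. On App Engine the deferral would use the taskqueue/deferred libraries; here a stdlib thread pool stands in so the shape is runnable anywhere, and the payload fields (`itemId`, `operation`) mirror what a Mirror notification contains:

```python
# Sketch of the "ack fast, process later" pattern for Mirror webhooks.
import json
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
processed = []

def process_notification(payload):
    # The slow work: fetch the timeline item, update storage, etc.
    processed.append(payload.get("itemId"))

def handle_webhook(raw_body):
    """Webhook endpoint: parse, hand off, return 200 immediately."""
    payload = json.loads(raw_body)
    executor.submit(process_notification, payload)  # defer the real work
    return 200  # Mirror retries on non-2xx, so ack as soon as possible

status = handle_webhook(json.dumps({"itemId": "abc123", "operation": "UPDATE"}))
executor.shutdown(wait=True)  # wait only so this example is deterministic
```

The point is that the endpoint's only concern is validating and enqueueing the payload; everything slow happens off the request path.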

For the most part the application is a mix of stock Google API code and some cron tasks that read from one web API and write to another.

Keeping the core simple meant it was possible to iterate on things like the content mix and user interactions. The need to verify everything on the device served as a limiting factor.

Glass is a super-divisive technology: people get very agitated when they see you wearing it. No-one seems to have an indifferent opinion about it.

Google have done a number of really interesting things with Glass that are worth considering from a technology point of view even if you feel concerned about privacy and privilege.

Firstly the miniaturisation is amazing. The Glass hardware is about the size of a highlighter and packs a camera, memory, voice synth, wifi and bluetooth. The screen is amazingly vivid and records and plays video well. It has a web browser that seems really capable of standard HTML rendering.

The vocal recognition and command menus are really interesting and you feel a little bit space age when you fire off a Google query and get the information you’re looking for read back to you in seconds.

Developing with the Mirror API is really interesting because it solves the Android fragmentation issue. My application talks to Mirror, not to the native device. If Google want to change the firmware, wire protocol or security they can, without worrying about how it will affect the apps. If they do have to make breaking changes, they can use the standard web API versioning they already use.

Unlike most Guardian projects this one was embargoed before the UK launch, but it is great to see it out in the open. Glass might not be the ultimate wearable tech answer, just as the brick phones didn’t directly point to the iPhone. But Glass is a bold device, and making the Guardian’s journalism available on a new platform has been an interesting test of our development processes and an interesting challenge to the idea of what web-capable devices are (just as the Pixel exposed some flaky thinking about what a touch device is).

What will be interesting from here is how journalists will use Glass. Our project didn’t touch on how you can use Glass to share content from the scene, but Glass has powerful capabilities to capture pictures and video hands-free and deliver them back to desk editors. There are already a few trials planned for less stressful feature pieces, and it will be interesting to see whether people find the interface intuitive and more convenient than firing up their phone.

Standard
Software

In praise of fungible developers

The “fungibility” of developers is a bit of a hot topic at the moment. Fungibility means the ability to substitute one thing for another to the same effect; money, for example, is fungible for goods in modern economies.

In software development that means taking a developer in one part of the organisation and substituting them elsewhere and not impacting the productivity of either developer involved in the exchange.

This is linked to the mythical “full-stack” developer by the emergence of different “disciplines” within web software development, usually these are: devops, client-side (browser-based development) and backend development (services).

It is entirely possible for developers to enter one of these niches and spend all their time in it. In fact sub-specialisations in things like responsive CSS and single-page apps (SPA) are opening up.

Now my view has always been that a developer should aspire to have as broad a knowledge base as possible and to be able to turn their hand to anything. I believe problems occur when you don’t really understand what is going on around your foxhole. Ultimately we are all pushing electric pulse-waves over wires and chips, and it is worth remembering that.

However my working history was pretty badly scarred by the massive wave of Indian outsourcing that happened post the year 2000 and as a consequence the move up the value-chain that all the remaining onshore developers made. Chad Fowler’s book is a pretty good summary of what happened and how people reacted to it.

For people getting specialist pay for niche work, full-stack development doesn’t contain much attraction. Management sees fungibility as a convenient way of pushing paper resources around projects and then blaming developers for not delivering. There are also some well-written defences of specialisation.

In defence of broad skills

But I still believe that we need full-stack developers and if you don’t like that title then let’s call them holistic developers.

Organisations do need fungibility. Organisations without predictable demand, or that are experiencing disruption to their business model, need to be flexible and to respond to unexpected situations.

You also need to fire drill those situations where people leave, fall ill or have a family crisis. Does the group fall apart or can it readjust and continue to deliver value? In any organisation you never know when you need to change people round at short notice.

Developers with a limited skill set are likely to make mistakes that someone with a broader set of experiences wouldn’t. It is also easier for a generalist developer to acquire specialist knowledge when needed than to broaden a specialist.

Encouraging specialism is the same as creating knowledge silos in your organisation. There are times when this might be acceptable but if you aren’t doing it in a conscious way and accompanying it with a risk assessment then it is dangerous.

Creating holistic developers

Most organisations have an absurd reward structure that massively benefits specialists over generalists. You can see it in the salaries paid for iOS development and responsive mobile CSS work. The fact that someone has a narrower capability than their colleagues means they are rewarded more. This is absurd and it needs to end.

Specialists should be treated like contractors and consultants. They have special skills but you should be codifying their knowledge and having them train their generalist colleagues. A specialist should be seen as a short-term investment in an area where you lack institutional memory and knowledge.

All software delivery organisations should practice rotation. Consider it a Chaos Monkey for your human processes.

Rotation puts things like onboarding processes to the test. It also brings new eyes to the solution and software design of the team. If something is simple it should make sense, and be simple, to a newcomer, not just to someone who has been on the team for months.

Rotation applies within teams too. Don’t give functionality to the person who can deliver it the fastest, give it to the person who would struggle to deliver it. Then force the rest of the team to support that person. Make them see the weaknesses in what they’ve created.

Value generalists and go out of your way to create them.

Standard
Web Applications, Work

Why don’t online publishers use https?

Why don’t big publishers use https instead of http? The discussion comes up every three to six months at the Guardian, and there seems to be no technical barrier to doing it. There has been a lot of talk about where the secure termination happens and how to get certificates onto the CDN, but there seem to be good answers to all the good questions. There don’t seem to be any major blockers, or even major disadvantages in terms of network resources.

So why doesn’t it happen? Well public content publishers are dependent for the most part on advertising and online advertising is a total mess.

Broken and misconfigured advertising is a major source of issues, and the worst aspect of the situation is that you really don’t have much control over what is happening. When you call out to the ad server you essentially yield control to whatever the ad server is going to do.

Now your first-level campaigns, the stuff that is in-house, premium or bespoke, are usually designed to run well on the site, and issues with them are often easy to fix because you can talk to your in-house advertising operations team.

However on a high-volume site this is a tiny fraction of the advertising you run, because in practice you tend to have a much larger inventory (capacity to serve ads) than you can sell. That is generally because the supply of online advertising massively outstrips demand.

The discrepancy is made good via ad exchanges, which are really clever pieces of technology that try to find the best available price for both publisher and ad buyer. Essentially the ad exchanges try to establish a spot price for an available ad slot amongst all the campaigns the buyers have set up.

However you have virtually no say over the format of the advert the exchange is going to serve up. The bundle of content that makes up the ad is called the “creative”; it might be a simple image, but it is more likely a script or iframe that is going to load the actual advert and run personalisation and tracking systems.

You have no real control over what the creatives are. They certainly haven’t been written with your site in mind, and security is most probably a minimal concern compared to gathering marketing information on your visitors.

So if the creative breaks any security rule, or includes any resource that is not also served over https, you get a security exception on the site. The customer then blames you for being insecure.
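As a rough illustration, the best a publisher can do up front is a naive static scan of a creative before serving it on a secure page; anything the creative’s scripts load dynamically escapes a check like this, which is rather the point. This is a sketch, not something we actually ran:

```python
# Naive mixed-content check: flag any sub-resource in a creative's
# markup that is loaded over plain http. Dynamically injected
# resources will not be caught by a static scan like this.
import re

INSECURE_SRC = re.compile(r'''(?:src|href)\s*=\s*["']http://''', re.IGNORECASE)

def has_mixed_content(creative_html):
    """Return True if the markup references an http:// sub-resource."""
    return bool(INSECURE_SRC.search(creative_html))

ok = has_mixed_content('<img src="https://cdn.example.com/ad.png">')
bad = has_mixed_content("<script src='http://tracker.example.com/t.js'></script>")
```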

One of our consumer products, which do all run under https, ran ads, and every other month this issue would come up. In the end we decided that the value of the subscription was worth more than the value of any advertising that was undermining the image of being secure and reliable, so we took the advertising off.

And therefore until agencies and ad exchanges change their policies so that ads are only served over https, this situation is unlikely to change. Ironically there is no reason for ads not to be served over https: they don’t want to be cached, and they want to do lots of transactional stuff with the client anyway.

If the online advertising business went secure-only then online publishers would be able to follow them. Until then public pages are likely to remain on http.

Standard
Work

The gold-plated donkey cart

I'm not sure if he came up with the term, but I'm going to credit this idea to James Lewis, who used it in relation to a very stuck client we were working with at ThoughtWorks.

The gold-plated donkey cart is a software delivery anti-pattern in which a team ceases to make step changes to its product and instead undergoes cycles of redevelopment of the same product, making ever more complex and rich iterations of the same feature set.

So a team creates a product, the donkey cart, and it's great because you can move big heavy things around in it. After a while, though, you're used to the donkey cart and you're wondering if things couldn't be better. So the team gets together and realises that they could add suspension so the ride is not so bumpy, get some padded seats, and maybe give the cart an awning and some posts with rings and hooks to make it easier to lash on loads. Donkey cart v2 is definitely better, and it is easier and more comfortable to use, so you wonder: could it be better yet?

So the team gets back together and decides that this time they are going to build the ultimate donkey cart. It's going to be leather and velvet trim, carbon fibre to reduce weight, a modular racking system with extendible plates for cargo. The reins are going to have gold medallions, it's going to be awesome.

But it is still going to be a donkey cart and not the small crappy diesel truck that is going to be the first step on making donkey carts irrelevant.

The gold-plated donkey cart is actually a variant on both the Iron Law of Oligarchy and the Innovator's Dilemma.

The donkey cart is part of the valuable series of incremental improvements that consists of most business as usual. Making a better donkey cart makes real improvements for customers and users.

The donkey cart team is also there to create donkey carts. That's what they think and talk about all the time. It is almost churlish to say that they are really the cargo transport team because probably no-one has ever expressed their purpose or mission in those terms because no-one else has thought of the diesel truck either.

Finally, any group of people brought together for a purpose will never voluntarily disband itself. Instead they will find new avenues of donkey cart research to pursue, and there will be the donkey cart conference circuit to be part of. The need for new donkey cart requirements will be self-evident to the team, as will the need for more people and time to make the next donkey cart breakthrough before one of their peers does.

Standard