Web Applications

Reviving RSS

Google’s announcement of the end of Reader created all kinds of interesting consequences. It gave a sense of the scale at which Google now prefers to operate. As people migrated away from Reader they were literally bringing the alternative services down with the volume of demand they created.

For me personally it prompted me to think about RSS for the first time in quite a while. I had a Reader account and the accompanying Google app, but in reality I only really looked at it when I was bored. Given all the excitement and information flying around about alternative products, I thought I would have a look at what was on offer.

The two I seriously kicked the tires on were Skimr and NewsBlur; I also looked at Feedly, but as I am more mobile web than mobile apps I wasn’t that taken with the pitch. I was also swayed by a NewsBlur blog post that pointed out that moving from one freemium service to another wasn’t exactly solving the problem, whereas an open source subscription model was more likely to avoid history repeating itself. Skimr was an interesting experiment, and for things like Reddit and Hacker News, where there isn’t really any body to the posts, it was as good as any other alternative. However I realised that for blogs and news sites I didn’t really want to read a summary, particularly as news sites frequently truncate the content in the RSS feed anyway.

NewsBlur is heavy on the client side and has put its hands up to scaling issues, but initially it was clunky and slow. I dared not run it on any browser other than Chrome due to its pig-like hogging of browser resources. However, things have got better and the extremely rich interface has become more bearable, although there are still fundamental annoyances like hijacking right-click. Initial features that I didn’t like very much, such as site previewing, are actually useful in practice, and the product feels like it is going somewhere.

The most interesting thing about the exercise was actually re-engaging with RSS generally. I had been relying on skimming Twitter and Reddit to catch up on all the key issues; it works, and it isn’t a bad strategy for dealing with information overload. However, as I started to subscribe to blogs from friends, or on the basis of enjoying a piece recommended socially, I started to enjoy that feeling of spontaneity. It turned out that my friends were posting more than I thought, and that in some areas, such as science, posting rates are slow but the quality is high, so subscribing was a sensible way of catching up.

Some sites also turned out to be doing a terrible job of presenting their content, and RSS actually revealed more pieces that I was interested in. Take Review31, whose feed is interesting and also very different from its front page (not intentionally, I would imagine).

In terms of the value of a newsfeed, I realised that I should have implemented RSS feeds (global and per-user) for Wazoku’s Idea Spotlight product. At the time I was obsessed with the fact that, as an app requiring authentication, there wasn’t a good fit between the idea of a public feed of data and a closed private app. In retrospect I should have seen RSS as a robust way of capturing an activity feed and allowing a user to browse it. As a machine-parsable format it would have made it easy to generate catch-up pages; it is kind of irrelevant whether the feed is public or not. It feels good to see this sudden rebirth of interest and activity around RSS, and it shows that change is often something we need rather than want.

Web Applications

Give Draft a go

Draft is a terrific new service that I’ve been using for a while. Imagine Dillinger, but with documents stored in the cloud, the clutter-free aesthetic of Svbtle, and a lot of helpful additional utilities such as a dynamic word count. It is a really simple idea that in some ways has you kicking yourself for not having thought of it yourself.

I’m using it for a mix of purposes: partly replacing Google Docs where what I ultimately want is to generate clean HTML, and partly to provide a drafting facility for products that don’t include one (Posthaven and Google Sites, for example). It is also handy simply as a document drafter, rather than having to install an app like Markdown Editor or UberWriter on various machines.

The service also offers the ability to collaborate with others on draft documents, which is something I’d like to try, as having to discuss other people’s writing by passing drafts back and forth over email is painful. So I’m encouraging people to jump on the service and give it a go.

Clojure, Programming, Web Applications

A batteries included Clojure web stack

Inspired by the developer experience of the Play framework, as well as that of Django and Ruby on Rails, I’ve been giving some thought to what a “batteries included” experience might be for Clojure web development. Unlike Pedestal, which focuses on keeping Lispers happy and writing Lisp as much as possible, I’m approaching this from the point of view of what would be attractive to frontend developers who currently choose between things like Rails, Sinatra or Express.

First let’s focus on what we already have. Leiningen 2 gives us the ability to create application templates that define the necessary dependencies and directory structures, as well as providing an excellent REPL. This should allow us to build a suitable application with a single command. The Compojure plugin already does a lot of the setup necessary to quickstart an application: it downloads dependencies and fires up a server that auto-reloads as the application changes.

The big gap, though, is that the plugin creates a very bare-bones application structure: useful for generating text on the web but not much else. To be able to create a basic (but conventional) web app, I think we need some standard things, like a templating system that works with conventional HTML templates, and support for generating and consuming JSON.
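
To make that concrete, here is roughly what the current quickstart gives you (a sketch from memory, assuming the standard Compojure Leiningen template and the lein-ring plugin; "hello-web" is just a placeholder name):

    ;; lein new compojure hello-web   -- generate the project skeleton
    ;; lein ring server               -- fetch dependencies, start an auto-reloading dev server
    ;;
    ;; The generated handler is text-on-the-web and little more:
    (ns hello-web.handler
      (:require [compojure.core :refer [defroutes GET]]
                [compojure.route :as route]))

    (defroutes app
      (GET "/" [] "Hello World")
      (route/not-found "Not Found"))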

Based on my experience and people’s feedback, I think it would be worth basing our package on the Mustache templating language via Clostache, and using Cheshire to generate and parse the JSON (I like data.json’s lack of dependencies, but this is web programming for hackers, so we should favour what hackers want to use).
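
As a flavour of that pairing, a minimal sketch (the namespace, template string and data here are invented for illustration):

    (ns example.views
      (:require [clostache.parser :as tmpl]
                [cheshire.core :as json]))

    ;; Render a conventional Mustache/HTML template against a data map.
    (tmpl/render "<h1>Hello, {{name}}!</h1>" {:name "World"})
    ;;=> "<h1>Hello, World!</h1>"

    ;; Generate JSON from Clojure data and parse it back;
    ;; the trailing true keywordises the map keys.
    (json/generate-string {:title "Reviving RSS" :words 850})
    (json/parse-string "{\"title\":\"Reviving RSS\"}" true)
    ;;=> {:title "Reviving RSS"}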

I also think we need to set up some basic static resources within the app, like Modernizr and jQuery. A simple, plain skin might also be a good idea, unless we can offer a few variations within the plugin, such as Bootstrap and Foundation, which would be even better.

Supporting a datastore is probably too hard at the moment due to the lack of consensus about what a good all-round database is. However, I think it would be sensible to offer some instructions on how to back the app with Postgres, Redis or MongoDB.

I would include Friend by default to make authentication easy, and because it’s difficult to do much interesting stuff without introducing some concept of a user. However, I think it is important that by default the stack is essentially stateless, so authentication needs to be cookie-based by default, with an easy way of switching between persistence schemes such as memory and memcache.
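
Wiring Friend in might look something like this (a sketch, assuming Friend’s interactive-form workflow and bcrypt credential function; the in-memory user map and the trivial handler are stand-ins for real ones):

    (ns example.auth
      (:require [cemerick.friend :as friend]
                [cemerick.friend.workflows :as workflows]
                [cemerick.friend.credentials :as creds]))

    ;; Toy in-memory user store; a real app would back this with a database.
    (def users {"demo" {:username "demo"
                        :password (creds/hash-bcrypt "password")
                        :roles    #{::user}}})

    ;; Placeholder for the Ring handler being secured.
    (defn app [request]
      {:status 200 :headers {"Content-Type" "text/html"} :body "Hello"})

    (def secured-app
      (friend/authenticate
        app
        {:credential-fn (partial creds/bcrypt-credential-fn users)
         :workflows     [(workflows/interactive-form)]}))

Friend keeps the identity in the Ring session, so backing the session with Ring’s cookie store would keep this stateless by default.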

Since web apps often spend a lot of time consuming other web services, I would include clj-http by default as well. Simple caching that can be backed by memcache also seems important, since wrapping spymemcached is painful and the current Clojure wrappers over it don’t seem to work well with the environment constraints of cloud platforms like Heroku.
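
One way to sketch the simple-caching part is clj-http plus core.cache’s in-memory TTL cache (my choice for illustration; a memcache-backed store would ideally slot in behind the same function):

    (ns example.client
      (:require [clj-http.client :as http]
                [clojure.core.cache :as cache]))

    ;; Ten-minute TTL cache, keyed by URL.
    (def ttl (atom (cache/ttl-cache-factory {} :ttl (* 10 60 1000))))

    (defn cached-get
      "Fetch a JSON resource, answering from the cache while the entry is fresh."
      [url]
      (if (cache/has? @ttl url)
        (cache/lookup @ttl url)
        (let [body (:body (http/get url {:as :json}))]
          (swap! ttl cache/miss url body)
          body)))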

A more difficult requirement would be asset pipelining. I think by default the application should be capable of compiling and serving LESS and CoffeeScript, with reloading, for development purposes. However, ideally during deployment we want to extract all our static resources and output the final compiled versions for serving out of a static handler, or alternatively a static resource host. I hate asset fingerprinting due to the ugliness it introduces into URLs and would prefer an ETag solution, but fingerprinting is going to work with everything under the sun, so I think it should be the default, with an option to use ETags as an alternative.
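
For reference, the ETag alternative is only a few lines of Ring middleware (a naive sketch that only hashes string bodies; fingerprinting would instead happen in the deployment build step):

    (ns example.etag
      (:import java.security.MessageDigest))

    (defn- md5 [^String s]
      (->> (.digest (MessageDigest/getInstance "MD5") (.getBytes s "UTF-8"))
           (map #(format "%02x" %))
           (apply str)))

    (defn wrap-etag
      "Tag string response bodies with an ETag and answer conditional
      requests with 304 Not Modified."
      [handler]
      (fn [request]
        (let [response (handler request)
              body     (:body response)]
          (if (string? body)
            (let [etag (md5 body)]
              (if (= etag (get-in request [:headers "if-none-match"]))
                {:status 304 :headers {"ETag" etag} :body ""}
                (assoc-in response [:headers "ETag"] etag)))
            response))))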

If there was a lein plugin that allowed me to create an application like this with one command I would say that we’re starting to have a credible web development platform.

Web Applications, Work

Guardian May 2013 Hackday

You can see the reportage in these two liveblogs: Day 1 and Day 2 (note the terrible naming conventions). The theme of the hackday was “growth”. For the most part I took the theme to mean growth hacking and I did a lot of work along those lines which is difficult to talk publicly about.

However, my prior lunchtime hacks had revealed to me that one of the fundamental problems the Guardian has is the volume of content it produces. This is not inherently a bad thing, but the key thing to understand is that there is vastly more content than can fit onto what are called “fronts” in the jargon. A front is something like the front page of the site or the Environment section. These fronts drive a lot of traffic to content, and for regular readers they are the essential navigation tool for the Guardian’s content.

Therefore I was interested in how we might consider the dimension of time and perhaps use it to our advantage to help present content. This aspect of my hackday work is more open, because I actually need a lot of help to understand it, and because I’ve made some effort to try and use the public Content API rather than our internal content.

I called this work the “Time Trilogy” because it consists of three web apps that each use time as a way of accessing Guardian content.

The three apps are Guardian Word Count, TickTickTick and Guardian In Review. Word Count was the original, and gives you a sense of the challenge of navigating the content; it is also pretty fun to watch during the day and see the words tick up. The Word Count then spawned the other two. TickTickTick is really a daily content explorer, and was the first tool I needed to start sorting and exploring the breakdown of what we produce; at its heart it is a tool for exploring the daily news cycle. In Review is slightly different: it takes the one hundred most popular pieces of content from the last seven days and renders them. Initially I wanted it to be a kind of automatically generated magazine, but looking at what people actually liked meant that I couldn’t make my initial idea work. People really like videos of meteors and Russian car crashes. What it is now is a way to explore material in the medium term: content that has perhaps left the news cycle but is still relevant.

Neither of the newer apps is really finished; the way I work, I am very reliant on having working software to understand what I am doing and what is wrong or right about my approach. TickTickTick is much closer to being a complete product than In Review, and it is providing more insight into the nature of the content being produced. For example, there is a massive cluster of material between three and five minutes long.

I am going to continue to work on the apps because they help give me feedback on my work, and ultimately these prototypes and toys tend to graduate into working components or theory on the main site itself. I may blog a bit more about them individually as I move them closer to something that genuinely creates value. I’m curious about feedback, but acting on it is limited by my aims for the apps and, realistically, the time I have available.

I also wanted to talk a little bit about how I was working this hackday, because I decided to reject advice and work solo rather than as part of a team (although I did a little bit of backseat driving on the online magazines product, and I did come up with the idea that actually won the hackday, which will hopefully be implemented and be awesome). Working alone does mean that your creations are going to be quite rough, but it helps you cover a lot of ground; I ended up doing five hacks and working on a total of seven. Working with other people means communicating well, whereas solo you just need to express what you want very quickly.

My preferred tool for these kinds of hacks is Python on App Engine, which is what I use for my lunchtime hacks and for which I have a standard application template. With each new application I can move more of the common patterns into the template. To avoid having to faff around with testing, I use a loosely functional paradigm that I’ve carried over from Wazoku. It generally works quite well, but there are a lot of rules to doing it.

This time around I was doing a bit more frontend work than my day job requires, because I was working solo. Again, having the startup experience was useful, because I was more rediscovering a skillset than learning it. Hacking also means selecting your platform with optimal output in mind.

For that reason I only targeted Firefox and Chrome (Firefox was actually easier to develop for in terms of standards), and I made liberal use of client-side LESS and CoffeeScript. I was impressed with how good the error handling was in both; an obscure bug can wipe out all the productivity gains of a higher-order language, but both worked great for me.

On top of that I tried experimenting with the new departmental standard of SMACSS (or at least my cherry-picking of it) and I made a lot of use of both Knockout and Bacon.js.

When I say I made use of SMACSS, essentially what I did was namespace my classes to produce simple selectors. This did get me out of a problem I had in In Review, so while it is truly the ugliest CSS standard, and I suspect in time we may come to hate its rejection of rich functionality, I concede that it is effective. Expect to see some of it applied to the main website sometime soon.

Knockout isn’t that popular in the department due to performance issues at a certain level of complexity, but for me it did a brilliant job of simply syncing the visual DOM to the data feeds. I was really happy with it. Other people were using AngularJS for more dynamic applications, but they also had a lot more code than I did, and again, working solo, less is so much more.

Bacon.js was really interesting. A lot of my approach to Javascript is functional and event-based, but so far the events have been wired up manually via jQuery. Bacon made it easier to create event sources with generic handlers, and I probably didn’t use 10% of its features. I’m curious to see what the rest of the department thinks of it, but for my hacks it has definitely earned a place.

It was nice to do something outside the run of normal work and one thing that is quite cool about the hackday is that you can use it to tackle a technology that is entirely new to you and not have to worry about whether you succeed or fail.

Next time (May, I believe) I think I want to learn about browser plugins, as this is a way of producing better functionality for the Guardian without the hassle of having to make it work for the general population of browsers. Some people’s hacks this time around could have been released to the app/plugin stores, and we could have been getting valuable user feedback by now.

Web Applications, Work

The myth of “published” content

Working at the Guardian you often end up having conversations with people about the challenges you face in scaling to meet the often spiky traffic you get in online media. One idea that comes up again and again is that content, once published, is essentially static. There is a lot to be said for this, as digital journalism sticks pretty close to a lot of the conventions of print media: copy is often culled from the print version and follows the 24-hour media cycle quite strongly.

What is often surprising, however, is the number of edits a piece of content receives, particularly if it is not a print feature article. The initial version of an article is often just the mandatory information and a few paragraphs, sufficient to get across the basic story. It then goes through a number of revisions, which often happen while the article is a draft. Often, but not always.

Once the article gets published online, though, it triggers a new wave of edits as the language gets cleaned up and readers, editors and lawyers all descend on it. Editors now have a lot more tools to see how the audience is reacting to a piece of content and how it is playing in social media. You also have articles picked up externally, which means making sure the article works as a landing page.

Naturally, stories often develop their own momentum, requiring you to switch from a single piece to a set of stories that approach different aspects of the overall reporting. You then need to link the different pieces of content together to form a logical package of content.

One interesting thing is looking at how many articles are changed after seven days. It is a surprisingly large number, as new stories often create a need for historical context, and older stories can look dusty in the light of breaking events. We have also had strange things happen with social news, where aggregating sites pick up some story that was overlooked at the time.

All of this means that you cannot naively treat content as static. Instead you have an interesting decaching problem: it is true that content doesn’t change much, until it does start changing, and then the changes need to be reflected reasonably rapidly if you want to be picked up by things like Google.


Web Applications

The web is a graph

Last week I gave a talk on how I have been creating web applications that very lightly wrap an underlying graph to provide not just content for a page but also the workflow and state of the user’s current interaction with the application.

As part of the talk I created two demo apps that are available on Heroku. Crumbly Castle is inspired by Demon’s Souls and Dark Souls, and allows you to explore a castle that is populated by the ghosts of everyone who has ever played it. The other offers a questionnaire system that generates characters in the style of the Elder Scrolls or Fallout games. The code for the applications is on Github, so you can fork it and deploy it for yourself. Both use the hosted Neo4j addon for Heroku, which provides hassle-free hosting but is currently only available to beta program members.

You can obviously use both on your local machine.

Both of the demos are metaphors for more serious kinds of enterprise applications but I think it is often easier to produce prototypes or demos that are based on immediately engaging concepts. It certainly helps to have something that the audience can play with during the talk!

So briefly I just wanted to summarise the points I try to make during the talk, and explain why you might want to look at using a graph as your web application store. My major point is that web application development is usually page-centric: when you hit a page, the controller tends to examine the whole state of the application to find out why you came to it. Are you logged in? Were you trying to look at something? Is there a session associated with you?

I posit that we should instead be looking at the journeys between the pages as the interesting thing. Given where you are in the journey graph, where can you go next? Essentially I am taking the same logic that a state machine or rule engine uses and instead expressing it as relationships in a graph.

The most common trick the applications use is to assign a fixed URL to a user session, identifying a node in the graph. With each transition I change the relationships that node has to other data, based on the user’s actions, and then simply send a redirect back to the fixed URL, which then renders a different result based on the new state of the node.
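
In Compojure-style code the pattern looks something like this (a sketch only: find-node, apply-action! and render-page are hypothetical stand-ins for real Neo4j client calls and templating, not the actual demo code):

    (ns example.journey
      (:require [compojure.core :refer [defroutes GET POST]]
                [ring.util.response :as resp]))

    ;; Hypothetical helpers standing in for graph queries and rendering.
    (declare find-node apply-action! render-page)

    (defroutes journey
      ;; The fixed URL: render whatever the session node currently points at.
      (GET "/session/:id" [id]
        (let [node (find-node id)]
          (render-page (:current-page node) node)))
      ;; A transition: rewire the node's relationships, then bounce back
      ;; to the fixed URL, which now renders the new state.
      (POST "/session/:id/action" [id action]
        (apply-action! id action)
        (resp/redirect (str "/session/" id))))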

This means that the web application becomes very simple to write: the controller simply has to select the template and the related nodes needed to generate links and actions.

I think it is a really interesting approach, and a natural fit for simplifying a lot of session-state-heavy apps.
