Programming, Web Applications

AngularJS migration: PhantomJS and Angular Mocks

I have recently been upgrading a project from Angular 1.3 to 1.5 in an attempt to get the majority of our projects to a state where a migration to Angular 2 might be more likely.

The upgrade from 1.4 to 1.5 was for the most part entirely painless as the migration notes had promised. The application built and ran and none of our code seemed to be relying on any of the breaking behaviour between the versions.

There was just one problem: all our tests were failing. All the mocks were coming back as undefined, with an obscure error URL that didn’t really help, as the advice it gave was about implementing a provider, which applied to none of the mock setup that was happening in the code.

It took a bit of Googling around the problem (hence this blog post, to try and improve the situation) to find a related issue on Github that finally clued me in to the solution: we needed to update the Karma PhantomJS runner and, more crucially, the version of PhantomJS we were using.

As far as I can tell, switching Karma to use PhantomJS 2 is a good idea irrespective of what version of Angular you are using, so it would probably be sensible to do this before you start updating Angular itself.
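
For reference, the change boiled down to something like the following Karma config and dev dependencies. This is an illustrative sketch rather than our exact setup; version numbers, frameworks and file lists will vary by project.

    // karma.conf.js – illustrative sketch, not our exact config.
    // The key change is moving to karma-phantomjs-launcher 1.x, which pulls in
    // PhantomJS 2 via the phantomjs-prebuilt package:
    //   npm install --save-dev karma-phantomjs-launcher phantomjs-prebuilt
    module.exports = function (config) {
      config.set({
        browsers: ['PhantomJS'],
        // frameworks, files, preprocessors etc. stay as they were
      });
    };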

Standard
Blogging, Programming, Web Applications

An overview of Javascript reactive frameworks

This post is only meant to be a snapshot of the current state of the various DOM-virtualising web frameworks that are around. I’m partly publishing it to try and discover more that I may not be aware of.

Many of these frameworks trace an ancestry back to Om and React. However each one tries to deal with perceived problems with the original frameworks. The most common being that React is too heavy and opinionated while not providing a consistent data model for components. Om on the other hand is in Clojurescript and therefore represents too much to learn in terms of a new language and build process.

Libraries

Most of the libraries build on a few common building blocks that I’m not going to elaborate on here. Virtualdom was an early attempt to separate the core idea of React from the rest of the library code. Virtualdom is only concerned with creating, manipulating and stringifying DOM structures in memory. Browser DOM APIs involve linking to the actual rendered document, so managing a virtual DOM is more efficient and simpler because you are not interacting with that underlying rendering machinery.
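
To make that concrete, working with the Virtualdom library directly looks roughly like this (a minimal sketch, not taken from any of the frameworks below):

    var h = require('virtual-dom/h');
    var diff = require('virtual-dom/diff');
    var patch = require('virtual-dom/patch');
    var createElement = require('virtual-dom/create-element');

    // Describe the DOM as plain data and render it once...
    function render(count) {
      return h('div', {className: 'counter'}, ['Count: ' + count]);
    }

    var tree = render(0);
    var rootNode = createElement(tree);
    document.body.appendChild(rootNode);

    // ...then on each change diff the old and new trees and apply the
    // minimal set of patches to the real document.
    var newTree = render(1);
    rootNode = patch(rootNode, diff(tree, newTree));
    tree = newTree;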

ImmutableJS provides a Javascript-idiom interpretation of the Clojure data structures that Om uses (and which are available as the standalone library Mori).
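
A quick flavour of the ImmutableJS idiom: every “mutation” returns a new value and the original is left untouched.

    var Immutable = require('immutable');

    var state = Immutable.Map({user: 'anne', unread: 3});
    var next = state.set('unread', 0);   // returns a new map

    state.get('unread');   // 3 – the original is unchanged
    next.get('unread');    // 0
    // Unchanged keys are shared structurally, so copies are cheap.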

Omniscient

The first interesting framework to discuss is Omniscient, which as its name suggests is heavily influenced by Om but is written in Javascript and therefore does not require you to learn Clojure to use the same techniques that Om uses. Omniscient is built on top of React and ImmutableJS and uses its own library Immstruct to add reference cursors to ImmutableJS structures. Reference cursors allow a component to observe and change sections of a data structure without having to manipulate the whole thing. So for example a component can be given a single sub-key in an object that represents its state and it cannot access or change anything that is not under that key. The code can also be simplified to behave as if the sub-key was actually just the whole data object.
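
A rough sketch of the cursor idea, assuming Immstruct’s API (the actual component rendering is elided):

    var immstruct = require('immstruct');

    // One structure holds the whole application state...
    var structure = immstruct({profile: {name: 'Anne'}, feed: {items: []}});

    // ...but a component is only handed the sub-tree it owns.
    var profile = structure.cursor(['profile']);
    profile.get('name');                  // 'Anne'
    profile = profile.set('name', 'Bob'); // updates only this branch

    // Omniscient listens for the resulting swap and re-renders from the root.
    structure.on('swap', function () { /* render(structure.cursor()) */ });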

Omniscient doesn’t suggest an alternative to Om’s CSP, instead providing a mechanism for passing event flow functions down the component tree. You’re free to choose your own event libraries. It also means that you’re free to make your own mistakes here as no guidance is really given as to how to structure your event scheme appropriately.

Omniscient is one of the earliest frameworks to re-implement Om and therefore has one of the better sets of documentation on its Github pages. That said there’s not a lot of documentation and the framework does not have a massive community. The situation is worse in most of the other frameworks though so this might tip you over in favour of Omniscient.

Ractive

This is a bit of a Guardian shout out as the primary developer Rich Harris is a Guardian interactive developer.

Ractive (Github) is a little bit different from the other frameworks as you can essentially think of it as Mustache templates backed by observables. You declare a data binding and write templates in normal Mustache syntax, but behind the scenes Ractive is driven by changes in the data and writes new sections of DOM in memory according to what has changed, rather than DOM diffing.
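
A minimal sketch of the style (the element and data here are invented for illustration):

    var ractive = new Ractive({
      el: '#target',
      template: '<p>Hello, {{name}}! You have {{unread}} unread messages.</p>',
      data: {name: 'world', unread: 3}
    });

    // Ractive observes the data and rewrites only the affected parts of the DOM.
    ractive.set('unread', 0);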

Also, Ractive sticks with two-way data binding rather than unidirectional data flow, so failures in synchronisation or rendering can be problematic.

If what you want to do is render content over a Javascript data model then there is a lot in Ractive that is very compelling. It uses templates with a standard, well-understood syntax and is a soup-to-nuts framework that sticks to core Javascript syntax and features. However if you want to use your own event or data model you are out of luck.

Mercury

Mercury on the other hand prides itself on modularity. A microframework, it attempts to create a glue layer that allows other libraries to interact in a sensible and consistent way. The default components are Virtualdom and its own observer pattern for wrapping state.

Mercury’s biggest problem right now is its lack of documentation. There is an expectation that you are going to read the source code to understand what the framework is doing and how to interact with its API. I frankly think this is unrealistic; the project doesn’t currently supply the incentive to do that. Unless you have a very particular desire to avoid any framework lock-in, or you want to use a very specific combination of libraries that is not supported elsewhere, it’s hard to understand why you would invest your effort here rather than in frameworks that offer more support.

Cycle

Cycle is similarly experimental. Its biggest claim is that it is truly reactive and that the rendered page is purely the result of changes in state. The introduction is couched in computer-science theory, but it would seem that at its heart Cycle wraps RxJS and Virtualdom in a glue layer that has the programmer writing the transform sequence between the event and the DOM structure.

I think it is a positive feature that Cycle re-uses a popular library to manage its state transitions rather than implementing yet another custom version of the Observable pattern. It also makes the framework easier to get started with if you are already familiar with Rx.

Using established libraries also makes the lack of documentation more acceptable as the Cycle readme only needs to explain how the glue works in the framework.

As something built on reactivity you have to get used to dealing with intermediate state, which can be a bit difficult for the beginner.

Essentially any event where the user would expect feedback means you need to write the conditional structure into the output. So if the user types a character in an input box then you need to write the value of the input box to be the characters the user has typed so far. Most frameworks work at a higher level of abstraction, or rather they map closer to the DOM APIs, so getting a working application means grokking the way the dataflow works.
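
A sketch of that input-box case, loosely following Cycle’s hello-world example of the moment (so the API details may drift):

    var Cycle = require('@cycle/core');
    var CycleDOM = require('@cycle/dom');
    var h = CycleDOM.h;

    function main(sources) {
      // The page is a pure function of the stream of input events, so the
      // current value of the field has to be written back out explicitly.
      var name$ = sources.DOM.select('.name').events('input')
        .map(function (ev) { return ev.target.value; })
        .startWith('');

      var vtree$ = name$.map(function (name) {
        return h('div', [
          h('input.name', {attributes: {type: 'text'}}),
          h('p', 'Hello ' + name)
        ]);
      });

      return {DOM: vtree$};
    }

    Cycle.run(main, {DOM: CycleDOM.makeDOMDriver('#app')});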

If you’re looking for purity (and a resulting simplicity in implementation) but not to have to learn a bespoke API, Cycle is nicely positioned.

WebRx

WebRx is similarly built on top of RxJS Observables but is a much fuller-fat framework, much more a spiritual successor to Knockout than something owing much to the influence of Om or React.

Rather like React, WebRx doesn’t really provide generalised event handling but instead has special-sauce bindings for DOM events and a MessageBus system built over Rx.

It is also written in Typescript and generally looks to play well within the Microsoft ecosystem. It’s interesting to me as an example of how different a language has to be before it’s regarded as a barrier. Clearly the use of Typescript means there are people who will refuse to use the framework regardless of whether it works for their use case. Other people are going to be attracted exactly because it uses Typescript.

Deku

Language choices are also interesting in Deku, which is another attempt to re-implement React in a superficial way.

Deku makes use of ES6 and ES7 features and doesn’t aim to support a broad range of browsers (unlike, say, Ractive). Again that is going to rule it out for some people, but this is more interesting as now we are within dialects of the same core language. Language choice for implementing frameworks is not straightforward. What are you looking for? Conciseness? Editor support?

Deku aims to take the DOM-diffing approach but avoid getting caught up in React’s framework and approach. In particular components are defined just as Javascript objects rather than classes and instances, something I think makes it more elegant than normal React components.

It does however still use JSX which is quite interesting as the framework claims to be taking a functional approach but actually uses a DSL for all its DOM construction.

The lifecycle hooks are slightly different with more hooks for different stages of the process and Deku uses some interesting function passing to send changed data down the tree to components.

Deku doesn’t take much influence from Om though. It doesn’t have sophisticated event handling and uses mutable data with generous access and callbacks on data write to do re-renders. This means bugs and state issues are no less likely to happen than with any other framework. It does adopt the single atom idea with a single tree representing the app and the app renderer being bound to the body element.

As such, if you like the idea of React but don’t want to be bound to its concept of how a component should be defined, but do like JSX and trust the implementors to create a better DOM diff than Facebook or Virtualdom, this is the project for you.

Conclusion

I’ve only chosen a handful of frameworks to look at here, mostly based on the ones I know; I’m expecting people to point out more in the comments. I also haven’t used all of these frameworks. Road-testing all of them would be a bigger task than just trying to describe the design choices they’ve made.

The most common pattern is to try and improve rendering time versus React by using different virtual DOM diffing algorithms. Usually this is combined with observed variables that provide a reactive component, allowing changes in the data model to be conveyed to the DOM with no coding required.

Few of the frameworks engage with the functional reactive programming paradigm by building abstract event streams or indeed any abstraction over discrete events.

The idea that the app should be a single data structure that represents the whole page seems to be gaining significant traction with several of the frameworks recommending this as an approach.

The explosion of frameworks resulting from the release of React is, I think, a positive thing. Initially it seems really daunting that you have all these choices, but when you look at the real level of difference between them you can see that they are actually quite tightly clustered around a few common, core ideas, and that mostly they express differences about the concerns a framework should have, which feeds into the wider conversation about micro versus comprehensive frameworks.

Standard
Web Applications, Work

Why don’t online publishers use https?

Why don’t big publishers use https instead of http? The discussion comes up every three to six months at the Guardian and there seems to be no technical barrier to doing it. There has been a lot of talk about where the secure termination happens and how to get certificates onto the CDN, but there seem to be good answers to all the good questions. There don’t seem to be any major blockers or even major disadvantages in terms of network resources.

So why doesn’t it happen? Well public content publishers are dependent for the most part on advertising and online advertising is a total mess.

Broken and misconfigured advertising is a major source of issues, and the worst aspect of the situation is that you really don’t have much control over what is happening. When you call out to the ad server you essentially yield control to whatever the ad server is going to do.

Now your first-level campaigns, the in-house, premium or bespoke campaigns, are usually designed to run well on the site, and issues with them are often easy to fix because you can talk to your in-house advertising operations team.

However in a high-volume site this is a tiny amount of the advertising you run because you tend to have a much larger inventory (capacity to serve ads) in practice than you can sell. That is generally because supply of online advertising massively outstrips demand.

The way the discrepancy is made good is via ad exchanges, which are really clever pieces of technology that try to find the best available price for both publisher and ad buyer. Essentially the ad exchanges try to establish a spot price for an available ad slot amongst all the campaigns the buyers have set up.

However you have virtually no say over the format of the advert the exchange is going to serve up. The bundle of content that makes up the ad is called the “creative” and might be a simple image, but more likely it is a script or iframe that is going to load the actual advert and run personalisation and tracking systems.

You have no real control over what the creatives are. They certainly haven’t been written with your site in mind, and most probably security is a very minimal concern compared to gathering marketing information on your viewers.

So if the creative breaks any security rule or pulls in any resource that is not also https then you get a security exception on the site. The customer then blames you for being insecure.

One of our consumer products (they all run under https) ran ads, and every other month this issue would come up. In the end we decided that the value of the subscription was more than the value of any advertising that was undermining the image of being secure and reliable, so we took the advertising off.

And therefore until agencies and ad exchanges change their policies so that ads are only served over https this situation is unlikely to change. Ironically there is no reason for ads not to be served over https, since they don’t want to be cached and want to do lots of transactional stuff with the client anyway.

If the online advertising business went secure-only then online publishers would be able to follow suit. Until then public pages are likely to remain on http.

Standard
Web Applications

State of the Browser 2014

I haven’t been to State of the Browser before. It is a very cheap, one-day conference held at the weekend on the topic of web standards and the web in general.

Conway Hall, the venue, is a beautiful place and highly recommended. However the grand aura of humanist lectures did remind you how lame most slide-based presentations are. Shut out the light, we can’t see the cat gif!

The theme and topics of the conference are vague and therefore there was a lot of variety in the talks. More than half came from professional vendor advocates, and while slick and enjoyable there was a palpable sense of yearly objectives being ticked off. Community communication, check; reminder of organisation mission, check. The rest of the talks were pretty crappy though, so it’s not all roses in the community either.

I’ve put down a few immediate reaction thoughts but I thought I would try and formulate some general takeaways.

Firstly the meaning of the web is very vague; there was an attempt to formulate the meaning of a “web platform” but it floundered a bit. The difficulty is not really what the web is, which is fundamentally unchanged since its inception, but rather what all the companies are doing when they try to build on and expand the web.

Essentially what do browser vendors talk about when they talk about the web? To them the web is the input that the browser will accept. Microsoft, Mozilla, Opera and Google were all represented along with Telefonica who are making a big bet on Firefox OS.

One key theme was the belief that affordable smartphones (say below £50 to buy and presumably close to £10 a month to run) are imminent and that they will herald a new wave of traffic and content consumption. I feel that broadening on-demand access to the web is a good opportunity, but the value of this audience, beyond hopefully buying data plans that are more expensive than talk minutes and text bundles, was utterly unproven and seemed an issue of no concern to the speakers.

One interesting thing about web development is that it is a place where visual design, technology and content creation collide into one huge grope box orgy where everything gets mixed up with everything else.

The visual design of the web was mentioned more than a few times and a lot of the standards work was essentially about delivering more fidelity to conceptual designs. It’s interesting that this is seen as a fundamentally good thing rather than being interrogated. Perhaps it was discussed in earlier years.

There was also an interesting division in what people saw as their responsibilities. Javascript is now sufficiently complex that there is stratification and specialisation even with this niche. “Glass” people do UX, HTML and CSS, Javascript people do MVC “backend” work and performance and literally no-one is thinking about how the server could make any of this easier.

There was a dispiriting sense from a technology perspective of people hitting everything in sight with a golden hammer made of HTML/CSS/JS. About a fifth of the things discussed on stage boiled down to “a written standard for accessing OS capabilities based on an implementation of that standard”. It makes you appreciate things like Linux where there is pressure to actually tackle root problems and needs rather than layering hack on hack. The acceptance of the diabolical state of touch detection is an example, leading to the suggestion that you should progressively enhance on the detection of mouse events. I mean after all why use a filesystem abstraction when you could just iterate over /dev yourself?

The same paucity of leadership came up on the issue of HTTP 2 where it became clear that the vendors regard it as a way of dealing with the overhead of HTTP connections not really as a way to create the right kind of networking for the new activity we want to perform online.

It was also nice to see not one but two “standards” for defining viewport relative sizes: vw in the viewport spec (which seems very sensible and progressive by the way) and w in the picture/srcset responsive images standard.

There were a few moments when people seemed to touch on a better way of doing things, for example, declarative programmatic rules for layout; but these were rare. Maybe it’s just not that kind of conference.

In terms of talks the clear standout was Martin Beeby’s talk on what the Internet Explorer team have been doing to remove bottlenecks from their rendering. Most of the stuff was sensible and straight-forward but the detail on GPU interaction was fascinating, particularly on picture loading.

One massive problem with the conference was the weird idea that speakers weren’t going to take questions after their talks. Martin mentioned that buffers between the browser and the GPU were small and I would have loved to have known whether that was an intrinsic limitation or not. The lack of ability to follow up on issues diminished the utility of all the talks.

Other than that the walkthroughs of specifications of viewport, service workers (particularly the caching API) and the picture tag were all helpful. Andreas Bovens’s talk also had a helpful review of pixel density and its new related units.

The talks were filmed; I have no idea whether they will be posted at some point, but those are the ones I’d recommend.

The ticket was very cheap but the main issue of the conference was the time it takes. The programming is very baggy; I felt that if all the talks had been halved in length and the panel discussion chopped to make room for post-talk questions, there would have been a really good long afternoon of material.

I’ll probably give it another go next year but be a bit more ruthless about what talks to attend.

Standard
Web Applications

Better than Freemium

The new Kickstarted blogging platform Ghost has an interesting payment model. At the free tier you have full access to the platform but you are allowed zero views of the content you create.

Normally with blogging software you want to encourage as many page views as you can get to help promote your platform. The Ghost approach is an interesting way of dealing with the issue of trying to explain your product and have people try it without resorting to free tiers or advertising-supported freemium.

However it also means that you are making an open-ended commitment to the platform, if you ever stop paying then all your content disappears off the internet.

Posthaven is more appealing because it makes explicit promises about the persistence of your content. On the other hand as a replacement for Posterous it has less need to explain its proposition.

Having encountered the issue of offering free trials at Wazoku I wondered whether what we were really learning was that our product wasn’t simple enough to pitch a minimum subscription.

Making people pay something, no matter how notional, is a more effective way of gathering feedback than the analytics and subjective feedback of free trials.

This pretty cool blog post on removing free plans at Trak.io makes a load of really good points about what kinds of things go wrong with free plans and the freemium model.

I’m not sure what the answer is to people not understanding your product but free trials are not the answer. The ultimate feedback on your product is whether someone will pay for it or not.

Standard
Web Applications

Reviving RSS

Google’s announcement of the end of Reader created all kinds of interesting consequences. It gave a sense of the scale that Google now prefers to operate at. As people migrated away from Reader they were literally bringing alternative services down with the volume of demand being created.

For me personally it meant thinking about RSS for the first time in quite a while. I have a Reader account and the accompanying Google app, but in reality I only really looked at it when I was bored. Given all the excitement and information flying around about alternative products I thought I would have a look at what was on offer.

The two I seriously kicked the tires on were Skimr and NewsBlur; I also looked at feedly, but as I am more mobile web than mobile apps I wasn’t that taken with the pitch. I was also swayed by a NewsBlur blog post that pointed out that moving from freemium to freemium wasn’t exactly solving the problem, whereas an open-source subscription model was more likely to avoid history repeating itself. Skimr was an interesting experiment, and for things like Reddit and Hacker News, where there isn’t really any body to the posts, it was as good as any other alternative. However I realised that for blogs and news sites I didn’t really want to read a summary, particularly as news sites frequently truncate the content in the RSS feed anyway.

NewsBlur seems heavy on the client-side and has put its hands up to scaling issues but initially it was clunky and slow. I dared not run it on any other browser than Chrome due to its pig-like hogging of the browser resources. However things have got better and the extremely rich interface has become more bearable although there are still fundamental annoyances like hijacking right-click. Initial features that I didn’t like very much, such as site previewing, are actually useful in practice and the product feels like it is going somewhere.

The most interesting thing about the exercise was actually re-engaging with RSS generally. I had been relying on things like skimming Twitter and Reddit to catch up on all the key issues; it works and it isn’t a bad strategy for dealing with information overload. However as I started to subscribe to blogs from friends, or even on the basis of enjoying a piece recommended socially, I started to enjoy that feeling of spontaneity. It turned out that my friends were posting more than I thought, and that in some areas such as science posting rates are slow but the quality is high, so subscribing was a sensible way of catching up.

Some sites also turned out to be doing a terrible job of presenting their content, and RSS actually revealed more pieces that I was interested in; take Review31, whose feed is interesting and also very different to its front page (not intentionally, I would imagine).

In terms of the value of a newsfeed, I realised that I should have implemented RSS feeds (global and per user) for Wazoku’s Idea Spotlight product. At the time I was obsessed with the fact that, as an app requiring authentication, there wasn’t a good fit between the idea of a public feed of data and a closed private app. In retrospect I should have seen RSS as a robust way of capturing an activity feed and allowing a user to browse it. As a machine-parsable format it would have made it easy to generate catch-up pages. It is kind of irrelevant whether the feed is public or not. It feels good to see this sudden rebirth of interest and activity in RSS, and it shows that often change is something we need rather than want.

Standard
Web Applications

Give Draft a go

Draft is a terrific new service that I’ve been using for a while. Imagine Dillinger but with documents stored in the cloud, the clutter-free aesthetic influence of Svbtle, and a lot of additional helpful utilities such as a dynamic word count. It is a really simple idea that in some ways has you kicking yourself for not having thought of it yourself.

I’m using it for a mix of purposes: partly replacing Google Docs where what I ultimately want is to generate clean HTML, and partly to provide a drafting facility for products that don’t include one (Posthaven and Google Sites, for example). It is also handy simply as a document drafter, rather than having to install an app like Markdown Editor or UberWriter on various machines.

The service also offers the ability to collaborate with others on draft documents, which is something I’d like to try, as having to discuss other people’s writing by passing drafts back and forth in email is painful. So I’m encouraging people to jump on the service and give it a go.

Standard
Clojure, Programming, Web Applications

A batteries included Clojure web stack

Inspired by the developer experience of the Play framework, as well as that of Django and Ruby on Rails, I’ve been giving some thought to what a “batteries included” experience might be for Clojure web development. Unlike things like Pedestal, which focuses on trying to keep LISPers happy and writing LISP as much as possible, I’m approaching this from the point of view of what would be attractive to frontend developers who choose between things like Rails, Sinatra or Express.

First let’s focus on what we already have. Leiningen 2 gives us the ability to create application templates that define the necessary dependencies and directory structures as well as providing an excellent REPL. This should allow us to build a suitable application with a single command. The Compojure plugin already does a lot of the setup necessary to quickstart an application. It downloads dependencies and fires up a server that auto-reloads as the application changes.

The big gap though is that the plugin creates a very bare bones application structure, useful for generating text on the web but not much else. To be able to create a basic (but conventional) web app I think we need to have some standard things like a templating system that works with conventional HTML templates and support for generating and consuming JSON.

Based on my experience and people’s feedback I think it would be worth basing our package on the Mustache templating language via Clostache and using Cheshire to generate and parse the JSON (I like core.data’s lack of dependencies but this is web programming for hackers so we should favour what hackers want to use).

I also think we need to set up some basic static resources within the app like Modernizr and jQuery. A simple, plain skin might also be a good idea unless we can offer a few variations within the plugin such as Bootstrap and Foundation which would be even better.

Supporting a datastore is probably too hard at the moment due to the lack of consensus about what a good all-round database is. However I think it would be sensible to offer some instructions as to how to back the app with Postgres, Redis and MongoDB.

I would include Friend by default to make authentication easy and because it’s difficult to do that much interesting stuff without introducing some concept of a user. However I think it is important that by default the stack is essentially stateless, so authentication needs to be cookie-based by default, with an easy way of switching between persistence schemes such as memory and memcache.

Since webapps often spend a lot of time consuming other web services I would include clj-http by default as well. Simple caching that can be backed by memcache also seems important since wrapping Spymemcache is painful and the current Clojure wrappers over it don’t seem to work well with the environment constraints of cloud platforms like Heroku.

A more difficult requirement would be asset pipelining. I think by default the application should be capable of compiling and serving LESS and Coffeescript, with reloading, for development purposes. However ideally during deployment we want to extract all our static resources and output the final compiled versions for serving out of a static handler or alternatively a static resource host. I hate asset fingerprinting due to the ugliness it introduces into urls; I would prefer an ETag solution, but fingerprinting is going to work with everything under the sun, so I think it should be the default with an option to use ETags as an alternative.

If there was a lein plugin that allowed me to create an application like this with one command I would say that we’re starting to have a credible web development platform.

Standard
Web Applications, Work

Guardian May 2013 Hackday

You can see the reportage in these two liveblogs: Day 1 and Day 2 (note the terrible naming conventions). The theme of the hackday was “growth”. For the most part I took the theme to mean growth hacking and I did a lot of work along those lines which is difficult to talk publicly about.

However my prior lunchtime hacks had revealed to me that one of the fundamental problems the Guardian has is the volume of content it produces. This is not inherently a bad thing but the key thing to understand is that there is vastly more content than can fit onto what are called “fronts” in the jargon. A front is something like the front page of the site or the Environment section. These fronts produce a lot of traffic to content and for regular readers they are the essential navigation tool for the Guardian’s content.

Therefore I was interested in how we consider the dimension of time and perhaps use it to our advantage to help present content. This aspect of my hackday work is more open, because actually I need a lot of help to understand it and because I’ve made some effort to try and use the public Content API rather than our internal content.

I called this work the “Time Trilogy” because it consists of three web apps that each use time as a way of accessing Guardian content.

The three apps are Guardian Word Count, which was the original and gives you a sense of the challenge of navigating the content. It is also pretty fun to watch during the day and see the words tick up. The Word Count spawned TickTickTick and Guardian In Review. TickTickTick is really a daily content explorer and was the first tool I needed to start sorting and exploring the breakdown of what we produce; at its heart it is a tool for exploring the daily news cycle. In Review is slightly different: it takes the one hundred most popular pieces of content over the last seven days and renders them. Initially I wanted it to be a kind of automatically generated magazine, but actually looking at what people liked meant that I couldn’t make my initial idea work. People really like videos of meteors and Russian car crashes. What it is now is a way to explore material in the medium term, for content that has perhaps left the news cycle but is still relevant.

Neither app is really finished and the way I work is that I am very reliant on having working software to understand what I am doing and what is wrong or right about my approach. TickTickTick is much closer to being a complete product than In Review and it is providing more insight into the nature of the content being produced. For example there is a massive cluster of material between three and five minutes long.

I am going to continue to work on the apps because they help give me feedback into my work and ultimately these prototypes and toys tend to graduate into working components or theory on the main site itself. I may blog a bit more about them individually as I move them closer to something that genuinely creates value. I’m curious about feedback but acting on it is limited by my aims for the apps and realistically the time I have available.

I also wanted to talk a little bit about how I was working this hack day because I decided to reject advice and work solo rather than part of a team (although I did a little bit of backseat driving on the online magazines product and I did come up with the idea that actually won the hackday (and will hopefully be implemented and awesome)). Working alone does mean that your creations are going to be quite rough but it helps cover a lot of ground, I ended up doing five hacks and working on a total of seven. Working with other people means communicating well whereas solo you just need to express what you want very quickly.

My preferred tool for these kinds of hacks is Python on App Engine, which is what I use for my lunchtime hacks and for which I have a standard application template. With each new application that I do I can start to move the common patterns into the template. To avoid having to faff around with testing I use a loosely functional paradigm that I’ve carried over from Wazoku. It generally works quite well but there are a lot of rules to doing it.

This time around I was doing a bit more frontend work than my day job requires because I was working solo. Again having the startup experience was useful because I was more rediscovering a skillset than learning it. Hacks also means selecting your platform and choosing for optimal output.

For that reason I only targeted Firefox and Chrome (Firefox was actually easier to develop for in terms of standards) and I made liberal use of client-side Less and Coffeescript. I was impressed with how good the error-handling was in both. An obscure bug can wipe out all the productivity gains of a higher-order language but both worked great for me.

On top of that I tried experimenting with the new departmental standard of SMACSS (or at least my cherry-picking of it) and I made a lot of use of both Knockout and Bacon.js.

When I say I made use of SMACSS, essentially what I did was namespace my classes to produce simple selectors. This did get me out of a problem I had in In Review, so while it is truly the ugliest CSS standard, and I suspect in time we may come to hate its rejection of rich functionality, I concede that it is effective. Expect to see some of it applied to the main website sometime soon.

Knockout isn’t that popular in the department due to performance issues at a particular level of complexity, but for me it did a brilliant job of simply syncing the visual DOM to the data feeds. I was really happy with it. Other people were using AngularJS for more dynamic applications, but they also had a lot more code than I did, and again, working solo, less is so much more.

Bacon.js was really interesting. A lot of my approach to Javascript is functional and event-based but so far the events have been manually worked via jQuery. Bacon made it easier to create event sources with generic handlers and I probably didn’t use 10% of its full features. I’m curious to see what the rest of the department thinks of it but for my hacks it has definitely earned a place.
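
To give a flavour of how the two worked together in my hacks (the element ids and endpoint here are made up): Knockout keeps the DOM in sync with an observable, while Bacon merges clicks and a timer into one stream that feeds it.

    // Knockout: any element bound with data-bind="text: wordCount" tracks this.
    var viewModel = {wordCount: ko.observable(0)};
    ko.applyBindings(viewModel);

    // Bacon: a manual refresh button merged with a once-a-minute poll.
    var refreshes = $('#refresh').asEventStream('click');
    var ticks = Bacon.interval(60 * 1000, {});

    refreshes.merge(ticks)
      .flatMapLatest(function () {
        return Bacon.fromPromise($.getJSON('/api/wordcount'));
      })
      .onValue(function (data) {
        viewModel.wordCount(data.count);
      });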

It was nice to do something outside the run of normal work and one thing that is quite cool about the hackday is that you can use it to tackle a technology that is entirely new to you and not have to worry about whether you succeed or fail.

Next time (May I believe) I think I want to learn about browser plugins as this is a way of producing better functionality for the Guardian without the hassle of having to make it work for the general population of browsers. Some people’s hacks this time around could have been released to the app/plugin stores and we could have been getting valuable user feedback by now.

Standard