Blogging, Programming, Web Applications

An overview of Javascript reactive frameworks

This post is only meant to be a snapshot of the current state of the various DOM-virtualising web frameworks that are around. I’m partly publishing it to try and discover more that I may not be aware of.

Many of these frameworks trace an ancestry back to Om and React. However each one tries to deal with perceived problems with the original frameworks. The most common being that React is too heavy and opinionated while not providing a consistent data model for components. Om on the other hand is in Clojurescript and therefore represents too much to learn in terms of a new language and build process.

Libraries

Most of the libraries build on a few common building blocks that I’m not going to elaborate on here. Virtualdom was an early attempt to separate the core idea of React from the rest of the library code. Virtualdom is only concerned with creating, manipulating and stringifying DOM structures in-memory. Browser DOM APIs involve linking to the actual rendered document, so managing a virtual DOM is more efficient and simpler because you’re not interacting with that underlying rendered document.

ImmutableJS provides a Javascript-idiom interpretation of the Clojure data structures that Om uses (and which are available as the standalone library Mori).

Omniscient

The first interesting framework to discuss is Omniscient, which as its name suggests is heavily influenced by Om but is written in Javascript and therefore does not require you to learn Clojure to use the same techniques that Om uses. Omniscient is built on top of React and ImmutableJS and uses its own library Immstruct to add reference cursors to ImmutableJS structures. Reference cursors allow a component to observe and change sections of a data structure without having to manipulate the whole thing. So for example a component can be given a single sub-key in an object that represents its state and it cannot access or change anything that is not under that key. The code can also be simplified to behave as if the sub-key was actually just the whole data object.

Omniscient doesn’t suggest an alternative to Om’s CSP, instead providing a mechanism for passing event flow functions down the component tree. You’re free to choose your own event libraries. It also means that you’re free to make your own mistakes here as no guidance is really given as to how to structure your event scheme appropriately.

Omniscient is one of the earliest frameworks to re-implement Om and therefore has one of the better sets of documentation on its Github pages. That said there’s not a lot of documentation and the framework does not have a massive community. The situation is worse in most of the other frameworks though so this might tip you over in favour of Omniscient.

Ractive

This is a bit of a Guardian shout out as the primary developer Rich Harris is a Guardian interactive developer.

Ractive (Github) is a little bit different from the other frameworks as you can essentially think of it as Mustache templates backed by Observables. You declare a data-binding and write templates in normal Mustache syntax, but behind the scenes Ractive is driven by changes in the data and then writes new sections of DOM in-memory according to what has changed rather than doing a DOM diff.

Also Ractive sticks with two-way data binding rather than unidirectional data flow, so failures in synchronisation or rendering can be problematic.

If what you want to do is render content over a Javascript data model then there is a lot in Ractive that is very compelling. It uses templates with a standard syntax that is well understood and it is a soup-to-nuts framework that sticks to core Javascript syntax and features. However if you want to use your own event or data model you are out of luck.

Mercury

Mercury on the other hand prides itself on modularity. A microframework, it attempts to create a glue layer that allows other libraries to interact in a sensible and consistent way. The default components are Virtualdom and its own observer pattern to wrap state.

Mercury’s biggest problem right now is its lack of documentation. There is an expectation that you are going to read the source code to understand what the framework is doing and how to interact with the API. I frankly think this is unrealistic. The project doesn’t currently supply the incentive to do that. Unless you have a very particular desire to avoid any framework lock-in, or you want to use a very specific combination of libraries that is not supported elsewhere, it’s hard to understand why you would invest your effort here rather than in frameworks that offer more support.

Cycle

Cycle is similarly experimental; its biggest claim is that it is truly reactive and that the rendered page is purely the result of changes in state. The introduction is couched in computer science theory but it would seem that at its heart Cycle wraps RxJS and Virtualdom in a glue layer that has the programmer writing the transform sequence between the event and the DOM structure.

I think it is a positive feature that Cycle re-uses a popular library to manage its state-transitions rather than implementing yet another custom version of the Observable pattern. It also makes the framework easier to get started with if you are familiar with Rx.

Using established libraries also makes the lack of documentation more acceptable as the Cycle readme only needs to explain how the glue works in the framework.

As something built on reactivity you have to get used to dealing with intermediate state, which can be a bit difficult for the beginner.

Essentially any event where the user would expect feedback means you need to write the conditional structure in the output. So if the user types a character in an input box then you need to write the value of the input box to be the characters the user has typed so far. Most frameworks work at a higher level of abstraction, or rather they map closer to the DOM APIs, so getting a working application means grokking the way the dataflow works.

If you’re looking for purity (and a resulting simplicity in implementation) but not to have to learn a bespoke API Cycle is nicely positioned.

WebRx

WebRx is similarly built on top of RxJS Observables but is a much fuller-fat framework; it is much more a spiritual successor to Knockout than something owing much to the influence of Om or React.

Rather like React WebRx doesn’t really provide generalised event handling but instead has special sauce bindings for DOM events and a MessageBus system built over Rx.

It is also written in Typescript and generally looks to play well within the Microsoft ecosystem. It’s interesting to me as an example of how different a language has to be before it’s regarded as a barrier. Clearly the use of Typescript means there are people who will refuse to use the framework regardless of whether it works for their use case. Other people are going to be attracted exactly because it uses Typescript.

Deku

Language choices are also interesting in Deku which is another attempt to re-implement React in a superficial way.

Deku makes use of ES6 and ES7 features and doesn’t aim to support a broad range of browsers (unlike, say, Ractive). Again that is going to rule it out for some people but this is more interesting as now we are within dialects of the same core language. Language choice for implementing frameworks is not straightforward. What are you looking for? Conciseness? Editor support?

Deku aims to take the dom diffing approach but avoid getting caught in React’s framework and approach. In particular components are defined just as Javascript objects rather than classes and instances, something I think makes them more elegant than normal React components.

It does however still use JSX which is quite interesting as the framework claims to be taking a functional approach but actually uses a DSL for all its DOM construction.

The lifecycle hooks are slightly different with more hooks for different stages of the process and Deku uses some interesting function passing to send changed data down the tree to components.

Deku doesn’t take much influence from Om though. It doesn’t have sophisticated event handling and uses mutable data with generous access and callbacks on data write to do re-renders. This means bugs and state issues are no less likely to happen than with any other framework. It does adopt the single atom idea with a single tree representing the app and the app renderer being bound to the body element.

As such, if you like the idea of React but don’t want to be bound to its concept of how a Component should be defined, and you do like JSX and trust the implementors to create a better dom diff than Facebook or Virtualdom, this is the project for you.

Conclusion

I’ve only chosen a handful of frameworks to look at here, mostly based on the ones I know; I’m expecting people to point out more in the comments. I also haven’t used all of these frameworks. Road-testing all of them would be a bigger task than just trying to describe the design choices they’ve made.

The most common pattern is to try and improve the rendering time versus React by using different virtual dom difference algorithms. Usually this is combined with Observed variables that provide a Reactive component that allows changes in the data model to be conveyed to the DOM model with no coding required.

Few of the frameworks engage with the functional reactive programming paradigm by building abstract event streams or indeed any abstraction over discrete events.

The idea that the app should be a single data structure that represents the whole page seems to be gaining significant traction with several of the frameworks recommending this as an approach.

The explosion of frameworks resulting from the release of React is, I think, a positive thing. Initially it seems really daunting that you have all these choices, but when you look at the real level of difference between them you can see that they are actually quite tightly coupled around a few common and core ideas. Mostly they express differences about the concerns that a framework should have, which feeds into the wider conversation about micro versus comprehensive frameworks.

Programming

No-one loves bad ideas

Charles Arthur has an interesting piece of post-Guardian vented frustration on his blog. His argument about developers and journalists sitting together is part-bonkers opinion and partly correct. Coders and journalists are generally working on different timeframes and newsroom developers generally don’t focus enough on friction in the tools that they are creating for journalists.

Journalists however focus too much on the deadline and the frenzy of the news cycle. I often think newsroom developers are a lot like the street sweepers who clean up after a particularly exuberant street market. Everything has to be tidied up and put neatly away before the next day’s controlled riot takes place.

The piece of the article I found most interesting was something very personal though. The central assumption that runs through Arthur’s narrative is that it is valuable to let readers pre-order computer games via Amazon. One of the pieces of work I’ve done at the Guardian is to study the value of the Amazon links in the previous generation of the Guardian website. I can’t talk numbers but the outcome was that the expense of me looking at how much money was earned resulted in all the “profits” being eaten up by cost of my time. You open the box but the cat is always dead.

Similarly Arthur’s quixotic quest meant that he spent more money in developers’ time than the project could ever possibly earn. Amazon referrals require huge volumes to be anything other than a supplement to an individual’s income.

His doomed attempt to get people to really engage with his idea really reflected the doomed nature of the idea. British journalism favours action and instinct and sometimes that combination generates results. Mostly however it just fails, and regardless of who is sitting next to whom, who can get inspired by a muddle-minded last-minute joyride on the Titanic except deadline-loving action junkies?

Programming

Python: Preferring Named Tuples over Classes

One of the views that I decided to take in my recent Python teaching is that named tuples and functions are preferable to class-based data structures.

Python's object-orientated (OO) code is slightly strange anyway since it is retrospectively applied to the original language and most programmers find things like the self reference confusing compared to OO idioms in languages like Ruby or Java.

On top of this Python's dynamic nature means that objects are actually "open" (i.e. they can take new attributes at runtime) and have few strong encapsulation guarantees. Most of this is going to be surprising to OO programmers who would expect the type to be binding.

Named-tuples on the other hand are immutable so their values cannot be changed and they cannot be expanded or reduced by adding or removing attributes. Their behaviour is much more defined while retaining syntax-sugar access to the attributes themselves.
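
As a minimal sketch of that contrast (the Point example here is invented purely for illustration):

    from collections import namedtuple

    # An "open" class: attributes can be added or reassigned at runtime.
    class PointClass:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = PointClass(1, 2)
    p.z = 3    # silently grows a new attribute
    p.x = 99   # silently mutates existing state

    # A named tuple: fixed shape, immutable values, same dotted access.
    Point = namedtuple("Point", ["x", "y"])
    q = Point(x=1, y=2)
    print(q.x)   # 1
    # q.x = 99   # AttributeError: can't set attribute
    # q.z = 3    # AttributeError: 'Point' object has no attribute 'z'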

Functions that operate on tuples and return tuples have some nice properties in terms of working with code. Firstly you know that there are no sequencing issues. A function that takes a tuple as an argument cannot change it so any other function is free to consume it again as an argument.

In addition you know that you are free to consume the tuple value generated by a function. As the value cannot be changed it is safe to pass it around the codebase.
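
A small example of that style, using a hypothetical Account tuple:

    from collections import namedtuple

    Account = namedtuple("Account", ["name", "balance"])

    def deposit(account, amount):
        # Returns a new Account; the argument is never modified.
        return account._replace(balance=account.balance + amount)

    original = Account(name="alice", balance=100)
    updated = deposit(original, 50)

    print(original.balance)  # 100, still safe for any other function to consume
    print(updated.balance)   # 150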

I think the question should be: where are classes appropriate in ways that tuples are not?

The most common valid use of classes and inheritance is to provide a structure in a library where you expect other programmers to supply appropriate behaviour. Using classes you can simply allow the relevant methods to be implemented in the inheriting implementation. A number of Python web frameworks use this Template pattern to allow the behaviour of handlers to be defined.
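
A rough sketch of that Template pattern in plain Python (the handler names are invented for illustration rather than taken from any particular framework):

    class Handler:
        """Framework-owned skeleton; inheriting code supplies the behaviour."""

        def handle(self, request):
            data = self.parse(request)
            return self.render(data)

        def parse(self, request):
            raise NotImplementedError

        def render(self, data):
            raise NotImplementedError

    class HelloHandler(Handler):
        def parse(self, request):
            return {"name": request.get("name", "world")}

        def render(self, data):
            return "Hello, {}!".format(data["name"])

    print(HelloHandler().handle({"name": "reader"}))  # Hello, reader!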

Even then this is not the definitive solution. Frameworks such as Flask use decorators instead, which fits with the functional approach.
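
With Flask the behaviour is attached to a plain function with a decorator, roughly like this:

    from flask import Flask

    app = Flask(__name__)

    # No handler class or inheritance; the route decorator registers the function.
    @app.route("/hello/<name>")
    def hello(name):
        return "Hello, {}!".format(name)

    if __name__ == "__main__":
        app.run()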

So in general I think it is simpler and easier to maintain programs that consist of functions taking and generating immutable data structures like tuples. Using Python's object-orientation features should be considered an advanced technique and used only when necessary.

Programming

/dev/winter 2015

The Dev Sessions are a Cambridge tech conference organised by the same people who do FPDays. The conference was free, held on a Saturday and was based in the Moeller Centre near the Churchill College campus. The only practical way to and from the station was via taxi (befriend those on expenses, thank you John Stevenson).

The talks were on broad topics relating to development. I had pitched a talk on Developer Autonomy, something I'm engaged with in the day job.

Misjudging the train times I arrived a little late and jumped in to the talk on using graph databases in game design. This turned out to be a much more general talk about how the speaker had created tooling to support the game designers in his job. Being a fellow tool provider my interest was immediately piqued.

The game the team were building was some weird monster-trapping game, something like Pokemon but more complicated. To trap monsters you need a trap and a lure or bait, and you need to craft both, which means acquiring recipes and components. Trapped animals provide you with components for other baits and traps, plus a monetary reward.

The talk was pretty wide-ranging; they were using Neo4J to analyse circular dependencies in "quests" to capture monsters. When designers changed the game data it would get loaded into the graph and all the dependencies checked to confirm that they form a tree (flowing forward) rather than having inter-dependencies (circular references).
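
The data itself lived in Neo4J, but the shape of the check is easy to sketch independently; here is a rough Python illustration over an invented dependency mapping:

    def find_cycle(dependencies):
        """Depth-first search over a {thing: [prerequisite, ...]} mapping.
        Returns one circular path if present, otherwise None (the data
        flows forward like a tree)."""
        visiting, done = set(), set()

        def visit(node, path):
            if node in done:
                return None
            if node in visiting:
                return path + [node]
            visiting.add(node)
            for dep in dependencies.get(node, []):
                cycle = visit(dep, path + [node])
                if cycle:
                    return cycle
            visiting.discard(node)
            done.add(node)
            return None

        for node in dependencies:
            cycle = visit(node, [])
            if cycle:
                return cycle
        return None

    # A trap whose bait requires the very monster the trap is meant to catch:
    print(find_cycle({"trap": ["bait"], "bait": ["monster"], "monster": ["trap"]}))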

It was also possible to generate a "map" of everything in the game and what elements of the game were central and which were on the periphery (which should be the high-level monsters near the end of the game).

All the game data is in text files that are stored in Git. The developers had built a tool over the VCS that simplified the presentation of the many JSON files, but it was also possible for designers to edit them directly with whatever editor they favoured.

All the game data then gets built, validated and packed so it can be shipped off to the servers to power the game.

I think, if I understood the talk correctly, that the build also includes the localised text which is then powered from the server rather than updating a binary datafile on the client.

The final really interesting part of the talk involved the use of genetic algorithms to try and create game data. Data is captured from the game indicating what percentage of the players have captured a particular monster. The designer can then enter the percentage that they intend to capture the monster and the program goes off and tries to generate variations on the monster stats and trap requirements that it predicts will be more achievable by players. If any suitable combinations are found the designer can review them and choose the one they prefer.

Again having selected some changes these are applied to the data files via the tool and then packed and shipped.

It was a really interesting talk about how engineers can make a real difference by building tools and was completely undersold by its title.

The Mixcloud talk on scaling on a bootstrap budget was very interesting as most talks on scaling are about reliability, volume and throughput. It is very rare to get one that focuses purely on trying to create the lowest cost stack.

One of the key things they do to achieve this is a lot of capacity planning with just-in-time rental, buying capacity just ahead of rising usage, something that is much easier when you have a focused product with a limited scope that all your engineers can focus on.

They were also using some interesting hacks, like ruthlessly using their right to renew contracts to make sure their applications ran on the newest hardware being brought into the datacentre instead of staying on the older blades. A few of the other things I'd heard of before: setting your requirements so that you need whole individual boxes, and therefore do not share your infrastructure with anyone else, instead of building smaller services with numerous deployments.

There were a few blanket statements that I didn't agree with. For example S3 was condemned as being "expensive" when it's really not; the more nuanced statement is that S3 bandwidth is expensive, and that it really is more of a storage solution than something you use to serve the public directly at scale.

One of the big domain-specific issues was around streaming audio files. Intriguingly, when you serve the files the connection is often fast enough that you deliver the whole asset to the browser when the user is perhaps only going to listen to ten seconds to see if they like it.

A lot of the talk was really about building a single point of presence CDN on the cheap. I did wonder if there wasn't something smart to be done with servers that regulated the downloads more evenly or with a custom player and streaming format.

I stopped by the Julia introduction and there were some interesting points but it was very slow. Julia is quite an interesting language though and I should spend more time with it.

The final talk of the day was on "smells" in automated testing. I thought this would be an interesting topic because I think automated testing is hard, but a combination of obscure slide illustrations, fairly old testing strategies and dodgy OO code examples at the end of the day resulted in a talk that got side-tracked. Testing is hard, and since test code is code it does not seem worth calling out tests as something special within a codebase. Writing good test code means writing good code, and applying the same scrutiny of solution design to the test code just makes sense.

Two things that were not mentioned in the talk but which I think matter when you are talking about the subject as a whole are monitoring and generative testing. I think any talk about testing now needs to cover an approach to generative testing, the old world of testing examples and specifications might be helpful for illustrating code but should not be considered as really being proper test code.

Things that can be extremely difficult to test might be trivial to monitor. Time spent understanding the performance of code in production can be just as valuable as investing a lot of time in creating complex test code.

The whole day was full of interesting talks and bits and pieces and I'm definitely interested in trying to make the trip to the summer version of the event.

Programming

Scale Summit 2015: Testing in production session

One of the most interesting sessions I went to at Scale Summit 2015 was one about testing in production. It was not that well attended compared to the other sessions so I don't know if there was implied agreement with the topic.

One of the questions was why it is important to test in production. For me the biggest thing is that you can only really get realistically distributed traffic from genuine traffic. Most load-testing or replay strategies fail for me at the first hurdle by only creating load from a few points of presence, usually in the big Amazon availability zones. You also have to be careful that traffic is routed outside of Amazon's internal data connections if you want to get realistic numbers. Dealing with load from a few different locations with large data pipelines between them is very different from distributed clients on the public network.

Replay strategies allow for "realistic" traffic patterns and behaviours but one of the more interesting ideas discussed was to generate fake load during off-peak periods. This is generated alongside the genuine user traffic. The fake load exercises key revenue generating pathways with some procedural randomisation. Injecting this additional fake load allows capacity planning and scaling strategies to be tested to a known excess capacity.

Doing testing in production means being responsible, so we talked about how to identify fake test traffic (HTTP headers with verification seemed sensible) so that you can do things like circuit-break that traffic and also segment it in reporting.
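
As a sketch of what that might look like (the header name, token and endpoint are invented for illustration, and I'm assuming the requests library on the client side):

    import uuid
    import requests

    # Synthetic traffic carries an explicit marker header plus a shared token
    # so that servers can verify, segment or circuit-break it.
    SYNTHETIC_HEADER = "X-Synthetic-Test"
    TOKEN = "shared-secret-known-to-the-servers"

    def hit_checkout_path(base_url):
        return requests.post(
            base_url + "/checkout",
            json={"basket": ["test-item"], "run_id": str(uuid.uuid4())},
            headers={SYNTHETIC_HEADER: TOKEN},
        )

    # Server side (framework-agnostic pseudo-middleware):
    def is_synthetic(request_headers):
        return request_headers.get(SYNTHETIC_HEADER) == TOKEN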

During the conversation I realised that the Guardian's practice of asking native app users to join the beta programme was also an example of testing in production. Most users who enter the scheme don't leave so you are creating a large segment of users who are validating releases and features ahead of the wider user base.

In the past we've also used the Facebook trick of duplicating user requests into multiple systems to make sure that systems that are being developed can deal with production load. If you don't like doing that client-side you can do it server-side by using a simple proxy that queues up work with a variety of systems but offloads everything that isn't the user's genuine request. Essentially you throw away the additional responses but the services will still do the work.
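
A rough sketch of that duplication done server-side (the hostnames are placeholders and this is my illustration of the idea, not how Facebook or the Guardian implemented it):

    import threading
    import requests

    PRIMARY = "https://live.example.com"       # serves the real response
    SHADOW = "https://candidate.example.com"   # receives a copy of the work

    def fire_and_forget(path, body):
        # The shadow response is thrown away; only the load matters.
        try:
            requests.post(SHADOW + path, json=body, timeout=5)
        except requests.RequestException:
            pass  # shadow failures must never affect the real user

    def handle(path, body):
        threading.Thread(target=fire_and_forget, args=(path, body), daemon=True).start()
        return requests.post(PRIMARY + path, json=body, timeout=5)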

We also talked about the concept of having advanced healthchecks that report on the status of things like the availability of dependencies. I've used this technique before, but interestingly I've made the machines go into failure mode if their mandatory dependencies aren't available, whereas other people were simply dashboarding the failures (and presumably alerting on them).
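
A minimal sketch of the failure-mode variant, assuming Flask and stand-in dependency probes:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Stand-ins for whatever dependency probes the service actually has.
    def check_database():
        return True

    def check_queue():
        return True

    MANDATORY = {"database": check_database}
    OPTIONAL = {"queue": check_queue}

    @app.route("/healthcheck")
    def healthcheck():
        status = {name: probe() for name, probe in {**MANDATORY, **OPTIONAL}.items()}
        healthy = all(status[name] for name in MANDATORY)
        # Returning 503 when a mandatory dependency is down takes the box out of
        # the load balancer; the alternative is to keep serving and just dashboard it.
        return jsonify(status), 200 if healthy else 503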

At the end of the session I was pretty convinced that testing in production is not only sensible but that actually there are a number of weaknesses in pre-production testing approaches. The key one being that you should assume that pre-production testing represents the best case scenario. You are testing your assumed scenario in a controlled environment.

There is also a big overlap between good monitoring and production testing. You have to have the first before you can reasonably do the second. The monitoring needs to be freely accessible to everyone as well. There's no good reason to hide monitoring away in an operations group and developers and non-technical team members need to be able to see and understand what is actually happening in production if they are to have the same conversation.

Programming

Trading performance for asynchronicity

An unusual conversation came up at one of the discussion groups in the day job recently. One of the interesting things that the Javascript language specification provides is a very good description of asynchronous execution that is then embodied in execution environments like NodeJS. Asynchronicity on the JVM is emulated by an event loop mechanism on top of the usual threaded execution environment. In general if you run JVM code in a single-threaded environment bad things will happen; I would prefer to do it on at least two cores.

So I made the argument that if you want asynchronous code you would be better off executing code on NodeJS rather than emulating via something like Akka.

Some of my colleagues shot back that execution on NodeJS would be inferior and I didn’t disagree. Just like Erlang sometimes you want to trade raw execution performance to get something more useful out of the execution environment.

However people felt that you would never really want to trade performance for a pure asynchronous environment, which I found very odd. Most of the apps we write in the Guardian are not that performant because they don’t really need to be. The majority of our volume is actually handled by caching and a lot of the internal workloads are handled by frameworks like Elasticsearch that we haven’t written.

In follow-up discussion I realised that people hadn’t understood the fundamental advantage of asynchronous execution, which is that it is easier to reason about than concurrent code. Asynchronous execution contexts on NodeJS provide a guarantee that only one scope is executing at a time, so whenever you come to look at an individual function you know that the scope is limited entirely to the block you are looking at.
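
Node is the environment under discussion here, but the same guarantee is easy to demonstrate with Python’s asyncio, which uses a comparable single-threaded event loop; this is just an illustrative sketch:

    import asyncio

    counter = 0

    async def bump(times):
        global counter
        for _ in range(times):
            # Between awaits only this block runs; the read-modify-write below
            # cannot be interleaved with another task, so no lock is needed.
            current = counter
            counter = current + 1
            await asyncio.sleep(0)  # yield control back to the event loop

    async def main():
        await asyncio.gather(bump(10000), bump(10000))
        print(counter)  # always 20000; a preemptively threaded version could lose updates

    asyncio.run(main())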

Not many programmers are good at parsing and understanding concurrent code. Having used things like Clojure I have come to the conclusion that I don’t want to do concurrency without excellent language support. In this context switching to asynchronous code can be massively helpful.

Another common situation is where you want to try and achieve data locality. With concurrent code it is really easy to actually end up with net poorly performing code due to contention on contexts. Performing a logical and cohesive unit of work is arguably a lot easier in asynchronous code blocks. It should be easier to establish a context, complete a set of operations and then throw away the whole context, knowing that you won’t need to reload that context again as the task will now be complete.

It is hard to make definite statements about which solutions are appropriate in particular situations. I do know though that performance is a poor place to start in terms of solution design. Understanding the pros and cons of execution modes matters considerably more.

Clojure, Programming

Creating Javascript with Clojure

This post is an accompaniment to my lightning talk at Clojure Exchange 2014 and is primarily a summary with lots of links to the libraries and technologies mentioned in the presentation.

The first step is to use Wisp, a compiler that can turn Clojure syntax into pure Javascript with no dependencies. Wisp will translate some Clojure idioms into Javascript but does not contain anything from the core libraries, including sequence handling. Your code must work as Javascript.

One really interesting thing about Wisp is that it supports macros and therefore can support semantic pipelining with the threading macros. Function composition solved!

If you want the core library functionality, the logical thing to add next is a dependency on Mori, which will add in the data structures and all the sequence library functions you are used to, with a static invocation style that is closer to Clojure syntax.

At this point you have an effective Clojure coding setup that uses pure Javascript and requires a 50 to 60K download.

However you can go further. One alternative to Mori is ImmutableJS, which uses the Javascript interfaces (object methods) for Array and Map. If you use ImmutableJS you can also make use of a framework called Omniscient that allows you to develop ReactJS applications in the same way you do in Om.

ImmutableJS can also be used by TransducersJS to get faster sequence operations so either library can be a strong choice.
