Web Applications

Roam Research: initial thoughts

Roam Research not only justified subscribing pretty much up front but also made it onto my pinned tabs in virtually no time flat. It's basically a web-based knowledge management system. I'm already a fan of Workflowy, so I'm comfortable with putting information into trees and hierarchies; in fact there's a lot of overlap between the two applications, as you can use Roam as a kind of org-mode bulleted list organiser.

The thing that makes it different is the wiki-like ability to turn any piece of text into a link, which creates another list page for storing further notes.

The resulting page highlights the linked portions of the trees on other pages as well as containing its own content.

The links then form a graph that can be explored, but I haven't generated enough content for it to yield any useful insight yet.

The pages are searchable so you can either take wiki-like journeys of discovery through your notes or just search and jump to anything relevant in your knowledge graph.

By default the system creates a daily "diary" page where you can record notes organically, in an initially unstructured way, as you roll through the day. I'm still primarily in my todo lists in a Getting Things Done mode during the day, but I have found it a useful end-of-day technique for reflecting on or summarising ideas to follow up on.

Roam is very much influenced by and part of the new wave of knowledge management tools based on Zettelkasten. If you’re unfamiliar it’s worth reading up on it (I don’t know it well enough to create a pithy summary).

To date, though, everything I've tried in this space has been a bit formal and tricky to get going with or to fit into my existing ways of working. Roam, on the other hand, is web-based, relatively quick and usable, and borrows enough metaphors from existing systems to feel accessible.

Weirdly, the first use that convinced me I needed this service was actually recipes. You can have a hierarchy of different types of recipe, but add a link and you also get a vertical slice across ingredients or techniques.
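
As far as I can tell, linking is just a matter of wrapping a phrase in double square brackets, which creates (or links to) a page of that name. A rough sketch of the recipe case, with page names made up purely for illustration:

```
- Recipes
  - Middle Eastern
    - [[Shakshuka]]
      - needs [[smoked paprika]]
  - Spanish
    - [[Patatas bravas]]
      - also uses [[smoked paprika]]
```

The [[smoked paprika]] page then shows every recipe that references it, which is exactly the vertical slice across the hierarchy.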

The second was while doing some genuine market research on Javascript enhancement frameworks. I wanted one page for my overall thoughts ("Is this something to pursue?") and was able to break the list of frameworks I was looking at into their own pages, each with links to the framework and whatever thoughts I had while playing around with it.

The mobile experience isn't quite as good: it's a kind of fast note-capture system, and I'm not sure how to quickly attach a thought to an existing page. Here it's still easier to use a note-taking app and consolidate thoughts later.

Overall though this is still the most exciting web app I’ve used this year.

Programming

Svelte – a first look

Rich Harris is a Javascript wizard who has already created the build tool Rollup and the framework Ractive. So when he announced a new framework called Svelte I definitely wanted to take a look and see what problems he is trying to tackle with it.

Having spent a little time with the examples, I now have some understanding of what's going on and how Svelte compares to other frameworks and approaches to building dynamic web pages.

One of the big things is that Svelte is based around a compiler that creates the deployed package, which is just a variation on a Javascript file. So far I've found the compiler to be straightforward and its errors easy to understand. The compilation phase puts Svelte closer to the Elm camp of pushing problems earlier in the development cycle.

Svelte also offers its own take on the Web Component: a Svelte component is responsible for managing its own dependencies and CSS. The definition of a Svelte component feels a little different to most component systems, though. The basic templated piece of HTML is pretty standard, but the component lives inside an HTML file that also uses the script and style tags to define the behaviour and appearance of the component respectively.

Using standard tags for this is, perhaps unsurprisingly, much more intuitive than defining React or Riot components.
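
To give a feel for the single-file format, here is a minimal sketch of a component (the exact syntax varies between Svelte versions, and the names here are mine, purely for illustration):

```html
<script>
  // component state is declared as ordinary variables in a script block
  let name = 'world';
</script>

<!-- the markup is just templated HTML -->
<p>Hello {name}!</p>

<style>
  /* the style block is scoped to this component by the compiler */
  p { color: rebeccapurple; }
</style>
```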

Web Applications

State of the Browser 2014

I hadn't been to State of the Browser before. It is a very cheap, one-day conference, held at the weekend, on the topic of web standards and the web in general.

Conway Hall, the venue, is a beautiful place and comes highly recommended. However, the grand aura of humanist lectures did remind you how lame most slide-based presentations are. Shut out the light, we can't see the cat gif!

The theme and topics of the conference are vague, and therefore there was a lot of variety in the talks. More than half came from professional vendor advocates, and while slick and enjoyable there was a palpable sense of yearly objectives being ticked off. Community communication, check; reminder of organisation mission, check. The rest of the talks were pretty crappy though, so it's not all roses in the community either.

I've put down a few immediate reactions, but I thought I would try to formulate some general takeaways.

Firstly, the meaning of the web is very vague: there was an attempt to formulate the meaning of a "web platform" but it floundered a bit. The difficulty is not really what the web is, which is fundamentally unchanged since its inception, but rather what all the companies are doing when they try to build on and expand the web.

Essentially what do browser vendors talk about when they talk about the web? To them the web is the input that the browser will accept. Microsoft, Mozilla, Opera and Google were all represented along with Telefonica who are making a big bet on Firefox OS.

One key theme was the belief that affordable smartphones (say, below £50 to buy and presumably close to £10 a month to run) are imminent and that they will herald a new wave of traffic and content consumption. I feel that broadening on-demand access to the web is a good opportunity, but the value of this audience, beyond hopefully buying data plans that are more expensive than talk-minute and text bundles, was utterly unproven and seemed an issue of no concern to the speakers.

One interesting thing about web development is that it is a place where visual design, technology and content creation collide into one huge grope box orgy where everything gets mixed up with everything else.

The visual design of the web was mentioned more than a few times, and a lot of the standards work was essentially about delivering more fidelity to conceptual designs. It's interesting that this is seen as a fundamentally good thing rather than being interrogated. Perhaps it was discussed in earlier years.

There was also an interesting division in what people saw as their responsibilities. Javascript is now sufficiently complex that there is stratification and specialisation even within this niche. "Glass" people do UX, HTML and CSS; Javascript people do MVC "backend" work and performance; and literally no one is thinking about how the server could make any of this easier.

There was a dispiriting sense, from a technology perspective, of people hitting everything in sight with a golden hammer made of HTML/CSS/JS. About a fifth of the things discussed on stage boiled down to "a written standard for accessing OS capabilities based on an implementation of that standard". It makes you appreciate things like Linux, where there is pressure to actually tackle root problems and needs rather than layering hack on hack. The acceptance of the diabolical state of touch detection is an example, leading to the suggestion that you should progressively enhance on the detection of mouse events. I mean, after all, why use a filesystem abstraction when you could just iterate over /dev yourself?

The same paucity of leadership came up on the issue of HTTP/2, where it became clear that the vendors regard it as a way of dealing with the overhead of HTTP connections, not really as a way to create the right kind of networking for the new activity we want to perform online.

It was also nice to see not one but two "standards" for defining viewport-relative sizes: vw in the viewport spec (which seems very sensible and progressive, by the way) and w in the picture/srcset responsive images standard.
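
For the unfamiliar, a rough sketch of the two (file names invented for illustration): a stylesheet rule such as width: 50vw sizes an element relative to the viewport, while the w descriptor in srcset tells the browser the intrinsic width of each candidate image so it can pick one.

```html
<!-- sizes says the image will occupy half the viewport width (50vw);
     the w descriptors give the intrinsic width of each candidate file -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w"
     sizes="50vw"
     alt="A photo">
```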

There were a few moments when people seemed to touch on a better way of doing things (for example, declarative, programmatic rules for layout), but these were rare. Maybe it's just not that kind of conference.

In terms of talks, the clear standout was Martin Beeby's talk on what the Internet Explorer team have been doing to remove bottlenecks from their rendering. Most of the material was sensible and straightforward, but the detail on GPU interaction was fascinating, particularly around picture loading.

One massive problem with the conference was the weird idea that speakers weren't going to take questions after their talks. Martin mentioned that the buffers between the browser and the GPU were small, and I would have loved to know whether that was an intrinsic limitation or not. The inability to follow up on issues diminished the utility of all the talks.

Other than that, the walkthroughs of the viewport, service worker (particularly the caching API) and picture specifications were all helpful. Andreas Bovens's talk also included a useful review of pixel density and its new related units.

The talks were filmed; I have no idea whether they will be posted at some point, but those are the ones I'd recommend.

The ticket was very cheap, but the main issue with the conference was the time it takes. The programming is very baggy; I felt that if all the talks had been halved in length and the panel discussion chopped to make room for post-talk questions, there would have been a really good long afternoon of material.

I’ll probably give it another go next year but be a bit more ruthless about what talks to attend.

Web Applications, Work

The myth of “published” content

Working at the Guardian you often end up having conversations with people about the challenges of scaling to meet the often spiky traffic you get in online media. One thing that comes up again and again is the idea that content, once published, is essentially static. Now there is a lot to be said for this, as digital journalism sticks pretty close to a lot of the conventions of print media; copy is often culled from the print version and follows the 24-hour media cycle quite strongly.

However, what is often surprising is the number of edits a piece of content receives, particularly if it is not a print feature article. The initial version of an article is often just the mandatory information and a few paragraphs, sufficient to get across the basic story. It then goes through a number of revisions that often happen while the article is still a draft. Often, but not always.

Once the article gets published online, though, it triggers a new wave of edits as the language gets cleaned up and readers, editors and lawyers all descend on it. Editors now have a lot more tools to see how the audience is reacting to a piece of content and how it is playing on social media. You also have articles picked up externally, which means making sure the article works as a landing page.

Naturally, stories often develop their own momentum, which requires you to switch from a single piece to a set of stories approaching different aspects of the overall reporting. You then need to link the different pieces of content together to form a logical package of content.

One interesting thing is looking at how many articles are changed after seven days. It is a surprisingly high number, as new stories often create a need for historical context, and older stories can look dusty in the light of breaking events. We have also had strange things happen with social news, where aggregating sites pick up a story that was overlooked at the time.

All of this means that you cannot naively treat content as static; instead you have an interesting decaching problem. It is true that content doesn't change much, until it does start changing, and then the published page needs to reflect the changes reasonably rapidly if you want to be picked up by things like Google.

Programming, Work

The beauty of small things

I am very interested in the idea of "constellation architecture" and microapps as a new model for both web and enterprise architecture. It feels to me like a genuinely new way of looking at things that can deliver real benefit.

It is also not an entirely new way of doing things: it is really just an extension of the UNIX tools idea, taking concepts like service-orientated architecture and some of the patterns of domain-driven design to their logical, extreme conclusion.

If I take ls and pipe it through grep, you wouldn't find that particularly exciting or noteworthy. However, creating a web application or service that does just one thing, and then creating applications by aggregating the output of many such small components, seems novel and slightly adventurous to some.

SOA failed before it began, and the DDD silos of vertical responsibility seem poorly understood in practice. Both have good aspects, though. However, both saw their unit of composition as something much larger than a single function. An SOA architecture for payments, for example, tended to include a variety of payment functions rather than just offering one service, such as authorising a payment.

There is a current trend to look at a webpage as being composed of widgets, whether they are written as server-side components or client-operated components. I think this is wrong and that we need to see a page as being composed of the output of many different webapps.

Logging in is handled by a web application whose only responsibility is to authenticate users; the most popular pages are delivered by an application whose responsibility is to determine which pages are popular.
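
A sketch of what aggregating those two imaginary applications might look like; the service URLs and response shapes here are entirely made up:

```typescript
// an aggregating app that only knows how to stitch together the output
// of two hypothetical single-purpose services
async function renderHomePage(): Promise<string> {
  const [user, popular] = await Promise.all([
    fetch("https://auth.example.internal/current-user").then(r => r.json()),
    fetch("https://popularity.example.internal/top-pages").then(r => r.json()),
  ]);

  // the page is just the combined output of the small apps
  return `
    <header>Signed in as ${user.name}</header>
    <ul>${popular.pages.map((p: { title: string }) => `<li>${p.title}</li>`).join("")}</ul>
  `;
}
```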

These applications should be as small as we can make them and still have them function. Ideally they should be a few lines of domain code linking together libraries and frameworks. They should have acceptance/behaviour tests to guarantee their external functionality, and that's about it.

It seems to me that the only way we are going to get good large-scale functionality is by aggregating small, useful pieces of functionality. Building large functional stacks takes a lot of time and doesn't deliver value commensurate with the effort of creating them.

Blogging

Experimenting with Tumblr

I have recently hived off a few bits of posting that used to be in this blog to Tumblr, a startup that ValleyWag described as being, like Twitter, “unencumbered by revenue”. It’s been an interesting experience.

As this blog has become a bit more work-focussed and more formal, I felt that writing about Doctor Who wasn't quite the right thing to mix in with the more esoteric tech stuff. I like WordPress a lot and I thought about starting up a second blog here. However, I felt I wanted something a little lighter and more light-hearted, as the topics were going to be relatively trivial.

Signing up was easy (all very Web 2.0: massive fonts, custom URLs, etc.), but when I saw that you could use Markdown to write posts rather than a WYSIWYG editor I was sold. Since I know it anyway, it saves me a lot of time not frigging around with generated HTML. I also liked the AJAX UI, which made it seem quite easy to just post a few thoughts.

In my mind Tumblr occupies a position somewhere between Twitter and WordPress: for when you have something to say that is more than a sentence but not a whole lot more than a paragraph. It is the kind of thing that Blogger should have become after it was clear that WordPress had completely whupped it on almost every front.

I have found Tumblr to be fun and also something that entices you into just jotting down a few thoughts. In terms of the experience it is all light, responsive and dynamic up front but you can dig around behind the scenes to take control of the visual aspects of your site via CSS and HTML (something that is paid for in WordPress) as well as get more options for posting.

So what do I miss from WordPress? Well, the first thing is the Stats crack, obviously. WordPress has a killer feature in telling you exactly how many people are reading your articles and how they came to read them. There are also a lot of features that surround this, like auto-promotion of articles to Google, the related articles list and the Blogs of the Day. Publishing something in WordPress feels like launching it into the world; by comparison, Tumblr posts are a much more muted affair. It feels more like a secret club. I know Tumblr does the promotion as well, but I guess WordPress does a better job of closing the feedback loop.

Not having comments on Tumblr is also part of that. Given that comments on your blog can be a very mixed bag I was surprised to find myself missing them. Somehow I must have gotten used to them and their lack now feels like silence. I know some people have used Intense Debate to add in comments but if I was really that bothered about it then I would probably have gone back to WordPress.

So I’m enjoying Tumblr but I am also hoping that they keep it simple and don’t get tempted to add every feature there is from other blogging software.

Web Applications

Important! !important is a danger sign

Until recently I had never seen the CSS keyword !important used on a production site. Lately, however, I have seen it in use and have also had to use it myself to fix a few cascade issues.

CSS selectors work by being assigned a specificity, a kind of "magic number" that indicates how specific a selector is in relation to the others. !important trumps that mechanism: an important declaration wins over normal declarations regardless of how specific their selectors are. You can read the exact rules in the specification.

!important is really powerful and as a result you never want to use it. It's kind of like the CSS A-bomb: if you ever have to use it, something has gone wrong. The biggest problem with !important is that it can "lock" a style and make it hard to override in other cascades. Inevitably this becomes a problem, because there is pretty much nothing in CSS that can be regarded as universal in the appearance and rendering of a website.

This then leads to other stylesheets also using !important in their declarations to overcome the earlier !important. That limits their reuse as they now, in turn, export their own overly powerful rules, and therefore require yet more !important use, and so on and so on until every declaration has !important on it.
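
A small sketch of the arms race, with class names invented for illustration:

```css
/* a base stylesheet locks a colour with !important */
.button { color: white !important; }

/* a page stylesheet uses a more specific selector, which would
   normally win, but it still loses to the important declaration */
.promo .button { color: black; }

/* ...so it escalates, and now it exports an overly powerful rule
   that the next stylesheet will have to out-shout in turn */
.promo .button { color: black !important; }
```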

Stylesheets should try to use the weakest selectors possible (without going overboard and applying styling information too liberally). This makes them more generally useful, as often people only dislike a few elements of a style, or an individual page only has a few components that do not gel well with the general style.

I think !important should never be used when creating CSS. There are perhaps two exceptions I can think of: firstly, client styles (you know best, fill your boots); secondly, stylesheets that you know represent the real bottom of a cascade. For example, an optional stylesheet that renders the site in monochrome can reasonably be expected to represent the final word in a cascade.

Web Applications

Who is I?

Want to take a new look at news feeds? Whoisi is a feed aggregator with a few distinctive features: firstly, it is orientated around people; secondly, it allows you to associate an individual with pretty much anything that provides an RSS feed; and thirdly, it is an experiment in anonymous collaboration.

News feeds are organised around people (e.g. John Resig) and for people who just have one blog it isn’t very exciting but if someone has a Twitter stream, a blog or two, Flickr and a LiveJournal then suddenly you are looking at a consolidated view of everything that person is up to.

Which is either really cool or the behaviour of a demented stalker. For people who have a strong web presence and are generally pretty cool and interesting, it is really useful to get a single view of them. For example, John's jQuery conference posting works better when you combine Twitter and the photostream.

I think I kind of prefer Whoisi's liberal anarchy to most of the other sites I have seen. It asks important questions about how the web should work. Why do we need accounts and passwords? If information is public, do individuals get a say in how the information they provide is organised?

Web Applications, Work

The death of MVC

The MVC pattern is so embedded in the concept of modern web development that I feel quite the heretic for declaring it over. Yet more and more I think we are moving away from it as a pattern. Views have been boiled down to a special case of templating, and now Controllers are next under the microscope. What does a Controller do? Well, it marshals the model and exposes it to the view.

However, with the relentless march of REST, how much controlling is the Controller really doing? The HTTP request tells you the format of the data and identifies the resource it is interested in. How much need is there for a controller for each request type? Surely the Uber Controller that responds to the HTTP request is all that's required.

I have also been using Groovlets recently, and when using them I find myself asking "why not mix your model lookup with the 'view'?". In my Groovlets I essentially look up the data for the view either directly via Groovy SQL or via the service layer that is injected out of Guice. The view is then created using MarkupBuilder.

Since my scripts end up at around 50 lines of code, I think any benefit I might gain from separating things is outweighed by the fact that the entire interaction is in one place and can be found, read and changed very easily.
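
To give a flavour of the style (this is not one of my actual scripts; the connection details and table name are invented for illustration), a Groovlet along these lines does the lookup and the 'view' in one place:

```groovy
// in a Groovlet, 'out' is the response writer bound for you by the servlet
import groovy.sql.Sql
import groovy.xml.MarkupBuilder

// look the data up directly with Groovy SQL
def sql = Sql.newInstance('jdbc:hsqldb:mem:demo', 'sa', '', 'org.hsqldb.jdbcDriver')

// build the 'view' in the same script with MarkupBuilder
new MarkupBuilder(out).html {
    head { title 'Latest articles' }
    body {
        ul {
            sql.eachRow('select title from articles order by published desc') { row ->
                li row.title
            }
        }
    }
}
```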

MVC saved us from really painful web architectures, but the more sophisticated we become in the way we handle HTTP requests, the better we understand the implications of HTTP, and the less ceremonial our languages become, the less benefit we get from it.

Web Applications

Try our new, new services!

So on Friday not one but two long-awaited beta service invitations arrived. The first was the announcement of the addition of Jotspot to Google Apps (finally) and the other was for the Amazon Simple DB service. Typical buses…

I didn’t have a lot of time this weekend so I plumped for signing up for Google Apps and trying the new wiki functionality as I was hoping for a beefed up version of Pages. The Simple DB service also needs me to beef up my Web Service scripting fu.

It is too early to say much about either service, but after signing up for a Google Apps account (apparently you cannot simply drive one off your regular Google Account) I was slightly underwhelmed by the new Google Sites service. It has taken how long to make a basic, acceptable wiki service available?

Still, you can have a lot of separate wiki sites and a lot of flexibility in how you share and collaborate on them, so maybe I need to build up some content first and then try to share it around. I would like to know whether you can hook Analytics up to Sites content; that would be useful for the sort of content that would otherwise go on something like a WordPress page.
