February 2024 month notes

Postgres

Cool thing of the month is pgmem, a NodeJS in-memory database with a Postgres-compatible API. It makes it easy to create very complete integration or unit tests covering both statement behaviour and object definitions. So far everything that has worked with pgmem has also worked flawlessly against both Dockerised Postgres instances and CloudSQL Postgres.

The library readme says that containers for testing are overkill and it has delivered on that claim for me. Highly recommended.
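A rough sketch of the kind of test this enables (written from memory of the library docs, so treat the exact API shape as approximate; the schema and assertions are made up for illustration):

```js
import test from 'node:test';
import assert from 'node:assert/strict';
import { newDb } from 'pg-mem';

test('finds active users', () => {
  // An in-memory Postgres-compatible database, no container required.
  const db = newDb();
  db.public.none(`CREATE TABLE users (id serial PRIMARY KEY, name text, active boolean)`);
  db.public.none(`INSERT INTO users (name, active) VALUES ('Ada', true), ('Bob', false)`);

  const rows = db.public.many(`SELECT name FROM users WHERE active`);
  assert.deepEqual(rows, [{ name: 'Ada' }]);
});
```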

Less good has been my adventures in CloudSQL’s IAM world. A set of overlapping work requirements means that the conventional practice of using roles and superuser permissions is effectively impossible, so I’ve been diving deeper than I ever expected to go into Postgres’s permission model.

My least favourite discovery this month has been that it is possible to grant a set of permissions to a set of users without any errors (admittedly via a Terraform module; I need to check whether Postgres complains if you run the grants directly) and yet have those permissions denied by the permission system when they are used.

The heart of the problem seems to be that the owner of the database objects defines the superset of permissions that other users can actually exercise, but you can happily grant other users permissions outside that superset without any error; the failure only appears when someone tries to use the permission.

The error was reported against a table providing a foreign key constraint, so more than a few hours were spent wondering why the user could read that table but still got permission denied on it. The answer seems to be that the insert into the child table is what causes the violation, but it is the validation of the constraint against the referenced table that trips the permission system, so that is the table named in the error.

I’m not sure any of this knowledge will ever be useful again because this setup is so atypical. I might try and write a DevTo article to provide something for a future me to Google but I’m not quite sure how to phrase it to match the query.

Eager initialisation

I learnt something very strange about FakerJS, the Javascript test data generation library, this month, but it is just a specific example of libraries that don’t make an effort to lazy load their functionality. I’ve come across this issue before: in Python, where it affected start times in on-demand code, and in Java, where the assumption that initialisation is a one-time cost never held because multiple deployments a day meant the price was never amortised. Now I’ve encountered it in Javascript.

My main takeaway is that it is important to [set aggressive timeouts](https://nodejs.org/api/cli.html#--test-timeout) on your test suite rather than accept the default of no timeouts. The issue only surfaced because some fairly trivial tests using Faker data couldn’t run in under a second, which seemed very odd behaviour.

Setting timeouts also helps surface broken asynchronous tests and means you spend less time waiting for a hanging test suite to fail.
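For reference, a minimal sketch of both ways of doing this with the built-in runner: the linked CLI flag for the whole suite, or a per-test option (the test body here is just a placeholder):

```js
// Whole suite: node --test --test-timeout=1000
import test from 'node:test';
import assert from 'node:assert/strict';

// Per test: pass a timeout in the options object.
test('faker-backed test should still be quick', { timeout: 1000 }, () => {
  assert.ok(true); // placeholder; the real tests generate Faker data
});
```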


January 2024 month notes

Water CSS

I started giving this minimal element template a go after years of using various versions of Bootstrap. It is substantially lighter in terms of the components it offers, with the navigation bar probably being the one component that I definitely miss. The basic forms and typography are proving fine for prototyping basic applications though.

Node test runner

Node now has a built-in test runner and testing framework. I’ve been eager to give it a go as I’ve heard that it is both fast and lightweight, avoiding the need to select and include libraries for testing, mocking and assertions. I got the chance to introduce it in a project that didn’t have any tests and I thought it was pretty good, although its default text output felt a little unusual and the alternative dot reporter might be a bit more familiar.

It’s interesting to see that the basic unit of testing is the assertion, something it shares with Go. It also doesn’t support parameterised tests, which again is like Go: the equivalent pattern is table-driven tests implemented with for loops, except that Go allows more control over the dynamic naming of test cases.
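A sketch of what the table-driven pattern looks like with the built-in runner, with a plain for loop standing in for parameterised tests (the cases are made up):

```js
import test from 'node:test';
import assert from 'node:assert/strict';

// Each case becomes its own named test, Go-style.
const cases = [
  { input: 'halfstack', expected: 'HALFSTACK' },
  { input: 'pgmem', expected: 'PGMEM' },
];

for (const { input, expected } of cases) {
  test(`upper-cases ${input}`, () => {
    assert.equal(input.toUpperCase(), expected);
  });
}
```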

I’d previously moved to the Ava library and I’m not sure there is a good reason not to use the built-in alternative.

Flask blueprints

In my personal projects I’ve tended to use quite a few cut-and-paste modules, and over the years they drift and get out of sync, so I’ve been making a conscious effort to learn about and start adopting Flask Blueprints. Ultimately I want to turn these into personal module dependencies that I can update once and use in all the projects. For the moment, though, it is interesting how the blueprint format is pushing me to do some things better, such as logging (to understand what is happening inside the blueprint), and to structure the different areas of the application so that they end up quite close to Django apps. Various pieces of functionality are now starting to be associated with a URL prefix, which makes it a bit easier to create middleware that is registered as part of the blueprint rather than relying on imports and decorators.

Web components

I’ve been making a bit of progress with learning about web components. I realised that I was trying to do too much initially, which is why they were proving complicated. Breaking things down a bit has helped, with an initial focus on event listeners within the component. I’m also not bringing in external libraries at the moment but have got as far as breaking things up into [ESM modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), which has mostly worked out so far.
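To give a sense of the scale I’m working at, this is roughly the shape of component I mean, with the event listener wired up inside the class (the element name and behaviour are made up):

```js
// A custom element that handles its own click events internally.
class ClickCounter extends HTMLElement {
  connectedCallback() {
    this.count = 0;
    this.innerHTML = '<button>Clicked 0 times</button>';
    this.querySelector('button').addEventListener('click', () => {
      this.count += 1;
      this.querySelector('button').textContent = `Clicked ${this.count} times`;
    });
  }
}

customElements.define('click-counter', ClickCounter);
```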


December 2023 month notes

Web Components

I really want to try and understand these better as I think they offer a standards-based, no-build solution for components, combined with a better way of dropping lightweight vanilla JS interactivity into a page where I might previously have used AlpineJS.

I’m still at the basic learning stage but I’ve been hopping around the Lean Web Club tutorials to get a sense of the basics. One of the things that is already interesting is that Web Components wrap their child HTML in quite a clear and scoped way, so you can use them quite easily to mix server-rendered content with runtime dynamic content. I haven’t found an elegant way to do that with other frameworks.
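A small sketch of what I mean, assuming the server has already rendered the child HTML and the component just layers behaviour over whatever it finds inside itself (the names are illustrative):

```js
// Server-rendered markup the component wraps:
// <collapsible-section>
//   <h2>December 2023</h2>
//   <div class="body">…month notes…</div>
// </collapsible-section>
class CollapsibleSection extends HTMLElement {
  connectedCallback() {
    // The light-DOM children arrive from the server; we only add behaviour.
    const heading = this.querySelector('h2');
    const body = this.querySelector('.body');
    heading?.addEventListener('click', () => {
      if (body) body.hidden = !body.hidden;
    });
  }
}

customElements.define('collapsible-section', CollapsibleSection);
```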

Scoping and Shaping

I attended an online course by John Cutler which was a pretty good introduction to the idea of enabling constraints. Most times I attend courses and classes to learn something, but every now and then it feels good to calibrate on what seems obvious and easy and to understand other people’s struggles with what seems like basic stuff.

A few takeaways: being a good stakeholder is an underrated skill, and being clear about the boundaries of what you’re willing to accept is important to allow teams working on problems to be successful. If someone says they can’t work with your constraints then it’s not a good fit; if no-one can work with your constraints then you either need to do the work yourself or give up on it.

The most insightful part of the session for me came around the psychology of leaders in the new economy, where profits are more important than growth and experimentation. John’s theory is that this pressure makes it harder for executive teams to sign off on decisions or to give teams a lot of leeway in approaching the problem. To provide meaningful feedback to executing teams, senior stakeholders feel they need more information and understanding about the decisions they are making, and the more hierarchical an organisation, the more information needs to go up the chain before decisions can come back down.

Before zero interest rates there used to be a principle that it wasn’t worth discussing something that wouldn’t make back the cost of discussing it. Maybe rather than doing more with less we should be trying to get back to simply not doing things unless they offer a strong and obvious return.

How I learned to love JS classes

I have never really liked or seen the point of Javascript’s class functionality. Javascript is still a prototype-based language, so the class syntax is basically complex syntactic sugar. React’s class-based implementation was complicated in terms of how the class lifecycle and scope interacted with the component equivalents, so I was glad to see it replaced by stateless components. However, classes are pretty much the only way you can work with Web Components, so I’ve been writing a lot more of them recently than before.

I’ve also been dropping them into work projects, although this raises some interesting questions when you’re using Typescript, as the difference between a class and an interface is quite blurry there. Presumably classes should either have static members or encapsulate behaviour to make inheritance meaningful; otherwise it’s simply an interface that the implementing class needs to provide.


Halfstack on the Shore(ditch) 2023

Self-described as an “anti-conference”, or the conference you get when you take away all the annoying things about conferences, it is probably one of the most enjoyable conferences I attend on a regular basis. This year it was in a new venue quite close to the previous base at Cafe 1001, which was probably one of my favourite locations for a conference.

The new venue is a small music venue and the iron pillars that fill the room were awkward for sightlines until I grabbed a seat at the front. The bar opened at midday and was entirely reasonable, but the food was not as easily available as previously, although you could still walk to the nearby cafe and show your conference badge if you wanted.

Practical learnings

Normally I would say that HalfStack is about the crazy emergent stuff, so I was surprised to actually learn a few things that are relevant to the day job (admittedly I have been doing a lot more backend Javascript than I was previously). I was quite intrigued to see some real-world stats showing that Node’s in-built test runner is massively faster than Jest (which maybe should not be so surprising as Jest does some crazy things). I’ve been using Bun recently, which does have a faster runner, and it makes TDD a lot more fun than with the normal Jest test runner.

I also learnt that NODE_ENV is used by library code to conditionally switch code paths. This is obviously not a sound practice, but the practical advice was to drop variables that map to environments completely and instead set parameters individually, as per standard 12-factor app practice. I think you can refine that with things like dotenv but I’m basically in agreement. Two days later I saw a bunch of environment-based conditional code in my own workplace source code.
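A sketch of the suggested shape: individually-set parameters read once into a config object rather than a NODE_ENV switch (the variable names are illustrative):

```js
// Each behaviour gets its own variable, 12-factor style...
const config = {
  logLevel: process.env.LOG_LEVEL ?? 'info',
  enableMetrics: process.env.ENABLE_METRICS === 'true',
};

// ...rather than inferring a bundle of behaviours from the environment name:
// if (process.env.NODE_ENV === 'production') { /* switch lots of things on */ }

export default config;
```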

It was also interesting to see how people are tackling their dependency testing. The message seemed to be that your web framework should come with mocks or stubs for testing routing and requests as standard, and that if it doesn’t then maybe you should change your framework. That feels a bit bold, but only because Javascript is notorious for anaemic frameworks that offer choice but instead deliver complexity and non-trivial decisions. On reflection, a built-in unit testing strategy for your web framework does seem like a must-have feature.

Crazy stuff

There was definitely less crazy stuff than in previous years. A working point of sale system including till management based on browser APIs was all quite practical and quite a good example of why you might want USB and serial port access within the browser.

There was also a good talk about converting ActionScript/Flash to Javascript and running emulations of old web games, although that ultimately turned out to be a way of making a living, as commercial games companies wanted to convert their historic libraries into something that people could continue to use rather than leaving them locked away in an obsolete technology.

The impact of AI

One of the speakers talked about using ChatGPT for designing pitches (the generated art included some interesting interpretations of how cat claws work and how many claws they have) and I realised, listening to it, that for some younger people the distilled advice and recommendations the model has been fed are exactly the kind of mentoring they have wanted. From a negative perspective this means an endless supply of uncritical ideas and suggestions that require little effort on the user’s part; just another way to avoid having to do some of the hard work of deliberate practice. On the positive side, it is a wealth of knowledge that is now available to the young in minutes.

While I might find the LLMs trite, for people starting their careers the advice offered is probably more sound than their own instincts. There also seems to be some evidence appearing that LLMs can put a floor under poor performance by correctly picking up common mistakes and errors. At a basic level they are much better at spelling and grammar than non-native speakers, for example. I don’t think they have been around long enough for us to have reliable information though, and we need to decide what basic performance of these tasks looks like.

I wonder what the impact will be on future conference talks as ChatGPT refines people to a common set of ideas, aesthetics and structures. Probably it will feel very samey and there will be a desire to have more quirky individual ideas. It feels like a classic pendulum swing.

Big tech, big failings

Christian Heilmann’s talk was excoriating about the failures of big tech during the acute phase of the COVID pandemic and, more generally, about its inability to tackle the big problems facing humanity, preferring instead to focus on fighting for the attention economy and hockey-stick growth that isn’t sustained. He also talked about trying to persuade people that they don’t have to work at FAANGs to be valid people in technology.

His notes for this talk are on his blog.

Final thoughts

ChatGPT might need me to title this section as a conclusion to avoid it recommending that I add a conclusion. HalfStack this year happened at a strange time for programming and the industry. There wasn’t much discussion of some topics around the NodeJS ecosystem that would have been interesting, such as alternative runtimes and the role of companies, consultancy and investment money in the evolution of that ecosystem. The impact of a changed economic environment was clear and in some cases searing, but it was a helpful reminder that it is possible to find your niche and make a living from it. You don’t necessarily need to hustle and try to make it big unless that is what you really want to do.

The relaxed anti-conference vibe felt like a welcome break from the churn, chaos and hamster wheel turning that 2023 has felt like. I’ve already picked up my tickets for next year.



November 2023 month notes

The end of November marks the start of the Christmas corporate social hospitality season. It is easy to be cynical but it is nice to catch up with people and find out what has been happening with them.

Bun

We started using Bun at work for a project, more as a CLI build tool than a framework and runtime. It seems reasonably effective and has quite a few of the features that were interesting in Deno. Deno has a bit more ambition and thought in its overall project, whereas Bun seems much more focused on trying to get itself embedded in projects. It reminds me quite a lot of Yarn and I think we may want to move to something more open in the future.

In the meantime though I have to admit that having a fast test runner is a joy compared to Jest. I attended Halfstack London this month and one of the talks there gave an illustration of how very slow Jest is and made the recommendation to use Node’s native runner which is an interesting alternative that I might try for my own projects.

AssemblyScript

I’ve been doing the Exercism 12 in 23 challenge (the standard “work with twelve languages in a year” but using Exercism’s problems as a proof of progress). It has thrown up a few interesting things already. I was surprised at how much I liked working with Raku (Perl was one of the first languages I learnt) and I should probably write something up about it. This month was assembly, however, and unlike most of the other languages it was an area I’ve never really ventured into. My first language was BASIC and I might have POKE’d and PEEK’d but I’ve never written any assembler.

I chose to tackle WebAssembly, which seemed like it might have some work advantages if I knew more about it. WebAssembly comes with a textual representation called WAT that is made up of s-expressions, which looks quite elegant (especially if you are a LISP fan). However, trying to write raw assembler felt too challenging, so I chose to try AssemblyScript instead, a Typescript-style language which compiles to WASM and WAT. It also allows you to write tests in Javascript which import from the compiled output, which is quite neat (I much prefer writing tests in dynamic rather than static languages).

It made doing the number-based exercises relatively straightforward. For a few of the problems I did some hand tweaking of things like parameter calling, and while AssemblyScript has native Math support for things like square roots, I ended up manually creating a sequence to calculate the hypotenuse of a triangle to avoid library calls, which seemed tricky to match between the two execution environments.
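For flavour, here is a Javascript sketch of the kind of approach I mean, using a few Newton-Raphson iterations rather than a square root library call (this is an illustration, not the actual AssemblyScript exercise code):

```js
// Approximate the square root iteratively so no Math.sqrt call is needed.
function approxSqrt(value, iterations = 20) {
  if (value === 0) return 0;
  let guess = value;
  for (let i = 0; i < iterations; i++) {
    guess = (guess + value / guess) / 2;
  }
  return guess;
}

function hypotenuse(a, b) {
  return approxSqrt(a * a + b * b);
}

console.log(hypotenuse(3, 4)); // ≈ 5
```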

While doing this I did start to develop a sense of how assembly and the stack work, but I feel I could probably do with a bit more of a structured introduction than trying to solve quite high-level problems with low-level tools. Overall I found it a good stretching exercise.

MDN’s documentation for Web Assembly is excellent and I probably learnt most about the way assembler works by messing around with their executable examples. Not only is this a great documentation format but I don’t think I would have completed the exercises without the explanations in the documentation.

Dependabot bundling

The thing that changed my work life this month was grouped dependency updates. Javascript projects tend to have a lot of dependencies, and changes in build-step dependencies are often pretty meaningless (type files or compilation edge cases) yet take the same effort to apply as security updates.

You can group dependency updates by expressions, but more usefully you can group development dependencies (where supported by the dependency configuration) into a single update. Generally, if you have a test suite and the build passes, you can apply these all together and have the effort of a single release for multiple changes.

There’s sometimes an argument that grouping too many changes together means that one breaking change blocks all the changes. So far I haven’t seen that in practice because the volume of small changes in Javascript is high but the change impact is very low.

The grouped PR is also sensibly automatically managed, with the group being added to as needed. Security updates are always broken out into their own PR so it is much easier to see priorities when looking at the PR list.


Halfstack on the Shore(ditch) 2022

This is the first time the conference has been back at Cafe 1001 since the start of the Pandemic and my first HalfStack since 2021’s on the Shore event.

In some ways Halfstack can seem like a bit of an outlandish conference but generally things that are highly experimental or flaky here turn up in refined mainstream forms three to five years later. Part of the point of the event is to question what is possible with the technologies we have and what might be possible with changes that are due in the future. Novelty, niche or pushing the envelope talks are about expanding the conversation about what is possible.

The first standout talk this year was by Stephanie Shaw about Design Systems. It made the absurdist argument that visual memes meet all the criteria to be a design system, before looking at the properties of a good design system that would disqualify memes. The first major point that resonated with me was that design systems are hot and lots of people say they have them, when what they actually have are design principles, a component library or an illustration of UI variant behaviour.

I was also impressed that the talk had a slide dedicated to when a design system would be inappropriate. Context always matters in terms of implementing ideas in organisations and it is important to understand what the organisation needs and capabilities that are required to get value from an idea. Good design systems provide a strong foundation for rapid, consistent development and should demonstrate a clear return on the investment in them.

One of the talks that has stayed with me the longest was about things that can be done now. I’ve seen Chris Heilmann talk about dev tools at previous conferences, but this time the frame of the talk was different: using the dev tools in the browser to make the web sane again. He reminded me that you can use the dev tools to edit the page. Annoying pop-up? Delete it! Right-click hijacked? Go into the handler bindings and unbind the custom listener. Auto-playing video? Change its attributes or, again, just delete the whole thing. He also explained some new things that I wasn’t aware of, such as the ability to take a screenshot of a specific node from within the DOM inspector. I’ve actually used that a few times since in my work.

There was an impromptu talk grounded in a context that was a little hard to follow (maintaining peer-to-peer memes in a centralised internet apocalypse, I think), but it was about encoding images into QR codes, and it included an explanation of how QR codes actually work and encode information (something I didn’t know). The speaker took the image data, transformed it into a series of QR codes, then had a website that displayed the QR codes in sequence and a web app that used a phone camera to scan the codes and reassemble the image locally. The scanning app was also able to understand where in the sequence each QR code was, which created a kind of scanning-line effect as it built up the image, which was very cool to watch.

There were three talks that all involved a significant amount of simultaneous interaction and each using slightly different methods but clearly the theme was having many people together on a webpage interacting in near real time.

The first thing to say is that I took a decent but relatively low-powered Pinebook laptop to the conference, as I thought I would just need something simple to take notes on and look things up on the internet, maybe code along with some Javascript. All of the interactive demos barely worked on it, and the time to become active was significantly longer than for, say, the attendees with the latest Macs. I think the issue was a combination of really substantial downloads (which appeared not to be cached, so refreshing the browser was fatal) and just massive CPU requirements in the local synchronisation code.

The first was by a pro developer relations person, Jo Franchetti, who works for Ably and who used the Ably API. Predictably this was the best working (and looking) demo, with a fun Halloween theme around the idea of a ouija board or, more technically, trying to spell out messages by averaging all the subscribers’ mouse movements to create a single movement across the screen. However, even using a commercial API, with probably no more than 25 connections and a single-screen UI, my laptop still ground to a halt and had significant lag on the animations. It did look great projected on the big screen though.

Jo’s talk introduced me to an API I hadn’t heard of before: scrollTo (part of a family of scrolling APIs). This is an example of how talks about things on the edge of the possible often come back to things that are more practical day to day.
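For the record, this is the whole of the API in its simplest form (it also exists on individual scrollable elements):

```js
// Smoothly scroll the page back to the top.
window.scrollTo({ top: 0, left: 0, behavior: 'smooth' });
```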

James Allardice and Ross Greenhalf had the least successful take on the multi-user theme and, in terms of presentation style, seemed to be continuing an offstage squabble in front of everyone. I got the impression that they were very down on what they had been able to achieve and were perhaps hoping for a showcase example to promote their business.

Primarily they didn’t get this because they were bizarrely committed to AWS Lambda as the deployment platform. Their idea was to do a multiplayer version of Pong and it kind of worked, except the performance was terrible (for everyone this time, not just me). This in turn actually created a more fun experience than what they had intended to build: the lag meant you needed to be quite judicious about when you sent your command (up or down) to the server, as there was a tendency to overshoot, with too many people sending commands as the ball approached and then more as they waited for the first one to take effect. You needed to slow down your reaction cycle and try to anticipate what other people would be doing.

The game also only lasted for the duration of a single Lambda execution timeout, as the whole thing ran in the execution memory of a single Lambda instance. This was a consequence of the flawed design, but again it wasn’t hard to imagine how Lambda could be quite effective here as long as you’re not using web sockets for the push channel. It feels like this kind of thing would probably be pretty trivial in something like Elixir in a managed container but was a bit of an uphill battle in a Javascript monolith Function as a Service.

The most creative multi-user demo was by Mynah Marie (aka Earth to Abigail, who has been a performer at previous Halfstacks), who used Estuary to create a 15-person online jam session. It was surprisingly harmonious for a large group with little ability to monitor your own sound (I immediately had more empathy for any musician who has asked the desk for less drums in their monitor). However, synchronisation was again a big problem: not only did other people paste over my loops, but after leaving the session one of my loops remained stubbornly playing until killed by the admin, despite me not being able to access the session again; I was given a new user identity and no-one seemed able to reconnect with the orphan session.

Probably the most mindblowing technical talk was by Ulysses Popple about his tool Nodessey, which is both a graph editor or notebook and a way to feed values into nodes that can then visualise the input they are receiving from their parent nodes. It reminded me a bit of PureData. I found the talk, which was a mixture of notes and live-coded examples, a bit hard to follow: it’s an unusual design, and trying to understand how the data structure worked while also following the implementation was tricky for me.

One thing I found personally interesting is that Nodessey is built on top of a minimal framework called Hyperapp which I love but have never seen anyone else use. I now see that I have very much underestimated the power of the framework and I want to start trying to use it more again.

Michele Riva did a talk about the use of English in programming languages, which included a helpful introduction to programming languages that have been created in non-English languages. As an English speaker you never really need to leave the US-led universe of English-based languages, so it was interesting to see how other language communities have approached making programming accessible to non-English speakers. There was a light touch on non-alphabetic languages and symbolic languages like J (and of course brainfuck).

Perhaps the most practical talk of the conference was by Ante Barić, around browser extensions. I’ve found these really valuable for creating internal organisation tooling in a very lightweight way but, as Chris Heilmann reminded us in his talk, too many extensions end up hammering browser performance as they all attempt to intercept the network requests and render cycle. The talk used a version of Clippy to create annoying commentary on the websites you were visiting, but it had some useful insight into what is happening with browser extensions and future plans from both the Google and Mozilla teams, as well as practical ways to build and use them.

Ante mentioned a tool I was previously unaware of called web-ext, a Mozilla project that provides a simplified framework for putting extensions together and that might be able to build Chrome extensions in the future.

General notes

Food and drink was available whenever you wanted it, just by showing the staff your conference lanyard. Personally I think it is great when conferences are able to be so flexible about letting people eat when they want to, avoiding the massive queues for food that typically happen when you try and cram an entire conference into a buffet in 90 minutes. I think it also helps include people who have particular eating patterns that might not easily fit into scheduled tea and lunch breaks. It also makes it feel less like school.

In terms of COVID risk, the conference was mostly unmasked and since part of the appeal is the food and drink I felt like I wasn’t going to be changing my risk very much by wearing a mask during the talk sections. The ventilation seemed good (the room could be a bit cold if you were sitting in the wrong place) and there was plenty of room so I never had to sit right next to someone. This is probably going to remain a conference that focuses on in-person socialising and therefore isn’t going to appeal to everyone. Having a mask mandate in the current environment would take courage. The open air “beach” version of the conference on the banks of the Thames would probably be more suitable for someone looking to avoid indoor spaces.

Going back?

Halfstack is a lot of fun and I’ve already booked my super early-bird ticket for this year. I think it offers a different balance of material compared to most web and Javascript conferences. This year I learnt practical things I could bring to my day job and was impressed by what other people have been able to achieve in theirs.


Slow SPAs are worse than NoSPA

I got a digital subscription to the Economist for my birthday last month so I’ve started reading a lot more content on their site. As a result I’ve noticed a lot of weirdness with their page loads that was hardly noticeable when I was using the free tier of a few articles per week.

The site seems to be built as a SPA with a page shell that loads quite quickly but takes far longer to fill with content and which has some odd layout choices and occasional pops and content shifts.

The basic navigation between the current issue index and the articles is hampered by what appears to be a slow load or render phase. Essentially it is hard to know whether the click on a link or the back button has registered.

By replacing traditional page navigation the site actually delivers a worse experience. The site would be better if the effort going into the frontend went into faster page serving.

I’m not sure if the page is meant to be doing something clever with local storage for offline use but it seems to need to be connected when browsing so I’m assuming that this is something to do with the need for a subscription and payment gateway that prevents a fast server load of content.

It still feels as if the page and the 200 words or so should be public and CDN-cached with the remaining content of the article being loaded after page-load for subscribers.

The current solution feels like someone has put a lot of effort and thought into making something that is actually worse than a conventional webpage, which seems a shame for a site with relatively little content that is mostly updated once a week.


Prettier in anger

I’ve generally found linting to be a pretty horrible experience, and Javascript/ES hasn’t been any exception to the rule. One thing I do agree with the Prettier project about is that historically linters have tried to perform two tasks with mixed success: formatting code to conventions and performing static analysis.

Really only the latter is useful and the former is mostly wasted cycles except for dealing with language beginners and eccentrics.

Recently at work we adopted Prettier to avoid having to deal with things like line-lengths and space-based tab sizes. Running Prettier over the codebase left us with terrible-looking cramped two-space tabbed code but at least it was consistent.

However having started to live with Prettier I’ve been getting less satisfied with the way it works and Prettier ignore statements have been creeping into my code.

The biggest problem I have is that Prettier has managed its own specific type of scope creep out of the formatting space. It rewrites way too much code based on line-length limits and weird things like precedence rules in boolean statements. For example, if you have a list with only one entry and you want to place that single entry on a separate line to make it clear where you intend developers to extend the list, Prettier will put the whole thing on a single line if it fits.

If you bracket a logical expression to help humans parse the meaning of the statement but the precedence rules mean that the brackets are superfluous, then Prettier removes them.
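This is where the ignore comments mentioned above come in; a small sketch of the single-entry list case:

```js
// Without the directive Prettier collapses this onto one line because it fits.
// prettier-ignore
const SUPPORTED_LOCALES = [
  'en-GB',
];
```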

High-level code is primarily written for humans, I understand that the code is then transformed to make it run efficiently and all kinds of layers of indirection are stripped out at that point. Prettier isn’t a compiler though, it’s a formatter with ideas beyond its station.

Prettier has also benefited from the Facebook/React hype cycle so we, like others I suspect, are using it before it’s really ready. It hides behind the brand of being “opinionated” to avoid giving control over some of its behaviour to the user.

This makes using Prettier a kind of take-it-or-leave-it proposition. I’m personally in the leave-it camp, but I don’t feel strongly enough to make an argument to remove it from the work codebase. For now, telling Prettier to ignore code, while an inaccurate expression of what I want it to do, is fine while another generation of Javascript tooling is produced.


Google Cloud Functions

I managed to get onto the Google Cloud Functions (GCF) alpha so I’ve had a chance to experiment with it for a while. The functionality is now in beta and seems to be available to everyone.

GCF is a cloud functions, or functions-as-a-service, competitor to AWS Lambda. However, thanks to launching after Lambda, it has the advantage of being able to refine the offering rather than simply clone it.

The major difference between GCF and Lambda is that GCF allows functions to be bound to HTTP triggers trivially and exposes HTTPS endpoints almost without configuration. There’s no messing around with API Gateway here.

The best way I can describe the product is that it brings together the developer experience of App Engine with the on-demand model of Lambda.

Implementing a Cloud Function

The basic HTTP-triggered cloud function is based on Express request handling. Essentially the function is just a single handler. Therefore creating a new endpoint is trivial.

Dependencies are automagically handled by use of a package.json file in the root of the function code.

I haven’t really bothered with local testing, partly because I’ve been hobby-programming but also because each function is so dedicated that the functionality should be trivial.

For JSON endpoints you write a module that takes input and generates a JSON-compatible object, and you test that module. You then marshal the arguments in the Express handler and use the standard JSON response to send the result of the module call back to the user.
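As a rough sketch of that split (the function and file names are mine, but the Express-style (req, res) signature is what GCF uses for Node HTTP functions):

```js
// greeting.js: pure logic, easy to unit test without any cloud machinery.
exports.buildGreeting = (name) => ({ message: `Hello, ${name || 'world'}` });

// index.js: the thin handler that GCF invokes for the HTTP trigger.
const { buildGreeting } = require('./greeting');

exports.greet = (req, res) => {
  // Marshal the arguments from the request, call the module, return JSON.
  const name = req.query.name || (req.body && req.body.name);
  res.json(buildGreeting(name));
};
```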


Svelte – a first look

Rich Harris is a Javascript wizard who has already created the build tool Rollup and the framework Ractive. So when he announced a new framework called Svelte I definitely wanted to take a look and see what problems he is trying to tackle with it.

Having spent a trivial amount of time with the examples, I have some understanding of what’s going on and how Svelte compares to other frameworks and approaches to building dynamic web pages.

One of the big things is that Svelte is based around a compiler that creates the deployed package, which is just a variation on a Javascript file. So far I’ve found the compiler to be straightforward and the errors easy to understand. The compilation phase puts Svelte closer to the Elm camp of pushing problems earlier in the development phase.

Svelte also offers a take on the Web Component: a Svelte component is responsible for managing its own dependencies and CSS. The definition of a Svelte component feels a little different to most component systems though. The basics of a templated piece of HTML are pretty standard, but the component lives inside an HTML file that also uses the script and style tags to define the behaviour and appearance of the component respectively.

Using standard tags for this is, perhaps unsurprisingly, much more intuitive than defining React or Riot components.
