Month notes, Ruby, Work

September 2025 month notes

Loop

I’ve been doing more technical leadership work recently and apart from spreadsheets that means more documents. Loop is Microsoft’s version of Notion, focusing on wiki-like pages with block content and the ever-present slash command to insert specialised blocks.

As a note-taking experience it is much more pleasant than OneNote, which seems to have gone to a very weird place since I last used it, with an odd block UI rather than the Evernote style of note-taking. Loop is pretty much: type inline and use slash to embed or nest another page.

Sharing is a bit more complex than in Notion-style products as you seem to be able to share either the entire Loop workspace or nothing. It is hard to understand the organisational visibility of content.

Mermaid diagrams and to-do lists embed as you’d expect, and action lists integrate with the Microsoft 365 to-do and notification systems. You can also embed Loop components into Teams and other applications, and it mostly seems not just to work but to be dynamically bi-directional, so you can edit a component in the embed present in a chat rather than having to move to an edit mode.

Compared to most of the Office 365 suite it feels bracingly online and dynamic.

I’ve never been much of a macro or scripting person in these products so I don’t know if you can do some of the page and list magic that you can with Notion but all the core content features seem present and correct.

This seems like a great addition to the O365 suite and replaces the need for a bunch of ad hoc hackery like the endless Word doc.

Ruby drama

Ruby (and Rails) has a problem in that it has never developed proper community governance. I realise now what a major step it was for Guido van Rossum to step away from the Benevolent Dictator For Life (BDFL) role and force the community to step up.

This month Ruby Central took over the responsibility for managing key elements of the Ruby ecosystem (RubyGems, Bundler) and alienated most of the external open source community contributors. This just doesn’t happen in well run communities.

At the heart of the problem is really the Rails BDFL, DHH. Now the core selling point of Rails as a framework is that it is opinionated and maintained by contributors whose livings depend on it. By contrast, frameworks that have more community governance, like Django, are seen as slow moving and unresponsive. However, at moments like these, when a major leader in a project is going off the rails (pun intended), it is often a virtue to move slowly and make considered moves.

Rails is still one of the smartest web frameworks for quick, responsible development and its contrarian technology choices have been an excellent counterbalance to the groupthink that has prevailed in, say, the Node community, which has ultimately burned itself out on the endless upgrade trail of re-architecture after failed re-architecture.

The cost has been an over-dependence on a few key individuals with too much power and a few companies using their money and influence to run things to their benefit.

It will be interesting to see how this develops coming on the heels of the WordPress debacle (and I’m conscious of the irony of me still posting here after all that has happened in that space). I suspect that the answer may be just a lot of quiet quitting to other projects and communities.

The more promising signs are an open letter asking for a Rails fork and better governance and the emergence of the Gem Cooperative. Maybe this will be the trigger to get to something much better than it replaces.

Reading list

Fun tools

Standard
Month notes

June 2025 month notes

Foreign keys and link tables in SQL Alchemy

Foreign keys are surprisingly difficult to define: whereas a basic foreign key is normally unidirectional, with a parent and a child relationship, SQLAlchemy often needs you to define the attribute on both models, creating a circular dependency that it then resolves by using strings to define the object relationships, as Python doesn’t fully support forward references yet.
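
A minimal sketch of what that looks like (the Parent and Child names are illustrative, not from any real codebase): both sides name the related class as a string so the circular reference can be resolved lazily.

# Sketch of a bidirectional relationship in SQLAlchemy 2.0 style;
# the related class is referenced as a string on both sides so the
# circular dependency between the two models can be resolved.
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class Parent(Base):
    __tablename__ = "parent"

    id: Mapped[int] = mapped_column(primary_key=True)
    # "Child" is a string because the class is defined further down.
    children: Mapped[list["Child"]] = relationship(back_populates="parent")


class Child(Base):
    __tablename__ = "child"

    id: Mapped[int] = mapped_column(primary_key=True)
    parent_id: Mapped[int] = mapped_column(ForeignKey("parent.id"))
    parent: Mapped["Parent"] = relationship(back_populates="children")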

Indexed Constraints in Postgres

Primary keys and unique constraints both generate associated indexes automatically, but foreign keys, while they need to reference indexed columns, do not automatically get an associated index and potentially don’t need one until your query planner tells you that it is the bottleneck. I found this last idea a bit counter-intuitive but on reflection I think it must make sense given the lookup times of the parent rows. I guess the index may matter more if the relationship is one-to-many with potentially large numbers of children.
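
As a hedged sketch (the table and column names are made up), this is the kind of explicit index you end up adding once the planner shows the foreign key lookup is the bottleneck:

# Sketch only: Postgres indexes primary keys and unique constraints
# automatically but not foreign key columns, so the index on the
# foreign key has to be requested explicitly with index=True.
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Order(Base):
    __tablename__ = "orders"

    id: Mapped[int] = mapped_column(primary_key=True)  # indexed automatically
    customer_id: Mapped[int] = mapped_column(
        ForeignKey("customers.id"),  # assumes a customers table defined elsewhere
        index=True,  # no automatic index on the foreign key without this
    )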

Thoughts on FastAPI

I finally did a lot of work on a FastAPI codebase recently, my first use of that framework. It is a lot like Flask but its support for WebSockets and async routes means that, depending on what you’re working with, it might be the only practical choice for your application.
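
For anyone who hasn’t seen it, a minimal sketch of those two features (the paths and behaviour are illustrative, not from the codebase I was working on):

# Minimal FastAPI sketch showing an async HTTP route and a WebSocket
# endpoint; illustrative only.
from fastapi import FastAPI, WebSocket

app = FastAPI()


@app.get("/ping")
async def ping() -> dict:
    # An async route: useful when the handler awaits I/O.
    return {"status": "ok"}


@app.websocket("/ws")
async def echo(websocket: WebSocket) -> None:
    # A WebSocket endpoint that echoes messages back to the client.
    await websocket.accept()
    while True:
        message = await websocket.receive_text()
        await websocket.send_text(f"echo: {message}")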

FastAPI is actually an aggregate over other libraries, in particular Starlette, and as you dig through the layers you see that the key foundations are maintained by a few individuals with a very opaque governance structure.

I’ve used more obscure frameworks before but I didn’t really think it was a smart idea then and I don’t think it is great now. In theory the FastAPI layer could switch between implementations without breaking consumers but it all seems a bit more fragile than I had realised, especially when you add on some of the complications of the Python async ecosystem.

It’s made me wonder whether you’re better off sticking to synchronous until you have a situation where that can’t possibly work.

Coding with LLMs

LLM-generated code means that some relatively uncommon idioms in programming languages come up more and more often when looking at colleagues’ code. The effect is quite incongruous when you have relatively inexperienced programmers using quite sophisticated or nuanced constructions.

This month in Python I noticed use of the else clause on loops, which runs when the loop finishes without hitting a break but isn’t widely used because (I think) the else clause can be hard to relate to the enclosing loop and is easily visually confused with conditional clauses inside the loop.
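
A minimal sketch of the idiom (the search function is made up for illustration):

# for/else sketch: the else block runs only if the loop completed
# without break, which makes it handy for "searched but not found".
def report_user(users: list[str], name: str) -> None:
    for user in users:
        if user == name:
            print(f"found {name}")
            break
    else:
        # No break happened, so the name was never found.
        print(f"{name} not found")


report_user(["alice", "bob"], "carol")  # prints "carol not found"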

Use of sum instead of len or count: Python’s sum allows you to pass a generator expression directly to it instead of having to use an intermediate list comprehension or similar. This means you can save a bit of memory; some programmers use this habitually but I’ve only ever really seen it where people are trying to get a bit of extra performance in special cases. Most of the iterables in code contain too few items for it to matter much, and compared to the performance gain of moving to a more recent version of Python I’m not sure the gain is really noticeable.
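
The two styles side by side (the orders data is made up for illustration):

# Counting matches with sum and a generator expression versus len and
# an intermediate list comprehension.
orders = [{"status": "shipped"}, {"status": "pending"}, {"status": "shipped"}]

shipped = sum(1 for order in orders if order["status"] == "shipped")
shipped_with_len = len([o for o in orders if o["status"] == "shipped"])

print(shipped, shipped_with_len)  # 2 2, but the first avoids building a list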

Reading list

Standard
Month notes

May 2025 month notes

Computer vision

A colleague at work has been experimenting with feeding streamed camera and screen sharing video into Gemini 2 Flash and the results are really good. The model can accurately describe things in near real-time and is able to make an overall assessment on what activity was being carried out during the screenshare.

Computer vision is one of the more mature areas of machine learning anyway but the speed and ability to describe a higher-level set of actions rather than just matching individual items or pictures was brand-new to me. The functionality can even work in near realtime using the standard browser camera API.

Lessons learned

Although I kind of knew this, it managed to catch me out again: Alembic, the Python migration manager for the SQLAlchemy ORM, doesn’t really have a dry-run mode, and the history command doesn’t use the env.py file, instead working (I think) by simply reading through the migration files.

People have said that switching to the SQL generation mode is an alternative but I was working in the CI pipeline and probably would have needed to also come up with a minor migration file to have some useful feedback.
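
For reference, the SQL generation (offline) mode can be driven from Python as well as from the CLI; a hedged sketch, assuming the usual alembic.ini layout:

# Sketch of Alembic's offline mode: it prints the SQL the migrations
# would run rather than executing them, which is the closest thing to
# a dry run. The ini path is illustrative.
from alembic import command
from alembic.config import Config

config = Config("alembic.ini")
command.upgrade(config, "head", sql=True)  # emits SQL to stdout, touches no database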

Standard
Month notes

March 2025 month notes

Community

I came across this really interesting project that aims to ensure that JavaScript libraries are keeping dependencies up to date and that features available in the LTS versions of Node are being used. JavaScript is infamous for fragmented and poorly maintained libraries so this is a great initiative.

But even better I think is Jazzband, a Python initiative that allows for collaborative maintenance but also can provide a longer term home for participating projects.

Digital sovereignty

With the US deciding to switch off defence systems there has been an upswing in interest in how and where digital services are provided. One key problem is that all the cloud services are American: every other Western country has let US companies dominate the space and many of the services they offer are now unique to them.

This is probably impossible to solve now but the article linked to above is a useful starting point on how more commodity services could be provided by local providers.

There was also an announcement of an open source alternative to Notion funded by the French (and other) governments. Strategic investment in open source that can be run using the service capacity that European states do have seems a key part of a potential solution and helps share costs and benefits across countries.

Editors

I have been trying out the Fleet editor again; this is JetBrains’ take on a VSCode-style editor that has the power of their IntelliJ IDE but without a lot of the complexity. It’s obviously not as powerful or fully featured as VSCode with all its plugins.

I liked the live Markdown preview but couldn’t get the soft-wrapping that was meant to be enabled by default to work. It was also frustrating that some actions do not seem to be available via the Command Palette and that the Actions aren’t the default tab when hitting Ctrl-Shift-P.

LLMs

I experimented with AWS’s Bedrock this month (via Python scripts); the service has a decent selection of models available with a clear statement of how interactions with them are used. If you’re already an AWS user then casual access to them is effectively free (although how viable that is in the long run is an interesting question), making it a great way to experiment.
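
For the record, the scripts were along the lines of the sketch below (the region and model id are illustrative assumptions and depend on what your account has enabled):

# Hedged sketch of a Bedrock call from Python using boto3's Converse
# API; the region and model id are illustrative, not from my scripts.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="mistral.mistral-large-2402-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Summarise what Amazon Bedrock is."}]}
    ],
)

print(response["output"]["message"]["content"][0]["text"])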

I thought the AWS Nova models might help me write code to interact with Bedrock but they turned out to not really be able to add more than the documentation and a crafty print statement told me.

The Mistral model seemed quite capable though and I’ve used it a couple of times since then for creating Python and JavaScript code and haven’t had any real problems with it, although it predictably does much better on problems that have been solved a lot.

An alternative to Step Functions

Yan Cui wrote a really interesting post about Restate and writing your own checkpointing system for repeatable AWS Lambda invocations. Step Functions are very cool but the lack of portability and platform-specific elements have been off-putting in the past. These approaches seem very interesting.

When it comes to Lambdas Yan’s newsletter is worth subscribing to.

Reading links

  • Some helpful pieces of good practice in Postgres although some things (soft deleting in particular, view use) will be context specific
  • Defra’s Playbook on AI coding offers a pragmatic view on how to get the best out of AI-led coding
  • Revenge of the Junior Developer: a man with a vested interest in AI software development is still angry at people struggling with it or not wanting to use it
  • AI Ambivalence: a more sober assessment of the impact and utility of the current state of AI-assisted coding, and a piece that resonated with me for asking the question about whether this iteration of coding retains the same appeal

There was an interesting exchange of views on LLM coding this month; Simon Willison wrote about his frustration with developers giving up on LLM-based coding when he has been having a lot of success with it. Chris Krycho wrote a response pointing out that developers’ responses weren’t unreasonable and that Simon was treating all his defensive techniques for getting the best out of the approach as implicit knowledge that he was assuming everyone possessed. It is actually a problem that LLMs suggest invalid code and libraries that don’t exist.

Soon after that post Simon wrote another post summarising all his development practices and techniques related to using LLMs for coding that makes useful reading for anyone who is curious but struggling to get the same results that people like Simon have had. This post is absolutely packed with interesting observations and it is very much worth a read as a counterbalance to the drive-by opinions of people using Cursor and not being able to tell whether it is doing a good job or not.

Standard
Programming, Python

London Python Coding Dojo February 2025

The Python coding dojo is back and this time it allows AI-assisted coding, which means that some of the standard katas become trivial; instead the challenges have to be different, either combining different problems in an interesting way or posing a very hard problem that doesn’t have a standard solution.

The team I was in worked on converting image files to ASCII art (with a secondary goal of trying to create an image that would work with the character limit of old-school Twitter).

We used ChatGPT and ran the code in Jupyter notebooks. To be honest ChatGPT one-shotted the answer; clearly this is a thing that has many implementations. Much of the solution was as you would expect, reading the image and converting it to greyscale. The magic code is this line (this is regenerated from Mistral rather than the original version).

ascii_chars = "@%#*+=-:. "

This string is used to map the value of each pixel to a character. It is really the key to a good solution in terms of the representation of the image, and when we tried to refine the solution to add more characters this was the bit of the code that went wrong, as the generated code tends not to understand that the pixel mapping depends on the length of this string. A couple of versions of the code had an indexing issue because they kept the original mapping calculation but changed the size of the string.
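
A hedged reconstruction of the mapping step (my own sketch, not the dojo’s code; the sizing and aspect correction are illustrative choices): the greyscale value has to be scaled by the length of the character ramp, which is exactly what the regenerated versions got wrong.

# Sketch of the pixel-to-character mapping for ASCII art.
from PIL import Image

ascii_chars = "@%#*+=-:. "  # dark to light


def image_to_ascii(path: str, width: int = 60) -> str:
    image = Image.open(path).convert("L")  # greyscale
    height = int(width * image.height / image.width * 0.55)  # characters are taller than wide
    image = image.resize((width, height))

    lines = []
    for y in range(height):
        row = ""
        for x in range(width):
            pixel = image.getpixel((x, y))  # 0 (black) .. 255 (white)
            # Scale by len(ascii_chars) so changing the ramp keeps the indexing valid.
            row += ascii_chars[pixel * (len(ascii_chars) - 1) // 255]
        lines.append(row)
    return "\n".join(lines)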

On the one hand the experience was massively deflating; we were probably done in 15 or 20 minutes. Some of the team hadn’t used code assistance this way before so they got something out of it. Overall though I’m not sure what kind of learning experience we were having and whether the dojo format really helps build learning if you allow AI assistance.

If the problems become harder to allow for the fact that anything trivial is already in the AI databank then the step up into understanding the problem as well as the output is going to be difficult for beginners.

There’s lots to think about here and I’m not sure there are any easy answers.

Standard
Programming

Don’t use Postgres Enums for domain concepts

If you’ve ever read a piece of advice about using Postgres enums you’ve probably read not to use them and to use a peer table with foreign key constraints instead. If you’ve seen a TypeScript codebase in the wild recently, chances are that this advice has absolutely been ignored and enums are all over the place.

I’m not really sure why this should be when even TypeScript discourages the use of enums itself. I think it is a combination of a spurious sensation of type safety combined with a desire to think about a table with a simple column of constrained values that maps to an object with a limited set of typed constants. My main theory is that the issue is that values in a check constraint are difficult to map into a TypeScript ORM.

But to be clear there’s no ORM in any language that magically makes using enums painless. The issues stem from the Postgres implementation which is often just made worse by bad ORM magic trying to hide the problem through complex migrations.

Domain ideas change, enums shouldn’t

First of all I want to be clear that my intent isn’t to complain about Postgres enums. They have completely valid use cases and if you want to describe units of measure or ordinal weekday mappings then there’s nothing particularly wrong with them or their design. Anything that is a fundamental immutable concept is probably a great fit for the current enums model.

My issue is with mapping domain values onto these enums. We all know that business concepts and operations are subject to change; these ideas are far more malleable than the speed of light or the freezing point of water. Therefore they should be easy to change when our understanding of the domain changes.

And this is where the recommendation to use foreign keys instead of enums comes in. Changing a row in a table is a lot easier than trying to migrate an enum. Changing a label, adding and removing rows, all of them become easier and follow existing patterns for managing relational data. You can also expose these relationships as a configuration layer without having to make changes to the database definition library.
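
As a minimal sketch (the names are made up, and it is in SQLAlchemy only because that is what I happen to have been writing about recently), the peer table approach looks like this:

# Sketch of a peer table behind a foreign key instead of an enum:
# adding, renaming or retiring a status is an ordinary row change.
from sqlalchemy import ForeignKey, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class OrderStatus(Base):
    __tablename__ = "order_status"

    code: Mapped[str] = mapped_column(String(32), primary_key=True)
    label: Mapped[str] = mapped_column(String(100))  # editable without a type migration


class Order(Base):
    __tablename__ = "orders"

    id: Mapped[int] = mapped_column(primary_key=True)
    status_code: Mapped[str] = mapped_column(ForeignKey("order_status.code"))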

Changing enums in Postgres

In Postgres you can expand an enum but if you want to delete or rename a value then you actually end up deleting the enum entirely and recreating it.

While you’re deleting it you really have to block all operations on anything that uses that enum; any concurrent mutation is going to be a nightmare to handle. For some organisations that don’t allow this kind of migration this would be a dealbreaker already.

And then you have value consistency: what do you do if you’re removing a value? Typically there is a bit of a dance where you create two enumerations, one representing the current valid values and another representing the future value set. You then edit the values of the column and swap over the types.
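
A hedged sketch of that dance as an Alembic migration (the type, table and value names are made up for illustration, and the revision boilerplate is omitted):

# Illustrative only: removing a value from a Postgres enum by creating
# a replacement type, rewriting rows, swapping the column over and
# dropping the old type.
from alembic import op


def upgrade() -> None:
    op.execute("CREATE TYPE order_status_new AS ENUM ('pending', 'shipped')")
    # Rewrite any rows still using the value being removed.
    op.execute("UPDATE orders SET status = 'pending' WHERE status = 'cancelled'")
    op.execute(
        "ALTER TABLE orders ALTER COLUMN status "
        "TYPE order_status_new USING status::text::order_status_new"
    )
    op.execute("DROP TYPE order_status")
    op.execute("ALTER TYPE order_status_new RENAME TO order_status")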

Overall the entire process feels crazy when you could just be editing parent rows like any other piece of relational data.

What about language enums?

I don’t feel that exercised about enums expressed in source code and used within a well-defined context. I’m not sure you want to try and persist them to the storage layer but if you do then having a well-defined process for doing so like a mapper or repository could do the heavy lifting of seeing whether the code values and the storage values are in sync.

If enums make your code easier to work with then that’s fine, let the interface deal with synchronisation and have a plan on how the code should work if it is out of sync with its external collaborators.

However if you are using Typescript then do look at the advice on using constant objects versus enums.

But please don’t encode a malleable domain concept in an immutable data storage implementation.

Standard
Month notes

February 2025 month notes

Winter viruses knocked me about a bit this month so I wasn’t able to get out to all the tech events I had hoped to get to, and there were a few bed-bound days which were pretty disappointing.

I also have a bit of a backlog on writing up the things that I did attend this month.

Synchronising

While I pay for a few synchronisation services (SpiderOak and pCloud) their Linux integration is a bit cumbersome so I’ve been looking for simpler ways to share files between my local machines. I read a great tutorial about SyncThing. The project’s documentation on getting started with SyncThing was also pretty great.

It took less than an hour to get everything going and now I have two of my laptops sharing content in a single folder so it’s possible to move things like credential files around between them simply and hopefully securely. It doesn’t seem to be taking up any meaningful system resources so far.

I also want to spend more time with LocalSend which looks like an app version of the PWA PairDrop (cute name, based on Snapdrop). All the functionality looks good and it seems to be lightweight. I’m not quite sure why the app makes a difference over the PWA version.

Zettelkasten

This month I had a bunch of headaches with Roam Research failing logins on Linux and an AppImage bug which meant that Obsidian and Logseq had to be run outside the sandbox. Getting things working properly was frustrating and, while Roam is web-based, it has no real mobile web version.

So instead I’d like to stop subscribing to Roam and figure out what I’m using it for. The knowledge connecting is still the most valuable thing compared to pure outliners or personal wikis. Both Logseq and Obsidian are good for this and currently my preference is for Logseq but I think Obsidian is better maintained and has a bigger community.

The other thing I was doing was dropping links into the daily journal for later sorting and processing. I’ve created a little web app to make this easier, currently I’m just building a backlog but it will be interesting to see what I find useful when I do want to dig up a link.

I also started using Flatnotes deployed via PikaPods to have an indie web way of taking a note on the phone but editing and refining it on laptops.

It’s interesting that it has taken so many different services to replace Roam, maybe that’s a sign of value but I think that I was overloading it with different functionality and I’m refining things into different workloads now.

Eleventy

Eleventy is a very cool static website builder that I would like to use as my main website generator in the long run. For now though I am still trying to learn the 11ty way (I currently use Jekyll); this month I was trying to figure out how to use data files and tags, things that power a lot of tricks in my current site.

Eleventy is ridiculously powerful because you can define data files as executable JavaScript files that read URLs or the filesystem and generate data that is then passed on to the page generation context. As an example, you can read the directory where the data file is located, read the contents, filter out the directories and then generate a derived value from the directory name and use that as a data value in the rendered page.

In the past I’ve tended to use templates and front matter in Markdown posts but with Eleventy you can use a mix of shared templates, including inheritance, and a Nunjucks page using these powerful data files, and not really need to use Markdown or front matter so much. You can also share snippets between the Nunjucks pages to get consistency where you need it but have a lot more flexibility about the way the page renders.

It is amazing how flexible the system is but it also means that as there are multiple ways to do things there can be a lot of reading to do to figure out what the best way to do something is for your context. Documentation of the basics is good but examples of approaches are spread out across example repos and people’s blogs.

Power is great but so is one obvious way of doing things.

Interesting links

It’s not a fun subject but my former colleague Matt Andrew’s post about coping with redundancy was a good read with good advice for any kind of job seeking regardless of the cause.

Ofcom is making a dog’s dinner of applying the Online Safety Act (OSA) to small communities and it seems to be down to community members to try and engage them in the problems; this writeup gives examples of the problems and pointers on how the regulator can improve.

Standard
Software

Volunteering at State of Open 2025

I volunteered at the State of Open conference this month; the conference is put on by the open source technology advocacy group OpenUK.

Volunteering allowed me to sit in on a few sessions. AI was obviously a hot topic. There is naturally a lot of unhappiness at what constitutes the idea of an “open” foundation model; it isn’t just the code and the model weights, there’s also an interest in the training corpus and any human refinement process that might be used.

It is reasonable to assume that a lack of transparency in training data is because a lot of it has been illegally obtained. The conference did have a discussion group on the UK government’s consultation on copyright and training material, one that critics have said represents a transfer of wealth from creators to technologists.

Overall though it felt that there was more unhappiness than solutions. The expectation seems to be that companies will be able to train their models on whatever material they want and can obtain.

This unhappiness rang out again in the other topic I heard a lot about, which was maintainer well-being and open source community health. Maintainers feel stretched and undervalued, companies have been withdrawing financial support, and informal, volunteer-run organisations handle conflict poorly within their own pools of collaborators, leading to people leaving projects where they feel criticised and undervalued.

The good news is that people’s belief in the importance and value of openness, transparency and collaboration is still strong. The speakers at the conference were here to share because they want to help others and believe in the power of shared efforts and knowledge.

Becoming a volunteer

Someone asked me how you volunteer for the conference and to be honest it was pretty straightforward: I saw an invitation on LinkedIn, filled out a Google Form and then just turned up to the briefings and did the jobs I was asked to do. If I have the time then I think it is always worth volunteering to help out at these kinds of events, as while you might not be able to see everything you want, it also means you have something meaningful to be doing if the schedule is kind of ropey.

You also get to interact with your fellow volunteers which is much more fun than going to a conference alone.

Links

  • Astronomer: Apache Airflow as a service
  • dbt: a tool for transforming data
  • Tessl: a start-up looking to switch from coding as we know it today to specification-driven development

Talk recommendations

This is purely based on what I was able to see.

Standard
London, Work

Will humans still create software?

Recently I attended the London CTOs Unconference: an event where senior technical leaders discuss various topics of the day.

There were several proposed sessions on AI and its impact on software delivery and I was part of a group discussing the evolution of AI-assisted development, looking at how this would change and ultimately what role people thought there would be for humans in software delivery (we put the impacts of artificial general intelligence to one side to focus on what happens with the technologies we currently have).

The session was conducted under Chatham House equivalent rules so I’m just going to record some of the discussion and key points in this post.

Session notes

Currently we are seeing the automation of existing processes within the delivery lifecycle, but there are opportunities to rethink how we deliver software in ways that make better use of the current generative AI tools and perhaps set us up to take advantage of better options in future.

Rejecting a faster horse

Thinking about the delivery of the whole system rather than just modules of code, configuration or infrastructure allows us to think about setting a bigger task than simply augmenting a human. We can start to take business requirements in natural language, generate product requirement documents from these and then use formal methods to specify the behaviour of the system and verify that the resulting software that generative AI creates meets these requirements. Some members of the group had already been generating such systems and felt it was a more promising approach than automating different roles in the current delivery processes.

Although these methods have existed for a while they are not widely used currently and therefore it seems likely that reskilling will be required of senior technical leaders. Defining the technical outcomes they are looking for through more formal structures that work better with machines requires both knowledge and skill. Debugging is likely to move from the operation of code to the process of generation within the model, leading to an iterative cycle of refinement of both prompts and specifications. In doing this, the recent move to expose more chain-of-thought information to the user is helpful and allows the user to refine their prompt when they can see flaws in the reasoning process of the model.

The fate of code

We discussed whether code would remain the main artefact of software production and we didn’t come to a definite conclusion. The existing state of the codebase can be given as a context to the generation process, potentially refined as an input in the same way as retrieval augmented generation works.

However if the construction of the codebase is fast and cheap then the value of code retention was not clear, particularly if requirements or specifications are changing and therefore an alternative solution might be better than the existing one.

People experimenting with whole solution generation do see major changes between iterations; for example where the model selects different dependencies. For things like UIs this matters in terms of UX but maybe it doesn’t matter so much for non-user facing things. If there is a choice of database mappers for example perhaps we only care that the performance is good and that SQL injection is not possible.

Specifications as artefacts

Specifications and requirements need to be versioned and change controlled exactly as source code is today. We need to ensure that requirements are consistent and coherent, which formal methods should provide, but analysis and resolution of differing viewpoints as to the way the system works will remain an important technical skill.

Some participants felt that conflicting requirements would be inevitable and that it would be unclear how the generating code would respond to this. It is certainly clear that currently models do not seem to be able to identify the conflict and will most probably favour one of the requirements over the others. If the testing suite is independent then behavioural tests may reveal the resulting inconsistencies.

It was seen as important to control your own foundation model rather than using external services. Being able to keep the model consistent across builds and retain a working version of it decouples you from vendor dependencies and should be considered as part of the build and deployment infrastructure. Different models have different strengths (although some research contradicts this anecdotal observation). We didn’t discuss supplementation techniques but we did talk about priming the code generation process with coding standards or guidelines but this did not seem to be a technique currently in use.

For some participants using generative AI was synonymous with choosing a vendor but this is risky as one doesn’t control the lifespan of such API-based interactions or how a given model might be presented by the vendor. Having the skills to manage your own model is important.

In public repositories it has been noted that the volume of code produced has risen but quality has fallen and that there is definitely a trade-off being made between productivity and quality.

This might be different in private codebases where different techniques are used to ensure the quality of the output. People in the session trying these techniques say they are getting better results than what is observed in public reporting. Without any way of verifying this though people will just have to experiment for themselves and see if they can improve on the issues seen in the publicly observed code.

When will this happen in banks?

We talked a little bit about the rate of adoption of these ideas. Conservative organisations are unlikely to move on this until there is plenty of information available publicly. However if an automated whole system creation process works there are significant cost savings associated with it and projects that would previously have been ruled out as too costly become more viable.

What do cheap, quick codebases imply?

We may be able to retire older systems with a lot of embedded knowledge in them far more quickly than if humans had to analyse, extract and re-implement that embedded knowledge. It may even be possible to recreate a mainframe’s functionality on modern software and hardware and make it cheaper than training people in COBOL.

If system generation is cheap enough then we could also ask the process to create implementations with different constraints and compare different approaches to the same problem optimising for cost, efficiency or maintenance. We can write and throw away many times.

What about the humans?

The question of how humans are involved in the software delivery lifecycle, what they are doing and therefore what skills they need was unclear to us. However no-one felt that humans would have no role in software development, only that the role was likely to need a different skill set to the one that made people successful today.

It also seemed unlikely that a human would be “managing” a team of agents if the system of specification and constraints was adopted. Instead humans would be working at a higher level of abstraction with a suite of tools to deliver the implementation. Virtual junior developers seemed to belong to the faster horse school of thinking.

Wrapping up the session

The session lasted for pretty much the whole of the unconference and the topic often went broad which meant there were many threads and ideas that were not fully resolved. It was clear that there are currently at least two streams of experimentation: supplementing existing human roles in the software delivery cycle with AI assistance and reinventing the delivery process based on the possibilities offered by cheap large language models.

As we were looking to the future we mostly discussed the second option and this seems to be what people have in mind when they talk about not needing experienced software developers in future.

In some ways this is the technical architect’s dream, in that you can start to work with pure expressions of solutions to problems that are faithfully adhered to by the implementer. However the solution designer now needs to understand how and why the solution generation process can go wrong and needs to verify the correctness and adherence of the final system. The non-deterministic nature of large language models is not going away and therefore solution designers need to think carefully about their invariants to ensure consistency and correctness.

There was a bit of an undertow in our discussions about whether it is a positive that a good specification almost leads to a single possible solution, or whether we need to allow the AI to confound our expectation of the solution and create unexpected things that meet our specifications.

The future could be a perfect worker realising the architect’s dream or it could be more like a partnership where the human is providing feedback on a range of potential solutions provided by a generation process, perhaps with automated benchmarking for both performance and cost.

It was a really interesting discussion and an interesting snapshot of what is happening in what feels like an area of frenzied activity from early adopters, later adopters probably can afford to give this area more time to mature as long as they keep an eye on whether the cost of delivery is genuinely dropping in the early production projects.

Standard