Computer Games, Programming

ProcJam 2014 talks

Last weekend I had the chance to go to the ProcJam talks in London that were part of the ProcJam game jam. Due to some difficulties running errands, and never having been to Goldsmiths before, I missed the first talk and arrived halfway through the second.

I was hoping for some tech-focussed talks about coding up procedural generation. As it turned out it wasn’t quite that kind of event but it was still pretty interesting.

Tanya Short’s talk about some of the theory behind procedural generation was quite interesting (despite a flaky Google Hangout connection). The key takeaway was to look at the structure of what you are creating: making believable content means simulating the structure of the thing itself, while making it satisfying to explore means adding an emotional dimension.

An example she gave was name generation: you can totally do the Elite random-syllable process, but to create more believable names you can introduce structure, like lineage, nicknames and titles appropriate to the society and social ranking of the character. Introduce aspects of the way that people name their children in the real world: fashionable names and trends that apply to characters of a certain age, repeated use of names within families, the use of names to honour significant people in the parents’ lives. These things add emotional depth.

Tanya is working on the game Moon Hunters and she mentioned an example from the game of how personalities consist of the following factors:

  • the results of the character’s deeds
  • the perception of others
  • how the characters contrast with the villains
  • the needs of the story and plot

I was particularly struck by the idea that the way a character behaves is bound by how they think other people expect them to behave. That’s something we see a lot in the real world, and it creates exactly the right emotional connection with someone who is trapped between their “true” and “public” selves.

The talk I most enjoyed was by Hazel McKendrick who is one of the developers on No Man’s Sky. For someone who doesn’t really do game programming or procedural generation it was a helpful introduction to some of the basics.

First of all she made it clear that procedural generation isn’t about randomness: abrupt jumps in the underlying numbers create a jarring, inconsistent experience for the player. In mathematical terms we don’t want truly random generation but functions that produce continuous (smooth) results. This means using noise functions rather than raw pseudo-random values. Most people mentioned Perlin noise but Hazel also mentioned Perlin worms and Worley noise.
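To make the continuity point concrete, here is a minimal Python sketch of my own (1D value noise rather than Perlin noise proper, and not anything from the talk): fixed random values sit at integer lattice points and are smoothly interpolated between, so neighbouring samples stay close together in a way that independent random numbers never would.

import math
import random

def lattice_value(i, seed=0):
    # Deterministic pseudo-random value in [0, 1) for integer lattice point i.
    return random.Random(hash((seed, i))).random()

def value_noise(x, seed=0):
    # Toy 1D value noise: interpolate smoothly between the lattice values,
    # so nearby inputs always give nearby outputs.
    x0 = math.floor(x)
    t = x - x0
    t = t * t * (3 - 2 * t)  # smoothstep easing keeps the curve continuous
    return (1 - t) * lattice_value(x0, seed) + t * lattice_value(x0 + 1, seed)

# Sampled at small steps the noise drifts gradually; calling random.random()
# for each sample would jump about arbitrarily between neighbouring points.
print([round(value_noise(i * 0.1), 3) for i in range(10)])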

She also made the great point that games are made by their special moments and rare events. If the generator creates content uniformly then the world is overwhelming and might actually become boring, as repetition robs those moments of their sense of being special.

In No Man’s Sky they want the bulk of worlds to be fairly unexciting and plain and allow the player to control their experience by having the more alien or extreme worlds be at the periphery and key game world experiences at the core of the galaxy. The player determines what kind of experience they have by the depth of the region they choose to explore.

Objects in the world matter more when they exist in relationships to one another. Hazel used the example of trees, undergrowth and bushes but it also fits in with Tanya’s talk in that people are more interesting if they exist within societies, lineages and families.

Therefore the generators need to be aware of the context around them, and the simplest way to do that is to nest the generators so that a generator creates a set of consistent child generators which then produce sympathetic output. These generators can nest to any depth and are testable as units and as chains, something that chimes with my experience of functional programming, although Hazel is working in C++ in a rather more conventional programming situation.
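As a rough illustration of the nesting idea (my own sketch, not code from No Man’s Sky), a top-level generator can fix some shared context and then hand out child generators that all honour it, with each level testable on its own or as part of the chain:

import random

def make_forest_generator(seed):
    # Top level: fix the shared context (here, a climate) and hand out
    # tree generators that all honour it, keeping the output consistent.
    rng = random.Random(seed)
    climate = rng.choice(["temperate", "arid", "tundra"])

    def make_tree_generator(tree_seed):
        tree_rng = random.Random(tree_seed)
        species = {"temperate": ["oak", "beech"],
                   "arid": ["acacia", "palm"],
                   "tundra": ["pine", "spruce"]}[climate]

        def generate_tree():
            return {"climate": climate,
                    "species": tree_rng.choice(species),
                    "height_m": round(tree_rng.uniform(3, 25), 1)}

        return generate_tree

    return climate, make_tree_generator

# Each level is testable as a unit or as a chain.
climate, tree_generator_for = make_forest_generator(42)
tree = tree_generator_for(7)()
assert tree["climate"] == climate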

When nesting generators this way you need to check very large datasets to see that the results truly are consistent rather than merely frequently consistent. You can program limits on the generators to bound their output, but part of the point of using procedural generation is to create things you weren’t expecting. In her slides Hazel had examples such as spaceships, where it was possible to generate large catalogues of spaceships to browse visually for anomalies.

You can also write test suites that search for out-of-bounds situations, and again from a functional perspective I could see the value of generative testing here: exhaustively examining output so that issues can be traced back to the particular inputs that caused them.
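A sketch of what that sweep might look like (the generator and its bounds are hypothetical): drive the generator across a large range of seeds, assert the invariants, and report the seeds that break them so the problem inputs are identified directly. Property-based testing libraries take the same idea further.

import random

def generate_tree(seed):
    # Stand-in for a real content generator (hypothetical).
    rng = random.Random(seed)
    return {"height_m": rng.uniform(3, 25), "branches": rng.randint(1, 40)}

def test_generator_stays_in_bounds(samples=10000):
    # Sweep many seeds and collect any that break the invariants, so the
    # failing inputs themselves become the bug report.
    failures = []
    for seed in range(samples):
        tree = generate_tree(seed)
        if not (0 < tree["height_m"] <= 30 and 1 <= tree["branches"] <= 50):
            failures.append(seed)
    assert not failures, "out-of-bounds output for seeds: %s" % failures[:10]

test_generator_stays_in_bounds()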

In a question after the talk Hazel also mentioned that the generators are always consistent and finite and that changes to the player’s world are held in state outside the generators. No Man’s Sky is a “mingleplayer” game where players will be able to play in the same universe and share discoveries, but don’t necessarily have to be in sync to play the game or be affected by other players’ activities. You can explore the game at your own pace.

Hazel’s talk is on YouTube

The other interesting talk was by Mark Johnson, who is working on Ultima Ratio Regum, which will one day be a rogue-like exploration game about empires and the evolution of societies but which at the moment is an insanely detailed world-building program that creates a world, the nations within that world, their history, cities and human terrain.

At one point in the talk Mark demonstrated how it was possible to zoom into a terrain square in the world and look at the leaves on the trees.

In terms of generation Mark’s talk was similar to Hazel’s, but he used the metaphor of card decks. You shuffle a deck and draw, and the selected card opens up new decks that are used for choices at a more detailed level.

His example was selecting the leadership model of a nation and then its economic and foreign policy. So if you select Isolationist as the foreign policy you don’t want to have Free Trade as the economic policy. Similarly, religious freedom should not be available if an earlier choice was theocracy.

This cascading set of consequences even extends to where the religious buildings are placed in cities. A populist religion means that religious buildings are scattered throughout the city, while a state religion concentrates them in the governmental quarter.
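A hypothetical sketch of the deck-within-a-deck idea (mine, not Mark’s actual implementation): each draw removes incompatible cards from the decks that follow, and the consequences cascade all the way down to where the temples end up.

import random

GOVERNMENTS = ["Theocracy", "Monarchy", "Republic"]
FOREIGN_POLICIES = ["Isolationist", "Expansionist", "Diplomatic"]
ECONOMIC_POLICIES = ["Free trade", "Mercantilism", "Autarky"]
RELIGIOUS_POLICIES = ["Religious freedom", "State religion", "Populist faith"]

def draw(deck, rng, excluded=()):
    # Draw from the deck with the incompatible cards removed.
    return rng.choice([card for card in deck if card not in excluded])

def generate_nation(seed):
    rng = random.Random(seed)
    government = draw(GOVERNMENTS, rng)
    foreign = draw(FOREIGN_POLICIES, rng)
    # Earlier draws prune the later decks so the combination stays plausible.
    economic = draw(ECONOMIC_POLICIES, rng,
                    excluded={"Free trade"} if foreign == "Isolationist" else ())
    religion = draw(RELIGIOUS_POLICIES, rng,
                    excluded={"Religious freedom"} if government == "Theocracy" else ())
    # The cascade continues down to city layout.
    temples = {"Populist faith": "scattered through the city",
               "State religion": "clustered in the governmental quarter"}.get(
                   religion, "left to individual districts")
    return {"government": government, "foreign": foreign,
            "economic": economic, "religion": religion, "temples": temples}

print(generate_nation(2014))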

Mark’s analogue metaphor was interesting and perhaps more accessible, as long as you get the idea of card decks nested within card decks. He mentioned that he thought board games were the most exciting realm of game design because you can experiment relatively cheaply, but as the players run the game you must be capable of expressing the rules extremely clearly so they can be understood.

One nice detail was that the game produces books about the history of nations and the people within them. This will ultimately be how a player discovers things in the game beyond literal exploration and conversation with characters. He illustrated this by showing how he could locate the grave of a historical character from the information in that character’s biography.

While it’s an achievement to have a game with that level of internal consistency, at some point he also wants to allow for alternative narratives where, say, a character involved in a war between two nations would be mentioned by the histories of both, but with perhaps very different perspectives on that character’s behaviour.

Mark Johnson’s talk

I’d like to compliment the organisers of the talks for putting together a pretty interesting programme.

Programming

Programming as Pop Culture

The “programming is pop culture” quote from a 2004 interview with Alan Kay has been doing the rounds as part of the debate about the use of craft as a metaphor for development. Here’s a recap:

…as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

On the face of it this is a snotty quote from someone who feels overlooked; but then I am exactly one of those unsophisticated people who entered a democratised medium and am now rediscovering the past!

As a metaphor, though, for analysing how programmers talk and think about their work, and the way that development organisations organise what they do, the idea that programming is pop culture is powerful, relevant and useful.

I don’t think that pop culture is necessarily derogatory. However, in applying it to programming I think you have to accept that it is negative. Regular engineering, for example, doesn’t discuss the nature of its culture; it is much more grounded in reality, the concrete and stolen sky as Locke puts it.

Architecture, though, while serious, ancient and storied, is equally engaged in its own pop culture. After all, this is the discipline that created post-modernism.

There are two elements to programming pop culture that I think are worth discussing initially: fashion and justification by existence.

A lot of things exist in the world of programming purely because they are possible: Brainfuck, CSS versions of the Simpsons, obfuscated C. These are the programming equivalent of the Ig Nobel prizes, weird fringe activities that contribute little to the general practice of programming.

However ever since the earliest demoscene there has been a tendency to push programming to extremes and from there to playful absurdity. These artifacts are justified through existence alone. If they have an audience then they deserve to exist, like all pop culture.

Fashion though is the more interesting pop culture prism through which we can investigate programming. Programming is extremely faddish in the way it adopts and rejects ideas and concepts. Ideas start out as scrappy insurgents, gain wider acceptance and then are co-opted by the mainstream, losing the support of their initial advocates.

Whether it is punk rock or TDD, the patterns of invention, adoption and rejection are the same. Little in the qualitative nature of ideas such as NoSQL or SOA changes, but the idea falls out of favour, usually at a rate proportional to the fervour with which it was initially adopted.

Alpha geeks are inherently questors for the new and obscure; the difference between them and regular programmers is their guru-like ability to ferret out the new and exciting before anyone else. However their status and support creates enthusiasm for things. They are tastemakers, not critics.

Computing in general has such fast cycles of obsolescence that I think its adoption of pop culture mores is inevitable. It is difficult to articulate a consistent philosophical position and maintain it for years when the field is in constant churn and turmoil. Programmers tend to attach to concrete behaviour and tools rather than abstract approaches. In this I have sympathy for Alan Kay’s roar of pain.

I see all manner of effort invested in CSS spriting that is entirely predicated on the behaviour of HTTP 1.1 and which will all have to be changed and undone when the next version of HTTP changes how files are downloaded. Some who didn’t need the micro-optimisation in initial download time would have been better off ignoring the advice and waiting for the technology to improve.

When I started programming professionally we were at the start of a golden period of Moore’s Law in which writing performant, mainframe-style code was becoming irrelevant. Now, at the end of that period, we still don’t need to write performant code, we just want easy ways to execute it in parallel.

For someone who loves beautiful, efficient code the whole last decade and a half is just painful.

But just as in music, technical excellence doesn’t fill stadiums. People go crazy for terrible but useful products and to a lesser degree for beautiful but useless products.

We rediscover the wisdom of the computer science of the Sixties and Seventies only when we are forced to in the quest to find some new way to solve our existing problems.

Understanding programming as pop culture actually makes it easier to work with developers and software communities than trying to apply an inappropriate intellectual, academic, industrial or engineering paradigm. If we see the adoption, or fetishism, of the new as a vital and necessary part of deciding on a solution then we will not be frustrated by change for its own sake. Rather than scorning over-engineering and product ivory towers we can celebrate them as the self-justifying necessities of excess that are the practical way we move forward in pop culture.

We will not be disappointed in the waste involved in recreating systems that have the same functionality implemented in different ways. We will see it as a contemporary revitalisation of the past that makes it more relevant to programmers now.

We will also stop decrying things as the “new Spring” or “new Ruby on Rails”. We can still say that something is clearly referencing a predecessor but we see the capacity for the homage to actually put right the flaws in its ancestor.

Pop culture isn’t a bad thing. Pop culture in programming isn’t a bad thing. But it is a very different vision of our profession than the one we have been trying to sell ourselves, and, as Kay says, maybe it better captures who we really are.

Programming

Scale Summit 2014

Scale Summit is the new Scale Camp, an unconference covering the same kind of topics you might expect at Velocity.

This was the first Scale Summit; the venue was excellent, as was the food (especially the bacon rolls, from Eden apparently) and the supply of drink. Scale Summit happens under the Chatham House Rule, so there’s no attribution of what is said, which allows the attendees to be really frank and to be free with what they really know rather than hedging and trying to be “on message”. It makes for a fascinating gathering.

The sessions varied in their organisation but all focussed on discussion between the participants. I managed to go to the Elasticsearch session, which was interesting for the practical boundaries that people were finding and also for the operational knowledge. On the subject of using ES as the primary application store, the feeling seemed to be “not yet”, but there were also some words of wisdom about separating document stores from search functionality and not forcing a superficial unity onto the two purposes.

The microservices session was a fast and furious fishbowl, easily the liveliest event and one that is going to require a post in its own right. It was interesting to see that the room split into practitioners and people who were sceptical that microservices were a thing or held value over conventional service development.

After lunch I sat in on what can be done to get frontend testing off the critical path to production (not much now but clearly more effort needs to be made), distributed DOS attacks on transactional sites (not as scary as I imagined but again we have to be thinking about how this works), distributed data stores (good war stories, felt better informed for going), getting ops and developers to work together and Linux containers (definitely going to try Docker now).

I had quite a few questions going into the event and while I didn’t get all the answers I hoped for I did at least establish that smart people don’t have simple answers to them either which is reassuring. It’s hard to tell in the heat of it all whether you’re on the edge of things doing things that are pushing the boundaries or simply over-complicating your situation.

The attendees were nicely mixed and from a range of backgrounds: ops, architecture and development were all well represented, so you felt you were seeing a rounded picture.

The unconference format left me wanting more rather than feeling I had had enough. The openness was amazing and I am planning on being there next year.

Programming

Writing code without tests

This post is aimed at people who have mastered test-driven development and ideally also behaviour-driven development, and who are familiar with XCheck testing. If you don’t have those good basic steps in place then trying to jump to some of these techniques is likely to backfire on you, as you will probably struggle to assess the risks correctly.

There is a reason TDD was invented: it represents the refinement of good testing practice and the philosophy of good software design. TDD is a relatively simple practice to describe that requires effort to implement. Writing code driven by tests is safer than straight-coding.

Writing untested code is a kind of mastery technique. It is high-risk and relies on the skills and knowledge of the programmer. I don’t think it is ever responsible if the programmer is not going to be the person supporting the result in production. Without this condition the programmer’s interests are not properly aligned with those of the consumers of their code.

So with all those caveats in place what if we want to create code faster because we don’t have to write tests?

Well we have to understand where bugs come from and we will have to write code that doesn’t allow those situations to arise.

There are two important principles to start with. First, if you can rely on tested library code then you can rely on its underlying quality and leverage it in your own application. Secondly, the code you don’t write will not have bugs.

Therefore we should be aiming to write the smallest amount of code possible and we should never try to code what others have coded for us.

The next point is about where bugs occur. I think we’re now at a consensus that most bugs occur in the way we change and maintain state. In both procedural and functional languages it is rare to get a mistake in, for example, the order of steps something must happen in. Those kinds of problems tend to be misunderstandings of the domain (which get written into the test suites as well, so testing doesn’t help catch them) rather than genuinely unexpected consequences of the programmer’s code. Object-orientated code is really hard to reason about from this point of view because objects don’t have an implied order of execution.

This is why quick scripts of less than 200 lines tend to do stable sterling service for years whereas larger applications are more tortured in their existence.

Therefore whatever language we are coding in we need to adopt the functional principle of operating only on our parameters and returning values that can be consumed by the caller.
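In Python terms (a trivial sketch of my own), the difference is between a function that leans on hidden module-level state and one that takes everything it needs as parameters and hands everything back to the caller:

# Stateful version: correctness depends on hidden, mutable state.
running_total = 0

def add_order_stateful(price):
    global running_total
    running_total += price
    return running_total

# Parameter-only version: everything the function needs comes in as arguments
# and everything it produces goes back to the caller.
def add_order(total, price):
    return total + price

total = 0
for price in (9.99, 4.50, 12.00):
    total = add_order(total, price)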

Size matters, a lot. If the whole program can fit into a single file and you can pretty much hold the whole thing in your head, then it will be easy to reason about what the program is doing and to see flaws in its logic. A single complex line of code is better than many lines, and is much better than many lines split across many files.

One way to bring down the size of code files is to be ruthless about concerns. For example recently in my Python programming I have been assigning only one purpose to each module: this module renders reports, this one provides JSON endpoints.

Another technique is to not persist any state. This is actually surprisingly easy in web programming, since each request is a completely separate event and by default you can trade CPU time for isolation.

If you are doing batch or server-side programming then it is worth considering using something like parallel to create many separate bubbles of execution rather than trying to write code yourself to distribute work.

Another aspect of state that causes issues is making global modifications, whether to a database or a filesystem. Try to defer all global changes to the final moment of a program and do all the manipulation in-memory instead. If you never change the world then you can run a program over and over again, refining what it does.
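A small illustrative split (the record shape and file name are made up): one pure phase that plans every change in memory and can be run and inspected as often as you like, and one tiny side-effecting phase at the very end that actually touches the world.

import json

def plan_renames(records):
    # Pure phase: compute every change in memory; nothing touches the world yet.
    return [{"old": r["name"], "new": r["name"].strip().lower()} for r in records]

def apply_renames(renames, path="renames.json"):
    # Side-effecting phase: one small, final step that writes the result out.
    with open(path, "w") as f:
        json.dump(renames, f, indent=2)

records = [{"name": " Alice "}, {"name": "BOB"}]
plan = plan_renames(records)   # safe to run and inspect repeatedly
# apply_renames(plan)          # the only line that changes the world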

Assertions are more powerful than logging when writing test-less code: it is better to kill a thread of execution than to let it do something you weren’t expecting. Logging is really about building your intuition about what a program does and how it works.

Assertions allow you to create strong pre- and post-conditions on the operation of the program. Essentially they allow you to guarantee the “happy path” execution of your code and avoid having to test all the negative situations that might occur.

Despite this you always want to code for failure: use short-circuit logic to abort the code flow early and so simplify the context for the rest of the function.
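A brief sketch of both ideas together: guard clauses abort early so the body only ever deals with the happy path, while assertions pin down the pre- and post-conditions so that a broken assumption kills the run rather than producing a quietly wrong answer.

def average_latency(samples_ms):
    # Pre-conditions: fail fast and loudly rather than return a quiet nonsense answer.
    assert samples_ms, "expected at least one latency sample"
    assert all(s >= 0 for s in samples_ms), "latencies cannot be negative"

    result = sum(samples_ms) / len(samples_ms)

    # Post-condition: the mean must sit inside the range of the inputs.
    assert min(samples_ms) <= result <= max(samples_ms)
    return result

def report(samples_ms):
    # Short-circuit: bail out early so the rest of the function only ever
    # handles the happy path.
    if not samples_ms:
        return "no data"
    return "average latency: %.1fms" % average_latency(samples_ms)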

Remember all the basic rules of cyclomatic complexity: don’t nest, avoid conditionals, and do try to express your loops as list comprehensions.
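For example, the nested loop-and-branch version of a simple filter carries more cyclomatic complexity than the equivalent comprehension, which keeps the condition and the transformation in a single expression:

orders = [{"status": "paid", "total": 40},
          {"status": "pending", "total": 15},
          {"status": "paid", "total": 75}]

# Nested loop-and-branch version: more lines, more places to get it wrong.
large_paid = []
for order in orders:
    if order["status"] == "paid":
        if order["total"] > 50:
            large_paid.append(order["total"])

# Flat comprehension: the condition and transformation sit in one expression.
large_paid = [o["total"] for o in orders
              if o["status"] == "paid" and o["total"] > 50]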

Don’t write generic code, ever. The more potential inputs a function has, the more you end up needing unit-tests to verify the interactions. If something is meant to work on strings don’t try to make it work on strings and integers. Your detection code ends up being a potential source of bugs that needs testing.
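A tiny before-and-after sketch: the generic version smuggles in type-detection logic that itself needs testing, while the narrow version pushes the conversion responsibility onto the caller.

# Generic version: the type sniffing is extra, untested logic waiting to go wrong.
def normalise(value):
    if isinstance(value, int):
        value = str(value)
    return value.strip().lower()

# Narrow version: strings in, strings out; callers convert before calling.
def normalise_name(name):
    return name.strip().lower()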

If you write in dynamic interpreted languages then you are going to have to do some manual testing, unless you can remember the names and argument orders of your functions exactly. Don’t forget to dive into the shell or REPL and play around with the code in isolation. If you can verify the behaviour of individual parts of your program without having to wire together multiple components then you have the right level of granularity for your code.

Re-use code that is already working. Code re-use is generally best achieved by copying and pasting files and then importing the functions you need. Don’t try to keep the copies synchronised: updating shared library code ultimately means you need to know whether the new library code still works as you expect with your functionality, and that means you’ll need a test suite.

Don’t refactor your code, rewrite it. Refactoring requires unit tests. Don’t be afraid of things like myfunction2 (although once you have the new functionality you need to delete the old unused stuff). Re-writing allows you to ditch all your assumptions about the code and attempt to express your new understanding of the problem and the requirements as simply as possible.

Don’t work with large numbers of people on the same code base. The more people trying to modify and change the code, the more you need tests to try to clarify your different intents for the code base. Again, try divide and conquer on the problem: rather than six people working on the same code, can you get three pairs collaborating on three smaller codebases?

Finally don’t be afraid to write a test. Writing the right unit-test to prove you can rely on a base piece of functionality means that you then don’t have to write tests for all the pieces of code that use that underlying function. I like to try and write code without tests to maximise the flexibility of the code base when I’m tackling problems with unclear solutions. It is not an ideological thing to have no tests whatsoever, it is rather that when tempted to write a test I think “Could I do this in a way that is trivial and doesn’t require a test?”. Simplicity is the cornerstone of test-free code.

Programming

Clojure Exchange 2013: Tommy Hall on Concurrency versus Parallelism

One of the really interesting talks at Clojure Exchange 2013 was one by Tommy Hall with the (not good) title You came for the concurrency, right?.

The talk had two main threads. The first was a helpful review of Clojure’s state-handling, concurrency and parallel-processing features: useful for beginners but also a helpful recap for the more experienced.

The other was a discussion of what we mean by concurrency and parallelism, something I hadn’t really thought about before (although I was aware there was a difference). Tommy referenced the talk Concurrency is not Parallelism (it’s better) by Rob Pike.

In the talk Rob gives the following definitions:

Concurrency: programming as the composition of independently executing processes

Parallelism: programming as the simultaneous execution of (possibly related) computations

In the talk Tommy gives the future function as an example of programming to indicate concurrency boundaries and the reducers library as an example of parallelism.

Rob’s presentation is worth reading in full but his conclusion is that concurrency is not a guarantee of parallel execution but that achieving parallelism without concurrency is very hard or impossible. Tommy’s talk uses an Erlang example to make the same point.

Don’t fear the monoid

While discussing reducers Tommy finally explained something that I had struggled with before. A lot of type champions point at reducers and shout “Monoids!” as if that were some kind of argument.

During his talk Tommy explained that a parallel combination function needs to be able to return an identity, so the function always returns a value, and that because the ordering of values is not guaranteed (unlike the implied order in a regular reduce) it needs to be associative.

That makes sense. Turns out that those are also the properties of a monoid.
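To see why those two properties matter for parallel combination, here is a rough Python illustration (not Clojure’s reducers themselves): the per-chunk results can be combined in any grouping only because the combining function is associative, and empty chunks are only safe because it has an identity value.

from functools import reduce

def parallel_sum(chunks):
    # Combine per-chunk results in whatever grouping they arrive.
    # This is only valid because + is associative and 0 is its identity
    # (an empty chunk must still produce a usable value).
    partials = [reduce(lambda a, b: a + b, chunk, 0) for chunk in chunks]
    return reduce(lambda a, b: a + b, partials, 0)

data = list(range(10))
chunks = [data[0:3], data[3:6], data[6:10], []]   # chunk sizes are arbitrary
assert parallel_sum(chunks) == sum(data)

# Subtraction would fail both tests: (a - b) - c != a - (b - c) and it has no
# identity element, so chunked evaluation would change the answer.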

Fans of type systems throw around terms like Monad, Dual and Monoid not to add understanding to a discussion but to use them as shibboleths. It was far more enlightening to see the needs of a problem drive towards a category of function with certain properties. If that is common enough to deserve a shorthand name, fair enough, but the name itself is not magical, and knowing the various function-category names is a feat of learning rather akin to memorising all those software pattern names from the Gang of Four’s book.

Programming

Metrics and craftsmanship

Ever since we’ve had access to increasingly comprehensive and easy-to-understand metrics there has been a conflict between the artisanal, craftsmanship side of software development and the data-driven viewpoints.

Things like code quality are seen as being difficult to express in terms of user-affecting metrics. I suspect that is because most of the craft concerns of software development do not affect the overall value of a product. This is not to align myself fully with those driven by metrics.

There are lots of situations where two approaches result in the same metrics outcome. It is tempting to give in to the utilitarian argument that in this case what you should do is simply choose the lowest cost option.

That is too reductionist though and while it may lead to an optimised margin-generating product it feels to me that it is just as likely to create a spiral of compromise that jeopardises the ability to make further improvements.

It is here, at the point where the metrics are silent, that we are put to the test of making good decisions. Our routes forward are neutral from a data point of view, but good decisions will unlock better possibilities in the future. It is at this moment that I feel our preferences for things like craft and aesthetics, and our understanding of things like cost and consequence, matter. Someone who understands how to achieve beauty and simplicity in software for the same cost as the compromise will achieve very different outcomes.

So we need to be metrics-first so we know we are being honest with ourselves but once we are doing things in a truthful environment, our experience and discretion can make all the difference.

Programming

Effective dynamic programming: Don’t change variable types

A lot of dynamic languages actually use strong underlying type systems, which means that you can’t interchange strings and number types: you have to convert explicitly.

A typical web example is an HTTP URL or parameter that represents a number. All HTTP content is string-based, but when acting on the value we often want to compare it numerically against other numbers.

One common technique is to convert the string value to a native type invisibly in the framework but I think that in practice what is more useful is to treat all HTTP content as strings, exactly as it comes off the wire.

Then, in the context of the response handler, you explicitly convert it to a number at the point it needs to be converted.

I think this makes it easier to read the code: parameters passed from the request are always strings. Once the string is bound to an identity that identity should never be rebound to a different type. If we need the value of that identity in a different form we perform an explicit conversion at the site where the different value is needed.

If we use that value more than once then DRY suggests that we create a different identity, say parameter_as_number (unless we have a better name), that also has a consistently typed value, number.
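A sketch of the pattern in Python (the handler and parameter names are invented): the raw parameter stays a string, and the numeric form gets its own consistently typed name at the point it is needed.

def handle_report(params):
    # params is a dict of raw HTTP query parameters; every value is a string.
    page = params.get("page", "1")      # stays a string, as it came off the wire

    # Convert explicitly, at the point the numeric value is needed, and bind
    # it to a separate, consistently typed identity.
    page_as_number = int(page)
    if page_as_number < 1:
        page_as_number = 1

    return "rendering page %d" % page_as_number

handle_report({"page": "3"})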

Programming

Getting to grips with Generators in Javascript

One of the things that makes Python such an amazing language is its far-sighted inclusion of generators and first-order list comprehensions. Both of these things are now coming to Javascript with a healthy sense of homage in the syntax. All the examples in this post have been tested in the console of Firefox 23.

What are generators?

Generators are functions that can be called multiple times and on each call they resume execution from the state they were in when they last stopped executing. To do this they have a special keyword called yield instead of return.

Let’s consider the following trivial example:

function hello() { return "Hello"; }

hello(); // "Hello"

function lazyHello() { yield "Hello"; }

lazyHello(); // [object Generator]

var g = lazyHello(); g.next(); // "Hello"

This example might seem trivial but it actually encapsulates all the important values of generators and lazy evaluation. The difference between hello and lazyHello is that lazyHello defers execution of itself. Instead of executing it returns a Generator object that can then be evaluated for an answer later, rather like Deferreds.

This lazy evaluation means that you can describe very complex or expensive calculations without having to execute them until you need them. Now for a function that returns a single value that isn’t going to seem very exciting.

Writing infinite sequences with Generators

How can we describe the world of all the positive integers? Well, one way is to take 1 as the start of an arithmetic sum and simply add one to the previous value of the sum.

function positiveIntegers() {
  var i = 1;

  while(true) {
    yield i++;
  }
}

var g = positiveIntegers();
g.next(); // 1
g.next(); // 2

When we call next on the Generator we start executing the function, so we initialise the value of the variable and enter the loop. We then call yield and the function stops running. When we call next again we do not run the function again, we resume from the point where we stopped last time, inside the while loop.

Generators provide strong encapsulation of their data since once we have obtained a generator we can no longer access its state, just the results of its calculations. If you wanted to do this sum computation with a regular function then you would need to hold the state of the last call to the function somewhere outside the function and pass it again when you wanted to calculate the next value.

However that infinite loop is still a bit tricky and if you get it wrong then you will lock up the browser. Here’s a safer version.

function positiveIntegers(maxValue) {
  var i = 1;

  while(i <= maxValue) {
    yield i++;
  }

}

var g = positiveIntegers(2);

g.next(); // 1
g.next(); // 2
g.next(); // [object StopIteration]

Truly infinite sequences are really powerful but in practice you probably want to use weak-sauce versions that have some kind of bound.

Why are generators awesome?

Well firstly they allow the expression of things that would otherwise be impossible in computer code, things like very long or infinite sequences that would not fit in memory if every member had to be expressed or stored.

As part of this lazy evaluation generators are actually very economical in terms of resources: you can define a lot of rules and potential values but then only express the ones you are interested in, minimising the actual allocations.

Finally they are strongly encapsulated functions which hide their internal implementation without the need to resort to object semantics.

Clojure, Programming

Leiningen doesn’t compile Protocols and Records

I don’t generally use records or protocols in my Clojure code, so the fact that the Clojure compiler doesn’t seem to detect changes in the function bodies of either took me by surprise recently. Googling turned up this issue for Leiningen. Reading through the issue, I ended up specifying all the namespaces containing these structures in the :aot definition in the Leiningen project.clj. This meant that those namespaces were re-compiled every time, but that seemed the lesser of two evils compared to the clean-and-build approach.

Where this issue really stung was in the method-like function specifications in the records, and as usual it felt as if structure and behaviour were getting muddled up again when ideally you want to keep them separate.

Programming

Cognitive bias and the difficulty of evolving strongly typed solutions

Functional Exchange 2013 featured an interesting talk by Paul Dale that had been mostly gutted but included a helpful introduction to cognitive bias. That reminded me of something I was trying to articulate in my own talk about the difficulty of evolving typed solutions. I used the analogy of the big ball of mud, where small incremental changes to the model of the system result in an increasingly warped codebase. Ultimately you need someone to come along and rationalise the changes or you end up with something that is too difficult to work with and gets rebuilt.

This of course is an example of anchoring, where the initial type design for the system tends to remain no matter how far the domain and problem have moved from the initial circumstances of their creation.

Redesigning the type definition of a program is also expensive in terms of having to create a new mental model of what is happening rather than just adapting the existing one. Generally it is easier to adapt and incorporate an exception or set of exceptions to the existing model rather than recreating an entire system.

Dynamic or data-driven systems are more sympathetic to the adaptive approach. However they too have their limits, special cases need to be abstracted periodically and holistic data objects can bloat out of control into documents that are dragging the whole world along with them.

Type-based solutions, on the other hand, need to be periodically re-engineered, and the difficulty is that the whole set of type definitions needs to be worked on at the same time. Refactoring patterns for object-orientated code often focus on reorganisations that are easy, such as pulling out traits or extracting new structures. This is still anchoring on the original solution though.

If a type system is to be a help rather than a hindrance you need to be able to rework the overall structure. I think with most type systems this is actually impossible. Hence the pattern of recreating the application if you want to take a different approach.

The best typed solution currently seems to be type unions, where you can make substantial changes and abstractions without having to worry about things like type hierarchies or accidentally polluting the edge cases of the system.

Where these aren’t available, good strongly-typed solutions rely heavily on good, proactive technical leaders to regularly drive good change through the system and manage the consequences.
