Tools for thought should evolve building blocks
What genetic algorithms can tell us about tools for thought
I’ve been thinking about this paper on designing genetic algorithms (Goldberg, 1998).
Genetic algorithms get computers to evolve solutions instead of having to design them ourselves. Since evolution will emerge in any system with mutation, heredity, and selection, we just need a few ingredients. A string of 1s and 0s, some mutation operations, and a fitness function will do the trick. Press play, and watch designs evolve from the bottom-up.
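To make that concrete, here's a minimal sketch of those ingredients: bitstring genomes, a mutation operator, a fitness function (the toy "count the 1s" problem stands in for whatever you actually care about), and survival of the fitter half. An illustration of the recipe, not any particular GA implementation.

```python
import random

# Minimal GA sketch: bitstring genomes, mutation, fitness, selection.
# Fitness is the toy OneMax problem (count the 1s); any scoring function works.

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.01):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=50, length=32, generations=100):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print(fitness(evolve()))  # climbs toward the maximum of 32
```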
Goldberg’s paper is about designing GAs, but the patterns he identifies reveal deep insights about innovation in general. They also suggest new ways of thinking about tools for thought. Let's unpack!
Innovation is about evolving building blocks
A genetic algorithm evolves building blocks—chunks of DNA that encode useful traits.
Genetic algorithms work through a mechanism of quasi-decomposition and reassembly… The basic idea is that GAs (1) implicitly identify building blocks or subassemblies of good solutions and (2) recombine different subassemblies to form very high performance solutions.
(Goldberg, 1998)
So the GA ends up unbundling, remixing, and composing building blocks to make new, higher-level building blocks, in a (hopefully) upward spiral.
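The unbundling-and-reassembly step is what crossover does in a selectorecombinative GA. A sketch, assuming the same bitstring genomes as above: one-point crossover splices contiguous chunks (candidate building blocks) from two parents into a child.

```python
import random

def crossover(parent_a, parent_b):
    """One-point crossover: splice a prefix of one parent onto a suffix of the other.

    Contiguous substrings that encode useful traits (building blocks) can be
    inherited intact from either parent and recombined into a new candidate."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

# Two parents, each carrying one "good" block, can yield a child with both.
a = [1, 1, 1, 1, 0, 0, 0, 0]
b = [0, 0, 0, 0, 1, 1, 1, 1]
print(crossover(a, b))
```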
We see this building block pattern emerge in other kinds of evolving systems, too.
Technology evolves through composition. Things get invented, then get modularized. These modules are integrated into new technologies, which themselves get modularized.
Ideas compose too! We call it citation. Ideas get connected, creating new higher-level ideas, and so on.
As Goldberg puts it, identification and exchange of building blocks is the critical path to innovative success.
You need a diversity of building blocks to innovate
GAs don’t evolve toward a single “perfect” thing. Evolution doesn’t work like that! So what are GAs evolving? A population of possibilities.
Understanding selectorecombinative GAs helps us understand that the decision making among different, competing notions is statistical in nature, and that as we increase the population size, we increase the likelihood of making the best possible decisions. (Goldberg, 1998)
Building blocks combine with other building blocks to create new building blocks. Each building block unlocks new possible combinations, expanding our adjacent possible. The more building blocks we have, the more we can create.
You see in this beauty a dynamic stabilizing effect essential to all life. Its aim is simple: to maintain and produce coordinated patterns of greater and greater diversity. Life improves the closed system's capacity to sustain life. Life—all life—is in the service of life. Necessary nutrients are made available to life by life in greater and greater richness as the diversity of life increases. The entire landscape comes alive, filled with relationships and relationships within relationships.
(Liet-Kynes, Dune, Frank Herbert)
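Back to the arithmetic of the adjacent possible for a moment. Counting only two-block combinations, the space of possibilities grows quadratically with the size of the pool:

```python
from math import comb

# Pairwise combinations alone grow quadratically with the pool of blocks.
for n in (10, 100, 1000):
    print(n, "blocks ->", comb(n, 2), "possible pairs")
# 10 -> 45, 100 -> 4950, 1000 -> 499500
```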
Hard problems are hard because their building blocks are hard to find
Goldberg calls these BB-hard problems.
…This may be because the BBs are deep or complex, hard to find, or because different BBs are difficult to separate, but whatever the difficulty, it may be understood in strictly mechanistic terms.
This insight clicks together nicely with insights from assembly theory. The basic intuition here is that evolution works from the bottom-up, in an upward spiral. Complex things, like animals, are composed of simpler things, like cells, DNA, molecules, atoms. Since evolution assembles things from the bottom-up, certain simple things end up being prerequisite to the emergence of complex things. By measuring the complexity of a system, we can get a sense of its evolutionary depth, its assembly index.
Many BB-hard problems require a relatively high assembly index to solve.
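Here's a toy sketch of the assembly intuition (not the formal assembly index, which is the minimum over all possible pathways): count the join operations needed to build something along one pathway, reusing parts you've already built.

```python
# Toy illustration of assembly (not the formal assembly index, which is the
# minimum over all pathways). Count the joins along one explicit pathway,
# reusing anything already built.

def assemble(pathway):
    """pathway: ordered (left, right) joins. Each part must be a basic unit
    (a single character here) or the product of an earlier join."""
    built = set()
    result = None
    for left, right in pathway:
        for part in (left, right):
            assert len(part) == 1 or part in built, f"{part!r} not yet assembled"
        result = left + right
        built.add(result)
    return result, len(pathway)

# Reusing the "AB" block builds "ABABAB" in 3 joins instead of 5.
print(assemble([("A", "B"), ("AB", "AB"), ("ABAB", "AB")]))  # ('ABABAB', 3)
```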
Innovation has to outpace takeover, or evolution gets stuck
Genetic algorithms sometimes prematurely converge toward a single solution, and then get stuck. That’s a problem. We don’t want to get stuck in a local maximum. We want continual innovation.
Why does this happen? Goldberg identifies two important variables: time-to-innovation (tᵢ) and time-to-takeover (t*).
Time-to-innovation (tᵢ) is how long it takes for some new innovative building block to emerge by natural selection.
Time-to-takeover (t*) is how long it takes for an innovation to diffuse through the population.
Putting them together, we get two measures along a diffusion curve.
Since evolution requires a diverse pool of building blocks to draw from, the balance between these two variables matters:
If time-to-innovation (tᵢ) is shorter than time-to-takeover (t*), then new innovations disrupt old ones before they can cement a dominant position. When tᵢ < t*, you get steady-state innovation.
If time-to-innovation (tᵢ) is longer than time-to-takeover (t*), the incumbent trait dominates. It saturates the population and evolution gets stuck. There is too much of just one thing, and not enough building blocks for evolution to work with.
It turns out we’re in a race between innovation and ossification. When tᵢ < t*, you have a functioning free market. When tᵢ > t*, you get oligarchy.
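A rough simulation of the race, under toy assumptions (binary tournament selection over competing traits, with a strictly better trait injected every few generations): when innovations arrive faster than takeover completes, several traits stay in play; when they arrive slower, one trait saturates the whole population in between.

```python
import random

# Toy model of the innovation-vs-takeover race. Each individual carries a
# trait id; higher ids are strictly fitter. Binary tournament selection
# spreads fitter traits (takeover). Every `innovation_interval` generations a
# new, better trait appears in one individual (innovation). A fresh trait can
# also drift to extinction before it spreads, which is part of the point.

def simulate(pop_size=200, generations=100, innovation_interval=5, seed=0):
    rng = random.Random(seed)
    population = [0] * pop_size        # everyone starts with trait 0
    next_trait = 1
    diversity = []
    for gen in range(1, generations + 1):
        # Binary tournament: the fitter of two random picks fills each slot.
        population = [max(rng.choice(population), rng.choice(population))
                      for _ in range(pop_size)]
        if gen % innovation_interval == 0:
            population[rng.randrange(pop_size)] = next_trait
            next_trait += 1
        diversity.append(len(set(population)))
    return diversity

# Fast innovation keeps several traits in play; slow innovation lets a single
# trait saturate the population between arrivals.
print(simulate(innovation_interval=3)[-10:])
print(simulate(innovation_interval=40)[-10:])
```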
A tool for thought should evolve building blocks
How might we apply some of these insights when designing tools for thought? Some thoughts and questions.
Your tool for thought should evolve building blocks. What if we saw our notes as building blocks for ideas? What are the qualities we might look for in a building-block note?
Building blocks encode a useful trait. A building block note encodes an idea.
Building blocks are atomic. You want your BB-notes to be as small as possible, but no smaller. This maximizes combinatorial surface area.
Building blocks are composable. BB-notes are focused on composition too. Big ideas are composed from smaller ideas, through hyperlinking and transclusion.
Aha! We’ve rediscovered the evergreen note pattern. The building block hypothesis gives us an explanation of why evergreen notes work so well for knowledge generation.
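Here's one way the building-block-note idea might look as a data structure. The names and fields are hypothetical, not any particular tool's model: atomic notes that reference each other, composed by transclusion.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: atomic notes that compose into bigger notes by
# transclusion. Not any particular tool's data model.

@dataclass
class Note:
    id: str
    body: str                                   # the idea this block encodes
    links: list = field(default_factory=list)   # ids of notes it builds on

def transclude(note_id, notes, depth=1):
    """Render a note, inlining the bodies of linked notes up to `depth`."""
    note = notes[note_id]
    if depth == 0 or not note.links:
        return note.body
    included = "\n".join(transclude(l, notes, depth - 1) for l in note.links)
    return f"{note.body}\n> {included}"

notes = {
    "bb": Note("bb", "GAs evolve building blocks."),
    "notes-as-bb": Note("notes-as-bb", "Atomic notes are building blocks for ideas.", ["bb"]),
}
print(transclude("notes-as-bb", notes))
```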
Grow the pool of building blocks. The bigger your pool of atomic notes, the greater the number of possible combinations. Each note expands your adjacent possible. Increase the population size to increase the likelihood of innovation.
How might we expand our population of notes?
Is there any upper limit? How big is too big?
Introduce mutation. Permanent notes are sort of like DNA. They act as a durable repository for memory across time. What might happen if we introduced mutation to the system?
What if we introduced game loops to combine, remix, and rework notes?
What other kinds of mutation might we introduce?
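One possible mutation loop, sketched with a hypothetical pile of notes: deal yourself two at random and draft something that remixes them. The thinking stays manual; the loop just supplies the mutation pressure.

```python
import random

# Hypothetical remix loop: deal two random notes and prompt a draft that
# combines them. The human does the actual thinking; the loop adds mutation.

def remix_prompt(notes, rng=random):
    (id_a, body_a), (id_b, body_b) = rng.sample(list(notes.items()), 2)
    return (f"Remix these two notes into a new draft:\n"
            f"1. [{id_a}] {body_a}\n"
            f"2. [{id_b}] {body_b}")

notes = {
    "bb": "GAs evolve building blocks.",
    "evergreen": "Evergreen notes are atomic and composable.",
    "assembly": "Complex things are assembled from simpler things.",
}
print(remix_prompt(notes))
```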
Add a selection pressure. Evolution requires mutation, memory, and one more thing—selection.
How might you, yourself, act as a selection pressure on your own notes?
Can we introduce game loops to help us manually curate, combine, and prune notes?
What other signals might we use to introduce selection?
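One way a selection pass might look, with the scoring signals (inbound links, recency) as assumptions rather than recommendations: score every note, then review the weakest few for pruning, merging, or reworking.

```python
from datetime import datetime, timedelta

# Hypothetical selection pass: score notes on signals you care about
# (here, inbound links and recency) and queue the weakest for review.

def score(note):
    age_days = (datetime.now() - note["last_touched"]).days
    return note["inbound_links"] - 0.1 * age_days   # arbitrary toy weighting

def review_queue(notes, k=3):
    return sorted(notes, key=score)[:k]   # weakest first: prune, merge, or rework

notes = [
    {"id": "bb", "inbound_links": 5, "last_touched": datetime.now() - timedelta(days=2)},
    {"id": "old-draft", "inbound_links": 0, "last_touched": datetime.now() - timedelta(days=90)},
    {"id": "stray", "inbound_links": 1, "last_touched": datetime.now() - timedelta(days=30)},
]
print([n["id"] for n in review_queue(notes)])
```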
Expand the diversity of building blocks. As Goldberg puts it, diversity is a necessary condition of selectorecombinative success. The broader our ecology of notes, the greater the combinatorial possibility.
In what ways might we increase the diversity of our notes?
How might we connect notes laterally, across topic boundaries?
Build an engine to discover hard building blocks. These building blocks are probably going to have a high assembly index. What does this mean for note-taking? We want to build game loops that encourage us to compose small ideas into bigger ones, in a recursive loop. The higher the assembly index, the greater the chance you will stumble upon your next big idea.
How might we measure the assembly index of a note? Length? Number of links? Depth of links?
How might we increase the assembly index of our thoughts?
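If we take depth of links as a rough proxy, a sketch might count how many distinct notes a note transitively builds on. A crude stand-in for an assembly index, not a faithful measure:

```python
# Crude proxy for a note's "assembly index": how many distinct notes it
# transitively builds on. Links only; ignores pathway minimality entirely.

def assembly_proxy(note_id, links, seen=None):
    seen = set() if seen is None else seen
    for child in links.get(note_id, []):
        if child not in seen:
            seen.add(child)
            assembly_proxy(child, links, seen)
    return len(seen)

links = {
    "big-idea": ["bb", "evergreen"],
    "evergreen": ["bb", "atomic"],
    "bb": [],
    "atomic": [],
}
print(assembly_proxy("big-idea", links))  # 3 distinct notes underneath
```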
Accelerate time-to-innovation. Innovation has to outpace takeover, or evolution gets stuck. Ideas can get ossified too. If tᵢ < t*, then you experience continual inspiration. If tᵢ > t*, you get creative block.
What does takeover mean in a note-taking context? Big ideas that you return to over and over? Ideological capture? Restating your priors?
Are there progressive taxes on t* that might subsidize or accelerate tᵢ?
Can we slow time-to-takeover? Should we? Which is the more important variable for provoking steady-state innovation?
Can we increase the rate of mutation to accelerate time-to-innovation? How fast is too fast?
What if we preferentially resurfaced drafts, or less-connected notes, giving them a chance to become building blocks?
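Sketching that last idea, with the weighting as an assumption: resurface notes with probability inversely proportional to how connected they already are, so drafts and orphans get their shot at becoming building blocks.

```python
import random

# Hypothetical resurfacing pass: sample notes weighted toward the
# less-connected ones, so drafts and orphans get a chance to be recombined.

def resurface(link_counts, k=2, rng=random):
    ids = list(link_counts)
    weights = [1.0 / (1 + link_counts[i]) for i in ids]
    return rng.choices(ids, weights=weights, k=k)

link_counts = {"hub-note": 25, "draft": 0, "orphan": 1, "bb": 8}
print(resurface(link_counts))
```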