I’ve just started reading Inventing the Internet by Janet Abbate. The opening chapters are great, narrating Paul Baran’s efforts at RAND to design a nationwide communication network. Baran had one design goal for this network—survivability. Specifically, survivability in the face of nuclear catastrophe.
What does it look like to design a system for survivability? This is a very different goal from those that drove the design of previous communication systems, goals like what is easiest, most efficient, most cost-effective, most profitable. Asking a different question led Baran to different answers, and to breakthroughs that made the internet we have today.
For me, “design for survivability” immediately conjures up a desire to protect. If we want to ensure the survival of something, perhaps we might place it in a fortification, like a bunker, or an arctic vault, or in the middle of a mountain, or perhaps encase it in bricks?
But this is K-selected thinking. What is the r-selected response?
LOCKSS stands for “Lots of Copies Keep Stuff Safe,” a principle of digital preservation: more copies of data tend to make it safer, regardless of the system used to manage that data.
(LOCKSS: Stanford Library Preservation)
Make like a dandelion and spread seeds!
It’s not just dandelions. Nature is resilient, in part because it is redundant. It creates lots of decentralized backups.
Nature has had a lot of time to experience catastrophes, and to learn. Where we might experience a wildfire as a single catastrophe, nature experiences it repeatedly, on a timescale of millions of years. From this perspective, catastrophes look like normal accidents, part of an ongoing cycle of growth, death, rebirth. Punctuated equilibrium.
So, millions of years of evolutionary learning get layered into the relationships of an ecology. Beavers go about their business, coincidentally creating redundant caches of high-density biodiversity, distributed throughout the landscape.
These pocket ecologies expand nature’s repertoire of possible behavior in response to change. The richer the diversity, the wider the range of possible responses, a reflection of Ashby’s Law of Requisite Variety:
If a system is to be stable, the number of states of its control mechanism must be greater than or equal to the number of states in the system being controlled.
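One common quantitative reading of the law (a sketch in Ashby’s log-variety terms; the letters are my shorthand, not notation from the book): write V(D) for the variety of disturbances the environment can produce, V(R) for the variety of responses a regulator can make, and V(O) for the variety of outcomes that result. Then:

```latex
% Law of Requisite Variety, in log-variety form (one common formalization):
% V(D) = variety of disturbances, V(R) = variety of the regulator's
% responses, V(O) = variety of outcomes.
V(O) \geq V(D) - V(R)
```

Holding outcomes steady means driving V(O) toward zero, which requires V(R) ≥ V(D). Only variety can absorb variety: a system that can respond in more ways survives more kinds of disturbance.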
The environment changes, nature evolves and remembers, expanding its diversity to meet the diversity of environmental conditions.
Already we can begin to see certain themes that are related to nature’s resilience:
Decentralization
Redundancy
Diversity
Adaptability
These qualities make interesting contrasts to the qualities we often see expressed in modern management and machine production:
Centralize for economies of scale
Eliminate redundancy as waste, cut costs
Standardize on approach
Restrict range of motion as a source of error
Where machine production optimizes for the steady state, nature optimizes for change. Decentralization, redundancy, diversity, adaptability might be inefficient during stable periods, but together they create resilience. And it is only a matter of time before you encounter a forest fire, pandemic, or some other crisis. To be efficient is to be fragile, and to be fragile, over the long run, is to go extinct.
Nature has been doing this for 3.7 billion years, and probably has a few good ideas about survivability.
Back to Paul Baran, and his survivable network.
So, traditional telecommunication networks were designed around a hierarchy of switching stations. Many users shared one local switching office. To make a call across the country, you would route from your local office up to a regional office, then to a national office, then back down the hierarchy. Like most hierarchical systems, it was cost-effective, because it shared infrastructure between many users and avoided wasteful duplication. It was also easy to understand and manage by hand.
The problem was, this hierarchical network—like all hierarchical networks—was vulnerable to targeted attack. Knock out a single node and you cut off lots of users, even whole regions.
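To make that failure mode concrete, here’s a toy sketch in Python (the office names and topology are invented for illustration; real networks had more tiers). In a tree, every call between two offices has exactly one path, so losing one interior node severs every route through it:

```python
# A toy sketch of hierarchical circuit-switched routing. The office names
# and topology are invented; each office knows only its parent.
PARENT = {
    "local-SF": "regional-west",
    "local-LA": "regional-west",
    "local-NYC": "regional-east",
    "regional-west": "national",
    "regional-east": "national",
}

def route(a, b):
    """Climb from office `a` toward the root until we hit an office that is
    also an ancestor of `b`, then descend. There is exactly one such path."""
    def ancestors(x):
        chain = [x]
        while x in PARENT:
            x = PARENT[x]
            chain.append(x)
        return chain
    up, down = ancestors(a), ancestors(b)
    meet = next(office for office in up if office in down)
    return up[: up.index(meet) + 1] + list(reversed(down[: down.index(meet)]))

print(route("local-SF", "local-NYC"))
# ['local-SF', 'regional-west', 'national', 'regional-east', 'local-NYC']
# Remove "regional-west" and every call out of SF or LA fails:
# one path, one single point of failure.
```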
This was not a comfortable position to be in during the height of the Cold War, so AT&T responded by proposing a new network called AUTOVON.
AUTOVON aimed to protect the network the way you might protect Michelangelo’s David, by encasing it in bricks. Switching stations would be placed in hardened locations, usually underground, away from cities. AUTOVON would evenly distribute many stations to increase network redundancy. Yet the whole system was still centrally controlled.
Although AUTOVON had nodes distributed throughout the system, control of those nodes was concentrated in a single operations center, where operators monitored warning lights, analyzed traffic levels and controlled system operations. If traffic had to be rerouted, it was done manually: operators at the control center would make the decision and then contact the operators at the switching nodes with instructions to change routes. (Abbate, 1999. Inventing the Internet)
Baran flipped all of this on its head. Instead of a static network with hardened switching stations, he proposed a dynamic network with disposable routers that could freely join or leave. Instead of centralized routing, he proposed dynamic ad-hoc software-based routing.
Messages would be broken up into little packets, and each packet would be wrapped in a header envelope describing its desired destination. A router receiving a packet would independently choose where to send it next, like gossip. Packets would be passed across the network like hot potatoes until they found their destination. Packet switching!
Now it didn’t much matter if a router was blown up. Routes were ad-hoc, so the network could dynamically route around the damage. New routers could freely join. The network could adapt to new conditions.
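Here’s the same toy exercise for Baran’s design (again, the topology and names are invented, and real packet switching uses routing tables and time-to-live counters rather than pure random gossip). Each router knows only its own neighbors, and forwards each packet independently:

```python
import random

# A toy packet-switched mesh (topology invented for illustration). Each
# router knows only its own neighbors; there is no central routing table.
NETWORK = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E"},
    "E": {"C", "D"},
}

def send_packet(network, source, destination, max_hops=50):
    """Forward one packet hop by hop. Each router independently picks the
    next hop from its surviving neighbors, like gossip."""
    node, path = source, [source]
    for _ in range(max_hops):
        if node == destination:
            return path
        neighbors = network.get(node, set())
        if not neighbors:
            return None  # dead end: all of this router's links are gone
        # Hand off directly if the destination is adjacent, else pass the
        # hot potato to a random neighbor.
        node = destination if destination in neighbors else random.choice(sorted(neighbors))
        path.append(node)
    return None  # packet dropped after too many hops

def destroy(network, node):
    """Blow up a router: remove it and every link that touched it."""
    network.pop(node, None)
    for neighbors in network.values():
        neighbors.discard(node)

print(send_packet(NETWORK, "A", "E"))  # e.g. ['A', 'C', 'E']
destroy(NETWORK, "C")
print(send_packet(NETWORK, "A", "E"))  # routes around the damage, e.g. ['A', 'B', 'D', 'E']
```

The random walk here is just the simplest way to show the property that matters: delivery doesn’t depend on any one node, so destroying a router changes the path, not the outcome.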
What is strange is that by optimizing for decentralization, redundancy, diversity, adaptability, Baran’s network design created the conditions for permissionless innovation and the vibrant commercialized internet we have today.
Maybe this is not so strange. Nature does it. It’s the original innovator. And really innovation is just another word for evolvability, and evolvability enhances survivability.
Packet switching made the network decentralized and survivable, but centralization has re-emerged, this time one layer up, at the application layer. The web ecosystem manifests the same vulnerabilities that were present in hierarchical circuit-switched telephony. If Wikipedia goes down, I lose access to the world’s information. If Okta gets hacked, so do we all. If AWS goes down, it takes much of the internet with it.
This recentralization has been driven by the very economic forces that were unleashed by the internet’s decentralized permissionless architecture. So it goes. Recentralization is inevitable. We see it in nature, too.
Still, this recentralization is on top of a substrate that is fundamentally permissionless. That seems significant. I try to picture how the internet might have played out over centralized circuit switching. No decentralization would mean no permissionless innovation would mean no bottom-up evolutionary product discovery. I bet a centralized internet would have looked like cable TV.
Perhaps there is a rule of thumb here? If you decentralize, the system will recentralize, but one layer up. Something new will be enabled by decentralization. That sounds like evolution through layering, like upward-spiraling complexity. That sounds like progress to me.