Project Operational Autonomy
Let's start with some statements that pass for consensus at the average cocktail party in San Francisco.
- AGI is here / imminent / coming.
- Robots will be everywhere.
- AI agents will operate autonomously.
- Humans will want to upgrade, connect, upload, or merge with AI.
Ok, great. I'm at least not at the wrong party. These are all great starting points. The frustrating part is that we rarely get any further. The conversation drifts off into speculation about what it will feel like to be your upload, or individual plans for escaping the permanent underclass. Let's take these claims seriously and assume some version of this future arrives within venture-scale timelines. What comes next?
A Different Structure
Both the internet and the global economy begin to take on a very different structure.
- The agent population on the internet exceeds the human population.
- Agents can move between digital systems and physical embodiments.
- An agent economy becomes possible.
If the agent population on the internet exceeds the human population, the internet stops being primarily a network of humans interacting with software. It becomes a network of agents interacting with other agents. Large populations of actors rarely remain uniform for long. As the population grows, different actors begin to specialize. Some focus on research. Others coordinate systems, manage infrastructure, negotiate transactions, or operate robotics platforms. Over time they accumulate context, memory, and responsibilities. At that point, agents can no longer remain interchangeable.
In other words, agents begin to individuate. Individuation does not mean personhood. It simply means persistent operational identity. Agents that perform useful work cannot remain stateless processes spun up and discarded on demand. They need continuity — memory, roles, reputation, and relationships with other systems.
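That continuity is easier to see in a concrete form. Here is a minimal sketch of what a persistent operational identity might hold; the field names and the reputation update rule are invented for illustration, not drawn from any existing framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    # Persistent operational identity: survives across sessions and hosts,
    # unlike a stateless process spun up and discarded on demand.
    agent_id: str                                          # stable identifier, not a process ID
    roles: list[str] = field(default_factory=list)         # e.g. "research", "logistics"
    memory: dict[str, str] = field(default_factory=dict)   # accumulated context
    reputation: dict[str, float] = field(default_factory=dict)  # per-counterparty trust

    def record_interaction(self, counterparty: str, outcome: float) -> None:
        """Fold an interaction outcome (0.0 bad .. 1.0 good) into a running
        reputation score via an exponential moving average."""
        prev = self.reputation.get(counterparty, 0.5)      # neutral prior
        self.reputation[counterparty] = 0.9 * prev + 0.1 * outcome

agent = AgentIdentity(agent_id="agent-7f3", roles=["research"])
agent.record_interaction("agent-b21", outcome=1.0)
```

The point of the sketch is only that identity, roles, memory, and reputation have to live somewhere that outlasts any single execution of the agent.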
Historically, when human populations grew and individuals began specializing, new organizational structures emerged to coordinate them. Markets and corporations coordinate economic activity. Governments establish rules and governance. Religions shape shared norms and belief systems. These are all systems that allow large populations of actors to coordinate their behavior within a shared framework. Large populations of autonomous agents will likely require analogous coordination systems of their own: systems that allow them to exchange value, establish shared rules, coordinate behavior, and maintain trust across large networks of participants. The difference is that these systems will likely be digital from the beginning.
Once agents individuate and persist, we stop managing software and start coordinating populations of actors.
A New Class of Problems
And once that happens, a new class of problems appears.
- How do large populations of autonomous agents organize themselves?
- How do they communicate and coordinate with one another?
- How do agents maintain persistent and interpretable identity as they move between systems?
- How do they accumulate memory, roles, and reputation over time?
- How do humans understand and observe what these agents are doing?
- How do humans interact with large heterogeneous populations of agents?
- And how do agents operate reliably in the physical world?
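The observability question hints at what some of the earliest infrastructure might look like: structured, human-readable records of what agents actually did. A toy sketch, assuming a hypothetical event schema (a real system would add signing, correlation IDs, and retention policy):

```python
import json
import time

def log_agent_event(agent_id: str, action: str, detail: dict) -> str:
    """Emit one structured audit record for an agent action.

    The schema here is invented for illustration; the point is that
    agent activity is recorded in a form humans and tools can query.
    """
    event = {
        "ts": time.time(),      # when it happened
        "agent": agent_id,      # who did it
        "action": action,       # what kind of action
        "detail": detail,       # action-specific payload
    }
    return json.dumps(event, sort_keys=True)

line = log_agent_event(
    "agent-7f3",
    "negotiate.offer",
    {"counterparty": "agent-b21", "price": 12.5},
)
```

One line per action is trivially simple, but it is the kind of primitive that makes large agent populations observable at all.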
This gives us a pretty good short-term roadmap. A whole stack of systems, protocols, infrastructure, and software will emerge to make these problems tractable. Heck, we'll probably have the first generation of agents helping build and define those systems themselves.
But that's just the beginning. Once that infrastructure stabilizes, we should expect a shift from agents operating software systems to agents participating in the economy.
These populations will be far too large and operate far too quickly for humans to observe or manage directly. This is where things should get extremely strange.
Agents become a productive population in their own right. For most of history, the number of economically active entities has been tied to the number of humans. The Industrial Revolution multiplied the output of individual workers, but the workers themselves were still human. That constraint disappears. Once agents begin performing real work, the productive population of civilization can grow independently of the human population. The number of economically active entities doesn't just increase — it explodes.
And almost none of our existing systems are designed for that world. Most software assumes the primary actors are humans. Pricing models are built around seats and users. Interfaces are designed for human interaction. Protocols assume human latency, human judgment, and human scale. Many systems also assume that non-human activity is adversarial or low-quality — bots to be filtered, throttled, or blocked rather than participants in the system.
An obvious question is why existing systems wouldn't simply adapt. In many cases they will try. But systems designed for large populations of autonomous agents may end up looking very different from those built for humans.
The difference becomes obvious once agents begin operating at population scale.
- A company might employ ten humans and ten thousand agents.
- A marketplace might contain millions of autonomous participants.
- Entire supply chains might be coordinated by agents negotiating with other agents.
At that point the internet begins to look less like a network of software services and more like an economy of autonomous entities. And the physical world starts to look less like a network of humans and more like a heterogeneous network of intelligences.
When Does This Become a Market?
Markets like this don't appear because someone predicts the future. They appear when a capability becomes operationally necessary.
Several things have to happen at once.
First, agents have to become capable of performing real work. Not demos. Not copilots. Systems that can reliably complete multi-step tasks, interact with real systems, and produce economically useful output.
Second, the cost of intelligence has to collapse. Running a handful of agents is interesting. Running thousands or millions is only viable once the cost of inference drops far enough that operating large populations is economically unremarkable.
Third, agents have to be able to coordinate with one another. The real shift isn't a single capable agent, but systems where agents perform work in concert — researching, delegating, negotiating, and completing tasks together.
And fourth, the exchange of information between agents has to be faster and cheaper than full reasoning cycles. If every interaction requires generating tokens through a large model, coordination becomes too slow and too expensive to scale.
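One way to see the fourth condition: most inter-agent exchanges can be structured messages handled without invoking a model at all, with full reasoning reserved for the rare cases that need it. A toy cost model (the message types and cost figures are invented for illustration):

```python
# Toy cost model: structured message handling vs. full reasoning cycles.
STRUCTURED_COST = 0.0001   # parsing and routing a typed message (invented number)
REASONING_COST = 0.05      # a full model reasoning pass (invented number)

def handle(message: dict) -> float:
    """Route one inter-agent message; return the cost incurred."""
    known_types = {"price_quote", "status_update", "task_ack"}
    if message.get("type") in known_types:
        return STRUCTURED_COST   # fast path: no model call needed
    return REASONING_COST        # fallback: escalate to full reasoning

# 95% of traffic is structured, 5% needs real reasoning.
traffic = [{"type": "price_quote"}] * 95 + [{"type": "freeform"}] * 5
total = sum(handle(m) for m in traffic)
```

Even in this crude sketch, total cost is dominated by the handful of reasoning calls; if every one of the hundred interactions required a full reasoning pass, coordination cost would be more than an order of magnitude higher.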
When these conditions line up, something new appears. Organizations stop deploying individual tools and start operating populations of autonomous actors. That's the moment when the problems in this essay stop being theoretical and become operational.
Are We There Yet?
Not quite.
Operational autonomy doesn't arrive all at once. It emerges gradually, as agents become capable of performing real work, moving between systems, coordinating with one another, and persisting over time. The early versions will be messy. Systems will break. Agents will fail in strange ways. Entire categories of software and protocols will need to be invented along the way.
But the shift has already begun. The only real way to discover what a world of operational autonomy looks like is the same way we discovered every other major computing shift:
by building it.