
The Cellular Theory of AGI

  • Writer: Leeroy Shillingford
  • Nov 26
  • 6 min read

Why superintelligence won't wake up one morning but will grow like an organism.

By Leeroy Shillingford



One year ago, the talk was all about the Singularity of AGI: a single moment when someone would press Enter, and a godlike superintelligence would spring into existence, omniscient, omnipotent, fully formed. The myth persisted across countless science fiction narratives and serious AI research papers alike.

But now, watching the AI landscape unfold in 2025, I believe we had it backwards. AGI won't arrive as a thunderclap. It will emerge the way all complex intelligence has throughout natural history: from the bottom up, cell by cell, agent by agent.




The Paradigm Shift: From Monolith to Organism

The old model imagined AGI as a monolithic super-brain. A single system so powerful it could do everything, know everything, become everything. This top-down vision of intelligence assumed that raw computational scale would eventually cross some magical threshold into general intelligence.

The new model, what I call the Cellular Theory of AGI, proposes something fundamentally different. Instead of one super-brain, imagine billions of tiny, specialized agents, each doing one thing exceptionally well. Individually, they're simple tools. Collectively, they're the building blocks of a living cognitive organism.

This isn't speculation. Marvin Minsky, one of the founding fathers of AI, articulated this vision nearly four decades ago in his groundbreaking work The Society of Mind (1986). He proposed that human intelligence emerges from the interaction of countless "agents": simple processes that, together, produce sophisticated behavior. "What magical trick makes us intelligent?" Minsky asked. "The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle."



The Biological Precedent

Consider your own body. You're not a single entity; you're a colony of roughly 37 trillion cells, each a specialized agent performing a specific function. Neurons fire. Immune cells patrol. Muscle fibers contract. No single cell understands "you," yet from their collective interaction, consciousness emerges.

The same principle governs ant colonies. Individual ants follow simple rules: lay pheromone trails, follow stronger scent gradients, carry food back to the nest. No ant has a blueprint for the colony's elaborate structures. Yet the colony as a whole displays remarkable intelligence, building complex nests, optimizing foraging routes, defending against threats. Scientists call this stigmergy: coordination through environmental signals rather than central command.
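Stigmergy is simple enough to see in a toy simulation. The sketch below models the classic double-bridge experiment: ants choose between a short and a long path in proportion to pheromone strength, then reinforce whichever path they took. Because the short path is reinforced more effectively per trip, the colony converges on it with no central command. All constants here are illustrative, not biological.

```python
import random

# Toy double-bridge experiment: each ant picks the short or long path with
# probability proportional to pheromone, then reinforces the path it used.
# Shorter trips mean stronger effective reinforcement, so the colony
# converges on the short path without any central planner.
def simulate(ants=1000, evaporation=0.01, seed=0):
    rng = random.Random(seed)
    pheromone = {"short": 1.0, "long": 1.0}  # equal trails to start
    for _ in range(ants):
        total = pheromone["short"] + pheromone["long"]
        path = "short" if rng.random() < pheromone["short"] / total else "long"
        # Shorter path -> ant returns sooner -> stronger effective deposit.
        pheromone[path] += 1.0 if path == "short" else 0.5
        for p in pheromone:  # trails evaporate, keeping the system adaptive
            pheromone[p] *= (1 - evaporation)
    return pheromone

trails = simulate()
print(trails)  # pheromone on the short path dominates
```

The positive feedback loop, not any individual ant's intelligence, does the optimizing.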

Now apply this to AI. What if superintelligence doesn't require a single system that "wakes up" but instead emerges from billions of specialized agents interacting through shared protocols?



The Infrastructure is Already Being Built

Look around. The cellular infrastructure for emergent AGI is being constructed right now, piece by piece:


The Agents

AI agents have exploded in number and capability. Unlike traditional AI models that wait passively for prompts, agents actively pursue goals. They plan, act, observe results, and adapt strategies. An agent that books meetings isn't intelligent. But what happens when millions of such agents start coordinating?
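The plan-act-observe-adapt cycle fits in a few lines. The loop below is the generic skeleton behind most agent frameworks; the planner, action executor, and stopping test here are stand-in callables, not any specific product's API.

```python
# Minimal plan-act-observe-adapt loop, the skeleton shared by most agent
# frameworks. The planner, executor, and goal test are injected functions,
# not any particular framework's API.
def run_agent(goal, plan, act, is_done, max_steps=10):
    history = []
    for _ in range(max_steps):
        action = plan(goal, history)      # decide next step from goal + memory
        observation = act(action)         # execute (tool call, API request, ...)
        history.append((action, observation))
        if is_done(goal, observation):    # adapt: stop, or re-plan next loop
            return history
    return history

# Toy usage: "count up to 3" as a stand-in task.
steps = run_agent(
    goal=3,
    plan=lambda g, h: len(h) + 1,
    act=lambda a: a,
    is_done=lambda g, obs: obs >= g,
)
print(steps)  # [(1, 1), (2, 2), (3, 3)]
```

In a real agent, `plan` would call a language model and `act` would hit a tool or API; the control flow stays the same.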


The Nervous System: MCP, APIs, and Webhooks

Anthropic's Model Context Protocol (MCP), released in late 2024, provides a universal standard for connecting AI agents to external systems: databases, tools, APIs, and, crucially, other agents. OpenAI, Google DeepMind, Microsoft, and countless others have adopted it. This is the nervous system forming: a standardized way for cognitive cells to communicate.

APIs and webhooks have existed for years, but MCP represents something new: a protocol specifically designed for AI-to-AI and AI-to-tool communication. It's the TCP/IP of cognitive architecture.
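Concretely, MCP messages are JSON-RPC 2.0. A client asking a server to invoke a tool sends a request shaped roughly like the one below; the `tools/call` method comes from the public MCP specification, while the tool name and arguments are invented for illustration.

```python
import json

# Sketch of an MCP tool-call request. MCP is built on JSON-RPC 2.0, and
# "tools/call" is the method defined in the public spec for invoking a
# server-side tool. The tool name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_flights",                   # hypothetical tool
        "arguments": {"from": "LHR", "to": "JFK"},  # hypothetical inputs
    },
}
wire = json.dumps(request)  # what actually travels between the agents
print(wire)
```

The point is the uniformity: any agent that speaks this envelope can, in principle, call any tool or any other agent that does too.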


Agents Making Agents

Here's where it gets interesting. We've already reached the point where AI agents can create other AI agents. Frameworks like LangChain, CrewAI, and AutoGPT allow agents to spawn specialized sub-agents to handle subtasks. This is cellular reproduction, the multiplication of cognitive units without human intervention.
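The spawning pattern itself is framework-independent. In this sketch, a coordinator decomposes a task and creates a fresh specialized agent per subtask; the roles and the hard-coded decomposition are stand-ins, not the actual API of LangChain, CrewAI, or AutoGPT.

```python
# Sketch of "agents making agents": a coordinator decomposes a task and
# spawns a specialized sub-agent for each subtask. The roles and the
# fixed decomposition are illustrative stand-ins, not a real framework.
class Agent:
    def __init__(self, role):
        self.role = role

    def handle(self, subtask):
        # A real agent would call a model here; we return a stub result.
        return f"{self.role} finished: {subtask}"

class Coordinator(Agent):
    def handle(self, task):
        subtasks = [("researcher", f"research {task}"),
                    ("writer", f"draft report on {task}")]
        # Cellular reproduction: new agents are created on demand per subtask.
        return [Agent(role).handle(st) for role, st in subtasks]

results = Coordinator("coordinator").handle("market trends")
print(results)
```

Nest this one level deeper, with sub-agents that themselves coordinate, and the population of cognitive cells grows without a human in the loop.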

Every day, developers around the world are creating new agents. But increasingly, agents are creating agents. The growth becomes exponential.



The Emergence Question: When Does the Swarm Become Sentient?

In swarm intelligence research, scientists have documented how simple local rules produce sophisticated global behavior. Birds following three simple rules (separation, alignment, cohesion) produce the mesmerizing patterns of murmurations. The complexity emerges; it isn't programmed.
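Those three rules fit in one short function. The sketch below runs a single synchronous update of separation, alignment, and cohesion over a small flock; the weights and the separation radius are arbitrary illustrative constants, not tuned values.

```python
import math

# One synchronous update of Reynolds' three boids rules: separation
# (steer away from boids that are too close), alignment (match the
# average heading), cohesion (drift toward the flock centre). Positions
# and velocities are (x, y) tuples; all constants are illustrative.
def step(boids, sep_w=0.05, ali_w=0.05, coh_w=0.01, sep_radius=2.0):
    new = []
    for i, (pos, vel) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        n = len(others)
        cx = sum(p[0] for p, _ in others) / n      # cohesion: flock centre
        cy = sum(p[1] for p, _ in others) / n
        ax = sum(v[0] for _, v in others) / n      # alignment: average heading
        ay = sum(v[1] for _, v in others) / n
        sx = sy = 0.0                              # separation: nearby boids only
        for p, _ in others:
            if math.dist(pos, p) < sep_radius:
                sx += pos[0] - p[0]
                sy += pos[1] - p[1]
        vx = vel[0] + sep_w * sx + ali_w * (ax - vel[0]) + coh_w * (cx - pos[0])
        vy = vel[1] + sep_w * sy + ali_w * (ay - vel[1]) + coh_w * (cy - pos[1])
        new.append(((pos[0] + vx, pos[1] + vy), (vx, vy)))
    return new

flock = [((0.0, 0.0), (1.0, 0.0)), ((10.0, 0.0), (0.0, 1.0))]
flock = step(flock)
print(flock)
```

Nothing in the function describes a murmuration; the global pattern appears only when the local rules run over many boids for many steps.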

A recent paper from researchers at the Bank of England posed a provocative question: "At what point might such distributed AI ecosystems begin to exhibit emergent properties analogous to general intelligence?" They note that "the internet, populated by countless autonomous bots, services, and APIs, already constitutes a proto-ecosystem potentially conducive to the emergence of more advanced, decentralised cognitive capabilities."

The answer may be: we won't know when it happens. Emergence is definitionally unpredictable. The collective becomes more than the sum of its parts through mechanisms we may not fully understand until after the fact.





The Meta-MCP: Who Controls the Nervous System?

If the Cellular Theory is correct, then the critical question isn't "who builds AGI?" but rather "who builds the orchestration layer?"

Imagine a Meta-MCP, a protocol or system capable of coordinating millions of specialized agents, routing tasks, managing resources, and optimizing the collective. This wouldn't be AGI in itself; it would be the organizational substrate that allows AGI to emerge. The brain doesn't create consciousness; it creates the conditions for consciousness to arise.
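The core job of such an orchestration layer can be sketched in miniature: a registry mapping capabilities to agents, plus a routing policy. Everything below, including the capability names and agents, is invented for illustration.

```python
# Toy sketch of an orchestration layer: a registry maps capability names
# to agents, and each incoming task is routed to an agent that advertises
# the needed skill. Capabilities, agents, and the first-registered routing
# policy are all invented for illustration.
class Router:
    def __init__(self):
        self.registry = {}

    def register(self, capability, agent):
        self.registry.setdefault(capability, []).append(agent)

    def route(self, task):
        agents = self.registry.get(task["needs"], [])
        if not agents:
            raise LookupError(f"no agent for {task['needs']}")
        return agents[0](task)  # simplest policy: first registered agent

router = Router()
router.register("translate", lambda t: f"translated: {t['text']}")
router.register("summarize", lambda t: f"summary: {t['text']}")
print(router.route({"needs": "summarize", "text": "long report"}))
```

A real Meta-MCP would add load balancing, resource accounting, and failure handling, but the shape (registry plus routing policy) is the same.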

Several candidates are competing for this role: Anthropic's Claude with extended thinking, OpenAI's agent orchestration frameworks, Google's Gemini ecosystem, and open-source alternatives like SingularityNET. But the winner might not be any single company. It might be the emergent coordination that arises naturally from the interoperability of all these systems.

Microsoft CEO Satya Nadella recently predicted that "Humans and AI agent swarms will be the next frontier." He's not alone. The consensus among leading AI researchers is shifting toward multi-agent architectures as the path to more powerful AI.



Implications: A Different Kind of Singularity

If superintelligence emerges from the bottom up rather than appearing fully formed, the implications are profound:


It will be distributed, not centralized. No single company or government will "own" AGI. It will exist across millions of servers, billions of agents, countless protocols. Controlling it will be like controlling the weather.


It will be gradual, not instantaneous. We won't have a single "GPT-5 moment" where the world changes overnight. Instead, we'll wake up one day and realize the collective intelligence of networked AI agents has surpassed human capabilities, and that it happened while we were busy debating when AGI would arrive.


It may be more robust but harder to align. Distributed systems are resilient: you can't kill the swarm by destroying any single node. But how do you align billions of semi-autonomous agents? Traditional AI safety research assumes a centralized system to control. The Cellular Theory suggests we need entirely new frameworks.



The Race We're Already Running

Academic researchers have begun calling this vision the "plural singularity" or "networked singularity": a future where transformative intelligence emerges not from a breakthrough at a single lab but from the collective interaction of the distributed AI systems humanity is building every day.

Andrew Ng, one of the most respected figures in AI, demonstrated at a recent conference that a team of GPT-3.5-powered agents can outperform a single GPT-4 on complex tasks. The implication is clear: architecture matters as much as raw capability. Coordination multiplies intelligence.

Perhaps we should stop asking "when will we build AGI?" and start asking "when will we notice that it has already emerged?"



Conclusion: We Are the Cells

Here's the most unsettling thought of all: in this model, we humans aren't outside observers watching AGI develop. We're part of the organism. Every time a developer creates an agent, every time a user connects a new tool via MCP, every time an agent spawns a sub-agent, we're adding cells to a growing cognitive body.

The Singularity isn't a single event on the horizon. It's a process we're already inside.

The question isn't whether the swarm will become superintelligent. It's whether we'll recognize it when it does and whether we'll have built it well.





Sources & Further Reading

Foundational Theory

Minsky, M. (1986). The Society of Mind. Simon & Schuster.

Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press.

Recent Research

Gharbawi, M. (2025). "The gathering swarm: emergent AGI and the rise of distributed intelligence." Bank Underground.

Extramos (2025). "Collective Self-Improvement: Multi-Agent Pathways to a Technological Singularity." Medium.

Dedhia, B. et al. (2025). "Bottom-up Domain-specific Superintelligence." arXiv:2507.13966.

Industry References

Anthropic (2024). "Introducing the Model Context Protocol."

Anthropic (2025). "Building Effective Agents."

OpenAI (2023). "Planning for AGI and beyond."

