Elephants Don’t Play Chess: Why AI Should Learn Like Toddlers, Not Gods
Reflecting on Rodney Brooks’s radical AI philosophy that true intelligence arises from worldly experience, not from top-down design.
Intro: In 1990, roboticist Rodney Brooks penned an essay with a curious title: “Elephants Don’t Play Chess.” In it, he challenged the AI establishment and proposed an alternative route to artificial intelligence. Rather than hand-crafting high-level reasoning or simulating a grandmaster’s intellect in code, Brooks argued for a humbler, ground-up philosophy. He suggested that intelligence should emerge through a being’s physical interaction with the real world – much like how a child learns – instead of being engineered from on high like some grand design. As someone who fully agrees with Brooks’s perspective, I find his ideas not only refreshing but profoundly right. This post is a thoughtful engagement with Brooks’s essay and philosophy, a critique in the sense of critical alignment. I’ll explore why mimicking human learning through experience is a simpler, more elegant path to AI, using Brooks’s ideas to provoke a rethinking of our approach to artificial minds.
The “God Complex” in Classical AI
For decades, much of AI research had what you might call a “God complex.” Researchers attempted to play creator, designing the human mind from scratch – often without truly understanding the intricacies of how our own intelligence works. Classical AI (sometimes called symbolic AI or GOFAI, “Good Old-Fashioned AI”) focused on top-down design: explicitly modeling the world with symbols, logic rules, and exhaustive planning. The ambition was lofty: if we just program in enough knowledge and clever algorithms, voilà! – human-level intelligence would emerge as if breathed into the code. It’s an approach akin to trying to sculpt a person out of clay without ever observing a real human. Not surprisingly, Brooks saw fundamental problems with this. He argued that the traditional symbol-system hypothesis – the idea that intelligence can be built from abstract symbol manipulation alone – is “fundamentally flawed”, relying on “unfounded great leaps of faith” that such systems could somehow jump to human-level smarts. In other words, classical AI expected a miracle: build a disembodied brain in cyberspace and assume it will magically exhibit all the common sense and adaptability of a real creature.
This top-down ethos often resulted in brittle, narrow programs. Sure, an early AI could be taught to play grandmaster-level chess or prove logical theorems, but take it out of that tiny microworld and it was as helpless as a newborn. It was as if we designed an all-knowing deity of chess, yet this “god” couldn’t tie its shoes or navigate a living room. Brooks provocatively highlighted this mismatch with his essay’s title: an elephant might not play chess, but that doesn’t make it unintelligent. Elephants (and dolphins, dogs, toddlers, and the rest of us) demonstrate intelligence through coping with the real world’s complexity, not through excelling at abstract games. To Brooks, it was unfair – even absurd – to dismiss the intelligence of an elephant simply because it fails at a human-invented board game. Yet classical AI, in a sense, was busy trying to build disembodied brains that could play chess (or do algebra, or plan perfectly) while ignoring the richer forms of intelligence exhibited by embodied creatures in messy environments.
So, the “godlike” approach of classical AI was to start at the top: recreate the mind’s highest faculties (logic, reasoning, strategy) first, and hope the rest (the common sense, the perception, the intuition) would somehow fill itself in. Brooks turned this idea on its head. Instead of playing God, he suggested we should play Nature – or even “play parent” – and let our artificial minds grow up from humble beginnings.
Learning Like a Toddler: Brooks’s Bottom-Up Revolution
Brooks’s philosophy invites us to imagine AI development in the way a toddler learns about the world. Think about how a small child gains intelligence: not by reading an instruction manual on life, but by crawling around, touching things, watching, falling down and getting up again. A toddler’s mind is not designed top-down by a deity-like parent; it is shaped by experience – a continuous loop of perception and action. Brooks argued that AI should follow a similar bottom-up path, literally grounded in the physical world. In his words, “the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough.” This deceptively simple observation carries a Zen-like wisdom: instead of programming a complicated internal model of the world, just let the AI look at the world itself! It’s always accurate and current, after all. A robot doesn’t need an elaborate 3D simulation of a room in its head if it can continually sense the real room around it.
Brooks formalized this idea as the physical grounding hypothesis. It states that to build a truly intelligent system, the system’s concepts and knowledge must ultimately derive from physical interactions. In practical terms, that means an AI needs sensors (to see, hear, feel the real world) and actuators (to move and affect things in the real world). Typed inputs and outputs won’t cut it – the AI has to live in the world, not in a sterile data stream. This was a radical break from the norm. Brooks proposed constructing intelligence from the ground up, by building simple behavior modules that handle basic sensory-motor patterns and then layering these to achieve more complex behavior. Each module might be trivial on its own (avoid obstacles, follow a corridor, find a charging station), but together they could produce surprisingly sophisticated, adaptive conduct – emergent intelligence, in effect.
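To make the layering concrete, here is a minimal sketch (in Python, with invented sensor fields and behavior names) of how a few reflex-like modules can be stacked and arbitrated by priority. It is a simplification for illustration, not Brooks’s actual subsumption architecture, which wired behaviors together as networks of augmented finite-state machines on physical robots.

```python
# A minimal, illustrative sketch of layered behavior-based control.
# Sensor fields and behavior names are invented for this example.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensors:
    obstacle_ahead: bool
    corridor_visible: bool
    battery_low: bool

def avoid_obstacles(s: Sensors) -> Optional[str]:
    """Most basic layer: reflexively turn away from anything in the way."""
    return "turn_left" if s.obstacle_ahead else None

def seek_charger(s: Sensors) -> Optional[str]:
    """Head for the charging station when the battery runs low."""
    return "go_to_charger" if s.battery_low else None

def follow_corridor(s: Sensors) -> Optional[str]:
    """Keep moving along a corridor when one is visible."""
    return "move_forward" if s.corridor_visible else None

# Behaviors are checked in priority order; the most urgent one that fires
# wins. (Real subsumption wiring is richer, but the spirit is the same.)
LAYERS = [avoid_obstacles, seek_charger, follow_corridor]

def decide(sensors: Sensors) -> str:
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
    return "wander"  # default exploratory behavior when nothing else fires

print(decide(Sensors(obstacle_ahead=False, corridor_visible=True, battery_low=True)))
# -> go_to_charger: the battery concern overrides corridor-following.
```

The point is that no module plans or models; each maps current sensing straight to an action, and the appearance of purposeful behavior comes from how the layers are arranged.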
To illustrate the power of this approach, Brooks often pointed to nature’s own experiment: evolution. Life on Earth did not start with genius apes contemplating philosophy. It began with single cells, then simple multicellular organisms, then fish, reptiles, mammals, and only very late in the game did creatures with human-level cognition appear. The vast majority of evolution’s 4+ billion-year R&D went into refining creatures’ ability to perceive and act in the physical world – finding food, avoiding hazards, navigating complex environments. Abilities like abstract reasoning, language, and chess-playing are an afterthought by comparison. As Brooks dryly notes, “problem solving, language, expert knowledge and reason, are all rather simple once the essence of being and reacting are available.” That essence is the knack for moving around in a dynamic world and sensing what you need to survive – the very skills evolution spent eons honing. In the same way, Brooks believed we should first get AI to handle the essentials of situated existence before worrying about high-level cognition. Teach the “toddler” robot to walk, to recognize objects that it bumps into, to adapt to changes – then you can expect it one day to do the AI equivalent of algebra. Skipping straight to the adult intellect is not just unrealistic; it misses the point of how intelligence comes to be.
Even Alan Turing, way back in 1950, hinted at this bottom-up insight. He famously suggested that instead of trying to program an adult mind outright, “why not rather try to produce one which simulates the child’s?” A child’s mind, once equipped with the ability to learn from experience, can grow into adult intelligence. Brooks took such ideas to heart and gave them robotic form. His lab in the late ’80s built insect-like robots that scurried around with no central brain, just layered simple behaviors. They couldn’t plan a vacation or play chess – but they could explore a cluttered room without crashing, something big AI programs of the time struggled to do. In one anecdote, Brooks contrasted a colleague’s slow, computation-heavy robot (which took hours to plan moves and could barely go a few meters) with the effortless agility of insects: a mere 100,000 neurons in a fly’s brain allow it to zip around, avoid predators, find mates – feats that the cutting-edge AI robots couldn’t come close to matching. This stark comparison was a “eureka” moment: nature was doing something right with its reactive, embodied approach that AI researchers were missing. Why not make our robots a bit more like insects (or toddlers) and a bit less like all-knowing chess computers?
The result of Brooks’s paradigm shift was the field of behavior-based robotics (or “nouvelle AI,” as he playfully called it). It was a rallying cry to stop obsessing over abstract intelligence and start building creatures – AI agents embedded in the real world, learning and adapting as they go. It’s the difference between raising an AI versus manufacturing one. Brooks effectively chose to raise toddlers, not design gods.
The World as the Best Model (Simplicity and Elegance in Letting Go)
One of Brooks’s most powerful ideas is elegant in its simplicity: let the world be the model. In traditional AI, enormous effort went into making internal models of reality – databases of facts, maps of environments, ontologies of common knowledge. Early AI planners would, for example, laboriously represent every piece of furniture in a room and reason about their positions before a robot took a single step. This is the top-down mentality: “we must recreate the world inside the computer’s head, in every detail, for it to act intelligently.” Brooks says, no, we don’t. The real world already contains all the detail we need – literally at our fingertips and eyeballs. Instead of pre-loading an AI with a god’s-eye blueprint of its world, just equip it with good enough perception to sense what’s relevant, when it needs it. The robot’s camera feed is its map; its touch sensors are its feedback. By doing this, we avoid one of the banes of classical AI: the internal model that is inevitably incomplete, often outdated, and expensive to maintain. As Brooks quipped, the world is always up-to-date and perfectly detailed – an infinite fidelity simulation that runs in real time. Why not use it?
This approach yields a beautiful simplicity. It means our AI or robot can be far less complicated internally. We don’t have to anticipate every scenario in code; the agent will discover scenarios by bumping into them (sometimes literally) and adjust accordingly. In a way, Brooks’s philosophy shifts some of the “intelligence work” out of the AI’s head and into the interaction between the agent and its environment. The complexity of the world doesn’t have to be mirrored by an equal complexity in the robot’s brain – the robot can sample the world’s complexity as needed. This mirrors how humans handle life’s vast richness: we don’t keep a detailed 3D model of every street in our hometown in our heads. We have just enough memory and perception (landmarks, cues) to navigate, and if something changes (a road is closed, a new building appears), we notice and adapt. Our knowledge is grounded in continuous sensing, not in a static internal replica of the universe. In AI design, letting go of the godlike need to pre-know everything is liberating. It means we can start with simple systems that grow in competence over time, each layer or learned skill adding to the next. As Brooks put it, classical AI tried to add more and more complex cognition on top of a weak foundation, whereas his approach builds a solid foundation of real-world savvy first, upon which higher abilities can later “emerge” naturally.
There is a striking elegance in this philosophy. By embracing the world as the ultimate model, we avoid the kludges and patches that plagued old AI programs struggling to handle unpredicted inputs. Brooks’s own robot “Herbert,” for instance, famously wandered an office collecting soda cans with no explicit internal map – it used the layout of the room itself as its memory, scanning for walls and doors afresh each moment. If a chair was moved or a new obstacle introduced, Herbert didn’t freak out that its internal map was wrong – it simply saw the new reality and adjusted. The simplicity of this cannot be overstated. It’s Ockham’s razor applied to AI: do not multiply internal entities beyond necessity. Where a classical AI would create dozens of variables to track an object’s possible states, a Brooks-style AI just checks the object again in the world. It trades exhaustive deliberation for real-time reaction, and in doing so often gains robustness.
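A toy sketch can make the contrast tangible. The sense_can_position() function below is a hypothetical stand-in for real perception, and the whole scenario is invented; the difference in flavor is between trusting a stored internal model (which can silently go stale) and simply looking again on every cycle, Herbert-style.

```python
# Illustrative contrast: a stored internal model vs. re-sensing the world.
# sense_can_position() is a pretend perception routine, not a real API.

import random

def sense_can_position():
    """Ask the (simulated) world where the soda can is *right now*."""
    return random.choice(["left", "ahead", "right", None])

# Classical flavor: record a model once and keep trusting it.
world_model = {"can_position": "ahead"}   # may silently go stale

def act_from_model() -> str:
    return "reach_" + world_model["can_position"]

# Brooks flavor: the world is the model; look again on every cycle.
def act_from_world() -> str:
    seen = sense_can_position()
    return ("reach_" + seen) if seen else "scan_again"

for _ in range(3):
    print("model says:", act_from_model(), "| world says:", act_from_world())
```

The model-driven agent keeps reaching for where the can used to be; the re-sensing agent acts on where the can actually is, or admits it needs to scan again.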
Embracing Humility: Why Brooks’s Vision Matters (Even Today)
Brooks’s ideas were a bit heretical in 1990, and he faced skepticism. Critics said his robots were too simple, or retorted “but your approach can’t do X or Y high-level task.” Brooks would acknowledge the limitations but remind us that every approach has limitations – we shouldn’t throw out the embodied approach just because it doesn’t play chess yet. After all, no one dismisses an elephant’s intelligence just because it can’t play chess. The goal is to let our AI elephants grow and eventually tackle the harder problems, once they’ve mastered living in the real world. In his essay, Brooks cleverly noted that traditional AI and nouvelle AI (his approach) were almost mirror opposites in their strategy. He summarized it like this:
• Traditional AI demonstrated sophisticated reasoning in toy domains, hoping those methods would scale up to the real world.
• Nouvelle AI (Brooks’s camp) demonstrated simple behaviors in the real world, hoping those would scale up to more sophisticated abilities.
Each side had hope as a compass, but Brooks’s hope, which I share, feels more grounded (literally!). It’s easier to trust that a being who can robustly navigate a noisy, dynamic environment might eventually learn higher reasoning, than to expect a virtuoso chess program to suddenly wake up and comprehend the physical world.
Today, Brooks’s influence can be seen in how we think about AI and robotics. The boom in robotics and the emphasis on learning from interaction (like reinforcement learning and embodied AI research) echo many of his principles. We’ve seen AI agents learn to walk, run, and play Atari games through trial and error – approaches that owe more to the “toddler method” than to classic top-down planning. And yet, the allure of the god-like AI hasn’t fully gone away. We still hear bold claims about engineered “AGI” (artificial general intelligence) that will just wake up one day, fully formed, in a server rack. Whenever I hear that, I channel Brooks and think: Perhaps we should send that supposed AI mind out to play in the dirt for a while first. Intelligence, human or artificial, is forged in the crucible of experience. To design a mind in isolation is to miss the very ingredient that makes minds work.
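For readers who want the “toddler method” in miniature, here is a toy trial-and-error learner in the spirit of the reinforcement-learning work mentioned above. The one-dimensional “walk to the goal” world, the parameter values, and the tabular Q-learning update are all illustrative choices on my part, not anything taken from Brooks’s essay.

```python
# A toy trial-and-error learner: tabular Q-learning on a 5-position line.
# Every number and name here is an illustrative choice, not Brooks's code.

import random

N_STATES, GOAL = 5, 4          # positions 0..4; reaching position 4 is rewarded
ACTIONS = (-1, +1)             # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def pick_action(state: int) -> int:
    """Mostly exploit what has been learned, but keep exploring a little."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(200):
    state = 0
    while state != GOAL:
        action = pick_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Learn from the outcome of each attempt, not from a hand-built plan.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy steps rightward toward the goal.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])
```

Nothing in this loop is told how to reach the goal; competence accumulates purely from acting, sensing the outcome, and adjusting, which is the whole point of the toddler method.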
In aligning myself with Brooks’s philosophy, I’m advocating for a bit of humility in how we create intelligence. It’s a humility that says: we humans might not be smart enough to design a mind from scratch (after all, we didn’t design our own brains), but we can create the conditions for a mind to develop. We can build AI like we raise children – with curiosity, incremental learning, and yes, by letting them make mistakes and learn from them. This approach is not just scientifically sound; it’s also wonderfully intriguing. It suggests an AI that surprises us, that grows in directions we might not have anticipated in our original blueprint. That is both slightly scary and delightfully exciting – it means our creations could one day truly evolve beyond what we explicitly program. But that’s the point: by giving up total control (the illusion of playing God), we gain the possibility of true, life-like intelligence.
Conclusion: Rodney Brooks’s “Elephants Don’t Play Chess” was more than a critique of 1980s AI; it was a manifesto for an AI rooted in reality and experience. Agreeing with Brooks, I believe the path to genuine intelligence in machines will not come from ever-bigger brains in jars concocted in code. It will come from AI that has a body (or at least sensors) and lives a life of interactions, however rudimentary at first. We should let the world teach our AIs what it means to be intelligent. Classical AI tried to play God, but it turned out that playing human – or playing animal – is a better start. By sending our AI “toddlers” out to play and learn, we follow a blueprint that nature knows well. After all, elephants don’t play chess, yet they thrive in their complexity. Perhaps the smartest thing we can do in AI is to relinquish the chessboard, step off the divine pedestal, and embrace the role of a patient teacher – or parent – to the nascent intelligences we create. In doing so, we allow them to find intelligence in the world itself, one curious step at a time.