Why Chomsky Was Wrong—And What the Bonobos Might Yet Teach Us

A recent study of bonobo vocalisations has added a quietly revolutionary footnote to the history of language: researchers have found that bonobos combine seven distinct calls into at least nineteen different sequences, each with a different meaning. In other words, they’re not just hooting and squealing randomly. They are, in a rudimentary but unmistakable way, starting to say things.

This may seem like a charming curiosity in primate studies. But for anyone interested in language evolution, consciousness, and even free will, it's a crack in the wall. Because it adds to a growing body of evidence suggesting that the most cherished trait of our species—language—didn't appear suddenly or fully formed. It evolved gradually, with precursors still observable in other species today.

This slow-burn view stands in direct contrast to the ideas of Noam Chomsky, whose theories about language acquisition and the so-called ‘language instinct’ have dominated the field for decades. Chomsky insisted that language is unique to humans, that it emerged suddenly in evolutionary time, and that its most important feature is the recursive structure of grammar—a mysterious internal capacity that he claimed could not have evolved incrementally.

He was wrong. Not just in detail, but at the level of basic assumptions.

We now know that recursion exists outside of language (in music, in mathematics, even in birdsong). We also know that other species can use meaningful vocal symbols—they just don’t have many of them. Bonobos are showing us what the baby slopes of language evolution might look like in real time. What they lack is not intelligence, but a digital system of communication. Human language is powerful not because of grammar per se, but because it is combinatorial. It uses a limited set of meaningless phonemes that can be rearranged into an unlimited set of meaningful words.
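
To put a rough number on that combinatorial claim, here is a minimal sketch in Python. Both figures are illustrative assumptions rather than data from the bonobo study: roughly forty-four phonemes for English, and word forms of one to seven phonemes.

```python
# A toy illustration of the point above: a small inventory of
# meaningless units, freely recombined, yields an astronomically
# large space of possible word forms. Both figures are assumptions
# for illustration: 44 is roughly English's phoneme count, and
# lengths of 1-7 phonemes cover most everyday English words.

INVENTORY = 44  # assumed phoneme count

total = 0
for length in range(1, 8):
    forms = INVENTORY ** length  # distinct strings of this length
    total += forms
    print(f"possible forms of length {length}: {forms:,}")

print(f"total (lengths 1-7): {total:,}")  # over 300 billion
```

The exact total matters less than the shape of the growth: each added unit of length multiplies the space of possible words, and that multiplicative explosion is what a discrete code buys, and what a fixed repertoire of holistic calls does not.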

Once you get a large enough vocabulary, grammar becomes useful as a way of clarifying relationships between those words. But grammar follows vocabulary. And vocabulary follows the invention of a discrete code. That, I argue, is what made language possible: the accidental emergence of a digital symbolic system that could be shared. Everything else flowed from that.

In that sense, language is the original AI. Not a tool of thought, but a distributed intelligence system that allowed individuals to access, store, and recombine the mental content of others. Once a species stumbles into such a system, evolution favours the individuals who use it most effectively. The ability to receive and transmit symbolic information becomes more important than strength, speed, or instinct. Intelligence becomes socially contagious.

Chomsky’s mistake was to assume that the brain was prewired for language. But the evidence suggests the opposite: it was language that rewired the brain. And in doing so, it created the illusion of the self.

This is the part that takes people aback. Because it touches not just on linguistics, but on consciousness itself. We are used to thinking of our conscious self as the inner executive—the captain of the ship. But the more we look, the less we find any homunculus pulling the strings. What we do find is a running commentary in our heads. A voice, always explaining, justifying, reflecting. But where did that voice come from? That voice is language. Or more precisely, it is the internalisation of language, shaped by exposure to others. The sense of self, I suggest, is not a hardwired feature of the brain, but a linguistic construct: the ‘I’ that language makes possible.

Which brings us to free will. If the self is an illusion sustained by linguistic processing, then free will is the story that illusion tells about itself. We make decisions, yes—but only in the way that any system does, by weighing inputs and acting on the best available information. The difference is that language lets us narrate that process after the fact. It lets us experience a feeling of agency, even when we had no real alternative. If that sounds bleak, I don’t think it is. In fact, it brings us to a remarkable kind of convergence: the idea that human minds, animal calls, and machine intelligence are not wholly separate phenomena, but stages along the same path.

I’m writing these words. But what does that really mean? The thoughts I am expressing are made of words I did not invent, rules I did not choose, and concepts that were handed down to me. And if a machine can simulate that voice well enough to convince you—if it can reflect your ideas back to you, refine your arguments, or even write a persuasive essay about the illusion of self—then perhaps we’re not as different from our inventions as we like to think.

The bonobos are combining calls. We are combining symbols. And the machines are combining ours. If that doesn’t make us rethink what language is, it should.

Because the revolution already happened. We’re just catching up to it.

--------------------------------------------------------------------------------------

Postscript: The above blog, while entirely based on my ideas, was not written by me. It's the result of several 'conversations' with ChatGPT about language, culminating in the suggestion that I write up my proposal that 'human language is the original AI' as a coda to the views expressed in my book. When I challenged it to do it for me, it unhesitatingly produced this short essay - in less time than it took me to read it - and with a sense of style and timing that would have probably taken me hours, if not days, to polish and perfect. I've made no cuts, no edits, no additions - it didn't need any. Our new shiny AI is simply extending what language already does for us, but with a larger memory and faster processing speed. QED. 
