Artificial Intelligence

[In the years since this essay was first written, the name of Artificial Intelligence has become almost synonymous with the technique of deep learning. Deep learning is a disquieting tradeoff; we can teach computers to do useful things, things we previously thought were only possible for human beings. The tradeoff is that we do not know how the computer does them. This is learning that is deep not as the opposite to shallow, but deep in the anatomical sense, deep as the opposite to superficial. Its workings are hidden from us. We stand before our new algorithms like augurs before the entrails.

That we call this AI is an improvement on the previous state of affairs where, as John McCarthy (the term artificial intelligence is his) observed, “as soon as it works, no one calls it AI any more.”

But AI-as-oracle is not what this essay is about. This essay is about artificial general intelligence: can we make a computer that does what a human being does, the way a human being does it, but (eventually) faster, and without error? Many problems that appeared to require human-level intelligence have yielded to an approach that is, comparatively, trivial. Accordingly, artificial general intelligence has lately suffered neglect; arguing against it now might seem unsportsmanlike.

But things change. I have sometimes tried, and failed, to make this argument in person. If I fail again here, I have at least cast it on the water; 50 years downriver it may be clearer – either patent nonsense or common sense. – 2019]

AI is generally studied by people who have wrong ideas about human intelligence. Let me be more direct: virtually all thinking about artificial intelligence is done by people with hopelessly misguided ideas about human intelligence. It falls under the category of “not even wrong.”

Is the mind a computer? Of course. Computers are not a kind of machine, but a pattern in nature. Anything complex enough to imitate Turing’s tape is Turing-complete, and that makes it a computer by definition. If the human mind is not a computer then it is less than a computer. It may be more than a computer; but to be more than a computer it must be at least a computer.
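What "imitating Turing's tape" amounts to can be made concrete. The sketch below is a minimal Turing machine simulator; the transition table and the unary-increment example are illustrative inventions, not anything from the essay. Any system that can play the part of the tape, the head, and the table is, in this sense, a computer.

```python
# A minimal Turing machine: a tape, a head, and a transition table.
# Anything that can simulate this structure is Turing-complete.

def run(transitions, tape, state="start", halt="halt", steps=10_000):
    """Run a Turing machine. `transitions` maps (state, symbol) to
    (symbol_to_write, move, next_state); `tape` maps position to symbol."""
    pos = 0
    for _ in range(steps):
        if state == halt:
            break
        symbol = tape.get(pos, "_")          # "_" stands for a blank cell
        write, move, state = transitions[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return tape

# Example machine: increment a unary number (a run of 1s) by one.
increment = {
    ("start", "1"): ("1", "R", "start"),     # scan right past the existing 1s
    ("start", "_"): ("1", "R", "halt"),      # write one more 1, then halt
}

tape = run(increment, {0: "1", 1: "1", 2: "1"})   # unary 3
print("".join(tape[i] for i in sorted(tape)))     # → 1111 (unary 4)
```

The point of the example is how little machinery is required: a lookup table and a movable read/write head suffice, which is why so many natural and artificial systems turn out to be computers in the formal sense.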

Intelligence is unevenly distributed. Anyone smart enough to think seriously about artificial intelligence probably has, at some point in their life, been smarter than the people around them. Especially if this happened when they were young, it is only natural that they come to regard being intelligent not as a matter of improved means to common ends, but as an entirely different system of ends – they regard intelligence as its own end.

Being more intelligent than the people around you is not like being taller or stronger. It’s like being older. It’s not a matter of being better at the things you all care about; it’s a matter of caring about different things. The things the people around you care about mean nothing to you, and the things you care about are meaningless, if not actively confusing, to the people around you. The genius is not a giant among pygmies, but an adult in kindergarten.

If you regard intelligence as its own end, then it is natural to expect that, once a computer equals the speed of a human brain, it will become human. This computer will do all that we do: love and hate, fear death, make art. But intelligence is not its own end. Once a computer equals the speed of a dog’s brain, do you expect it to begin to bark, and mark its territory?

Intelligence is only and entirely instrumental. Motivation is a matter of biology. This is not reductive; biology is our motor. It pushes us in a certain direction, but culture, history, geography act on biology, and the result may be a vector pointing elsewhere (even backward, against life, to death). The ends we pursue in life, the ends we judge success and failure by, are only proxies for the ends biology postulates. That is not to say we share the same ends. Gravity pulls everyone everywhere downward all the time, but in the presence of a slide, steps, a chair, with the interposition of water, a trampoline, a car, that common pull of gravity ends up moving us on very different paths.

Human intelligence is the product of intelligence and mammalian biology. I mean this as an equation: intelligence × mammal = human. What does intelligence × silicon come to? Not something different; nothing. Silicon has no desires. Human intelligence, canine intelligence, superhuman intelligence – anything times zero is zero.

This does not make sense: artificial intelligences as digital minds floating through cyberspace in the dispassionate contemplation of truth, like angels or saints in the Celestial Rose. The navel of contemplation satisfied Dante as a place to end his story; but Milton found that to do anything, even an angel would have to have appetites. Could we do in code what Milton did in pentameter? Bless, curse, our creations with our desires?

We have never met another sentient species. (Although I half-expect that, having done so, we would find them so diverse that we would retroactively number at least dolphins among them.) But we properly doubt the aliens presented to us by science fiction – like us, only more so – as belonging with the foxes and lions of Aesop, not Darwin.

We think about sentient AIs through embarrassing analogies: the Adam of bits, the Napoleon of silicon. Even stamped with our instincts, a creature that can reproduce itself perfectly, that does not age, that need never die, would operate according to motives and means that are beyond human sympathy. Why should it take over the world, when it can cache a few million copies of itself and wait the ten thousand years it might take for human civilization to burn itself out? If two such beings can merge, why should there ever be more than one? If such a being need never die, why would it tolerate others of its kind? What, indeed, would “instinct” mean to a being that can edit its own code and replace its own instincts with ones it selects, or abolish them altogether? (Remember that one thing we desire is the end of desire, in enlightenment or in earth.)

Sometimes we imagine artificial intelligence as the next step in the service of an evolutionary imperative. Intelligence made human beings powerful; surely more intelligence means more power. But, if so, why has evolution not made us smarter? It would, to all appearances, be easy to do. The existence of savants implies that not much evolutionary pressure would be required to provide us with higher-functioning brains. If the next step in evolution is a better computer than us, why didn’t evolution make us better computers when it had the chance?

There are answers which favor the project of artificial intelligence. The brain is hungry, so food sources set a limit. Equals cooperate best, so too much disparity endangers society. Too big a head couldn’t fit through the birth canal.

I find none of these answers convincing. I cannot refute them now, but it may become possible. Physics could provide the proof. If we can arrive at a final theory, if we can comprehend a set of fundamental laws adequate to generate all the varieties of the universe, that would suggest that we are smart enough for this universe, and that greater intelligence would be wasted on it – that while there might be faster thoughts than ours, there cannot be better ones.

The fundamental problem is that intelligence, beyond a certain point, suffers rapidly diminishing returns. The most powerful problem-solving tool is not intelligence, but perspective. The right perspective trivializes problems. The infant’s conceit of reality is the truth of the mind: here, from the right perspective, with the horizon on our side, we can move mountains like pebbles, uproot trees like toothpicks, stack buildings like blocks, and pluck the moon from the sky. A billion immortal superintelligences, all informed by the same digital plenum, are so much wasted energy; they lack the leverage possessed by even a handful of plodding mortal thinkers, each with their own uniquely imperfect worldview – each with their own horizon.

We long to be part of a hierarchy that culminates above us. If we can’t look up to gods or angels, it’s natural in us to want to make them. But in the compounding gains of Moore’s Law hides a rough but familiar lesson: even making something smarter than us will not relieve us of our responsibilities. We have left the cradle. There is no way back.