Departments

Artificial intelligence

[I have sometimes tried, and failed, to make this argument in person. If I fail again here, I have at least cast it upon the waters; perhaps fifty years downriver it will be clearer—either common sense or patent nonsense.]

I.

A computer with the speed of a human brain would no more feel love and hate, fear death, or make art, than a computer with the speed of a dog's brain would bark and mark its territory. Human behavior as such has the computational capacity of the human brain for its means; but the use of human behaviors lies in the physiological and glandular environment where the brain resides. It is wrong to vaunt that all our purposes are but the shadows of inherited drives, reducible to their combinations; yet the faculty of desire is a faculty of the flesh.


II.

I expect that to the scholars of the next century the dissonance between the disciplines of neurology and artificial intelligence will provide a curious case study in the history of scientific consensus. They cannot both be right. Either the brain creates emotion, and emotions serve some need of the brain; or the brain serves emotion, and emotions have created the brain to ease their uses. One could suppose a mutuality of feedback; and everyone has sometimes overruled their emotions, and sometimes been overwhelmed by them; but one or the other, neuron or hormone (loosely speaking), must have primacy—must set the wheel in motion, must provide the power that keeps it going.

I don't know whether anyone has attempted to calculate the information-bearing complexity of the endocrine system, or of the loops which it forms with the genes, the immune system, epigenetics, &c. The operation is clearly very complex; but we don't even know how the body represents itself to itself within the system of hormones—or whether that representation is compressible—or whether there is any such representation at all. But even if this system is in itself very simple, its interaction with the brain sets a much higher target for an artificial mind than just reproducing the complexity of the neurons—and it may not be simple at all.

III.

We have never encountered another sentient species. (Though I half-expect that, having done so, we would find them so diverse that we would retroactively number dolphins among them.) We might discover that some principle of convergence makes all creatures past a certain point of intelligence much alike—but that would be little comfort, given that within our own species there is room for both Hitler and Saint Francis. We properly dismiss the aliens presented to us by science fiction—like us, only more so—as belonging with the foxes and lions of Aesop, not Darwin. Yet this is just what we expect of artificial intelligence—like us, only more so—digital minds floating through cyberspace in the dispassionate contemplation of truth, like discorporate angels or saints in Scholasticism.

That satisfied Aquinas; but Milton found that to do anything, even an angel would have to have appetites. Could we do for programming what Milton did for literature? Create an economy of artificial instincts? It is not unthinkable; but why do so?

We have developed systems of formality, in every culture, with the deliberate object of not having to indulge the presumptuous sympathies of strangers. Foolish as it is to reveal ourselves to strangers who can spread our weaknesses through gossip and backbiting—how much more so, to reveal ourselves to a machine? Already hackers can violate us with the exposure of our exterior activities; we would be giving them the chance to pry into the variations of our moods: to steal the memory, as it were, of a trusted companion. Such companions might still be no less trustworthy than human beings; but they would have new ways to betray us.

The question, of course, is: are emotions all or nothing, or can we impart only the ones we choose? There are some good reasons to think the latter. Brain lesions can extinguish specific emotions; and some mental illnesses comprise only the absence of certain emotions—depression, for example, the absence of pleasure and delight. But these phenomena do not answer the question: is there an array of emotions, each of which can be switched on and off independently, or is there a system of emotions which is sometimes thrown off balance or distorted? If the latter, then to say that because we can cut off specific emotions, we can create them independently, would be as if to say that because we can cut off tree limbs, we could build our own, without the tree.

Suppose that emotions are such a system. We are humanized by human limitations; why would we impose them on computers? We neuter pets and draft animals; we breed them to be docile, obedient, unobtrusive; we blinker horses. We must deprive them of their drives and instincts, and of the occasions for those behaviors, to make them useful to us. Why would we do the opposite to computers? There is the story of a battle during the Crusades that turned to farce because the Crusader stallions caught the scent of the Saracen mares—the mares were in heat. Emoting computers hold out similar dangers: a depressed or panicky stock market, an angry weapons system, a jealous desktop, a pushy laptop. A machine, to be capable of useful empathy, would be susceptible to pride and pique; and while we might program a computer to be respectful of people, it would be harder to make people reciprocate.


IV.

The discipline of artificial intelligence seems to view itself as serving an evolutionary imperative. Our intelligence has made human beings powerful; surely more intelligence means more power. But, if so, why has evolution not made us smarter? It would, to all appearances, be easy to do. The existence of autism and of savants suggests that not much evolutionary pressure would be required to provide us with higher-functioning brains. If the next step in evolution is a better computer than we are, why has evolution not made us better computers, when it has had the chance?

There are answers which favor the project of artificial intelligence. The brain is hungry, so food sources set a limit. Equals cooperate best, so too much disparity endangers society. Too big a head couldn't fit through the birth canal.

But I find none of these answers convincing. I cannot refute them now, but refutation may yet become possible. Physics could provide the proof. If we are able to arrive at a final theory, if we can find out, comprehend, and apply a set of fundamental laws adequate to all the various phenomena of the universe, that would suggest that we are quite smart enough for this universe, and that greater intelligence would be wasted on it—that while there might be faster thoughts than ours, there cannot be better ones.

This is, I concede, a distasteful thought. Something in us longs to belong to a hierarchy that culminates above us. If we can't look up to gods or angels, it's natural in us to want to make them. (Even granting God, no theologian would say that God's knowledge and judgement are by way of anything like thought.) But I think that in the futurity of Moore's Law a cruel lesson awaits us: that even making something smarter than ourselves will not allow us to turn over our responsibilities to it.