
Weakmindedness Part Four


Don’t I know how Socrates condemned writing – how it would give the appearance of wisdom but not the substance – with an Egyptian fable in which Thoth presents writing, among other useful inventions, only to have it rejected as harmful by the god-king Thamus?

This little anecdote – a single paragraph of a long dialog, a minor support to a more complex argument, and the least extended of the many fables which adorn the Phaedrus – has acquired a reputation and argumentative weight that its brevity cannot support. Here it is in full, after Jowett:

At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

Its prominence is due more to the names involved than to its contents. It is told by Socrates, historical founder of philosophy. It concerns Thoth, mythical founder of esotericism. To Socrates and Plato he was only one Egyptian deity; but intervening tradition crowns him Thrice-Great Hermes, founder of all Western esoteric traditions (excluding of course the Cabala, separately descended from the secret revelation of Moses). Here is the author of the Emerald Tablet, condemned for his vain and foolhardy invention of writing! The irony of the anecdote impresses it in the memory.

But consider the context. I will not rehearse the whole of the Phaedrus, only call attention to its last section. It begins when Phaedrus remarks in passing that the politicians of Athens care so little for their speeches that they must be begged to write them down.

Socrates calls him on this absurdity. He contrasts true and false rhetoric – the false rhetoric of politicians, giving set speeches to a lump audience; and the true rhetoric – that is, dialectic: to understand and address your argument to the conditions and abilities of one person. Writing, they come to agree, is a weak thing, because like speechifying it does not accommodate itself to any particular understanding. Like a painting, it has the semblance of life, but remains dumb when questions are asked of it.

Note that a specific kind of writing is meant – persuasive writing – and that a specific fault is diagnosed – generality. Writing that is addressed to a specific person and meant to be replied to, like a letter, is not considered, nor is writing that preserves facts, like histories or treatises. Within the limits of his actual argument Socrates is hard to disagree with. Of course it is better to persuade in person. Of course it is a higher skill to persuade someone in particular than to sway a crowd. But even then Socrates recommends writing to hedge against old age. I would add death and distance. He really has no argument against writing at all; it is merely an occasion to express the difference between rhetoric and dialectic, which is not specific to writing.

But to show that Socrates did not mean what people think he meant is not to show that what people think he meant is wrong. Surely writing impairs memory? Surely writing gives us the voice of wisdom, without the substance?

We wrongly think of mnemonic feats as proper to pre-literate cultures; but the ars memoriæ shows that memorization only gained in urgency with the invention of writing. Before writing there was simply less to remember. The feats of illiterate mnemonicists in memorizing long epic poems are rightly impressive. But this means that to be remembered for more than one lifetime, knowledge had to be worked up in poetry – no easier then than it is now, whether of the “Sailor’s delight” variety or the “Sing, goddess” variety.

By itself writing lets knowledge persist without being remembered, but does not itself retain knowledge. Yes, the knowledge you want is in a book; but that book is chained up in the next country. You may obtain knowledge through reading; but you must bring it back in your head. What trivialized mnemotechnique was not writing, but printing.

But then may what is said against writing apply to printing? Consider another anecdote about memory and writing, this one from the Life of Johnson. It is the source of a quote which has become so familiar that it passes for a cliché or a snowclone. Johnson, on his arrival at a house, surveyed the books there. Joshua Reynolds, painter, quipped that his own art was superior to writing: he could take in all the paintings in the room at a glance, but Johnson could hardly read all the books. Johnson riposted with a distinction:

Sir, the reason is very plain. Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. When we enquire into any subject, the first thing we have to do is to know what books have treated of it. This leads us to look at catalogues, and the backs of books in libraries.

Very good, of course; telling; and the standard explanation for the effect of printing: it replaced knowledge of facts with knowledge of the sources of facts. But I am not willing to accept this – I think that Johnson, and we, are wrong.

We need something to compare to language: something else which has gone through the same transition from oral transmission to written form to printing. There is such a comparison in music. Unlike language’s, music’s first transition is not prehistoric (as language’s tautologically must be); and its unwritten forms have continued to develop, and may be compared to the written forms.

A musician who plays wholly from written music may not be particularly good at memorizing long pieces or at improvisation. But such inability to memorize may be by choice – some pianists play strictly from sheet music because they think it better to do so than to memorize – and the ability to improvise arises mostly from a feeling for music theory – the theory required (at least implicitly) to understand, play, and compose music. Playing from written music does not prevent a musician from playing with feeling tone, living rhythm, and meaningful phrasing.

True, in principle, one could be able to read music but not to play it – but that would be perverse. It would be like reading without thinking – which is impossible, because written words have no meaning of their own. Their meaning must be reconstructed in the mind of the reader; and this reconstruction is a skill, an ability, an act, like playing music. The skills you must have to read at all, and the skills you must have to play at all, are far more difficult and important than the skills whose necessity reading relieves. They blend in their perfection: memorization from a position of ability, understanding the rules behind the changes, is better than memorization from inability, taking every note on faith.

Do I then excuse the net? Do I consider it as safe as sheet music? If the net were another such step, as from writing to printing, I would.

More than anything else, the net is a machine for exaggerating its own importance. In its function of making information accessible it is not transformative. Comparisons between the net and the invention of printing – even the invention of writing – are commonplace, but absurd. Those who so compare reveal their dependency on inherited thought patterns, on the Whig history of the intellect.

It is comparable to the wrong idea most people have about industry: that there was an Industrial Revolution, and since then, more of the same. But of course modern industry is as far from the old mills and factories as they were from cabin piecework. The inventions of electric lighting and air conditioning mark transformations of the factory system as profound as mass production and the assembly line. But somehow we do not notice such changes.

Likewise we do not notice the two most important events in the history of the intellect: the public library and the paperback book. Between these two inventions more information has been made available to any human being than any human lifetime could absorb. They changed a world of scarcity into a world of plenty. The net – a change from plenty to plenty – is comparatively insignificant. A thousand or a million times too much is still just too much. (Of course this is not equally true everywhere.)

But the net does have peculiar advantages; the net is different. It is frictionless, instantaneous, ubiquitous – and consequentially so. Consider drink. Spirits were once the only way to preserve surplus harvests, for storage and transport. Intelligence has had the same use: to distill, compact, and preserve masses of information and experience. Now we can move harvests in refrigerated bulk, preserve them as long and transport them as far as we like. Of course people still drink; but now drink serves a recreational purpose. When notes are as accessible as narratives, when eyewitnesses are as accessible as reports, there the exercise of intelligence, though no less useful to sort excess than to defy scarcity, loses its urgency, excusability, and remunerability. The price of the cheapest smartphone is enough to make a “walking encyclopedia.”


Everything with an outside has an inside. (Topological curiosities notwithstanding.) If digital augmentation obsoletes intelligence on the outside, what about the inside? Surely, however hard the wind blows against the rest, programmers are safe in the eye of the storm.

So far I have written intelligence and intellect interchangeably; but there is a distinction. It is best made by an example Jacques Barzun relates in his House of Intellect. Consider the alphabet: 52 utterly arbitrary signs (26 uppercase, 26 lowercase), with history but without rationale, which make it possible to record and represent anything that can be said. Millions use it, as Barzun observes, who could not have invented it. Intelligence so crystallized, so made current and bankable, is intellect.

The book itself is remarkable – I recommend it for anyone impressed by The Closing of the American Mind, which is tedious, muddled, and dated by comparison. But I will not rehearse his argument. The battle is over, the other side won before I was born. Of course intelligence does not require intellect; but without its tools, fending for itself, it moves slowly and wastefully. Relying on its own resources, it becomes too serious; everything is so costly that it cannot take anything lightly. It loses time.

Programming is the most purely intellectual discipline which human beings have ever created – as it must be, given (Dijkstra observes somewhere) that computer programs are the most complicated things human beings have learned to make. Programming cannot be learned, it must be adopted; it is a skill not just of action, but of perception.

Some people wonder if programming is an art or a craft. In seeking humility the usual answer is craft. But this is false. A craftsman works with stubborn materials and gets it right the first time. A carpenter who takes three tries to build a table and throws away the first two is not a craftsman. But three tries at a painting or a program is typical. A craftsman is finished when the work is done; an artist’s work must be declared finished. Of course the better the programmer, the larger the chunks of the program that come out right the first time. But the challenge of programming, its possibilities for flow, lie at the point where it is pursued artistically.

In a small way the early web made this art accessible and meaningful for those who did not think of themselves as programmers. In a small way it brought intellect into lives otherwise unaccommodating of it. The primitive character of the technology and the intrepidity of its early adopters both required and welcomed intellect. Anyone who accepted the discipline of HTML, who studied literature to write better fanfiction, who studied the fallacies to call them out in forums or newsgroups – they were embracing intellect as they had never before had the freedom to do.

But this is past. You can, of course, study photography toward a better photostream, writing toward a better blog; but the improvement is along a spectrum. Formerly those unwilling to take trouble were absent; now there is no break to differentiate those who take the trouble from those who do not. We are all on the same footing, because we all possess personalities. When all the tools are provided it takes intelligence, but not intellect, to use them well.

The old web promised to change relations, to establish an invisible college; the new web promises to recapitulate existing relations.

(Note that the succession of Web 2 to Web 1 was not gradual or competitive; 9/11 broke Web 1, and something else had to be created to replace it. The relative political tranquility of the time was as important a precondition as the technology; politics, when passionate, is neurotoxic to intellect – a drop in the reservoir makes a reservoir of poison.)

Programming is writing: a very exact kind of writing, for a highly intelligent, totally unsympathetic, viciously literal reader. But here, too, there are fashions. The anarchy of Perl and the onramp of PHP yield to the whiteroom of Python and the velvet rope of Ruby. The hacker, who looks inside everything – with or without permission – yields to the developer, whose job is pasting together blackbox libraries and invoking their “magic.” Even MIT has ceded software engineering to Python. The cathedral has fallen on the bazaar; the freedom of free software is not free as in beer or as in speech, but free as in sunlight, air, and other unmetered utilities.

Of course computers failed to deliver artificial intelligence. But machine intelligence need not equal human intelligence to render it obsolete. The assembly line never matched the finesse of the workbench. The progress of transportation was not from meat legs to machine legs, but from legs to wheels. Programmers have their own ways of taking a spin.


New technology has previously made intelligence easier without depriving it of value; why should the net be so dangerous to it? I have considered only the means of its attack, not the cause of its enmity.

In truth there is no intrinsic reason why the technology of the net must oppress intelligence; I use it heavily in that faith. But though the enmity is not intrinsic, it is still inborn, because the net was conceived in the pursuit of efficiency. Efficiency, like society, hates intelligence and wants to destroy it.

What is efficiency? Efficiency means maximum return on minimum effort and minimum expense. But not everything that secures more return for less effort is a gain in efficiency. In the simplest case the efficiency of one technology may be superseded by another technology that is inherently more efficacious – highly efficient systems of horse-powered mail delivery like the English mail coach or the Pony Express were displaced by steam power in its earliest and least efficient forms. The horse’s lineage and the rider’s tack were the products of millennia of tradition that allowed horse and rider to operate as one animal. Rail, by comparison, was unreliable and unpredictable; intrepid for engineers, opaque for passengers.

If computers were the successors of paper-based information management, as rail was successor to the stagecoach, there would be no problem. The problem is not inherent to computers or to the net at all; it belongs to culture. Technology does not compel, it enables. It is not the fault of the orgasm button that the mouse starves while pressing it. This was always a weakness in the mouse; the button just gave him the chance to destroy himself as he had always been prepared to destroy himself.

We all suspect, most quietly, that the technological developments of the late twentieth century, and of our own time, let down the rapid pace of progress which developed the developed world; everything is sleeker, everything is faster and more brilliant, but little is new. Remember Engelbart débuted the net forty years ago. Progress, once an irresistible force, is now hardly felt; in its place are so many immovable objects, so much foot-dragging, second-guessing, and public relations as the art of excuses.

The most parsimonious explanation of this state of affairs is that after decades of focus on efficiency, there is no more room left for innovation – not even on the scale of the refrigerator, let alone the scale of the jet engine. Standards for return on capital have become so high that there is no indulgence left for the expensive and unrewarding infancy of really new technology.

But this is too limited an example. The idea is more general, and more familiar; we carry the lesson in the very frame of our bodies. Human beings are terribly inefficient at moving around. We traded the gait of the quadruped, even the lope of the knuckledragger, for the endless high-wire act of the biped. For most of our lives we leave two limbs – two perfectly good forelimbs – hanging unused by our sides while our hind limbs waddle precariously. Through idleness and inefficiency we traded forelimbs for arms and hands, and all that hands can do.

Consider Aristotle: “Civilization begins in leisure.” The phrase passes on Aristotle’s credit; but in itself it ought to be shocking, if not absurd. Who believes in leisure? We have psychology; and whatever the school, all psychology is one in its unwillingness to cede the possibility of leisure. If there is such a thing as leisure, psychology is impossible just as, if there are such things as miracles, physics is impossible. Everything we do must serve some urgent purpose of the unconscious or of the genes.

The paradox is curious: we have the most refined instruments of leisure ever devised, but we will not believe in it. We admit to resting, relaxing, blowing off steam, unwinding, recharging, renewing; we admit to solace, consolation, distraction, and escape – but we do not admit to leisure. We suspend work to work more. For this god we admit no counterpart.

There is so much to do, and so little time. There are only so many open slots; no matter how efficiently you pack, at some point every new claim on your time pushes an old one off into oblivion. Productivity systems in general strike me as perverse, because they keep the least worthwhile, most predictable claims uppermost, and push the more interesting, amorphous claims down and finally off the edge. It is life laid out in line, without recurrence, without themes, without center. It is the final victory of school over life, when last year’s projects are as irrelevant as last year’s homework.

But we are tired. Enough of the open-ended, the uncertain, the unknown. There is something to be said for life that is modest in its ambitions, confident within its limits, at home with itself – at least by way of amor fati, since it is trivially true that the ordinary life cannot be extraordinary. We are so tired, and the world is so old. There are so many big ideas; do we really need more? Let the scholars publish and perish; at least it keeps them too busy to preach. And politics – after so much politics nothing is settled, nothing is certain – let those who can do nothing else devote themselves to it. Do not trouble us with that frenzy – whoever the people are, we are not they. “This is a sweet, comfortable thing; by what right do you condemn the consoling scent of the lotus, and bid us onto the open sea?” By no right; only because I am arrogant – arrogant enough to believe that if the sea calls to me, it must call to others.


What is obsolescence? Plainly the concept is partly technological, partly social. Cars made draft horses and buggy whips obsolete, but there are still mounted police, and buggy whip manufacturers made a smooth transition to manufacturing car parts. (Capital finds its level.) A technology that displaces another never does so completely; no technology is completely interchangeable with another. They all imply their own particular scale of values. Too, there is an index of obsolesciveness, a function of complexity: the ax survived the chainsaw, because it is simple, but the vacuum tube did not survive the transistor, because it is complex. But even these technologies sometimes linger. Investment has inertia, nobody likes to close a factory, and what’s so bad about COBOL after all?

Obsolescence happens, it is a real force; but it is over-billed. Nothing disappears, nothing ceases to function the day it becomes obsolete. Obsolescence is something that happens to technologies, but it is not the chief or limiting condition of their existence.

Real obsolescence is the opposite, not of progress, but of simplicity. This is vividly, though shrilly, argued in an interesting but flawed book, The Shock of the Old – shrilly, because a cogent case against the doctrine of obsolescence would have to consider not just products but production methods. The author writes as if the ax of 2010 were the same as the ax of 1910 or 1810 – as if there were an equivalence between the product of a blacksmith, an assembly line, and a laser-guided CNC machine. He adduces shipbreakers tearing down the most massive artifacts of industrial civilization with muscle and hand tools, but passes over the fact that this manpower is fed by the high technology of the Green Revolution.

So what do I mean when I say “intelligence is obsolete”? Is its obsolescence real or doctrinal? I think it is doctrinal because, as I said before, intelligence cannot just disappear.

But this sounds circular. What is obsolete? Intelligence is obsolete. What judges obsolescence? Intelligence judges obsolescence – even granting some part of the emotional repulsion of obsolescence to distaste for the unfashionable, still the idea operates too broadly not to imply judgment, intelligence, and intellect.

It is this circle one sees, I think, in the odd paradoxes pronounced by the boosters of technology-as-magic, who would solve the bewilderment of technology with more technology, who would loosen the constraints of technology with more constraints, and who would make technology less demanding by ensuring that it follows you everywhere. They recognize technology as something enabling choice and critical judgment in everything except technology; technology as something solvent to ignorance, helplessness, and herd behavior in everything except technology. But I see nothing that makes technology such a fixed point; it seems to me as unstable, as potentially a topic for deliberation, as it encourages everything else to be.

I have heard it argued that technology – to be plain, computers – is becoming more like cars: devices practically magical, in that they are operated more through ritual than understanding. This cuts me a little because in fact I understand very little about cars. But why do I know so little about cars? Not lack of curiosity, but lack of opportunity. I have never been in a position where a car was something I could afford to break – not to mention putting life and limb in hazard. But the evident trend of computers is towards commodification; everything done to de-commodify computing devices is ultimately doomed. Once I had one computer, heavy metal taking up desk space, and that was a serious investment in hardware; now I have several computers, and the most valuable thing about them is the peculiar configuration of each one. But, increasingly, anywhere I can check out a few repositories, I am at home. It is this possibility, the chance to evolve your own peculiar relationship with technology, one that is cumulative, personal, and free; one that you own and control; one that is a slow growth of the mind into the possibilities afforded by intellectual augmentation, not an accommodation of the mind to the tools and metaphors dictated to you – this possibility which allows intelligence to employ technology, not serve it.

In finding intelligence obsolete the doctrine of obsolescence obsolesces itself. A replacement is in order; some new view of technology is required; maybe something like the idea of a technosphere; if not that, certainly something of equal scope. But then I am hasty. So obsolescence obsoletes itself; so there is a contradiction, so what? A dissonance implies a resolution but nothing says that it has to resolve. Self-contradiction may even strengthen an idea, by imbuing its holders with faith.

Intelligence is obsolete. Obsolescence is obsolete. Somewhere between these poles our future lies: either a course closer to one than the other, or a circle trapped between them.