Departments

Weakmindedness Part Three

VII.

Intelligence has never been in fashion. It has been news for a century that individual intelligence has become obsolete and the future belongs to procedures, teams, and institutions. This is a future that has always just arrived. The lesson is not that intelligence has always appeared to be on the verge of becoming obsolete (although it has); the lesson is that something in society hates intelligence and wants it to be obsolete—needs to believe that it is obsolete.

Obviously in a commercial society we are always worth more for what we can own—or for being owned—than for what we can do. And it is true, regarding the advantages of teamwork over intelligence, that all the inputs into the economy from outside it involve teams and companies. An industrial army keeps the wells flowing, the mines bearing, the fields fruiting. The individual cannot keep up—can have an effect, but only indirectly; the way a programmer controls what a computer does, but cannot do what it does. Naturally the institutions intended to handle these inputs expect to deal with teams and institutions—an affinity that propagates throughout society.

Society, remember, is not a human invention, but a pattern in nature which human beings borrow; a pattern we share with bees and ants and mole rats. It has its own logic, its own dynamics, and its own tendency—a tendency which is always toward the intelligence-free ground state of the hive or colony. For society as such intelligence is an irritant, something to be encapsulated and expelled, like a splinter in the thumb, or cicatrized in place, like a piece of shrapnel.

The greater the intelligence, the more likely it is to destroy its own advantage. Be born with exceptional strength and the best thing you can do with it is to use it yourself. Be born with exceptional intelligence and the best thing you can do with it is to turn it on itself—to figure out how the exceptional part of your intelligence works so you can teach it to others. We all think a little like Einstein now, because we have the maxims he so carefully wrought out, the examples he so carefully related.

Of course human beings are not ants or bees or mole rats and society cannot turn them into zombies. People scheme. This is natural: intelligence atrophies when unused. It is no more comfortable to be flabby in mind than in body. Nor would society want us to be; the software of society needs human speech to run on. Society does not want or need human beings to speak well, but it does need them to speak well enough.

To perfect this balance, we have the job, which stands in relation to the mind as aerobics to the body: it keeps you from becoming flabby, without fitting you for any particular use. Not that jobs are inherently useless; only that, given a minimal denomination of employment (say 9–5), real work is always padded with makework to fill it out fungibly.

Society's capacity to encapsulate intelligence is ultimately limitless but not particularly responsive. A sudden jump in the efficiency of all workers opens a gap, leaves intelligence idle—this may be called, to borrow a phrase, a cognitive surplus. In the last two decades we have seen one open up; remarkable things emerged from it—the web, the blogosphere, the Wikipedia (more later) &c.—and I think we have begun to see it close, soaked up into Flash video and social networking.

The centrality which magazines have resumed in online intellectual life is a sign of its decay. Witness the return of the article, the lowest form of writing, opening with an anecdote and closing with a cop-out. Watch the epicene descendants of the intellectual thugs of undead ideologies playing intellectual. Could this be all that it comes to? All our work, all our hope? The same sad cycle of toothless posturing vs. splenetic emission, only this time on screens instead of paper, and with Star Wars references? Well, we had our chance; now we see what we made of it.

VIII.

I began by comparing strength and intelligence and should justify it. This is difficult because silly ideas pass about both. Witlings think smart people extract cube roots the same way weaklings think strong people are musclebound. The smart people do not obsess over mental math, knowledge of trivia, and the size of their IQs; the strong people do not obsess over diet, deadlifts and the size of their biceps.

The parallel stereotypes are collateral results of the same error: if an ability is not economically rewarding, people pretend it does not exist. To account for records of its existence, some such stereotype will be foisted as its modern descendant.

Strength has not ceased to exist; it is even still useful. All the marvelous mechanical contrivances of modern life are lubricated with human sweat. To give an extreme example, soldiers now ride in APCs, fire low-caliber assault rifles, call in strikes from guns, helicopters, and drones; but a soldier must still be in good shape, because no matter how elaborate the technologies they employ, there always remain interstices that must be filled out the old-fashioned way.

Strength is necessary, but not advantageous. Everywhere, for free, strength is making civilized life possible; but there is nothing strength can do for free that cannot be done without strength for money. The best that strength can do is keep you from failing; you cannot distinguish yourself with it in any but recreational uses. No one earns a profit or a promotion for being strong.

Likewise by intelligence becoming obsolete I do not mean its disappearance, but its insignificance. The intellectual machinery that makes life faster and more brilliant will always need lubrication; but that work will be invisible, underground, and unrewarded. And being taken for granted, it will cease to be believed in.

Westerners allow themselves to be deluded about the actual range of human strength. Of course it is difficult to prove strength in physical teamwork; when working with someone weaker than yourself, you must moderate your own strength to avoid hurting the other person. Say confuse for hurt and the same applies to intellectual teamwork. Insofar as teamwork is expected, insofar as the idea of intelligence is undermined with untestable explanations ("Anyone could do that if they spent ten years learning it"—will you take ten years to find out?)—that far intelligence will simply cease to be thought of, let alone believed in.

For now, intellectual work is still exalted. The gospel of productivity offers to make it accessible to everyone, by debunking its romance, by making it as tractable as "cranking widgets". Somehow intellectual work reduced to cranking widgets comes across more like intellectual work and less like cranking widgets. But this is to be expected. Twentieth century industry enjoyed the prestige of muscularity, virility, and futurity for decades while it chained generations of children, abused generations of women, and poisoned, wore out, and discarded generations of men. Likewise intellectual work may be expected to enjoy the prestige of thoughtfulness long after thinking has been lost from it.

IX.

I cannot get away with referencing the idea of cognitive surplus without engaging it. Or more directly: "What about Wikipedia?"

Do consider Wikipedia. But first, forget what you have read about Wikipedia: it is all lies. No one who opines about it understands it. It is almost certain that if you have not participated in it, you not only do not understand it, but are deluded about it.

I should disclose my participation in Wikipedia. I have written two obscure articles and heavily rewritten another. Beside that, my contributions have been limited to weeding vandalism, polishing grammar and expression (raising the bad to the acceptable; raising the adequate to the excellent would be rude), and filling in gaping omissions—though I do less and less of any of these, largely because there is less and less need. I do have the Wikimedia franchise.

Let me also stipulate that I love the Wikipedia, esteem it as the best service of the net, and consider it the most important and consequential cultural development of the twenty-first century—much more so than, say, social networking or Google. (Though I acknowledge that the Google-Wikipedia relationship is symbiotic.)

Wikipedia is not spontaneous. The typical Wikipedia article is not a lovely crystal of accretive collaboration. It is a Frankenstein's monster of copy stitched together from a dozen donors, a literary teratoma. Wikipedia as a whole is a ravenous black hole that sucks up endless amounts of copy: the out-of-copyright public domain; the direct to public domain; and the unpublishable. Wikipedia is not just the last encyclopedia; it is the Eschaton of all encyclopedias, the strange attractor drawing them on to the end of their history. Wikipedia is the hundred-hearted shambling biomass to which every encyclopedia that has ever existed unwittingly willed its organs. Whole articles from Chambers's Cyclopædia—the very first encyclopedia—turn up inside it completely undigested. As soon as it was born it ate its parent, the Nupedia, then went about seeking whom it might devour. Its greatest conquest was the celebrated 11th edition of the Encyclopædia Britannica—the last great summary deposition of proud world-bestriding European civilization before it passed judgment on itself. (As the article "Artillery" states: "Massed guns with modern shrapnel would, if allowed to play freely upon the attack, infallibly stop, and probably annihilate, the troops making it.")

If you had heard of the Wikipedia but not seen it you might surmise that the kind of people who would edit it would have a technical and contemporary bias, and that trivia would predominate: there exists a band that no one has ever heard; there exists a town in Scotland where nothing has ever happened. And you would be right. But the massive scholarship of the 1911 encyclopedia perfectly counterbalances the bias and the bullshit. The credibility of the Wikipedia as a universal reference was invisibly secured by this massive treasure, excavated as surely and strangely as Schliemann excavated the gold of Troy. Whole articles from the 1911 edition live in Wikipedia, and even where the revision of obsolete information and prejudiced opinion has replaced most of the article, whole paragraphs and sentences remain intact. If while reading an article in Wikipedia you feel a sudden chill in the air, shiver with a thrill of dry irony or scholarly detachment, feel a thin rope of syntax winding itself around your brain—the ghosts of 1911 are speaking.

(The Britannica itself dispensed with this material during its reinvention in 1974.)

Do not rely on me; count. Wikipedia requires the use of templates—boilerplate disclaimer—whenever text from a public-domain source is imported as an article. Using Google's site search we can count them. (Keep in mind that these numbers are severely understated; revisers frequently delete these templates once the article has been brought up to date. Also note that I did these searches some months ago.)

Year   Name                      Count
1728   Chambers's Cyclopædia     531
1918   Gray's Anatomy            2180
1913   Catholic Encyclopedia     ~28 000
1911   Encyclopædia Britannica   ~120 000

But there are many more such searches to be done. How significant is the importation? I encourage you to try out Wikipedia's "random article" function, on the left, just above the search box. Here is a random sample of ten articles (disambiguations & lists ignored):

  1. Italian race car driver. Stub
  2. A railway station in Melbourne.
  3. One paragraph on a comics anthology. Stub
  4. Lululaund, an eccentric faux-Bavarian mansion in Hertfordshire, destroyed in 1939. (Linked because curious.)
  5. The definition of "bulk email software." Stub
  6. An a cappella quartet, Anonymous 4, who perform medieval music.
  7. "The Postal Orders of Anguilla." (A digression reveals that a postal order is the British for money order.)
  8. Cumbria.
  9. Brief biography and long bibliography of The Most Reverend Marcelo Sánchez Sorondo, Argentinian, Catholic, philosopher, theologian, and historian of philosophy.
  10. A 2006 single.

Or another:

  1. 1946 college football season.
  2. Jean-Marie Roland de la Platière, the Girondist. 1911
  3. Munching square. Stub
  4. "Personal name."
  5. Cincinnati Redlegs' 1956 season. Stub
  6. Pope Simplicius. Stub
  7. A South African judge. Stub
  8. Torbanite, a variety of coal. Stub
  9. A Scottish football club. Stub
  10. A church on the Isle of Wight. Stub

Or another:

  1. The London Midland and Scottish Railway.
  2. An administrative district in east-central Poland. Stub
  3. A village in north-central Poland. Stub
  4. A game from The Price is Right.
  5. A Gibraltarian politician.
  6. Johann Nikolaus Forkel, German musician. 1911
  7. The Mumbai Amateur Radio Society.
  8. King Amoghabhuti.
  9. An episode of House, M.D.
  10. An office in the Indian National Congress. Stub

The second source is material that is directly released into the public domain: press releases, government documents, think tank reports. A business has two vital functions: to do something and to let people know what it is doing. The latter has always provided great opportunities to the Wikipedia, which is always searching for things people might want to know about. Wikipedia has a magpie eye, and press releases are very shiny.

(Wikipedia also picks up shiny stuff where it shouldn't—it's always distasteful to click through a reference link and find that the text of the reference, a private website, evidently not in the public domain, has simply been copied—but then again Wikipedia saves some good copy this way that would otherwise be lost to link rot.)

Beside the brook of business runs the massive river of text thrown off by the metabolism of the military-industrial-governmental complex, large amounts of which are explicitly in the public domain, other parts of which are too evidently of public interest to be neglected. Wikipedia soaks up this stuff like a Nevada golf course.

The third source is sophisticated yet unpublishable material. If you have ever despaired at the thought of how much intellectual energy goes into a school report, written to be read once by someone who learns nothing from it, know that the Wikipedia is there to catch all these efforts. (Or was, rather, before it began to inform them.) I suspect that the preponderance of original articles on the Wikipedia were actually executed as assignments or requirements of teachers or employers. Wikipedia strains the plankton from the sea of busywork like the baleen of a whale.

What is Wikipedia? Wikipedia is a sublimely efficient method of avoiding redundant effort. Wikipedia is write once, remember forever. Wikipedia is make do and mend. Wikipedia is reuse and recycle.

Weakmindedness Part Two

IV.

To make his ideas more tractable Engelbart tells a story of two characters: "You", addressed in the second person, and "Joe", who is experienced with augmentation and is giving You a demonstration.

First Joe shows off his workstation. His desk has two monitors, both mounted at slight angles to the desk—"more like the surface of a drafting table than the near-vertical picture displays you had somehow imagined." He types into a split keyboard, each half flanking a monitor, poising him over his screens as he works.

The ergonomics are impeccable. Consider how tradition forces us into a crick-necked and hunched-shouldered position whenever we sit at a keyboard—how it literally constrains us. Judge how much more of the way you work is so ruled.

To introduce the capabilities of the system Joe edits a page of prose. Lo—every keystroke appears instantly onscreen! When he reaches the end of the line, carriage return is automatic! He can delete words and sentences, transpose them, move them around, make them disappear—"able to [e]ffect immediately any of the changes that a proofreader might want to designate with his special marks, only here the proofreader is always looking at clean text as if it had been instantaneously retyped." He can call up definitions, synonyms and antonyms "with a few quick flicks on the keypad." He can define abbreviations for any word or string of words he employs, whenever he wants, and call them up with substrings or keychords. But you have fonts, yes?
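
The abbreviation feature, at least, is easy to make concrete. Here is a minimal sketch in Python—the names, the token-at-a-time trigger, and the example binding are my own invention, not Engelbart's:

```python
# A user-defined dictionary mapping short forms to expansions,
# in the spirit of Joe's abbreviations. Purely illustrative.
abbrevs = {}

def define(abbrev: str, expansion: str) -> None:
    """Bind any word or string of words to a short form."""
    abbrevs[abbrev] = expansion

def expand(token: str) -> str:
    """Replace a typed token with its expansion, if one is defined."""
    return abbrevs.get(token, token)

define("aug", "augmenting human intellect")
line = " ".join(expand(t) for t in "a report on aug".split())
# line is now "a report on augmenting human intellect"
```

A real editor would trigger expansion on a keychord rather than re-scanning whole lines, but the data structure is no richer than this.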

In short the capabilities of Joe's editor are somewhat above those of a word processor and somewhat below those of a programmer's editor.

Here we find one of the problems with Engelbart's vision. It is easier to augment entities than procedures. If in the context of typing a word is just the procedure of hitting a certain sequence of letters, then in the near term, it actually costs energy and time to change your procedure to typing the first few letters of a word and letting the editor expand it. It requires you to think of the word as an entity, not an operation. For most people, this is impractical.

Consider the abacus. The frictionless operation of mental arithmetic seems easier than the prestidigitation the abacus requires. So the most practiced algorists are faster than the fastest abacists. (Sometimes, as in the famous anecdote of Feynman and the Japanese abacist, the algorist's superior knowledge of mathematics will simplify the problem to triviality.) But of course the abacus is easier to learn than mathematics, and for a given amount of practice the average abacist will be much faster than the average algorist.

There are abacus virtuosos who can calculate faster than the abacus can be fingered, who calculate moving their fingers on an empty table, but who cannot calculate at all without moving their fingers—slaves to a skeuomorph.

Skeuomorph is a term from architectural criticism. It names, usually to pejorate, a building's needless imitation of the traces of old-fashioned construction methods and materials it does not employ. But skeuomorphs are not all bad—classical architecture, in its system of pillars, pilasters, entablatures &c. is a representation in stone of the engineering of wooden buildings.

The experience of using a computer is run through with skeuomorphs—the typewriter in the keyboard, the desktop in the screen, the folders on the hard drive, the documents they contain. Through a cultural process they dictate—even to those with little experience of the analog originals—how computers are to be used. Even as they let us in they hold us back.

So it might seem that new user-interface concepts are necessarily a liberation. They should be; but so far they have not been. In particular the recent generation of portable devices have moved farther toward the pole of the abacus—easy for competence, limited for mastery. As they break down walls, they close doors. As they are more and more physical and spatial, they are less and less symbolic.

V.

Now we come to the last part of Joe's demonstration, and leave the familiar behind. The talk from here on is of arguments, statements, dependencies, and conceptual structures. Joe explains that he uses his workstation to produce arguments, composed of statements, arranged sequentially, but not serially. Quote:

This makes you recall dimly the generalizations you had heard previously about process structuring limiting symbol structuring, symbol structuring limiting concept structuring, and concept structuring limiting mental structuring. You nod cautiously, in hopes that he will proceed in some way that will tie this kind of talk to something from which you can get the "feel" of what it is all about.

He warns you not to expect anything impressive. What he has to show you is the sum of a great many little changes. It starts with links: not just links between one document and others, but links within the document—links that break down sentences like grammatical diagrams, links that pin every statement to its antecedents and consequences--

[T]he simple capabilities of being able to establish links between different substructures, and of directing the computer subsequently to display a set of linked substructures with any relative positioning we might designate among the different substructures.

Note that this does not just mean creating links—it means creating bidirectional linkages, linkages that have kinds, linkages that can be viewed as structures as well as followed.

Here is a skeuomorph: the index or cross-reference in the hyperlink. The hyperlink as we know it is hyper only in the most trivial sense. You cannot even link a particular part of one document to a particular part of another document unless the target is specially prepared with anchors to hold the other end of the link. Except inside of a search engine (and the futile experiment of trackbacks), a link contributes no metadata to its target. The web has no provisions for back-and-forth or one-to-many links, let alone for, say, uniquely identified content or transclusions.
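
What the web lacks is not exotic machinery. A toy version of typed, bidirectional links—this is a sketch of the idea, not of Engelbart's system or of Xanadu; every name here is invented—fits in a few lines:

```python
from collections import defaultdict

class LinkStore:
    """Links as first-class records: typed, and traversable both ways."""

    def __init__(self):
        self.out = defaultdict(list)   # source -> [(kind, target)]
        self.back = defaultdict(list)  # target -> [(kind, source)]

    def link(self, source, target, kind):
        # One call records both directions, so the target "knows"
        # what points at it -- metadata an ordinary hyperlink never carries.
        self.out[source].append((kind, target))
        self.back[target].append((kind, source))

    def antecedents(self, statement):
        """Everything recorded as supporting a given statement."""
        return [s for k, s in self.back[statement] if k == "supports"]

store = LinkStore()
store.link("premise-1", "claim-A", "supports")
store.link("premise-2", "claim-A", "supports")
store.link("claim-A", "claim-B", "contradicts")
```

The web gives you only `out`, and untyped at that; everything else—backlinks, link kinds, viewing the linkage as a structure—has to be reconstructed after the fact by crawlers.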

These are not particularly exotic or difficult ideas; to understand how they might have worked—what the net might have been—look at Nelson's Xanadu.

Understand that the problems of the web—the problems smug commentators vaunt as unpredictable consequences of runaway innovation—these problems were not only thought of, but provided for, before the web existed. Understand that the reason we have these problems anyway is the haphazard and commercially-driven way the web came to be. Understand that the ways in which the web destroys value—its unsuitability for micropayment, for example—and the profits the web affords—like search—are consequences of its non-architecture. If the web had been designed at all, music, news, writing would be booming in proportion with their pervasiveness. Instead we have Google. Instead we have a maze where the only going concern it allows is selling maps.

I should stipulate that the net—the Internet—and the web—the World Wide Web—are different things. The net is the underlying technology, the pipes; the web is one way of using that technology. Email, for example, is part of the net, but not part of the web; the same is true of Bittorrent or VOIP. At one level the answer to the question "Is Google making us stupid?" is "No, the web is making us stupid—wiring our brains into the web is just Google's business model."

Certainly it is easy to defend the web against this kind of heckling. Nothing succeeds, as they say, like success. The guy in the back of the audience muttering how the guys on stage are doing it wrong is always and rightfully the object of pity. And there is no way back to the whiteboard; the web is, and it is what it is.

But we must remember that it could have been different—if only to remind us that we will have more choices. What has happened was not inevitable; what is predicted is not inexorable.

VI.

The way Joe describes the effect of augmented symbol-structuring is worth quoting in full:

I found, when I learned to work with the structures and manipulation processes such as we have outlined, that I got rather impatient if I had to go back to dealing with the serial-statement structuring in books and journals, or other ordinary means of communicating with other workers. It is rather like having to project three-dimensional images onto two-dimensional frames and to work with them there instead of in their natural form.

This, of course, again recalls the question, this time in its intended meaning: "Is Google making us stupid?" It is not a problem I have, but people do seem to suffer from it, so I can name the tragedy—we have just enough capacity for symbol-structuring on the web to break down some people's tolerance for linear argument, but not enough to give them multidimensional ways of constructing arguments. The web is a perfectly bad compromise: it breaks down old patterns without capacitating new ones.

Joe moves on from symbol structuring to process structuring. Here the methods resemble those used for symbol structuring—they are links and notes—but they are interpreted differently. A symbol structure yields an argument; a process structure answers the question—"What next?"

And this, of course, recalls "Getting Things Done"—it is the complement of the next action. GTD's approach, however, is the abacist one. Adherents of GTD manipulate physical objects or digital metaphors for physical objects—inboxes and TO DO lists—and reduce them to a definite series of next actions. Ultimately this is all any process structure can disclose—"What do I do now"—and for most tasks something like GTD is adequate.

If there is a hole in your roof that leaks, the fact of the leak will remind you to fix the hole. The process is self-structuring: you will fix it or get wet. So, to a lesser extent, is the letter on your desk. But the email in your inbox—if you expect to answer it, you must find some way to point up its urgency. But why should this be? Why can't you instruct the computer to be obtrusive? Why can't digital tasks structure themselves?

They can; but they don't, because there is no metaphor for it. The abacist email program has an inbox; following the metaphor, to get something out of the inbox, you must do something with it. More algorist email programs, like Mutt or Gnus, invert the problem—once you have read a message, unless you explicitly retain it, it disappears. This approach is vastly more efficient, but it has no straightforward paperwork metaphor, so it is reserved for the geeks.

Or again: why can't you develop processes in the abstract? GTD is itself a single abstract workflow. Bloggers are forever writing up their own workflows. Why can't your computer analyze your workflow, guide you through it, help you refine it? Why can't it benchmark your activities and let you know which ones absorb the most time for the least return? Why is there no standard notation for workflows? Of course programmers have something like this in their editors and IDEs; but probably you do not.

Augmenting Human Intellect is worth reading but I am done with it. If I have been successful I have disenchanted you with the net—disenchanted literally; broken the spell it casts over minds that should know better. If I have been successful you understand that the net as you know it is not inevitable; that its future is not fixed; that its role is not a given; that its usefulness for any particular purpose is subject to judgment and discretion.

Weakmindedness Part One

[At some point I noticed that many of my essays contained digressions about the net's effect on our minds and lives. These digressions were faults as digressions, but the topic was interesting, so I began collecting these scraps with the plan of welding them into a coherent essay. The result was on a larger scale than I had expected: around 10 000 words in 13 parts.

This poses a problem of formatting. In the past I have run essays in several parts over as many days. But such a bombardment would be ridiculous. Instead I have gathered these 13 essays into 4 parts, to be run at the usual near-weekly intervals. In this way readers may plausibly have time to digest and follow the argument.]


I.

Is intelligence obsolete? I mean: do the digital technologies of intellectual augmentation make exceptional intelligence obsolete, in the same way that the mechanical technologies of physical augmentation made exceptional strength obsolete? Not, "is the net making us stupid?" but "does the net make it as impossible to be stupid, as the grid makes it impossible to be powerless?" I want to be free to develop the argument without suspense, so I will conclude now: yes—with reservations about the concept of obsolescence.

I say intellectual augmentation to reference Douglas Engelbart's 1962 Augmenting Human Intellect. I will use this book as the scaffold for the first part of my argument. Anyone who has investigated the origins of the net will know Vannevar Bush's 1945 As We May Think, a prophecy of the Internet in light-table and microfilm. Augmenting Intellect is explicitly an attempt to show how Bush's vision could be made workable in electronic form. It is not a marginal document; six years after it was published the author, head of the Augmentation Research Center at the Stanford Research Institute, gave what is now known as the "Mother of All Demos", where he débuted, among other things, the mouse, email, and hypertext.

Some of the possibilities that Augmenting Human Intellect proposes have been fulfilled; some have been overtaken; some have failed; and some remain untried. The interesting ones are the untried.

The relevant part of Augmenting Human Intellect begins with Engelbart's description of the system he used to write the book--edge-notched cards, coded with the book or person from whom the content was derived. I say "content" because, as anyone who has attempted to maintain a system of notes allowing small, disparate pieces of information to be conserved usefully will realize, it is impossible to strictly distinguish thoughts and facts—the very act of selecting a fact for inclusion implies a thought about it. Engelbart calls these thought-facts kernels. He would arrange these cards into single-subject stacks, or notedecks. In the book he summarizes the frustrations of creating a memo using these cards—the lack of a mechanism for making associations (links, that is, but in both directions), the tedium of copying the links out, the confusion of keeping track of what linked to what. He considers some mechanical system for leaving trails between cards and for copying them, but objects:

It is plain that even if the equipment (artifacts) appeared on the market tomorrow, a good deal of empirical research would be needed to develop a methodology that would capitalize upon the artifact process capabilities. New concepts need to be conceived and tested relative to the way the "thought kernels" could be knitted together into working structures, and relative to the conceptual presentations which become available and the symbol-manipulation processes which provide these presentations.

He proceeds to further object that by the time some such mechanical system could be perfected, electronics would be better suited to the job. And we are off.


II.

But let us first pause and consider the concept of the kernel. Engelbart is explicit that the kernel itself represents a "structure of symbols" subject to mapping. Yet for purposes of inclusion in a larger symbolic structure the kernel must be treated as smooth and integral. Every symbolic structure is composed of smooth kernels, yet all kernels are spiky. This tension can be dealt with in more than one way.

Imagine a series of levels in the computer's awareness of a kernel's internal structure. (These levels are my own coinage for this essay; they probably correspond to a known mathematical structure, but it seemed easier to reinvent than research.)

Level 0 is the simplest possible organization of kernels, an anonymous jumble where the only thing the system knows about the content of a kernel is that it exists. An unsorted inbox or a directory of temp files are level 0 structures.

Level 1 is a simple filing system: the system knows exactly one thing about the kernel: which folder it belongs to.

A level 2 structure approaches the limits of what is possible with paper: the system knows that a single kernel can be in several places at once. A physical file system where documents are both uniquely identified in some master reference, and where copies of these documents are present in multiple folders, is a level 2 system; so is double-entry bookkeeping, where the kernels have no internal structure at all. Tagging is the computerized equivalent; in library science, faceted classification.

A level 3 structure is possible with paper—using edge-notched cards and pin-sort operations—but in practice it requires a computer. In a level 3 structure the system can retrieve a kernel conditionally, based on which folders it is in—Edward Teach was a pirate and a bearded man, so he is found in the pseudo-folder "Pirates and bearded men". Basically a level 3 structure knows enough about its kernels to do basic set operations—the Venn diagram level.

(Full-text search is a level 3 structure, where each kernel is indexed by every word that it contains.)
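The set-operation character of a level 3 structure can be sketched in a few lines. Everything here is a hypothetical illustration: folders are plain sets of kernel names, and a search is an intersection with optional exclusions.

```python
# A minimal sketch of a level 3 structure. Folder names and contents
# are invented for illustration; kernels are opaque names.
folders = {
    "pirates": {"Edward Teach", "Anne Bonny", "Henry Every"},
    "bearded men": {"Edward Teach", "Charles Darwin"},
}

def search(include, exclude=()):
    """Retrieve every kernel that is in all `include` folders
    and in none of the `exclude` folders -- the Venn diagram level."""
    found = set.intersection(*(folders[f] for f in include))
    for f in exclude:
        found -= folders[f]
    return found

# The pseudo-folder "Pirates and bearded men":
print(search(["pirates", "bearded men"]))  # {'Edward Teach'}
```

The system still knows nothing about the inside of a kernel; it only does algebra on memberships.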

In a level 4 structure folders are themselves included in folders. This sounds trivial—manila folders inside hanging folders, or sub-directories in directories—but that would be a level 1 system. In a level 4 structure, each folder can be included in multiple folders, including itself. This is impractical on paper, and only just practical using symlinks—the POSIX file system is arguably a level 4 structure. But the only nontrivial level 4 structure in operation is Google's search engine, which is smart enough that it could retrieve Edward Teach for "Pirate who did not shave"—it is capable of including the folder "bearded" in the folder "not shaving." (It doesn't, alas.)

Understand the distinction between levels 3 and 4: at level 3 the maximum number of searches that can retrieve a kernel is a function of the number of folders it belongs to and the length of the search query. At level 4, because folders can be retrieved by other folders—not just pseudo-folders—the maximum must be reckoned as though every folder included every pseudo-folder.
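The level 4 step can be sketched the same way, with hypothetical folder names: once folders can contain folders (even themselves), retrieval has to chase inclusions, guarding against cycles.

```python
# A minimal sketch of a level 4 structure. All names are invented
# for illustration.
kernels_in = {
    "bearded": {"Edward Teach"},
    "not shaving": set(),          # holds no kernels directly
}
folder_includes = {
    "not shaving": {"bearded"},    # the folder "bearded" is inside "not shaving"
    "bearded": {"bearded"},        # a folder may even include itself
}

def retrieve(folder, seen=None):
    """Everything in `folder`, directly or through included folders."""
    seen = set() if seen is None else seen
    if folder in seen:             # self-inclusion must not loop forever
        return set()
    seen.add(folder)
    found = set(kernels_in[folder])
    for sub in folder_includes[folder]:
        found |= retrieve(sub, seen)
    return found

print(retrieve("not shaving"))     # {'Edward Teach'}
```

Retrieving "not shaving" finds Edward Teach even though he is filed only under "bearded"—the inclusion does the work.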

But why stop with level 4? Computers could do better. Level 0 jumbles kernels; level 1 puts them in folders; level 2 puts them in multiple folders; level 3 retrieves them by set operations on those folders; level 4 puts folders in multiple folders. The simplest case of a level 5 structure would be the search: "searches that return Edward Teach." This sounds useless until you consider the benefit of gradually narrowing a search by doing one search after another, each on the results of the last. When you think about it this seems very hierarchical—like folders inside folders. With a level 5 structure you could retrieve multiple searches the way you search multiple tags. For example, suppose you want to know how the Queen Anne's Revenge was rigged; and suppose some page answers the question, though not in those terms. Now, of course, you could search "Queen Anne's Revenge rigging", or search "Queen Anne's Revenge" and then "rigging", or "sailing ship" and then "Queen Anne's Revenge"—but no luck. But suppose you could search "searches that return Queen Anne's Revenge + searches that return sailing ship + rigging". Now if it happened that the Queen Anne's Revenge was a French frigate, and French frigates of the early 1700s were rigged in a characteristic way, this search could shake that connection out.

(Note that a blog stricto sensu is actually a sort of level 5 search—a blog that collates links on a certain set of topics is a handmade equivalent to a search on searches that return those topics.)
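The level 5 idea, a search whose terms are other searches, can also be sketched. The pages and index terms below are invented for illustration; the point is only that the chained search finds a page no direct search can.

```python
# A minimal sketch of a level 5 structure. Page names and index terms
# are hypothetical.
pages = {
    "qar-history":     {"Queen Anne's Revenge", "French frigate", "pirate"},
    "frigate-rigging": {"French frigate", "rigging", "sailing ship"},
    "knots":           {"rigging", "sailing ship"},
}

def search(term):
    """Level 3: pages whose index contains `term`."""
    return {p for p, terms in pages.items() if term in terms}

def searches_that_return(term):
    """Level 5: every term whose result set overlaps this term's results."""
    hits = search(term)
    return {t for terms in pages.values() for t in terms if search(t) & hits}

# No page mentions both the ship and rigging, so the direct search fails:
print(search("Queen Anne's Revenge") & search("rigging"))   # set()

# But chasing "searches that return Queen Anne's Revenge" reaches
# "French frigate", which shakes the rigging page out:
reachable = set()
for t in searches_that_return("Queen Anne's Revenge"):
    reachable |= search(t)
print(reachable & search("rigging"))                         # {'frigate-rigging'}
```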

But enough speculation. The point of proposing these levels is to show that ascending levels in the sophistication of search are independent of the internal structure of a kernel. Even the most sophisticated searches possible, and those not yet possible, are still a matter of folders and contents. And putting a kernel into one or many folders is not the same as parsing it.

Indeed parsing is an impediment to search, not an aid. Certainly it is good when we search on Edward Teach and are directed to a "Blackbeard" chapter in a book about pirates. For our purposes the book as a whole is a kernel; perhaps the chapter is too—we may print it out, or find the book and photocopy it, or collect it screenshot by screenshot. But how far can we break it down? It may be true that half of the chapter is not about Blackbeard at all—this paragraph tells about the town where he was born, this paragraph tells about his ship, this paragraph tells about his competitors—and it may be true that of the paragraphs about him half the sentences are not about him—here is a thought on the nature of brutality, here is a thought about why bearded men are threatening. Yet if you isolate only the sentences that are about Blackbeard specifically, the result is gibberish. You wanted something about Blackbeard? Well, this chapter as a whole is about Blackbeard—but no part of it is actually about him.

This is why PIM is hard: there need not exist any necessary connection between a kernel's internal structure and the folders where it is classified. The relationship is unpredictable, and that unpredictability makes PIM hard—hard not as in difficult, but hard as in insoluble, in a way that is revealing of some human limitation. Classification is irreducibly contingent.

Accordingly PIM is always tendentious, always fallible, and not always comprehensible outside of a specific context, or to any other but a specific person. And the most useful abstract classifications are not the best, but the most conventional—like the Dewey Decimal system, whose only advantage was that of existing.


III.

Now I return to Engelbart and his "quick summary of relevant computer technology." It would be tempting to pass over this section of Augmenting Human Intellect as pointless. We know computers; we know what they can do. The introductions necessary in 1962 are needless for us. And true, some of it is funny.

For presenting computer-stored information to the human, techniques have been developed by which a cathode-ray-tube (of which the television picture tube is a familiar example) can be made to present symbols on their screens of quite good brightness, clarity, and with considerable freedom as to the form of the symbol. Under computer control an arbitrary collection of symbols may be arranged on the screen, with considerable freedom as to relative location, size, and brightness.

But we should look, because Augmenting Human Intellect predates a great schism in the design and use of computers. Two sects emerged from that schism. The technologies that Engelbart thought would make augmentation practical largely ended up in the possession of one side of this schism—the losing side.

Engelbart thinks of computers as symbol-manipulating engines. This strikes one in the face when he talks about simulation:

[T]hey discovered that the symbol structures and the process structures required for such simulation became exceedingly complex, and the burden of organizing these was a terrific impediment to their simulation research. They devised a structuring technique for their symbols that is basically simple but from which stem results that are very elegant. Their basic symbol structure is what they call a "list," a string of substructures that are linked serially in exactly the manner proposed by Bush for the associative trails in his Memex—i.e., each substructure contains the necessary information for locating the next substructure on the list. Here, though, each substructure could also be a list of substructures, and each of these could also, etc. Their standard manner for organizing the data which the computer was to operate upon is thus what they term "list structuring."

This is in reference to IPL-V. A few paragraphs later he writes, with frustrating understatement, "Other languages and techniques for the manipulation of list structures have been described by McCarthy"—followed by eight other names. But McCarthy's is the name to notice; and his language, LISP (LIst Processing), would become the standard tool for this kind of work.

There is a famous essay about the schism, by Richard Gabriel, the source of the maxim "Worse is Better." It contrasts two styles of programming: the "MIT style" of the MIT AI Lab and the "New Jersey style" of Bell Labs. Software as we know it—based around the C programming language and the Unix family of operating systems—derives from the New Jersey style. Gabriel's essay actually characterizes the New Jersey style as a virus.

But how does this difference in style relate to the concept of "symbolic structures"? Lisp is focused on the manipulation of symbolic structures; and Lisp is the language best suited for this because Lisp code is in fact itself a symbolic structure—every Lisp program is an exhaustive description of itself. C-like languages are a set of instructions to a compiler or interpreter. The instructions are discrete and serial. The resultant symbolic structure exists only when the program is run. (Note that the difference is one of tendency, not of capacity. It is an axiom that any program can be written in any programming language that has the property of being Turing-complete—as all these languages are.)
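The contrast can be made concrete with a toy sketch (in Python, for neutrality): a program represented in the Lisp manner, as a nested list that is itself a symbolic structure, open to inspection and manipulation by other code. The evaluator below is invented for illustration and handles only the two operators the example uses.

```python
# A toy illustration of code as symbolic structure: the program is a
# nested list, which another program can read, rewrite, or evaluate.
from math import prod

def evaluate(expr):
    if not isinstance(expr, list):   # a bare number evaluates to itself
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        return prod(vals)
    raise ValueError(f"unknown operator: {op}")

program = ["+", 1, ["*", 2, 3]]      # the structure (+ 1 (* 2 3))
print(evaluate(program))             # 7

# Because the program is just a list, it can be manipulated as data:
program[0] = "*"                     # now (* 1 (* 2 3))
print(evaluate(program))             # 6
```

A C program admits no such move: its text is instructions to a compiler, and the running structure exists only after translation.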

Why C-like languages won may be summarized by a point of jargon. In Lisp-like languages anything besides manipulating symbolic structures—say, writing a file to disk or rendering it to the screen—is called a side effect. What are side effects to Lisp programmers are the business of C programmers. So instead of symbols and trails we deal with files and windows and websites, and have to hold the structures they are supposed to fit into in our own heads.

Coincidentally in housebuilding the quick and dirty style is called "New Jersey framing." The standard way is to frame a wall as a unit—a grid of studs nailed through their ends at right angles—then stand it up and nail it into place. Jersey framing instead maneuvers each stud into its final position before toenailing it in place—that is, hammering nails in at an angle. The standard style is more secure, but involves delay and forethought; New Jersey framing is less secure, but makes constant progress. New Jersey programming has essentially the same advantages and drawbacks.

Genius

Analogies between intelligence and physical strength are easy to make and often useful. I have used them before and I expect to use them again. But the correspondence is not exact. If the word “genius” is to mean anything, it must do more than name qualities of intelligence that are superior to ignorance in the same way that athleticism is superior to clumsiness. There are such qualities, such matters of degree; but they are not genius.

To use the word genius significantly, I would posit that strength is stable, but intelligence is metastable. These are terms from physics; they have statistical analogs but the terms from physics are more easily illustrated.

Imagine a marble rattling inside a bowl with tall sides. Rest the bowl on flat ground; shake it. Sometimes the marble climbs one side; sometimes another; but always it comes to rest on the bottom – and when it falls out, it falls no lower than the bottom. In this bowl the marble’s condition is stable. (Chart the marble’s movements, and you have a bell curve.)

But intelligence is metastable. Imagine the same bowl; but this time, instead of resting it on the ground, put it at the summit of a hill. Mostly the marbles rattle inside this bowl as they did in the other; but sometimes a marble overtops the side, and shoots off down the hill on a trajectory we rattling marbles cannot imagine.

I believe in genius – not in geniuses. All of us spend most of our time rattling around in the bowl. But when the right person thinks about the right subject at the right time, a mind can take a trajectory that briefly places it, not just above all others, but above the sum of all others. In a work of genius, however briefly, a brainpower is concentrated that exceeds the combined brainpower of the rest of the human race. (Or, if not the sum, at least the sum of what language could coordinate to be applied along those lines.) Not a bit-for-bit balance of computations – only an unpredictable and incomparable excession.

A work of genius is recognizable because it arrives, even when it is simple in itself, as a characteristic expression of an unknown order of things – the way that the first artifact discovered from a lost civilization stands, the way the first signal from an alien civilization might stand – standing apart from all you know, not because it is overtly different, but because it implies in its negative space, in its outlines and hollows, a system of beliefs and concerns altogether contained in itself, a strangeness that is not a shock but a rich and intricate surprise.

Maybe this is why I feel such desperate pity for lost books. Sometimes when a book is lost, all that is lost is one more thing in the world; but sometimes when a book is lost, something like a world, something like cities and peoples, falls silent.