Weakmindedness Part One

I

Is intelligence obsolete?

The question is: will the digital technologies of intellectual augmentation make exceptional intelligence obsolete, in the same way that the mechanical technologies of physical augmentation made exceptional strength obsolete? Not, “is the net making us stupid?” but “does the net make it as impossible to be stupid as the grid makes it impossible to be powerless?”

The saying goes that any article that asks a question does so in order to answer “no.” If the writers were sure, they wouldn’t ask. This is not one of those articles. My answer to the question “is intelligence obsolete?” is yes – though with reservations about the concept of obsolescence.

I say intellectual augmentation in reference to Douglas Engelbart’s 1962 Augmenting Human Intellect. I will use this book as the scaffold for the first part of my argument. Anyone who has investigated the origins of the net will know Vannevar Bush’s 1945 As We May Think, a prefiguration of the Internet in light-table and microfilm. Augmenting Human Intellect is explicitly an attempt to show how Bush’s vision could be made workable in electronic form. It is not a marginal document; six years after it was published its author, head of the Augmentation Research Center at the Stanford Research Institute, gave what is now known as the “Mother of All Demos,” where he débuted, among other things, hypertext, email, and the mouse.

Some of the possibilities that Augmenting Human Intellect proposes have been fulfilled; some have been overtaken; some have failed; and some remain untried. The interesting ones are the untried.

The relevant part of Augmenting Human Intellect begins with Engelbart’s description of the system he used to write the book – edge-notched cards, coded with the book or person from whom the content was derived.

(I say “content” because, as anyone who has attempted to maintain a system of notes organizing small, disparate pieces of information will realize, it is impossible to strictly distinguish thoughts and facts – the very act of selecting a fact for inclusion implies a thought about it.)

Engelbart calls these thought-facts kernels. He would arrange his cards into single-subject stacks, or notedecks. In the book he summarizes the frustrations of creating a memo using these cards – the lack of a mechanism for making associations (links, that is, but in both directions); the tedium of copying the links out; the confusion of keeping track of what linked to what.
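
To put that bookkeeping in modern terms, here is a minimal sketch – mine, not Engelbart’s, with invented names – of kernels that carry their links in both directions:

    # Illustrative only: kernels and notedecks with two-way links,
    # the record-keeping Engelbart had to do by hand on his cards.

    class Kernel:
        def __init__(self, content, source=None):
            self.content = content   # the thought-fact itself
            self.source = source     # the book or person it derives from
            self.links_out = []      # kernels this one refers to
            self.links_in = []       # kernels that refer to this one

        def link_to(self, other):
            """Record the association on both ends, so that what
            links to what never has to be reconstructed."""
            self.links_out.append(other)
            other.links_in.append(self)

    # A notedeck is just a single-subject stack of kernels.
    memex = Kernel("Associative trails on microfilm", source="Bush 1945")
    cards = Kernel("Edge-notched cards coded by source", source="Engelbart 1962")
    cards.link_to(memex)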

He considers a mechanical system for leaving trails between cards and for copying them, but objects:

It is plain that even if the equipment (artifacts) appeared on the market tomorrow, a good deal of empirical research would be needed to develop a methodology that would capitalize upon the artifact process capabilities. New concepts need to be conceived and tested relative to the way the “thought kernels” could be knitted together into working structures, and relative to the conceptual presentations which become available and the symbol-manipulation processes which provide these presentations.

He proceeds to further object that by the time some such mechanical system could be perfected, electronics would be better suited to the job. And we’re off.

II

Pause, first, to consider Engelbart’s concept of the thought kernel. Engelbart is explicit that the kernel itself represents a “structure of symbols.” Yet, for purposes of inclusion in a larger symbolic structure, the kernel must be treated as smooth and integral. Every symbolic structure is made of smooth kernels – but all kernels are composite. This tension can be dealt with in more than one way.

Search, at every ascending level of sophistication, remains independent of the internal structure of a kernel. Even the most sophisticated searches now possible, and those not yet possible, are still a matter of folders and contents. And putting a kernel into one or many folders is not the same as parsing it.
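
The distinction fits in a few lines of code – an illustrative sketch with invented names, not a claim about any real search engine. However the query is phrased, the kernel is only ever an opaque member of folders:

    # Illustrative only: search as membership in folders. Nothing here
    # looks inside a kernel; each is treated as an opaque whole.

    kernels = {
        "note-17": "a chapter copied from a book on piracy",
        "note-42": "a thought on the nature of brutality",
    }

    folders = {
        "piracy":    {"note-17"},
        "biography": {"note-17", "note-42"},
    }

    def search(*wanted):
        """Return every kernel filed under all of the named folders."""
        found = set(kernels)
        for folder in wanted:
            found &= folders.get(folder, set())
        return found

    print(search("piracy", "biography"))   # {'note-17'}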

Parsing is an impediment to search, not an aid. Certainly it is good when we search for Edward Teach and are directed to a “Blackbeard” chapter in a book about pirates. For our purpose the book as a whole is a kernel; and the chapter is too – we may print it out, or find the book and photocopy it, or collect it screenshot by screenshot. But how far can we break it down? It may be true that half the chapter is not about Blackbeard at all – this paragraph tells us about the town where he was born, this paragraph tells us about his ship, this paragraph tells us about his competitors – and it may be true that of the paragraphs about him half the sentences are not about him – here is a thought on the nature of brutality, here is a thought about why bearded men are menacing. If you isolate only the sentences that are about Blackbeard specifically, the result is gibberish. You wanted something about Blackbeard? Well, this chapter as a whole is about Blackbeard – but no part of it is actually about him.

This is why PIM (“personal information management”) is hard: there need be no connection between a kernel’s internal structure and the folders where it is classified. The relationship is unpredictable. This unpredictability makes PIM hard – hard not as in difficult, but hard as in insoluble, in a way that is revealing of some human limitation. Classification is contingent, irreducibly.

Accordingly PIM is always tendentious, always fallible, and not always comprehensible outside of a specific context, or to anyone but a specific person. And the most useful abstract classifications are not the best, but the most conventional – like the Dewey Decimal system, whose only advantage is that it exists.

III

Now I return to Engelbart and his “quick summary of relevant computer technology.” It would be tempting to pass over this section of Augmenting Human Intellect as pointless. We know computers; we know what they can do. The introductions necessary in 1962 are needless for us. And true, some of it is funny.

For presenting computer-stored information to the human, techniques have been developed by which a cathode-ray-tube (of which the television picture tube is a familiar example) can be made to present symbols on their screens of quite good brightness, clarity, and with considerable freedom as to the form of the symbol.

But we should look anyway, because Augmenting Human Intellect predates a great schism in the design and use of computers. Two sects emerged from that schism. The technologies that Engelbart thought would make augmentation practical largely ended up in the possession of one side of this schism – the losing side.

Engelbart thinks of computers as symbol-manipulating engines. This strikes one in the face when he talks about simulation:

[T]hey discovered that the symbol structures and the process structures required for such simulation became exceedingly complex, and the burden of organizing these was a terrific impediment to their simulation research. They devised a structuring technique for their symbols that is basically simple but from which stem results that are very elegant. Their basic symbol structure is what they call a “list,” a string of substructures that are linked serially in exactly the manner proposed by Bush for the associative trails in his Memex – i.e., each substructure contains the necessary information for locating the next substructure on the list. Here, though, each substructure could also be a list of substructures, and each of these could also, etc. Their standard manner for organizing the data which the computer was to operate upon is thus what they term “list structuring.”

This is in reference to IPL-V. A few paragraphs later he writes, with spectacular understatement, “Other languages and techniques for the manipulation of list structures have been described by McCarthy” – followed by eight other names. But McCarthy’s is the name to notice; and his language, LISP (LISt Processing), would become the standard tool for this kind of work.
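
What “list structuring” amounts to can be shown in a sketch – in a modern notation rather than IPL-V’s, with invented contents: a list whose elements may themselves be lists, nested without limit, the one primitive out of which arbitrarily elaborate symbol structures are knitted.

    # A sketch of list structuring: each element of a list may itself
    # be a list, so one primitive yields arbitrarily deep structures.

    trail = ["memex",
             ["bush-1945", "associative-trails"],
             ["engelbart-1962",
              ["kernels", "notedecks"],
              ["links", "in-both-directions"]]]

    def leaves(structure):
        """Walk a nested list and yield its atomic symbols in order."""
        for item in structure:
            if isinstance(item, list):
                yield from leaves(item)
            else:
                yield item

    print(list(leaves(trail)))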

There is a famous essay about the schism, by Richard Gabriel, source of the maxim “Worse is Better.” It contrasts two styles of programming: the “MIT style” – the style of the MIT AI Lab – and the “New Jersey style” – the style of Bell Labs. Software as we know it – based around the C programming language and the Unix family of operating systems – derives from the New Jersey style. Gabriel’s essay actually characterizes the New Jersey style as a virus.

But how does this difference in style relate to the concept of “symbolic structures”? Lisp is focused on the manipulation of symbolic structures; and Lisp is the language best suited for this because Lisp code is in fact itself a symbolic structure. C-like languages are instructions to a compiler or interpreter. The instructions are discrete and serial. The symbolic structure remains implicit.
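
The difference can be illustrated (here in Python rather than Lisp, so that the nesting is spelled out): write a program as a nested list of symbols and it can be evaluated, inspected, or rewritten with the same operations as any other symbolic structure.

    # Illustration: a tiny Lisp-like expression written as a nested list.
    # The "code" is itself a symbol structure, open to manipulation.

    import operator

    OPS = {"+": operator.add, "*": operator.mul}

    def evaluate(expr):
        """Evaluate a prefix expression given as a nested list."""
        if not isinstance(expr, list):
            return expr                      # an atom evaluates to itself
        op, *args = expr
        return OPS[op](*(evaluate(a) for a in args))

    program = ["+", 1, ["*", 2, 3]]          # the program is itself data
    print(evaluate(program))                 # 7

    # Rewriting the program as a structure, not as text:
    doubled = ["*", 2, program]
    print(evaluate(doubled))                 # 14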

(Note that the difference is one of tendency, not of possibility. It is an axiom that any program can be written in any programming language that has the property of being Turing-complete – as all these languages are.)

Why C-like languages won may be suggested by a point of jargon. In Lisp-like languages anything besides manipulating symbolic structures – say, writing a file to disk or rendering it to the screen – is called a side effect. What are side effects to Lisp programmers are the business of C programmers. So instead of symbols and trails we deal with files and windows and websites, and have to hold the structures they are supposed to fit into in our own heads.
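
The jargon, in a sketch of my own (illustrative names, nothing from Engelbart or Gabriel): the first function below only rearranges a symbolic structure; the second touches the world outside it, which is what Lisp jargon calls a side effect and what C culture treats as the point of the program.

    # Illustrative contrast: pure manipulation of a symbol structure
    # versus a side effect - touching the disk, the world outside.

    import json

    def annotate(kernel, note):
        """Pure: build and return a new structure; nothing else changes."""
        return {**kernel, "notes": kernel.get("notes", []) + [note]}

    def save(kernel, path):
        """Side effect: write the structure out to disk."""
        with open(path, "w") as f:
            json.dump(kernel, f)

    kernel = {"content": "Memex proposal", "source": "Bush 1945"}
    kernel = annotate(kernel, "prefigures hypertext")
    save(kernel, "kernel.json")   # the side effect is where C culture lives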

Coincidentally, in construction the quick and dirty style of framing a house is called “New Jersey framing.” The standard way is to frame a wall as a unit – a grid of studs nailed through their ends at right angles – then stand it up and nail it into place. Jersey framing instead maneuvers each stud into its final position before toenailing it in place – that is, hammering nails in at an angle. The standard style is more secure, but involves delay and forethought; New Jersey framing is less secure, but makes constant progress. New Jersey programming has essentially the same advantages and drawbacks.