Weakmindedness Part Two


To make his ideas more tractable Engelbart tells a story of two characters: "You", addressed in the second person, and "Joe", who is experienced with augmentation and is giving You a demonstration.

First Joe shows off his workstation. His desk has two monitors, both mounted at slight angles to the desk—"more like the surface of a drafting table than the near-vertical picture displays you had somehow imagined." He types into a split keyboard, each half flanking a monitor, poising him over his screens as he works.

The ergonomics are impeccable. Consider how tradition forces us into a crick-necked and hunched-shouldered position whenever we sit at a keyboard—how it literally constrains us. Judge how much more of the way you work is so ruled.

To introduce the capabilities of the system Joe edits a page of prose. Lo—every keystroke appears instantly onscreen! When he reaches the end of the line, carriage return is automatic! He can delete words and sentences, transpose them, move them around, make them disappear—"able to [e]ffect immediately any of the changes that a proofreader might want to designate with his special marks, only here the proofreader is always looking at clean text as if it had been instantaneously retyped." He can call up definitions, synonyms and antonyms "with a few quick flicks on the keypad." He can define abbreviations for any word or string of words he employs, whenever he wants, and call them up with substrings or keychords. But you have fonts, yes?

In short the capabilities of Joe's editor are somewhat above those of a word processor and somewhat below those of a programmer's editor.

Here we find one of the problems with Engelbart's vision. It is easier to augment entities than procedures. If in the context of typing a word is just the procedure of hitting a certain sequence of letters, then in the near term, it actually costs energy and time to change your procedure to typing the first few letters of a word and letting the editor expand it. It requires you to think of the word as an entity, not an operation. For most people, this is impractical.
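To make the entity-versus-procedure point concrete, here is a minimal sketch (my own illustration, not Engelbart's system) of the kind of abbreviation expansion Joe demonstrates: the user defines an abbreviation once, and the editor thereafter treats the word as an entity to be substituted, not a sequence of keystrokes.

```python
# A hypothetical sketch of editor abbreviation expansion.
# Names and structure are illustrative only.

class AbbrevTable:
    def __init__(self):
        self.expansions = {}

    def define(self, abbrev, expansion):
        # The word becomes an entity: defined once, invoked by a short key.
        self.expansions[abbrev] = expansion

    def expand(self, text):
        # Expand each whitespace-delimited token if it is a known abbreviation.
        return " ".join(self.expansions.get(tok, tok) for tok in text.split())

abbrevs = AbbrevTable()
abbrevs.define("aug", "augmentation")
abbrevs.define("ws", "workstation")
print(abbrevs.expand("Joe demonstrates aug at his ws"))
# → Joe demonstrates augmentation at his workstation
```

The near-term cost the essay describes is visible even here: the user must stop thinking "type these letters" and start thinking "invoke this entity."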

Consider the abacus. The frictionless operation of mental arithmetic seems easier than the prestidigitation the abacus requires. So the most practiced algorists are faster than the fastest abacists. (Sometimes, as in the famous anecdote of Feynman and the Japanese abacist, the algorist's superior knowledge of mathematics will simplify the problem to triviality.) But of course the abacus is easier to learn than mathematics, and for a given amount of practice the average abacist will be much faster than the average algorist.

There are abacus virtuosos who can calculate faster than the abacus can be fingered, who calculate moving their fingers on an empty table, but who cannot calculate at all without moving their fingers—slaves to a skeuomorph.

Skeuomorph is a term from architectural criticism. It names, usually to pejorate, a building's needless imitation of the traces of old-fashioned construction methods and materials it does not employ. But skeuomorphs are not all bad—classical architecture, in its system of pillars, pilasters, entablatures &c. is a representation in stone of the engineering of wooden buildings.

The experience of using a computer is run through with skeuomorphs—the typewriter in the keyboard, the desktop in the screen, the folders on the hard drive, the documents they contain. Through a cultural process they dictate—even to those with little experience of the analog originals—how computers are to be used. Even as they let us in they hold us back.

So it might seem that new user-interface concepts are necessarily a liberation. They should be; but so far they have not been. In particular the recent generation of portable devices has moved farther toward the pole of the abacus—easy for competence, limited for mastery. As they break down walls, they close doors. As they are more and more physical and spatial, they are less and less symbolic.


Now we come to the last part of Joe's demonstration, and leave the familiar behind. The talk from here on is of arguments, statements, dependencies, and conceptual structures. Joe explains that he uses his workstation to produce arguments, composed of statements, arranged sequentially, but not serially. Quote:

This makes you recall dimly the generalizations you had heard previously about process structuring limiting symbol structuring, symbol structuring limiting concept structuring, and concept structuring limiting mental structuring. You nod cautiously, in hopes that he will proceed in some way that will tie this kind of talk to something from which you can get the "feel" of what it is all about.

He warns you not to expect anything impressive. What he has to show you is the sum of a great many little changes. It starts with links: not just links between one document and others, but links within the document—links that break down sentences like grammatical diagrams, links that pin every statement to its antecedents and consequences:

[T]he simple capabilities of being able to establish links between different substructures, and of directing the computer subsequently to display a set of linked substructures with any relative positioning we might designate among the different substructures.

Note that this does not just mean creating links—it means creating bidirectional linkages, linkages that have kinds, linkages that can be viewed as structures as well as followed.

Here is a skeuomorph: the index or cross-reference in the hyperlink. The hyperlink as we know it is hyper only in the most trivial sense. You cannot even link a particular part of one document to a particular part of another unless the target is specially prepared with anchors to hold the other end of the link. Except inside a search engine (and the futile experiment of trackbacks), a link contributes no metadata to its target. The web has no provisions for back-and-forth or one-to-many links, let alone for, say, uniquely identified content or transclusions.
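What the essay says the web lacks is easy to sketch. Assuming links are stored as first-class records outside the documents (as in Xanadu-style designs), each link can have a kind, can be traversed backward as well as forward, and can be one of many pointing at the same target. The names below are illustrative, not any real system's API.

```python
from collections import defaultdict

# A hypothetical store of typed, bidirectional, one-to-many links.
class LinkStore:
    def __init__(self):
        self.forward = defaultdict(list)   # source -> [(kind, target)]
        self.backward = defaultdict(list)  # target -> [(kind, source)]

    def link(self, source, target, kind):
        # One record, indexed both ways, so neither endpoint needs
        # to be "specially prepared" to be linkable.
        self.forward[source].append((kind, target))
        self.backward[target].append((kind, source))

    def links_from(self, source, kind=None):
        return [t for k, t in self.forward[source] if kind is None or k == kind]

    def links_to(self, target, kind=None):
        # Backlinks: the metadata a web hyperlink never contributes.
        return [s for k, s in self.backward[target] if kind is None or k == kind]

store = LinkStore()
store.link("claim-1", "evidence-a", kind="supports")
store.link("claim-1", "evidence-b", kind="supports")
store.link("claim-2", "claim-1", kind="depends-on")

print(store.links_from("claim-1", kind="supports"))
# → ['evidence-a', 'evidence-b']
print(store.links_to("claim-1"))
# → ['claim-2']
```

The design choice that matters is that the link lives apart from both documents: that is what makes it viewable as a structure rather than merely followable.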

These are not particularly exotic or difficult ideas; to understand how they might have worked—what the net might have been—look at Nelson's Xanadu.

Understand that the problems of the web—the problems smug commentators vaunt as unpredictable consequences of runaway innovation—these problems were not only thought of, but provided for, before the web existed. Understand that the reason we have these problems anyway is the haphazard and commercially driven way the web came to be. Understand that the ways in which the web destroys value—its unsuitability for micropayment, for example—and the profits the web affords—like search—are consequences of its non-architecture. If the web had been designed at all, music, news, and writing would be booming in proportion to their pervasiveness. Instead we have Google. Instead we have a maze where the only going concern is selling maps.

I should stipulate that the net—the Internet—and the web—the World Wide Web—are different things. The net is the underlying technology, the pipes; the web is one way of using that technology. Email, for example, is part of the net, but not part of the web; the same is true of BitTorrent or VoIP. At one level the answer to the question "Is Google making us stupid?" is "No, the web is making us stupid—wiring our brains into the web is just Google's business model."

Certainly it is easy to defend the web against this kind of heckling. Nothing succeeds, as they say, like success. The guy in the back of the audience muttering how the guys on stage are doing it wrong is always and rightfully the object of pity. And there is no way back to the whiteboard; the web is, and it is what it is.

But we must remember that it could have been different—if only to remind us that we will have more choices. What has happened was not inevitable; what is predicted is not inexorable.


The way Joe describes the effect of augmented symbol-structuring is worth quoting in full:

I found, when I learned to work with the structures and manipulation processes such as we have outlined, that I got rather impatient if I had to go back to dealing with the serial-statement structuring in books and journals, or other ordinary means of communicating with other workers. It is rather like having to project three-dimensional images onto two-dimensional frames and to work with them there instead of in their natural form.

This, of course, again recalls the question, this time in its intended meaning: "Is Google making us stupid?" It is not a problem I have, but people do seem to suffer from it, so I can name the tragedy—we have just enough capacity for symbol-structuring on the web to break down some people's tolerance for linear argument, but not enough to give them multidimensional ways of constructing arguments. The web is a perfectly bad compromise: it breaks down old patterns without capacitating new ones.

Joe moves on from symbol structuring to process structuring. Here the methods resemble those used for symbol structuring—they are links and notes—but they are interpreted differently. A symbol structure yields an argument; a process structure answers the question—"What next?"

And this, of course, recalls "Getting Things Done"—it is the complement of the next action. GTD, however, takes the abacist approach. Adherents of GTD manipulate physical objects or digital metaphors for physical objects—inboxes and TO DO lists—and reduce them to a definite series of next actions. Ultimately this is all any process structure can disclose—"What do I do now?"—and for most tasks something like GTD is adequate.
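A process structure's answer to "What next?" can be sketched directly. Assuming tasks and their dependency links are recorded explicitly (function and task names below are my own invention), the next actions are simply the unfinished tasks whose antecedents are all complete:

```python
# A hypothetical process structure: tasks linked to their prerequisites.
def next_actions(tasks, depends_on, done):
    """tasks: iterable of task names.
    depends_on: dict mapping a task to its set of prerequisite tasks.
    done: set of completed tasks.
    Returns the tasks that can be started right now."""
    return [t for t in tasks
            if t not in done
            and depends_on.get(t, set()) <= done]

tasks = ["patch hole", "buy paint", "paint ceiling"]
deps = {"paint ceiling": {"patch hole", "buy paint"}}

print(next_actions(tasks, deps, done=set()))
# → ['patch hole', 'buy paint']
print(next_actions(tasks, deps, done={"patch hole", "buy paint"}))
# → ['paint ceiling']
```

The same links that would serve symbol structuring (statement pinned to antecedent) here get the process interpretation: nothing is a next action while its antecedents are unfinished.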

If there is a hole in your roof that leaks, the fact of the leak will remind you to fix the hole. The process is self-structuring: you will fix it or get wet. So, to a lesser extent, is the letter on your desk. But the email in your inbox—if you expect to answer it, you must find some way to point up its urgency. But why should this be? Why can't you instruct the computer to be obtrusive? Why can't digital tasks structure themselves?

They can; but they don't, because there is no metaphor for it. The abacist email program has an inbox; following the metaphor, to get something out of the inbox, you must do something with it. More algorist email programs, like Mutt or Gnus, invert the problem—once you have read a message, unless you explicitly retain it, it disappears. This approach is vastly more efficient, but it has no straightforward paperwork metaphor, so it is reserved for the geeks.
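The two retention models contrast neatly in code. This is a sketch of the behavior the essay attributes to Mutt- and Gnus-style readers, not either program's actual implementation: once read, a message vanishes unless explicitly retained.

```python
# Hypothetical "algorist" mail sweep: unread messages survive,
# read messages disappear unless the user flagged them to keep.
def algorist_sweep(messages, read, retained):
    """Keep only messages that are unread or explicitly retained."""
    return [m for m in messages if m not in read or m in retained]

inbox = ["invoice", "newsletter", "meeting"]
read = {"newsletter", "meeting"}
retained = {"meeting"}  # explicitly kept by the user

print(algorist_sweep(inbox, read, retained))
# → ['invoice', 'meeting']
```

The abacist inbox inverts the default: everything stays until acted on. The efficiency gain of the algorist default is exactly that the common case (read it, done with it) costs no action at all.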

Or again: why can't you develop processes in the abstract? GTD is itself a single abstract workflow. Bloggers are forever writing up their own workflows. Why can't your computer analyze your workflow, guide you through it, help you refine it? Why can't it benchmark your activities and let you know which ones absorb the most time for the least return? Why is there no standard notation for workflows? Of course programmers have something like this in their editors and IDEs; but probably you do not.
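Even the benchmarking the paragraph asks for needs nothing exotic. Here is a minimal sketch, with invented names, of logging time per activity and reporting which ones absorb the most:

```python
from collections import defaultdict

# A hypothetical activity benchmark: record durations, report totals.
class ActivityLog:
    def __init__(self):
        self.totals = defaultdict(float)

    def record(self, activity, seconds):
        self.totals[activity] += seconds

    def report(self):
        # Activities sorted by total time, most absorbing first.
        return sorted(self.totals.items(), key=lambda kv: kv[1], reverse=True)

log = ActivityLog()
log.record("email", 45 * 60)
log.record("writing", 90 * 60)
log.record("email", 30 * 60)
print(log.report())
# → [('writing', 5400.0), ('email', 4500.0)]
```

Measuring return on that time, as the paragraph also asks, is the genuinely hard part; nothing in this sketch attempts it.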

Augmenting Human Intellect is worth reading but I am done with it. If I have been successful I have disenchanted you with the net—disenchanted literally: broken the spell it casts over minds that should know better. If I have been successful you understand that the net as you know it is not inevitable; that its future is not fixed; that its role is not a given; that its usefulness for any particular purpose is subject to judgment and discretion.