Nondefinition #31

Solipsism. Are "you" Happy? Do you "want" to be Happy? Then you need to "know" about JOHN TEMPLE, the "One" you've been waiting for! In 1998 JOHN TEMPLE made discovery while reading Philosophy of "only" completely TRUE religion Solipsism. JOHN TEMPLE discovers that himself, JOHN TEMPLE, is Only "Real" Being in "Universe"! Since then JOHN TEMPLE has made 100's of people "happy" just by "thinking" about them and making them "real"!! Being real is "only" TRUE HAPPINESS! You too can be "happy"!! Just send name and picture to JOHN TEMPLE and he will make you "happy" by thinking about YOU for FIVE MINUTES!! Only $20 for five minutes of "HAPPINESS"! You too can be "happy"!! JOHN TEMPLE --- IS THE "TEMPLE" OF HAPPINESS!

That Damn Barking

The night is done, but I am still awake
Because the dog is barking just to bark
(I've checked it twice―there's nothing I can see.)
I tell myself it's dogs that made the world.
Outside the hungry night with teeth and claws
Inside the world, with space for human things.
My nails are short because a dog's are long;
My teeth are dull because a dog’s are sharp.
We shoot and shine, we spray and fence it in,
We pierce the lucid dark, we switch it off,
But dogs still keep their watch along the night.
The dogs came first, before the walls and roofs,
But cities need them not. The useless dogs
Of Moscow, idle, learn to waste their time.
My dog knows none of this. She keeps the faith.
She knows her work, she does as she was bred,
Makes way for life. The dog will have to learn.
I cannot live like this. But better this
Than nights when no dog barks, and certain sounds—
Sounds faint and dryly rasping in my ears
Drift in across a hanging-open door.

Nondefinition #30


Moral paradox. "Can I talk to you a minute? The cops say you were the last guy to see my brother—I mean, he was my brother, you were up there on the bridge with him. Did he, like, say anything to you? I wish I knew what he was thinking. I mean, everybody knew he was depressed—ever since the infection he just kept putting on weight, his glands don't work right—but he said he was OK with the operation, we thought there was some hope finally. I just don't get it. Right in front of the trolley. Do you think—somebody said to me maybe he thought he could stop the trolley—no, never mind. That's stupid. I mean, he was an engineer—he would have known, tons of metal, the trolley would go right through him and hit the other people anyway. Oh, I'm sorry, man, I'm rambling, you had to see that, all those people, I get it, you can't talk right now. Thanks for listening. Look, let me give you my number. If you ever need to talk, I'm here for you. Okay, we'll talk later. Thanks. You're a great guy."

Societal contract

Sociality defines human beings differently than it defines social animals. In other animals the individual encapsulates the species. Even if it is a specialized form, even if it is sterile, still its genes record what it cannot embody or propagate. The societies of insects are all in the genes; an individual queen instantiates a complete hive; the hive itself is pure unfolding.

But the society of human beings, though genetically driven, is not genetically predictable. There is an adventitious variation in how a society forms, as there is epigenetic variation in an individual. A single human being contains the certainty of a society, but not the form of that society.

We cannot establish society on a rational basis because society exists to serve an unreasoning need. Imposing reason on the substructure of a society only weakens its differentiating superstructure, imploding society to one of its basic forms.

What are these basic forms? The most basic division I can find is twofold: rule by the old, and rule by the young.

Rule by the old suits small populations. Thus we find tribes and villages with elders, and the most ancient city-states constituted with Senates. Paradoxically, it is by far the more stable of the two forms, yet it is not the ground state. When it fails, government by the old collapses into government by the young; and it is so much harder for a large population to attain government by the old that it arrives there only as an inheritor of government by youth.

Neither rulers nor ruled are immortal. This fact is not without political significance. People age: this is the advantage of rule by the old over rule by the young. Those who attain power old cannot expect to keep it long, so they tolerate the division of responsibilities requisite to a smooth succession.

But those who attain power young can never risk losing it. In rule by the young the rulers are not usually themselves young. The difference is that in rule by the old, age is the title to power; while in rule by the young, it is youth, and the deeds and quality of youth.

Rule by the old is the base class of the kinds of societies we call tribes, and of ancient republics; rule by the young is the base class of societies we call feudal, and of crime we call organized. At the top of such pyramidal systems the king or boss is probably much older than his vassals or boys at the bottom; but his authority derives from how well he understands and can sway the youth of his comitatus (or his posse).

Rule by the young is perishable; rule by the young lasts only as long as it takes to grow old in power. Of course growing old in power is the trick. Rule by the old, left alone, lasts forever; some extant specimens are older than history. But few societies are left alone forever.

It is in competition with other societies that the rule of the old shows its weakness; and war is the exemplary competition. Where the young rule, there is never any difficulty in finding and empowering a competent military commander. Ability trumps respect. But where the old rule, authority and ability are opposites, because authority is reserved for those least able to abuse it.

Accordingly we see that when a militant rule by the young arises, all the governments involved go over to the young in its aftermath, even when it loses. It is like rabies: those who try to restrain the first victim get bitten; soon they all go mad.

This, of course, applies only to civilized countries: there is no difference between authority and ability where there is no such thing as tactics. But consider that the humiliation of Carthage brought about the rule of youth that set Hannibal in command; and how Rome's being forced to answer with Scipio was the beginning of the end of the Republic, once his glory set him above the law.

The same competition appears, though more subtly, in commerce, culture, and discovery.

Of the two forms, I would rather live under the rule of the old; I would rather rule by the rule of the young. So ruled, I could plan for the future; so ruling, I could decide it.

Rule by the old is the default form of government, the one adopted when there is no basis for choice; rule by the young is the ground state, the one that comes into effect when no government at all seems possible.

(Thus I regard anarchists, libertarians, &c., as being on the side of the young; they call what they aim for free trade or coöperation, but what they mean is freedom of action for the young, from a government that they despise as belonging to the old. But freedom of action is the same thing as government. If I can hurt you, and you cannot do anything about it, then I rule you.)

Can there be government that is both stable and active? Insofar as rule by the old correlates to aristocracy, and rule by the young correlates to monarchy, the answer after all this fuss would seem to be obvious: democracy.

Note, however, that these two forms of rule are not classes to one or the other of which all governments must belong. They are elemental forms of government. In classification they serve as poles, to one or the other of which a real government sometimes draws nearer; in analysis they are recapitulated and combined at different levels. Any real government—certainly any modern government, on a national scale—is really a hierarchy and network of governments, in which governments both fit inside and parallel other governments.

I want to resist as strongly as I can the notion that there is some simple nosology of government. The threefold division we blame on Aristotle—monarchy, aristocracy, isonomy (democracy)—what use is it? Even if we assign each form its evil twin—tyranny, oligarchy, democracy (ochlocracy)—what does the choice mean, except approval or disapproval? What structural features distinguish each form from its twin?

When Aristotle speaks of democracy, he means selection of officers by lot; when he speaks of aristocracy, he means nearly what we mean by meritocracy; what he means by a monarch is more like what we would call a political boss than the throne-sitters the word king calls up. And this is the best classification we have!

A constitution does more than fix power relations between classes. A constitution—even one that does not work as its written prospectus suggests—is a piece of social machinery. Once people have the leisure to propose them, an artistic diversity of forms of government can be designed and practiced. People can govern themselves by as many schemes of constitutional machinery as they can invent; they may make decisions according to the will of the king or the voice of the people; they can consult the innards of ravens, the shapes of clouds, the pips of dice, the whisperings of prophetesses, the opinions of economists. As long as everyone believes in it, as long as nothing unexpected challenges it, if it works, it works. This is the first kind of societal contract: like a corporate charter, it defines how things work while things work.

But sometimes things stop working. Plague strikes, you lose the war, rivals outspend you, your markets attempt suicide. When no procedure answers to the problem, then the old and young are heard from. Either the old among the powerful combine to keep things from going wrong; or things go wrong, and the young find themselves making decisions.

Here we find the second kind of societal contract. It has only three possible forms.

The first is the contract between old and young, while both have power. The old agree not to seize power from the constitutional forms; the young agree not to accept power when the rabble offers it to them, except in the name of the constituted government as a whole.

The second is the contract between the old and young after the old take power: a promise that when the elite needs renewal, the young shall be the ones coöpted.

The third is the contract that the young offer the old when power falls to them: to retain them for their advice, and not to let the rabble blame and destroy them.

Consider the French Revolution, when all three contracts were offered but fell through. The first, when Louis fled, and the Jacobins and Girondins accepted the loyalty of the mob and the bourgeois, respectively; the second, when the Girondins tried to exclude the Jacobins; the third, when the Jacobins recruited the rabble into the Terror.

It is very easy to write a constitution that works; any lawyer can do it. (Writing constitutions used to be a hobby of mine.) But it is very hard to write a constitution that fails gracefully. Fortunately, there is a quick test: the longer the written form of the constitution, the more untrustworthy it is. The English constitution, at zero words, has lasted almost a millennium; the American, at ~4500 words, has lasted over two centuries. The Soviet constitution came in at ~13 300 words—a work of fiction of nearly novel length. The Chinese at ~14 000 strikes me as dubious. The length of the recently defeated EU constitution—about 150 000 words (can that be right?)—is perverse.¹

Consider any common business contract: the longer it is, the more detailed the terms, the more ways the contract can bend without breaking, and the less it should be trusted. The longer the contract, the less relevant its terms—witness unreadable software EULAs, whose terms are not so much unenforceable as fallacious. When nothing is certain, the only contract that holds is the handshake.

Hippocrates taught that medicine has two parts, diagnosis and prognosis. Prognosis is the showmanlike part. The purpose of giving a disease a name and predicting to the patient its future course is not to inform the doctor, but to impress the patient. Diagnosis is the subtle and difficult part: the ability to trace the course of the disease so far, to discover and remove its causes, to recognize and avoid its dangers. Diagnosis is how the doctor finds a cure; prognosis is what makes the patient agree to it. Most political and sociological thinking is prognostic. I am trying to see how a diagnostic approach might work.

¹ Figures for the US and Chinese constitutions include amendments. The figure for the EU constitution derives from an automatic conversion (pdftotext | wc -w) and may be inaccurate.
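The footnote's conversion can be reproduced with a two-step pipeline; this is a sketch only, and the PDF filename below is hypothetical. The counting step itself is demonstrated on inline text:

```shell
# The footnote's method: pdftotext converts a PDF to plain text on
# stdout ("-"), and wc -w counts whitespace-separated words. E.g.:
#   pdftotext eu-constitution.pdf - | wc -w        # filename assumed
# The counting step alone, demonstrated on inline text:
printf 'We the People of the United States' | wc -w
```

Note that `wc -w` counts maximal runs of non-whitespace characters, so hyphenated compounds and numbers each count as one word; the resulting figures are rough, as the footnote concedes.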


Can technology perform what has been promised of it—can it make everyone creative? To my surprise, I conclude that it can, if by being creative we understand something as distinct from creation as being active is distinct from action. Obviously technology allows us to be active without acting: it decouples action from activity, both by frustrating determination through the infinite multiplication of stakeholders and the infinite attenuation of authority, and by excusing indecision through an infinite regress of prerequisites, preparations, and precautions. While the year's planning of a royal feast becomes the labor of an hour's trip to the grocery store, the hour's action of a king and counselor becomes the decades' activity of a movement of tens of millions. Technology can likewise democratize creativity by decoupling it from creation, after empowering it with resources cheapened through economies of scale. This decoupling takes place at two points: in preventing the concealment of sources, and in preventing the discrimination of influences.

Current popular writing works by defending oversimplifications with caricatures. You will read, for example, that creativity now stands revealed as inherently collaborative and social, over against an outdated and misguided nineteenth century—no, Victorian—no, Romantic (they are interchangeable for the purpose) idea of Genius inexplicably and mystically inspired with utterly irreducible, unaccountable, and unanswerable Originality.

You will hear quoted (from whom, I don't know) "Creativity is the art of concealing sources." This sounds awful to our ears; it stimulates our reflexes: concealment—hypocrisy! Expose it! And so we embrace a doctrine of synthetic, social creativity defined only as the opposite to a something that no one ever really believed.

Of course creativity really is synthetic. A mass of knowledge has inertia. The more knowledge is accumulated, the more it is all alive; by concentration and fermentation it becomes fertile. It is true, with reservations, that creativity cannot be independent. Every act of creation is fed by a thousand buried mingling rivers. Even the springs of the desert rise by distant rainfalls. Mere self-expression cannot be original because nothing we can create of ourselves can be as dear or as meaningful to us as the things that we have experienced, that have shaped us. New and awkward is just awkward; old but well done is just well done. For a human being there should be a pleasure in seeing anything done well, with gravity and devotion: whittling or whistling, playing a toccata, laying out a garden, writing a program, spinning a yo-yo, engineering an industrial process, writing a constitution, inventing a language. The value of originality does not occur independently.

The caricature is absurd, but not new. Consider Swift, who maligned early Romanticism by comparing its writers to the spider, spinning flimsy pale cobwebs from her own substance, over against the bee, who patiently collects nectar from a hundred flowers and distills it to thick golden honey. Our idea of social creativity matches this caricature, only taking the part of the beehive, instead of the bee. But this will not do. The vomit of bees and the spinneret-slurry of spiders are alike products of digestion; one happens to be more pleasant to observe than the other. But I have seen golden spiders spin golden webs.

The spider conceals her sources; from the look of her web you cannot tell what meal she has turned to silk. The bee flavors his honey according to the kind of flower he feeds from. But compare their work. The bee turns something pleasant in one way into something pleasant in another way; the spider turns something harmful into something useful, even sometimes beautiful—those golden webs in the morning sun glisten like blossoms.

But enough analogy. The architecture of social and participatory creativity is defined in its legal instruments, the GPL, Creative Commons, &c., as one based in the preservation of attribution. More than a legal reality, this becomes a moral principle. Young people, at least, regard themselves as being at liberty to use any kind of created material—pictures, songs, characters—where and as they please, expecting that their makers, even if they disapprove of the particular expression, will approve of the idea of reuse as a form of promotion to their ultimate financial benefit. So it may indeed be: but note the presumption that attribution is all the control a maker deserves over their work. And note that even where materials are drawn from the public domain and no legal necessity operates, attribution as a moral principle holds.

But attribution is more than courtesy: to have one's attributions in order is the passport of good work, the condition of its admittance to critical consideration.

The drawbacks of this system are two: it excludes personal experience; and it narrows the range of influences it is wise to receive, to the range of influences it is wise to admit to. The difficulty in using personal experience is that precisely the reaction that a criticism of originality most values—"Where did that come from?"—is the reaction that a criticism of attribution most deprecates, because it requires that question to be answered before criticism can begin. When the naming of influences becomes a public act, the choice of influences obeys the necessities of signaling that all public acts entail—some are in fashion, some are out of fashion, some are pedantic, some are pretentious, some are contemptible. Before the work even begins, the selection of influences becomes the first move in the tactics of presenting the work. Because you must expect your work to be tasted with the intent of discerning its influences, you must collect them out in the open, under the sunlight. What moves in the close and the dark is off-limits.

The obverse of this problem of narrowed influences is the impossibility of discrimination of influences. Once what is permissible has been agreed upon, to ignore some part of that range seems capricious and arrogant. You must take it all seriously. In education, conversation, and manners, it is a virtue to try to take everyone and everything seriously. As you overcome the instinct to scoff at what is unfamiliar or distasteful, you become more a thinking individual and less an accident of genes and community. And if your ambitions are essayistic or critical, you can stop there. But when you sit down to create something—art, music, literature, science—then you must choose: whom do you take seriously? If you cannot choose, you cannot act. You cannot have Leonardo and Warhol, Bach and Glass, Homer and Joyce, Cantor and Kronecker, Witten and Penrose, Gould and Dawkins. You may reject without disrespect: but you must choose. Where one is old and one is new, one must be obsolete, and one modern, or one humane, the other a fad or disease. When both are old one is classic, one is dated or irrelevant. When both are new, one is avant-garde, and one is bourgeois; or one is navel-gazing, the other is world-engaging. You do not even have to always make the same choice: but you must always choose. If you fail to choose, if you cultivate eclecticism to the point of indecision and deference to the point of impotence, you condemn yourself to the insubstantiation of Buridan's ass, which in the thought experiment starves for being unable to choose between two identical piles of hay. I love an army of writers for their particular excellences, and will defend them in entirety for the sake of those excellences. But when I sit down to write there are perhaps a half-dozen writers whom I can take seriously; everyone else seems ridiculous. I only demur that six is probably too many.

Consider the "mash-up." It is, defying the dictionary agon of creativity and criticism, a creative criticism; juxtapositions are not random, but either analogical or contrastive. If two analogical things are comparable, they supply understanding of each other exactly as a written analogy would. If contrastive they heighten the contrast either to absurdity or poignancy. As an art form the mash-up is not an uncalculating jeu d'esprit, not afterthought or diversion; it is tendentious and didactic. It is a popular pastime as politics are: it gives people a chance to opine, and lends to the opinion the sophistication of the underlying elements, as including Democracy or Justice in an assertion lends it the sophistication of the philosophies the words represent.

Mash-ups are not nearly as unpredictable as they should be, if they are primarily creative. Nothing technically prevents mixing parts of a dozen or a hundred songs and movies, but more than two of each would be out of order, because if it were done well the parts used would meld and the critical tension would be absent.

Those most likely to disagree with this are not the makers of mash-ups but academics with the odd, pseudo-ethnographic habit of taking Internet communities and fads far more seriously than they take themselves, of concentrating on the extremes and the fringes and ignoring the consensuses that these communities have about themselves (compare fanfiction, for example). They also are given to confounding creativity with informality—you will hear dialects, for instance, praised as "creative," as if Grimm's Law were an expression of the creativity of German speakers. Dialects indeed require creativity in the sense that they might have been otherwise; but they are not themselves creations because they had to be something and might just as easily have been something else. Creation is by definition not inevitable.

This division between creativity and creation is, in its effects and its attractions, not unlike the division between sex and reproduction. Creativity, as such, is delightful, relaxing, and consoling. But when it results in creations, these attractions disappear. Your creation preoccupies and distracts you, disrupts your sleep, fills you with doubts. Like any offspring it warps you in its gestation, enslaves you in its infancy, and grows by the life it takes from you. To nourish and raise it takes strength and time to spare. Creativity is a disposition, a faculty, a gift; creation is a vocation, a devotion, a discipline. It is absorbing and racking. One creation does not easily follow another, and too many in succession, or at once, can break a constitution and unmake a home. Even once it has some substance and independence one is weighted with an interminable responsibility for it, to find the right place and the right friends for it, to see it sent out into the world. Even then you are left exhausted, restless, and anxious. You can always grow, but you cannot always bear fruit, at least not the same kind. Even the mind needs cover crops, needs seasons to thicken its sap and sink its roots. This is not to reprove those who seek the pleasure without the responsibility; only to observe that one who, by being creative, presumes to know what it takes to create something, is a damn fool.

The middle distance

Between the present and the past there is more of the past. The discovery of this submerged past is the difference between history and hindsight. History raises it; hindsight floats over it. History, plodding, myopic, keeps its eyes on the ground, watches every step. Hindsight, presbyopic, stumbles over a fuzzy path to a past in clear focus. History is a discipline, attained by few; hindsight is a faculty, born with all.

It is easier to sympathize with the young dead than the old living. The dead return to youth: their stories are the stories of what they did with their strength; the rest is epilogue and aftermath. But we cannot help seeing the living, not for what they did, but for what they have become. Nor can the dead refuse our sympathy and admiration. We can bestow it freely, without fear of rejection—bestow it on any basis at all, even just for the pleasure of bestowal. But the living can still refuse us, still embarrass us. This understandable gap in sympathy becomes a strange gap in perception. Not perceiving the immediate supports of history—not seeing the wires that hold us up—we see history as if we floated free from it, as if we were free to choose the antecedents we like, the lessons we want to hear; we remain part of hindsight's audience, not history's procession.

The quotable past, the repeatable past, the restorable past—these are the pasts of hindsight. The past, like wood, must dry out to be useful. Green, wet history is easiest to shape—consider how many shapes the legacy of, say, a living former President can be carved into, and how fast the changes can be made—but as it dries it warps, cracks, and splits. Memoir and journalism are hothouse arts, coercing early and transient blooms. Lasting work can only be done in dry wood, once the life has left it.

Remember with Bacon that it is we (we much more than he) who are the ancients, not those who lived innocently in the youth of the world. We have had a long time to learn. It is not enough to quote Confucius, Jesus, Muhammad; we must also be able to say how present evils came to be despite the Sage Emperors and the Apostles and the Best Generations.

We are nearsighted when we look forward, farsighted when we look backward. Renovators and restorers always favor one past over another. The fans of old music and old movies are fans of the music and movies of their grandfathers' day, not their fathers'. Anglospherically, to the Victorians the Regency was more remote than the Elizabethans; to the Gilded Age the Civil War was more remote than the Revolution; to the Elizabethans and Founders alike, the true past was Roman. William Shakespeare is less remote to me than Samuel Johnson; but to see Shakespeare quoted in Johnson's dictionary opens a chasmal distance under my feet.

The past is easier for us to understand than the present, not because the past was simpler than the present, but because we do not have to convince ourselves that it is worth living in. We can face nakedly what was unbearable in it. No one not uncommonly cruel can judge a life wasted until it is over; and so true historical judgment is reserved until what it studies is over, not just in its events, but in its needful illusions.

Both hindsight and history can connect the present and the past; but only history connects the past with the past. The past is full of the seeds of the present; but it also held the seeds of what was yet the future. The Germany of the Bundesrepublik was implicit in the Germany of Goethe; but it was there beside the Germany of the last Kaiser, and the Germany of Weimar, and the Germany of the last Reichskanzler. All the doors were open, and no one knew which ones opened on dead ends. We have shut these doors with such great effort that it seems almost sacrilegious to open them again for any reason. It is easy to imagine the past with doors closed; it is very hard to imagine the past with doors open. But this is the difference between history and hindsight.