Departments

Art vs. Life

Sometimes art frightens me. Sometimes I wonder what art is taking to match what it gives. Surely talk was faster and more excursive before recording; surely clothing was more splendid and plumed before photography; surely gesture and pose were quicker and more lifelike before movies. Maybe worship was more devoted, before idols and icons; maybe love was stronger, memory keener, regret fiercer before the portrait; maybe voices were softer, birdsong sweeter, before music. Art universalizes particular experience, delivers it across space, time, and language. But what we receive as if transmitted, might only be lost; what we receive as if preserved, might only be embalmed; what we receive as if translated, might only be parodied. How are we, art-shrouded, art-addled, to know any better? When every sense bends to its particular art, do we more watch than see, more listen than hear, more savor than taste? At best art stands between us and life; at worst it supplants our lives. What could Arthur Henry Hallam have done with his life to match In Memoriam? – where the use of grief in art prevents us from sharing that grief, we who so value the expression. I fear art and I love it; I fear it as I love it, because love is power given, and power brings abuse of power. So many minds are lost to art, full of images and stories they do not even recognize as art, puppets to old, ingrown art (they call it common sense). I study art, value it, and judge it not to pass life but to save life: because to study, value, and judge art is the only defense against it.

Cognitive psychology 5/5

If all this were true it would have two consequences. First, it would require a strict distinction between what a person reports their perception to be and what that perception actually is. The act of perceiving a perception in order to describe or render it would be understood as a skill, subject to cultivation. What cognitive psychology identifies as biases of human perception would be no more than an untrained clumsiness. And second, it would regard the methods that cognitive psychology identifies for influencing human behavior as weaknesses to be compensated for by education, not as intrinsic handles to pull in a desirable direction.

All this essaying is futile, I know. Even if I were right, no one would ever call me right, except in retrospect; and I am very likely wrong. In doubting a large field of scientific work I am certain to sound like a crank. I can only note that I am not nailing up theses; and that if I am wrong I am only hurting myself.

Postscript 2014

Since I wrote this essay I have participated, as a subject, in several experiments in cognitive psychology. In consequence, I now regard cognitive psychology as a pseudoscience.

Here is the problem: in order to avoid the appearance of shirking, the subject has no choice but to express preferences in the absence of preference and beliefs in the absence of belief. Even without financial stakes, ordinary social conventions compel the subject, in order to be kind to the experimenter, to deceive them.

It goes like this. The experimenter asks, “Does this make me look fat?” The subject says, “No.” And the experiment concludes: “Human beings are incapable of accurate estimation of one another’s weight.” Soon books are written about “cognitive weight bias.” “For our ancestors,” the editorial begins, “underestimating one another’s weight was an important survival strategy.”

This is what cognitive psychology is, and it is all cognitive psychology is: the heedless elaboration of a social solecism. In the experiments of cognitive psychology, the only true psychologists are the subjects.

Cognitive psychology 4/5

In the laboratory this fine distinction can only split hairs. I should supply a larger example that will bear the division better.

Drawing is self-reporting of a kind. Proposition: if you accept the standards that cognitive psychology uses to judge the self-reporting of perceptions, then you must also accept that people who draw perceive the world as they draw it to be; which is absurd, because they would not then be fit to live. When you draw a stick figure, that does not justify the conclusion that stick figures are what you see.

Four objections present themselves.

1. If drawing is a manual skill, then non-artists may simply lack enough control to make the pencil do what they want.

There is certainly room for manual skill in drawing; but it is not required. Anyone who can negotiate the angles and curves of the alphabet has all the control required for a good sketch.

2. Non-artists might really see the world as they draw it, but still be able to function because seeing and drawing differ in the same way as recognition and recollection.

The problem is that non-artists often do worse attempting to draw from life than from memory. Partly this is because drawing from memory spares them the awkward necessity of refocusing between paper and subject, which is a distinct skill. If you tell someone to draw a leaf from memory the result is usually recognizable; if you give someone a leaf to draw you may get a tracing, or something that looks like the coastline of an imaginary island.

Allow me to employ, for the moment, a simplified model of how memory works. Sustaining the computational metaphor, recall is presumably something like a value stored as a string, and recognition is something like a value stored as a hash.

The difference is this. When you log into a (well-designed) site, the site has no record of your password as such. Instead, the site transforms your password using a mathematical operation which is easy to apply but difficult to reverse, and compares the result, the hash, with a hash that it has on record. The site recognizes your password without recalling it. (This is important because it ensures that if someone were to crack the database they could obtain only the useless hashes, not the passwords themselves.)
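To make the metaphor concrete, here is a minimal sketch in Python of recognition without recall. It is an illustration only, not a recipe – a real site would use a salted, deliberately slow hash rather than bare SHA-256 – but it shows the essential point: the stored digest can confirm a password without ever being able to reproduce it.

import hashlib

def make_record(password: str) -> str:
    # Store only the hash: easy to compute, practically impossible to reverse.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def check(candidate: str, record: str) -> bool:
    # Recognition: hash the candidate and compare digests;
    # the password itself is never stored.
    return hashlib.sha256(candidate.encode("utf-8")).hexdigest() == record

record = make_record("correct horse battery staple")
print(check("correct horse battery staple", record))  # True: recognized
print(check("incorrect horse", record))               # False: rejected
# There is no function that takes `record` and returns the password:
# the site recognizes your password without recalling it.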

To distinguish recall and recognition we could thus suppose that the mind contains two ideas of the leaf: one an ideal, schematic, low-density Leaf-concept that can easily be handled in working memory when thinking about leaves – good enough to figure out how much pressure to use in raking and how big a pile of leaves will fit in one bag – and a separate Leaf-hash, which the brain can readily compare particular objects to in order to recognize them as leaves. Thus, when you attempt to draw a leaf, all you can draw is the Leaf-concept, because that is all you really know about leaves. But, through some sort of hashing, you can still recognize leaves that do not match the concept.

How, then, could anyone ever draw a leaf at all? The artist does not add parameters to the Leaf-value; indeed the artist ceases to have any Leaf-value at all. Artists draw not things, but configurations of color and shadow.

But if this distinct Leaf-value – the leaf as recalled – is useless either to recognize or to depict a leaf, what good is it?

The Leaf-value might somehow guide the process of recognition, suggesting that something might be a leaf and directing the brain to check. But really, what does the bright-green oak-maple hybrid I expect my English-speaking readers might draw have to do with leaves? With leaves green, red, yellow, brown, anthocyaninic blue and black, leaves from grass blades to tropical aroid elephant ears?

Either the brain harbors a Leaf-value for no other purpose than to thwart artists – not even in dreams do trees have such unleaflike leaves – or this distinction of recognition and recall is false, and this parody of a leaf is cover for the inability to consciously perceive how we consciously perceive a leaf.

3. Non-artists do not perceive the world as they draw it, but the way they draw distorts what they see, the same way memory in general distorts what they experience.

This is a better objection because there is an obvious comparison between how memory and drawing both exaggerate emotionally significant aspects of perception; between how memory artfully fits experience to narrative, and how drawing unartfully fits vision to outline.

Consider outlines. Outlines exist nowhere but in non-artists’ drawings. Nature defies outline; vision nowhere finds it. Nonetheless when non-artists draw, invariably they first attempt an outline – even cave painters, who were artists when they drew, still loved to outline their hands on the rock. Outlines are not incompatible with art – the Egyptians made high art of shaded outlines – but they are prior to art. Abstract outlines do not depict anything: their value is that, being abstractions, they preserve the symmetries and topologies of what they anonymize – they are mathematical in character and, for simple shapes, the origin of mathematics in the promise of geometry.

The comparison with memory and narrative is obvious. We need not invoke the light-cone diagrams of physics to understand that in anything that happens, an imponderable diversity of causes conspire, and that for anything that happens, an innumerable diversity of effects result. Every event is part of the fabric of the whole world.

Narrative, like outline, is unreal but useful; patterns of events, like shapes, though potentially infinite in variety, tend to approximate simple forms with predictable properties.

But drawing cannot distort information in the same way as memory. However badly someone draws, they do not ever act as if they see the world that way. To walk or sit, to touch or pick up, proves that a non-artist does not bungle seeing the world in the same way as rendering it. Even in dreams no one sees as badly as they draw. The brain does not retouch value into outline before storing it; the reduction to outline is a loss within the brain.

4. If someone does not draw in a style that resembles Western art, that person is not therefore a non-artist. High cultures elevate as art what Westerners might regard as mistakes. Non-artists may not exist. Could it be that everyone distorts their perceptions artistically when they draw, most in more dramatic ways than the subtle ones traditionally valorized in the West?

Western art has its artificial conventions, but to say that photography and the kind of art that obeys the same laws of optics and projection are essentially cultural conventions requires more gall than I have. Human eyes only work one way. The anecdote says that a pygmy brought out of the forest could not tell buffaloes on the horizon from insects. Assume the anecdote is true; what does it prove? It proves that there exists such a thing as a myopic pygmy. Or should we believe that pygmies never look up into the crowns of trees? That they cannot tell that the bird overhead is the same as the bird in the bush? To prize optic validity as artistic quality is cultural; but the validity itself is physiological.

So if for the first 3500 years or so of human history no culture or civilization held the goal of art to be to represent just what was seen, then of course we are readily distracted by other goals, and must be induced by long training to give them up for this one particular goal of realism.

But I am unwilling to credit that the artistic way is ever the easy way. The artists of the cave walls, of Egypt and Sumer, of India and Persia, were no lazier than the artists of Venice, Florence, and Amsterdam. They were not primitive; they were not innocent. Assimilating natural errors to the artistic traditions they happen to resemble sets Western art on a more ridiculous pedestal than any academy ever proposed.

Too, perspective and foreshortening are not utterly alien to the brain; I suspect that even those whose arts reject these values do in fact dream with them.

Cognitive psychology 3/5

The formal scientific method abstracts a discovery from the experiment that made it as early and thoroughly as possible – by suppressing the personalities of the experimenters, by using standardized equipment and methods of analysis, by ensuring reproducibility and undertaking reproduction. All these mechanisms are relatively weak in cognitive psychology. Success results in personal advancement, the experiments are themselves the instruments, and the expense and inconvenience of recruitment, together with the backlog of experiments yet to be done, make reproducing experiments a last resort. But psychology has never worked the formal way – try it, and you get Behaviorism – and this way seems, for the most part, to work.

Cognitive psychology had to fight hard to free itself from Behaviorism. Here we come to the problem. Behaviorism rejected introspection; cognitive psychology accepts it; but it exchanges uncritical rejection for uncritical acceptance. I think that there is something that cognitive psychology gets fundamentally wrong about introspection: it assumes that our perceptions resemble our perception of our perceptions.

(Nominally cognitive psychology rejects “introspection” but accepts “self-reporting” – I do not understand the difference.)

For example: a form of experiment is to present a list of words on a common theme for memorization. The list omits some particularly obvious entry that could be expected to occur with the rest. When memorizers repeat the list back, they often supply this absent word. The obvious conclusion, the conclusion that cognitive psychology draws, is that the memorizers have perceived the absent word; that a subconscious process – subconscious in the sense of subroutine – inserts the extra word into the memorizer’s perception of the list.

But this may not be so. Precisely because the list is simple it obscures the distinction between perception and perception-of-perception. There is another interpretation. What if the memorizers do not perceive the extra word? What if, instead, in reproducing the list, the memorizers perceive their own perception of the list in an incorrect, yet conscious way? This multiplies entities, but I think it ultimately provides the more parsimonious explanation.

Cognitive psychology 2/5

Before I attempt a reasoned argument I want to sketch four broad points of general discomfort with cognitive psychology.

1. It is difficult to pay sustained attention to cognitive psychology without feeling that the point of many experiments is not the paper, but the press release. When we hear “science writer”, we imagine journalists trawling scientific publications for stories with headline appeal. But this is not how it works. Institutions (with what degree of involvement from researchers I do not know) push press releases to sites like EurekAlert!; journalists may check up on them to add human interest, but the transition between experiment and news item happens inside the institution. As a regime of incentives, this strikes me as perverse.

2. Cognitive psychology is based on the model of the brain as a computer; but this, properly understood, is a trivial statement, nearly a tautology. The model treats a computer as a kind of machine, something like a clock or a car; a sophisticated machine, certainly, but just a machine; just a machine, and the brain just another example.

Cognitive psychology began when the psychologists of the 1960s saw computers and found in them an analogy for the mind. Unfortunately this is still true: cognitive psychology still understands the mind as a computer of the 1960s, complete with fMRI blinkenlights.

But this is not what a computer is. The history of computers is not a history of invention; it is a history of discovery. We did not invent computers; we discovered computation. Computation is an aspect of nature, something like heat or gravity, a property of all sufficiently complex systems. If something is not a computer, it is less than a computer. So of course the brain is a computer; and—?

3. Cognitive psychology and ethics in psychological research are roughly coeval. The obvious suspicion is that we discovered cognitive psychology because it was the only psychology we could discover ethically. In ages when the human form was held sacred, even after death, anatomy without dissection went badly wrong. Medieval anatomy reflected not the body, but medieval ethics. Does psychology reflect the mind, or does it reflect the ethics that direct our examination of the mind? Psychologists cannot do harm, so they find no harm in the mind; psychologists cannot deceive, so they find biases, not gullibility; psychologists must have volunteer consent, so they find the mind sociable and cooperative.

4. Imagine you are an experimental subject in cognitive psychology. After the experiment, you will be paid, but only if you appear to have taken the experiment seriously. You need the money; why else would you be here?

You are given a series of tasks: information to evaluate, items to rate, decisions to make. What do you prefer? Do you agree strongly? Disagree strongly? Answering honestly is out of the question, because it would mean giving the same answer every time – whatever stands for “I don’t care.”

What you do care about is getting paid. You watch for gotchas; you check and recheck the wording for clues about what you're supposed to do; you ransack your psyche for reactions that you can exaggerate into preferences and ratings; and, failing all else, you guess. That's what they used to tell you about tests, after all; better to guess than not to answer. And you have to answer, whatever the consent form says. It's expected.

In the end you collect your money, leave, buy groceries. Meanwhile the experimenter gloats over the data, having once again demonstrated how irrational people are, how little they act in their own interest.

Cognitive psychology 1/5

So pervasive have the claims of cognitive psychology become, so often do I encounter the rhetoric in which the introductory anecdote takes the form of a cognitive psychology experiment, that I find it necessary to decide just what I think of it, and where I stand. I do so best by doing so publicly.

I admire cognitive psychology. It is the most coherent and fruitful paradigm in psychology. The nosology of biases, by itself, might be the greatest act of thinking about thinking since the Greek achievement of logic.

Of course, like logic, the doctrine of biases is subject to misuse – the more so because it is new and we have little experience of its limitations. In logic, the spirit of catching out logical fallacies trivializes itself in the principle of the fallacy of fallacies – the error of assuming that because an argument is fallacious, its conclusion is wrong. Now, because biases are innate, but the recognition of biases is acquired, the use in argument of accusations of bias cannot be closed by a neat “bias of biases”; but something equivalent is required to avoid the error of rejecting conclusions not because they are wrong, but because they are biased – which, in itself, is no argument.

Biases are everywhere. But the very pervasiveness of cognitive psychology risks creating a new bias – a meta-bias, a confusion between biased and wrong.

Reading

We readers – some of us are compelled to seek the true way of reading, the one discipline that suits all books. The attractions are many. There is a sense of belonging in knowing what kind of reader you are; there is confidence in knowing exactly what you will get from a book; and there is the general attraction of any discipline in life – the focus that comes from the neglect of any concerns beyond that focus.

But we readers, after all, are people who choose to spend unusual proportions of our time alone. The value of the independence we must have to read at all holds for our choice of how to read. We do not read to rehearse an opinion, but to animate it – to produce some motion in it analogous to the biology of growth, or healing, or even decay. (Sometimes we read not to reinforce our opinions, but to escape them.) And sometimes a book should produce the same motion in the idea of reading itself.

The value of reading a book for instruction, instead of running a search; the value of reading a book for diversion, instead of participating in the lives of friends – the value is that the writer, in filling up so huge a thing as even a short book is, must call upon and involve their whole experience and sense of the world, must turn themselves inside-out in such a way that someone else can put them on.

If reading is to be creative – and why not let it be, if it can? – then it must be by a parallel inversion, a counterpoint to writing in which reading calls on and involves you as writing does; which must therefore be subject to change as your experience of the world changes. And for a reader, experience must include reading itself.

All of which is to say that I try not to worry about what kind of reader I am or about how I should read. I change with reading, and I do not even know it until I re-read a book after a long interval and discover how different a reader I have become.

Pseudoscience

Because historians do not understand science, scientists write their own history. So when the perimeters of science change – when what appeared to be a science turns out to have been a pseudoscience – the dead or, worse, retired scientists who pursued it are sorted ex post facto. The ones who anticipated the change remain scientists; the ones who fell for it turn out to have been pseudoscientists. Of course, they were pseudoscientists all along – everyone knew it – but that media, that irresponsible media, they were the ones who made it seem otherwise – we scientists always knew better; it’s your fault for being gullible, you cargo cultists.

You can watch this happening to string theory. In one direction or the other expect to see it in climatology. Let us rehearse the explanations in advance. Someday you may recognize one or the other as a news item or a footnote in history.

Case A:

Despite the overwhelming evidence for the anthropogenic origin of global warming, a movement of so-called skeptics, organized through the resources of corporations and political parties whose interests were threatened by the urgent measures the situation required, was able to delay action until the forces behind climate change had become irreversible. Certain scientists, some through misplaced but sincere conviction, but most because it was convenient and attention-getting, continued to cast doubt on the evidence even after the scientific consensus was incontrovertible. Nonetheless, none of the best scientists failed to see reason, and it is simply false to assert, as some have, that scientists themselves were at fault.

Case B:

In the tense political atmosphere of the early 21st century it was only natural that movements were eager to enlist scientific evidence to support their policies. Given the apocalyptic mood of the time – a quick look at the box-office returns for the first decade of the century will show that the impending end of the world was a cultural commonplace – it is unsurprising that what developed was a superficially scientific vision of the apocalypse. The media too were part of this zeitgeist, and they freely exaggerated a concern many scientists had with the unknown effects of carbon dioxide, and certain alarming high-level trends, into a movement complete with speeches, rallies, and platforms. Nonetheless, none of the best scientists failed to see reason, and it is simply false to assert, as some have, that scientists themselves were at fault.

I am not proposing a debate. I am not trying to convince you of anything except the irrelevance of your convictions. Climate change is just a convenient subject.

(For the record my view is: better safe than sorry. The absence of anthropogenic global warming would be harder to explain than its presence; I therefore am in favor of anything short of irreversible geo-engineering.

Though I do admit to disgust for those who condemn “economists’ reliance on models” with one fork of their tongues, while the other extols “the proven science of climate change” – as if modeling the economy were any harder than modeling the climate.)

I am sarcastic because I am disgusted. But I am not attacking science. I trust that what has been declared a pseudoscience is so. In this retrospect science is as good as infallible. But I dispute the hypocrisy which would pretend that pseudoscience has never entered the mainstream of science, or that if it ever had, it would have been due to outside meddling.

“No true scientist would have participated in X; therefore any so-called scientist who participated in X was not a true scientist.” The whole history of the relation of science and pseudoscience is constructed with this tautology.

True, science is not just a vocation, but also an affiliation, a group, and therefore, like any other group with a purpose, compromised by loyalties and solidarities. But the more acute problem is that science and pseudoscience are not dichotomous. Degrees exist between them.

The word protoscience has been advanced for the pre-scientific pursuits that led into sciences – as astrology led into astronomy, as alchemy led into chemistry, as doctoring led into medicine. There may be many more intermediate degrees of this kind, but I propose only one. Some scientific pursuits are neither sciences nor pseudosciences, but placeholder sciences. The textbook scientific method expects, within a science, that observation, hypothesis, and theory will follow in order. But in much scientific work the science itself is a hypothesis. The first question is not, “What law governs this phenomenon?” but “Is this a phenomenon at all?”

Most -ologies are not really fields at all, but gambits. In science a field does not arrive and then demand methods, subjects, a center and journals; instead the methods, subjects, the center and the journals are how hopeful scientists attempt to bootstrap a new science into being. In the end the attempt either succeeds as a science, or fails as a pseudoscience; but in the meantime it is neither – it invites a science and clears a space for it, saves it a seat. It is a placeholder.

For scientists of a later generation to judge a placeholder science according to its final result is unfair. The scientists who failed were not cranks; the scientists who succeeded were not visionaries. To suppose they could have known better in advance is to posit a faculty which, if it did exist, would make actual science superfluous. A scientist can no more anticipate a pseudoscience than a computer can anticipate a halting problem.

All sciences begin as gambits. The sooner we recognize this, the sooner we can avoid misplacing the faith due a mature science in its placeholder; but more importantly, the sooner we can begin not just to accommodate such gambits, but to encourage them.