Wednesday, March 08, 2017

How Should We Then Live?

The philosophy of Arthur Schopenhauer, which we’ve been discussing for several weeks now, isn’t usually approached from the angle by which I’ve been approaching it—that is, as a way to talk about the gap between what we think we know about the world and what we actually know about it. The aspect of his work that usually gets all the publicity is the ethical dimension.

That’s understandable but it’s also unfortunate, because the ethical dimension of Schopenhauer’s philosophy is far and away the weakest part of it. It’s not going too far to say that once he started talking about ethics, Schopenhauer slipped on a banana peel dropped in his path by his own presuppositions, and fell flat on his nose. The banana peel in question is all the more embarrassing in that he spent much of the first half of The World as Will and Representation showing that you can’t make a certain kind of statement without spouting nonsense, and then turned around and based much of the second half on exactly that kind of statement.

Let’s review the basic elements of Schopenhauer’s thinking. First, the only things we can experience are our own representations. There’s probably a real world out there—certainly that hypothesis explains the consistency of our representations with one another, and with those reported by (representations of) other people, with less handwaving than any other theory—but all the data we get from the world out there amounts to a thin trickle of sensory data, which we then assemble into representations of things using a set of prefab templates provided partly by our species’ evolutionary history and partly by habits we picked up in early childhood. How much those representations have to do with what’s actually out there is a really good question that’s probably insoluble in principle.

Second, if we pay attention to our experience, we encounter one thing that isn’t a representation—the will. You don’t experience the will, you encounter its effects, but everything you experience is given its framing and context by the will. Is it “your” will? The thing you call “yourself” is a representation like any other; explore it using any of at least three toolkits—sustained introspection, logical analysis, and scientific experimentation—and you’ll find that what’s underneath the representation of a single self that chooses and wills is a bundle of blind forces, divergent and usually poorly coordinated, that get in each other’s way, interfere with each other’s actions, and produce the jumbled and self-defeating mess that by and large passes for ordinary human behavior.

Third, the point just made is difficult for us to accept because our culture prefers to think of the universe as consisting of mind and matter—more precisely, active, superior, personal mind and passive, inferior, impersonal matter. Schopenhauer pokes at both of these concepts and finds them wanting. What we call mind, from his perspective, is simply one of the more complex and less robust grades of will—it’s what happens when the will gets sufficiently tangled and bashed about that it picks up the habit of representing a world to itself, so that it can use that as a map to avoid the more obvious sources of pain. Matter is a phantom—an arbitrarily defined “stuff” we use to pretend that our representations really do exist out there in reality.

Fourth, since the only things we encounter when we examine the world are representations, on the one hand, and will in its various modes on the other, we really don’t have any justification for claiming that anything else actually exists. Maybe there are all kinds of other things out there in the cosmos, but if all we actually encounter are will and representations, and a description of the cosmos as representation and will makes sense of everything we meet with in the course of life, why pile up unnecessary hypotheses just because our cultural habits of thought beg for them?

Thus the world Schopenhauer presents to us is the world we encounter—provided that we do in fact pay attention to what we encounter, rather than insisting that our representations are realities and our culturally engrained habits of thought are more real than the things they’re supposed to explain. The difficulty, of course, is that imagining a universe of mind and matter allows us to pretend that our representations are objective realities and that thoughts about things are more real than the things themselves—and both of these dodges are essential to the claim, hammered into the cultural bedrock of contemporary industrial society, that we and we alone know the pure unvarnished truth about things.

From Schopenhauer’s perspective, that’s exactly what none of us can know. We can at best figure out that when this representation appears, that representation will usually follow, and work out formal models—we call these scientific theories—that allow us to predict, more or less, the sequence of representations that appear in certain contexts. We can’t even do that much reliably when things get complex enough; at that point we have to ditch the formal models and just go with narrative patterns, the way I’ve tried to do in discussing the ways that civilizations decline and fall.

Notice that this implies that the more general a statement is, the further removed it is from that thin trickle of sensory data on which the whole world of representations is based, and the more strictly subjective it is. That means, in turn, that any value judgment applied to existence as a whole must be utterly subjective, an expression of the point of view of the person making that judgment, rather than any kind of objective statement about existence itself.

There’s the banana peel on which Schopenhauer slipped, because having set up the vision of existence I’ve just described, he turned around and insisted that existence is objectively awful and the only valid response to it for anyone, anywhere, is to learn to nullify the will to live and, in due time, cease to be.

Is that one possible subjective response to the world in which we find ourselves? Of course, and some people seem to find it satisfying. Mind you, the number of them that actually go out of their way to cease existing is rather noticeably smaller than the number who find such notions pleasing in the abstract. Schopenhauer himself is a helpful example. Having insisted in print that all pleasure is simply a prelude to misery and an ascetic lifestyle ending in extinction is the only meaningful way to live, he proceeded to live to a ripe old age, indulging his taste for fine dining, music, theater, and the more than occasional harlot. I’m not sure how you’d translate “do what I say, not what I do” into classical Greek, but it would have made an appropriate epigraph for The World as Will and Representation.

Now of course a failure to walk one’s talk is far from rare among intellectuals, especially those of ascetic leanings, and the contrast between Schopenhauer’s ideals and his actions doesn’t disprove the value of the more strictly epistemological part of his work. It does, however, point up an obvious contradiction in his thinking. Accept the basic assumptions of his philosophy, after all, and it follows that the value judgments we apply to the representations we encounter are just as much a product of our own minds as the representations themselves; they’re not objective qualities of the things we judge, even though we’re used to treating them that way.

We treat them that way, in turn, because for the last two millennia or so it’s been standard for prophetic religious traditions to treat them that way. By “prophetic religious traditions” I mean those that were founded by individual persons—Gautama the Buddha, Jesus of Nazareth, Muhammad, and so on—or were reshaped in the image of such faiths, the way Judaism was reshaped in the image of the Zoroastrian religion after the Babylonian captivity. (As Raphael Patai pointed out in quite some detail a while back in his book The Hebrew Goddess, Judaism wasn’t monotheistic until the Jews picked up that habit from their Zoroastrian Persian liberators; quite a few other traits of post-Exilic Judaism, such as extensive dietary taboos, also have straightforward Zoroastrian origins.)

A range of contrasts separate the prophetic religions from the older polytheist folk religions that they supplanted over most of the world, but one of the crucial points of difference is in value judgments concerning human behavior—or, as we tend to call them these days, moral judgments. The gods and goddesses of folk religions are by and large no more moral, or interested in morality, than the forces of nature they command and represent; some expect human beings to maintain certain specific customs—Zeus, for example, was held by the ancient Greeks to punish those who violated traditional rules of hospitality—but that was about it. The deities central to most prophetic religions, by contrast, are all about moral judgment.

The scale of the shift can be measured easily enough from the words “morals” and “ethics” themselves. It’s become popular of late to try to make each of these mean something different, but the only actual difference between them is that “morals” comes from Latin and “ethics” comes from Greek. Back in classical times, though, they had a shared meaning that isn’t the one given to them today. The Latin word moralia derives from mores, the Greek word ethike from ethoi, and mores and ethoi both mean “customs” or “habits,” without the language of judgment associated with the modern words.

To grasp something of the difference, it’s enough to pick up a copy of Aristotle’s Nicomachean Ethics, by common consent the most important work of what we’d now call moral philosophy that came out of the ancient world. It’s not ethics or morals in any modern sense of the word; it’s a manual on how to achieve personal greatness, and it manages to discuss most of the territory now covered by ethics without ever stooping to the kind of moral denunciation that pervades ethical thought in our time.

Exactly why religion and morality got so thoroughly conflated in the prophetic religions is an interesting historical question, and one that deserves more space than a fraction of one blog post can provide. The point I want to address here is the very difficult fit between the sharp limits on human knowledge and the sweeping presuppositions of moral knowledge that modern societies have inherited from the age of prophetic religions. If we don’t actually know anything but our representations, and can draw only tentative conclusions from them, do we really know enough to make sweeping generalizations about good and evil?

The prophetic religions themselves actually have a workable response to that challenge. Most of them freely admit that human beings don’t have the capacity to judge rightly between good and evil without help, and go on to argue that this is why everyone needs to follow the rules set down in scripture as interpreted by the religious specialists of their creed. Grant the claim that their scriptures were actually handed down from a superhumanly wise source, and it logically follows that obeying the moral rules included in the scriptures is a reasonable action. It’s the basic claim, of course, that’s generally the sticking point; since every prophetic religion has roughly the same evidence backing its claim to divine inspiration as every other, and their scriptures all contradict one another over important moral issues, it’s not exactly easy to draw straightforward conclusions from them.

Their predicament is a good deal less complex, though, than that of people who’ve abandoned the prophetic religions of their immediate ancestors and still want to make sweeping pronouncements about moral goodness and evil. It’s here that the sly, wry, edgy voice of Friedrich Nietzsche becomes an unavoidable presence, because the heart of his philosophy was an exploration of what morality means once a society can no longer believe that its tribal taboos were handed down intact, and will be enforced via thunderbolt or eternal damnation, by the creator of the universe.

Nietzsche’s philosophical writings are easy to misunderstand, and he very likely meant that to be the case. Where Schopenhauer proceeded step by step through a single idea in all its ramifications, showing that the insight at the core of his vision makes sense of the entire world of our experience, Nietzsche wrote in brief essays and aphorisms, detached from one another, dancing from theme to theme. He was less interested in convincing people than in making them think; each of the short passages that make up his major philosophical works is meant to be read, pondered, and digested on its own. All in all, his books make excellent bathroom reading—and I suspect that Nietzsche himself would have been amused by that approach to his writings.

The gravitational center around which Nietzsche’s various thought experiments orbited, though, was a challenge to the conventional habits of moral discourse in his time and ours. For those who believe in a single, omniscient divine lawgiver, it makes perfect sense to talk about morals in the way that most people in his time and ours do in fact talk about them—that is to say, as though there’s some set of moral rules that are clearly set out and incontrovertibly correct, and the task of the moral philosopher is to badger and bully his readers into doing what they know they ought to do anyway.

From any other perspective, on the other hand, that approach to talking about morals is frankly bizarre. It’s not just that every set of moral rules that claims to have been handed down by the creator of the universe contradicts every other such set, though of course this is true. It’s that every such set of rules has proven unsatisfactory when applied to human beings. The vast amount of unnecessary misery that’s resulted from historical Christianity’s stark terror of human sexuality is a case in point, though it’s far from the only example, and far from the worst.

Yet, of course, most of us do talk about moral judgments as though we know what we’re talking about, and that’s where Nietzsche comes in. Here’s his inimitable voice, from the preface to Beyond Good and Evil, launching a discussion of the point at issue:

“Supposing truth to be a woman—what? Is the suspicion not well founded that all philosophers, when they have been dogmatists, have had little understanding of women? That the gruesome earnestness, the clumsy importunity with which they have hitherto been in the habit of approaching truth have been inept and improper means for winning a wench? Certainly she has not let herself be won—and today every kind of dogmatism stands sad and discouraged.”

Nietzsche elsewhere characterized moral philosophy as the use of bad logic to prop up inherited prejudices. The gibe’s a good one, and generally far more accurate than not, but again it’s easy to misunderstand. Nietzsche was not saying that morality is a waste of time and we all ought to run out and do whatever happens to come into our heads, from whatever source. He was saying that we don’t yet know the first thing about morality, because we’ve allowed bad logic and inherited prejudices to get in the way of asking the necessary questions—because we haven’t realized that we don’t yet have any clear idea of how to live.

To a very great extent, if I may insert a personal reflection here, this realization has been at the heart of this blog’s project since its beginning. The peak oil crisis that called The Archdruid Report into being came about because human beings have as yet no clear idea how to get along with the biosphere that supports all our lives; the broader theme that became the core of my essays here over the years, the decline and fall of industrial civilization, shows with painful clarity that human beings have as yet no clear idea how to deal with the normal and healthy cycles of historical change; the impending fall of the United States’ global empire demonstrates the same point on a more immediate and, to my American readers, more personal scale. Chase down any of the varied ramblings this blog has engaged in over the years, and you’ll find that most if not all of them have the same recognition at their heart: we don’t yet know how to live, and maybe we should get to work figuring that out.

***
I’d like to wind up this week’s post with three announcements. First of all, I’m delighted to report that the latest issue of the deindustrial-SF quarterly Into the Ruins is now available. Those of you who’ve read previous issues know that you’re in for a treat; those who haven’t—well, what are you waiting for? Those of my readers who bought a year’s subscription when Into the Ruins first launched last year should also keep in mind that it’s time to re-up, and help support one of the few venues for science fiction about the kind of futures we’re actually likely to get once the fantasy of perpetual progress drops out from under us and we have to start coping with the appalling mess that we’ve made of things.

***
Second, I’m equally delighted to announce that a book of mine that’s been out of print for some years is available again. The Academy of the Sword is the most elaborate manual of sword combat ever written; it was penned in the early seventeenth century by Gerard Thibault, one of the greatest European masters of the way of the sword, and published in 1630, and it bases its wickedly effective fencing techniques on Renaissance Pythagorean sacred geometry. I spent almost a decade translating it out of early modern French and finally got it into print in 2006, but the original publisher promptly sank under a flurry of problems that were partly financial and partly ethical. Now the publisher of my books Not the Future We Ordered and Twilight’s Last Gleaming has brought it back into print in an elegant new hardback edition. New editions of my first two published books, Paths of Wisdom and Circles of Power, are in preparation with the same publisher as I write this, so it’s shaping up to be a pleasant spring for me.

***
Finally, this will be the last post of The Archdruid Report for a while. I have a very full schedule in the weeks immediately ahead, and several significant changes afoot in my life, and won’t be able to keep up the weekly pace of blog posts while those are happening. I’m also busily sorting through alternative platforms for future blogging and social media—while I’m grateful to Blogger for providing a free platform for my blogging efforts over the past eleven years, each recent upgrade has made it more awkward to use, and it’s probably time to head elsewhere. When I resume blogging, it will thus likely be on a different platform, and quite possibly with a different name and theme. I’ll post something here and on the other blog once things get settled. In the meantime, have a great spring, and keep asking the hard questions even when the talking heads insist they have all the answers.

Wednesday, March 01, 2017

The Magic Lantern Show

The philosophy of Arthur Schopenhauer, which we’ve been discussing for the last three weeks, was enormously influential in European intellectual circles from the last quarter of the nineteenth century straight through to the Second World War.  That doesn’t mean that it influenced philosophers; by and large, in fact, the philosophers ignored Schopenhauer completely. His impact landed elsewhere: among composers and dramatists, authors and historians, poets, pop-spirituality teachers—and psychologists.

We could pursue any one of those and end up in the place I want to reach.  The psychologists offer the straightest route there, however, with useful vistas to either side, so that’s the route we’re going to take this week. To the psychologists, two closely linked things mattered about Schopenhauer. The first was that his analysis showed that the thing each of us calls “myself” is a representation rather than a reality, a convenient way of thinking about the loose tangle of competing drives and reactions we’re taught to misinterpret as a single “me” that makes things happen. The second was that his analysis also showed that what lies at the heart of that tangle is not reason, or thinking, or even consciousness, but blind will.

The reason that this was important to them, in turn, was that a rising tide of psychological research in the second half of the nineteenth century made it impossible to take seriously what I’ve called the folk metaphysics of western civilization: the notion that each of us is a thinking mind perched inside the skull, manipulating the body as though it were a machine, and now and then being jabbed and jolted by the machinery. From Descartes on, as we’ve seen, that way of thinking about the self had come to pervade the western world. The only problem was that it never really worked.

It wasn’t just that it did a very poor job of explaining the way human beings actually relate to themselves, each other, and the surrounding world, though this was certainly true. It also fostered attitudes and behaviors that, when combined with certain attitudes about sexuality and the body, yielded a bumper crop of mental and physical illnesses. Among these was a class of illnesses that seemed to have no physical cause, but caused immense human suffering: the hysterical neuroses.  You don’t see these particular illnesses much any more, and there’s a very good reason for that.

Back in the second half of the nineteenth century, though, a huge number of people, especially but not only in the English-speaking world, were afflicted with apparently neurological illnesses such as paralysis, when their nerves demonstrably had nothing wrong with them. One very common example was “glove anesthesia”: one hand, normally the right hand, would become numb and immobile. From a physical perspective, that makes no sense at all; the nerves that bring feeling and movement to the hand run down the whole arm in narrow strips, so that if there were actually nerve damage, you’d get paralysis in one such strip all the way along the arm. There was no physical cause that could produce glove anesthesia, and yet it was relatively common in Europe and America in those days.

That’s where Sigmund Freud entered the picture.

It’s become popular in recent years to castigate Freud for his many failings, and since some of those failings were pretty significant, this hasn’t been difficult to do. More broadly, his fate is that of all thinkers whose ideas become too widespread: most people forget that somebody had to come up with the ideas in the first place. Before Freud’s time, a phrase like “the conscious self” sounded redundant—it had occurred to very, very few people that there might be any other kind—and the idea that desires that were rejected and denied by the conscious self might seep through the crawlspaces of the psyche and exert an unseen gravitational force on thought and behavior would have been dismissed as disgusting and impossible, if anybody had even thought of it in the first place.

From the pre-Freud perspective, the mind was active and the body was passive; the mind was conscious and the body was incapable of consciousness; the mind was rational and the body was incapable of reasoning; the mind was masculine and the body was feminine; the mind was luminous and pure and the body was dark and filthy. These two were the only parts of the self; nothing else need apply, and physicians, psychologists, and philosophers alike went out of their way to raise high barriers between the two. This vision of the self, in turn, was what Freud destroyed.

We don’t need to get into the details of his model of the self or his theory of neurosis; most of those have long since been challenged by later research. What mattered, ironically enough, wasn’t Freud’s theories or his clinical skills, but his immense impact on popular culture. It wasn’t all that important, for example, what evidence he presented that glove anesthesia is what happens when someone feels overwhelming guilt about masturbating, and unconsciously resolves that guilt by losing the ability to move or feel the hand habitually used for that pastime.

What mattered was that once a certain amount of knowledge of Freud’s theories spread through popular culture, anybody who had glove anesthesia could be quite sure that every educated person who found out about it would invariably think, “Guess who’s been masturbating!” Since one central point of glove anesthesia was to make a symbolic display of obedience to social convention—“See, I didn’t masturbate, I can’t even use that hand!”—the public discussion of the sexual nature of that particular neurosis made the neurosis itself too much of an embarrassment to put on display.

The frequency of glove anesthesia, and a great many other distinctive neuroses of sexual origin, thus dropped like a rock once Freud’s ideas became a matter of general knowledge. Freud therefore deserves the honor of having extirpated an entire class of diseases from the face of the earth. That the theories that accomplished this feat were flawed and one-sided simply adds to his achievement.

Like so many pioneers in the history of ideas, you see, Freud made the mistake of overgeneralizing from success, and ended up convincing himself and a great many of his students that sex was the only unstated motive that mattered. There, of course, he was quite wrong, and those of his students who were willing to challenge the rapid fossilization of Freudian orthodoxy quickly demonstrated this. Alfred Adler, for example, showed that unacknowledged cravings for power, ranging along the whole spectrum from the lust for domination to the longing for freedom and autonomy, can exert just as forceful a gravitational attraction on thought and behavior as sexuality.

Carl Jung then upped the ante considerably by showing that there is also an unrecognized motive, apparently hardwired in place, that pushed the tangled mess of disparate drives toward states of increased integration. In a few moments we’ll be discussing Jung in rather more detail, as some of his ideas mesh very well indeed with the Schopenhauerian vision we’re pursuing in this sequence of posts. What’s relevant at this point in the discussion is that all the depth psychologists—Freud and the Freudians, Adler and the Adlerians, Jung and the Jungians, not to mention their less famous equivalents—unearthed a great deal of evidence showing that the conscious thinking self, the supposed lord and master of the body, was froth on the surface of a boiling cauldron, much of whose contents was unmentionable in polite company.

Phenomena such as glove anesthesia played a significant role in that unearthing. When someone wracked by guilt about masturbating suddenly loses all feeling and motor control in one hand, when a psychosomatic illness crops up on cue to stop you from doing something you’ve decided you ought to do but really, really, don’t want to do, or when a Freudian slip reveals to all present that you secretly despise the person whom, for practical reasons, you’re trying to flatter—just who is making that decision? Who’s in charge? It’s certainly not the conscious thinking self, who as often as not is completely in the dark about the whole thing and is embarrassed or appalled by the consequences.

The quest for that “who,” in turn, led depth psychologists down a great many twisting byways, but the most useful of them for our present purposes was the one taken by Carl Jung.

Like Freud, Jung gets castigated a lot these days for his failings, and in particular it’s very common for critics to denounce him as an occultist. As it happens, this latter charge is very nearly accurate.  It was little more than an accident of biography that landed him in the medical profession and sent him chasing after the secrets of the psyche using scientific methods; he could have as easily become a professional occultist, part of the thriving early twentieth century central European occult scene with which he had so many close connections throughout his life. The fact remains that he did his level best to pursue his researches in a scientific manner; his first major contribution to psychology was a timed word-association test that offered replicable, quantitative proof of Freud’s theory of repression, and his later theories—however wild they appeared—had a solid base in biology in general, and in particular in ethology, the study of animal behavior.

Ethologists had discovered well before Jung’s time that instincts in the more complex animals seem to work by way of hardwired images in the nervous system. When goslings hatch, for example, they immediately look for the nearest large moving object, which becomes Mom. Ethologist Konrad Lorenz became famous for deliberately triggering that reaction, and being instantly adopted by a small flock of goslings, who followed him dutifully around until they were grown. (He returned the favor by feeding them and teaching them to swim.) What Jung proposed, on the basis of many years of research, is that human beings also have such hardwired images, and a great deal of human behavior can be understood best by watching those images get triggered by outside stimuli.

Consider what happens when a human being falls in love. Those who have had that experience know that there’s nothing rational about it. Something above or below or outside the thinking mind gets triggered and fastens onto another person, who suddenly sprouts an alluring halo visible only to the person in love; the thinking mind gets swept away, shoved aside, or dragged along sputtering and complaining the whole way; the whole world gets repainted in rosy tints—and then, as often as not, the nonrational factor shuts off, and the former lover is left wondering what on Earth he or she was thinking—which is of course exactly the wrong question, since thinking had nothing to do with it.

This, Jung proposed, is the exact equivalent of the goslings following Konrad Lorenz down to the lake to learn how to swim. Most human beings have a similar set of reactions hardwired into their nervous systems, put there over countless generations of evolutionary time to establish the sexual pair bonds that play so important a role in human life. Exactly what triggers those reactions varies significantly from person to person, for reasons that (like most aspects of human psychology) are partly genetic, partly epigenetic, partly a matter of environment and early experience, and partly unknown. Jung called the hardwired image at the center of that reaction an archetype, and showed that it surfaces in predictable ways in dreams, fantasies, and other contexts where the deeper, nonrational levels come within reach of consciousness.

The pair bonding instinct isn’t the only one that has its distinctive archetype. There are several others. For example, there’s a mother-image and a father-image, which are usually (but not always) triggered by the people who raise an infant, and may be triggered again at various points in later life by other people. Another very powerful archetype is the image of the enemy, which Jung called the Shadow. The Shadow is everything you hate, which means in effect that it’s everything you hate about yourself—but inevitably, until a great deal of self-knowledge has been earned the hard way, that’s not apparent at all. Just as the Anima or Animus, the archetypal image of the lover, is inevitably projected onto other human beings, so is the Shadow, very often with disastrous results.

In evolutionary terms, the Shadow fills a necessary role. Confronted with a hostile enemy, human or animal, the human or not-quite-human individual who can access the ferocious irrational energies of rage and hatred is rather more likely to come through alive and victorious than the one who can only draw on the very limited strengths of the conscious thinking self. Outside such contexts, though, the Shadow is a massive and recurring problem in human affairs, because it constantly encourages us to attribute all of our own most humiliating and unwanted characteristics to the people we like least, and to blame them for the things we project onto them.

Bigotries of every kind, including the venomous class bigotries I discussed in an earlier post, show the presence of the Shadow.  We project hateful qualities onto every member of a group of people because that makes it easier for us to ignore those same qualities in ourselves. Notice that the Shadow doesn’t define its own content; it’s a dumpster that can be filled with anything that cultural pressures or personal experiences lead us to despise.

Another archetype, though, deserves our attention here, and it’s the one that the Shadow helpfully clears of unwanted content. That’s the ego, the archetype that each of us normally projects upon ourselves. In place of the loose tangle of drives and reactions each of us actually is, a complex interplay of blind pressures striving with one another and with a universe of pressures from without, the archetype of the ego portrays us to ourselves as single, unified, active, enduring, conscious beings. Like the Shadow, the ego-archetype doesn’t define its own content, which is why different societies around the world and throughout time have defined the individual in different ways.

In the industrial cultures of the modern western world, though, the ego-archetype typically gets filled with a familiar set of contents, the ones we discussed in last week’s post: the mind, the conscious thinking self, as distinct from the body, comprising every other aspect of human experience and action. That’s the disguise the loose tangle of complex and conflicting will takes on in us, and it meets us at first glance whenever we turn our attention to ourselves, just as inevitably as the rose-tinted glory of giddy infatuation meets the infatuated lover who glances at his or her beloved, or the snarling, hateful, inhuman grimace of the Shadow meets those who encounter one of the people onto whom they have projected their own unacceptable qualities.

All this, finally, circles back to points I made in the first post in this sequence. The process of projection we’ve just been observing is the same, in essence, as the one that creates all the other representations that form the world we experience. You look at a coffee cup, again, and you think you see a solid, three-dimensional material object, because you no longer notice the complicated process by which you assemble fragmentary glimpses of unrelated sensory input into the representation we call a coffee cup. In exactly the same way, but to an even greater extent, you don’t notice the processes by which the loose tangle of conflicting wills each of us calls “myself” gets overlaid with the image of the conscious thinking self, which our cultures provide as raw material for the ego-archetype to feed on.

Nor, of course, do you notice the acts of awareness that project warm and alluring emotions onto the person you love, or hateful qualities onto the person you hate. It’s an essential part of the working of the mind that, under normal circumstances, these wholly subjective qualities should be experienced as objective realities. If the lover doesn’t project that roseate halo onto the beloved, if the bigot doesn’t project all those hateful qualities onto whatever class of people has been selected for their object, the archetype isn’t doing its job properly, and it will fail to have its effects—which, again, exist because they’ve proven to be more successful than not over the course of evolutionary time.

Back when Freud was still in medical school, one common entertainment among the well-to-do classes of Victorian Europe was the magic lantern show. A magic lantern is basically an early slide projector; they were used in some of the same ways that PowerPoint presentations are used today, though in the absence of moving visual media, they also filled many of the same niches as movies and television do today. (I’m old enough to remember when slide shows of photos from distant countries were still a tolerably common entertainment, for that matter.) The most lurid and popular of magic lantern shows, though, used the technology to produce spooky images in a darkened room—look, there’s a ghost! There’s a demon! There’s Helen of Troy come back from the dead!  Like the performances of stage magicians, the magic lantern show produced a simulacrum of wonders in an age that had convinced itself that miracles didn’t exist but still longed for them.

The entire metaphor of “projection” used by Jung and other depth psychologists came from these same performances, and it’s a very useful way of making sense of the process in question. An image inside the magic lantern appears to be out there in the world, when it’s just projected onto the nearest convenient surface; in the same way, an image within the loose tangle of conflicting wills we call “ourselves” appears to be out there in the world, when it’s just projected onto the nearest convenient person—or it appears to be the whole truth about the self, when it’s just projected onto the nearest convenient tangle of conflicting wills.

Is there a way out of the magic lantern show? Schopenhauer and Jung both argued that yes, there is—not, to be sure, a way to turn off the magic lantern, but a way to stop mistaking the projections for realities.  There’s a way to stop spending our time professing undying love on bended knee to one set of images projected on blank walls, and flinging ourselves into mortal combat against another set of images so projected; there’s a way, to step back out of the metaphor, to stop confusing the people around us with the images we like to project on them, and interact with them rather than with the images we’ve projected. 

The ways forward that Jung and Schopenhauer offered were different in some ways, though the philosopher’s vision influenced the psychologist’s to a great extent. We’ll get to their road maps as this conversation proceeds; first, though, we’re going to have to talk about some extremely awkward issues, including the festering swamp of metastatic abstractions and lightly camouflaged bullying that goes these days by the name of ethics.

I’ll offer one hint here, though. Just as we don’t actually learn how to love until we find a way to integrate infatuation with less giddy approaches to relating to another person, just as we don’t learn to fight competently until we can see the other guy’s strengths and weaknesses for what they are rather than what our projections would like them to be, we can’t come to terms with ourselves until we stop mistaking the ego-image for the whole loose tangled mess of self, and let something else make its presence felt. As for what that something else might be—why, we’ll get to that in due time.