Wednesday, February 22, 2017

A Muddle of Mind and Matter

The philosophy of Arthur Schopenhauer, which we’ve been discussing for the last two weeks, has a feature that reliably irritates most people when they encounter it for the first time: it doesn’t divide up the world the way people in modern western societies habitually do. To say, as Schopenhauer does, that the world we experience is a world of subjective representations, and that we encounter the reality behind those representations in will, is to map out the world in a way so unfamiliar that it grates on the nerves. Thus it came as no surprise that last week’s post fielded a flurry of responses trying to push the discussion back onto the more familiar ground of mind and matter.

That was inevitable. Every society has what I suppose could be called its folk metaphysics, a set of beliefs about the basic nature of existence that are taken for granted by most people in that society, and the habit of dividing the world of our experience into mind and matter is among the core elements of the folk metaphysics of the modern western world. Most of us think of it, on those occasions when we think of it at all, as simply the way the world is. It rarely occurs to most of us that there’s any other way to think of things—and when one shows up, a great many of us back away from it as fast as possible.

Yet dividing the world into mind and matter is really rather problematic, all things considered. The most obvious difficulty is the relation between the two sides of the division. This is usually called the mind-body problem, after the place where each of us encounters that difficulty most directly. Grant for the sake of argument that each of us really does consist of a mind contained in a material body: how do these two connect? It’s far from easy to come up with an answer that works.

Several approaches have been tried in the attempt to solve the mind-body problem. There’s dualism, the claim that there are two entirely different and independent kinds of things in the world—minds and bodies—which requires its proponents to come up with various ways to justify the connection between them. First place for philosophical brashness in this connection goes to Rene Descartes, who argued that the link was directly and miraculously caused by the will of God. Plenty of less blatant methods of handwaving have been used to accomplish the same trick, but all of them require question-begging maneuvers of various kinds, and none has yet managed to present any kind of convincing evidence for itself.

Then there are the reductionistic monisms, which attempt to account for the relationship of mind and matter by reducing one of them to the other. The most popular reductionistic monism these days is reductionistic materialism, which claims that what we call “mind” is simply the electrochemical activity of those lumps of matter we call human brains. Though it’s a good deal less popular nowadays, there’s also reductionistic idealism, which claims that what we call “matter” is brought into being by the activity of minds, or of Mind.

Further out still, you get the eliminative monisms, which deal with the relationship between mind and matter by insisting that one of them doesn’t exist. There are eliminative materialists, for example, who insist that mental experiences don’t exist, and our conviction that we think, feel, experience pain and pleasure, etc. is an “introspective illusion.” (I’ve often thought that one good response to such a claim would be to ask, “Do you really think so?” The consistent eliminative materialist would have to answer “No.”) There are also eliminative idealists, who insist that matter doesn’t exist and that all is mind.

There’s probably been as much effort expended on the mind-body problem as on any other single philosophical issue in modern times, and yet it remains the focus of endless debates even today. That sort of intellectual merry-go-round is usually a pretty good sign that the basic assumptions at the root of the question have some kind of lethal flaw. That’s particularly true when this sort of ongoing donnybrook isn’t the only persistent difficulty surrounding the same set of ideas—and that’s very much the case here.

After all, there’s a far more personal sense in which the phrase “mind-body problem” can be taken. To speak in the terms usual for our culture, this thing we’re calling “mind” includes only a certain portion of what we think of as our inner lives. What, after all, counts as “mind”? In the folk metaphysics of our culture, and in most of the more formal systems of thought based on it, “mind” is consciousness plus the thinking and reasoning functions, perhaps with intuition (however defined) tied on like a squirrel’s  tail to the antenna of an old-fashioned jalopy. The emotions aren’t part of mind, and neither are such very active parts of our lives as sexual desire and the other passions; it sounds absurd, in fact, to talk about “the emotion-body problem” or the “passion-body problem.” Why does it sound absurd? Because, consciously or unconsciously, we assign the emotions and the passions to the category of “body,” along with the senses.

This is where we get the second form of the mind-body problem, which is that we’re taught implicitly and explicitly that the mind governs the body, and yet the functions we label “body” show a distinct lack of interest in obeying the functions we call “mind.” Sexual desire is of course the most obvious example. What people actually desire and what they think they ought to desire are quite often two very different things, and when the “mind” tries to bully the “body” into desiring what the “mind” thinks it ought to desire, the results are predictably bad. Add enough moral panic to the mix, in fact, and you end up with sexual hysteria of the classic Victorian type, in which the body ends up being experienced as a sinister Other responding solely to its own evil propensities, the seductive wiles of other persons, or the machinations of Satan himself despite all the efforts of the mind to rein it in.

Notice the implicit hierarchy woven into the folk metaphysics just sketched out, too. Mind is supposed to rule matter, not the other way around; mind is active, while matter is passive or, at most, subject to purely mechanical pressures that make it lurch around in predictable ways. When things don’t behave that way, you tend to see people melt down in one way or another—and the universe being what it is, things don’t actually behave that way very often, so the meltdowns come at regular intervals.

They also arrive in an impressive range of contexts, because the way of thinking about things that divides them into mind and matter is remarkably pervasive in western societies, and pops up in the most extraordinary places.  Think of the way that our mainstream religions portray God as the divine Mind ruling omnipotently over a universe of passive matter; that’s the ideal toward which our notions of mind and body strive, and predictably never reach. Think of the way that our entertainment media can always evoke a shudder of horror by imagining that something we assign to the category of lifeless matter—a corpse in the case of zombie flicks, a machine in such tales as Stephen King’s Christine, or what have you—suddenly starts acting as though it possesses a mind.

For that matter, listen to the more frantic end of the rhetoric on the American left following the recent presidential election and you’ll hear the same theme echoing off the hills. The left likes to think of itself as the smart people, the educated people, the sensitive and thoughtful and reasonable people—in effect, the people of Mind. The hate speech that many of them direct toward their political opponents leans just as heavily on the notion that these latter are stupid, uneducated, insensitive, irrational, and so on—that is to say, the people of Matter. Part of the hysteria that followed Trump’s election, in turn, might best be described as the political equivalent of the instinctive reaction to a zombie flick: the walking dead have suddenly lurched out of their graves and stalked toward the ballot box, the body politic has rebelled against its self-proclaimed mind!

Let’s go deeper, though. The habit of dividing the universe of human experience into mind and matter isn’t hardwired into the world, or for that matter into human consciousness; there have been, and are still, societies in which people simply don’t experience themselves and the world that way. The mind-body problem and the habits of thought that give rise to it have a history, and it’s by understanding that history that it becomes possible to see past the problem toward a solution.

That history takes its rise from an interesting disparity among the world’s great philosophical traditions. The three that arose independently—the Chinese, the Indian, and the Greek—focused on different aspects of humanity’s existence in the world. Chinese philosophy from earliest times directed its efforts to understanding the relationship between the individual and society; that’s why the Confucian mainstream of Chinese philosophy is resolutely political and social in its focus, exploring ways that the individual can find a viable place within society, and the alternative Taoist tradition in its oldest forms (before it absorbed mysticism from Indian sources) focused on ways that the individual can find a viable place outside society. Indian philosophy, by contrast, directed its efforts to understanding the nature of individual existence itself; that’s why the great Indian philosophical schools all got deeply into epistemology and ended up with a strong mystical bent.

The Greek philosophical tradition, in turn, went to work on a different set of problems. Greek philosophy, once it got past its initial fumblings, fixed its attention on the world of thought. That’s what led Greek thinkers to transform mathematics from an unsorted heap of practical techniques to the kind of ordered system of axioms and theorems best exemplified by Euclid’s Elements of Geometry, and it’s also what led Greek thinkers in the same generation as Euclid to create logic, one of the half dozen or so greatest creations of the human mind. Yet it also led to something considerably more problematic: the breathtaking leap of faith by which some of the greatest intellects of the ancient world convinced themselves that the structure of their thoughts was the true structure of the universe, and that thoughts about things were therefore more real than the things themselves.

The roots of that conviction go back all the way to the beginnings of Greek philosophy, but it really came into its own with Parmenides, an important philosopher of the generation immediately before Plato. Parmenides argued that there were two ways of understanding the world, the way of truth and the way of opinion; the way of opinion consisted of understanding the world as it appears to the senses, which according to Parmenides means it’s false, while the way of truth consisted of understanding the world the way that reason proved it had to be, even when this contradicted the testimony of the senses. To be sure, there are times and places where the testimony of the senses does indeed need to be corrected by logic, but it’s at least questionable whether this should be taken anything like as far as Parmenides took it—he argued, for example, that motion was logically impossible, and so nothing ever actually moves, even though it seems that way to our deceiving senses.

The idea that thoughts about things are more real than things settled into what would be its classic form in the writings of Plato, who took Parmenides’ distinction and set to work to explain the relationship between the worlds of truth and opinion. To Plato, the world of truth became a world of forms or ideas, on which everything in the world of sensory experience is modeled. The chair we see, in other words, is a projection or reflection downwards into the world of matter of the timeless, pure, and perfect form or idea of chair-ness. The senses show us the projections or reflections; the reasoning mind shows us the eternal form from which they descend.

That was the promise of classic Platonism—that the mind could know the truth about the universe directly, without the intervention of the senses, the same way it could know the truth of a mathematical demonstration. The difficulty with this enticing claim, though, was that when people tried to find the truth about the universe by examining their thinking processes, no two of them discovered exactly the same truth, and the wider the cultural and intellectual differences between them, the more different the truths turned out to be. It was for this reason among others that Aristotle, whose life’s work was basically that of cleaning up the mess that Plato and his predecessors left behind, made such a point of claiming that nothing enters the mind except through the medium of the senses. It’s also why the Academy, the school founded by Plato, in the generations immediately after his time took a hard skeptical turn, and focused relentlessly on the limits of human knowledge and reasoning.

Later on, Greek philosophy and its Roman foster-child headed off in other directions—on the one hand, into ethics, and the question of how to live the good life in a world where certainty isn’t available; on the other, into mysticism, and the question of whether the human mind can experience the truth of things directly through religious experience. A great deal of Plato’s thinking, however, got absorbed by the Christian religion after the latter clawed its way to respectability in the fourth century CE.

Augustine of Hippo, the theologian who basically set the tone of Christianity in the west for the next fifteen centuries, had been a Neoplatonist before he returned to his Christian roots, and he was far from the only Christian of that time to drink deeply from Plato's well. In his wake, Platonism became the standard philosophy of the western church until it was displaced by a modified version of Aristotle’s philosophy in the high Middle Ages. Thinkers divided the human organism into two portions, body and soul, and began the process by which such things as sexuality and the less angelic emotions got exiled from the soul into the body.

Even after Thomas Aquinas made Aristotle popular again, the basic Parmenidean-Platonic notion of truth had been so thoroughly bolted into Christian theology that it rode right over any remaining worries about the limitations of human reason. The soul trained in the use of reason could see straight to the core of things, and recognize by its own operations such basic religious doctrines as the existence of God:  that was the faith with which generations of scholars pursued the scholastic philosophy of medieval times, and those who disagreed with them rarely quarreled over their basic conception—rather, the point at issue was whether the Fall had left the human mind so vulnerable to the machinations of Satan that it couldn’t count on its own conclusions, and the extent to which divine grace would override Satan’s malicious tinkerings anywhere this side of heaven.

If you happen to be a devout Christian, such questions make sense, and they matter. It’s harder to see how they still made sense and mattered as the western world began moving into its post-Christian era in the eighteenth century, and yet the Parmenidean-Platonic faith in the omnipotence of reason gained ground as Christianity ebbed among the educated classes. People stopped talking about soul and body and started talking about mind and body instead.

Since mind, mens in Latin, was already in common use as a term for the faculty of the soul that handled its thinking and could be trained to follow the rules of reason, that shift was of vast importance. It marked the point at which the passions and the emotions were shoved out of the basic self-concept of the individual in western culture, and exiled to the body, that unruly and rebellious lump of matter in which the mind is somehow caged.

That’s one of the core things that Schopenhauer rejected. As he saw it, the mind isn’t the be-all and end-all of the self, stuck somehow into the prison house of the body. Rather, the mind is a frail and unstable set of functions that surface now and then on top of other functions that are much older, stronger, and more enduring. What expresses itself through all these functions, in turn, is will:  at the most basic primary level, as the will to exist; on a secondary level, as the will to live, with all the instincts and drives that unfold from that will; on a tertiary level, as the will to experience, with all the sensory and cognitive apparatus that unfolds from that will; and on a quaternary level, as the will to understand, with all the abstract concepts and relationships that unfold from that will.

Notice that from this point of view, the structure of thought isn't the structure of the cosmos, just a set of convenient models, and thoughts about things are emphatically not more real than the things themselves.  The things themselves are wills, expressing themselves through their several modes. The things as we know them are representations, and our thoughts about the things are abstract patterns we create out of memories of representations, and thus at two removes from reality.

Notice also that from this point of view, the self is simply a representation—the ur-representation, the first representation each of us makes in infancy as it gradually sinks in that there’s a part of the kaleidoscope of our experience that we can move at will, and a lot more that we can’t, but still just a representation, not a reality. Of course that’s what we see when we first try to pay attention to ourselves, just as we see the coffee cup discussed in the first post in this series. It takes exacting logical analysis, scientific experimentation, or prolonged introspection to get past the representation of the self (or the coffee cup), realize that it’s a subjective construct rather than an objective reality, and grasp the way that it’s assembled out of disparate stimuli according to preexisting frameworks that are partly hardwired into our species and partly assembled over the course of our lives.

Notice, finally, that those functions we like to call “mind”—in the folk metaphysics of our culture, again, these are consciousness and the capacity to think, with a few other tag-ends of other functions dangling here and there—aren’t the essence of who we are, the ghost in the machine, the Mini-Me perched inside the skull that pushes and pulls levers to control the passive mass of the body and gets distracted by the jabs and lurches of the emotions and passions. The functions we call “mind,” rather, are a set of delicate, tentative, and fragile functions of will, less robust and stable than most of the others, and with no inherent right to rule the other functions. The Schopenhauerian self is an ecosystem rather than a hierarchy, and if what we call “mind” sits at the top of the food chain like a fox in a meadow, that simply means that the fox has to spend much of its time figuring out where mice like to go, and even more of its time sleeping in its den, while the mice scamper busily about and the grass goes quietly about turning sunlight, water and carbon dioxide into the nutrients that support the whole system.

Accepting this view of the self requires sweeping revisions of the ways we like to think about ourselves and the world, which is an important reason why so many people react with acute discomfort when it’s suggested. Nonetheless those revisions are of central importance, and as this discussion continues, we’ll see how they offer crucial insights into the problems we face in this age of the world—and into their potential solutions.

Wednesday, February 15, 2017

The World as Will

It's impressively easy to misunderstand the point made in last week’s post here on The Archdruid Report. To say that the world we experience is made up of representations of reality, constructed in our minds by taking the trickle of data we get from the senses and fitting those into patterns that are there already, doesn’t mean that nothing exists outside of our minds. Quite the contrary, in fact; there are two very good reasons to think that there really is something “out there,” a reality outside our minds that produces the trickle of data we’ve discussed.

The first of those reasons seems almost absurdly simple at first glance: the world doesn’t always make sense to us. Consider, as one example out of godzillions, the way that light seems to behave like a particle on some occasions and like a wave on others. That’s been described, inaccurately, as a paradox, but it’s actually a reflection of the limitations of the human mind.

What, after all, does it mean to call something a particle? Poke around the concept for a while and you’ll find that at root, this concept “particle” is an abstract metaphor, extracted from the common human experience of dealing with little round objects such as pebbles and marbles. What, in turn, is a wave? Another abstract metaphor, extracted from the common human experience of watching water in motion. When a physicist says that light sometimes acts like a particle and sometimes like a wave, what she’s saying is that neither of these two metaphors fits more than a part of the way that light behaves, and we don’t have any better metaphor available.

If the world was nothing but a hallucination projected by our minds, then it would contain nothing that wasn’t already present in our minds—for what other source could there be? That implies in turn that there would be a perfect match between the contents of the world and the contents of our minds, and we wouldn’t get the kind of mismatch between mind and world that leaves physicists flailing. More generally, the fact that the world so often baffles us offers good evidence that behind the world we experience, the world as representation, there’s some “thing in itself” that’s the source of the sense data we assemble into representations.

The other reason to think that there’s a reality distinct from our representations is that, in a certain sense, we experience such a reality at every moment.

Raise one of your hands to a position where you can see it, and wiggle the fingers. You see the fingers wiggling—or, more precisely, you see a representation of the wiggling fingers, and that representation is constructed in your mind out of bits of visual data, a great deal of memory, and certain patterns that seem to be hardwired into your mind. You also feel the fingers wiggling—or, here again, you feel a representation of the wiggling fingers, which is constructed in your mind out of bits of tactile and kinesthetic data, plus the usual inputs from memory and hardwired patterns. Pay close attention and you might be able to sense the way your mind assembles the visual representation and the tactile one into a single pattern; that happens close enough to the surface of consciousness that a good many people can catch themselves doing it.

So you’ve got a representation of wiggling fingers, part of the world as representation we experience. Now ask yourself this: the action of the will that makes the fingers wiggle—is that a representation?

This is where things get interesting, because the only reasonable answer is no, it’s not. You don’t experience the action of the will as a representation; you don’t experience it at all. You simply wiggle your fingers. Sure, you experience the results of the will’s action in the form of representations—the visual and tactile experiences we’ve just been considering—but not the will itself. If it were true that you could expect to see or hear or feel or smell or taste the impulse of the will rolling down your arm to the fingers, say, it would be reasonable to treat the will as just one more representation. Since that isn’t the case, it’s worth exploring the possibility that in the will, we encounter something that isn’t just a representation of reality—it’s a reality we encounter directly.

That’s the insight at the foundation of Arthur Schopenhauer’s philosophy. Schopenhauer’s one of the two principal guides who are going to show us around the giddy funhouse that philosophy has turned into of late, and guide us to the well-marked exits, so you’ll want to know a little about him. He lived in the ramshackle assortment of little countries that later became the nation of Germany; he was born in 1788 and died in 1860; he got his doctorate in philosophy in 1813; he wrote his most important work, The World as Will and Representation, before he turned thirty; and he spent all but the last ten years of his life in complete obscurity, ignored by the universities and almost everyone else. A small inheritance, carefully managed, kept him from having to work for a living, and so he spent his time reading, writing, playing the flute for an hour a day before dinner, and grumbling under his breath as philosophy went its merry way into metaphysical fantasy. He grumbled a lot, and not always under his breath. Fans of Sesame Street can think of him as philosophy’s answer to Oscar the Grouch.

Schopenhauer came of age intellectually in the wake of Immanuel Kant, whose work we discussed briefly last week, and so the question he faced was how philosophy could respond to the immense challenge Kant threw at the discipline’s feet. Before you go back to chattering about what’s true and what’s real, Kant said in effect, show me that these labels mean something and relate to something, and that you’re not just chasing phantoms manufactured by your own minds.

Most of the philosophers who followed in Kant’s footsteps responded to his challenge by ignoring it, or using various modes of handwaving to pretend that it didn’t matter. One common gambit at the time was to claim that the human mind has a special superpower of intellectual intuition that enables it to leap tall representations in a single bound, and get to a direct experience of reality that way. What that meant in practice, of course, is that a philosopher could claim to have intellectually intuited this, that, and the other thing, treat whatever abstractions he fancied as truths that didn’t have to be proved, and then build a great tottering system on top of them; after all, he’d intellectually intuited them—prove that he hadn’t!

There were other such gimmicks. What set Schopenhauer apart was that he took Kant’s challenge seriously enough to go looking for something that wasn’t simply a representation. What he found—why, that brings us back to the wiggling fingers.

As discussed in last week’s post, every one of the world’s great philosophical traditions has ended up having to face the same challenge Kant flung in the face of the philosophers of his time. Schopenhauer knew this, since a fair amount of philosophy from India had been translated into European languages by his time, and he read extensively on the subject. This was helpful because Indian philosophy hit its own epistemological crisis around the tenth century BCE, a good twenty-nine centuries before Western philosophy got there, and so had a pretty impressive head start. There’s a rich diversity of responses to that crisis in the classical Indian philosophical schools, but most of them came to see consciousness as a (or the) thing-in-itself, as reality rather than representation.

It’s a plausible claim. Look at your hand again, with or without wiggling fingers. Now be aware of yourself looking at the hand—many people find this difficult, so be willing to work at it, and remember to feel as well as see. There’s your hand; there’s the space between your hand and your eyes; there’s whatever of your face you can see, with or without eyeglasses attached; pay close attention and you can also feel your face and your eyes from within; and then there’s—

There’s the thing we call consciousness, the whatever-it-is that watches through your eyes. Like the act of will that wiggled your fingers, it’s not a representation; you don’t experience it. In fact, it’s very like the act of will that wiggled your fingers, and that’s where Schopenhauer went his own way.

What, after all, does it mean to be conscious of something? Some simple examples will help clarify this. Move your hand until it bumps into something; it’s when something stops the movement that you feel it. Look at anything; you can see it if and only if you can’t see through it. You are conscious of something when, and only when, it resists your will.

That suggested to Schopenhauer that consciousness derives from will, not the other way around. There are other lines of reasoning that point in the same direction, and all of them derive from common human experiences. For example, each of us stops being conscious for some hours out of every day, whenever we go to sleep. During part of the time we’re sleeping, we experience nothing at all; during another part, we experience the weirdly disconnected representations we call “dreams.” Even in dreamless sleep, though, it’s common for a sleeper to shift a limb away from an unpleasant stimulus. Thus the will is active even when consciousness is absent.

Schopenhauer proposed that there are different forms or, as he put it, grades of the will. Consciousness, which we can define for present purposes as the ability to experience representations, is one grade of the will—one way that the will can adapt to existence in a world that often resists it. Life is another, more basic grade. Consider the way that plants orient themselves toward sunlight, bending and twisting like snakes in slow motion, and seek out concentrations of nutrients with probing, hungry roots. As far as anyone knows, plants aren’t conscious—that is, they don’t experience a world of representations the way that animals do—but they display the kind of goal-seeking behavior that shows the action of will.

Animals also show goal-seeking behavior, and they do it in a much more complex and flexible way than plants do. There’s good reason to think that many animals are conscious, and experience a world of representations in something of the same way we do; certainly students of animal behavior have found that animals let incidents from the past shape their actions in the present, mistake one person for another, and otherwise behave in ways that suggest that their actions are guided, as ours are, by representations rather than direct reaction to stimuli. In animals, the will has developed the ability to represent its environment to itself.

Animals, at least the more complex ones, also have that distinctive mode of consciousness we call emotion. They can be happy, sad, lonely, furious, and so on; they feel affection for some beings and aversion toward others. Pay attention to your own emotions and you’ll soon notice how closely they relate to the will. Some emotions—love and hate are among them—are motives for action, and thus expressions of will; others—happiness and sadness are among them—are responses to the success or failure of the will to achieve its goals. While emotions are tangled up with representations in our minds, and presumably in those of animals as well, they stand apart; they’re best understood as conditions of the will, expressions of its state as it copes with the world through its own representations.

And humans? We’ve got another grade of the will, which we can call intellect:  the ability to add up representations into abstract concepts, which we do, ahem, at will. Here’s one representation, which is brown and furry and barks; here’s another like it; here’s a whole kennel of them—and we lump them all together in a single abstract category, to which we assign a sound such as “dog.” We can then add these categories together, creating broader categories such as “quadruped” and “pet;” we can subdivide the categories to create narrower ones such as “puppy” and “Corgi;” we can extract qualities from the whole and treat them as separate concepts, such as “furry” and “loud;” we can take certain very general qualities and conjure up the entire realm of abstract number, by noticing how many paws most dogs have and using that, and a great many other things, to come up with the concept of “four.”
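
For readers who find a concrete analogy helpful, that process of lumping, splitting, and extracting can be sketched, very loosely, in a few lines of code. The attributes and categories below are invented purely for illustration—nothing here comes from Schopenhauer, and no claim is made that minds actually work this way:

```python
# A loose computational analogy only: treat each representation as a bundle of
# observed qualities, and a "concept" as a label attached to whatever bundles
# share the qualities we happen to care about.

representations = [
    {"furry": True, "barks": True, "paws": 4, "size": "small"},   # a puppy
    {"furry": True, "barks": True, "paws": 4, "size": "large"},   # a full-grown dog
    {"furry": True, "barks": False, "paws": 4, "size": "small"},  # a cat, say
]

# Lumping together: everything furry that barks gets filed under "dog".
dogs = [r for r in representations if r["furry"] and r["barks"]]

# Broadening: anything with four paws gets filed under "quadruped".
quadrupeds = [r for r in representations if r["paws"] == 4]

# Subdividing: small dogs get filed, crudely, under "puppy".
puppies = [r for r in dogs if r["size"] == "small"]

# Extracting a quality and treating it as a concept in its own right:
# noticing how many paws these bundles share yields the bare number "four".
four = {r["paws"] for r in quadrupeds}.pop()

print(len(dogs), len(quadrupeds), len(puppies), four)  # prints: 2 3 1 4
```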

So life, consciousness, and intellect are three grades of the will. One interesting thing about them is that the more basic ones are more enduring and stable than the more complex ones. Humans, again, are good examples. Humans remain alive all the way from birth to death; they’re conscious only when awake; they’re intelligent only when actively engaged in thinking—which is a lot less often than we generally like to admit. A certain degree of tiredness, a strong emotion, or a good stiff drink is usually enough to shut off the intellect and leave us dealing with the world on the same mental basis as an ordinarily bright dog; it takes quite a bit more to reduce us to the vegetative level, and serious physical trauma to go one more level down.

Let’s take a look at that final level, though. The conventional wisdom of our age holds that everything that exists is made up of something called “matter,” which is configured in various ways; further, that matter is what really exists, and everything else is somehow a function of matter if it exists at all. For most of us, this is the default setting, the philosophical opinion we start from and come back to, and anyone who tries to question it can count on massive pushback.

The difficulty here is that philosophers and scientists have both proved, in their own ways, that the usual conception of matter is quite simply nonsense. Any physical scientist worth his or her sodium chloride, to begin with, will tell you that what we habitually call “solid matter” is nearly as empty as the vacuum of deep space—a bit of four-dimensional curved spacetime that happens to have certain tiny probability waves spinning dizzily in it, and it’s the interaction between those probability waves and those composing that other patch of curved spacetime we each call “my body” that creates the illusions of solidity, color, and the other properties we attribute to matter.

The philosophers got to the same destination a couple of centuries earlier, and by a different route. The epistemologists I mentioned in last week’s post—Locke, Berkeley, and Hume—took the common conception of matter apart layer by layer and showed, to use the formulation we’ve already discussed, that all the things we attribute to matter are simply representations in the mind. Is there something out there that causes those representations? As already mentioned, yes, there’s very good reason to think so—but that doesn’t mean that the “something out there” has to consist of matter in any sense of the word that means anything.

That’s where Schopenhauer got to work, and once again, he proceeded by calling attention to certain very basic and common human experiences. Each of us has direct access, in a certain sense, to one portion of the “something out there,” the portion each of us calls “my body.” When we experience our bodies, we experience them as representations, just like anything else—but we also act with them, and as the experiment with the wiggling fingers demonstrated, the will that acts isn’t a representation.

Thus there’s a boundary between the part of the universe we encounter as will and representation, and the part we encounter only as representation. The exact location of that boundary is more complex than it seems at first sight. It’s a commonplace in the martial arts, for example, that a capable martial artist can learn to feel with a weapon as though it were a part of the body. Many kinds of swordsmanship rely on what fencers call sentiment du fer, the “sense of the steel”: the competent fencer can feel the lightest touch of the other blade against his own, just as though it brushed his hand.

There are also certain circumstances—lovemaking, dancing, ecstatic religious experience, and mob violence are among them—in which under certain hard-to-replicate conditions, two or more people seem to become, at least briefly, a single entity that moves and acts with a will of its own. All of those involve a shift from the intellect to a more basic grade of the will, and they lead in directions that will deserve a good deal more examination later on; for now, the point at issue is that the boundary line between self and other can be a little more fluid than we normally tend to assume.

For our present purposes, though, we can set that aside and focus on the body as the part of the world each of us encounters in a twofold way: as a representation among representations, and as a means of expression for the will.  Everything we perceive about our bodies is a representation, but by noticing these representations, we observe the action of something that isn't a representation, something we call the will, manifesting in its various grades. That's all there is. Go looking as long as you want, says Schopenhauer, and you won't find anything but will and representations. What if that's all there is—if the thing we call "matter" is simply the most basic grade of the will, and everything in the world thus amounts to will on the one hand and representations experienced by that mode of will we call consciousness on the other, and the things the representations represent are various expressions of this one energy that, by way of its distinctive manifestations in our own experience, we call the will?

That’s Schopenhauer’s vision. The remarkable thing is how close it is to the vision that comes out of modern science. A century before quantum mechanics, he’d already grasped that behind the facade of sensory representations that you and I call matter lies an incomprehensible and insubstantial reality, a realm of complex forces dancing in the void. Follow his arguments out to their logical conclusion and you get a close enough equivalent of the universe of modern physics that it’s not at all implausible that they’re one and the same. Of course plausibility isn’t proof—but given the fragile, dependent, and derivative nature of the human intellect, it may be as close as we can get.

And of course that latter point is a core reason why Arthur Schopenhauer spent most of his life in complete obscurity and why, after a brief period of mostly posthumous superstardom in the late nineteenth century, his work dropped out of sight and has rarely been noticed since. (To be precise, it’s one of two core reasons; we’ll get to the other one later.) If he’s right, then the universe is not rational. Reason—the disciplined use of the grade of will I’ve called the intellect—isn’t a key to the truth of things.  It’s simply the systematic exploitation of a set of habits of mind that turned out to be convenient for our ancestors as they struggled with the hard but intellectually undemanding tasks of staying fed, attracting mates, chasing off predators, and the like, and later on got pulled out of context and put to work coming up with complicated stories about what causes the representations we experience.

To suggest that, much less to back it up with a great deal of argument and evidence, is to collide head on with one of the most pervasive presuppositions of our culture. We’ll survey the wreckage left behind by that collision in next week’s post.

Wednesday, February 08, 2017

The World as Representation

It can be hard to remember these days that not much more than half a century ago, philosophy was something you read about in general-interest magazines and the better grade of newspapers. Existentialist philosopher Jean-Paul Sartre was an international celebrity; the posthumous publication of Pierre Teilhard de Chardin’s Le Phénomène Humain (the English translation, predictably, was titled The Phenomenon of Man) got significant flurries of media coverage; Random House’s Vintage Books label brought out cheap mass-market paperback editions of major philosophical writings from Plato straight through to Nietzsche and beyond, and made money off them.

Though philosophy was never really part of the cultural mainstream, it had the same kind of following as avant-garde jazz, say, or science fiction.  At any reasonably large cocktail party you had a pretty fair chance of meeting someone who was into it, and if you knew where to look in any big city—or any college town with pretensions to intellectual culture, for that matter—you could find at least one bar or bookstore or all-night coffee joint where the philosophy geeks hung out, and talked earnestly into the small hours about Kant or Kierkegaard. What’s more, that level of interest in the subject had been pretty standard in the Western world for a very long time.

We’ve come a long way since then, and not in a particularly useful direction. These days, if you hear somebody talk about philosophy in the media, it’s probably a scientific materialist like Neil deGrasse Tyson ranting about how all philosophy is nonsense. The occasional work of philosophical exegesis still gets a page or two in the New York Review of Books now and then, but popular interest in the subject has vanished, and more than vanished: the sort of truculent ignorance about philosophy displayed by Tyson and his many equivalents has become just as common among the chattering classes as a feigned interest in the subject was half a century ago.

Like most human events, the decline of philosophy in modern times was overdetermined; like the victim in the murder-mystery paperback who was shot, strangled, stabbed, poisoned, whacked over the head with a lead pipe, and then shoved off a bridge to drown, there were more causes of death than the situation actually required. Part of the problem, certainly, was the explosive expansion of the academic industry in the US and elsewhere in the second half of the twentieth century.  In an era when every state teacher’s college aspired to become a university and every state university dreamed of rivaling the Ivy League, a philosophy department was an essential status symbol. The resulting expansion of the field was not necessarily matched by an equivalent increase in genuine philosophers, but it was certainly followed by the transformation of university-employed philosophy professors into a professional caste which, as such castes generally do, defended its status by adopting an impenetrable jargon and ignoring or rebuffing attempts at participation from outside its increasingly airtight circle.

Another factor was the rise of the sort of belligerent scientific materialism exemplified, as noted earlier, by Neil deGrasse Tyson. Scientific inquiry itself is philosophically neutral—it’s possible to practice science from just about any philosophical standpoint you care to name—but the claim at the heart of scientific materialism, the dogmatic insistence that those things that can be investigated using scientific methods and explained by current scientific theory are the only things that can possibly exist, depends on arbitrary metaphysical postulates that were comprehensively disproved by philosophers more than two centuries ago. (We’ll get to those postulates and their problems later on.) Thus the ascendancy of scientific materialism in educated culture pretty much mandated the dismissal of philosophy.

There were plenty of other factors as well, most of them having no more to do with philosophy as such than the ones just cited. Philosophy itself, though, bears some of the responsibility for its own decline. Starting in the seventeenth century and reaching a crisis point in the nineteenth, western philosophy came to a parting of the ways—one that the philosophical traditions of other cultures reached long before it, with similar consequences—and by and large, philosophers and their audiences alike chose a route that led to its present eclipse. That choice isn’t irreparable, and there’s much to be gained by reversing it, but it’s going to take a fair amount of hard intellectual effort and a willingness to abandon some highly popular shibboleths to work back to the mistake that was made, and undo it.

To help make sense of what follows, a concrete metaphor might be useful. If you’re in a place where there are windows nearby, especially if the windows aren’t particularly clean, go look out through a window at the view beyond it. Then, after you’ve done this for a minute or so, change your focus and look at the window rather than through it, so that you see the slight color of the glass and whatever dust or dirt is clinging to it. Repeat the process a few times, until you’re clear on the shift I mean: looking through the window, you see the world; looking at the window, you see the medium through which you see the world—and you might just discover that some of what you thought at first glance was out there in the world was actually on the window glass the whole time.

That, in effect, was the great change that shook western philosophy to its foundations beginning in the seventeenth century. Up to that point, most philosophers in the western world started from a set of unexamined presuppositions about what was true, and used the tools of reasoning and evidence to proceed from those presuppositions to a more or less complete account of the world. They were into what philosophers call metaphysics: reasoned inquiry into the basic principles of existence. That’s the focus of every philosophical tradition in its early years, before the confusing results of metaphysical inquiry refocus attention from “What exists?” to “How do we know what exists?” Metaphysics then gives way to epistemology: reasoned inquiry into what human beings are capable of knowing.

That refocusing happened in Greek philosophy around the fourth century BCE, in Indian philosophy around the tenth century BCE, and in Chinese philosophy a little earlier than in Greece. In each case, philosophers who had been busy constructing elegant explanations of the world on the basis of some set of unexamined cultural assumptions found themselves face to face with hard questions about the validity of those assumptions. In terms of the metaphor suggested above, they were making all kinds of statements about what they saw through the window, and then suddenly realized that the colors they’d attributed to the world were being contributed in part by the window glass and the dust on it, the vast dark shape that seemed to be moving purposefully across the sky was actually a beetle walking on the outside of the window, and so on.

The same refocusing began in the modern world with Rene Descartes, who famously attempted to start his philosophical explorations by doubting everything. That’s a good deal easier said than done, as it happens, and to a modern eye, Descartes’ writings are riddled with unexamined assumptions, but the first attempt had been made and others followed. A trio of epistemologists from the British Isles—John Locke, George Berkeley, and David Hume—rushed in where Descartes feared to tread, demonstrating that the view from the window had much more to do with the window glass than it did with the world outside. The final step in the process was taken by the German philosopher Immanuel Kant, who subjected human sensory and rational knowledge to relentless scrutiny and showed that most of what we think of as “out there,” including such apparently hard realities as space and time, are actually artifacts of the processes by which we perceive things.

Look at an object nearby: a coffee cup, let’s say. You experience the cup as something solid and real, outside yourself: seeing it, you know you can reach for it and pick it up; and to the extent that you notice the processes by which you perceive it, you experience these as wholly passive, a transparent window on an objective external reality. That’s normal, and there are good practical reasons why we usually experience the world that way, but it’s not actually what’s going on.

What’s going on is that a thin stream of visual information is flowing into your mind in the form of brief fragmentary glimpses of color and shape. Your mind then assembles these together into the mental image of the coffee cup, using your memories of that and other coffee cups, and a range of other things as well, as a template onto which the glimpses can be arranged. Arthur Schopenhauer, about whom we’ll be talking a great deal as we proceed, gave the process we’re discussing the useful label of “representation;” when you look at the coffee cup, you’re not passively seeing the cup as it exists, you’re actively representing—literally re-presenting—an image of the cup in your mind.

There are certain special situations in which you can watch representation at work. If you’ve ever woken up in an unfamiliar room at night, and had a few seconds pass before the dark unknown shapes around you finally turned into ordinary furniture, you’ve had one of those experiences. Another is provided by the kind of optical illusion that can be seen as two different things. With a little practice, you can flip from one way of seeing the illusion to another, and watch the process of representation as it happens.

What makes the realization just described so challenging is that it’s fairly easy to prove that the cup as we represent it has very little in common with the cup as it exists “out there.” You can prove this by means of science: the cup “out there,” according to the evidence collected painstakingly by physicists, consists of an intricate matrix of quantum probability fields and ripples in space-time, which our senses systematically misperceive as a solid object with a certain color, surface texture, and so on. You can also prove this, as it happens, by sheer sustained introspection—that’s how Indian philosophers got there in the age of the Upanishads—and you can prove it just as well by a sufficiently rigorous logical analysis of the basis of human knowledge, which is what Kant did.

The difficulty here, of course, is that once you’ve figured this out, you’ve basically scuttled any chance at pursuing the kind of metaphysics that’s traditional in the formative period of your philosophical tradition. Kant got this, which is why he titled the most relentless of his analyses Prolegomena to Any Future Metaphysics; what he meant by this was that anybody who wanted to try to talk about what actually exists had better be prepared to answer some extremely difficult questions first.  When philosophical traditions hit their epistemological crises, accordingly, some philosophers accept the hard limits on human knowledge, ditch the metaphysics, and look for something more useful to do—a quest that typically leads to ethics, mysticism, or both. Other philosophers double down on the metaphysics and either try to find some way around the epistemological barrier, or simply ignore it, and this latter option is the one that most Western philosophers after Kant ended up choosing.  Where that leads—well, we’ll get to that later on.

For the moment, I want to focus a little more closely on the epistemological crisis itself, because there are certain very common ways to misunderstand it. One of them I remember with a certain amount of discomfort, because I made it myself in my first published book, Paths of Wisdom. This is the sort of argument that sees the sensory organs and the nervous system as the reason for the gap between the reality out there—the “thing in itself” (Ding an sich), as Kant called it—and the representation as we experience it. It’s superficially very convincing: the eye receives light in certain patterns and turns those into a cascade of electrochemical bursts running up the optic nerve, and the visual centers in the brain then fold, spindle, and mutilate the results into the image we see.

The difficulty? When we look at light, an eye, an optic nerve, a brain, we’re not seeing things in themselves, we’re seeing another set of representations, constructed just as arbitrarily in our minds as any other representation. Nietzsche had fun with this one: “What? and others even go so far as to say that the external world is the work of our organs? But then our body, as a piece of this external world, would be the work of our organs! But then our organs themselves would be—the work of our organs!” That is to say, the body is also a representation—or, more precisely, the body as we perceive it is a representation. It has another aspect, but we’ll get to that in a future post.

Another common misunderstanding of the epistemological crisis is to think that it’s saying that your conscious mind assembles the world, and can do so in whatever way it wishes. Not so. Look at the coffee cup again. Can you, by any act of consciousness, make that coffee cup suddenly sprout wings and fly chirping around your computer desk? Of course not. (Those who disagree should be prepared to show their work.) The crucial point here is that representation is neither a conscious activity nor an arbitrary one. Much of it seems to be hardwired, and most of the rest is learned very early in life—each of us spent our first few years learning how to do it, and scientists such as Jean Piaget have chronicled in detail the processes by which children gradually learn how to assemble the world into the specific meaningful shape their culture expects them to get. 

By the time you’re an adult, you do that instantly, with no more conscious effort than you’re using right now to extract meaning from the little squiggles on your computer screen we call “letters.” Much of the learning process, in turn, involves finding meaningful correlations between the bits of sensory data and weaving those into your representations—thus you’ve learned that when you get the bits of visual data that normally assemble into a coffee cup, you can reach for it and get the bits of tactile data that normally assemble into the feeling of picking up the cup, followed by certain sensations of movement, followed by certain sensations of taste, temperature, etc. corresponding to drinking the coffee.

That’s why Kant included the “thing in itself” in his account: there really does seem to be something out there that gives rise to the data we assemble into our representations. It’s just that the window we’re looking through might as well be a funhouse mirror:  it imposes so much of itself on the data that trickles through it that it’s almost impossible to draw firm conclusions about what’s “out there” from our representations. The most we can do, most of the time, is to see what representations do the best job of allowing us to predict what the next series of fragmentary sensory images will include. That’s what science does, when its practitioners are honest with themselves about its limitations—and it’s possible to do perfectly good science on that basis, by the way.
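
Readers who want to see that criterion in miniature might picture something like the following toy sketch. It’s my own illustrative analogy, not anything Kant or Schopenhauer proposed: two rival “representations” of a stream of readings are judged purely by how well each predicts the next reading, and nothing more is claimed for the winner than that it’s the more useful approximation:

```python
# A toy analogy only: two rival "representations" of a stream of readings,
# judged solely by how well they predict what comes next.

stream = [2, 4, 6, 8, 10, 12, 14]

def climbing(history):
    """Represent the stream as 'it keeps climbing by its last step'."""
    return history[-1] + (history[-1] - history[-2])

def unchanging(history):
    """Represent the stream as 'it stays where it last was'."""
    return history[-1]

def total_error(model, readings):
    # Sum of absolute errors when predicting each reading from the ones before it.
    return sum(abs(model(readings[:i]) - readings[i]) for i in range(2, len(readings)))

for model in (climbing, unchanging):
    print(model.__name__, total_error(model, stream))

# The representation with the smaller error is kept as the more useful approximation;
# nothing here says it is the truth about whatever produces the readings.
```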

It’s possible to do quite a lot intellectually on that basis, in fact. From the golden age of ancient Greece straight through to the end of the Renaissance, a field of scholarship that’s almost completely forgotten today—topics—was an important part of a general education, the kind of thing you studied as a matter of course once you got past grammar school. Topics is the study of those things that can’t be proved logically, but are broadly accepted as more or less true, and so can be used as “places” (in Greek, topoi) on which you can ground a line of argument. The most important of these are the commonplaces (literally, the common places or topoi) that we all use all the time as a basis for our thinking and speaking; in modern terms, we can think of them as “things on which a general consensus exists.” They aren’t truths; they’re useful approximations of truths, things that have been found to work most of the time, things to be set aside only if you have good reason to do so.

Science could have been seen as a way to expand the range of useful topoi. That’s what a scientific experiment does, after all: it answers the question, “If I do this, what happens?” As the results of experiments add up, you end up with a consensus—usually an approximate consensus, because it’s all but unheard of for repetitions of any experiment to get exactly the same result every time, but a consensus nonetheless—that’s accepted by the scientific community as a useful approximation of the truth, and can be set aside only if you have good reason to do so. To a significant extent, that’s the way science is actually practiced—well, when it hasn’t been hopelessly corrupted for economic or political gain—but that’s not the social role that science has come to fill in modern industrial society.

I’ve written here several times already about the trap into which institutional science has backed itself in recent decades, with the enthusiastic assistance of the belligerent scientific materialists mentioned earlier in this post. Public figures in the scientific community routinely like to insist that the current consensus among scientists on any topic must be accepted by the lay public without question, even when scientific opinion has swung around like a weathercock in living memory, and even when unpleasantly detailed evidence of the deliberate falsification of scientific data is tolerably easy to find, especially but not only in the medical and pharmaceutical fields. That insistence isn’t wearing well; nor does it help when scientific materialists insist—as they very often do—that something can’t exist or something else can’t happen, simply because current theory doesn’t happen to provide a mechanism for it.

Too obsessive a fixation on that claim to authority, and the political and financial baggage that comes with it, could very possibly result in the widespread rejection of science across the industrial world in the decades ahead. That’s not yet set in stone, and it’s still possible that scientists who aren’t too deeply enmeshed in the existing order of things could provide a balancing voice, and help see to it that a less doctrinaire understanding of science gets a voice and a public presence.

Doing that, though, would require an attitude we might as well call epistemic modesty: the recognition that the human capacity to know has hard limits, and that the unqualified, absolute truth about most things is out of our reach. Socrates was called the wisest of the Greeks because he accepted the need for epistemic modesty, and recognized that he didn’t actually know much of anything for certain. That recognition didn’t keep him from being able to get up in the morning and go to work at his day job as a stonecutter, and it needn’t keep the rest of us from doing what we have to do as industrial civilization lurches down the trajectory toward a difficult future.

Taken seriously, though, epistemic modesty requires some serious second thoughts about certain very deeply ingrained presuppositions of the cultures of the West. Some of those second thoughts are fairly easy to reach, but one of the most challenging starts with a seemingly simple question: is there anything we experience that isn’t a representation? In the weeks ahead we’ll track that question all the way to its deeply troubling destination.

Wednesday, February 01, 2017

Perched on the Wheel of Time

There's a curious predictability in the comments I field in response to posts here that talk about the likely shape of the future. The conventional wisdom of our era insists that modern industrial society can’t possibly undergo the same life cycle of rise and fall as every other civilization in history; no, no, there’s got to be some unique future awaiting us—uniquely splendid or uniquely horrible, it doesn’t even seem to matter that much, so long as it’s unique. Since I reject that conventional wisdom, my dissent routinely fields pushback from those of my readers who embrace it.

That’s not surprising in the least, of course. What’s surprising is that the pushback doesn’t surface when the conventional wisdom seems to be producing accurate predictions, as it does now and then. Rather, it shows up like clockwork whenever the conventional wisdom fails.

The present situation is as good an example as any. The basis of my dissident views is the theory of cyclical history—the theory, first proposed in the early 18th century by the Italian historian Giambattista Vico and later refined and developed by such scholars as Oswald Spengler and Arnold Toynbee, that civilizations rise and fall in a predictable life cycle, regardless of scale or technological level. That theory’s not just a vague generalization, either; each of the major writers on the subject set out specific stages that appear in order, showed that these have occurred in all past civilizations, and made detailed, falsifiable predictions about how those stages can be expected to occur in our civilization. Have those panned out? So far, a good deal more often than not.

In the final chapters of his second volume, for example, Spengler noted that civilizations in the stage ours was about to reach always end up racked by conflicts that pit established hierarchies against upstart demagogues who rally the disaffected and transform them into a power base. Looking at the trends visible in his own time, he sketched out the most likely form those conflicts would take in the Winter phase of our civilization. Modern representative democracy, he pointed out, has no effective defenses against corruption by wealth, and so could be expected to evolve into corporate-bureaucratic plutocracies that benefit the affluent at the expense of everyone else. Those left out in the cold by these transformations, in turn, end up backing what Spengler called Caesarism—the rise of charismatic demagogues who challenge and eventually overturn the corporate-bureaucratic order.

These demagogues needn’t come from within the excluded classes, by the way. Julius Caesar, the obvious example, came from an old upper-class Roman family and parlayed his family connections into a successful political career. Watchers of the current political scene may be interested to know that Caesar during his lifetime wasn’t the imposing figure he became in retrospect; he had a high shrill voice, his morals were remarkably flexible even by Roman standards—the scurrilous gossip of his time called him “every man’s wife and every woman’s husband”—and he spent much of his career piling up huge debts and then wriggling out from under them. Yet he became the political standard-bearer for the plebeian classes, and his assassination by a conspiracy of rich Senators launched the era of civil wars that ended the rule of the old elite once and for all.

Thus those people watching the political scene last year who knew their way around Spengler, and noticed that a rich guy had suddenly broken with the corporate-bureaucratic consensus and called for changes that would benefit the excluded classes at the expense of the affluent, wouldn’t have had to wonder what was happening, or what the likely outcome would be. It was those who insisted on linear models of history—for example, the claim that the recent ascendancy of modern liberalism counted as the onward march of progress, and therefore was by definition irreversible—who found themselves flailing wildly as history took a turn they considered unthinkable.

The rise of Caesarism, by the way, has other features I haven’t mentioned. As Spengler sketches out the process, it also represents the exhaustion of ideology and its replacement by personality. Those of my readers who watched the political scene over the last few years may have noticed the way that the issues have been sidelined by sweeping claims about the supposed personal qualities of candidates. The practically content-free campaign that swept Barack Obama into the presidency in 2008—“Hope,” “Change,” and “Yes We Can” aren’t statements about issues, you know—was typical of this stage, as was the emergence of competing personality cults around the candidates in the 2016 election.  In the ordinary way of things, we can expect even more of this in elections to come, with messianic hopes clustering around competing politicians until the point of absurdity is well past. These will then implode, and the political process collapse into a raw scramble for power at any cost.

There’s plenty more in Spengler’s characterization of the politics of the Winter phase, and all of it’s well represented in today’s headlines, but the rest can be left to those of my readers interested enough to turn the pages of The Decline of the West for themselves. What I’d like to discuss here is the nature of the pushback I tend to field when I point out that yet again, predictions offered by Spengler and other students of cyclic history turned out to be correct and those who dismissed them turned out to be smoking their shorts. The responses I field are as predictable as—well, the arrival of charismatic demagogues at a certain point in the Winter phase, for example—and they reveal some useful glimpses into the value, or lack of it, of our society’s thinking about the future in this turn of the wheel.

Probably the most common response I get can best be characterized as simple incantation: that is to say, the repetition of some brief summary of the conventional wisdom, usually without a shred of evidence or argument backing it up, as though the mere utterance is enough to disprove all other ideas.   It’s a rare week when I don’t get at least one comment along these lines, and they divide up roughly evenly between those that insist that progress will inevitably triumph over all its obstacles, on the one hand, and those that insist that modern industrial civilization will inevitably crash to ruin in a sudden cataclysmic downfall on the other. I tend to think of this as a sort of futurological fundamentalism along the lines of “pop culture said it, I believe it, that settles it,” and it’s no more useful, or for that matter interesting, than fundamentalism of any other sort.

A little less common and a little more interesting are a second class of arguments, which insist that I can’t dismiss the possibility that something might pop up out of the blue to make things different this time around. As I pointed out very early on in the history of this blog, these are examples of the classic logical fallacy of argumentum ad ignorantiam, the argument from ignorance. They bring in some factor whose existence and relevance is unknown, and use that claim to insist that since the conventional wisdom can’t be disproved, it must be true.

Arguments from ignorance are astonishingly common these days. My readers may have noticed, for example, that every few years some new version of nuclear power gets trotted out as the answer to our species’ energy needs. From thorium fission plants to Bussard fusion reactors to helium-3 from the Moon, they all have one thing in common: nobody’s actually built a working example, and so it’s possible for their proponents to insist that their pet technology will lack the galaxy of technical and economic problems that have made every existing form of nuclear power uneconomical without gargantuan government subsidies. That’s an argument from ignorance: since we haven’t built one yet, it’s impossible to be absolutely certain that they’ll have the usual cascading cost overruns and the rest of it, and therefore their proponents can insist that those won’t happen this time. Prove them wrong!

More generally, it’s impressive how many people can look at the landscape of dysfunctional technology and failed promises that surrounds us today and still insist that the future won’t be like that. Most of us have learned already that upgrades on average have fewer benefits and more bugs than the programs they replace, and that products labeled “new and improved” may be new but they’re rarely improved; it’s starting to sink in that most new technologies are simply more complicated and less satisfactory ways of doing things that older technologies did at least as well at a lower cost.  Try suggesting this as a general principle, though, and I promise you that plenty of people will twist themselves mentally into pretzel shapes trying to avoid the implication that progress has passed its pull date.

Even so, there’s a very simple answer to all such arguments, though in the nature of such things it’s an answer that only speaks to those who aren’t too obsessively wedded to the conventional wisdom. None of the arguments from ignorance I’ve mentioned are new; all of them have been tested repeatedly by events, and they’ve failed. I’ve lost track of the number of times I’ve been told, for example, that the economic crisis du jour could lead to the sudden collapse of the global economy, or that the fashionable energy technology du jour could lead to a new era of abundant energy. No doubt they could, at least in theory, but the fact remains that they don’t. 

It so happens that there are good reasons why they don’t, varying from case to case, but that’s actually beside the point I want to make here. This particular version of the argument from ignorance is also an example of the fallacy the old logicians called petitio principii, better known as “begging the question.” Imagine, by way of counterexample, that someone were to post a comment saying, “Nobody knows what the future will be like, so the future you’ve predicted is as likely as any other.” That would be open to debate, since there’s some reason to think we can in fact predict some things about the future, but at least it would follow logically from the premise.  Still, I don’t think I’ve ever seen anyone make that claim. Nor have I ever seen anybody claim that since nobody knows what the future will be like, say, we can’t assume that progress is going to continue.

In practice, rather, the argument from ignorance is applied to discussions of the future in a distinctly one-sided manner. Predictions based on any point of view other than the conventional wisdom of modern popular culture are dismissed with claims that it might possibly be different this time, while predictions based on the conventional wisdom of modern popular culture are spared that treatment. That’s begging the question: covertly assuming that one side of an argument must be true unless it’s disproved, and that the other side can’t be true unless it’s proved.

Now, a case can be made that we can in fact know quite a bit about the shape of the future, at least in its broad outlines. The heart of that case, as already noted, is the fact that certain theories about the future do in fact make accurate predictions, while others don’t. This in itself shows that history isn’t random—that there’s some structure to the flow of historical events that can be figured out by learning from the past, and that similar causes at work in similar situations will have similar outcomes. Apply that reasoning to any other set of phenomena, and you’ve got the ordinary, uncontroversial basis for the sciences. It’s only when it’s applied to the future that people balk, because it doesn’t promise them the kind of future they want.

The argument by incantation and the argument from ignorance make up most of the pushback I get. I’m pleased to say, though, that every so often I get an argument that’s considerably more original than these. One of those came in last week—tip of the archdruidical hat to DoubtingThomas—and it’s interesting enough that it deserves a detailed discussion.

DoubtingThomas began with the standard argument from ignorance, claiming that it’s always possible that something might possibly happen to disrupt the cyclic patterns of history in any given case, and therefore the cyclic theory should be dismissed no matter how many accurate predictions it scored. As we’ve already seen, this is handwaving, but let’s move on.  He went on from there to argue that much of the shape of history is defined by the actions of unique individuals such as Isaac Newton, whose work sends the world careening along entirely new and unpredicted paths. Such individuals have appeared over and over again in history, he pointed out, and was kind enough to suggest that my activities here on The Archdruid Report were, in a small way, another example of the influence of an individual on history. Given that reality, he insisted, a theory of history that didn’t take the actions of unique individuals into account was invalid.

Fair enough; let’s consider that argument. Does the cyclic theory of history fail to take the actions of unique individuals into account?

Here again, Oswald Spengler’s The Decline of the West is the go-to source, because he dealt with the sciences and arts to a much greater extent than other researchers into historical cycles. What he shows, with a wealth of examples drawn from the rise and fall of many different civilizations, is that the phenomenon DoubtingThomas describes is a predictable part of the cycles of history. In every generation, in effect, a certain number of geniuses will be born, but their upbringing, the problems that confront them, and the resources they will have available to solve those problems, are not theirs to choose. All these things are produced by the labors of other creative minds of the past and present, and are profoundly influenced by the cycles of history.

Let’s take Isaac Newton as an example. He happened to be born just as the scientific revolution was beginning to hit its stride, but before it had found its paradigm, the set of accomplishments on which all future scientific efforts would be directly or indirectly modeled. His impressive mathematical and scientific gifts thus fastened onto the biggest unsolved problem of the time—the relationship between the physics of moving bodies sketched out by Galileo and the laws of planetary motion discovered by Kepler—and resulted in the Principia Mathematica, which became the paradigm for the next three hundred years or so of scientific endeavor.

Had he been born a hundred years earlier, none of those preparations would have been in place, and the Principia Mathematica wouldn’t have been possible. Given the different cultural attitudes of the century before Newton’s time, in fact, he would almost certainly have become a theologian rather than a mathematician and physicist—as it was, he spent much of his career engaged in theology, a detail usually left out by the more hagiographical of his biographers—and he would be remembered today only by students of theological history. Had he been born a century later, equally, some other great scientific achievement would have provided the paradigm for emerging science—my guess is that it would have been Edmund Halley’s successful prediction of the return of the comet that bears his name—and Newton would have had the same sort of reputation that Carl Friedrich Gauss has today: famous in his field, sure, but a household name? Not a chance.

What makes the point even more precise is that every other civilization from which adequate records survive had its own paradigmatic thinker, the figure whose achievements provided a model for the dawning age of reason and for whatever form of rational thought became that age’s principal cultural expression. In the classical world, for example, it was Pythagoras, who invented the word “philosophy” and whose mathematical discoveries gave classical rationalism its central theme, the idea of an ideal mathematical order to which the hurly-burly of the world of appearances must somehow be reduced. (Like Newton, by the way, Pythagoras was more than half a theologian; it’s a common feature of figures who fill that role.)

To take the same argument to a far more modest level, what about DoubtingThomas’ claim that The Archdruid Report represents the act of a unique individual influencing the course of history? Here again, a glance at history shows otherwise. I’m a figure of an easily recognizable type, which shows up reliably as each civilization’s Age of Reason wanes and it begins moving toward what Spengler called the Second Religiosity, the resurgence of religion that inevitably happens in the wake of rationalism’s failure to deliver on its promises. At such times you get intellectuals who can communicate fluently on both sides of the chasm between rationalism and religion, and who put together syntheses of various kinds that reframe the legacies of the Age of Reason so that they can be taken up by emergent religious movements and preserved for the future.

In the classical world, for example, you got Iamblichus of Chalcis, who stepped into the gap between Greek philosophical rationalism and the burgeoning Second Religiosity of late classical times, and figured out how to make philosophy, logic, and mathematics appealing to the increasingly religious temper of his time. He was one of many such figures, and it was largely because of their efforts that the religious traditions that ended up taking over the classical world—Christianity to the north of the Mediterranean, and Islam to the south—got over their early anti-intellectual streak so readily and ended up preserving so much of the intellectual heritage of the past.

That sort of thing is a worthwhile task, and if I can contribute to it I’ll consider this life well spent. That said, there’s nothing unique about it. What’s more, it’s only possible and meaningful because I happen to be perched on this particular arc of the wheel of time, when our civilization’s Age of Reason is visibly crumbling and the Second Religiosity is only beginning to build up a head of steam. A century earlier or a century later, I’d have faced some different tasks.

All of this presupposes a relationship between the individual and human society that fits very poorly with the unthinking prejudices of our time. That’s something that Spengler grappled with in his book, too;  it’s going to take a long sojourn in some very unfamiliar realms of thought to make sense of what he had to say, but that can’t be helped.

We really are going to have to talk about philosophy, aren’t we? We’ll begin that stunningly unfashionable discussion next week.