Wednesday, July 26, 2006

How Not To Save Science

Over the last few years, more and more people in the scientific community have realized the scale of the threat to industrial civilization posed by peak oil, other forms of resource depletion, global warming, and the rest of the rising spiral of crises we face nowadays. One common response has been to look for ways to save today's science and technology. In the abstract, at least, it's a good idea. I'm not at all sure the world of the future really needs to know how to build nuclear warheads or synthesize DDT, but there's plenty of modern science that's well worth saving.

So far, though, nearly all the discussion of this useful idea has centered on one specific plan for making it happen. The proposal is that a panel of scientific experts be commissioned to write a book outlining everything modern science has learned about the universe, and that the book then be mass-produced on durable paper, so that some copies make it through the decline and fall of industrial society and reach the hands of future generations. James Lovelock's 1998 essay "A Book for All Seasons" describes this proposed book as "the scientific equivalent of the Bible." That essay played a key role in launching the discussions just mentioned, but Lovelock isn't the only important figure who has backed the plan.

It's hard to think of better proof that most scientists don't learn enough about the history of their own disciplines -- or better evidence that they need to. A book of the sort Lovelock and others have proposed would be a very, very bad idea, and that isn't simply a matter of opinion. The experiment has been tried before, and the results were, to put it mildly, not good.

In the twilight years of Roman civilization in Western Europe, as the old institutions of classical learning were giving way to the Dark Ages, Isidore of Seville (560-636) -- a Christian bishop and theologian in Spain -- compiled a book along the same lines as the one being discussed today. Titled Etymologiae (Etymologies), it was one of the world's first encyclopedias, and by the standards of the time it was a huge success. The single most popular general reference work in medieval libraries, it was still so widely respected in the Renaissance that it went through ten printed editions between 1470 and 1530.

During the Dark Ages, the Etymologiae served a useful purpose as a compendium of general knowledge. Over the longer term, though, its effects were far less positive. Because Isidore's book quickly came to be seen as the be-all and end-all of learning, other books -- many of which would have been much more useful to the renaissance of learning that spread through Europe after the turn of the millennium -- were allowed to decay, or had their parchment pages recycled to produce more copies of the Etymologiae.

Worse, the reverence given to Isidore's work lent a great deal of momentum to the medieval belief that the best way to learn about nature was to look something up in an old book. The same reverence later attached itself to the works of Aristotle, once they were translated from Arabic in the 12th century, and hamstrung natural science for centuries. It took the social convulsions of the 16th and 17th centuries to break Aristotle's iron grip on scientific thought in the Western world and make it acceptable for people to learn from nature directly.

This is exactly what Lovelock's "scientific equivalent of the Bible" would do. Like Isidore's encyclopedia, a modern compendium of science would inevitably contain inaccurate information -- today's scientists are no more omniscient than those of 50 years ago, when continental drift was still dismissed as crackpot pseudoscience, or 110 years ago, when Einstein and the quantum physicists had not yet shown that the absolute space and uniform time of Newtonian cosmology were as imaginary as Oz. Like Isidore's encyclopedia, it would teach people that the way to learn about nature is to look facts up in a book, rather than to pay attention to what is actually happening in front of their noses. And in an age with limited resources for the preservation of books, it might well ensure that copies of a book of scientific doctrines were preserved at the expense of, say, the last remaining copy of Newton's Principia, Darwin's Origin of Species, or some other scientific classic that would offer much more to the future.

A book of scientific doctrines would also ensure that the most important dimension of science itself would be lost. Science, it's crucial to remember, is not a set of doctrines about the universe. At its core, science is a system of practical logic: a set of working rules that allow hypotheses to be tested against experience, so that the ones that fail the test can be discarded. That set of rules isn't flawless, but it's arguably the best method of investigating nature our species has yet invented, and it's worth far more to the future than any compendium of currently accepted scientific opinions.
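
To make that distinction concrete, here is a minimal sketch of the test-and-discard loop at the heart of the method. The Python function and the toy Galileo example are invented for illustration; nothing here comes from Lovelock's proposal.

```python
# A minimal sketch of science as a procedure rather than a doctrine:
# hypotheses survive only as long as they keep matching observation.
# The toy hypotheses and data below are invented for illustration.

def sift_hypotheses(hypotheses, observations):
    """Keep only the hypotheses consistent with every observation.

    `hypotheses` maps a name to a prediction function; `observations`
    is a list of (conditions, observed_result) pairs.
    """
    surviving = {}
    for name, predict in hypotheses.items():
        if all(predict(conditions) == result
               for conditions, result in observations):
            surviving[name] = predict  # not proved -- merely not yet refuted
    return surviving

# Two rival guesses about falling objects, and one (real) experiment.
hypotheses = {
    "heavier falls faster": lambda c: "heavy lands first" if c["masses differ"] else "land together",
    "mass is irrelevant": lambda c: "land together",
}
observations = [({"masses differ": True}, "land together")]

print(list(sift_hypotheses(hypotheses, observations)))
# ['mass is irrelevant'] -- the other guess fails the test and is discarded
```

Note that the loop never ends: every surviving hypothesis stays on probation, waiting for the next observation. That open-endedness is exactly what a fixed book of doctrines cannot carry into the future.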

In his essay, Lovelock imagines a survivor in some postcollapse society faced with a cholera epidemic, and equipped with nothing but a book on aromatherapy. It's a compelling image. What, though, if the survivor has to deal with a new disease -- one that hasn't yet jumped to human beings from its original animal host, let's say? A textbook focused on existing knowledge circa 2006 would offer little help. Nature is constantly changing. Science as a method of inquiry can keep track of those changes; science as a set of doctrines can't.

A book that might actually succeed in saving science for the future would be a very different book from the one Lovelock et al. have envisioned. Rather than projecting the omniscience that a phrase like "the scientific equivalent of the Bible" suggests, it would present the scientific method as an open-ended way of questioning nature, and provide enough practical tips and examples to help readers learn how to create their own experiments and ask their own questions. It would treat its readers in the present and future alike as participants in the process of science, not simply consumers of its products. The role of participant is not one that many scientists today are comfortable seeing conferred on laypeople, but if today's science is going to be saved for the future, getting past that discomfort is one of the first and least negotiable requirements.

Wednesday, July 12, 2006

Feeding the Deindustrial Future

Ask people today what they think future generations will consider the 20th century’s most important legacy and you’ll get any number of answers: the Apollo moon landings, computer technology, the discovery of the genetic code, or what have you. Past ages, though, have been notoriously bad judges of the relative importance of the legacies they left to the future. In the Middle Ages, scholastic theology was thought to be the crowning achievement of the human mind, while the Gothic cathedrals, the spectacular technological advances chronicled by Jean Gimpel in The Medieval Machine, and the English feudal laws that evolved into parliamentary government and trial by jury would have been considered minor matters, if anybody thought of them at all. Today nobody outside the University of Chicago and a few conservative Catholic colleges pays the least attention to scholasticism. Meanwhile Gothic architecture still shapes how we think of space and light; a good half of the machinery that surrounds us every day runs on principles worked out by the inventors of the clock and the windmill; and the political and legal systems of a majority of the world’s nations – including ours – descend from that odd Saxon tribal custom, borrowed by Norman kings for their own convenience, of calling together a group of yeomen to discuss new laws or decide who committed a crime.

When it comes to the long-term value of a culture’s accomplishments, in other words, the future has the deciding vote. I don’t pretend to know for certain how that vote will be cast; you don’t get privileged access to knowledge about the future, I’m sorry to say, by being an archdruid. Still, I’m willing to risk a guess. A thousand, or two thousand, or ten thousand years from now, when people look back through the mists of time to the 20th century and talk about its achievements, the top of the list won’t be moon landings, computers, or the double helix, much less the political and cultural ephemera that occupy so much attention just now. If I’m right, it will be something much humbler – and much more important.

In the first decades of the 20th century, an English agronomist named Albert Howard, working in India, began experimenting with farming methods that focused on the health of the soil and its natural cycles. Much of his inspiration came from traditional farming practices in India, China, and Japan that had maintained soil fertility for centuries or millennia. Howard fused their ideas with Western scientific agronomy and the results of his own experiments to create the first modern organic agriculture. Later researchers – notably Alan Chadwick in England and John Jeavons in America – combined Howard’s discoveries with the methods of intensive gardening that had evolved in France not long before Howard began his work, and with the biodynamic system developed in the 1920s by the Austrian philosopher Rudolf Steiner, to produce the current state of the art in organic intensive farming.

The result of their work is at least potentially a revolution in humanity’s relationship to the land and the biosphere, as dramatic as the original agricultural revolution itself. To begin with, the new organic methods are astonishingly productive. Using them, it’s possible to grow a spare but adequate vegetarian diet for one person on 1000 square feet of soil. For those with math phobia, that’s a patch of dirt 20’ by 50’, about the size of a small urban backyard, roughly 1/48 of a football field’s playing area, or a bit less than 1/43 of an acre – not much, in other words. (If you find this hard to believe – I certainly did, before I did the research and started using these methods in my own gardens – the details and documentation are in David Duhon, One Circle (Willits, CA: Ecology Action, 1985) and John Freeman’s Survival Gardening (Rock Hill, SC: John’s Press, 1983), among other sources.) These yields require no fossil fuels, no chemical fertilizers or pesticides, and no soil additives other than compost made from vegetable waste and human manure. Hand tools powered by human muscle are the only technological requirement – and yet these methods produce yields per acre far beyond what tractors and pesticides can manage.
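
For anyone who wants to check the arithmetic, a few lines of Python reproduce the conversions. The 1000-square-foot figure is the one under discussion; the acre and the football field’s playing area are standard definitions.

```python
# Check the area conversions for a 1000-square-foot garden plot.
# The plot size comes from the yield claim above; the acre and the
# football field's playing area are standard definitions.

plot_sqft = 20 * 50        # a 20' x 50' patch = 1000 sq ft
acre_sqft = 43_560         # one acre, by definition
field_sqft = 300 * 160     # playing area of a football field, 100 yd x 53 1/3 yd

print(acre_sqft / plot_sqft)    # 43.56 -> a bit less than 1/43 of an acre
print(field_sqft / plot_sqft)   # 48.0  -> about 1/48 of a football field
```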

What makes this even more astonishing is that these yields are sustainable over the very long term. The core concept of organic agriculture is that healthy soil makes a healthy garden. Instead of treating soil like a sponge that needs to be filled with chemical nutrients, the organic method sees it as an ecosystem that will provide everything plants need so long as it’s kept in balance. The insect pests and plant diseases that give conventional farmers so much trouble can be managed easily by fine-tuning the soil ecosystem, changing the timing and mix of plants, and introducing natural predators – name any organism you need to get rid of, and there’s something that wants to eat it for you. Where conventional farming depletes the soil, requiring heavier applications of fertilizer and pesticides every season, organic methods produce improved soil, increased yields, and decreased pest problems year after year.

The third factor that makes today’s organic methods revolutionary is that they’re portable. Many traditional cultures around the world have worked out farming methods that are sustainable over the long term, but nearly all of those depend on specific environmental conditions and plant varieties. The growing methods practiced in the New Guinea highlands, for example, are brilliantly adapted to their native ecosystem and produce impressive yields, but they only work when you’ve got the specific mix of food crops, weather and soil conditions, and ecological factors found where they evolved. Intensive organic farming, by contrast, was developed simultaneously in the very different ecosystems of England and California, and has been put to use successfully in temperate, semiarid, and semitropical environments around the world. Like everything natural, it has its limits, but some 80% of the world’s population lives in areas where it can be practiced.

So why isn’t this front page news? There are plenty of reasons. To begin with, organic intensive methods aren’t suited to cash crops – you have to grow a mix of different, mutually supporting plants, rather than a single crop that can be sold in bulk to wholesalers – and the diet you can get from 1000 square feet of organic garden is high on sweet potatoes and soybeans but low on the sort of food Americans prefer to eat. More broadly, a society that measures all human values in terms of the abstract social game called money is very poorly equipped to make room for a means of subsistence that fills human needs but doesn’t do well at generating profits. Still, as the fictive economy winds down in the aftermath of the industrial age and modern chemical agriculture has to contend with the loss of its fossil fuel resource base, organic farming is one of the few ways we’ll be able to keep people fed. If enough people learn how to do it and start practicing it now, while there’s time to go through the learning curve, that is.

Wednesday, July 05, 2006

Climbing Down the Ladder

Over the last month or so I’ve talked about some of the mental barriers that keep people from thinking clearly about the predicament of industrial society, and in the process sketched out, at least by implication, the shape of that predicament and its likely consequences. At this point it may be useful to shift the conversation a bit, from the obstacles we face to the potentials for constructive action that are still left to us. It’s one thing to announce that the wolf is at the door, but quite another to propose some way to deal with that fact.

Now it’s sometimes true that the only way to deal with a hard fact is the even harder path of acceptance, and in at least one sense that describes the situation we’re in right now. The current predicament can’t be dealt with at all if “dealing with it” means finding a way to prevent the end of our civilization and the coming of the deindustrial dark age that will follow it. That option went out the window around 1980, when the major industrial nations turned their collective backs on a decade of promising movements toward sustainability. At this point we’ve backed ourselves into the trap predicted by The Limits to Growth back in 1972; we no longer have the resources to simultaneously meet our present needs and provide for our future. So far the future has gotten the short end of the stick – a choice that guarantees that not too far down the road, when the future becomes the present, we will no longer have the resources to do much of anything at all. That’s when catabolic collapse begins in a big way, and industrial society starts consuming itself.

History reminds us, though, that this isn’t a quick process, or a linear one. Civilizations fall in a stepwise fashion, with periods of crisis and contraction followed by periods of stability and partial recovery. The theory of catabolic collapse explains this as, basically, a matter of supply and demand; each crisis brings about a sharp decrease in the amount of capital (physical, human, social, and intellectual) that has to be maintained, and this frees up enough resources to allow effective crisis management, at least for a time. This same sequence is likely to repeat itself many times over the next few centuries, as industrial civilization slides down the slope of its own decline and fall.
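
A toy simulation shows how that supply-and-demand logic produces steps rather than a smooth slide. Every number in the sketch below is invented purely for illustration; only the qualitative pattern matters.

```python
# Toy model of catabolic collapse (all parameters invented): capital
# accumulates while the resource base covers its upkeep; once upkeep
# outruns the resource base, a crisis writes off capital wholesale,
# easing the maintenance burden and buying a period of partial recovery.

def simulate(years=300, capital=100.0, resources=120.0, depletion=0.25):
    history = []
    for _ in range(years):
        resources -= depletion           # the resource base slowly shrinks
        maintenance = 0.1 * capital      # upkeep scales with what's been built
        if maintenance <= resources:
            capital += 0.5 * (resources - maintenance)  # surplus funds growth
        else:
            capital *= 0.7               # crisis: 30% of the capital is abandoned
        history.append(capital)
    return history

# Sampled every 25 years, the trajectory rises, then declines in
# crisis-and-recovery steps rather than a straight line.
print([round(c) for c in simulate()[::25]])
```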

The stepwise decline of industrial civilization (or any other) can be understood in another way, though, and this points toward possibilities for constructive action that can still be pursued, even this late in the game. Civilizations in full flower typically evolve complex, resource-intensive ways of doing things, because they can, and because the social benefits of extravagance outweigh the resource costs. The infrastructure that supports this extravagance contains substantial resources that, in a less extravagant time, can be salvaged and put to more prudent uses. As whatever passes for high technology drops out of use, the resources once locked up in high-tech equipment become raw material for simpler and more resource-efficient technologies. People realize that you don’t need a pyramid to bury a king, or Roman baths to wash your skin, and pretty soon the stone blocks of the pyramid and the plumbing of the Roman baths get salvaged and put to more immediately useful purposes.

This same process bids fair to play a massive role in the twilight of the industrial age. Some people in today’s neoprimitivist movement have claimed that as industrial civilization winds down, the survivors will slide all the way back to the stone age, because the last few centuries of feverish mining have stripped the planet of metal ores that can be processed by low-tech means. If we had to rely on ores in the ground, this might be true, but we don’t; the richest sources of metal in the world today are aboveground. The average skyscraper contains hundreds of tons of iron, steel, aluminum, and copper, ready to be cut apart by salvage crews, hauled away on oxcarts, and turned into knives, hoes, plowshares, and other useful things. The millions of autos currently cluttering America’s roads, used car lots, and junkyards are another rich source of raw materials, and the list goes on.

In this way, the sheer material extravagance of the industrial age could provide a vital cushion of resources as we move down the curve of decline. The most important limiting factor here is the practical knowledge necessary to turn skyscrapers, cars, and the other detritus of the industrial system into useful goods for the deindustrial world. Not many people have that knowledge just now. Our educational system, if America’s dysfunctional schooling industry deserves that name, shed the old trade schools and their practical training programs decades ago. At a time when the creation and exchange of actual goods and services has become an economic sideline, while the manipulation of baroque pyramids of unpayable IOUs forms the core of the economy, this comes as no surprise; still, it’s a situation that has to change if anything is to be salvaged once the first major wave of crises hits.

Technological progress has a curious feature that bears on this situation. One of the children’s books I read while growing up used the metaphor of a ladder for progress: this rung is a chariot, the next a stagecoach, the one after that a locomotive, and up at the top a car. The problem with this metaphor is that it makes it look as though the earlier rungs are still there, so that if the top one starts to crack, you can step down to the next one, or the one below that. In most fields of technological progress, that isn’t even remotely true. How many people nowadays, faced with a series of complicated math problems and denied a computer, could whip out a slide rule or sit down with a table of logarithms and solve them? These days even elementary school students do their arithmetic on pocket calculators.
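
For readers who have never seen the older rung in action, the sketch below shows what a table of logarithms actually did: it turned multiplication into addition. The code builds its own crude four-figure table, but the procedure is the same one once done by hand with a printed book.

```python
# How a table of logarithms replaces multiplication with addition.
# This builds a crude four-figure table in code; a printed table plus
# pencil and paper performs exactly the same steps.
import math

# Mantissas for 1.00 through 9.99, rounded to four figures.
log_table = {x / 100: round(math.log10(x / 100), 4) for x in range(100, 1000)}

def table_multiply(a, b):
    """Multiply by looking up two logs, adding them, and finding the
    table entry whose log is closest to the sum. Handles 1.00-9.99;
    for other magnitudes you add the characteristic by hand, just as
    with a paper table."""
    total = log_table[round(a, 2)] + log_table[round(b, 2)]
    return min(log_table, key=lambda x: abs(log_table[x] - total))

print(table_multiply(2.34, 3.1))   # 7.25 -- the exact answer is 7.254
```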

The same thing is true in nearly any other branch of technology you care to name. Each new generation of technology is more complex than the one before it, more resource-intensive to build and maintain, and more interdependent with other technologies. As each new generation of technology is adopted, the one before it becomes “obsolete” – even if the older technology does the job just as effectively as the newer one – and is scrapped. Twenty years later not even retired engineers still remember how the old technology worked, much less how to build it again from scratch.

In effect, as we’ve climbed from step to step on the ladder of progress, we’ve kicked out each rung under us as we’ve climbed to the next. That’s fine so long as the ladder keeps on going up forever. If you reach the top of the ladder unexpectedly, though, you’re likely to end up teetering on a single rung with no other means of support – and if you can’t stay on that one rung, for one reason or another, it’s a long way down. That’s the situation we’re in right now. As the end of cheap fossil fuels pushes us into the spiral of rising costs and dwindling resources The Limits to Growth predicted more than thirty years ago, the rung of high-tech, high-cost, and high-maintenance electronic technology is cracking beneath us.

In the last few years, fortunately, people have begun to replace a few of the lower rungs. I know working farmers who use draft animals or their own muscles instead of tractors, and fertilize the soil with compost and manure instead of petroleum-based agricultural chemicals, blacksmiths who make extraordinary things using only hand tools, and home brewers who turn out excellent beer with ordinary kitchen gear and raw materials their great-great-great-grandparents knew well. The lowest rung of all, making stone tools by flint knapping, has had a modest renaissance of its own in recent years.

This is only a beginning; there’s a huge amount still to be done. One of the most hopeful features of this side of our predicament, though, is that the work can be done successfully by individuals acting on their own. This is no accident. It was precisely the technologies that can be built, maintained, and used by individuals that formed the mainstay of the economy in the days before economic integration spun out of control. These same technologies, if they’re recovered while time and resources still permit, can make use of the abundant salvage of industrial civilization, help cushion the descent into the deindustrial future, and provide part of the basis for the sustainable cultures that will rise from the ruins of our age.