Wednesday, May 26, 2010

The World After Abundance

It has been nearly four decades now since the limits to industrial civilization’s trajectory of limitless material growth on a limited planet first became clearly visible on the horizon of our future. Over that time, a remarkable paradox has unfolded. The closer we get to the limits to growth, the more those limits impact our daily lives, and the more clearly our current trajectory points toward the brick wall of a difficult future, the less most people in the industrial world seem to be able to imagine any alternative to driving the existing order of things ever onward until the wheels fall off.

This is as true in many corners of the activist community as it is in the most unregenerate of corporate boardrooms. For most of today’s environmentalists, for example, renewable energy isn’t something that people ought to produce for themselves, unless they happen to be wealthy enough to afford the rooftop PV systems that have become the latest status symbol in suburban neighborhoods on either coast. It’s something that utilities and the government are supposed to produce as fast as possible, so that Americans can keep on using three times as much energy per capita as the average European and twenty times as much as the average Chinese.

Of course there are alternatives. In the energy crisis of the Seventies, relatively simple conservation and efficiency measures, combined with lifestyle changes, sent world petroleum consumption down by 15% in a single decade and caused comparable drops in other energy sources across the industrial world. Most of these measures went out the window in the final binge of the age of cheap oil that followed, so there’s plenty of low hanging fruit to pluck. That same era saw a great many thoughtful people envision ways that people could lead relatively comfortable and humane lives while consuming a great deal less energy and the products of energy than people in the industrial world do today.

It can be a troubling experience to turn the pages of Rainbook or The Book of the New Alchemists, to name only two of the better products of that mostly forgotten era, and compare the sweeping view of future possibilities that undergirded their approach to a future of energy and material shortages with the cramped imaginations of the present. It’s even more troubling to notice that you can pick up yellowing copies of most of these books for a couple of dollars each in the used book trade, at a time when their practical advice is more relevant than ever, and their prophecies of what would happen if the road to sustainability was not taken are looking more prescient by the day.

The irony, and it’s a rich one, is that our collective refusal to follow the lead of those who urged us to learn how to get by with less has not spared us the necessity of doing exactly that. That’s the problem, ultimately, with driving headlong at a brick wall; you can stop by standing on the brake pedal, or you can stop by hitting the wall, but either way, you’re going to stop.

One way to make sense of the collision between the brittle front end of industrial civilization and the hard surface of nature’s brick wall is to compare the spring of 2010 with the autumn of 2007. Those two seasons had an interesting detail in common. In both cases, the price of oil passed $80 a barrel after a prolonged period of price increases, and in both cases, this was followed by a massive debt crisis. After the autumn of 2007, largely driven by speculation in the futures market, the price of oil kept on zooming upwards, peaking just south of $150 a barrel in mid-2008 before crashing back to earth; so far, at least, there’s no sign of a spike of that sort happening this time, although this is mostly because speculators are focused on other assets these days.

In 2007, though, the debt crisis also resulted in a dramatic economic downturn, and just now our chances of dodging the same thing this time around do not look good. Here in the US, most measures of general economic activity are faltering where they aren’t plunging – the sole exceptions are those temporarily propped up by an unparalleled explosion of government debt – and unemployment has become so deeply entrenched that what to do about the very large number of Americans who have exhausted the 99 weeks of unemployment benefits current law allows them is becoming a significant political issue. Even the illegal economy is taking a massive hit; a recent NPR story noted that the price of marijuana has dropped so sharply that northern California, where it’s a huge cash crop, is seeing panic selling and sharp economic contraction.

What’s going on here is precisely what The Limits to Growth warned about in 1972: the costs of continued growth have risen faster than growth itself, and are reaching a level that is forcing the economy to its knees. By “costs,” of course, the authors of The Limits to Growth weren’t talking about money, and neither am I. The costs that matter are energy, resources, and labor; it takes a great deal more of all of these to extract oil from deepwater wells in the Gulf of Mexico or oil sands in Alberta, say, than it used to take to get it from Pennsylvania or Texas, and since offshore drilling and oil sands make up an increasingly large share of what we’ve got left – those wells in Pennsylvania and Texas have been pumped dry, or nearly so – these real, nonmonetary costs have climbed steadily.

The price of oil in dollars functions here as a workable proxy measure for the real cost of oil production in energy, resources, and materials. The evidence of the last few years suggests that when the price of oil passes $80 a barrel, that’s a sign that the real costs have reached a level high enough that the rest of the economy begins to crack under the strain. Since astronomical levels of debt have become standard practice all through today’s global economy, the ability of marginal borrowers to service their debt is where the cracks showed up first. In the fall of 2007, many of those marginal borrowers were homeowners in the US and UK; this spring, they include entire nations.

What all this implies, in a single phrase, is that the age of abundance is over. The period from 1945 to 2005 when almost unimaginable amounts of cheap petroleum sloshed through the economies of the world’s industrial nations, and transformed life in those nations almost beyond recognition, still shapes most of our thinking and nearly all of our expectations. Not one significant policy maker or mass media pundit in the industrial world has begun to talk about the impact of the end of the age of abundance; it’s an open question if any of them have grasped how fundamental the changes will be as the new age of post-abundance economics begins to clamp down.

Most ordinary people in the industrial world, for their part, are sleepwalking through one of history’s major transitions. The issues that concern them are still defined entirely by the calculus of abundance. Most Americans these days, for example, worry about managing a comfortable retirement, paying for increasingly expensive medical care, providing their children with a college education and whatever amenities they consider important. It has not yet entered their darkest dreams that they need to worry about access to such basic necessities as food, clothing and shelter, the fate of local economies and communities shredded by decades of malign neglect, and the rise of serious threats to the survival of constitutional government and the rule of law.

Even among those who warn that today’s Great Recession could bottom out at a level equal to that reached in the Great Depression, very few have grappled with the consequences of a near-term future in which millions of Americans are living in shantytowns and struggling to find enough to eat every single day. To paraphrase Sinclair Lewis, that did happen here, and it did so at a time when the United States was a net exporter of everything you can think of, and the world’s largest producer and exporter of petroleum to boot. The same scale of economic collapse in a nation that exports very little besides unpayable IOUs, and is the world’s largest consumer and importer of petroleum, could all too easily have results much closer to those of Central Europe in the first half of the 20th century, for example: that is, near-universal impoverishment, food shortages, epidemics, civil wars, and outbreaks of vicious ethnic cleansing, bracketed by two massive wars that both had body counts in the tens of millions.

Now you’ll notice that this latter does not equate to the total collapse into a Cormac McCarthy future that so many people like to fantasize about these days. I’ve spent years wondering why it is that so many people seem unable to conceive of any future other than business as usual, on the one hand, and extreme doomer porn on the other. Whatever the motives that drive this curious fixation, though, I’ve become convinced that it results in a nearly complete blindness to the very real risks the future is more likely to hold for us. It makes a useful exercise to take current notions about preparing for the future in the survivalist scene, and ask yourself how many of them would have turned out to be useful over the decade or two ahead if someone had pursued exactly those strategies in Poland or Slovakia, let’s say, in the years right before 1914.

Measure the gap between the real and terrible events of that period, on the one hand, and the fantasies of infinite progress or apocalyptic collapse that so often pass for realistic images of our future, on the other, and you have some sense of the gap that has to be crossed in order to make sense of the world after abundance. One way or another, we will cross that gap; the question is whether any significant number of us will do so in advance, and have time to take constructive actions in response, or whether we’ll all do so purely in retrospect, thinking ruefully of the dollars and hours that went into preparing for an imaginary future while the real one was breathing down our necks.

I’ve talked at quite some length in these essays about the kinds of preparations that will likely help individuals, families, and communities deal with the future of resource shortages, economic implosion, political breakdown, and potential civil war that the missed opportunities and purblind decisions of the last thirty years have made agonizingly likely here in the United States and, with an infinity of local variations, elsewhere in the industrial world. Those points remain crucial; it still makes a great deal of sense to start growing some of your own food, to radically downscale your dependence on complex technological systems, to reduce your energy consumption as far as possible, to free up at least one family member from the money economy for full-time work in the domestic economy, and so on.

Still, there’s another dimension to all this, and it has to be mentioned, though it’s certain to raise hackles. For the last three centuries, and especially for the last half century or so, it’s become increasingly common to define a good life as one provided with the largest possible selection of material goods and services. That definition has become so completely hardwired into our modern ways of thinking that it can be very hard to see past it. Of course there are certain very basic material needs without which a good life is impossible, but those are a good deal fewer and simpler than contemporary attitudes assume, and once those are provided, material abundance becomes a much more ambivalent blessing than we like to think.

In a very real sense, this way of thinking mirrors the old joke about the small boy with a hammer who thinks everything is a nail. In an age of unparalleled material abundance, the easy solution for any problem or predicament was to throw material wealth at it. That did solve some problems, but it arguably worsened others, and left the basic predicaments of human existence untouched. Did it really benefit anyone to spend trillions of dollars and the talents of some of our civilization’s brightest minds creating high-end medical treatments to keep the very sick alive and miserable for a few extra months of life, for example, so that we could pretend to ourselves that we had evaded the basic human predicament of the inevitability of death?

Whatever the answer, the end of the age of abundance draws a line under that experiment. Within not too many years, it’s safe to predict, only the relatively rich will have the dubious privilege of spending the last months of their lives hooked up to complicated life support equipment. The rest of us will end our lives the way our great-grandparents did: at home, more often than not, with family members or maybe a nurse to provide palliative care while our bodies do what they were born to do and shut down. Within not too many years, more broadly, only a very few people anywhere in the world will have the option of trying to escape the core uncertainties and challenges of human existence by chasing round after round of consumer goodies; the rest of us will count ourselves lucky to have our basic material needs securely provided for, and will have to deal with fundamental questions of meaning and value in some less blatantly meretricious way.

Some of us, in the process, may catch on to the subtle lesson woven into this hard necessity. It’s worth noting that while there’s been plenty of talk about the monasteries of the Dark Ages among people who are aware of the impending decline and fall of our civilization, next to none of it has discussed, much less dealt with, the secret behind the success of monasticism: the deliberate acceptance of extreme material poverty. Quite the contrary; all the plans for lifeboat ecovillages I’ve encountered so far, at least, aim at preserving some semblance of a middle class lifestyle into the indefinite future. That choice puts these projects in the same category as the lavish villas in which the wealthy inhabitants of Roman Britain hoped to ride out their own trajectory of decline and fall: a category mostly notable for its long history of total failure.

The European Christian monasteries that preserved Roman culture through the Dark Ages did not offer anyone a middle class lifestyle by the standards of their own time, much less those of ours. Neither did the Buddhist monasteries that preserved Heian culture through the Sengoku Jidai, Japan’s bitter age of wars, or the Buddhist and Taoist monasteries that preserved classical Chinese culture through a good half dozen cycles of collapse. Monasteries in all these cases were places people went to be very, very poor. That was the secret of their achievements, because when you reduce your material needs to the absolute minimum, the energy you don’t need to spend maintaining your standard of living can be put to work doing something more useful.

Now it’s probably too much to hope for that some similar movement might spring into being here and now; we’re a couple of centuries too soon for that. The great age of Christian monasticism in the West didn’t begin until the sixth century CE, by which time the Roman economy of abundance had been gone for so long that nobody even pretended that material wealth was an answer to the human condition. Still, the monastic revolution kickstarted by Benedict of Nursia drew on a long history of Christian monastic ventures; those unfolded in turn from the first tentative communal hermitages of early Christian Egypt; and all these projects, though this is not often mentioned, took part of their inspiration and a good deal of their ethos from the Stoics of Pagan Greece and Rome.

Movements of the Stoic type are in fact very common in civilizations that have passed the Hubbert peak of their own core resource base. There’s good reason for that. In a contracting economy, it becomes easier to notice that the less you need, the less vulnerable you are to the ups and downs of fortune, and the more you can get done of whatever it is that you happen to want to do. That’s an uncongenial lesson at the best of times, and during times of material abundance you won’t find many people learning it. Still, in the world after abundance, it’s hard to think of a lesson that deserves more careful attention.

Wednesday, May 19, 2010

Garlic, Chainsaws, and Victory Gardens

The uncontrolled simplification of a complex system is rarely a welcome event for those people whose lives depend on the system in question. That’s one way to summarize the impact of the waves of trouble rolling up against the sand castles we are pleased to call the world’s modern industrial nations. Exactly how the interaction between sand and tide will work out is anyone’s guess at this point; the forces that undergird that collision have filled the pages of this blog for a year and a half now; here, and for the next few posts, I want to talk a bit about what can be done to deal with the consequences.

That requires, first of all, recognizing what can’t be done. Plenty of people have argued that the only valid response to the rising spiral of crisis faced by industrial civilization is to build a completely new civilization from the ground up on more idealistic lines. Even if that latter phrase wasn’t a guarantee of disaster – if there’s one lesson history teaches, it’s that human societies are organic growths, and trying to invent one to fit some abstract idea of goodness is as foredoomed as trying to make an ecosystem do what human beings want – we no longer have time for grand schemes of that sort. To shift metaphors, when your ship has already hit the iceberg and the water’s coming in, it’s a bit too late to suggest that it should be rebuilt from the keel up according to some new scheme of naval engineering.

An even larger number of people have argued with equal zeal that the only valid response to the predicament of our time is to save the existing order of things, with whatever modest improvements the person in question happens to fancy, because the alternative is too horrible to contemplate. They might be right, too, if saving the existing order of things was possible, but it’s not. A global civilization that is utterly dependent for its survival on ever-expanding supplies of cheap abundant energy and a stable planetary biosphere is simply not going to make it in a world of ever-contracting supplies of scarce and expensive energy and a planetary biosphere that the civilization’s own activities are pushing into radical instability. Again, when your ship has already hit the iceberg and the water’s coming in, it’s not helpful to insist that the only option is to keep steaming toward a distant port.

What that leaves, to borrow a useful term from one of the most insightful books of the last round of energy crises, is muddling through. Warren Johnson’s Muddling Toward Frugality has fallen into the limbo our cultural memory reserves for failed prophecies; neither he nor, to be fair to him, anybody else in the sustainability movement of the Seventies had any idea that the collective response of most industrial nations to the approach of the limits to growth would turn out to be a thirty-year vacation from sanity in which short-term political gimmicks and the wildly extravagant drawdown of irreplaceable resources would be widely mistaken for permanent solutions.

That put paid to Johnson’s hope that simple, day by day adjustments to dwindling energy and resource supplies would cushion the transition from an economy of abundance to one of frugality. His strategy, though, still has some things going for it that no other available approach can match: It can still be applied this late in the game; if it’s done with enough enthusiasm or desperation, and with a clear sense of the nature of our predicament, it could still get a fair number of us through the mess ahead; and it certainly offers better odds than sitting on our hands and waiting for the ship to sink, which under one pretense or another is the other option open to us right now.

A strategy of muddling doesn’t lend itself to nice neat checklists of what to do and what to try, and so I won’t presume to offer a step-by-step plan. Still, showing one way to muddle, or to begin muddling, and outlining some of the implications of that choice, can bridge the gap between abstraction and action, and suggest ways that those who are about to muddle might approach the task – and of course there’s always the chance that the example might be applicable to some of the people who read it. With this in mind, I want to talk about victory gardens.

The victory garden as a social response to crisis was an invention of the twentieth century. Much before then, it would have been a waste of time to encourage civilians in time of war to dig up their back yards and put in vegetable gardens, because nearly everybody who had a back yard already had a kitchen garden in it. That was originally why houses had back yards; the household economy, which produced much of the goods and services used by people in pre-petroleum Europe and America, didn’t stop at the four walls of a house; garden beds, cold frames, and henhouses in urban back yards kept pantries full, while no self-respecting farm wife would have done without the vegetable garden out back and the dozen or so fruit trees close by the farmhouse.

Those useful habits only went into decline when rail transportation and the commercialization of urban food supplies gave birth to the modern city in the course of the nineteenth century. When 1914 came around and Europe blundered into the carnage of the First World War, the entire system had to be reinvented from scratch in many urban areas, since the transport networks that brought fresh food to the cities in peacetime had other things to do, and importing food from overseas became problematic for all the combatants in a time of naval blockades and unrestricted submarine warfare. The lessons learned from that experience became a standard part of military planning thereafter, and when the Second World War came, well-organized victory garden programs shifted into high gear, helping to take the hard edges off food rationing. It’s a measure of their success that despite the massive mismatch between Britain’s wartime population and its capacity to grow food, and the equally massive challenge of getting food imports through a gauntlet of U-boats, food shortages in Britain never reached the level of actual famine.

In the Seventies, in turn, the same thing happened on a smaller scale without government action; all over the industrial world, people who were worried about the future started digging victory gardens in their back yards, and books offering advice on backyard gardening became steady sellers. (Some of those are still in print today.) These days, sales figures in the home garden industry reliably jolt upwards whenever the economy turns south or some other threat stokes fears about the future; for many people, planting a victory garden has become a nearly instinctive response to troubled times.

It’s fashionable in some circles to dismiss this sort of thing as an irrelevance, but such analyses miss the point of the phenomenon. The reason that the victory garden has become a fixture of our collective response to trouble is that it engages one of the core features of the predicament individuals and families face in the twilight of the industrial age, the disconnection of the money economy from the actual production of goods and services – in the terms we’ve used here repeatedly, the gap between the tertiary economy on the one hand, and the primary and secondary economies on the other.

Right now, the current theoretical value of all the paper wealth in the world – counting everything from dollar bills in wallets to derivatives of derivatives of derivatives of fraudulent mortgage loans in bank vaults – is several orders of magnitude greater than the current value of all the actual goods and services in the world. Almost all of that paper wealth consists of debt in one form or another, and the mismatch between the scale of the debt and the much smaller scale of the global economy’s assets means exactly the same thing that the same mismatch would mean to a household: imminent bankruptcy. That can take place in two ways – either most of the debt will lose all its value by way of default, or all of the debt will lose most of its value by way of hyperinflation – or, more likely, by a ragged combination of the two, affecting different regions and economic sectors at different times.

What that implies for the not too distant future is that any economic activity that depends on money will face drastic uncertainties, instabilities, and risks. People use money because it gives them a way to exchange their labor for goods and services, and because it allows them to store value in a relatively stable and secure form. Both these, in turn, depend on the assumption that a dollar has the same value as any other dollar, and will have roughly the same value tomorrow that it does today.

The mismatch between money and the rest of economic life throws all these assumptions into question. Right now there are a great many dollars in the global economy that are no longer worth the same as any other dollar. Consider the trillions of dollars’ worth of essentially worthless real estate loans on the balance sheets of banks around the world. Governments allow banks to treat these as assets, but unless governments agree to take them, they can’t be exchanged for anything else, because nobody in his right mind would buy them for more than a tiny fraction of their theoretical value. Those dollars have the same sort of weird half-existence that horror fiction assigns to zombies and vampires; they’re undead money, lurking in the shadowy crypts of Goldman Sachs like so many brides of Dracula, because the broad daylight of the market would kill them at once.

It’s been popular for some years, since the sheer amount of undead money stalking the midnight streets of the world’s financial centers became impossible to ignore, to suggest that the entire system will come to a messy end soon in some fiscal equivalent of a zombie apocalypse movie. Still, the world’s governments are doing everything in their not inconsiderable power to keep that from happening. Letting banks meet capital requirements with technically worthless securities is only one of the maneuvers that government regulators around the world allow without blinking. Driving this spectacular lapse of fiscal probity, of course, is the awkward fact that governments – to say nothing of large majorities of the voters who elect them – have been propping up budgets for years with their own zombie hordes of undead money.

Underlying this awkward fact is the reality that the only response to the current economic crisis most governments can imagine involves churning out yet more undead money, in the form of an almost unimaginable torrent of debt; the only response most voters can imagine, in turn, involves finding yet more ways to spend more money than they happen to earn. So we’re all in this together; everybody insists that the walking corpses in the basement are fine upstanding citizens, and we all pretend not to notice that more and more people are having their necks bitten or their brains devoured.

As long as most people continue to play along, it’s entirely possible that things could stumble along this way for quite a while, with stock market crashes, sovereign debt crises, and corporate bankruptcies quickly covered up by further outpourings of unpayable debt. The problem for individuals and families, though, is that all this makes money increasingly difficult to use as a medium of exchange or a store of wealth. If hyperinflation turns out to be the mode of fiscal implosion du jour, it becomes annoying to have to sprint to the grocery store with your paycheck before the price of milk rises above $1 million a gallon; if we get deflationary contraction instead, business failures and plummeting wages make getting any paycheck at all increasingly challenging; in either case your pension, your savings, and the money you pour down the rathole of health insurance are as good as lost.

This is where victory gardens come in, because the value you get from a backyard garden differs from the value you get from your job or your savings in a crucial way: money doesn’t mediate between your labor and the results. If you save your own seeds, use your own muscles, and fertilize the soil with compost you make from kitchen and garden waste – and many gardeners do these things, of course – the only money your gardening requires of you is whatever you spend on beer after a hard day’s work. The vegetables that help feed your family are produced by the primary economy of sun and soil and the secondary economy of sweat; the tertiary economy has been cut out of the loop.

Now it will doubtless be objected that nobody can grow all the food for a family in an ordinary back yard, so the rest of the food remains hostage to the tertiary economy. This is more or less true, but it’s less important than it looks. Even in a really thumping depression, very few people have no access to money at all; the problem is much more often one of not having enough money to get everything you need by way of the tertiary economy. An effective response usually involves putting those things that can be done without money outside the reach of the tertiary economy, and prioritizing whatever money can be had for those uses that require it.

You’re not likely to be able to grow field crops in your back yard, for example, but grains, dried beans, and the like can be bought in bulk very cheaply. What can’t be bought cheaply, and in a time of financial chaos may not be for sale at all, are exactly the things you can most effectively grow in a backyard garden, the vegetables, vine and shrub fruits, eggs, chicken and rabbit meat, and the like that provide the vitamins, minerals, and nutrients you can’t get from fifty pound sacks of rice and beans. Those are the sorts of things people a century and a half ago produced in their kitchen gardens, along with medicinal herbs to treat illnesses and maybe a few dye plants for homespun fabric; those are the sorts of things that make sense to grow at home in a world where the economy won’t support the kind of abundance most people in the industrial world take for granted today.

It will also doubtless be objected that even if you reduce the amount of money you need for food, you still need money for other things, and so a victory garden isn’t an answer. This is true enough, if your definition of an answer requires that it simultaneously solves every aspect of the mess in which the predicament of industrial society has landed us. Still, one of the key points I’ve tried to make in this blog is that waiting for the one perfect answer to come around is a refined version of doing nothing while the water rises. Muddling requires many small adjustments rather than one grand plan: planting a victory garden in the back yard is one adjustment to the impact of a dysfunctional money economy on the far from minor issue of getting food on the table; other impacts will require other adjustments.

A third objection I expect to hear is that not everybody can plant a victory garden in the back yard. A good many people don’t have back yards these days, and some of those who do are forbidden by restrictive covenants from using their yards as anything but props for their home’s largely imaginary resale value. (Will someone please explain to me why so many Americans, who claim to value freedom, willingly submit to the petty tyranny of planned developments and neighborhood associations? Brezhnev’s Russia placed fewer restrictions on people’s choices than many neighborhood covenants do.) The crucial point here is that a victory garden is simply an example of the way that people have muddled through hard times in the past, and might well muddle through the impending round of hard times in the future. If you can’t grow a garden in your backyard, see if there’s a neighborhood P-Patch program that will let you garden somewhere else, or look for something else that will let you meet some of your own needs with your own labor without letting money get in the way.

That latter, of course, is the central point of this example. At a time when the tertiary economy is undergoing the first stages of an uncontrolled and challenging simplification, if you can disconnect something you need from the tertiary economy, you’ve insulated a part of your life from at least some of the impacts of the chaotic resolution of the mismatch between limitless paper wealth and the limited real wealth available to our species on this very finite planet. What garlic is to vampires and a well-fueled chainsaw is to zombies, being able to do things yourself, with the skills and resources you have on hand, is to the undead money lurching en masse through today’s economy; next week, we’ll replace the garlic with a mallet and a stake.

Wednesday, May 12, 2010

After Money

The discussion of the risks of complexity in the last few posts here on The Archdruid Report dealt in large part with abstract concepts, though the news headlines did me the favor of providing some very good examples of those concepts in action. Still, it’s time to review some of the practical implications of the ideas presented here, and in the process, begin wrapping up the discussion of economics that has been central to this blog’s project over the last year and a half.

The news headlines once again have something to contribute. I think most of my readers will be aware that the economic troubles afflicting Europe came within an ace of causing a major financial meltdown last week. The EU, with billions in backing from major central banks around the globe, managed to stave off collapse for now, but it’s important to realize that the rescue package so hastily cobbled together will actually make things worse in the not-very-long run. Like the rest of the industrial world, the EU is drowning in excess debt; the response of the EU’s leadership is to issue even more debt, so they can prop up one round of unpayable debts with another. They’re in good company; Japan has been doing this continuously since its 1990 stock market and real estate collapse, and the US has responded to its current economic nosedive in exactly the same way.

It’s harsh but not, I think, unfair to characterize this strategy as trying to put out a house fire by throwing buckets of gasoline onto the blaze. Still, a complex history and an even more complex set of misunderstandings feeds this particular folly. Nobody in Europe has forgotten what happened the last time a major depression was allowed to run its course unchecked by government manipulation, and every European nation has its neofascist fringe parties who are eager to play their assigned roles in a remake of that ghastly drama. That’s the subtext behind the EU-wide effort to talk tough about austerity while doing as little as possible to make it happen, and the even wider effort to game the global financial system so that Europe and America can continue to consume more than they produce, and spend more than they take in, for at least a little longer.

There was a time, to be sure, when this wasn’t as daft an idea as it has now become. During the 350 years of the industrial age, a good fraction of Europe did consume more than it produced, by the simple expedient of owning most of the rest of the world and exploiting it for their own economic benefit. As late as 1914, the vast majority of the world’s land surface was either ruled directly from a European capital, occupied by people of European descent, or dominated by European powers through some form of radically unequal treaty relationship. The accelerating drawdown of fossil fuels throughout that era shifted the process into overdrive, allowing the minority of the Earth’s population who lived in Europe or the more privileged nations of the European diaspora – the United States first among them – not only to adopt what were, by the standards of all other human societies, extravagantly lavish lifestyles, but to be able to expect that those lifestyles would become even more lavish in the future.

I don’t think more than a tiny fraction of the people of the industrial world has yet begun to deal with the hard fact that those days are over. European domination of the globe came apart explosively in the four brutal decades between 1914, when the First World War broke out, and 1954, when the fall of French Indochina put a period on the age of European empire. The United States, which inherited what was left of Europe’s imperial role, never achieved the level of global dominance that European nations took for granted until 1914 – compare the British Empire, which directly ruled a quarter of the Earth’s land surface, with the hole-and-corner arrangements that allow America to maintain garrisons in other people’s countries around the world. Now the second and arguably more important source of Euro-American wealth and power – the exploitation of half a billion years of prehistoric sunlight in the form of fossil fuels – has peaked and entered on its own decline, with consequences that bid fair to be at least as drastic as those that followed the shattering of the Pax Europa in 1914.

To make sense of all this, it’s important to recall a distinction made here several times in the past, between the primary, secondary, and tertiary economies. The primary economy is the natural world, which produces around three-quarters of all economic value used by human beings. The secondary economy is the production of goods and services from natural resources by human labor. The tertiary economy is the production and exchange of money – a term that includes everything that has value only because it can be exchanged for the products of the primary and secondary economies, and thus embraces everything from gold coins to the most vaporous products of today’s financial engineering.

The big question of conventional economics is the fit between the secondary and tertiary economies. It’s not at all hard for these to get out of step with each other, and the resulting mismatch can cause serious problems. When there’s more money in circulation than there are goods and services for the money to buy, you get inflation; when the mismatch goes the other way, you get deflation; when the mechanisms that provide credit to business enterprises gum up, for any number of reasons, you get a credit crunch and recession, and so on. In extreme cases, which used to happen fairly often until the aftermath of the Great Depression pointed out what the cost could be, several of these mismatches could hit at once, leaving both the secondary and tertiary economies crippled for years at a time.
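The inflation and deflation cases just mentioned can be illustrated with the textbook quantity-of-money identity, M×V = P×Q. To be clear, this identity is my illustrative assumption here, not a framework the argument above depends on: hold the velocity of money and real output fixed, and the price level simply tracks the money supply, rising when there’s too much money chasing too few goods and falling in the opposite case.

```python
# A toy illustration of the mismatch between the tertiary economy
# (money) and the secondary economy (goods and services), using the
# textbook identity M*V = P*Q. All numbers are made up for the example.

def price_level(money_supply, velocity, real_output):
    """Solve M*V = P*Q for the price level P, holding V and Q fixed."""
    return money_supply * velocity / real_output

baseline = price_level(money_supply=1000, velocity=2.0, real_output=500)   # P = 4.0
inflation = price_level(money_supply=1500, velocity=2.0, real_output=500)  # more money, same goods
deflation = price_level(money_supply=600, velocity=2.0, real_output=500)   # less money, same goods

# More money in circulation than goods to buy pushes prices up;
# the reverse mismatch pushes them down.
assert inflation > baseline > deflation
```

The point of the sketch is simply that the mismatch runs in both directions, which is why the policy toolkit described below works by fiddling with the money side of the equation.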

This is the sort of thing that conventional economic policy is meant to confront, by fiddling with the tertiary economy to bring it back into balance with the secondary economy. The reason why the industrial world hasn’t had a really major depression since the end of the 1930s, in turn, is that the methods cobbled together by governments to fiddle with the tertiary economy work tolerably well. It’s become popular in recent years to insist that the unfettered free market is uniquely able to manage economic affairs in the best possible way, but such claims fly in the face of all the evidence of history; the late 19th century, for example, when the free market was as unfettered as it’s possible for a market to get, saw catastrophic booms and busts sweep through the industrial world with brutal regularity, causing massive disruption to economies around the world. Those who think this is a better state of affairs than the muted ebbs and flows of the second half of the twentieth century should try living in a Depression-era tarpaper shack on a dollar a day for a week or two.

The problem we face now is that the arrangements evolved over the last century or so only address the relationship between the secondary and tertiary economies. The primary economy of nature, the base of the entire structure, is ignored by most contemporary economics, and has essentially no place in the economic policy of today’s industrial nations. The assumption hardwired into nearly all modern thought is that the economic contributions of the primary economy will always be there so long as the secondary and tertiary economies are working as they should. This may just be the Achilles’ heel of the entire structure, because it means that mismatches between the primary economy and the other two economies not only won’t be addressed – they won’t even be noticed.

This, I suspect, is what underlies the rising curve of economic volatility of the last decade or so: we have reached the point where the primary economy of nature will no longer support the standards of living most people in the industrial world expect. Our politicians and economists are trying to deal with the resulting crises as though they were purely a product of mismatches between the secondary and tertiary economies. Since such measures don’t address the real driving forces behind the crises, they fail, or at best stave off trouble for a short time, at the expense of making it worse later on.

The signals warning us that we have overshot the capacity of the primary economy are all around us. The peaking of world conventional oil production in 2005 is only one of these. The dieoff of honeybees is another, on a different scale; whatever its cause, it serves notice that something has gone very wrong with one of the natural systems on which human production of goods and services depends. There are many others. It’s easy to dismiss any of them individually as irrelevancies, but every one of them has an economic cost, and every one of them serves notice that the natural systems that make human economic activity possible are cracking under the strain we’ve placed on them.

That prospect is daunting enough. There’s another side to our predicament, though, because the only tools governments have available these days to deal with economic trouble are ways of fiddling with the tertiary economy. When those tools don’t work – and these days, increasingly, they don’t – the only option policy makers can think of is to do more of the same, following what’s been called the “lottle” principle – “if a little doesn’t work, maybe a lot’ll do the trick.” The insidious result is that the tertiary economy of money is moving ever further out of step with the secondary economy of goods and services, yielding a second helping of economic trouble on top of the one already dished out by the damaged primary economy. Flooding the markets with cheap credit may be a workable strategy when a credit crunch has hamstrung the secondary economy; when what’s hitting the secondary economy is the unrecognized costs of ecological overshoot, though, flooding the markets with cheap credit simply accelerates economic imbalances that are already battering economies around the world.

One interesting feature of this sort of two-sided crisis is that it’s not a unique experience. Most of the past civilizations that overshot the ecological systems that supported them, and crashed to ruin as a result, backed themselves into a similar corner. I’ve mentioned here several times the way that the classic Lowland Maya tried to respond to the failure of their agricultural system by accelerating the building programs central to their religious and political lives. Their pyramids of stone served the same purpose as our pyramids of debt: they systematized the distribution of labor and material wealth in a way that supported the social structure of the Lowland Mayan city-states and the ahauob or “divine kings” who ruled them. Yet building more pyramids was not an effective response to topsoil loss; in fact, it worsened the situation considerably by using up labor that might have gone into alternative means of food production.

An even better example, because a closer parallel to the present instance, is the twilight of the Roman world. Ancient Rome had a sophisticated economic system in which credit and government stimulus programs played an important role. Roman money, though, was based strictly on precious metals, and the economic expansion of the late Republic and early Empire was made possible only because Roman armies systematically looted the wealth of most of the known world. More fatal still was the shift that replaced a sustainable village agriculture across most of the Roman world with huge slave-worked latifundia, the industrial farms of their day, which were treated as cash cows by absentee owners and, in due time, were milked dry. The primary economy cracked as topsoil loss caused Roman agriculture to fail; attempts by emperors to remedy the situation failed in turn, and the Roman government was reduced to debasing the coinage in an attempt to meet a rising spiral of military costs driven by civil wars and barbarian invasions. This made a bad situation worse, gutting the Roman economy and making the collapse of the Empire that much more inevitable.

It’s interesting to note the aftermath. In the wake of Rome’s fall, lending money at interest – a normal business practice throughout the Roman world – came to a dead stop for centuries. Christianity and Islam, the majority religions across what had been the Empire’s territory, defined it as a deadly sin. More, money itself came to play an extremely limited role in large parts of the former Empire. Across Europe in the early Middle Ages, it was common for people to go from one year to the next without so much as handling a coin. What replaced it was the use of labor as the basic medium of exchange. That was the foundation of the feudal system, from top to bottom: from the peasant who held his small plot of farmland by providing a fixed number of days of labor each year in the local baron’s fields, to the baron who held his fief by providing his overlord with military service, the entire system was a network of personal relationships backed by exchanges of labor for land.

It’s common in contemporary economic history to see this as a giant step backward, but there’s good reason to think it was nothing of the kind. The tertiary economy of the late Roman world had become a corrupt, metastatic mess; the new economy of feudal Europe responded to this by erasing the tertiary economy as far as possible, banishing economic abstractions, and producing a system that was very hard to game – deliberately failing to meet one’s feudal obligations was the one unforgivable crime in medieval society, and generally risked the prompt and heavily armed arrival of one’s liege lord and all his other vassals. The thought of Goldman Sachs executives having to defend themselves in hand-to-hand combat against a medieval army may raise smiles today; a thousand years ago, that’s the way penalties for default were most commonly assessed.

What makes this even more worth noting is that very similar systems emerged in the wake of other collapses of civilizations. The implosion of Heian Japan in the tenth century, to name only one example, gave rise to a feudal system so closely parallel to the European model that it’s possible to translate much of the technical language of Japanese bushido precisely into the equivalent jargon of European chivalry, and vice versa. More broadly, when complex civilizations fall apart, one of the standard results is the replacement of complex tertiary economies with radically simplified systems that do away with abstractions such as money, and replace them with concrete economics of land and labor.

There’s a lesson here, and it can be applied to the present situation. As the rising spiral of economic trouble continues, we can expect drastic volatility in the value and availability of money – and here again, remember that this term refers to any form of wealth that only has value because it can be exchanged for something else. Any economic activity that is solely a means of bringing in money will be held hostage to the vagaries of the tertiary economy, whether those express themselves through inflation, credit collapse, or what have you. Any economic activity that produces goods and services directly for the use of the producer, and his or her family and community, will be much less drastically affected by these vagaries. If you depend on your salary to buy vegetables, for example, how much you can eat depends on the value of money at any given moment; if you grow your own vegetables, using your own kitchen and garden scraps to fertilize the soil and saving your own seed, you have much more direct control over your vegetable supply.

Most people won’t have the option of separating themselves completely from the money economy for many years to come; as long as today’s governments continue to function, they will demand money for taxes, and money will continue to be the gateway resource for many goods and services, including some that will be very difficult to do without. Still, there’s no reason why distancing oneself from the tertiary economy has to be an all-or-nothing thing. Any step toward the direct production of goods and services for one’s own use, with one’s own labor, using resources under one’s own direct control, is a step toward the world that will emerge after money; it’s also a safety cushion against the disintegration of the money economy going on around us – a point I’ll discuss in more detail, by way of a concrete example, in next week’s post.

Wednesday, May 05, 2010

The Principle of Subsidiary Function

I trust my readers will recognize a hint of sarcasm if I say that the good news just keeps on rolling in. Of the smoke plumes that were rising into the industrial world’s increasingly murky skies as last week’s post went up, one – the billowing cloud of assorted mis-, mal- and nonfeasance bubbling out of Goldman Sachs – has faded from the front pages for the moment, though it will doubtless be back before long. On the other hand, the two remaining – the cratering of Greece’s borrow-and-spend economics and the spreading ecological catastrophe in the Gulf of Mexico – have more than made up the difference.

It’s a matter of chance, more than anything else, that Greece happened to become the poster child for what happens when you insist on buying off influential sectors of the electorate with money you don’t happen to have. What we are pleased to call democracy these days is a system in which factions of the political class contend for power by spending large sums of other people’s money to buy the temporary loyalty of voting blocs. There’s nothing especially novel in this system, by the way; the late Roman Republic managed (or, rather, mismanaged) its affairs in exactly the same manner, and such classical theorists as Polybius argued that this is the way democracies normally end up working (or, rather, not working).

You might think that Greece, which happens to be Polybius’ home turf, would have had the common sense to dodge this particular bullet. No such luck; recent Greek governments, like many others, made the strategic mistake of using borrowed funds to provide a good deal of that unearned largesse, and the resulting debt load eventually collided head on with the ongoing deleveraging of the global economy in the wake of the latest round of bubble economics. The result was a fiscal death spiral, as doubts about Greece’s ability to pay its debts drove up the interest rates Greece had to pay to finance those debts, increasing the doubts further; rinse and repeat until something comes unglued. The much-ballyhooed announcement of an EU bailout package stabilized the situation for a few days, but that’s about all; the death spiral has already resumed, accompanied by bloody riots in the streets of Athens and comments by the usual highly placed sources that some kind of default is becoming inevitable.
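The death spiral just described is a simple feedback loop, and its shape can be sketched in a few lines of code. The numbers below are entirely invented for illustration, not Greece’s actual balance sheet: lenders demand an interest rate that rises with the debt-to-GDP ratio, the interest that can’t be paid gets rolled into new borrowing, and each pass around the loop makes the next one worse.

```python
# A toy model of a fiscal death spiral. Assumed for illustration only:
# the rate lenders demand is a base rate plus a risk premium that
# scales with the debt-to-GDP ratio, and all interest is refinanced.

def death_spiral(debt, gdp, base_rate=0.03, risk_premium=0.08, years=10):
    """Return the debt-to-GDP ratio year by year.

    The feedback at the heart of the spiral: doubt about repayment
    (modeled as the risk premium) rises with the debt load, which
    raises the interest bill, which raises the debt load further.
    """
    history = []
    for _ in range(years):
        ratio = debt / gdp
        rate = base_rate + risk_premium * ratio  # doubt rises with debt
        debt += debt * rate                      # unpayable interest rolled into new debt
        history.append(debt / gdp)
    return history

trajectory = death_spiral(debt=120.0, gdp=100.0)
# The ratio not only grows every year, it grows faster every year,
# because the rate itself climbs as the ratio climbs.
```

Rinse and repeat until something comes unglued: the model has no stable endpoint short of default, which is exactly the property that makes the real-world version so hard to talk down.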

Headlines for the last few days have warned of similar head-on collisions taking shape in Spain and Portugal. What very few people in the mainstream media are willing to mention is that the most spectacular examples of borrow-and-spend economics are not little countries on the economic margins of Europe, but Britain and the United States. It’s anyone’s guess when investors will begin to realize that neither country has any way of paying back the gargantuan sums both have borrowed of late to prop up their crippled economies; when it does become clear, the rush to the exits will likely be one for the record books.

It’s equally a matter of chance, in turn, that the Deepwater Horizon drilling platform happened to be the one to fail catastrophically. That something of the sort was going to happen was pretty much a given; drilling for oil a mile underwater is risky, complicated, technologically challenging work, and any oil well, anywhere, can undergo a blowout when it’s being drilled. It just so happened that this was the one that happened to blow, and the lax safety standards and budget-conscious corner-cutting endemic to today’s corporate world made it pretty much inevitable that when a blowout happened, it would turn into a disaster.

Bad as it is already, it may get much, much worse. According to a memo leaked to Gulf Coast newspapers, BP officials have privately admitted to the US government that the torrent of hot, high-pressure crude oil surging through the broken pipe could quite conceivably blow the remaining hardware off the top of the well. This would turn the current 5,000-barrel-a-day spill into a cataclysmic gusher of 40,000 to 60,000 barrels a day. Capping such a flow a mile under water is beyond current technology; if things go that way, there may be no other option than waiting until the flow drops to a more manageable level. If that means the death of every multicellular organism in the Gulf of Mexico, storm surges this hurricane season that leave everything for miles inland coated with black goo, and tar balls and dead birds floating ashore wherever the Gulf Stream goes – and yes, these are tolerably likely consequences if the wellhead blows – that’s what it means.

As I discussed in last week’s post, a common thread of complexity unites these crises with each other, and with others of the same magnitude that are statistically certain to happen in the months and years ahead of us. We – meaning here those of us who live in the world’s industrial nations – have allowed our societies to become more complex than any collection of human minds can effectively manage, and our only response to the problems this causes is to add additional layers of complexity. The result, of course, is that our societies become even more unmanageable, and the problems they generate even more extreme and intractable. Once again, rinse and repeat until something comes unglued.

Now the simple, logical solution to a problem caused by too much complexity is to reduce the amount of complexity. Joseph Tainter’s The Collapse of Complex Societies, which has deservedly become required reading in peak oil circles, argues that societal collapse has exactly this function; when a society has backed itself into a corner by heaping up more complexity than it can manage, collapse offers the one way out. In a post on The Oil Drum a while back, Ugo Bardi made a similar point, arguing that if anyone in Roman times had tried to come up with a sustainable society to which the Roman world could transition, their best option would have looked remarkably like the Middle Ages.

Bardi pointed out, mind you, that there was precisely no chance that any such advice would have been taken by even the wisest of Roman emperors, and it’s just as true that a proposal to reduce the complexity of contemporary civilization can count on getting no more interest from the political classes of today’s industrial nations, or for that matter from the population at large. The experiment has been tried, after all; it’s worth remembering the extent to which the baby steps toward lower complexity taken in the 1970s helped to fuel the Reagan backlash of the 1980s.

Now it’s true that some of the achievements of the 1970s – the dramatic advances in organic agriculture, the birth of the modern recycling industry, the refinement of passive solar heating and solar hot water technology, and more – remained viable straight through the backlash era, and are viable today; and it’s also true that today’s economic debacle, not to mention the looming impact of peak oil, bid fair to make a good many of the legacies of the 1970s much more popular in the years right ahead of us. Still, the dream of a collective conversion to sustainable lifestyles that filled so many pages in Rain, Seriatim, and other journals of that period is further away now than it was then; so much time and so many resources have been wasted that it’s too late for such a collective conversion to work, even if the political will needed for one could be found.

Still, when you get right down to it, the hope of a mass conversion to sustainability by political means – by legislation, let’s say, backed up by the massive new bureaucracy that would be needed to enforce "green laws" affecting every detail of daily life – is yet another attempt to solve a complexity-driven problem by adding on more complexity. That’s a popular strategy, for the same reasons that any other attempt to deal with the problems of complexity through further complexity is popular these days: it makes sense to most of us, since it’s the sort of thing we’re used to doing, and it provides a larger number of economic and social niches for specialists – in this case, members of the professional activist community, who might reasonably expect to step into staff positions in that new bureaucracy – who have the job of managing the new level of complexity for the benefit – at least in theory – of those who have to live with it.

All of this is very familiar ground, echoing as it does the way that countless other efforts at reform have turned into layers of complexity in the past. To suggest, as I do, that it won’t work doesn’t mean that it won’t be tried. It’s being tried right now, in many countries and on many different levels, with enough success that in Britain, at least, the number of Transition Town activists who have found their way onto municipal payrolls has excited grumbling from members of less successful pressure groups.

In the same way, I think it’s beyond question that every other reasonably well funded attempt to solve the problems of complexity with more complexity will get at least some funding, and be given at least a token trial. We’ve already had the corn ethanol boom here in the US; the cellulosic ethanol and algal biodiesel booms have been delayed a bit by the impact of a collapsing economy on credit markets, but somebody will doubtless find a way around that in good time; down the road a bit, a crash program to build nuclear power plants is pretty much a foregone conclusion; fusion researchers will have the opportunity to flush billions more dollars down the same rathole they’ve been exploring since the 1950s; you name it, if it’s complex and expensive, it will get funding.

Not all of that money will be entirely wasted, either. Current windpower technologies and PV panels may not be sustainable over the long term, but for the decades immediately ahead they’re an excellent investment; anything that can keep the grid supplied with power, even intermittently, as fossil fuel production drops out from under the world’s industrial economies may be able to help make the Long Descent less brutal than it might otherwise be. With any luck, there’ll be a boom in home insulation and weatherstripping, a boom in solar hot water heaters, a boom in backyard victory gardens, and the like – small booms, probably, since they aren’t complex and expensive enough to catch at the contemporary imagination, but even a small boom might help.

On the whole, though, the pursuit of complexity as a solution for the problems caused by complexity is a self-defeating strategy. It happens to be the self-defeating strategy to which we’re committed, collectively and in most cases individually as well, and it can be dizzyingly hard for many people to think of any action at all that doesn’t follow it. Take a moment, now, before reading the rest of this post, to give it a try. Can you think of a way to deal with the problems of complexity in today’s industrial nations – problems that include, but are not limited to, rapidly depleting energy supplies, ecological destruction, and accelerating economic turbulence – that doesn’t simply add another layer of complexity to the mess?

There’s at least one such way, and longtime readers of this blog will not be surprised to learn that it’s a way pioneered decades ago, in a different context, by maverick economist E.F. Schumacher. That way starts with what he termed the Principle of Subsidiary Function. This rule holds that the most effective arrangement to perform any function whatsoever will always assign that function to the smallest and most local unit that can actually perform it.

It’s hard to think of any principle that flies more forcefully in the face of every presupposition of the modern world. Economies of scale and centralization of control are so heavily and unthinkingly valued that it rarely occurs to anyone that in many situations they might not actually be helpful at all. Still, Schumacher was not a pie-in-the-sky theorist; he drew his conclusions on the basis of most of a lifetime as a working economist in the business world. Like most of us, he noticed that the bigger and more centralized an economic or political system happened to be, the less effectively it could respond to the complex texture of local needs and possibilities that makes up the real world.

This rule can be applied to any aspect of the predicament of industrial society you care to name, but just now I want to focus on its application to the vexed question of how to respond to that predicament. Attempts to make such a response on the highest and least local level possible – for example, the failed climate negotiations that reached their latest pinnacle of absurdity in the recent debacle at Copenhagen – have done quite a respectable job of offering evidence for Schumacher’s contention. Attempts to do the same thing at a national level aren’t doing much better. The lower down the ladder of levels you go, and the closer you get to individuals and families confronting the challenges of their own lives, the more success stories you find.

By the same logic, the best place to start backing away from an overload of complexity is in the daily life of the individual. What sustains today’s social complexity, in the final analysis, is the extent to which individuals turn to complex systems to deal with their needs and wants. To turn away from complex systems on that individual level, in turn, is to undercut the basis for social complexity, and to begin building frameworks for meeting human needs and wants of a much simpler and thus more sustainable kind. It also has the advantage – not a small one – that it’s unnecessary to wait for international treaties, or government action, or anything else to begin having an effect on the situation; it’s possible to begin right here, right now, by identifying the complex systems on which you depend for the fulfillment of your needs and wants, and making changes in your own life to shift that dependency onto smaller or more local systems, or onto yourself, or onto nothing at all – after all, the simplest way to deal with a need or want, when doing so is biologically possible, is to stop needing or wanting it.

Such personal responses have traditionally been decried by those who favor grand collective schemes of one kind or another. I would point out in response, first, that a small step that actually happens will do more good than a grandiose plan that never gets off the drawing board, a fate suffered by nearly all of the last half century’s worth of grandiose plans for sustainability; second, that starting from personal choices and local possibilities, rather than abstract and global considerations, makes it a good deal more likely that whatever evolves out of the process might actually work; and third, that tackling the crisis of industrial society from the top down has been tried over and over again by activists for decades now, with no noticeable results, and maybe it’s time to try something else. How that "something else" might be pursued in practice will be the topic of next week’s post.