Wednesday, March 31, 2010

Riddles in the Dark

Any number of metaphors might be used for the predicament today’s industrial societies face as the age of cheap energy stumbles to its end, but the one that keeps coming to mind is drawn from a scene in one of the favorite books of my childhood, J.R.R. Tolkien’s The Hobbit. It’s the point in the story when Bilbo Baggins, the protagonist, gets lost in goblin-tunnels under the Misty Mountains and there encounters a gaunt, slippery, cannibalistic creature named Gollum.

That meeting was not exactly full of bonhomie. Gollum regarded Bilbo in much the way a hungry undergraduate regards the arrival of takeout pizza, but Bilbo was armed and alert. To put his intended meal off his guard, Gollum challenged Bilbo to a riddle contest. So there they sat, deep underground, challenging each other with the hardest riddles they could think of. I sometimes think the rock around Gollum’s lair must have been a Jurassic sandstone full of crude oil; if Gollum were around nowadays, equally, I suspect he would be shilling for Cambridge Energy Research Associates, purveying energy misinformation to the media, and his “Preciousss” would be made of black gold. Certainly, though, the world’s industrial societies right now are in much the same predicament as Bilbo, fumbling in the dark for answers to riddles that take on an increasingly threatening tone with each moment that passes.

I’d like to talk about three of those riddles now. None of them are insoluble, but they point to a profoundly unwelcome reality that will play a major role in shaping the economics of the age dawning around us right now – and unlike characters in a children’s novel, we can’t count on being bailed out of our predicament, as Bilbo was, by the unexpected discovery of a magic ring. Here they are:

First: It is the oldest machine in the world; it has raised the world’s greatest monuments and destroyed most of them, saved lives by the millions and killed them in like number; and when it is not in use, no one can see it. What is it?

Second: There is a thoroughly proven, economically viable way to use solar energy that requires no energy subsidy from fossil fuels at all, and every mainstream economist thinks that getting rid of it wherever possible is the key to prosperity. What is it?

Third: Two workers in different countries work in identical factories, using identical tools to make identical products. One of them makes twenty dollars an hour plus a benefit package; the other makes two dollars a day with no benefits at all. Why is that?

The last one is the easiest, though you’ll have a hard time finding a single figure in American public life who will admit to the answer. It’s not considered polite these days to talk about America’s empire, despite the fact that we keep troops in 140 other countries, and the far from unrelated fact that the 5% of Earth’s population that live in the US use around a third of the world’s resources, energy, and consumer products. Like every other empire, we have a tribute economy; we dress it up in free-market drag by giving our trading partners mountains of worthless paper in return for the torrents of real wealth that flow into the US every day; but the result, now as in the past, is that the imperial nation and its inner circle of allies have a vast surplus of wealth sloshing through their economies. Handing over a little of that extra wealth to the poor and the working class has proven to be a tolerably effective way to maintain some semblance of social order.

That habit has been around nearly as long as empires themselves; the Romans were particularly adept at it – “bread and circuses” is the famous phrase for their policy of providing free food and entertainment to the Roman urban poor to keep them docile. Starting in the wake of the last Great Depression, when many wealthy people woke up to the fact that their wealth did not protect them against bombs tossed through windows, most industrial nations have done the same thing by ratcheting up working class incomes and providing benefits such as old age pensions. No doubt a similar logic motivated the recent rush to force through a national health care system in the US, though the travesty that resulted is likely to cause far more unrest than it quells.

More generally, what passes by the name of democracy these days is a system in which factions of the political class buy votes from pressure groups by handing out what the political slang of an earlier day called by the endearing name of “pork.” The imperial tribute economy provided ample resources for political pork vendors, and the resulting outpouring of pig product formed a rising tide that, as the saying goes, lifted all boats. The problem, of course, is the same problem that afflicted Britain’s domestic economy during its age of empire, and Spain’s before that, and so on down through history: when wages in an imperial nation rise far enough above those of its neighbors, it stops being profitable to hire people in the imperial nation for any task that can be done outside it.

The result is a society in which those who get access to pork prosper, and those who don’t are left twisting in the wind. Arnold Toynbee, whose monumental study of the rise and fall of empires remains the most detailed examination of the process, calls these latter the “internal proletariat”: those who live within an imperial society but no longer share in its benefits, and become increasingly disaffected from its ideals and institutions. In the near term, they are the natural fodder of demagogues; in the longer term, they make common cause with the “external proletariat” – those nations outside the imperial borders whose labor and resources have become essential to the imperial economy, but who receive no benefits from that economy – and play a key role in bringing the whole system crashing down.

One of the ironies of the modern world is that today’s economists, so many of whom pride themselves on their realism, have by and large ignored the political dimensions of economics, and retreated into what amounts to a fantasy world in which the overwhelming influence of political considerations on economic life is denounced as an aberration where it is acknowledged at all. What Adam Smith and his successors called “political economy” suffered the amputation of its first half once Marx showed that it could be turned into an instrument for rabblerousing. Thus the economists who support the current versions of bread and circuses labor to find specious economic reasons for what, after all, is a simple political payoff. Meanwhile, those who oppose them have lost track of the very real possibility that those who are made to go hungry in the presence of abundance may embrace options entirely outside of the economic realm, such as the aforementioned bombs through windows.

This irony is compounded by the fact that very nearly every economist in the profession, liberal or conservative, accepts certain presuppositions that work overtime to speed the process by which the working class becomes an internal proletariat in Toynbee’s sense, hastening the breakdown of the society these economists claim to interpret. It takes a careful ear for the subtleties of economic jargon to understand how this works. Economists talk constantly about efficiency and productivity, but what they rarely say in so many words is what these terms mean.

A glance inside any economics textbook will clue you in. By efficiency, economists mean labor efficiency – that is, how much or how little human labor is needed for any given economic task. By productivity, in turn, economists mean labor productivity – that is, how much value is created per unit of labor. Thus anything that decreases the number of employee hours needed to produce a given quantity of goods and services counts as an increase in efficiency and productivity, whether or not it is efficient or productive in any other sense.

There’s a reason for this rather odd habit, and it points up one of the central issues of the industrial world’s present predicament. In the industrial world, for the last century or more, labor costs have been the single largest expense for most business enterprises, in large part because of the upward pressure on living standards caused by the tribute economy. Meanwhile the costs of natural resources and energy have been kept down by the same imperial arrangements. The result is a close parallel to Liebig’s Law, one of the fundamental principles of ecology. Liebig’s Law holds that the nutrient in shortest supply puts a ceiling on the growth of living things, irrespective of the availability of anything more abundant; in the same way, our economics have evolved to treat the costliest resource to hand, human labor, as the main limitation to economic growth, and to treat anything that decreases the amount of labor as an economic gain.
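For readers who like to see the logic spelled out, Liebig’s Law is simple enough to sketch in a few lines of code. The nutrients and quantities below are invented purely for illustration; the point is only that growth tracks the scarcest input, no matter how abundant the others are.

```python
# Liebig's Law of the Minimum: growth is capped by the input in
# shortest supply relative to what's needed, not by total abundance.
def limited_growth(supplies, requirements):
    """Return the growth factor permitted by the scarcest input.

    supplies and requirements are dicts keyed by input name; each
    ratio says how many 'units of growth' that input could support.
    """
    return min(supplies[k] / requirements[k] for k in requirements)

# Hypothetical soil: plenty of everything except nitrogen.
supplies = {"nitrogen": 10, "phosphorus": 400, "potassium": 900}
requirements = {"nitrogen": 2, "phosphorus": 4, "potassium": 6}

print(limited_growth(supplies, requirements))  # nitrogen caps growth at 5.0
```

Swap “nitrogen” for “labor cost” and the parallel to industrial economics follows: whichever input is scarcest relative to need sets the ceiling, so effort concentrates on economizing that one input.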

Even when the energy needed to power machines was still cheap and abundant, this way of thinking was awash with mordant irony, because only in times of relatively robust economic growth did workers who were rendered surplus by such “productivity gains” readily find jobs elsewhere. At least as often, they added to the rolls of the unemployed, or pushed others onto those rolls, fueling the growth of an impoverished underclass that formed the seed of today’s rapidly growing internal proletariat. With the end of the age of cheap energy, though, the fixation on labor efficiency promises to become a millstone around the neck of America’s economy and, from a wider perspective, that of the world as a whole.

A world that has nearly seven billion people on it and a rapidly dwindling supply of fossil fuels, after all, has better ways to manage its affairs than those based on the assumption that putting people out of work and replacing them with fossil fuels is the way to prosperity. This is one of the unlearned lessons of the global economy that is now coming to an end around us. While it was billed by friends and foes alike as the final triumph of corporate capitalism, globalization can more usefully be understood as an attempt by a failing system to prop up the illusion of economic growth by transferring the production of goods and services to economies that are, by the standards just mentioned, less efficient than those of the industrial world. Without the distorting effects of an imperial tribute economy, labor proved to be enough cheaper than energy that the result was profitable, and allowed the world’s industrial nations to maintain their exaggerated standards of living for a few more years.

At the same time, the brief heyday of the global economy was only made possible by a glut of petroleum that made transportation costs negligible. That glut is ending as world oil production begins to slip down the far side of Hubbert’s curve, while the Third World nations that profited most by globalization cash in their newfound wealth for a larger share of the world’s energy resources, putting further pressure on a balance of power that is already tipping against the United States and its allies. As this process continues, the tribute economy will be an early casualty. The implications for the lifestyles of most Americans will not be welcome.

I have suggested in previous posts that one useful way to think about the transformations now under way is to see them as the descent of the United States to Third World status. One consequence of that process is that most Americans, in the not very distant future, will earn the equivalent of a Third World income. It’s unlikely that their incomes will actually drop to $2 a day; far more likely is that the value of the dollar will crumple, so that a family making $40,000 a year might expect to pay half that to keep itself fed on rice and beans, and the rest to buy cooking fuel and a few other necessities.

It’s hard to see any way such a decline in our collective wealth could take place without political explosions on the grand scale. Still, in the twilight of the age of cheap energy, the most abundant energy source remaining throughout the world will be human labor, and as other resources become more costly, the price of labor – and thus the wages that can be earned by it – will drop accordingly.

At the same time, human labor has certain crucial advantages in a world of energy scarcity. Unlike other ways of getting work done, which generally require highly concentrated energy sources, human labor is fueled by food, which is a form of solar energy. Our agricultural system produces food using fossil fuels, but this is a bad habit of an age of abundant energy; field labor by human beings with simple tools, paid at close to Third World wages, already plays a crucial role in the production of many crops in the US, and this will only increase as wages drop and fuel prices rise.

The agriculture of the future, like agriculture in any thickly populated society with few energy resources, will thus use land intensively rather than extensively, rely on human labor with hand tools rather than more energy-intensive methods, and produce bulk vegetable crops and relatively modest amounts of animal protein; the agricultural systems of medieval China and Japan, chronicled by F.H. King in Farmers of Forty Centuries, are as good a model as any. Such an agricultural system will not support seven billion people, but then neither will anything else, and a decline in population as malnutrition becomes common and public health collapses is a sure bet for the not too distant future.

For similar reasons, the economies of the future will make use of human labor, rather than any of the currently fashionable mechanical or electronic technologies, as their principal means for getting things done. Partly this will happen because in an overcrowded world where all other resources are scarce and costly, human labor will be the cheapest resource available, but it draws on another factor as well.

This was pointed out many years ago by Lewis Mumford in The Myth of the Machine. He argued that the revolutionary change that gave rise to the first urban civilizations was not agriculture, or literacy, or any of the other things most often cited in this context. Instead, he proposed, that change was the invention of the world’s first machine – a machine distinguished from all others in that all of its parts were human beings. Call it an army, a labor gang, a bureaucracy or the first stirrings of a factory system; in these cases and more, it consisted of a group of people able to work together in unison. All later machines, he suggested, were attempts to make inanimate things display the singleness of purpose of a line of harvesters reaping barley or a work gang hauling a stone into place on a pyramid.

That kind of machine has huge advantages in a world of abundant population and scarce resources. It is, among other things, a very efficient means of producing the food that fuels it and the other items needed by its component parts, and it is also very efficient at maintaining and reproducing itself. As a means of turning solar energy into productive labor, it is somewhat less efficient than current technologies, but its simplicity, its resilience, and its ability to cope with widely varying inputs give it a potent edge over these latter in a time of turbulence and social decay.

That kind of machine, it deserves to be said, is also profoundly repellent to many people in the industrial world, doubtless including many of those who are reading this essay. It’s interesting to think about why this should be so, especially when some examples of the machine at work – Amish barn raisings come to mind – have gained iconic status in the alternative scene. It is not going too far, I think, to point out that the word “community,” which receives so much lip service these days, is in many ways another word for Mumford’s primal machine. For the last few centuries, we have tried replacing that machine with a dizzying assortment of others; instead of subordinating individual desires to collective needs, like every previous society, we have built a surrogate community of machines powered by coal and oil and natural gas to take care, however sporadically, of our collective needs. As those resources deplete, societies used to directing nonhuman energy according to scientific principles will face the challenge of learning once again how to direct human energy according to older and less familiar laws. This can be done in relatively humane ways, or in starkly inhuman ones; what remains to be seen is where along this spectrum the societies of the future will fall. That riddle neither Bilbo nor Gollum could have answered, and neither can I.

Wednesday, March 24, 2010

The Logic of Abundance

The last several posts here on The Archdruid Report have focused on the ramifications of a single concept – the importance of energy concentration, as distinct from the raw quantity of energy, in the economics of the future. This concept has implications that go well beyond the obvious, because three centuries of unthinking dependence on highly concentrated fossil fuels have reshaped not only the economies and the cultures of the industrial West, but some of our most fundamental assumptions about the universe, in ways all too likely to be disastrously counterproductive in the decades and centuries ahead of us.

Ironically enough, given the modern world’s obsession with economic issues, one of the best examples of this reshaping of assumptions by the implications of cheap concentrated energy has been the forceful resistance so many of us put up nowadays to thinking about technology in economic terms. It should be obvious that whether or not a given technology or suite of technologies continues to exist in a world of depleting resources depends first and foremost on three essentially economic factors. The first is whether the things done by that technology are necessities or luxuries, and if they are necessities, just how necessary they are; the second is whether the same things, or at least the portion of them that must be done, can be done by another technology at a lower cost in scarce resources; the third is how the benefits gained by keeping the technology supplied with the scarce resources it needs measures up to the benefits gained by putting those same resources to other uses.

Nowadays, though, this fairly straightforward calculus of needs and costs is anything but obvious. If I suggest in a post here, for example, that the internet will fail on all three counts in the years ahead of us – very little of what it does is necessary; most of the things it does can be done with much less energy and resource use, albeit at a slower pace, by other means; and the resources needed to keep it running would in many cases produce a better payback elsewhere – you can bet your bottom dollar that a good many of the responses will ignore this analysis entirely, and insist that since it’s technically possible to keep the internet in existence, and a fraction of today’s economic and social arrangements currently depend on (or at least use) the internet, the internet must continue to exist. Now it’s relevant to point out that the world adapted very quickly to using email and Google in place of postage stamps and public libraries, and will doubtless adapt just as quickly to using postage stamps and libraries in place of email and Google if that becomes necessary, but this sort of thinking – necessary as it will be in the years to come – finds few takers these days.

This notion that technological progress is a one-way street not subject to economic limits invites satire, to be sure, and I’ve tried to fill that need more than once in the past. Still, there are deep issues at work that also need to be addressed. One of them, which I’ve discussed at length elsewhere, is the way that progress has taken on an essentially religious value in the modern world, especially but not only among those who reject every other kind of religious thinking. Still, there’s another side to it, which is that for the last three hundred years those who believed in the possibilities of progress have generally been right. There have been some stunning failures to put alongside the successes, to be sure, but the trajectory that reached its climax with human footprints on the Moon has provided a potent argument supporting the idea that technological complexity is cumulative, irreversible, and immune to economic concerns.

The problem with that argument is that it takes the experience of an exceptional epoch in human history as a measure for human history as a whole. The three centuries of exponential growth that put those bootprints on the gray dust of the Sea of Tranquillity were made possible by the conjunction of historical accidents and geological laws that allowed a handful of nations to seize the fantastic treasure of highly concentrated energy buried in the Earth’s fossil fuels and burn through it at ever-increasing rates, flooding their economies with almost unimaginable amounts of cheap and highly concentrated energy. It’s been fashionable to assume that the arc of progress was what made all that energy available, but there’s very good reason to think that this puts the cart well in front of the horse. Rather, it was the huge surpluses of available energy that made technological progress both possible and economically viable, as inventors, industrialists, and ordinary people all discovered that it really was cheaper to have machines powered by fossil fuels take over jobs that had been done for millennia by human and animal muscles, fueled by solar energy in the form of food.

The logic of abundance that was made plausible as well as possible by those surpluses has had impacts on our society that very few people in the peak oil scene have yet begun to confront. For example, many of the most basic ways that modern industrial societies handle energy make sense only if fossil fuel energy is so cheap and abundant that waste simply isn’t something to worry about. One of this blog’s readers, Sebastien Bongard, pointed out to me in a recent email that on average, only a third of the energy that comes out of electrical power plants reaches an end user; the other two-thirds are converted to heat by the electrical resistance of the power lines and transformers that make up the electrical grid. For the sake of having electricity instantly available from sockets on nearly every wall in the industrial world, in other words, we accept unthinkingly a system that requires us to generate three times as much electricity as we actually use.
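Taking the one-third figure quoted above at face value, the arithmetic behind that last sentence is easy to check; the demand figure in this sketch is illustrative, not a real grid statistic.

```python
# If only a fraction f of generated electricity reaches end users,
# then meeting a given end-use demand requires generating demand / f.
def generation_needed(end_use_demand_mwh, delivered_fraction):
    return end_use_demand_mwh / delivered_fraction

# Using the one-third delivered fraction cited above:
# roughly 300 MWh must be generated for every 100 MWh actually used.
print(generation_needed(100, 1 / 3))
```

Whatever the precise loss fraction on any given grid, the relationship is the same: the smaller the delivered fraction, the larger the multiple of end-use demand that has to be generated and paid for.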

In a world where concentrated energy sources are scarce and expensive, many extravagances of this kind will stop being possible, and most of them will stop being economically feasible. In a certain sense, this is a good thing, because it points to ways in which nations facing crisis because of a shortage of concentrated energy sources can cut their losses and maintain vital systems. It’s been pointed out repeatedly, for example, that the electrical grids that supply power to homes and businesses across the industrial world will very likely stop being viable early on in the process of contraction, and some peak oil thinkers have accordingly drawn up nightmare scenarios around the sudden and irreversible collapse of national power grids. Like most doomsday scenarios, though, these rest on the unstated and unexamined assumption that everybody involved will sit on their hands and do nothing as the collapse unfolds.

In this case, that assumption rests in turn on a very widespread unwillingness to think through the consequences of an age of contracting energy supplies. The managers of a power grid facing collapse due to a shortage of generation capacity have one obvious alternative to hand: cutting nonessential sectors out of the grid for as long as necessary, so the load on the grid decreases to a level that the available generation capacity can handle. In an emergency, for example, many American suburbs and a large part of the country’s nonagricultural rural land could have electrical service shut off completely, and an even larger portion of both could be put on the kind of intermittent electrical service common in the Third World, without catastrophic results. Of course there would be an economic impact, but it would be modest in comparison to the results of simply letting the whole grid crash.

Over the longer term, just as the twentieth century was the era of rural electrification, the twenty-first promises to be the era of rural de-electrification. The amount of electricity lost to resistance is partly a function of the total amount of wiring through which the current has to pass, and those long power lines running along rural highways to scattered homes in the country thus account for a disproportionate share of the losses. A nation facing prolonged or permanent shortages of electrical generating capacity could make its available power go further by cutting its rural hinterlands off the power grid, and leaving them to generate whatever power they can by local means. Less than a century ago, nearly every prosperous farmhouse in the Great Plains had a windmill nearby, generating 12 or 24 volts for home use whenever the wind blew; the same approach will be just as viable in the future, not least because windmills on the home scale – unlike the huge turbines central to most current notions of windpower – can be built by hand from readily available materials. (Skeptics take note: I helped build one in college in the early 1980s using, among other things, an old truck alternator and a propeller handcarved from wood. Yes, it worked.)
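For a sense of scale, the output of a home windmill of the kind described above comes down to elementary electrical arithmetic; the current figure here is an assumption for illustration, not a measurement from any particular machine.

```python
# Electrical power: P = V * I (watts = volts * amps).
def watts(volts, amps):
    return volts * amps

# A hypothetical 12-volt home windmill pushing 20 amps into a battery
# bank delivers 240 watts -- enough for lights and a radio, not for
# the all-electric lifestyle the full-scale grid makes possible.
print(watts(12, 20))  # 240
```

The modesty of that number is part of the point: home-scale generation of this kind supports a deliberately frugal pattern of electricity use, not a scaled-down copy of current habits.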

Steps like this have seen very little discussion in the peak oil scene, and even less outside it, because the assumptions about technology discussed earlier in this post make them, in every sense of the word, unthinkable. Most people in the industrial world today seem to have lost the ability to imagine a future that doesn’t have electricity coming out of a socket in every wall, without going to the other extreme and leaning on Hollywood clichés of universal destruction. The idea that some of the most familiar technologies of today may simply become too expensive and inefficient to maintain tomorrow is alien to ways of thought dominated by the logic of abundance.

That blindness, however, comes with a huge price tag. As the age of abundance made possible by fossil fuels comes to its inevitable end, a great many things could be done to cushion the impact. Quite a few of these things could be done by individuals, families, and local communities – to continue with the example under discussion, it would not be that hard for people who live in rural areas or suburbs to provide themselves with backup systems using local renewable energy to keep their homes viable in the event of a prolonged, or even a permanent, electrical outage. None of the steps involved are hugely expensive, most of them have immediate payback in the form of lower energy bills, and local and national governments in much of the industrial world are currently offering financial incentives – some of them very robust – to those who do them. Despite this, very few people are doing them, and most of the attention and effort that goes into responses to a future of energy constraints focuses on finding new ways to pump electricity into a hugely inefficient electrical grid, without ever asking whether this will be a viable response to an age when the extravagance of the present day is no longer an option.

This is why attention to the economics of energy in the wake of peak oil is so crucial. Could an electrical grid of the sort we have today, with its centralized power plants and its vast network of wires bringing power to sockets on every wall, remain a feature of life throughout the industrial world in an energy-constrained future? If attempts to make sense of that future assume that this will happen as a matter of course, or start with the unexamined assumption that such a grid is the best (or only) possible way to handle scarce energy, and fixate on technical debates about whether and how that can be made to happen, the core issues that need to be examined slip out of sight. The question that has to be asked instead is whether a power grid of the sort we take for granted will be economically viable in such a future – that is, whether such a grid is as necessary as it seems to us today; whether the benefits of having it will cover the costs of maintaining and operating it; and whether the scarce resources it uses could produce a better return if put to work in some other way.

Local conditions might provide any number of answers to that question. In some countries and regions, where people live close together and renewable energy sources such as hydroelectric power promise a stable supply of electricity for the relatively long term, a national grid of the current type may prove viable. In others, as suggested above, it might be much more viable to have restricted power grids supplying urban areas and critical infrastructure, while rural hinterlands return to locally generated power or to non-electrified lifestyles. In still others, a power grid of any kind might prove to be economically impossible.

Under all these conditions, even the first, it makes sense for governments to encourage citizens and businesses to provide as much of their own energy needs as possible from locally available, diffuse energy sources such as sunlight and wind. (It probably needs to be said, given current notions about the innate malevolence of government, that whatever advantages might be gained from having people dependent on the electrical grid would be more than outweighed by the advantages of having a work force, and thus an economy, that can continue to function on at least a minimal level if the grid goes down.) Under all these conditions, it makes even more sense for individuals, families, and local communities to take such steps themselves, so that any interruption in electrical power from the grid – temporary or permanent – becomes an inconvenience rather than a threat to survival.

A case could easily be made that in the face of a future of very uncertain energy supplies, alternative off-grid sources of space heating, hot water, and other basic necessities are as important in a modern city as life jackets are in a boat. An even stronger case could be made that individuals and groups who hope to foster local resilience in the face of such a future probably ought to make such simple and readily available technologies as solar water heating, solar space heating, home-scale wind power, and the like central themes in their planning. Up to now, this has rarely happened, and the hold of the logic of abundance on our collective imagination is, I think, a good part of the reason why.

What makes this even more important is that the electrical power grid is only one example, if an important one, of a system that plays a crucial role in the way people live in the industrial world today, but that only makes sense in a world where energy is so abundant that even huge inefficiencies don’t matter. It’s hardly a difficult matter to think of others. To think in these terms, though, and to begin to explore more economical options for meeting individual and community needs in an age of scarce energy, is to venture into a nearly unexplored region where most of the rules that govern contemporary life are stood on their heads. We’ll map out one of the more challenging parts of that territory in next week’s post.

Wednesday, March 17, 2010

Energy Concentration Revisited

For those watching current affairs with an eye sharpened by history, it’s been quite a week since the last Archdruid Report post came out. For starters, American politicians and pundits have gone in for another round of China-bashing, insisting that China’s manipulation of its currency is unacceptable to us. Since the US is manipulating its own currency at least as shamelessly, the strength of their case is open to question; one gathers that the real grievance is that China’s manipulations have been rather more successful than ours. The tone of this latest flurry of denunciation may be gathered from a recent headline: “China Using Trade Agreements For Its Own Advantage.” Er, did anyone think that the Chinese would use those agreements solely for our advantage?

As it happens, my reading material over the last few days has included historian Donald Kagan’s magisterial On The Origins Of War, which anatomizes what generally happens when a declining empire jealous of its privileges collides with a rising power impatient for its own place in the sun. (The title of Kagan’s book offers a hint, if one is needed, about what the consequences usually are.) The slow approach of conflict between America and China has all the macabre fascination of a train wreck in the making; it’s uncomfortably easy, knowing the historical parallels, to see how a few more missteps that each side seems quite eager to make could back both nations into a position where the least either side can accept is more than the most either side can yield. The flashpoint, when it comes, is likely to lie some distance from either country’s borders; look at the parts of the world where Chinese overseas investment is shouldering aside longstanding American interests, and it’s not hard to imagine how and where the resulting struggle might play out.

Meanwhile the Obama administration has decided to give Congress back to the Republicans in the upcoming elections. I can think of no other way of describing Obama’s fixation on ramming through a health care bill that is not merely deeply unpopular, but one of the most absurd pieces of legislation in recent memory as well. How else to describe an attempt to deal with the fact that half the American people can’t afford health insurance by requiring them, under penalty of law, to pay for it anyway? In the process, this bill promises to take tens of billions of dollars a year out of the pockets of American families – during the worst economic conditions since the 1930s, mind you – to benefit a health insurance industry that already ranks as one of the most greedy and corrupt institutions in American public life. You’d think that a party that has ridden into power twice now on a wave of protest would know better than to adopt the most unpopular policies of the party it ousted, and then fritter away its remaining political capital on a disastrously misconceived notion of health care reform. Yet Clinton did that, and Obama’s repeating his mistake; since he’s doing it in the midst of an economic debacle on the grand scale, he’s unlikely to wriggle out of the consequences as adeptly as his predecessor.

Those of my readers who live in America thus might want to consider pressuring their elected representatives to put a brake on either or both of these disasters in the making. Those of my readers who live elsewhere might want to consider hiding under their beds until the rubble stops bouncing; barring exceptionally good luck, the first blasts are unlikely to be long delayed. Still, these cheerful reflections aren’t the theme of this week’s Archdruid Report. No, the theme of this week’s Archdruid Report delves further into the issue at the center of the last several essays, the vexed relationship between thermodynamics, energy resources, and economics in an age of decline.

I’m quite sure that some of my readers would prefer that I talk about something more immediately topical. Still, fundamental issues of the sort I want to pursue just now have immediate practical consequences. The economic debacle that’s among the major forces pushing America and China toward an armed conflict from which neither will benefit, for example, didn’t just happen by chance; it became inevitable once the political classes of the industrial world embraced certain fashionable but direly flawed ideas about economics, and convinced themselves that money was the source of wealth rather than the mere measure of wealth it actually is. Decades of bad policy that encouraged making money at the expense of the production of real wealth followed from those ideas. The result was the transformation of a vast amount of paper “wealth” – that is, money of one kind or another – into some malign equivalent of the twinkle dust of a children’s fairy tale; and the fallout includes economic stresses of the kind that so often push international conflicts past the point of no return.

In the same way, I’m convinced, certain widespread misunderstandings about how energy interfaces with economics are causing a great deal of alternative energy investment to go into schemes that are going to offer us very little help dealing with the end of the age of cheap fossil fuels, while other options that could help a great deal – and there are quite a few of those – are languishing for want of funds. That was the theme of last week’s post; the response was one of the largest these essays have yet fielded, and it helped me clarify the differences between the ways that certain kinds of energy can be used in practice, and the ways that a great deal of current thought assumes they can be used.

That same lesson could have been drawn from history. Solar energy, the most widely available alternative energy source, is not a new thing. Life on earth has been using it for something like two billion years, since the first single-celled prokaryotes figured out the trick of photosynthesis. Human beings were a little slower off the mark, since we had to evolve first, but passive solar heating was in widespread use in ancient Greece and imperial China; the industrial use of solar power in the West dates back to the late Middle Ages, when enterprising alchemists learned to use dished mirrors to focus heat on glass vessels; the first effective solar heat engine had its initial tryout in 1874. One solar energy proponent who commented on last week’s blog argued that human flight had progressed from Kitty Hawk to breaking the sound barrier in sixty years, and therefore solar power could be expected to make some similar leap; he apparently didn’t know that solar power was a working proposition decades before Kitty Hawk, and the leap never happened.

At least, the leap that my commenter expected never happened. Solar power has in fact been hugely successful in a wide range of practical applications. Solar water heaters, a central theme of an earlier post, were in common use across the American Sun Belt for more than half a century before cheap electrical and gas water heaters drove them out of the market in the 1950s. Passive solar household heating has proven itself in countless applications, and so have many other technologies using solar energy as a source of modest amounts of heat. Given that well over half the energy that Americans use today in their homes takes the end form of modest amounts of heat, this is not a minor point, and it directs attention to a range of solar technologies that could be put to work right now to cushion the impact of peak oil and begin the hard but necessary transition to the deindustrial age.

Yet it’s at least as instructive to pay attention to what hasn’t worked. The approach central to today’s large-scale solar plants – mirrors focusing sunlight onto tubes full of fluid, which boils into vapor and runs an engine, which in turn powers a generator – was among the very first things tried by the 19th century pioneers of solar energy. As discussed in last week’s post, these engines work after a fashion; that is, you can get a very, very modest amount of electricity out of sunlight that way with a great deal of complicated and expensive equipment. That’s why, while solar water heaters spread across rooftops on three continents in the early 20th century, solar heat engines went nowhere; the return on investment – measured in money or energy – simply didn’t justify the expenditure.

Now of course we’ve improved noticeably on the efficiency of some of the processes involved in those early solar engines. Still, a good many of the basic limits the 19th and early 20th century solar pioneers faced are not subject to technological improvement, because they unfold from the difference central to last week’s post – the difference between diffuse and concentrated energy.

This difference or, rather, the language I used to discuss that difference, turned out to be the sticking point for a number of scientifically literate readers last week. Some insisted that “exergy,” the term I used for the capacity of energy to do work in a given system, didn’t mean that – though, oddly enough, others who appeared to have just as solid a background in the sciences insisted that it did indeed mean that. Others insisted that I was overgeneralizing, or using sloppy terminology, or simply wrong.

Now I’m quite cheerfully ready to be told that my use of scientific terminology is incorrect. I’m not a physicist, and I don’t even play one on TV; my background is in history and the humanities, and my knowledge of science, with a few exceptions (mostly in ecology and botany), comes from books written for intelligent laypeople. Still, there’s a difference between a misused term and an inaccurate concept, and two things lead me to think that whether or not the former is involved here, the latter is not. The first is the history of alternative energy technologies, of which the trajectory of solar energy traced above is only one part. The second is that I heard from quite a few people who depend on the diffuse energy available from the Sun in their own homes and lives, and thus have a more direct understanding of the matter, and all of them grasped my point instantly and illustrated it with examples from their own experience.

Several additional examples of the same distinction also turned up as I researched the subject. Back most of thirty years ago, when I was studying appropriate technology in college, one of the standard examples the professors used to explain thermodynamic limits was ordinary geothermal heat. This is the sort of thing you get in a place where there isn’t any underground magma close enough to the surface to set off geysers and make commercial geothermal electric plants an option; it’s the gentle heat that filters up through the Earth’s crust from the mantle many miles below. In terms of sheer quantity of thermal energy, it looks really good, but away from hot spots, it’s very diffuse – and as a result, you can show pretty easily by way of Carnot’s law that the energy you’d get from pumping the heat to the surface and using it to drive a heat engine will be less than the energy you need to run the pumps. On the other hand, if all you want is diffuse heat, you’re looking in the right place – and in fact hooking up a heat pump to a hole in the ground and using it for domestic heating and cooling has proven to be a very efficient technology in recent years.
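The arithmetic behind that classroom example is easy to reproduce. The short sketch below applies Carnot’s law with illustrative temperatures (low-grade rock at 30°C against 15°C surface air, and a genuine hot spot at 250°C for comparison); the figures are assumptions chosen for the sake of the example, not measurements:

```python
# Carnot's law: the maximum fraction of heat convertible to work is
# 1 - T_cold / T_hot, with both temperatures in kelvins.
def carnot_limit(t_hot_c, t_cold_c):
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# Illustrative figures: diffuse geothermal rock at 30 C, surface air at 15 C.
eta_diffuse = carnot_limit(30.0, 15.0)
print(f"Carnot limit, 30 C rock vs 15 C air: {eta_diffuse:.1%}")

# A commercial plant sitting on a genuine hot spot (say, 250 C brine
# against the same 15 C surface) has far more room to work with.
eta_hotspot = carnot_limit(250.0, 15.0)
print(f"Carnot limit, 250 C brine vs 15 C air: {eta_hotspot:.1%}")
```

With barely five percent of the diffuse heat theoretically recoverable as work, even modest pumping losses can swallow the whole output, which is exactly the point those professors were making.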

The same thing is true for OTEC, another of those ideas whose time is always supposedly about to come and never quite arrives. The acronym stands for Oceanic Thermal Energy Conversion, and it does with the thermal difference between deep and surface water what a geothermal power plant does with the thermal difference between hot rocks half a mile down and the cold surface of the planet. You can, in fact, run a heat engine on OTEC power, but it takes about 2/3 of the power you generate to run the pumps. That means only about a third of the gross output is left over, even before factoring in the energy cost of building the OTEC plant; in economic terms, what it means is that you run on government grants or you go broke. On the other hand, there’s at least one resort in the Pacific that uses OTEC for the far simpler task of air conditioning. Again, if all you need to do is move diffuse heat around, a diffuse energy source is more than adequate; if you need to do something more complex you may well have problems.
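The OTEC arithmetic can be sketched the same way. The temperatures below (25°C tropical surface water, 5°C deep water) are illustrative assumptions; the two-thirds pumping load is the figure cited above:

```python
# Carnot ceiling for an OTEC plant, using illustrative temperatures.
T_SURFACE = 25.0 + 273.15   # warm tropical surface water, kelvins
T_DEEP = 5.0 + 273.15       # cold deep water, kelvins
carnot = 1.0 - T_DEEP / T_SURFACE
print(f"Carnot ceiling: {carnot:.1%}")   # only a few percent, at best

# If roughly 2/3 of gross output goes to running the pumps,
# only a third of what the plant generates is left over.
gross_kw = 100.0
pump_kw = gross_kw * 2.0 / 3.0
net_kw = gross_kw - pump_kw
print(f"Net output: {net_kw:.0f} kW of every {gross_kw:.0f} kW generated")
```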

Let’s take a closer look at why that happens. The core concept to grasp here is that converting energy from one form to another is, for reasons hardwired into the laws of thermodynamics, highly inefficient in most cases. That’s what an engine does; it takes in thermal energy – that is, heat – and puts out mechanical energy – in most cases, a shaft spinning around very fast, which you hook up to something else like a drive train, a propeller, or a generator. Of all the energy released by burning gasoline in an average automobile engine, which is one form of heat engine, around 25% goes into turning the crankshaft; the rest is lost as diffuse heat. If you’re smart and careful, you can get a heat engine to reach efficiencies above 50%; a modern combined-cycle power plant working at top efficiency can hit 60%, but that’s about as good as the physics of the process will let you get.
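Those efficiency figures can be set against the hard ceiling Carnot’s law imposes. The hot-side and cold-side temperatures in the sketch below are rough ballpark assumptions, not engineering data:

```python
# Comparing typical real heat-engine efficiencies with their Carnot
# ceilings (1 - T_cold / T_hot). Temperatures are rough assumptions.
engines = {
    # name: (hot-side kelvins, cold-side kelvins, typical real efficiency)
    "gasoline auto engine": (1200.0, 300.0, 0.25),
    "combined-cycle plant": (1700.0, 300.0, 0.60),
}

for name, (t_hot, t_cold, real) in engines.items():
    ceiling = 1.0 - t_cold / t_hot
    print(f"{name}: Carnot ceiling {ceiling:.0%}, typical {real:.0%}")
```

Real engines fall well short of even the Carnot ceiling, since friction, incomplete combustion, and conversion losses all take their cut.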

Most other ways of turning one form of energy into another are no more efficient, and many of them are much less efficient than heat engines. (That’s why heat engines are used so extensively in modern technology; inefficient as they are, they’re better than most of the alternatives.) The reason nobody worries much about these efficiencies is that we’re used to fossil fuels, and fossil fuels contain so much potential heat in so concentrated a form that the inefficiencies aren’t a problem. 75% of the potential energy in the gas you pour into your car gets turned into waste heat and dumped via the radiator, but you don’t have to care; there’s still more than enough to keep you zooming down the road.

With alternative energy sources, though, you have to care. That’s why the difference between diffuse and concentrated energies matters so crucially; not only specific technologies, but whole classes of technologies on which the modern industrial world depends, embody such massive inefficiencies that diffuse energy sources won’t do the job. Lose 75% of the energy in a gallon of gasoline to waste heat, and you can shrug and pour another gallon in the tank; lose 75% of the energy coming out of a solar collector, and you may well have passed the point at which the solar collector no longer does enough work to be worth the energy and money cost to build and maintain it. The one kind of energy into which you can transform other kinds of energy at high efficiencies — sometimes approaching 100% – is relatively diffuse heat. This is why using sunlight to heat water, air, food, or what have you to temperatures in the low three digits on the Fahrenheit scale is among the most useful things you can do with it, and why, when you’re starting out with diffuse heat, the most useful thing you can do with it is generally to use the energy in that form.

What this means, ultimately, is that the difference between an industrial civilization and what I’ve called an ecotechnic civilization isn’t simply a matter of plugging some other energy source in place of petroleum or other fossil fuels. It’s not even a matter of downscaling existing technologies to fit within a sparser energy budget. It’s a matter of reconceiving our entire approach to technology, starting with the paired recognitions that the very modest supply of concentrated energy sources we can expect to have after the end of the fossil fuel age will have to be reserved for those tasks that still need to be done and can’t be done any other way, and that anything that can be done with a diffuse energy source needs to be done with a diffuse energy source if it’s going to be done at all.

A society running on diffuse energy resources, in other words, is not going to make use of anything like the same kinds of technology as a society running on concentrated energy resources, and attempts to run most existing technologies off diffuse renewable sources are much more likely to be distractions than useful options. In the transition between today’s technology dominated by concentrated energy and tomorrow’s technology dominated by diffuse heat, in turn, some of the most basic assumptions of contemporary economic thought – and of contemporary life, for that matter – are due to be thrown out the window. We’ll discuss one of those next week.

Wednesday, March 10, 2010

Barbarism and Good Brandy

A taste for irony is a useful habit to cultivate if you happen to write about energy issues in the declining years of a civilization defined by its extravagant use of energy, on the one hand, and the dubious logic it uses to justify that extravagance on the other. One of the things you can count on, if that description fits you, is that any time you discuss one of the fallacies that has helped back that civilization into a corner, plenty of readers will respond with comments that demonstrate the fallacy in question more clearly than any of your examples could have done.

Last week’s Archdruid Report post was no exception to that rule. Regular readers will recall that it focused on the difference between the quantity of energy in an energy source and the concentration of energy in that energy source, and pointed out that the latter, not the former, determines the exergy in the source – that is, the amount of work that the energy source is able to perform. True to form, I fielded a flurry of comments that took issue with this, or with the conclusions I drew from it, on the grounds that I wasn’t paying enough attention to the quantity of energy in some favorite energy source.

The example I’d like to highlight here is far from the worst I received. Quite the contrary; it’s precisely because it’s a thoughtful response from an equally thoughtful reader that it makes a good starting point for this week’s discussion. The reader in question pointed out that the photons that reach the Earth from the Sun each contain exactly as much energy as they did when they left the solar atmosphere, and argued on that basis that a point I made about the exergy of solar power was at least open to question.

He’s quite right about the photons, of course. The energy contained in a photon is defined by its frequency, and that remains pretty much the same (barring a bit of gravitational redshifting) from the moment it spins out of the thermonuclear maelstrom of the Sun until the moment eight minutes later when it arrives on earth and gets absorbed by a green leaf, let’s say, or the absorbent surface in a solar water heater. Once again, though, that’s a matter of the quantity of energy, not the concentration. The concentration, in this case, is determined by the rate at which photons impact the leaf or the solar panel; that depends on how widely spread the photons are, and that depends, in turn, on how far the leaf and the panel are from the Sun.

Think of it this way. The individual photons that heat the planet Mercury each contain, on average, the same quantity of energy as the individual photons that heat the planet Neptune. Is Neptune as warm as Mercury? Not hardly, and the reason is that by the time they get out to the orbit of Neptune, the Sun’s rays are spread out over a much vaster area, so each square foot of Neptune gets a lot fewer photons than a corresponding square foot of Mercury. The photons are less concentrated in space, and that, not the quantity of energy they each contain, determines how much of the hard work of heating a planet they are able to do. There are stars in the night sky that produce photons far more energetic, on average, than those released by the Sun, but you’re not going to get a star tan from their light!
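For readers who want the numbers: each photon’s energy is fixed by its frequency, but the flux falls off with the square of the distance. The sketch below uses the standard solar constant at Earth’s orbit and the mean orbital distances of Mercury and Neptune:

```python
# Each photon carries E = h * f regardless of how far it has traveled;
# what changes with distance is the flux (watts per square meter),
# which falls off as 1 / d^2.
SOLAR_CONSTANT_EARTH = 1361.0   # W/m^2 at 1 AU (standard figure)
MERCURY_AU = 0.39               # mean orbital distance, AU
NEPTUNE_AU = 30.1

def flux_at(distance_au):
    """Solar flux at a given distance, by the inverse-square law."""
    return SOLAR_CONSTANT_EARTH / distance_au**2

print(f"Mercury: {flux_at(MERCURY_AU):,.0f} W/m^2")
print(f"Neptune: {flux_at(NEPTUNE_AU):,.1f} W/m^2")
print(f"Ratio:   {flux_at(MERCURY_AU) / flux_at(NEPTUNE_AU):,.0f} to 1")
```

Several thousand times more solar energy lands on each square foot of Mercury than on the same area of Neptune, even though the individual photons are identical.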

This may seem like an obvious point. Still, it deserves restatement, because so many contemporary plans for using solar energy ignore it, fixating on the raw quantity of solar energy that reaches the Earth rather than the very modest concentration of that energy. A habit of comforting abstraction feeds that sort of thinking. It’s easy to insist, for example, that the quantity of solar energy falling annually on some fairly small fraction of the state of Nevada, let’s say, is equal to the quantity of energy that the US uses as electricity each year, and to jump from there to insist that if we just cover a hundred square miles of Nevada with mirrors, so all that sunlight can be used to generate steam, we’ll be fine.

What gets misplaced in appealing fantasies of this sort? Broadly speaking, three things.

The first is that familiar nemesis of renewable energy schemes, the problem of net energy. It would take a pretty substantial amount of highly concentrated energy to build that hundred square mile array of mirrors, counting the energy needed to manufacture the mirrors, the tracking assemblies, the pipes, the steam turbines, and all the other hardware, as well as the energy needed to produce the raw materials that go into them – no small amount, that latter. It would take another very substantial amount of concentrated energy, regularly supplied, to keep it in good working order amid the dust, sandstorms, and extreme temperatures of the Nevada desert; and if the amount of energy produced by the scheme comes anywhere close to what’s theoretically possible, that would probably be the only time in history this has ever occurred with a very new, very large, and very experimental technological project. Subtract the energy cost to build and run the plant from the energy you could reasonably (as opposed to theoretically) expect to get out of it, and the results will inevitably be a good deal less impressive than they look on paper.

The second is another equally common nemesis of renewable energy schemes, the economic dimension. Plenty of renewables advocates say, in effect, that people want electricity, and a hundred square miles of mirrors in Nevada will provide it, so what are we waiting for? This sort of thinking is extremely common, of course; mention that any popular technology you care to name might not be economically viable in a future of energy and resource constraints, and you’re sure to hear plenty of arguments that it has to be economically feasible because, basically, it’s so nifty. There’s a reason for that – it’s the sort of thinking that works in an age of abundance, the kind of age that’s coming to an end around us right now.

The end of that age, though, makes such thinking a hopeless anachronism. In an age of energy and resource constraints, any proposed use of energy and resources must compete against all other existing and potential uses for a supply that isn’t adequate to meet them all. Market forces and political decisions both play a part in the resulting process of triage. If investing billions of dollars (and, more importantly, the equivalent amounts of energy and resources) in mirrors in the Nevada desert doesn’t produce as high an economic return as other uses of the same money, energy, and resources, the mirrors are going to draw the short end of the stick. Political decisions can override that calculus to some extent, but impose an equivalent requirement: if investing that money, energy, and resources in mirrors doesn’t produce as high a political payoff as other uses of the same things, once again, the fact that the mirrors might theoretically allow America’s middle classes to maintain some semblance of their current lifestyle is not going to matter two photons in a Nevada sandstorm.

Still, the problems with net energy and economic triage both ultimately rest on thermodynamic issues, because the exergy available from solar energy simply isn’t that high. It takes a lot of hardware to concentrate the relatively mild heat the Earth gets from the Sun to the point that you can do more than a few things with it, and that hardware entails costs in terms of net energy as well as economics. It’s not often remembered that big solar power schemes, of the sort now being proposed, were repeatedly tried from the late 19th century on, and just as repeatedly turned out to be economic duds.

Consider the solar engine devised and marketed by American engineer Frank Shuman in the first decades of the 20th century. The best solar engine of the time, and still the basis of a good many standard designs, it was an extremely efficient device that focused sunlight via parabolic troughs onto water-filled pipes that drove an innovative low-pressure steam engine. Shuman’s trial project in Meadi, Egypt, used five parabolic troughs 204 feet long and 13 feet wide. The energy produced by this very sizable and expensive array? All of 55 horsepower. Modern technology could do better, doubtless, but not much better, given the law of diminishing returns that affects all movements in the direction of efficiency, and most likely not enough better to matter.
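It’s worth translating Shuman’s figures into modern units. The sketch below uses only the numbers given in the paragraph above, plus standard unit conversions:

```python
# Power density of Shuman's trial array, from the figures above:
# five parabolic troughs, each 204 ft x 13 ft, producing 55 horsepower.
TROUGHS = 5
LENGTH_FT, WIDTH_FT = 204.0, 13.0
OUTPUT_HP = 55.0

SQFT_PER_M2 = 10.7639
WATTS_PER_HP = 745.7

area_ft2 = TROUGHS * LENGTH_FT * WIDTH_FT
area_m2 = area_ft2 / SQFT_PER_M2
output_w = OUTPUT_HP * WATTS_PER_HP

print(f"Collector area: {area_ft2:,.0f} sq ft")
print(f"Power density:  {output_w / area_m2:.0f} W per square meter")
# Peak desert sunlight delivers roughly 1,000 W per square meter,
# so the overall sunlight-to-shaft-power conversion was a few percent.
```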

Does this mean that solar energy is useless? Not at all. What it means is that a relatively low-exergy source of energy, such as sunlight, can’t simply be used to replace a relatively high-exergy source such as coal. That’s what Shuman was trying to do; like most of the solar pioneers of his time, he’d done the math, realized that fossil fuels would run out in the not infinitely distant future, and argued that they would have to be replaced by solar energy: “One thing I feel sure of,” he wrote, “and that is that the human race must finally utilize direct sun power or revert to barbarism.”

He may well have been right, but trying to make lukewarm sunlight do the same things as the blazing heat of burning coal was not the way to solve that problem. The difficulty – another of those awkward implications of the laws of thermodynamics – is that whenever you turn energy from one form into another, you inevitably lose a lot of energy to waste heat in the process, and your energy concentration – and thus the exergy of your source – goes down accordingly. If you have abundant supplies of a high-exergy fuel such as coal or petroleum, that doesn’t matter enough to worry about; you can afford to have a great deal of the energy in a gallon of gasoline converted into waste heat and pumped out into the atmosphere by way of your car’s radiator, for example, because there’s so much exergy to spare in gasoline that you have more than enough left over to send your car zooming down the road. With a low-exergy source such as sunlight, you don’t have that luxury, which is why Shuman’s solar plant, which covered well over 13,000 square feet, produced less power than a very modest diesel engine that cost a small fraction of the price and took up an even smaller fraction of the footprint.

This is also why those solar energy technologies that have proven to be economical and efficient are those that minimize conversion losses by using solar energy in the form of heat. That’s the secret to using low-exergy sources: heat is where exergy goes to die, and so if you let it follow that trend, you can convert a relatively diffuse source into heat at very high efficiencies. The heat you get is fairly mild compared to (say) burning gasoline, but that’s fine for practical purposes. It doesn’t take intense heat to raise a bathtub’s worth of water to 120°F, warm a chilly room, or cook a meal, and it’s precisely tasks like these that solar energy and other low-exergy sources do reliably and well.

It’s interesting to note that Augustin Mouchot, the great 19th century pioneer of solar energy, kept running up against this issue in his work. Mouchot began working with solar energy out of a concern that France, handicapped by its limited reserves of coal, needed some other energy source to compete in the industrial world of the late 19th century. He built the first successful solar steam engines, but they faced the same problems of concentration that made Shuman’s more sophisticated project an economic flop; a representative Mouchot engine, his 1874 Tours demonstration model, used 56 square feet of conical reflector to focus sunlight on a cylindrical boiler, and generated all of 1/2 horsepower.

Yet some of his other solar projects were quite a bit more successful. For many years, the French Foreign Legion relied on one of his inventions in their North African campaigns: a collapsible solar oven that could be packed into a box 20 inches square. It had the same general design as the engine, a conical reflector focusing sunlight onto a cylinder that pointed toward the sun, but it worked, and worked well; the Mouchot oven could cook a large pot roast from raw to well done in under half an hour. Another project, a solar still, proved equally successful, converting wine into brandy at a rate of five gallons a minute – rather good brandy at that, “bold and agreeable to the taste,” Mouchot wrote proudly, “and with...the savor and bouquet of an aged eau-de-vie.” Again, notice the difference: low-exergy sunlight doesn’t convert well to mechanical motion via a steam engine, due to the inevitable conversion losses, but it’s very efficient as a source of heat.

The implications of this difference circle back to a point made by E.F. Schumacher many years ago, and discussed several times already in these essays: the technology that’s useful and appropriate in a setting of energy and resource constraints – for example, the Third World nations of his time, or the soon-to-be-deindustrializing nations of ours – is not the same as the technology that’s useful and appropriate in a setting of abundance – for example, the industrial nations of the age that is ending around us. Centralized power generation is a good example. If you’ve got ample supplies of highly concentrated energy, it makes all the sense in the world to build big centralized power plants and send the power thus produced across hundreds or thousands of miles to consumers; you’ll lose plenty of energy to waste heat at every point along the way, especially in the conversion of one form of energy to another, but if your sources are concentrated and abundant, that doesn’t matter much.

If concentrated energy sources are scarce and rapidly depleting, on the other hand, this sort of extravagance can no longer be justified, and after a certain point, it can no longer be afforded. Since much of the energy that people actually use in their daily lives takes the form of relatively mild heat – the sort that will heat water, warm a house, cook a meal, and so on – it makes more sense in an energy-poor society for people to gather relatively diffuse energy right where they are, and put that to work instead. The same point can be made with equal force for a great many industrial processes; when what you need is heat – and for plenty of economically important activities, such as distilling brandy, that’s exactly what you need – sunlight, concentrated to a modest degree by way of reflectors or fluid-heating panels, will do the job quite effectively.

This is another reason why Schumacher’s concept of intermediate technology, and a great many of the specific technologies he and his associates and successors created, provide a resource base of no little importance as the world’s industrial societies stumble down the far slopes of Hubbert’s peak. When concentrated energy is scarce, local production of relatively diffuse energy for local use is a far more viable approach for a great many uses. This will allow the highly concentrated energies that are left to be directed to those applications that actually need them, while also shielding local communities from the consequences of the failure or complete collapse of centralized systems. The resulting economy may not have much resemblance to today’s fantasies of a high-tech future, but the barbarism Frank Shuman feared is not the only alternative to that future; there’s something to be said for a society, even a relatively impoverished and resource-scarce one, that can still reliably provide its inhabitants with hot baths, warm rooms in winter, and well-done pot roasts – and, of course, good brandy.

Wednesday, March 03, 2010

An Exergy Crisis

In last week’s Archdruid Report post, I discussed the difference between energy and exergy, or in slightly less jargon-laden terms, between the quantity of energy and the concentration of energy. It’s hard to think of a more critical difference to keep in mind if you’re trying to make sense of the predicament of modern industrial civilization, but it’s even harder to think of a point more often missed in the rising spiral of debates about that predicament.

The basic principle is simple enough, and bears repeating here: the amount of work you get out of a given energy source depends, not on the quantity of energy in the source, but on the difference in energy concentration between the energy source and the environment. That’s basic thermodynamics, of the sort that every high school student used to learn in physics class back in those far-off days when American high school students took physics classes worth the name. Put that principle to work, though, and the results are often highly counterintuitive; this probably has more than a little to do with the way that even professional scientists miss them, and fumble predictions as a result.

The current brouhaha over anthropogenic climate change offers a good example. There’s been a great deal of high-grade fertilizer heaped over the issues by propaganda factories on all sides of that debate, but beneath it all is the tolerably well documented fact that we’re in the middle of a significant shift in global climate, focused on the north polar region. The causes of that shift are by no means entirely settled, but it seems a little silly to insist, as some people do, that the mass dumping of greenhouse gases into the atmosphere by humanity can’t have anything to do with it – or, for that matter, that it’s a good idea to keep on dumping those gases into an atmospheric system that may already be dangerously unstable for reasons of its own.

Still, for the next decade or more, that bad idea is very likely to remain standard practice around the world, and one reason for that is that climate change activists have shot themselves in the foot. No, I’m not talking about the recent flurry of revelations that some IPCC scientists diddled the facts to make a good but undramatic case more mediagenic. Nor am I talking about the awkward detail that the IPCC scenarios assume, in the teeth of all geological evidence, that the world can keep increasing the amount of fossil fuels it extracts and burns straight through to 2100. The problem goes deeper than that, down to the decision to define the crisis as “global warming.” That seems sensible enough – after all, we’re talking about an increase in the total quantity of heat in the Earth’s atmosphere – but here as elsewhere, the fixation on quantity misses the crucial point at issue.

I’m not generally a fan of Thomas Friedman, but he scored a bullseye in his book Hot, Flat, and Crowded when he pointed out that what we’re facing isn’t global warming but “global weirding:” not a simple increase in temperature, but an increase in unexpected and disruptive weather events. As the atmosphere heats up, the most important effect of that shift isn’t the raw increase in temperature; rather, it’s the increase in the difference in energy concentration between the atmosphere and the oceans. The thermal properties of water make the seas warm up much more slowly than the air and the Earth’s land surface, and so even a fairly modest change in the quantity of heat causes a much more significant change in exergy. Again, it’s exergy rather than energy that determines how much work a system can do, and the work that the Earth’s atmosphere does is called “weather.” Thus the most visible result of a relatively rapid rise in the heat concentration of the atmosphere isn’t a generalized warming. Rather, it’s an increase in extreme weather conditions on both ends of the temperature scale.

This isn’t a new point. It has been made repeatedly by a number of scientists and, interestingly enough, by large insurance companies as well. Munich Re, for example, pointed out a few years back that at the current rate of increase, the annual cost of natural disasters caused by global climate change would equal the gross domestic product of the world well before the end of the 21st century. Had climate advocates taken that as their central theme, this winter’s abnormally harsh storms in the eastern half of the US would have provided plenty of grist for their mills; even hardcore skeptics, as they shoveled snow off their driveways for the fourth or fifth time in a row, might have started to wonder if there was something to the claim that greenhouse-gas dumping was causing the weather to go wild. Instead, seduced by our culture’s fixation on quantity, climate advocates defined the problem purely as a future of too much heat, and those same skeptics, shoveling those same driveways, are rolling their eyes and wishing that a little of that global warming would show up to help them out.

It’s probably too late for climate change activists to switch their talking points from global warming to global weirding and be believed by anybody who isn’t already convinced, and so we’ll likely have to wait until the first really major global climate disaster before any significant steps get taken. (Given the latest reports from the Greenland ice cap, that may not be too many decades in the future, and any of my readers who live within fifty feet or so of sea level might find it advisable to relocate to higher ground.) Still, the same confusion between energy and exergy impacts the crisis of our time in other ways, and some of those are central to the themes this blog has been exploring in recent months.

One of the common ways to avoid thinking about our predicament, as I mentioned last week, is to cite the quantity of energy that arrives on Earth by way of sunlight every day, and note that it’s vastly greater than the quantity of energy our civilization uses in a year. That’s true enough, but it misses the point, which is that the energy in that sunlight has very modest amounts of exergy by the time it crosses 93 million miles of space to get to us, and it can therefore do only modest amounts of work. Strictly speaking, we don’t face an energy crisis as fossil fuels run short; what we face is an exergy crisis – a serious shortage of energy in highly concentrated forms. That’s a problem, because nearly every detail of daily life in a modern industrial society depends on using highly concentrated energy sources.
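For what it’s worth, the quantity side of that comparison is easy to sketch. Using the standard solar constant and a round figure for world energy consumption circa 2010 (both assumed here for illustration), the sunlight intercepted by the Earth in a single day works out to roughly thirty times humanity’s annual energy use:

```python
import math

SOLAR_CONSTANT = 1361.0   # watts per square meter at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # meters
WORLD_ANNUAL_USE = 5e20   # joules per year; round figure for circa 2010

# Sunlight intercepted by the Earth's cross-section in one day:
cross_section = math.pi * EARTH_RADIUS ** 2          # square meters
daily_sunlight = SOLAR_CONSTANT * cross_section * 86400  # ~1.5e22 joules

print(round(daily_sunlight / WORLD_ANNUAL_USE))  # 30
```

The quantity really is vast; the catch, again, is that it arrives spread thinly over the whole face of the planet, at a concentration far below that of a lump of coal or a tank of gasoline.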

Longtime readers of this blog will recall that calling something a problem has certain definite implications. A problem, at least potentially, has a solution; that’s what differentiates it from a predicament, which cannot be solved and simply has to be lived with. The depletion and eventual exhaustion of fossil fuels, and the absence of any sign of an abundant high-exergy replacement for them in this small corner of the cosmos, is a predicament. The dependence on these fuels of most of the activities of daily life in the industrial world is a problem, because a great many of those activities don’t actually need anything like the amount of exergy we put into them.

Here’s an example. Nearly every home in the industrial world has hot water on tap. That’s by no means a pointless luxury; the contemporary habit of washing dishes, clothes, and bodies with ample amounts of hot water and soap has eliminated whole categories of illnesses that plagued our ancestors not that long ago. A very large fraction of those homes get that hot water by burning fossil fuels, either right there at the hot water heater, or at a power plant that uses the heat to generate the electricity that does the heating. A society that has ample supplies of high-exergy fossil fuels can afford to do that; a society running out of exergy is likely to face increasing troubles doing so.

There’s a crucial point not often recognized, though, which is that it doesn’t take that much exergy to heat a tank full of water from ambient temperature to 120°F or so. The same thing can be done very effectively by energy sources that aren’t very concentrated, such as sunlight.
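A rough figure may be useful here. Heating a typical 40-gallon tank from cool inlet water to 120°F takes on the order of six kilowatt-hours of low-temperature heat – well within the reach of sunlight. (The tank size and temperatures below are illustrative assumptions, not figures from this post.)

```python
# Energy to heat a ~40-gallon (about 150 liter) tank from roughly
# 15 C inlet water to 120 F (about 49 C). Illustrative figures.

SPECIFIC_HEAT_WATER = 4186.0  # J per kg per kelvin
LITERS = 150.0                # ~40-gallon tank; 1 liter of water ~ 1 kg
DELTA_T = 49.0 - 15.0         # temperature rise in kelvins

joules = LITERS * SPECIFIC_HEAT_WATER * DELTA_T
kwh = joules / 3.6e6          # 1 kWh = 3.6 million joules
print(round(kwh, 1))  # 5.9
```

Six kilowatt-hours is a trifling amount of heat at a trifling temperature; burning fossil fuels at flame temperatures to provide it is a classic case of using concentrated exergy where diffuse energy would do.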

Enter the solar hot water heater.

This is arguably the most mature and successful solar technology we’ve got right now. The process is simple: one of several different kinds of collector gathers heat from the sun and transmits it either to water, in places that don’t get freezing temperatures, or to an antifreeze solution in places that do. In a water system, the hot water goes from the collector to an insulated tank, and eventually to the hot water faucet; in an antifreeze system, the antifreeze circulates through a heat exchanger that passes the heat to water, which then goes into an insulated tank to wait for its moment of glory. In most parts of the United States, a well-designed solar hot water system will cut a home’s energy use to heat water by 70%; in the Sun Belt, it’s not at all uncommon for a system of this sort to render any other hot water heater unnecessary.

Now it will doubtless already have occurred to my readers that installing a solar hot water system in their homes will not save the world. What it will do, on the other hand, is take part of the work now done by highly concentrated energy sources – most of which are rapidly depleting, and can be expected to become more expensive in real terms over the decades to come – and hand it over to a readily available energy source of lower concentration that, among other things, happens to be free. That’s an obvious practical gain for the residents of the house, and it’s also a collective gain for the community and society, since remaining supplies of high-exergy fossil fuels can be freed up for more necessary uses or, just possibly, left in the ground where they arguably belong.

It’s curious, to use no stronger word, that so eminently practical a step as installing solar hot water systems has received so little attention in the peak oil and climate change communities. It’s all the more curious because the US government, which so often seems incapable of encountering a problem without doing its level best to make it worse, has actually done something helpful for a change: there are very substantial federal income tax benefits for installing a residential solar hot water system. Why, then, haven’t solar hot water heaters blossomed like daisies atop homes across the country? Why haven’t activists made a push to define this proven technology as one part of a meaningful response to the crisis of our time?

It’s an interesting question to which I don’t have a definite answer. Partly, I think it ties into the weird disconnect between belief and action that pervades the apocalyptic end of contemporary culture. Of the sizable number of people in today’s America who say they believe that the world is coming to an end in 2012, for example, how many have stopped putting money into their retirement accounts? To judge by what little evidence I’ve been able to gather, not very many. In the same way, of the people who say they recognize that today’s extravagant habits of energy use are only possible because of a glut of cheap abundant fossil fuels, and will go away as fossil fuels deplete, those who are taking even basic steps to prepare themselves for a future of scarcity and socioeconomic disruption make up an uncomfortably small fraction. It’s hard to imagine passengers on a sinking ship glancing over the side to see the water rising, and going back to their game of shuffleboard on the deck, but a similar behavior pattern is far from rare these days.

Still, I think part of the issue is the same fixation on quantity I’ve discussed already. Solar hot water heaters don’t produce, or save, a great quantity of energy. Water heating uses around 15% of an average home’s energy bill, and so a solar hot water system that replaces 70% of that will account for a bit more than 10% of home energy use. (This is still enough to pay for most professionally installed solar hot water systems in 3 to 7 years, mind you.) If every home in America put a solar hot water heater on its roof, the impact on our total national energy consumption would be noticeable, but in terms of raw quantity, it wouldn’t be huge.
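The arithmetic is easy enough to check, and a hypothetical payback estimate can be run the same way; the dollar figures below are illustrative assumptions, not numbers from this post:

```python
# Checking the savings arithmetic, plus a payback sketch with
# hypothetical dollar figures.

water_share = 0.15    # water heating's share of home energy use
solar_offset = 0.70   # fraction of that the solar system replaces

total_saved = water_share * solar_offset
print(round(total_saved, 3))  # 0.105 -- "a bit more than 10%"

annual_energy_bill = 2400.0   # hypothetical annual energy spending, dollars
net_system_cost = 1500.0      # hypothetical cost after tax incentives
payback_years = net_system_cost / (annual_energy_bill * total_saved)
print(round(payback_years, 1))  # 6.0 -- within the 3-to-7-year range
```

Change the assumed costs and the payback shifts accordingly, but the structure of the calculation stays the same: a one-time outlay traded against a steady slice of every future energy bill.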

Still, this misses at least three important points. First, of course, installing a solar hot water system can very easily be one piece of a broader program of energy conservation with a much larger impact. Knock 10% off household energy use with a solar hot water system, another 10% by insulating, weatherstripping, and the like, another 10% with an assortment of other simple energy-saving technologies (any halfway decent book on energy conservation from the Seventies has plenty of suggestions), and another 20% with lifestyle changes, and your home will be getting by with half the concentrated energy it uses right now. If even a large minority of homes in America took these steps, or others with similar effects, the impact on national exergy use would be very substantial indeed.
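One small caveat on that arithmetic: whether those percentages add up to exactly half depends on whether each saving is counted against the original energy bill or against what remains after the previous steps. Either way, the result lands in the same ballpark:

```python
# The four conservation steps from the paragraph above, as percentages.
steps_pct = [10, 10, 10, 20]

# Each figure counted against the original bill (the additive reading):
print(sum(steps_pct))  # 50 -- half of current use

# Each cut applied sequentially to whatever is left after the last one:
remaining = 1.0
for pct in steps_pct:
    remaining *= (1.0 - pct / 100.0)
print(round(1.0 - remaining, 3))  # 0.417 -- a bit over 40%
```

Forty-some percent or fifty, the point stands: a handful of unglamorous measures, stacked together, cut a home’s demand for concentrated energy roughly in half.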

Second, there’s a very large and underappreciated difference between essential and nonessential energy uses, and it’s one that many of us will learn to recognize in the challenging years ahead. A great deal of energy use in America today is nonessential – think for a moment of all the energy currently devoted to the tourism industry, which is a very sizable sector of the US economy these days, and could be shut down tomorrow without impacting much of anything but the unemployment rolls – and a very large amount of that will go away as America slides down the curve of energy descent toward its near-future status as a Third World country. Whether or not hot water is strictly essential, its direct practical benefits in terms of health and comfort put it a good deal closer to the core, and that makes finding low-exergy ways to provide it particularly important.

Third, as I’ve already suggested, we face an exergy shortage rather than an energy shortage. That doesn’t make our predicament any less severe, mind you. A strong case can be made that available exergy places a hard upper limit on the human population of the planet; as our supplies of exergy diminish, so will the human population, and at this point it’s all too likely that most of that reduction will happen in the traditional manner, via those four unwelcome guys on horseback. It does mean, though, that individuals, families, and communities that take steps to meet as many of their energy needs as possible using relatively low-exergy energy sources can have a disproportionate impact on the way that the future unfolds.

I’ve argued elsewhere that Jevons’ Paradox – the rule that gains in the efficiency with which a resource is used tend to increase the use of the resource – only applies when cost is the only restriction to the use of the resource. When use of a resource is declining due to factors external to the economy, such as geological limits, gains in efficiency lessen the economic and social impact of shortages and buy time for a more gradual decline. Solar water heating is one example of a technology that can help our communities and societies make constructive use of that effect, and it’s also a technology that can be put to use by individuals right now. I’ll be discussing other options of the same kind in the next few posts.