Industrial civilization is a complicated thing, and its decline and fall bids fair to be more complicated still, but both rest on the refreshingly simple foundations of physical law. That’s crucial to keep in mind, because the raw emotional impact of the unwelcome future breathing down our necks just now can make it far too easy to retreat into one form or another of self-deception.
Plenty of the new energy technologies discussed so enthusiastically on the internet these days might as well be poster children for this effect. I think most people in the peak oil community are aware by now, for example, that the sweeping plans made for ethanol production from American corn as a solution to petroleum depletion neglected one minor but important detail: all things considered, growing corn and turning it into ethanol uses more energy than you get back from burning the ethanol. It’s not at all surprising that this was missed, for the same variety of bad logic underlies an astonishing amount of our collective conversation about energy these days.
The fundamental mistake that drove the ethanol boom and bust seems to be hardwired into our culture. Here’s an example. Most bright American ten-year-olds, about the time they learn about electric motors and generators, come up with the scheme of hooking up a motor and a generator to the same axle, running the electricity from the generator back to the motor, and using the result to power a vehicle. It seems perfectly logical; the motor drives the generator, the generator powers the motor, perpetual motion results, you hook it up to wheels or the like, and away you drive on free energy. Yes, I was one of those ten-year-olds, and somewhere around here I may still have one of the drawings I made of the car I planned to build when I turned sixteen, using that technology for the engine.
Of course it didn’t work. Not only couldn’t I get the device to power my bicycle – that was how I planned on testing the technology out – I couldn’t even make the thing run without a load connected to it at all. No matter how carefully I hooked up a toy motor to a generator salvaged from an old bicycle light, fitted a flywheel to one end of the shaft, and gave it a spin, the thing turned over a few times and then slowed to a halt. What interests me most about all this in retrospect, though, is that the adults with whom I discussed my project knew that it wouldn’t work, and told me so, but had the dickens of a time explaining why it didn’t work in terms that a bright ten-year-old could grasp.
This isn’t because the subject is overly complicated. The reason why perpetual motion won’t work is breathtakingly simple; the problem is that the way most people nowadays think about energy makes it almost impossible to grasp the logic involved. Most people nowadays think that since energy can be defined as the capacity to do work, if you have a certain amount of energy, you can do a certain amount of work with it. That seems very logical; the problem is that the real world doesn’t work that way.
In the real world, you have to take at least two other things into account. The first of them, of course, has seen a fair amount of discussion in peak oil circles: to figure out the effective energy yield of any energy source, you have to subtract the amount of energy needed to extract that energy source and put the energy in it to work. That’s the problem of net energy, and it’s the trap that’s clamped tightly onto the tender portions of the American ethanol industry; ethanol from corn only makes sense as an energy source if you ignore how much energy has to go into producing it.
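The bookkeeping behind the net-energy trap can be sketched in a few lines of Python. The figures here are invented purely to show the shape of the calculation; they are not measured values for corn ethanol or any other real fuel.

```python
# Illustrative net-energy arithmetic. All numbers are hypothetical,
# chosen only to show how the accounting works.

def net_energy(energy_out, energy_in):
    """Energy actually delivered to society, after subtracting the
    energy spent extracting and processing the source."""
    return energy_out - energy_in

def eroei(energy_out, energy_in):
    """Energy returned on energy invested (EROEI)."""
    return energy_out / energy_in

# A source yielding 100 units for a 5-unit investment is a real resource:
print(net_energy(100, 5), eroei(100, 5))

# One yielding 100 units for a 110-unit investment is an energy sink
# dressed up as a source, whatever its gross output looks like:
print(net_energy(100, 110), eroei(100, 110))
```

An EROEI below 1 means the "source" consumes more energy than it provides, which is the trap the corn ethanol boom fell into.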
The second issue, though, is the one I want to stress here. It’s seen a lot less discussion, but it’s even more important than the issue of net energy, and it unfolds from the most ironclad of all the laws of physics, the second law of thermodynamics. The point that needs to be understood is that how much energy you happen to have on hand, even after subtracting the energy cost, doesn’t actually matter a bit when it comes to doing work. The amount of work you get out of a given energy source depends, not on the amount of energy, but on the difference in energy concentration between the energy source and the environment.
Please read that again: The amount of work you get out of a given energy source depends, not on the amount of energy it contains, but on the difference in energy concentration between the energy source and the environment.
Got that? Now let’s take a closer look at it.
Left to itself, energy always moves from more concentrated states to less concentrated states; this is why the coffee in your morning cuppa gets cold if you leave it on the table too long. The heat that was in the coffee still exists, because energy is neither created nor destroyed; it’s simply become useless to you, because most of it’s dispersed into the environment, raising the air temperature in your dining room by a fraction of a degree. There’s still heat in the coffee as well, since it stops losing heat when it reaches room temperature and doesn’t continue down to absolute zero, but room temperature coffee is not going to do the work of warming your insides on a cold winter morning.
In a very small way, as you sit there considering your cold coffee, you’re facing an energy crisis; the energy resources you have on hand (the remaining heat in the coffee) will not do the work you want them to do (warming your insides). Notice, though, that you’re not suffering from an energy shortage – there’s exactly the same amount of energy in the dining room as there was when the coffee was fresh from the coffeepot. No, what you have is a shortage of the difference between energy concentrations that will allow the energy to do useful work. (The technical term for this is exergy.) How do you solve your energy crisis? One way or another, you have to increase the energy concentration in your energy source relative to the room temperature environment. You might do that by dumping your cold coffee down the drain and pouring yourself a fresh cup, say, or by putting your existing cup on a cup warmer. Either way, though, you have to get some energy to do the work, and that means letting it go from higher to lower concentrations.
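That accounting can be made concrete. For a body cooling to room temperature, classical thermodynamics gives the maximum extractable work (the exergy) as m*c*((T - T0) - T0*ln(T/T0)), with temperatures in kelvin. A short Python sketch, assuming the standard specific heat of water, shows how little of the hot coffee's heat was ever available as work in the first place:

```python
import math

def heat_released(mass_kg, t_hot_c, t_room_c, c_p=4186.0):
    """Total heat (J) the coffee sheds into the room -- the raw 'amount'
    of energy. c_p is the specific heat of water, J/(kg*K)."""
    return mass_kg * c_p * (t_hot_c - t_room_c)

def coffee_exergy(mass_kg, t_hot_c, t_room_c, c_p=4186.0):
    """Maximum useful work (J) extractable as the coffee cools to room
    temperature: m*c*((T - T0) - T0*ln(T/T0)), temperatures in kelvin."""
    t, t0 = t_hot_c + 273.15, t_room_c + 273.15
    return mass_kg * c_p * ((t - t0) - t0 * math.log(t / t0))

# A quarter-liter cup at 90 C in a 20 C room:
q = heat_released(0.25, 90, 20)
w = coffee_exergy(0.25, 90, 20)
print(f"heat dispersed: {q/1000:.0f} kJ; at most {w/1000:.1f} kJ "
      f"({100*w/q:.0f}%) could ever have been turned into work")
```

Only about a tenth of the heat in the cup was ever exergy; the rest was always destined to disperse into the room. Once the coffee reaches room temperature, the exergy is exactly zero, however much heat remains.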
Any time you make energy do anything, you have to let some of it follow its bliss, so to speak, and pass from a higher concentration to a lower one. The more work you want done, the more exergy you use up; you can do it by allowing a smaller amount of highly concentrated energy to disperse, or by allowing a much larger amount of modestly concentrated energy to do so, or anything in between. One way or another, though, the total difference in energy concentration between source and environment – the total exergy – decreases when work is done. Mind you, you can make energy do plenty of tricks if you’re willing to pay its price; you can change it from one form to another, and you can even concentrate one amount of energy by sacrificing a much larger amount to waste heat; but one way or another, the total exergy in the system goes down.
This is why my great discovery at age ten didn’t revolutionize the world and make me rich and famous, as I briefly hoped it would. Electric motors and generators are ways of turning energy from one form into another – from electricity into rotary motion, on the one hand, and from rotary motion into electricity on the other. Each of them necessarily disperses some energy, and thus loses some exergy, in the process. Thus the amount of electricity that you get out of the generator when the shaft is turning at any given speed will always be less than the amount of electricity the motor needs to get the shaft up to that speed.
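A few lines of Python make the arithmetic of that loop explicit. The efficiency figures are invented, and generous at that, since real toy motors and bicycle-light generators vary widely:

```python
# Why the motor-generator loop runs down: each conversion keeps only a
# fraction of the energy passing through it, so the circulating energy
# shrinks on every lap. Efficiencies here are hypothetical.

def loop_energy(initial_joules, motor_eff=0.85, generator_eff=0.80, cycles=10):
    """Energy left circulating after each pass through motor + generator."""
    energy = initial_joules
    history = []
    for _ in range(cycles):
        energy *= motor_eff * generator_eff  # two lossy conversions per lap
        history.append(energy)
    return history

trace = loop_energy(100.0)
print([round(e, 1) for e in trace])
# Since motor_eff * generator_eff is always less than 1, the sequence
# decays geometrically toward zero: the flywheel spins down to a halt.
```

No choice of real-world efficiencies changes the outcome; the product of two factors each below 1 is below 1, and the decay follows from that alone.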
This gets missed whenever people assume that the amount of energy, rather than its concentration, is the thing that matters. Post something on the internet about energy as a limiting factor for civilization, and dollars will get you doughnuts that somebody will respond by insisting that the amount of energy in the universe is infinite. Now of course Garrett Hardin was quite right to point out in Filters Against Folly that when somebody says “X is infinite,” what’s actually being said is “I refuse to think about X;” the word “infinite” functions as a thoughtstopper, a way to avoid paying attention to something that’s too uncomfortable to consider closely.
Still, there’s another dimension to the problem, and it follows from the points already raised here. Whether or not there’s an infinite amount of energy in the universe – and we simply don’t know one way or the other – we can be absolutely sure that the amount of highly concentrated energy in the small corner of the universe we can easily access is sharply and distressingly finite. Since energy always tries to follow its bliss, highly concentrated energy sources are very rare, and only occur when very particular conditions happen to be met.
In the part of the cosmos that affects us directly, one set of those conditions exists in the heart of the sun, where gravitational pressure squeezes hydrogen nuclei so hard that they fuse into helium. Another set exists here on the Earth’s surface, where plants concentrate energy in their tissues by tapping into the flow of energy dispersing from the sun, and other living things do the same thing by tapping into the energy supplies created by plants. Now and again in the history of life on Earth, a special set of conditions has allowed energy stockpiled by plants to be buried and concentrated further by slow geological processes, yielding the fossil fuels that we now burn so recklessly. There are a few other contexts in which energy can be had in concentrated forms – kinetic energy from water and wind, both of them ultimately driven by sunlight; heat from within the Earth, caught and harnessed as it slowly disperses toward space; a handful of scarce and unstable radioactive elements that can be coaxed into nuclear misbehavior under exacting conditions – but the vast majority of the energy we have on hand here on Earth comes directly or indirectly from the sun.
That in itself defines our problem neatly, because by the time it gets through 93 million miles of deep space, then filters its way down through the Earth’s relatively murky atmosphere, the energy in sunlight is pretty thoroughly dispersed. That’s why green plants stockpile only about 1% of the energy in the light striking their leaves; the rest either bounces off the leaves or gets dispersed into waste heat in the process of keeping the plant alive and enabling it to manufacture the sugars that store the 1%. Sunlight just isn’t that concentrated, and you have to disperse one heck of a lot of it to get a very modest amount of energy concentrated enough to do much of anything with it.
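A back-of-envelope sketch shows just how dispersed that energy is by the time plants get it. The insolation figure below is a rough round-the-clock mid-latitude average, not a measured value for any particular place, and the 1% figure is the photosynthetic efficiency cited above:

```python
# How much leaf area does it take to gather usefully concentrated energy
# from dispersed sunlight? Rough illustrative figures only.

AVG_INSOLATION_W_PER_M2 = 200.0  # rough day-and-night average at ground level
PHOTOSYNTHETIC_EFF = 0.01        # the ~1% figure for green plants

def leaf_area_for_power(watts):
    """Square meters of foliage needed to stockpile energy at a given
    continuous rate, at average insolation and 1% efficiency."""
    return watts / (AVG_INSOLATION_W_PER_M2 * PHOTOSYNTHETIC_EFF)

# To gather chemical energy at the rate of one ordinary 2 kW space heater:
print(f"{leaf_area_for_power(2000):.0f} m^2 of leaves")  # 1000 m^2
```

A thousand square meters of foliage to match one space heater is the point in miniature: sunlight is abundant but dilute, and concentrating it is expensive.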
All this explains as well why the “zero point energy” people are basically smoking their shorts. The premise of zero point energy is that there’s a vast amount of energy woven into the fabric of spacetime; if we can tap into it, we solve all our energy problems and go zooming off to the stars. They do seem to be right that there’s a huge amount of energy in empty space, but once again, the amount of energy does not tell you how much work you can do with it, and zero point energy is by definition at the lowest possible level of concentration. By definition, therefore, it can’t be made to do anything at all, and any attempt to make use of it belongs right up there on the shelf with my motor-generator gimmick.
The same logic also explains why projects for coming up with a replacement for fossil fuels using sunlight, or any other readily available renewable energy source, are doomed to fail. What makes fossil fuels so valuable is the fact that the energy they contain was gathered over countless centuries and then concentrated by geological processes involving fantastic amounts of heat and pressure over millions of years. They define the far end of the curve of energy concentration, at least on this planet, which is why they are as scarce as they are, and why no other energy resource can compete with them – as long as they still exist, that is.
As concentrated fossil fuel supplies deplete, in turn, a civilization that depends on them for its survival will find itself in a very nasty bind. If ours is anything to go by, it will proceed to make that bind even worse by trying to make up the difference by manufacturing new energy sources at roughly the same level of concentration. That’s a losing bargain, because it maximizes the amount of exergy that gets lost: you have to disperse a lot of energy to make the concentrated energy source, remember, before you can get around to using the concentrated energy source to do anything useful. Thus trying to fill our gas tanks with some manufactured substitute for gasoline, say, drains our remaining supplies of concentrated energy at a much faster pace than the other option – that of doing as much as possible with relatively low concentrations of energy, and husbanding the highly concentrated energy sources for those necessary tasks that can’t be done without them.
This is where E.F. Schumacher’s concept of “intermediate technology,” which was discussed in last week’s post, can be fitted into its broader context. Schumacher’s idea was that state-of-the-art factories and an economy dependent on exports to the rest of the world are not actually that useful to a relatively poor nation trying to build an economy from the ground up. He was right, of course – those Third World nations that have prospered are precisely the ones that used trade barriers to shelter low-tech domestic industries, and entered the export market only after building a domestic industrial base one step at a time – but in a future in which all of us will be a good deal poorer than we are today, his insights have a wider value. A state-of-the-art factory, after all, is more expensive in terms much more concrete than paper money; it takes a great deal more exergy to build and maintain one than it does to build and maintain a workshop using hand tools and human muscles to produce the same goods.
My readers will doubtless be aware that such considerations have about as much chance of being taken seriously in the governing circles of American politics and business as a snowball has for a long and comfortable stay in Beelzebub’s back yard. Fortunately, the cooperation of the current American political and executive classes is entirely unnecessary. In the next few posts, we’ll discuss some of the ways that individuals, families, and local communities can make the switch from economic dependence on highly concentrated energy sources to reliance on much more modestly concentrated and more widely available options. The fact that most of the energy in our highly concentrated energy sources has already followed its bliss into entropic ecstasy puts hard limits on what can be achieved, but there’s still plenty of room to make a bad situation somewhat better.
Wednesday, February 24, 2010
Wednesday, February 17, 2010
Why Factories Aren't Efficient
Last week’s Archdruid Report post fielded a thoughtful response from peak oil blogger Sharon Astyk, who pointed out that what I was describing as America’s descent to Third World status could as well be called a future of “ordinary human poverty.” She’s quite right, of course. There’s nothing all that remarkable about the future ahead of us; it’s simply that the unparalleled abundance that our civilization bought by burning through half a billion years of stored sunlight in three short centuries has left most people in the industrial world clueless about the basic realities of human life in more ordinary times.
It’s this cluelessness that underlies so many enthusiastic discussions of a green future full of high technology and relative material abundance. Those discussions also rely on one of the dogmas of the modern religion of progress, the article of faith that the accumulation of technical knowledge was what gave the industrial world its three centuries of unparalleled wealth; since technical knowledge is still accumulating, the belief goes, we may expect more of the same in the future. Now in fact the primary factor that drove the rise of industrial civilization, and made possible the lavish lifestyles of the recent past, was the recklessness with which the earth’s fossil fuel reserves have been extracted and burnt over the last few centuries. The explosion of technical knowledge was a consequence of that, not a cause.
In what we might as well get used to calling the real world – that is, the world as it is when human societies don’t have such immense quantities of highly concentrated energy ready to hand that figuring out how to use it all becomes a major driver of economic change – the primary constraints on the production of wealth are hard natural limits on the annual production of energy resources and raw materials. Even after two billion years of evolutionary improvements, photosynthesis only converts about one percent of the solar energy falling on leaves into chemical energy that can be used for other purposes, and that only when other requirements – water, soil nutrients, and so on – are also on hand. Other than a little extra from wind and running water, that trickle of energy from photosynthesis is what a nonindustrial society has to work with; that’s what fuels the sum total of human and animal muscle that works the fields, digs the mines, wields the tools of every craft, and does everything else that produces wealth. This, in turn, is why most people in nonindustrial societies have so little; the available energy supply, and the other resources that can be extracted and used with that energy, are too limited to provide any more.
The same sort of limits apply to the contemporary Third World, though for different reasons. Here the problem is the assortment of colonial and neocolonial arrangements that drain most of the world’s wealth into the coffers of a handful of industrial nations, and leave the rest to tussle over the little that’s left. I’ve commented here before that the five percent of the world’s population that happens to live in the United States, for example, doesn’t get to use roughly a third of the world’s resources and industrial production because the rest of the world has no desire to use a fairer share themselves. Rather, our prosperity is maintained at their expense, and until recently – when the current imperial system began coming apart at the seams – any Third World country that objected too strenuously to that state of affairs could pretty much count on having its attitude adjusted by way of a coup d’etat or "color revolution" stage-managed by one or more of the powers of the industrial world, if not an old-fashioned invasion of the sort derided in Tom Lehrer’s ballad "Send the Marines."
One consequence of all this is that over the last century or so, a handful of insightful thinkers have tried to explore ways in which the cycle of exploitation and dependency can be broken. One of those was the maverick economist E.F. Schumacher, whose ideas have been central to quite a few of the posts here over the last year or so.
Though he had degrees from Oxford and taught for a while at Columbia University, Schumacher was not primarily an academic; he was the polar opposite of those ivory-tower economists who have done so much damage to the world in recent decades by insisting that their theories are the key to prosperity even when the facts argue forcefully for the opposite case. He spent most of his career working in the places where government and business overlap, helping to rebuild the German economy after the Second World War and then, for two decades, serving as chief economist for the British National Coal Board, at that time one of the world’s largest energy corporations. This was the background he brought to bear on the problems facing the Third World. Still, he drew some of his central ideas from a very different source: the largely neglected economic ideas of Gandhi.
(May I interrupt this post to address a pet peeve? The family name of the founder of modern India is spelled "Gandhi," not "Ghandi." It’s not that difficult to spell it right, any more than it’s hard to avoid writing "Abraham Lcinoln," say, or "Nelson Mdanela;" despite which, I recently got sent a review copy of a book referencing Gandhi – I won’t mention the publishers, to spare them the embarrassment – which misspelled the name on the top of every single page. If you need a mnemonic, just remember that the beginning of his name is spelled like "Gandalf," not like "ghastly." Thank you, and we now return you to your regularly scheduled Archdruid Report.)
A lot of Americans – even, ahem, those who can spell his name correctly – think of Mohandas K. Gandhi as a spiritual leader, which of course he was, and as a political figure, which of course he also was. It’s not as often remembered that he also spent quite a bit of time developing an economic theory appropriate to the challenges facing a newly independent India. His suggestion, to condense some very subtle thinking into too few words, was that a nation that had a vast labor force but very little money was wasting its time to invest that money in state-of-the-art industrial plants; instead, he suggested, the most effective approach was to equip that vast labor force with tools that would improve their productivity within the existing structures of resource supply, production and distribution. Instead of replacing India’s huge home-based spinning and weaving industries with factories, for example, and throwing millions of spinners and weavers out of work, he argued that the most effective use of India’s limited resources was to help those spinners and weavers upgrade their skills, spinning wheels, and looms, so they could produce more cloth at a lower price, continue to support themselves by their labor, and in the process make India self-sufficient in clothing production.
This sort of thinking flies in the face of nearly every mainstream economic theory since Adam Smith, granted. Since nearly every mainstream economic theory since Adam Smith has played a sizable role in landing the industrial world in its current mess, though, I’m not so sure this is a bad thing. Current economics dismisses Gandhi’s ideas on the grounds of their "inefficiency," but this has to be taken in context: "efficiency," in today’s economic jargon, means nothing more or less than efficiency in producing somebody a profit. As a way of keeping millions of people gainfully employed, stabilizing the economy of a desperately poor nation, and preventing its wealth from being siphoned overseas by predatory industrial nations, Gandhi’s proposal is arguably very efficient indeed – and this, in turn, was what brought it to the attention of E.F. Schumacher.
One of Schumacher’s particular talents was a gift for intellectual synthesis; his work is full of cogent insights that sum up a great deal of more specialized work and make it applicable to a wider range of circumstances. This is more or less what he did with Gandhi’s ideas. Schumacher argued that talk about "developing" the Third World typically neglected to deal with one of the most pragmatic issues of all – the cost of setting up workers with the tools they needed to work.
Take a moment to follow the logic. You are the president of the newly independent Republic of Imaginaria. You’ve got a population that’s not particularly well fed, clothed, and housed, and a fairly high unemployment rate; you’ve got a very modest budget for economic development; you’ve also got raw materials of various kinds, which could be used to feed, clothe, and house the Imaginarian people. Your foreign economic advisers, who not coincidentally come from the industrial nation that used to be your country’s imperial overlord, insist that your best option is to use your budget to build a big modern factory that will turn those raw materials into goods for export to their country by their merchants, giving your country cash income to buy goods from them, and in the process employ a few thousand Imaginarians as factory workers.
Not so fast, says Schumacher. If your goal is to feed, clothe, house, and employ the Imaginarian people, building a factory is a very inefficient way to go about it, because that approach requires a very large investment per worker employed. You can provide many more Imaginarians with productive jobs for the same amount of money, by turning to a technology that’s less expensive to build, maintain, and supply with energy and raw materials – say, by providing them with hand tools and workbenches instead of state-of-the-art fabrication equipment, and setting up supply chains that supply them with local raw materials instead of imports from abroad. The goods those workers produce may not be as valuable in the export market as what might come out of a factory, but that’s not necessarily a problem – remember, your main goal is to feed, clothe, and house Imaginarians, so maximizing production for domestic use is a better idea in the first place, since less of the value produced by those workers will be skimmed off by the middlemen who manage international trade. Furthermore, since you won’t have to trade with overseas producers for as many of the necessities of life, your need for cash from overseas goes down, and you get an economy less vulnerable to foreign-exchange shocks into the bargain.
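Schumacher's arithmetic can be put in miniature. Every figure below is invented for illustration; the point is the ratio, not the dollar amounts:

```python
# With a fixed development budget, the capital cost per workplace
# determines how many people you can employ. Hypothetical figures.

def workplaces(budget, cost_per_workplace):
    """How many workers a budget can equip at a given cost per workplace."""
    return budget // cost_per_workplace

BUDGET = 10_000_000  # Imaginaria's whole development budget

factory_jobs = workplaces(BUDGET, 250_000)  # state-of-the-art plant
workshop_jobs = workplaces(BUDGET, 2_500)   # hand tools and workbenches

print(factory_jobs, workshop_jobs)  # 40 vs 4000 workers employed
```

The same budget that equips forty factory workers equips four thousand workers with intermediate technology; whether that trade-off is worth making depends on whether the goal is export earnings or feeding, clothing, and housing Imaginarians.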
This was the basis for what Schumacher called "intermediate technology," and the younger generation of activist-inventors who followed in his footsteps called "appropriate technology." The idea was that relatively simple technologies, powered by locally available energy sources and drawing on locally available raw materials, could provide paying jobs and an improved standard of living for working people throughout the Third World. A lot of very productive thinking went into these projects, and there were some impressive success stories before the counterrevolution of the 1980s cut what little funding the movement had been able to find. Mind you, Schumacher’s thinking was never popular among economists or the business world, and it happened more than once that countries that tried to adopt such economic policies were treated to the sort of attitude adjustments mentioned above. Still, pay attention to those Third World nations that have succeeded in becoming relatively prosperous, and you’ll find that some version of Schumacher’s scheme played a significant role in helping them do that.
It’s when the same logic is applied to the industrial world, though, that Schumacher’s ideas become relevant to the project of this blog. If, as I’ve suggested, the United States (and, in due time, the rest of the world’s industrial nations) have begun a descent to Third World status, thinking designed for the Third World may be a good deal more applicable here and now than the conventional wisdom might suggest. It seems utterly improbable to me that the governments of today’s industrial powers will have the foresight, or for that matter the common sense, to realize that economic policies that deliberately increase the number of people earning a living might be a very good idea in an age of pervasive structural unemployment – or, for that matter, to glimpse the unraveling of the industrial age, and realize that within a finite amount of time, the choice will no longer be between high-tech and low-tech ways of manufacturing goods, but between low-tech ways and no way at all. Still, national governments are not the only players in the game.
What Schumacher proposed, in fact, is one of the missing pieces to the puzzle of economic relocalization. The economies of scale that made centralized mass production possible in recent decades were simply one more side effect of the vast amount of energy the industrial nations used up during that time. As fossil fuel depletion brings those excesses to an end, the energy and other resources needed to maintain centralized mass production will no longer be available, and what I’ve described above as the economics of the real world come into play. At that point, the question of how much it costs to equip a worker to do any given job becomes a central economic issue, because any resources that have to go to equipping that worker must be taken away from another productive use.
Now of course it’s true that the cost of equipping somebody to perform some economic function locally has already entered the relocalization movement in an informal way. What Rob Hopkins calls "the great reskilling" – the process by which individuals who have no productive skills outside a centralized industrial economy learn how to make and do things on their own – has had to take place within the tolerably strict constraints of what individuals can afford to buy in the way of tools and workspaces, since there isn’t exactly a torrent of grant money available for people who want to become blacksmiths, brewers, boatbuilders, or practitioners of other useful crafts.
It may be worth suggesting, though, that Schumacher’s logic might be worth applying directly to the relocalization project by those individuals and communities who are willing to put that project into practice. The less it costs in terms of energy and other resources to prepare a community to deal with one or more of its economic needs, after all, the more will be available for other projects. Equally, the more good ideas that can be garnered from the dusty pages of publications issued by Schumacher’s Intermediate Technology Development Group and its many equivalents, and put to work during the industrial world’s decline to Third World status, the more creativity can be spared for other challenges.
Yet there’s also a broader context here, which Schumacher addressed only indirectly, and which has only been hinted at in this post – the need to redefine our notions of economics to make sense in the real world, and above all, to respond to the most economically important of the laws of physics. Yes, those would be the laws of thermodynamics. We’ll talk more about this in next week’s post.
(May I interrupt this post to address a pet peeve? The family name of the founder of modern India is spelled "Gandhi," not "Ghandi." It’s not that difficult to spell it right, any more than it’s hard to avoid writing "Abraham Lcinoln," say, or "Nelson Mdanela;" despite which, I recently got sent a review copy of a book referencing Gandhi – I won’t mention the publishers, to spare them the embarrassment – which misspelled the name on the top of every single page. If you need a mnemonic, just remember that the beginning of his name is spelled like "Gandalf," not like "ghastly." Thank you, and we now return you to your regularly scheduled Archdruid Report.)
A lot of Americans – even, ahem, those who can spell his name correctly – think of Mohandas K. Gandhi as a spiritual leader, which of course he was, and as a political figure, which of course he also was. It’s not as often remembered that he also spent quite a bit of time developing an economic theory appropriate to the challenges facing a newly independent India. His suggestion, to condense some very subtle thinking into too few words, was that a nation with a vast labor force but very little money would be wasting its time investing that money in state-of-the-art industrial plants; instead, he suggested, the most effective approach was to equip that vast labor force with tools that would improve their productivity within the existing structures of resource supply, production and distribution. Instead of replacing India’s huge home-based spinning and weaving industries with factories, for example, and throwing millions of spinners and weavers out of work, he argued that the most effective use of India’s limited resources was to help those spinners and weavers upgrade their skills, spinning wheels, and looms, so they could produce more cloth at a lower price, continue to support themselves by their labor, and in the process make India self-sufficient in clothing production.
This sort of thinking flies in the face of nearly every mainstream economic theory since Adam Smith, granted. Since nearly every mainstream economic theory since Adam Smith has played a sizable role in landing the industrial world in its current mess, though, I’m not so sure this is a bad thing. Current economics dismisses Gandhi’s ideas on the grounds of their "inefficiency," but this has to be taken in context: "efficiency," in today’s economic jargon, means nothing more or less than efficiency in producing somebody a profit. As a way of keeping millions of people gainfully employed, stabilizing the economy of a desperately poor nation, and preventing its wealth from being siphoned overseas by predatory industrial nations, Gandhi’s proposal is arguably very efficient indeed – and this, in turn, was what brought it to the attention of E.F. Schumacher.
One of Schumacher’s particular talents was a gift for intellectual synthesis; his work is full of cogent insights that sum up a great deal of more specialized work and make it applicable to a wider range of circumstances. This is more or less what he did with Gandhi’s ideas. Schumacher argued that talk about "developing" the Third World typically neglected to deal with one of the most pragmatic issues of all – the cost of setting up workers with the tools they needed to work.
Take a moment to follow the logic. You are the president of the newly independent Republic of Imaginaria. You’ve got a population that’s not particularly well fed, clothed, and housed, and a fairly high unemployment rate; you’ve got a very modest budget for economic development; you’ve also got raw materials of various kinds, which could be used to feed, clothe, and house the Imaginarian people. Your foreign economic advisers, who not coincidentally come from the industrial nation that used to be your country’s imperial overlord, insist that your best option is to use your budget to build a big modern factory that will turn those raw materials into goods for export to their country by their merchants, giving your country cash income to buy goods from them, and in the process employ a few thousand Imaginarians as factory workers.
Not so fast, says Schumacher. If your goal is to feed, clothe, house, and employ the Imaginarian people, building a factory is a very inefficient way to go about it, because that approach requires a very large investment per worker employed. You can provide many more Imaginarians with productive jobs for the same amount of money by turning to a technology that’s less expensive to build, maintain, and supply with energy and raw materials – say, by providing them with hand tools and workbenches instead of state-of-the-art fabrication equipment, and setting up supply chains that provide them with local raw materials instead of imports from abroad. The goods those workers produce may not be as valuable in the export market as what might come out of a factory, but that’s not necessarily a problem – remember, your main goal is to feed, clothe, and house Imaginarians, so maximizing production for domestic use is a better idea in the first place, since less of the value produced by those workers will be skimmed off by the middlemen who manage international trade. Furthermore, since you won’t have to trade with overseas producers for as many of the necessities of life, your need for cash from overseas goes down, and you get an economy less vulnerable to foreign-exchange shocks into the bargain.
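The arithmetic behind that argument is simple enough to put in a few lines. A minimal sketch, using invented dollar figures purely for illustration – the Imaginarian budget and the capital costs below are hypothetical:

```python
# Illustrative comparison of jobs created per unit of capital,
# after Schumacher's "cost per workplace" argument.
# All dollar figures are hypothetical.

budget = 10_000_000                # Imaginaria's development budget
cost_factory_workplace = 100_000   # modern factory: capital per worker
cost_workbench = 2_000             # hand tools and a workbench per worker

factory_jobs = budget // cost_factory_workplace
workshop_jobs = budget // cost_workbench

print(f"Factory route:   {factory_jobs:,} jobs")
print(f"Workbench route: {workshop_jobs:,} jobs")
```

With these assumed numbers the same budget employs fifty times as many people through intermediate technology as through a single modern plant; the exact ratio depends entirely on the costs plugged in, but the direction of the result is the point Schumacher was making.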
This was the basis for what Schumacher called "intermediate technology," and the younger generation of activist-inventors who followed in his footsteps called "appropriate technology." The idea was that relatively simple technologies, powered by locally available energy sources and drawing on locally available raw materials, could provide paying jobs and an improved standard of living for working people throughout the Third World. A lot of very productive thinking went into these projects, and there were some impressive success stories before the counterrevolution of the 1980s cut what little funding the movement had been able to find. Mind you, Schumacher’s thinking was never popular among economists or the business world, and it happened more than once that countries that tried to adopt such economic policies were treated to the sort of attitude adjustments mentioned above. Still, pay attention to those Third World nations that have succeeded in becoming relatively prosperous, and you’ll find that some version of Schumacher’s scheme played a significant role in helping them do that.
It’s when the same logic is applied to the industrial world, though, that Schumacher’s ideas become relevant to the project of this blog. If, as I’ve suggested, the United States (and, in due time, the rest of the world’s industrial nations) have begun a descent to Third World status, thinking designed for the Third World may be a good deal more applicable here and now than the conventional wisdom might suggest. It seems utterly improbable to me that the governments of today’s industrial powers will have the foresight, or for that matter the common sense, to realize that economic policies that deliberately increase the number of people earning a living might be a very good idea in an age of pervasive structural unemployment – or, for that matter, to glimpse the unraveling of the industrial age, and realize that within a finite amount of time, the choice will no longer be between high-tech and low-tech ways of manufacturing goods, but between low-tech ways and no way at all. Still, national governments are not the only players in the game.
What Schumacher proposed, in fact, is one of the missing pieces to the puzzle of economic relocalization. The economies of scale that made centralized mass production possible in recent decades were simply one more side effect of the vast amount of energy the industrial nations used up during that time. As fossil fuel depletion brings those excesses to an end, the energy and other resources needed to maintain centralized mass production will no longer be available, and what I’ve described above as the economics of the real world come into play. At that point, the question of how much it costs to equip a worker to do any given job becomes a central economic issue, because any resources that have to go to equipping that worker must be taken away from another productive use.
Now of course it’s true that the cost of equipping somebody to perform some economic function locally has already entered the relocalization movement in an informal way. What Rob Hopkins calls "the great reskilling" – the process by which individuals who have no productive skills outside a centralized industrial economy learn how to make and do things on their own – has had to take place within the tolerably strict constraints of what individuals can afford to buy in the way of tools and workspaces, since there isn’t exactly a torrent of grant money available for people who want to become blacksmiths, brewers, boatbuilders, or practitioners of other useful crafts.
It may be worth suggesting, though, that individuals and communities willing to put the relocalization project into practice could apply Schumacher’s logic to it directly. The less it costs in terms of energy and other resources to prepare a community to deal with one or more of its economic needs, after all, the more will be available for other projects. Equally, the more good ideas that can be garnered from the dusty pages of publications issued by Schumacher’s Intermediate Technology Development Group and its many equivalents, and put to work during the industrial world’s decline to Third World status, the more creativity can be spared for other challenges.
Yet there’s also a broader context here, which Schumacher addressed only indirectly, and which has only been hinted at in this post – the need to redefine our notions of economics to make sense in the real world, and above all, to respond to the most economically important of the laws of physics. Yes, those would be the laws of thermodynamics. We’ll talk more about this in next week’s post.
Wednesday, February 10, 2010
Becoming a Third World Country
In the course of writing last week’s Archdruid Report post, I belatedly realized that there’s a very simple way to talk about the scope of the brutal economic contraction now sweeping through American society – a way, furthermore, that might just be able to sidestep both the obsessive belief in progress and the equally obsessive fascination with apocalyptic fantasy that, between them, make up much of what passes for thinking about the future these days. It’s to point out that, over the next decade or so, the United States is going to finish the process of becoming a Third World country.
I say “finish the process,” because we are already most of the way there. What distinguishes the Third World from the privileged industrial minority of the world’s nations? Third World nations import most of their manufactured goods from abroad, while exporting mostly raw materials; that’s been true of the United States for decades now. Third World economies have inadequate domestic capital, and are dependent on loans from abroad; that’s been true of the United States for just about as long. Third World societies are economically burdened by severe problems with public health; the United States ranks dead last for life expectancy among industrial nations, and its rates of infant mortality are on a par with those in Indonesia, so that’s covered. Third World nations are very often governed by kleptocracies – well, let’s not even go there, shall we?
There are, in fact, precisely two things left that differentiate the United States from any other large, overpopulated, impoverished Third World nation. The first is that the average standard of living here, measured either in money or in terms of energy and resource consumption, stands well above Third World levels – in fact, it’s well above the levels of most industrial nations. The second is that the United States has the world’s most expensive and technologically complex military. Those two factors are closely related, and understanding their relationship is crucial in making sense of the end of the “American century” and the decline of the United States to Third World status.
The US has the world’s most expensive military because, just now, it has the world’s largest empire. Now of course it’s not polite to talk about that in precisely those terms, but let’s be frank – the US does not keep its troops garrisoned in more than a hundred countries around the world for the sake of their health, you know. That empire functions, as empires always do, as a way of tilting the economic relationships between nations in a way that pumps wealth out of the rest of the world and into the coffers of the imperial nation. It may never have occurred to you to wonder why it is that the 5% of the world’s population who live in the US get to use around a third of the world’s production of natural resources and industrial products – certainly it never seems to occur to most Americans to wonder about that – but the economics of empire are the reason.
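It’s worth pausing on the arithmetic of that claim. A quick sketch of what “5% of the population using a third of the resources” implies per capita:

```python
# What "5% of the population uses a third of the resources" implies
# for per-capita consumption, relative to the world average and to
# the average of everyone else. Shares are taken from the essay.

us_population_share = 0.05
us_resource_share = 1 / 3

us_per_capita = us_resource_share / us_population_share
rest_per_capita = (1 - us_resource_share) / (1 - us_population_share)

print(f"US per-capita use: {us_per_capita:.2f}x the world average")
print(f"Ratio to everyone else: {us_per_capita / rest_per_capita:.1f} to 1")
```

In other words, taking the essay’s shares at face value, the average American consumes roughly six and two-thirds times the world average, and nine and a half times as much as the average inhabitant of the rest of the world put together.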
A century ago, in 1910, it was Britain that had the global empire, the worldwide garrisons, and the torrents of wealth flowing from around the world to boost the British standard of living at the expense of everyone else’s. A century from now, in 2110, if the technology to maintain any kind of worldwide empire still exists – and it can be done with wooden sailing ships and crude cannon, remember; Spain managed that feat very effectively in its day – somebody else will be in that position. It won’t be America, because empire is the methamphetamine of nations; in the short term, the effects feel great, but in the long term they’re very often lethal. Britain managed to walk away from its empire without total catastrophe because the United States was ready, willing, and able to take over, and give Britain a place in the inner circle of US allies into the bargain; most other nations have paid for their imperial overshoot with a century or two of economic collapse, political chaos, and social disintegration.
That’s the corner into which the United States is backing itself right now. The flood of lightly disguised tribute from overseas, while it made Americans fantastically wealthy by the standards of the rest of the world, also gutted America’s domestic economy – the same economic imbalances that funnel wealth here also make it nearly impossible to produce goods or provide services at home at a cost that can compete with overseas producers – and created a culture of entitlement that includes all classes from the bottom of the social pyramid right up to the top. As always happens, in turn, the benefits of empire are failing to keep pace with its rapidly rising costs, and in addition, rising demands for imperial largesse from all parts of society are drawing down an increasingly straitened supply of wealth. Meanwhile other nations with imperial ambitions are circling like sharks; the wisest among them know that time is on their side, and that any additional burden that can be loaded onto a drowning empire will hasten the day when it goes under for the third time and they can close for the kill.
This view of the world situation is not one that you’ll find in the cultural mainstream, or for that matter any of its self-proclaimed alternatives. The contrast with a century ago is instructive. A great many people in late imperial Britain knew perfectly well that the empire on which the sun famously never set – critics suggested that this was because God Himself wouldn’t trust an Englishman in the dark – had had its day and was itself setting; the lines of Rudyard Kipling’s poem “Recessional” –
Far-called, our navies melt away;
On dune and headland sinks the fire.
Lo! All our pomp of yesterday
Is one with Nineveh and Tyre.
– simply put in powerful imagery what many were thinking at that time. You won’t find the same sort of historical sense nowadays, though, and I suspect the role of the myth of progress as the secular religion of the modern world has a lot to do with it. In 1910, the concept of historical decline was on a great many minds; these days you’ll hardly hear it mentioned, because the belief in history as perpetual progress has become all the more deeply entrenched as the foundations that made the progress of recent centuries possible have rotted away.
The resulting insistence on seeing all social changes through onward-and-upward colored spectacles has imposed huge distortions on our perceptions of recent events. One good example is the rise and fall of the so-called “global economy” in recent decades. Its proponents portrayed it as the triumphant wave of a Utopian future that would enable everybody to live like middle-class Americans; its critics portrayed it as the equally triumphant metastasis of a monolithic corporate power out to enslave the world. Very few people saw it as the desperate gambit of a faltering imperial society that could no longer even afford to run its own economy, and was forced to outsource even its most basic economic functions to other countries. Nonetheless, this is what it has turned out to be, and it had the predictable result that several other nations used the influx of capital and technology to build their own industrial sectors, bide their time, and then enter the market themselves and outcompete the very companies and countries that gave them a foot in the door.
More broadly, it seems to have escaped the attention of a great many observers that the day of the multinational corporations is drawing to an end. The struggle over Russia’s energy resources was the decisive battle there, and when Putin crushed the Western-funded oligarchs and retook control of his country’s energy supply, that battle was settled with a typically Russian sense of drama. The elegance with which China has turned international trade law against its putative beneficiaries is in its own way just as typical; a flurry of corporations owned by the Chinese government have spread operations throughout the world, using the mechanisms of global trade to lay the foundations of a future Chinese global empire, while the Chinese government efficiently stonewalled any further trade negotiations that would have put Chinese economic interests at home in jeopardy. More recently, China has begun buying sizable stakes in the multinational corporations that so many well-meaning people in the West once thought would reduce the world to vassalage; the day when ExxonMobil is a wholly-owned subsidiary of CNOOC may be closer than it looks.
The same biases that make such global changes invisible have impacts at least as sweeping here at home. Faith in progress, coupled with the tribute economy’s culture of entitlement I mentioned earlier, have made it nearly impossible for anybody in American public life to talk about the hard fact that America can no longer afford most of the social habits it adopted during its age of empire. It’s almost impossible to think of an aspect of daily life in America today that will not change drastically as a result. We will have to give up the notion, for example, that most Americans ought to go to college and get a “meaningful and fulfilling” job of the sort that can be done sitting at a desk. We will have to abandon the idea that it makes any sense to spend a quarter of a million dollars giving an elderly person with an incurable illness six more months of life. We will have to relearn the old distinction between the deserving poor – those who are willing to work and simply need the opportunity, or who have fallen into destitution through circumstances outside their control – and those who are simply trying to game the system. The great majority of us will get to find out what it’s like to make things instead of buying them, even when that means a sharp reduction in quality; to skip meals, or make do with very little, because the money to pay for anything more simply isn’t there; to treat serious illnesses at home because care from a doctor costs too much; I could go on for paragraphs, but I trust you get the idea.
All these changes, it needs to be said, would be inevitable at this point even if the industrial world depended on renewable resources and had a stable, sustainable relationship with the planetary biosphere that supports all our lives. The United States has played its recent hands in the game of empire very badly indeed, and responded to each loss by doubling down and raising the stakes even higher. If, as a growing number of perceptive commentators have suggested, the US government has been reduced to borrowing money from itself in order to pay its bills – the theme of last week’s Archdruid Report post – the end of that road is in sight. It’s hard to see this as anything but a desperation move on the part of a political and economic establishment that sees no other options for short-term survival and thinks it has nothing left to lose. It’s the exact equivalent of paying household bills by running up debt on credit cards; it can buy a little time, but at the cost of making bankruptcy a certainty once that time runs out.
The global context of the crisis, though, also needs to be kept in mind. The industrial world does not depend on renewable resources, and its relationship with the biosphere is leading it straight down the well-worn path of overshoot and collapse; the endgame of American empire, while it would be taking place anyway, has the additional factor of the limits to growth in play. In an alternate world where energy and resource flows could be counted on to remain stable for the foreseeable future, it’s quite possible that one of the rising powers might offer America the same devil’s bargain we offered Britain in 1942, and prop up the husk of our empire just long enough to take it over for themselves.
As it is, it cannot have escaped the attention of any other nation on the planet that something like a quarter of the world’s dwindling resource production could be made available for other countries, if only the United States were to lose the ability to purchase energy and other resources from outside its own borders. It’s not hard to think of nations that would be in a position to profit mightily from such a readjustment, and nothing so unseemly as a global war would necessarily be required to make it happen; to name only one possibility, it’s by no means unthinkable that the United States, having manufactured “color revolutions” to order in countries around the world, might turn out to be vulnerable to the same sort of well-organized mob action here at home.
Exactly how things will play out in the months and years to come is anybody’s guess. One of the consequences of America’s descent into Third World status, though, is that a great many of us may have scant leisure to contemplate global and national issues amid the struggle to keep food on the table and a roof over our heads. In the long run, this shift in focus may have certain advantages; I have argued in previous posts that those nations that undergo the deindustrial transition soonest, and are thus forced to learn how to get by on the very modest energy and resource flows available in the absence of fossil fuels, may find that this gives them a head start in making changes that everyone else will have to make in due time. Still, making the most of those advantages will require a very different approach to economics, among other things, than most of us have pursued (or imagined pursuing) so far.
Interestingly, this brings us back to the point where this blog’s exploration of deindustrial economics started some months ago: the thought of the maverick economist E.F. Schumacher. Among his other achievements, Schumacher developed a theory of economic development for the Third World that cut straight across the assumptions of his own time and ours alike, and proposed a route toward relative prosperity that took the limits to growth and the failures of empire into account. That route was not taken in his time; it may be the only way left open in ours. We’ll discuss it in detail in next week’s post.
The same biases that make such global changes invisible have impacts at least as sweeping here at home. Faith in progress, coupled with the tribute economy’s culture of entitlement I mentioned earlier, has made it nearly impossible for anybody in American public life to talk about the hard fact that America can no longer afford most of the social habits it adopted during its age of empire. It’s almost impossible to think of an aspect of daily life in America today that will not change drastically as a result. We will have to give up the notion, for example, that most Americans ought to go to college and get a “meaningful and fulfilling” job of the sort that can be done sitting at a desk. We will have to abandon the idea that it makes any sense to spend a quarter of a million dollars giving an elderly person with an incurable illness six more months of life. We will have to relearn the old distinction between the deserving poor – those who are willing to work and simply need the opportunity, or who have fallen into destitution through circumstances outside their control – and those who are simply trying to game the system. The great majority of us will get to find out what it’s like to make things instead of buying them, even when that means a sharp reduction in quality; to skip meals, or make do with very little, because the money to pay for anything more simply isn’t there; to treat serious illnesses at home because care from a doctor costs too much; I could go on for paragraphs, but I trust you get the idea.
All these changes, it needs to be said, would be inevitable at this point even if the industrial world depended on renewable resources and had a stable, sustainable relationship with the planetary biosphere that supports all our lives. The United States has played its recent hands in the game of empire very badly indeed, and responded to each loss by doubling down and raising the stakes even higher. If, as a growing number of perceptive commentators have suggested, the US government has been reduced to borrowing money from itself in order to pay its bills – the theme of last week’s Archdruid Report post – the end of that road is in sight. It’s hard to see this as anything but a desperation move on the part of a political and economic establishment that sees no other options for short-term survival and thinks it has nothing left to lose. It’s the exact equivalent of paying household bills by running up debt on credit cards; it can buy a little time, but at the cost of making bankruptcy a certainty once that time runs out.
The global context of the crisis, though, also needs to be kept in mind. The industrial world does not depend on renewable resources, and its relationship with the biosphere is leading it straight down the well-worn path of overshoot and collapse; the endgame of American empire, while it would be taking place anyway, has the additional factor of the limits to growth in play. In an alternate world where energy and resource flows could be counted on to remain stable for the foreseeable future, it’s quite possible that one of the rising powers might offer America the same devil’s bargain we offered Britain in 1942, and prop up the husk of our empire just long enough to take it over for themselves.
As it is, it cannot have escaped the attention of any other nation on the planet that something like a quarter of the world’s dwindling resource production could be made available for other countries, if only the United States were to lose the ability to purchase energy and other resources from outside its own borders. It’s not hard to think of nations that would be in a position to profit mightily from such a readjustment, and nothing so unseemly as a global war would necessarily be required to make it happen; to name only one possibility, it’s by no means unthinkable that the United States, having manufactured “color revolutions” to order in countries around the world, might turn out to be vulnerable to the same sort of well-organized mob action here at home.
Exactly how things will play out in the months and years to come is anybody’s guess. One of the consequences of America’s descent into Third World status, though, is that a great many of us may have scant leisure to contemplate global and national issues amid the struggle to keep food on the table and a roof over our heads. In the long run, this shift in focus may have certain advantages; I have argued in previous posts that those nations that undergo the deindustrial transition soonest, and are thus forced to learn how to get by on the very modest energy and resource flows available in the absence of fossil fuels, may find that this gives them a head start in making changes that everyone else will have to make in due time. Still, making the most of those advantages will require a very different approach to economics, among other things, than most of us have pursued (or imagined pursuing) so far.
Interestingly, this brings us back to the point where this blog’s exploration of deindustrial economics started some months ago: the thought of the maverick economist E.F. Schumacher. Among his other achievements, Schumacher developed a theory of economic development for the Third World that cut straight across the assumptions of his own time and ours alike, and proposed a route toward relative prosperity that took the limits to growth and the failures of empire into account. That route was not taken in his time; it may be the only way left open in ours. We’ll discuss it in detail in next week’s post.
Wednesday, February 03, 2010
Endgame
I’ve mentioned more than once in these essays the foreshortening effect that textbook history can have on our understanding of the historical events going on around us. The stark chronologies most of us get fed in school can make it hard to remember that even the most drastic social changes happen over time, amid the fabric of everyday life and a flurry of events that can seem more important at the time.
This becomes especially problematic in times like the present, when apocalyptic prophecy is a central trope in the popular culture that frames a people’s hopes and fears for the future. When the collective imagination becomes obsessed with the dream of a sudden cataclysm that sweeps away the old world overnight and ushers in the new, even relatively rapid social changes can pass by unnoticed. The twilight years of Rome offer a good object lesson; so many people were convinced that the Second Coming might occur at any moment that the collapse of classical civilization went almost unnoticed; only a tiny handful of writers from those years show any recognition that something out of the ordinary was happening at all.
Reflections of this sort have been much on my mind lately, and there’s a reason for that. Scattered among the statistical noise that makes up most of today’s news are data points that suggest to me that business as usual is quietly coming to an end around us, launching us into a new world for which very few of us have made any preparations at all.
Here’s one example. Friends of mine in a couple of midwestern states have mentioned that the steady trickle of refugees from the Chicago slums into their communities has taken a sharp turn up. There’s a long history of dysfunction behind this. Back in 1999, Chicago began tearing down its vast empire of huge high-rise projects, promising to replace them with less ghastly and more widely distributed housing for the poor. Most of the replacements, of course, never got built. When the waiting list for Section 8 rent subsidies, the only other option available, got long enough to become a public relations problem, the bureaucrats in charge simply closed the list to new applicants; rumors (hotly denied by the Chicago city government) claim that poor families in Chicago were openly advised to move to other states. Whether for that reason or simple economic survival, a fair number of them did.
Fast forward to the middle of 2009. Around then, facing budget deficits second only to California, the state of Illinois quietly stopped paying its social service providers. In theory, the money is still allocated; in practice, it’s been more than six months since Illinois preschools, senior centers, food banks, and the like have received a check from the state for the services they provide, and many of them are on the verge of going broke. Subsidized rent has apparently taken an equivalent hit. Believers in free-market economics have been insisting for years that the end of rent subsidies would let the free market reduce rents to a level that people could afford, but I don’t recommend holding your breath; this is the same free market, remember, that gave the United States some of the world’s worst slums in the late 19th and early 20th centuries.
The actual effects have been instructive. Squeezed between sharply contracting benefits and a sharply contracting job market, many of Chicago’s poor are hitting the road, heading in any direction that offers more options. Forget the survivalist fantasy of violent hordes pouring out of the inner cities to ravage everything in their path; today’s slum residents are instead becoming the Okies of the Great Recession. In the process, part of business as usual in the United States is coming to an end.
Illinois is far from the only state that backed itself into a corner by assuming that rising tax revenues from a bubble economy could be extrapolated indefinitely into the future. Forty-one US states currently face budget deficits. California has received most of the media attention so far, a good deal of it focused on the political gridlock that has kept the state frozen in crisis for years. Behind the partisan posturing in Sacramento, though, lies a deeper and harsher reality. The state of California is essentially bankrupt; nearly all the mistakes made by the once-wealthy states of the Rust Belt as they slid down the curve of their own decline have been faithfully copied by California as it approaches its destiny as the Rust Belt of the 21st century. I wonder how many local governments in neighboring states have drawn up plans for dealing with the tide of economic refugees once California can no longer pay for its welfare system, and the poor of Los Angeles and other California cities join those of Chicago on the road.
I could go on, but I think the point has been made. State governments are the canaries in our national coal mine; their tax receipts are one of the very few measures of economic activity that aren’t being systematically fiddled by the federal government. The figures coming out of state revenue offices strike a jarring contrast with the handwaving about “green shoots” and an imminent return to prosperity heard from Washington DC and the media. Across the country, every few months, states that have already cut spending drastically to cope with record declines in tax income find that they have to go back and do it all over again, because their revenue – and by inference, the incomes, purchases, business activity, and other economic phenomena that feed into taxes – has dropped even further. Now it’s true that state budgets get hit whenever the economy goes into recession, and keep on hurting even when the recession is supposed to be over, but compared to past examples, the losses clobbering state funding these days are off the scale, and a great many programs that have been fixtures of American public life for as long as most of us have been living are facing the chopping block.
A different reality pertains within the Washington DC beltway. Where states that fail to balance their budgets get their bond ratings cut and, in some cases, are having trouble finding buyers for their debt at less than usurious interest rates, the federal government seems to be able to defy the normal behavior of bond markets with impunity. Despite soaring deficits, not to mention a growing disinclination on the part of foreign governments to keep on financing the same, every new issuance of US treasury bills somehow finds buyers in such abundance that interest rates stay remarkably low. A few weeks ago, Tom Whipple of ASPO became the latest in a tolerably large number of perceptive observers who have pointed out that this makes sense only if the US government is surreptitiously buying its own debt.
The process works something like this. The Federal Reserve, which is not actually a government agency but a consortium of large banks working under a Federal charter, has the statutory right to mint money in the US. These days, that can be done by a few keystrokes on a computer, and another few keystrokes can transfer that money to any bank in the nation. Some of those banks use the money to buy up US treasury bills, probably by way of subsidiaries chartered in the Cayman Islands and the like, and these same off-book subsidiaries then stash the T-bills and keep them off the books. The money thus laundered finally arrives at the US treasury, where it gets spent.
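The circular flow just described can be traced in a toy ledger. This is purely an illustration of the accounting shape of the argument, not a claim about actual Federal Reserve or Treasury bookkeeping; every account name and number here is invented:

```python
# Toy ledger tracing the circular flow described above: newly created
# reserves fund a T-bill purchase through an off-book intermediary, and
# the proceeds land at the Treasury. Illustrative only.

def monetize(amount):
    """Run one round of the flow and return the resulting toy ledger."""
    ledger = {
        "fed_reserves_created": 0,   # money created by keystroke
        "bank_cash": 0,              # reserves credited to a member bank
        "subsidiary_tbills": 0,      # T-bills stashed off the books
        "treasury_cash": 0,          # proceeds available to spend
    }
    # Step 1: the Fed creates reserves and credits a member bank.
    ledger["fed_reserves_created"] += amount
    ledger["bank_cash"] += amount
    # Step 2: the bank's off-book subsidiary spends the cash on T-bills.
    ledger["bank_cash"] -= amount
    ledger["subsidiary_tbills"] += amount
    # Step 3: the Treasury receives the proceeds of the bill sale.
    ledger["treasury_cash"] += amount
    return ledger

print(monetize(100))
```

The net effect is the point of the exercise: the Treasury ends up with exactly as much spendable cash as the Fed created, while the debt that funded it sits in an entity designed to keep it off the books.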
It may be a bit more complex than that. Those huge sums of money voted by Congress to bail out the financial system may well have been diverted into this process – that would certainly explain why the Department of the Treasury and the Federal Reserve Bank of New York have stonewalled every attempt to trace exactly where all that money went. Friendly foreign governments may also have a hand in the process. One way or another, though, those of my readers who remember the financial engineering that got Enron its fifteen minutes of fame may find all this uncomfortably familiar – and it is. The world’s largest economy has become, in effect, the United States of Enron.
Plenty of countries in the past have tried to cover expenses that overshot income by spinning the presses at the local mint. The result is generally hyperinflation, of the sort made famous in the 1920s by Germany and more recently by Zimbabwe. So far as I know, though, nobody has tried the experiment with a national economy in a steep deflationary depression, of the sort that has been taking shape in America and elsewhere since the real estate bubble crashed and burned in 2008. In theory, at least in the short term, it might just work; the inflationary pressures caused by printing money wholesale could conceivably cancel out the deflationary pressures of a collapsing bubble and a contracting economy – at least for a while.
The difficulty, of course, is that pumping the money supply fixes the symptoms of economic failure without treating the causes, and in every case I know of, governments that resort to it end up caught on a treadmill that requires ever larger infusions of paper money just to maintain the status quo. Sooner or later, as the amount of currency in circulation outstrips the goods and services available to buy, inflation spins out of control, the currency loses most or all of its value, and the economy grinds to a halt until a new currency can be issued on some sounder basis. In 1920s Germany, they managed this last feat by taking out a mortgage on the entire country, and issued “Rentenmarks” backed by that mortgage. In the wake of the late housing bubble, that seems an unlikely option here, though no doubt some gimmick will be found.
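The treadmill dynamic can be shown with a back-of-the-envelope sketch using the crudest form of the quantity theory of money, in which the price level is simply the money supply divided by real output. All the numbers here are invented to show the shape of the spiral, nothing more:

```python
# Back-of-the-envelope treadmill: a government claims the same fixed
# amount of real goods each year, funded entirely by printing money,
# while the real economy shrinks. Crude quantity theory: price = M / Q.

money, output = 100.0, 100.0   # arbitrary starting units
R = 5.0                        # real resources claimed each year
infusions = []

for year in range(1, 6):
    output *= 0.95                  # the real economy contracts 5% a year
    price_level = money / output    # more money chasing fewer goods
    needed = price_level * R        # nominal cost of the same real claim
    money += needed                 # print it
    infusions.append(round(needed, 2))

# Each year's infusion is larger than the last, even though the real
# claim never grows: the previous round's printing raised prices, so
# standing still costs more every time.
print(infusions)
```

The sequence rises every year; the fixed real claim of 5 units already costs more than 5 in nominal terms in the first year, and the gap widens from there. That is the treadmill in miniature: ever larger infusions just to maintain the status quo.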
It’s crucial to realize, though, that this move comes at the end of a long historical trajectory. From the early days of the industrial revolution into the early 1970s, the United States possessed the immense economic advantage of sizable reserves of whatever the cutting-edge energy source happened to be. During what Lewis Mumford called the eotechnic era, when waterwheels were the prime mover for industry and canals were the core transportation technology, the United States prospered because it had an abundance of mill sites and internal waterways. During Mumford’s paleotechnic era, when coal and railways replaced water and canal boats, the United States once again found itself blessed with huge coal reserves; and with the arrival of the neotechnic era, when petroleum and highways became the new foundation of power, the United States found that nature had supplied it with so much oil that in 1950, it produced more petroleum than all other countries combined.
That trajectory came to an abrupt end in the 1970s, when nuclear power – expected by nearly everyone to be the next step in the sequence – turned out to be hopelessly uneconomical, and renewables proved unable to take up the slack. The neotechnic age, in effect, turned out to have no successor. Since then, for most of the last thirty years, the United States has been trying to stave off the inevitable – the sharp downward readjustment of our national standard of living and international importance following the peak and decline of our petroleum production and the depletion of most of the other natural resources that once undergirded American economic and political power. We’ve tried accelerating drawdown of natural resources; we’ve tried abandoning our national infrastructure, our industries, and our agricultural hinterlands; we’ve tried building ever more baroque systems of financial gimmickry to prop up our decaying economy with wealth from overseas; over the last decade and a half, we’ve resorted to systematically inflating speculative bubbles – and now, with our backs to the wall, we’re printing money as though there’s no tomorrow.
Now it’s possible that the current US administration will be able to pull one more rabbit out of its hat, and find a new gimmick to keep things going for a while longer. I have to confess that this does not look likely to me. Monetizing the national debt, as economists call the attempt to pay a nation’s bills by means of a hyperactive printing press, is a desperation move; it’s hard to imagine any reason that it would have been chosen if there were any other option in sight.
What this means, if I’m right, is that we may have just moved into the endgame of America’s losing battle with the consequences of its own history. For many years now, people in the peak oil scene – and the wider community of those concerned about the future, to be sure – have had, or thought they had, the luxury of ample time to make plans and take action. Every so often books would be written and speeches made claiming that something had to be done right away, while there was still time, but most people took that as the rhetorical flourish it usually was, and went on with their lives in the confident expectation that the crisis was still a long ways off.
We may no longer have that option. If I read the signs correctly, America has finally reached the point where its economy is so deep into overshoot that catabolic collapse is beginning in earnest. If so, a great many of the things most of us in this country have treated as permanent fixtures are likely to go away over the years immediately before us, as the United States transforms itself into a Third World country. The changes involved won’t be sudden, and it seems unlikely that most of them will get much play in the domestic mass media; a decade from now, let’s say, when half the American workforce has no steady work, decaying suburbs have mutated into squalid shantytowns, and domestic insurgencies flare across the south and the mountain West, those who still have access to cable television will no doubt be able to watch talking heads explain how we’re all better off than we were in 2000.
Those of my readers who haven’t already been beggared by the unraveling of what’s left of the economy, and have some hope of keeping a roof over their heads for the foreseeable future, might be well advised to stock their pantries, clear their debts, and get to know their neighbors, if they haven’t taken these sensible steps already. Those of my readers who haven’t taken the time already to learn a practical skill or two, well enough that others might be willing to pay or barter for the results, had better get a move on. Those of my readers who want to see some part of the heritage of the present saved for the future, finally, may want to do something practical about that, and soon. I may be wrong – and to be frank, I hope that I’m wrong – but it looks increasingly to me as though we’re in for a very rough time in the very near future.
We may no longer have that option. If I read the signs correctly, America has finally reached the point where its economy is so deep into overshoot that catabolic collapse is beginning in earnest. If so, a great many of the things most of us in this country have treated as permanent fixtures are likely to go away over the years immediately before us, as the United States transforms itself into a Third World country. The changes involved won’t be sudden, and it seems unlikely that most of them will get much play in the domestic mass media; a decade from now, let’s say, when half the American workforce has no steady work, decaying suburbs have mutated into squalid shantytowns, and domestic insurgencies flare across the south and the mountain West, those who still have access to cable television will no doubt be able to watch talking heads explain how we’re all better off than we were in 2000.
Those of my readers who haven’t already been beggared by the unraveling of what’s left of the economy, and have some hope of keeping a roof over their heads for the foreseeable future, might be well advised to stock their pantries, clear their debts, and get to know their neighbors, if they haven’t taken these sensible steps already. Those of my readers who haven’t taken the time already to learn a practical skill or two, well enough that others might be willing to pay or barter for the results, had better get a move on. Those of my readers who want to see some part of the heritage of the present saved for the future, finally, may want to do something practical about that, and soon. I may be wrong – and to be frank, I hope that I’m wrong – but it looks increasingly to me as though we’re in for a very rough time in the very near future.