It’s been a few weeks since the conversation on this blog veered away from the decline and fall of American empire to comment on points raised by the recent Age of Limits conference. Glancing back over the detour, I feel it was worth making, but it’s time to circle back to that earlier theme and take it one step further—that is, to shift from how we got here and where we are to where we are headed and why.
That’s a timely topic just at the moment, because events on the other side of the Atlantic have placed some of the crucial factors in high relief. A diversity of forces have come together to turn Europe into the rolling economic debacle it is today, and not all of them are shared by industrial civilization as a whole. Still, a close look at the European crisis will make it possible to make sense of the broader predicament of the industrial world, on the one hand, and the way that predicament will likely play out in the collapse of what remains of the American economy on the other.
Let’s begin with an overview. During the global real estate bubble of the last decade, European banks invested heavily and recklessly in a great many dubious projects, and were hit hard when the bubble burst and those projects moved abruptly toward their real value, which in most cases was somewhere down close to zero. Only one European nation, Iceland, did the intelligent thing: it allowed its insolvent banks to fail, paid out to those depositors who were covered by deposit insurance, and drew a line under the problem. Everywhere else, governments caved in to pressure from the rentier class—the people whose income depends on investments rather than salaries, wages, or government handouts—and from foreign governments, and assumed responsibility for all the debts of their failed banks without exception.
Countries that did so, however, found that the interest rates they had to pay to borrow in credit markets quickly soared to ruinous levels, as investors sensibly backed away from countries that were too deeply in debt. Ireland and Greece fell into this trap, and turned to the IMF and the financial agencies of the European Union for help, only to discover the hard way that the "help" consisted of loans at market rates—that is, adding more debt on top of an already crushing debt burden—with conditions attached: specifically, the same conditions that were inflicted on a series of Third World countries in the wake of the 1997-98 Asian financial crisis, with catastrophic results.
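The arithmetic of that trap is worth spelling out. Here is a minimal sketch, using invented figures rather than actual Greek or Irish data, of what happens when a country borrows at market rates while austerity shrinks its economy: the debt-to-GDP ratio climbs relentlessly even though the country is running a primary surplus.

```python
# Hypothetical illustration of the debt trap described above.
# All figures are invented for illustration: a country starting at
# 150% debt/GDP, borrowing at 6%, with GDP shrinking 4% a year under
# austerity while it runs a primary surplus of 1% of GDP.

def debt_to_gdp_path(debt_ratio, interest, gdp_growth, primary_surplus, years):
    """Return the debt/GDP ratio for each year under fixed assumptions."""
    path = [debt_ratio]
    for _ in range(years):
        # Debt compounds at the market interest rate, minus whatever the
        # surplus pays down; the denominator shrinks as GDP contracts.
        debt_ratio = (debt_ratio * (1 + interest) - primary_surplus) / (1 + gdp_growth)
        path.append(debt_ratio)
    return path

path = debt_to_gdp_path(debt_ratio=1.50, interest=0.06,
                        gdp_growth=-0.04, primary_surplus=0.01, years=5)
for year, ratio in enumerate(path):
    print(f"year {year}: debt/GDP = {ratio:.0%}")
```

Under these assumptions the ratio rises from 150% to roughly 240% in five years; "help" at market rates digs the hole deeper, which is exactly the dynamic Ireland and Greece discovered.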
It used to be called the Washington Consensus, though nobody’s using that term now for the "austerity measures" currently being imposed on the southern half of Europe. Basically, it amounts to the theory that the best therapy for a nation over its head in debt consists of massive cuts to government spending and the enthusiastic privatization, at fire-sale prices, of government assets. In theory, again, debtor countries that embrace this set of prescriptions are supposed to return promptly to prosperity. In practice—and it’s been tried on well over two dozen countries over the last three decades or so, so there’s an ample body of experience—debtor countries that embrace this set of prescriptions are stripped to the bare walls by their creditors and remain in an economic coma until populist politicians seize power, tell the IMF where it can put its economic ideology, and default on their unpayable debts. That’s what Iceland did, as Russia, Argentina, and any number of other countries did earlier, and it’s the only way for an overindebted country to return to prosperity.
That reality, though, is not exactly welcome news to those nations profiting from the modern form of the wealth pump, in which unpayable loans usually play a large role. Whenever you see the Washington Consensus being imposed on a country, look for the nations that are advocating it most loudly and it’s a safe bet that they’ll be the countries most actively engaged in stripping assets from the debtor nation. In today’s European context, that would be Germany. It’s one of the mordant ironies of contemporary history that Europe fought two of the world’s most savage wars in the first half of the twentieth century to deny Germany a European empire, then spent the second half of the same century allowing Germany to attain peacefully nearly every one of its war aims short of overseas colonies and a victory parade down the Champs Élysées.
Since the foundation of the Eurozone, in particular, European economic policy has been managed for German benefit even—as happens tolerably often—at the expense of other European nations. The single currency itself is an immense boon to the German economy, which spent decades struggling with exchange rates that made German exports more expensive, and foreign imports more affordable, to Germany’s detriment. The peseta, the lira, the franc and other European currencies can no longer respond to trade imbalances by losing value relative to the deutschmark now that it’s all one currency. The resulting system—combined with the free trade regulations demanded by economic orthodoxy and enforced by bureaucrats in Brussels—has functioned as a highly efficient wealth pump, and has allowed Germany and a few of its northern European neighbors to prosper while southern Europe stumbles deeper into economic collapse.
In one sense, then, it’s no wonder that German governmental officials are insisting at the top of their lungs that other European countries have to bail out failing banks and then use tax revenues to pay off the resulting debt, even if that requires those countries to follow what amounts to a recipe for national economic suicide. The end of the wealth pump might not mean the endgame for Germany’s current prosperity, but it would certainly make matters much more difficult for the German economy, and thus for the future prospects not only of Angela Merkel but of a great many other German politicians. Now even the most blinkered politician ought to recognize that trying to squeeze the last drop of wealth out of southern Europe is simply going to speed up the arrival of the populist politicians mentioned a few paragraphs back, but I suppose it’s possible that this generation of German politicians is too clueless or too harried to think of that.
Still, there may be more going on, because all these disputes are taking place in a wider context.
The speculative bubble that popped so dramatically in 2008 was by most measures the biggest in human history, considerably bigger than the one that blew the global economy to smithereens in 1929. Even so, the events of 1929 and the Great Depression that followed it remain the gold standard of economic crisis, and the managers of the world’s major central banks in 2008 were not unaware of the factors that turned the 1929 crash into what remains, even by today’s standards, the worst economic trauma of modern times. Most of those factors amount to catastrophic mistakes on the part of central bankers, so it’s just as well that today’s central bankers are paying attention.
The key to understanding the Great Depression lies in understanding what exactly it was that went haywire in 1929. In the United States, for example, all the factors that made for ample prosperity in the 1920s were still solidly in place in 1930. Nothing had gone wrong with the farms, factories, mines and oil wells that undergirded the American economy, nor was there any shortage of laborers ready to work or consumers eager to consume. What happened was that the money economy—the system of abstract tokens that industrial societies use to allocate goods and services among people—had frozen up. Since most economic activity in an industrial society depends on the flow of money, and there are no systems in place to take over from the money economy if that grinds to a halt, a problem with the distribution and movement of money made it impossible for the real economy of goods and services to function.
That was what the world’s central bankers were trying to prevent in 2008 and the years immediately afterward, and it’s what they’re still trying to prevent today. That’s what the trillions of dollars that were loaned into existence by the Fed, the Bank of England, and other central banks, and used to prop up failing financial institutions, were meant to do, and it may well be part of what’s behind the frantic attempts already mentioned to stave off defaults and prevent banks from paying the normal price for the resoundingly stupid investments of the former boom—though of course the unwillingness of bankers to pay that price with their own careers, and the convenience of large banks as instruments of wealth pumping, also have large roles there.
In 1929 and 1930, panic selling in the US stock market erased millions of dollars in notional wealth—yes, that was a lot of money then—and made raising capital next to impossible for businesses and individuals alike. In 1931, the crisis spread into the banking industry, kicking off a chain reaction of bank failures that pushed the global money system into cardiac arrest and forced the economies of the industrial world to their knees. The world’s central bankers set out to prevent a repeat of that experience. It’s considered impolite in many circles to mention this, but by and large they succeeded; the global economy is still in a world of hurt, no question, but the complete paralysis of the monetary system that made the Great Depression what it was has so far been avoided.
There’s a downside to that, however, which is that keeping the monetary system from freezing up has done nothing to deal with the underlying problem driving the current crisis, the buildup of an immense dead weight of loans and other financial instruments that are supposed to be worth something, and aren’t. Balance sheets throughout the global economy are loaded to the bursting point with securitized loans that will never be paid back, asset-backed securities backed by worthless assets, derivatives of derivatives of derivatives wished into being by what amounts to make-believe, and equally valuable financial exotica acquired during the late bubble by people who were too giddy with paper profits to notice their actual value, which in most cases is essentially zero.
What makes this burden so lethal is that financial institutions of all kinds are allowed to treat this worthless paper as though it still has some approximation of its former theoretical value, even though everyone in the financial industry knows how much it’s really worth. Forcing firms to value it at market rates would cause a catastrophic string of bankruptcies; even forcing firms to admit to how much of it they have, and of what kinds, could easily trigger bank runs and financial panic; while it remains hidden, though, nobody knows when enough of it will blow up and cause another financial firm to implode—and so the trust that’s essential to the functioning of a money economy goes away in a hurry. Nobody wants to loan money to a firm whose other assets might suddenly turn into twinkle dust; for that matter, nobody wants to let their own cash reserves decline, in case their other assets turn into the same illiquid substance; and so the flow of money through the economy slows to a creep, and fails to do the job it’s supposed to do of facilitating the production and exchange of goods and services.
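A toy balance sheet makes the point concrete. The figures below, and the ten-cents-on-the-dollar recovery rate, are invented purely for illustration; the mechanism, not the numbers, is what matters.

```python
# Toy illustration of why marking "worthless paper" to market is so
# dangerous for the firms holding it. All figures are invented.

def equity(sound_assets, impaired_book, recovery_rate, liabilities,
           mark_to_market):
    """Shareholder equity under book-value vs. market-value accounting."""
    if mark_to_market:
        impaired = impaired_book * recovery_rate  # what the paper would actually fetch
    else:
        impaired = impaired_book                  # carried at its former face value
    return sound_assets + impaired - liabilities

# A hypothetical bank: $60bn of sound assets, $50bn of bubble-era paper
# on the books, $95bn of liabilities, and a market for that paper of
# roughly ten cents on the dollar.
book_equity = equity(60, 50, 0.10, 95, mark_to_market=False)
market_equity = equity(60, 50, 0.10, 95, mark_to_market=True)

print(book_equity)    # 15: a healthy-looking bank
print(market_equity)  # -30: insolvent the moment the paper is priced honestly
```

On book values the bank shows $15bn of equity; at market prices it is $30bn underwater. That gap is why forcing honest valuation would set off a string of bankruptcies, and why nobody trusts anybody else’s balance sheet in the meantime.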
That’s the risk you take when you try to stop a financial panic without tackling the underlying burden of worthless assets generated by the preceding bubble. Much more often than not, in the past, it’s been a risk worth running; if you can only hold on until the impact of the panic fades, economic growth in some nonfinancial sector of the economy picks up, the financial industry can replace its bogus assets with something different and presumably better, and away you go. That’s what happened in the wake of the tech-stock panic of 2000 and 2001: the Fed dropped interest rates, made plenty of money available to financial firms in trouble, and did everything else it could think of to postpone bankruptcies until the housing bubble started boosting the economy again. It doesn’t always work—Japan has been stuck in permanent recession in the wake of its gargantuan 1990 stock market crash, precisely because it didn’t work—but in American economic history, at least, it’s tended to work more often than not.
Still, there was a major warning sign in the wake of the tech-stock fiasco that should have been heeded, and was not: what boosted the economy anew wasn’t an economic expansion in some nonfinancial sector, but a second and even larger speculative bubble in the financial sphere. I discussed in posts late last year (here and here) ecological economist Herman Daly’s comments about shortages of "bankable projects"—that is, projects that are likely to bring in enough of a return on investment to pay back loans with interest and still make a profit for somebody. In an expanding economy, bankable projects are easy to find, since it’s precisely the expansion of an expanding economy that makes a positive return on investment the normal way of things. If you flood your economy with cheap credit to counter the aftermath of a speculative bubble, and the only thing that comes out of it is an even bigger speculative bubble, something significant has happened to the mechanisms of economic growth.
More specifically, something other than a paralysis of the money system has happened to the mechanisms of economic growth. That’s the unlearned lesson of the last decade. In the wake of the 2001 tech stock crash, and then again in the aftermath of 2008’s even larger financial panic, the Fed flooded the American economy with quantities of cheap credit so immense that any viable project for producing goods and services ought to have been able to find ample capital to get off the ground. Instead of an entrepreneurial boom, though, both periods saw money pile up in the financial industry, because there was a spectacular shortage of viable projects outside it. Outside of manipulating money and gaming the system, there simply isn’t much that makes a profit any more.
Next week we’ll discuss why that is happening. In the meantime, though, it’s important to note what the twilight of investment means for the kick-the-can strategy that’s guiding the Fed and other central banks in their efforts to fix the world economy. That strategy depends on the belief that sooner or later, even the most battered economy will finally begin to improve, and the upswing will make it possible to replace the temporary patches on the economy with something more solid. It’s been a viable strategy fairly often in the past, but it worked poorly in 2001 and it doesn’t appear to be working at all at this point. Thus it’s probable that the Fed is treading water, waiting for a rescue that isn’t on its way; what will happen as that becomes clear is something we’ll be exploring at length later on in this series of posts.
End of the World of the Week #27
Those of my readers who, as I was, were young and impressionable in the early 1970s may well remember The Jupiter Effect, a once-famous book by John Gribbin and Stephen Plagemann which saw print in 1974. Gribbin and Plagemann were astrophysicists, which gave their claims a good deal of apparent credibility. They had noticed that on March 10, 1982, most of the planets in the solar system would be aligned with one another, and Jupiter—the most massive of the planets—would be part of the alignment as it passed through its closest approach to Earth. This additional gravitational tug, they proposed, would set off sunspots, solar flares, and earthquakes, making life interesting for our species and just possibly ending life on Earth.
The Jupiter Effect quickly developed a following, and joined a flurry of other apparently scientific predictions of imminent doom in circulation at that time. I’m not sure how many people were still waiting for catastrophe when 1982 came around, but any who were must have been sorely disappointed; the sunspot count for March 10 was nothing out of the ordinary, the solar flares and earthquakes failed to put in an appearance, and the only known result of the Jupiter Effect was that high tides that day were all of 0.04 millimeters higher than they would otherwise have been.
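A back-of-the-envelope calculation shows why the fizzle was predictable all along. Tidal acceleration scales as a body’s mass divided by the cube of its distance, so comparing Jupiter at roughly its closest approach with the Moon gives a quick sense of proportion; the masses and distances below are standard round figures.

```python
# Why the Jupiter Effect fizzled: tidal acceleration scales as M / d**3,
# so even the solar system's most massive planet barely registers next
# to the Moon. Round-number values for mass and distance.

M_MOON = 7.35e22        # kg
D_MOON = 3.84e8         # m, mean Earth-Moon distance
M_JUPITER = 1.90e27     # kg
D_JUPITER = 5.9e11      # m, approximate closest Earth-Jupiter distance

ratio = (M_JUPITER / M_MOON) * (D_MOON / D_JUPITER) ** 3
print(f"Jupiter's tidal pull is about {ratio:.1e} of the Moon's")
```

The ratio works out to a few millionths: Jupiter’s tidal influence on Earth, even at closest approach, is vanishingly small next to the Moon’s, which is consistent with the barely measurable extra tide mentioned above.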