
Growing Apart

A Political History of American Inequality

Colin Gordon, Author


Choosing Unemployment: Macroeconomic Policy and American Inequality

Macroeconomic policies, as the name suggests, take aim at the economy’s overall performance on everything from growth and employment to price stability. Fiscal policies use taxes and public spending to shape economic growth, demand, and distribution. Monetary policies use interest rates and government securities to control the supply of money and the pace of economic growth. Macroeconomic policies, in other words, speak to both the broad parameters and priorities of public policy—where and how we raise and spend money—and the more immediate management of interest rates and the money supply.

Government revenue and spending policies, in the aggregate, have a substantial impact on economic growth and economic distribution. Federal spending reached about 20 percent of GDP during the Korean War and has been there ever since, dipping a little during good times, rising a little during recessions. Most of this spending—and its distribution—reflects changing policy priorities and demands. Defense spending, for instance, rose into the Vietnam era and then declined, with spikes in the early 1980s and after 9/11. Since then, spending on social programs, especially health care, has taken up most of the slack.

Fiscal policy also reflects the countercyclical logic of public finance: spending as a share of the economy tends to rise during economic downturns and fall during expansions, while tax revenue does the opposite. In this sense, policy commitments, especially on social programs, lean against fluctuations in the business cycle, protecting the most vulnerable and sustaining demand during hard times. Additional spending commitments (a “stimulus,” for example) can enhance these automatic stabilizers. On the other hand, cutbacks elsewhere in government budgets, at either the state or federal level, can blunt them. Choices on the tax side of the ledger, meanwhile, shape the distributional impact of all of this.

In this respect, fiscal and monetary policies are often understood to involve two kinds of choices. The first of these is about deficits (the gap between government revenues and expenditures in a given year) and debt (the accumulation of those shortfalls over time). Although fretting about the debt is now a bipartisan hobby, much of this is cynical or misplaced. As Jared Bernstein and others have argued tirelessly, the issue is not debt itself but “what are you borrowing for, how long you will need to pay for it, and how you are going to pay it back.” And the only reason that spending decisions are linked so inextricably to deficit worries is that we have virtually foreclosed the possibility of new revenues.
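
To make the distinction concrete, debt is nothing more than the running sum of each year’s shortfall. A minimal sketch in Python, using made-up figures rather than actual budget data:

```python
# Illustrative only: hypothetical annual budget balances, in billions of
# dollars (positive = deficit, negative = surplus). Debt at any point is
# simply the accumulation of these yearly shortfalls.
annual_deficits = [250, 400, -50, 300]  # made-up figures

debt = 0
for year, deficit in enumerate(annual_deficits, start=1):
    debt += deficit
    print(f"Year {year}: balance {deficit:+} billion, accumulated debt {debt} billion")
```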

The second choice, casting a long shadow over fiscal and monetary policies, is the supposed “trade-off” between jobs and prices—between those policies that lower the unemployment rate (but risk a higher rate of inflation) and those that control prices (but risk a higher rate of unemployment). In theory, the goal is to sustain growth and employment just to the point at which they start to push up prices.  In practice, U.S. policymakers have (especially since the 1970s) allowed inflation anxieties to trump all other macroeconomic goals. The result is a starkly asymmetrical success story: over the last thirty years, the annual inflation rate has topped 5 percent only once (reaching 5.4 percent in 1990); it has been under 4 percent for twenty-five of those thirty years; and it has been at or below 3 percent for seventeen of them [see graphic below]. But the social costs have been high. Over that same span, wages have stagnated and the share of national income going to wages has fallen steadily, while cycles of recession and unemployment have continued unabated.
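
The inflation counts above can be checked directly from a consumer price index, since the annual inflation rate is just the year-over-year percentage change in the index. A rough sketch, with a short invented series standing in for the actual CPI-U data:

```python
# Sketch: derive annual inflation from a price index and count years by
# threshold. The index values are invented; substitute the actual CPI-U
# series to reproduce the figures quoted in the text.
cpi = [100.0, 103.1, 105.8, 111.5, 114.2, 116.4]  # hypothetical year-end values

inflation = [(later / earlier - 1) * 100 for earlier, later in zip(cpi, cpi[1:])]

print("years above 5 percent:   ", sum(rate > 5 for rate in inflation))
print("years under 4 percent:   ", sum(rate < 4 for rate in inflation))
print("years at/below 3 percent:", sum(rate <= 3 for rate in inflation))
```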


A Short History of U.S. Macroeconomic Policy


For the first half of the last century, fiscal and monetary policies were modest tools. The Federal Reserve, established in 1913, was charged with maintaining an “elastic” currency and establishing “a more effective supervision of banking.” The Fed could expand the money supply by lending to its members or by purchasing government securities, and its presence as a “lender of last resort” could smooth out interest rates and prevent bank runs. But America’s adherence to the gold standard meant that the Fed had little impact on inflation in normal times and little capacity to arrest it during moments of upheaval—as during World War I, when an inflow of gold from Europe early in the conflict and a spike in federal spending after American entry temporarily played havoc with the value of the dollar.

The war boom lingered into 1919, as postwar relief and consumption of durable goods (interrupted during the war) sustained demand. Despite the combination of low unemployment and high inflation, the Fed initially maintained a light hand in order to reduce the cost of retiring war-era debt and sustain the value of the final issue of war bonds, but then raised rates in late 1919 and early 1920, just as the postwar boom began to falter. Whether the Fed was more concerned with its own position than with that of the broader economy, or simply misunderstood the timing and impact of higher rates, its actions deepened the recession of 1920–21.

Through the 1920s, the Republican Harding, Coolidge, and Hoover administrations viewed monetary policy narrowly, as a tool for ensuring the stability of the banking system. A few mavericks, including Benjamin Strong, the influential governor of the New York Federal Reserve Bank, took a broader view. They recognized that buying and selling government securities could actually shape access to credit and the pace of economic growth.

The economic crisis that began in the late 1920s would expose the limits—both institutional and ideological—of the era’s conventional wisdom. The Great Depression was, above all, a crisis of inadequate demand born of background inequality. But the Fed played a crucial, and counterproductive, role: it let too much easy credit flow into stock speculation in the 1920s, then turned off the spigot too late. The Fed then made matters worse by failing to turn the spigot back on after the market crash—when lower rates and easier access to credit might have arrested the nation’s slide into recession—and fumbled its role as the lender of last resort to a panicked banking system.

The 1930s did see the embrace of some countercyclical spending, but while New Deal programs yielded substantial gains in productivity and employment, their scale was never sufficient to accomplish full recovery. Within a decade the calculus had changed dramatically—in part because New Deal policies now put a floor under the economy, and in part because spending on World War II hammered home the effectiveness of a large-scale “stimulus.” In a sense, the Fed emerged from the war more powerful but less needed, and responded accordingly. Fed officials followed a “lean against the wind” policy that, in a growing economy bolstered by a strong international presence, meant only occasional intervention to slow growth and check inflation. This pattern of the 1950s and 1960s yielded an abiding faith in the notion of a simple and stable trade-off between inflation and unemployment, the so-called “Phillips curve.”
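
In its textbook form (a standard formulation, not one spelled out in this chapter), that trade-off is usually written as

\[
\pi = \pi^{e} - \beta\,(u - u^{*})
\]

where \(\pi\) is inflation, \(\pi^{e}\) expected inflation, \(u\) the unemployment rate, \(u^{*}\) the “natural” rate consistent with stable prices, and \(\beta\) the slope of the curve. The midcentury faith described here amounted to treating \(\beta\) as fixed and reliable; the “flattening” discussed later in this chapter corresponds to a small \(\beta\), under which changes in unemployment barely move prices.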

This faith would be misplaced. In the long shadow cast by the 1930s, monetary policy proceeded on the assumption that the nation faced no major risk of serious or sustained unemployment—and that it could always wrestle a little more production and a few more jobs out of the economy. This optimism eventually generated inflationary pressures, magnified by the fiscal burden of the war in Vietnam and then by the price shock of the OPEC oil embargo. Things soon began to fall apart, and the Fed lacked the will, the experience, or the authority to respond.

The resulting “stagflation”—simultaneously high rates of inflation and unemployment—undercut the core premises of the “Phillips curve” trade-off. The Nixon administration responded in 1971–72 with wage and price controls, the first ever imposed in peacetime. When inflation proved stubborn, the Fed tightened the money supply in 1974 even as the economy slowed, pushing unemployment up to 9 percent. The impact on prices—which were inflated more by the oil shock than by an overheated economy—would be minimal.

As inflation persisted, the Fed, now under new chairman Paul Volcker, took more dramatic steps. In late 1979, the Fed set aside its usual practice of targeting interest rates directly and instead tightened control over private bank reserves, effectively reducing the supply of money. The effect was the same—interest rates rose sharply as money became scarce—but the approach provided political cover, since it was now market forces, rather than the Fed itself, bidding up interest rates.

Volcker’s Fed clung to the policy even as the economy plunged into recession, winning price stability at the expense of sharply slower growth, starkly higher unemployment, and yawning trade deficits. The Reagan administration backed tight monetary policies through the 1980s, in part to sustain low rates of inflation and in part as a way of exporting austerity politics to the rest of the world in what became known as the “Washington Consensus.”

Through the Clinton and Bush years, the Fed (under Alan Greenspan) kept interest rates low even as the economy boomed. As unemployment dipped to a twenty-year low, wage gains remained modest and the overall rate of inflation remained low. In effect, the “cold bath” monetary policy of the preceding decades had worked so well that an economic boom no longer brought with it the same inflationary pressures. Of course, low interest rates didn’t just sustain growth. They also inflated the housing bubble that would pop so dramatically in 2007.

The federal budget, for its part, had fallen into deficit during the economic crisis of the early 1970s. With the exception of a brief surplus in the late 1990s, it would remain there, and this annual shortfall would begin to erode a longstanding consensus on fiscal policy that once had even conservative economists concluding, as Milton Friedman did glumly in 1966, that “we are all Keynesians now.” In the new deficit environment, supply-side budget hawks in both parties began questioning the logic or sustainability of federal programs and commitments. Tax cuts intended to “starve the beast” further—and intentionally—undermined budgetary support for a raft of other policies.

Since the 1970s, macroeconomic policy has firmly—and destructively—chased the goals of price stability and deficit reduction. This is, by any measure, a curious choice. Inflation has not reared its head in a generation [see graphic below]. And the cures policymakers have advanced for the deficit—such as sweeping cuts to “entitlement” programs—don’t address the actual sources of these deficits: the rising cost of medical care in the United States, increases in military spending, and revenue losses from tax cuts and economic recessions. More importantly, this faulty macroeconomic policy has contributed directly to growing inequality.

Macroeconomic Policy and American Inequality


This combination of “foot-on-the-brake” monetary policy and pervasive budgetary anxiety feeds inequality at both ends of the income spectrum. The preoccupation with inflation serves, first and foremost, the nation’s financial powers. For Wall Street, inflation has always been a particular and overriding concern. By calibrating its policies to this concern, the Fed has operated over recent decades as a guardian of the interests and assets of its member banks, not as a steward of economic growth.

This fixation on “sound money,” in turn, has had real and unfortunate consequences for working Americans. And these consequences, including high unemployment and its associated social and economic ills, haven’t been accidental. In practical terms, policymakers have dampened inflation by dampening wages, through both sustained joblessness and a wider range of policies (deregulation, trade liberalization, attacks on collective bargaining, cuts to social programs) designed to erode workers’ bargaining power. Indeed, the architects of this policy have consistently described America’s retreat from full employment as a way of “zapping labor” with concessionary bargaining, trade exposure, and monetary restraint.

That such policies continue unabated only adds insult to injury: given the dramatic losses in bargaining power since the 1970s, even sustained economic growth is unlikely to generate wage-based inflationary pressures. As Dean Baker and Jared Bernstein have argued, recent experience suggests that an unemployment rate as low as 4 percent (last seen during the boom of the late 1990s) runs little risk of spurring inflation—while at the same time promising to deliver substantial benefits, including higher wages (especially for those toward the bottom of the earnings distribution) and healthier fiscal returns. The “Phillips curve” has flattened in recent business cycles, meaning that even if pushing the jobless rate below 5 or 4 percent did spur inflation, the cost of approaching full employment would still be dwarfed by the benefits.

Unemployment and underemployment entail dramatic economic hardship. (For a full graphical overview, see the work of Mike Alberti at Remapping Debate.) The choice of price stability over full employment pushes the burden of economic volatility down the income ladder. Slack in the labor market contributes to wage weakness across the board, but especially at the lower wage deciles. Over the past generation, the only respite from unrelenting downward pressure on wages came during a brief spell of full employment in the late 1990s. Those years saw wage gains across the board, closely resembling the shared prosperity of the 1947–1973 era. But on either side of that boom, when high rates of unemployment were the norm, wages (especially for those at the median and below) fell steadily [see graphic below].


Unemployment also brings with it a range of burdens that don’t always register with economists. Joblessness undermines workers’ health, in part by undercutting health coverage and in part by raising personal levels of stress and anxiety. Unemployment undermines family security and economic mobility, yielding higher divorce rates and upheaval for children. Lengthy spells of unemployment erode skills, professional contacts, and reemployment prospects. This threatens both future incomes for those affected and economic productivity across the economy. In short, unemployment undermines economic growth.

Among those burdened or threatened by unemployment, America’s most vulnerable—the young, the old, the less educated, and African-American and Latino workers—suffer the most. (See again Mike Alberti’s graphical overview at Remapping Debate.) This widens our income and wealth gap even further, as low-wage workers are more likely to face unemployment, more likely to face long spells of unemployment, and more likely to go without job-based health insurance.

In turn, the perils of unemployment are exacerbated by patterns of underemployment. Our recent recession and recovery featured not only a sustained unemployment crisis but also a number of troubling weaknesses in the labor market. And progress on each of these [summarized in the graphic below] has been even slower. Long-term unemployment (the share of the unemployed who have been without work for twenty-seven weeks or longer) shot up during the recession and has stayed high. The share of part-time work, and more importantly the share of “involuntary” part-time workers (those who want full-time work but can’t get it), are also on the rise—and are seemingly immune to the recovery.


The insured unemployment rate (the share of the labor force that is unemployed and drawing unemployment benefits) captures the economics and politics of the last business cycle. At the depth of the recession, about 5 percent of the labor force was unemployed and drawing benefits. Today, four years into a recovery punctuated by federal sequestration and a carnival of nastiness in state politics, only 2.3 percent are unemployed and currently covered. The rate of unemployment is falling slowly. But the rate of unemployment that we are doing anything about is dropping like a stone.
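
These measures reduce to simple ratios, and a minimal sketch makes the definitions concrete. The counts below are invented for illustration; actual figures come from the BLS household survey and state unemployment-insurance claims data.

```python
# Hypothetical counts, for illustration only (not actual BLS/DOL data).
labor_force = 155_000_000
unemployed = 11_000_000        # jobless workers actively seeking work
long_term = 4_000_000          # of those, jobless twenty-seven weeks or longer
drawing_benefits = 3_600_000   # of those, currently receiving UI benefits

print(f"headline unemployment rate:        {unemployed / labor_force:.1%}")
print(f"long-term share of the unemployed: {long_term / unemployed:.0%}")
print(f"insured unemployment rate:         {drawing_benefits / labor_force:.1%}")
```

With these invented counts, the headline rate is about 7.1 percent while the insured rate is only 2.3 percent: the gap between joblessness and the joblessness we are doing anything about.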

The unwillingness to pursue full employment is accompanied, compounded, and constrained by the simultaneous pursuit of budgetary austerity. After a brief flirtation with “stimulus” in 2008–9 (an infusion watered down by cuts in local and state spending), Congress has dramatically reined in spending—a tack unprecedented [see graphic below] on the recovery side of the business cycle. This, as Jared Bernstein points out, is a little like medieval bleeding—it’s based on faith (or bad science), it uses the wrong tools, and it is virtually guaranteed to make the patient sicker.

Austerity sustains and widens inequality, on both the taxation and spending sides of the ledger. Public goods and services become harder to sustain, and even countercyclical stabilizers like unemployment insurance and food stamps come under attack. Altogether, the doctrine of austerity flies in the face of historical and comparative evidence demonstrating, convincingly and consistently, that fiscal restraint in hard times is cruel and counterproductive. (For a refreshing alternative proposal, see the Congressional Progressive Caucus’s “Better Off Budget.”) And austerity’s own logic—that debt levels in the United States are both unsustainable and killing growth—collapses under close scrutiny.

Even amid the economic woes and enthusiasm for austerity sweeping much of Europe, the United States remains an outlier. At the nadir of the recession (January 2009), our unemployment rate pushed higher than that of almost all our peers [see graphic below]. Five years later (February 2014), we are closer to the middle of the pack, but—across the downturn and recovery—our support for the unemployed has remained comparatively meager. In the United States, eligibility is more stringent, the long-term unemployed are more likely to fall off the rolls, and the benefit itself (the replacement rate) is smaller. And, in the American context, interruptions in employment are all the more damaging because they also interrupt access to job-based health coverage and the accumulation of retirement savings.


We understand the benefits of full employment, yet refuse to pursue it. Tight labor markets bid up wages—especially for those at the lower end of the scale—and help to arrest wage and income inequality. With improvements in workers’ bargaining power come improvements in job quality, including more expansive or generous benefits. And tight labor markets open more employment opportunities, encouraging participation in the labor force and movement from part-time to full-time employment for those who want it.

The gains to individual workers and their families, in turn, spread across the economy. Employers facing higher labor costs are encouraged to ensure that the labor they pay for is productive, and are more likely to invest in training and other efficiencies. Healthy rates of labor force participation and wage growth reduce demands on public programs while—via income, sales, and payroll taxes—filling public coffers. All of the deleterious effects of chronic unemployment are reversed: workers are healthier, better educated, and more mobile. And full employment is the clearest and best solution to the demand gap that is prolonging the recession.

