
Growing Apart

A Political History of American Inequality

Colin Gordon, Author


A Tattered Safety Net: Social Policy and American Inequality

In examining the political roots of inequality in the United States, we have so far focused on the basic framework for private compensation—including private and public policies that shaped the bargaining power of workers, the regulatory “floor” of wage and hour standards, and patterns of job-based health and retirement security. The next step is to look at those policies that kick in when the private labor market fails—what is commonly called “the welfare state.” 

Such policies provide individuals and families with protection against the uncertainty of the market by managing risks (unemployment or retirement, for example) or by shaping the distribution of incomes more directly—through the tax system, the provision of public goods and services (education, health care), and direct assistance (such as cash benefits or food stamps). In turn, the terms of these policies set social goals and priorities—for example by focusing their attention on those deemed especially deserving (mothers, children, veterans, the elderly), or by setting conditions on the receipt of assistance (such as getting a job or passing a drug test).

The American welfare state is widely regarded as a poor cousin to those of its democratic peers. As the most unequal wealthy country, the United States also does the least to address that inequality through public policy—despite strong historical and international evidence that social spending programs can drastically reduce inequality. Our welfare state spends less, as a share of the national economy, on basic social programs. Its programs are both less universal in their coverage and more intrusive. And American social policies are remarkably deferential to the market—allowing employment status to largely determine the distribution of public and private benefits alike, and assuming that the purpose of public programs is to mop up those whom private coverage has left behind.

A Short History of American Social Policy


The development of American social policy got off to a shaky start in the first decades of the twentieth century. Early innovations occurred almost exclusively at the state and local level, an approach that reflected federal weakness and uneven regional motives: the “Jim Crow” South betrayed little interest in policies that might challenge the racial logic of its low-wage economy; other states hesitated to institute policies that might put them at a competitive disadvantage. Provision was further fragmented (and limited) by a fascination with job-based “social insurance” programs, whose contributory funding mechanism avoided both the universalism implicit in general revenue financing and the stigma attached to benefits based on need.

Public programs, meanwhile, reached out reluctantly to a few categories of the exceptionally needy: widows, veterans, single mothers, children, and the disabled. The result was a skeleton of a welfare state, whose most conspicuous feature was the distinction it drew between benefits based on employment (such as workers’ compensation) and those based on need (such as widows’ pensions). A patchwork of scattered public programs, charitable relief, and various forms of mutual aid provided some security, but paled against the misery of the emerging industrial economy and the volatility of the business cycle. When the economy crashed in 1929, voluntarism and local relief crashed just as spectacularly.*

The New Deal pulled together these diverse strands of relief, but left intact the basic division between job-based programs and limited assistance to the most needy. The Social Security Act included discrete titles offering benefits for the elderly poor, a contributory retirement program, unemployment insurance, aid to dependent children, and funds for maternal and child health. The Act was fiscally conservative, decentralized, and limited in its reach. At its core were the contributory social insurance programs: unemployment insurance and old age pensions. At its periphery were the titles covering the categorically eligible on a means-tested basis (such as aid to dependent children). Wherever these titles threatened regional interests, the Act deferred the administrative details (including the level of benefits) to the states. And those social programs that could not accommodate such compromises—most famously health care—were dropped from the Act altogether.

Inequality was embedded in the design and logic of the Social Security Act. As with job-based benefits, payroll taxes (and the contributory programs they supported) were easily introduced at the core of the economy but not at its edges—indeed, agricultural workers and domestic workers were specifically excluded from coverage. Such exclusions, alongside the deference to local administration of all but the old-age pension programs, were direct and effective concessions to the southerners in Congress—who urgently wanted federal relief, but just as urgently wanted to ensure that federal benefits would not upset the economic and racial order of Jim Crow.

As late as 1939, eight states had not even created local programs, and the federal government did not have a uniform set of rules and requirements for state programs until 1967. A “reasonable subsistence compatible with decency and health” threshold was struck from the original Act in favor of a clause requiring assistance “as far as practicable under the conditions in such State.” Social policies for men and women proceeded on separate and unequal tracks, as the economic crisis of the 1930s reinforced the distinction (at once ethical, political, and fiscal) between social insurance for male breadwinners and social welfare for children, mothers, and widows.

Despite all of these shortcomings, however, the impact of the Social Security titles was dramatic. Consider the old age security program. In the years after its passage in 1935, the reach and generosity of the program broadened: In 1939, allowance was made for the payment of retirement benefits to a spouse or minor children and for the payment of survivor benefits when covered workers died before retirement. In 1950 and 1954, occupational coverage was expanded to include regularly employed farm and domestic workers, many self-employed professionals, state and local workers, and employees of nonprofits. In 1956, insurance against disability was added, and initially restrictive provisions (age thresholds, retroactivity, waiting periods) were loosened over time. In 1965, Medicare extended health care to the elderly. And, across this era, the benefits, wage base, and contributions of employees and employers were ratcheted up.    

The trajectory of Aid to Dependent Children was more complicated. As with the pension program, the modest provisions of the original title expanded over time. The title’s original focus on “needy dependent children” widened after 1950 to allow for coverage of parents, and in 1962 the program changed its name to Aid to Families with Dependent Children (AFDC). Such expansion first allowed states to claim federal reimbursement for broader coverage, and then required them to do so. Both the AFDC caseload [see graphic below] and benefit levels grew dramatically through the 1960s: the cash value of AFDC, food stamps, and Medicaid almost doubled between 1965 and 1975. This growth reflected the “Great Society” expansion of social assistance programs, successful legal challenges to idiosyncratic and often discriminatory eligibility provisions in the states (such as the “absent father” rule), and the remarkable organizational efforts of welfare recipients themselves.*

The innovations of the Great Society—which included both expansion of existing programs and new forays into urban policy, health care (Medicare and Medicaid were established in 1965), education, and civil rights—were the high-water mark in the history of the American welfare state.* Doubts about the legitimacy of AFDC and other means-tested programs picked up steam in the 1970s. In part, this was a consequence of their modest success: as discriminatory provisions fell and caseloads began to better resemble the demographics of the nation’s poor, more legislators invoked racial stereotypes—about the dysfunction of black families, the promiscuity of black women, and the work ethic of black men—to challenge social spending policies.

These anxieties were exacerbated by state and federal fiscal troubles, which encouraged ever more stringent targeting of means-tested programs. Over time, the original goal of AFDC—to enable marginal workers, and especially the mothers of young children, to leave the labor market—became its greatest political liability. If the old age insurance program was the “third rail” of American social policy, which even conservatives touched at their peril, AFDC became a political piñata: politicians lined up to take turns bashing it—and both wild swings and direct hits seemed to pay off.*

The attack on the welfare state took a number of forms. Most directly, the federal government cut spending and narrowed eligibility (often with more stringent work requirements). Beginning with Nixon’s “new federalism,” the federal government also moved to push more fiscal and programmatic responsibility back to the states. Tax cuts and fiscal constraints (at both the federal and state levels) had the effect of locking in these cuts by subjecting programs to constant budgetary pressure. And the steady erosion of the racial liberalism of the Great Society—most evident in the Reagan-era retreat from civil rights regulations and enforcement—hardened inaccurate but politically potent assumptions about those who turned to state assistance.  

From the early 1970s on, innovations in social policy consisted largely of efforts to either cut public and private costs (embodied in the protracted health care debate) or to strangle access to public programs. In 1996, the Clinton Administration and a Republican Congress “ended welfare as we knew it,” replacing the Social Security Act’s AFDC title with a block grant scheme that froze federal spending and—in a complete reversal of the program’s original premises—pushed mothers into the labor force. In short order, the new program—Temporary Assistance for Needy Families (TANF)—cut program participation and benefits in half. The economic boom of the late 1990s cushioned the immediate impact, as many of those pushed off the rolls could find work, but—by the onset of the 2001 recession—these job prospects had largely disappeared.*

Even at its peak, the American welfare state betrayed a number of key weaknesses. Its coverage was fragmented, especially by the lasting split between contributory “entitlements” and needs-based, means-tested programs. The politics of this were shaped by race (the Jim Crow aversion to federal standards or universal coverage) and gender (the core and persistent assumption that men were to be protected as breadwinners and women were to be protected as mothers). And the scope and structure of key programs reflected an unusual deference to markets—especially the assumption that state programs would supplement, but not displace, the security that flowed from employment.

American Social Policy and American Inequality


Social policy exists to cushion the impact of the market—including the inequality that follows from disparate patterns of education, employment, and opportunity. It aims to secure the incomes of workers, and to bolster the incomes of those who cannot (or cannot be expected to) work. On this score, the United States has sustained a very different boundary between the market and the policies that mop up after market failures than most of its peers. Many basic benefits, as we have seen in the case of health care, are more private than public. And American policies, following the expectation that security would and should flow from employment, are less generous across the board. 

Even public programs increasingly favor workers over nonworkers—a pattern evident in sustained support for payroll-based benefits and the Earned Income Tax Credit, alongside declining support for AFDC and TANF—in an era in which work itself is increasingly scarce, unpredictable, contingent, or temporary. Weak social policy is accompanied by weak labor market policies, and neither imposes much of an obligation on private employers. American social policy has few universal programs, preferring to fragment coverage and eligibility by need, contribution, age, family status, and geography. All of this is particularly true of the last generation of social policy, which, pivoting on the repeal of AFDC in 1996, has seen the United States cut social protections more fiercely and more deeply than most of its peers.

These cuts would be less dispiriting if there were any doubt about the impact of social policy on inequality. But the historical record shows that the most ambitious social programs have made a real difference. We know, for example, that the policies installed during the “war on poverty” in the 1960s yielded a substantial reduction in poverty—bringing the national rate from about 25 percent to about 15 percent, whatever the limits of the poverty threshold itself. And this occurred across an era in which the forces driving wage and income inequality (the decline of labor, slipping labor standards, the corrosion of job-based security), and hence the task of reducing poverty, grew steadily. (For a summary graphical account of the war on poverty, see Demos’s Tracking Poverty Project.)

We know that the Social Security pension program has almost singlehandedly eradicated poverty among American seniors [see graphic below]. Before Social Security, almost 80 percent of American seniors lived in poverty. As Social Security contributions and payments became established early in the postwar era, poverty among seniors began to fall—and continued to do so under the Great Society, abetted by the passage of Medicare in 1965. Since the program’s growth slowed in the 1980s and 1990s, poverty among seniors has leveled out at about 10 percent.  


We know, too, that social programs made a big difference during our recent recession. Programs such as unemployment insurance, the earned income tax credit, and food stamps kept about 41 million Americans out of poverty in 2012. The poverty rate (for all ages) in 2012 was about 16 percent; without these programs, it would have been closer to 30 percent.
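
The arithmetic behind these two figures is easy to check. Below is a back-of-the-envelope sketch, not drawn from the text itself; the 2012 population figure (roughly 314 million) is an assumption based on Census estimates.

```python
# Back-of-the-envelope check of the counterfactual poverty rate above.
# The population figure (~314 million) is an assumed Census-style
# estimate for 2012; the other numbers come from the text.
population_2012 = 314_000_000      # approximate U.S. resident population
observed_rate = 0.16               # poverty rate with programs in place
kept_out = 41_000_000              # people kept out of poverty by programs

poor_with_programs = observed_rate * population_2012      # ~50 million
poor_without_programs = poor_with_programs + kept_out     # ~91 million
counterfactual_rate = poor_without_programs / population_2012

print(f"Poverty rate without programs: {counterfactual_rate:.1%}")  # ~29%
```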

These successes should not mask, however, the stark (and widening) gaps in American social provision. Our health care system leaves tens of millions uninsured—while spending more and accomplishing less (in terms of basic health outcomes) than those of our peers. We trail the world (and not just our OECD peers) in the provision of basic family or sick leave. And our public education is compromised by deep local disparities in funding for K-12 schools, not to mention a full retreat from low-cost access to postsecondary education or training.

Cuts to social programs have only widened the gap between poor Americans and everyone else—especially the retreat from AFDC since 1996. By any measure, the TANF program is a weak substitute for the program it displaced. By the mid to late 1970s, AFDC reached about a third of all poor families, and over 80 percent of poor families with children. With the implementation of TANF, this coverage shrank almost immediately (1996-1997) to about half of all poor families and about two-thirds of those with children. By 2010-2011, only 20 percent of all poor families, and just over 27 percent of those with children, were receiving TANF assistance. Between 1992 and 2010 alone, one million more American children slipped below the poverty line. The share of Americans living in severe poverty (below 50 percent of the poverty line) has almost doubled since 1972.



Many of the programs that remain are poorly targeted, creating eligibility traps and gaps for those who might benefit from them (the “cliff effect,” in which a small rise in earnings can trigger a larger loss of benefits, is one example). As a rule, social insurance programs (like Social Security pensions) are generous but poorly targeted [see graphic below], while means-tested programs are well-targeted but meager. As a result, American social policy closes most of the poverty gap for elderly families and individuals, for whom Social Security benefits flow to rich and poor alike. But it accomplishes progressively less for single-parent, two-parent, and childless families—for whom means-tested benefits are both less generous and less universal. The gap is especially acute for non-elderly childless families who—regardless of their income—rarely meet the eligibility threshold for public assistance.
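
The “poverty gap” logic behind this comparison can be made concrete: total the shortfall between each family’s income and the poverty line, and ask what share of that shortfall taxes and transfers close. A minimal sketch, with invented incomes and an invented threshold:

```python
# Minimal sketch of the poverty-gap calculation. All incomes and the
# threshold here are invented for illustration only.
def poverty_gap(incomes, threshold):
    """Total shortfall of below-threshold incomes from the poverty line."""
    return sum(max(threshold - y, 0) for y in incomes)

threshold = 20_000                      # hypothetical poverty line
market = [8_000, 15_000, 30_000]        # incomes before taxes and transfers
post = [14_000, 19_000, 30_000]         # incomes after taxes and transfers

share_closed = 1 - poverty_gap(post, threshold) / poverty_gap(market, threshold)
print(f"Share of the poverty gap closed by policy: {share_closed:.0%}")  # ~59%
```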


The dismantling of the safety net comes just as the economic risks faced by American families have grown steeper and starker. Wage stagnation, the loss of work-based benefits, employment insecurity, the rising costs of education and health care, and, more recently, a dramatic dip in housing wealth have combined to make economic mobility more elusive, income more volatile, and economic catastrophe (for example, bankruptcy) more likely.  Not only has the American welfare state failed to face up, or adapt, to these new realities, it has retreated from them and—in the bargain—made things even worse.

American Social Policy in International Perspective


The meagerness of American social provision is even clearer against an international backdrop. Across the OECD, the net redistributive impact (cash benefits received minus direct taxes paid) of social policy for low income households stands at about 40 percent of market income. But it is barely half that in the United States, making our rate of redistribution one of the lowest in the industrialized world. Much the same pattern holds for other kinds of social spending, including such in-kind benefits as education or public health care. Indeed, in each category of social spending, the United States spends significantly less than its peers [see graphic below]—the only exception being health care, a reflection of unusually high U.S. health costs rather than more generous coverage.*
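
For readers who want the measure itself, a minimal sketch of the calculation follows. Net redistribution for a household is cash benefits received minus direct taxes paid, taken as a share of market income; the household figures below are invented solely to reproduce the rough magnitudes cited above.

```python
# Sketch of the OECD-style net-redistribution measure: (cash benefits
# minus direct taxes) as a share of market income. All household
# figures are illustrative assumptions, not OECD data.
def net_redistribution(market_income, cash_benefits, direct_taxes):
    return (cash_benefits - direct_taxes) / market_income

print(f"OECD-average-style household: {net_redistribution(20_000, 9_000, 1_000):.0%}")  # 40%
print(f"U.S.-style household:         {net_redistribution(20_000, 5_000, 1_000):.0%}")  # 20%
```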


The latest numbers from the OECD—which compare inequality, incomes, and poverty rates across its member countries, before and after the impact of taxes and transfers—present yet another reminder of the United States’ dismal ranking among its peers. They also make a remarkable case for the power of social policy to combat inequality. Measured at the pre-transfer or market rate, the U.S. poverty rate is pretty close to those in other settings [see graphic below]. But after taxes and transfers—that is, after social policies and the mechanisms for paying for them have kicked in—the U.S. poverty rate leaps ahead of its peers.
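
The comparison behind those figures is simple to express: compute the poverty rate once on market income and once on disposable (post-tax-and-transfer) income. A sketch, again with invented incomes and an invented threshold:

```python
# Pre- vs. post-transfer poverty rates, computed on invented household
# incomes, to show the shape of the OECD comparison.
def poverty_rate(incomes, threshold):
    return sum(y < threshold for y in incomes) / len(incomes)

threshold = 20_000
market = [9_000, 14_000, 25_000, 60_000]        # before taxes and transfers
disposable = [16_000, 21_000, 24_000, 52_000]   # after taxes and transfers

print(f"Market-income poverty rate:     {poverty_rate(market, threshold):.0%}")      # 50%
print(f"Disposable-income poverty rate: {poverty_rate(disposable, threshold):.0%}")  # 25%
```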

All of this cuts in another dangerous direction: As the shredding of the safety net has contributed to inequality, that inequality, in turn, has eroded the political commitment to the common welfare. Part of this reflects a weak commitment to universal programs, which has invited invidious distinctions between the deserving and the undeserving poor, eroded support for benefits that flow to others, and raised the costs (given the burden of determining and monitoring program eligibility) of delivering scattered benefits. Widening disparities in income and wealth heighten the economic need for compensatory or redistributive social policies, but—by undermining social solidarity and raising the stakes for the well-off—they also make such policies harder to win or sustain.*

Next: Who Pays? Taxes and American Inequality
