Viewpoint: Washington Needs to Embrace Its Role as Ultimate Risk Manager

By Justin Fox | April 23, 2020

The actions that the federal, state and local governments in the U.S. have taken to stop the spread of the new coronavirus and to mitigate the resulting economic fallout have been tagged with the term “unprecedented” a lot over the past few weeks. In sheer scale and speed, they are. But government’s role as risk manager in a crisis isn’t new at all. It dates to the beginnings of the nation.

In the sharp economic downturn of the early 1780s, state after state passed “stay” laws giving those with mortgage debts protection from foreclosures, even as wealthy property owners such as Virginia’s James Madison lamented “such an interposition of the law in private contracts.” In Massachusetts, where the legislature failed to enact such protections, one result was Shays’ Rebellion of angry farmers in the western part of the state. Massachusetts militia forces crushed the rebellion in early 1787, and the U.S. Constitution crafted that year by Madison and others took aim at debt forgiveness by barring states from passing any “Law impairing the Obligation of Contracts.” But Massachusetts finally enacted a debt moratorium that year as well, and even after the Constitution was ratified “the states continued passing stay laws, installment laws, and numerous other infringements on existing debt contracts in times of distress,” historian David Moss wrote in “When All Else Fails: Government as the Ultimate Risk Manager.”

The book was published in 2002, and I’ve had it on the shelf for years — waiting for the right occasion, I guess. While this does in fact seem an appropriate time to read it, I’m now kicking myself for not having cracked it open earlier. The book is great, shedding lots of light on the origins of modern practices including bankruptcy, monetary policy and disaster relief, and doing so quite entertainingly. It also makes our current strange and frightening moment at least a bit more comprehensible.

“When All Else Fails” makes clear that, though the range of risk-management responsibilities that government has taken on in the U.S. has expanded a lot over the centuries, there was never a time when our elected officials didn’t try to manage risk. In fact, those state debt relief laws of the 18th and early 19th centuries had far more teeth than the nonbinding or bank-approved mortgage moratoriums that states have offered this time around. That’s because court rulings eventually backed up the constitutional ban on infringing existing debt contracts. Lawmakers at the state and federal level then channeled their desire to help debtors into permissive bankruptcy laws that reduced the stigma of financial failure and paved the way for fresh starts. The first, temporary, national bankruptcy laws followed in the wake of economic crises in 1800, 1841 and 1867, and a permanent one came in 1898 after a brutal depression that had begun in 1893.
This was all part of the “Phase I” of government risk management in the U.S. that Moss, a professor at Harvard Business School, has dubbed “security for business.” Along with bankruptcy law, the major elements are limited liability for corporate shareholders and monetary and bank-regulatory policy aimed at promoting the not always compatible goals of economic growth and stability. Most Phase I policies took shape in the 19th century, although a couple of the biggest monetary and banking innovations came with the creation of the Federal Reserve in 1913 and of federal deposit insurance in 1933.

Phase II, “security for workers,” started with the widespread adoption of state workers’ compensation laws between 1910 and 1920. The group leading this effort, the American Association for Labor Legislation, targeted mandatory health insurance for workers as its next big reform, and for a little while in 1915 and 1916 it looked to have the wind at its back, with support from the American Medical Association and the tacit approval of the National Association of Manufacturers. Enthusiasm quickly dwindled once actual legislation began taking shape in 1917, though, and it wasn’t until nine decades later that the U.S. got a kludgy semblance of universal health insurance with the Affordable Care Act. Efforts to establish state unemployment insurance programs foundered in those days too, but succeeded during the Great Depression with first a few states setting up plans and then Congress devising a national framework in 1935. That was done in the same legislation that created Social Security, another major risk-management advance.

Phase III, which really got going in the 1960s, strove for “security for all” by means of changes in product liability law; a rapid expansion of health, safety and environmental regulation; and disaster relief. “A nation widely known for its anti-statist sentiments and its faith in limited government,” Moss wrote, “the United States is nonetheless up to its elbows in risk management.”

Moss finished the book manuscript in mid-2001 and got some pushback from early readers about his depiction of a huge and continuing government risk-management role. “The era of big government is over,” President Bill Clinton had declared five years earlier, banking regulations had been rolled back and changes to Social Security that shifted more risk to individuals seemed to be in the offing. Government was trying to get out of the risk management business, or at least some parts of it.

Then came the terrorist attacks of Sept. 11, 2001, and a government response that included airline bailouts, federal reinsurance of terrorism risks, a federally funded victim compensation fund, greatly increased domestic-security efforts and overseas interventions that were meant, in theory at least, to reduce risk. Seven years later there was a global financial crisis, and enormous government efforts to quell it. And now this.

“When All Else Fails” makes the argument that some risks are simply too big and too systemic to be managed successfully by the private sector, and that many others are fraught with incentive and perception problems that make some government intervention necessary or at least helpful. And the federal government, because of (among other things) its ability over extended periods to spend more than it takes in, clearly can intervene in ways that the budget-constrained states cannot — something that Senate Majority Leader Mitch McConnell seems to misunderstand with his recent suggestion that states simply go bankrupt.

So Moss is clearly not opposed to Washington playing a big role in a crisis. When I talked to him last week, though, he raised the concern that three huge government firefighting efforts within 20 years might indicate a certain failure of risk management. “We’ve had this series of very large-scale events,” he said. “It’s starting to seem like it’s not an anomaly.” One link, he speculated, could be a continued unwillingness to accept in normal times that risk management is a key government role.

This does seem especially hard for what may be a once-in-a-century pandemic. There have in fact been frequent government efforts in the U.S. and elsewhere to “game out” such a scenario, but except in places with recent experience with a coronavirus epidemic, they don’t seem to have resulted in much actual preparedness. Still, the events of the past couple of months certainly do make clear that government is “the ultimate risk manager.” It needs to act more like it.