Econlib

The Library of Economics and Liberty is dedicated to advancing the study of economics, markets, and liberty. Econlib offers a unique combination of resources for students, teachers, researchers, and aficionados of economic thought.

Econlib publishes three to four new economics articles and columns each month. The latest articles and columns are made available to the public on the first Monday of each month.

All Econlib articles and columns are written exclusively for us at the Library of Economics and Liberty on various economics topics by renowned professors, researchers, and journalists worldwide. All articles and columns are retained online free of charge for public readership. Many articles and columns are discussed in concurrent comments and debate on our blog, EconLog.

EconLog

The Library of Economics and Liberty features the popular daily blog EconLog. Bloggers Bryan Caplan, David Henderson, Alberto Mingardi, Scott Sumner, Pierre Lemieux and guest bloggers write on topical economics of interest to them, illuminating subjects from politics and finance, to recent films and cultural observations, to history and literature.

EconTalk

The Library of Economics and Liberty carries the podcast EconTalk, hosted by Russ Roberts. The weekly talk show features one-on-one discussions with an eclectic mix of authors, professors, Nobel laureates, entrepreneurs, leaders of charities and businesses, and people on the street. The emphasis is on using topical books and the news to illustrate economic principles. Exploring how economics emerges in practice is a primary theme.

CEE

The Concise Encyclopedia of Economics features authoritative editions of classics in economics, and related works in history, political theory, and philosophy, complete with definitions and explanations of economics terms and ideas.

Visit the Library of Economics and Liberty

Recent Posts

Here are the latest posts from EconLog.

EconLog June 4, 2020

A Humble State with No Motorcade

In many ways, the modern world, including economic freedom, was born from the fear of tyranny and the institutions (not always successful) to prevent it. In Power and Prosperity: Outgrowing Communist and Capitalist Dictatorships (Basic Books, 2000), famous economist Mancur Olson had interesting historical remarks about Italian city-states in early modern times:

Sometimes, when leading families or merchants organized a government for their city, they not only provided for some power sharing through voting but took pains to reduce the probability that the government’s chief executive could assume autocratic power. For a time in Genoa, for example, the chief administrator of the government had to be an outsider—and thus someone with no membership in any of the powerful families in the city. Moreover, he was constrained to a fixed term of office, forced to leave the city after the end of his term, and forbidden from marrying into any of the local families. In Venice, after a doge who attempted to make himself autocrat was beheaded for his offense, subsequent doges were followed in official processions by a sword-bearing symbolic executioner as a reminder of the punishment intended for any leader who attempted to assume dictatorial power. As the theory predicts, the same city-states also tended to have more elaborate courts, contracts, and property rights than most of the European kingdoms of the time. As is well known, these city-states also created the most advanced economies in Europe, not to mention the culture of the Renaissance.

This quote is from pp. 39-40 of Olson’s book. Part of it is reproduced at Liberty Tree quotes.

Instead of a bully state, we are in urgent need of a humble state where political leaders and bureaucrats know their place. I especially like Venice’s symbolic executioner, who could beneficially replace the motorcade or, at the very least, occupy the last limousine. (For more discussion of related issues, see my Econlog post “Praetorian Guards from Ancient Greece to Palm Beach or the Hamptons,” January 14, 2019.)

(2 COMMENTS)

EconLog June 3, 2020

Transaction Costs are the Costs of Engaging in Economic Calculation

This year marks the 100th anniversary of the publication of Ludwig von Mises’s seminal article, “Economic Calculation in the Socialist Commonwealth,” which fired the first salvo in what later became the socialist calculation debate. Though the contributions of F.A. Hayek to that debate, and to economic science more broadly, have been well recognized, what is somewhat forgotten today is that the fundamental contributions of another economist were also born out of the socialist calculation debate. I am referring to none other than Ronald Coase.

EconLog June 3, 2020

Five (More) Books: Economic, Political and Social Ethnography of Soviet Life

In my previous two posts, I offered recommendations for reading on the Russian Revolution and the Soviet economy. Today, I’d like to turn our attention to everyday life in the Soviet Union.

My most cherished comment on one of my books dealing with the Soviet system was from the then Department Chair of Economics at Moscow State University, who upon reading my discussion of the contrast between how the system was supposed to work and how it really worked wrote to me to tell me that my description fit perfectly with the daily life that he and his family had to endure.  I had done my job then.  I think the purpose of economic theory is to aid us in our task of making sense of the political economy of everyday life.  Not theorems and graphs on blackboards and in textbooks, but the lived reality out the window in the social settings we find ourselves exploring as social scientists and scholars.

EconLog June 2, 2020

Morality is Broken

The recent episode of EconTalk with Martin Gurri was one of the most jarring in my memory, simply because of the timing of its release. The episode was recorded prior to the onset of the COVID-19 pandemic. As Gurri himself asked on Twitter, “Before the pandemic crisis, there was a revolt of the public. What are the odds that it won’t return, with renewed force, after the lockdown?”

And shortly after the episode’s release came the protests over the death of George Floyd in Minneapolis. Gurri describes The Revolt of the Public as “the global conflict between a public that won’t take ‘yes’ for an answer and elites who want to bring back the 20th century” (again, via Twitter).

It seemed Gurri was speaking about both of these incidents as I listened, even though that could not have been the case. So what lessons should we take from Gurri’s episode? He describes a digital earthquake, generating a tsunami of information, leading to increased social and political turbulence. How has this affected our institutions and their credibility? Since this episode raises more questions than we can possibly ask in the space below, I’d like to encourage you to pose some questions this time. What would you like to discuss? Share your questions in the Comments, and let’s continue the conversation.

For those who would prefer a conversation starter, please use the prompts below. We love to hear from you.

1- Roberts suggests we fundamentally misunderstood the information revolution; how so? Why did just the profusion, the tsunami-like aspect of it, lead to such a dismantling of authority and credibility and trust?

2- There certainly were mass movements before the digital era. How is this one different? (The discussion of nihilism might be helpful here…)

3- Roberts asks Gurri if he’s worried about the future of democracy. (He is.) What does Gurri mean when he speaks of “flattening the pyramid”? How might such flattening mitigate the “sectarian approach to politics” Gurri is so concerned with?

4- Roberts asks how we can make the world more democratic and less vicious. How would you answer that question? Are there any suggestions from Roberts and/or Gurri you could get behind? Explain.

5- What is your reaction to Gurri’s call for a new elite class, equivalent to a scientific class? What effects does Gurri hope this would have? To what extent can a new elite repair our broken political morality?

(0 COMMENTS)

EconLog June 2, 2020

Herd immunity was never a feasible option

Bryan Caplan has a post on Covid-19 that is full of sensible ideas. But I disagree with one of his claims:

18. Alex Tabarrok is wrong to state, “Social distancing, closing non-essential firms and working from home protect the vulnerable but these same practices protect workers in critical industries. Thus, the debate between protecting the vulnerable and protecting the economy is moot.” Moot?!  True, there is a mild trade-off between protecting the vulnerable and protecting the economy.  But if we didn’t care about the vulnerable at all, the disease would have already run its course and economic life would already have strongly rebounded.  Wouldn’t self-protection have stymied this?  Not if the government hadn’t expanded unemployment coverage and benefits, because most people don’t save enough money to quit their jobs for a couple of months.  With most of the workforce still on the job, fast exponential growth would have given us herd immunity long ago.  The death toll would have been several times higher, but that’s the essence of the trade-off between protecting the vulnerable and protecting the economy.

From my vantage point in Orange County, that just doesn’t seem feasible.  People here are taking quite aggressive steps to avoid getting the disease, and I believe that would be true regardless of which public policies were chosen by authorities.  Removing the lockdown will help the economy a bit, as would ending the enhanced unemployment insurance program.  But the previous (less generous) unemployment compensation program combined with voluntary social distancing is enough to explain the vast bulk of the depression we are in.

In many countries, the number of active cases is falling close to zero.  In those places, it will be possible to get people to return to service industries where human interaction is significant.  Speaking for myself, I’m unlikely to get a haircut, go to the dentist, go to a movie, eat in a crowded restaurant, or do many other activities until there is a vaccine. (Although if I were single I’d be much more active.) If I were someone inclined to take cruises, I’d also stay away from that industry until there was a vaccine.  I’ll do much less flying, although I’d be willing to fly if highly motivated.  For now, I’ll focus on outdoor restaurants (fortunately quite plentiful in Orange County) and vacations by automobile. Universities are beginning to announce that classes will remain online in the fall.

If you think in terms of “near-zero cases” and “herd immunity” as the two paths to normalcy in the fall of this year, I’d say near-zero cases are much more feasible.  Lots of countries have done the former—as far as I know none have succeeded with the latter approach.  Unfortunately, America has botched this pandemic so badly (partly for reasons described by Bryan) that it will be very difficult to get the active caseload down to a level where consumers feel safe.

Don’t get me wrong, both the lockdown and the change in unemployment compensation create problems for the economy.  But they are not the decisive factor causing the current depression.  If the changes in the unemployment compensation program were made permanent, then at some point this would become the decisive factor causing a high unemployment rate.  But not yet.

BTW, I am not arguing that it wouldn’t be better if people had a more rational view of risks, as Bryan suggested in a more recent post.  This post is discussing the world as it is.

Here’s a selection of countries with 35-76 active cases (right column), followed by a group with less than ten.  Many are tiny countries and some have dubious data, but not all.

. . .

(22 COMMENTS)

EconLog June 2, 2020

Something to Learn from the Trump Presidency

The president of the United States tweeted a video of an alleged rioter (who, in all likelihood, is an American citizen, not a “Mexican rapist”) with the threatening comment:

“Anarchists, we see you!”

Is it for the president to identify suspects? So much for the ideal of the rule of law, it seems.

But my point is different and relates to the benefits of personal knowledge. I have always hoped that a journalist would, during a press conference, ask the president something like “Mr. President, what do you mean exactly by ‘socialism’?” Or, “Mr. President, what do you mean by ‘the extreme left’ and how does it differ from the left?”

Since Mr. Trump’s tweet of yesterday and his other recent references to “anarchists” as another type of scapegoat, my dream has changed. I would now propose questions like the following:

Mr. President, what is an anarchist? What does an anarchist believe?

Mr. President, do you think that Henry David Thoreau, Lysander Spooner, and Murray Rothbard were anarchists?

What about David Friedman?

Do you think that Anthony de Jasay was a conservative anarchist?

Of course, looters have to be stopped and arrested but different sorts of anarchists exist, just as there are different sorts of defenders of the state. Another idea for a question along those lines:

Mr. President, don’t you think that the so-called “anarchist” rioters and looters actually want more state power, just like the “extreme left” you attack?

The following question may be problematic for both Mr. Trump and the libertarians involved:

Mr. President, what do you think of the anarcho-capitalists who, during your 2016 election campaign, created a group called “Libertarians for Trump”?

More seriously, I suggest the Trump presidency has taught something important to those of us who define ourselves as libertarians or fellow travelers: knowledge is important, both in the sense of a minimal culture about what has been happening in the world until yesterday and in the sense of an intellectual capacity to learn. To advance liberty, an ignorant disrupter is not sufficient. He is more likely to advance tyranny. If he appears to defend one libertarian cause—say, the Second Amendment—he will more probably bring it into disrepute.

(24 COMMENTS)

EconLog June 2, 2020

What I’m Doing

  1. The U.S. political system is deeply dysfunctional, especially during this crisis.  Power-hunger reigns in the name of Social Desirability Bias.  Fear of punishment aside, I don’t care what authorities say.  They should heed my words, not the other way around.

  2. Few private individuals are using quantitative risk analysis to guide their personal behavior.  Fear of personally antagonizing such people aside, I don’t care what they say either.

  3. I am extremely interested in listening to the rare individuals who do use quantitative risk analysis to guide personal behavior.  Keep up the good work, life-coach quants – with a special shout-out to Rob Wiblin.

  4. After listening, though, I shall keep my own counsel.  As long as I maintain my normal intellectual hygiene, my betting record shows that my own counsel is highly reliable.

  5. What does my own counsel say?  While I wish better information were available, I now know enough to justify my return to 90%-normal life.  The rest of my immediate family agrees.  What does this entail?  Above all, I am now happy to socialize in-person with friends.  I am happy to let my children play with other kids.  I am also willing to not only eat take-out food, but dine in restaurants.  I am pleased to accommodate nervous friends by socializing outdoors and otherwise putting them at ease.  Yet personally, I am at ease either way.

  6. I will still take precautions comparable to wearing a seat belt.  I will wear a mask and gloves to shop in high-traffic places, such as grocery stores.  I will continue to keep my distance from nervous and/or high-risk strangers.  Capla-Con 2020 will be delayed until winter at the earliest.  Alas.

  7. Tyler suggests that people like me “are worse at intertemporal substitution than I had thought.”  In particular:

It either will continue at that pace or it won’t.  Let’s say that pace continues (unlikely in my view, but this is simply a scenario, at least until the second wave).  That is an ongoing risk higher than other causes of death, unless you are young.  You don’t have to be 77 for it to be your major risk worry.

Death from coronavirus is plausibly my single-highest risk worry.  But it is still only a tiny share of my total risk, and the cost of strict risk reduction is high for me.  Avoiding everyone except my immediate family makes my every day much worse.  And intertemporal substitution is barely helpful.  Doubling my level of socializing in 2022 to compensate for severe isolation in 2020 won’t make me feel better.

Alternatively, let’s say the pace of those deaths will fall soon, and furthermore let’s say it will fall by a lot.  The near future will be a lot safer!  Which is all the more reason to play it very safe right now, because your per week risk currently is fairly high (in many not all parts of America).  Stay at home and wear a mask when you do go out.  If need be, make up for that behavior in the near future by indulging in excess.

Suppose Tyler found out that an accident-free car were coming in 2022.  Would he “intertemporally substitute” by ceasing driving until then?  I doubt it.  In any case, what I really expect is at least six more months of moderately elevated disease risk.  My risk is far from awful now – my best guess is that I’m choosing a 1-in-12,000 marginal increase in the risk of death from coronavirus.  But this risk won’t fall below 1-in-50,000 during the next six months, and moderate second waves are likely.  Bottom line: The risk is mild enough for me to comfortably face, and too durable for me to comfortably avoid.
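
The 1-in-12,000 figure above invites a quick back-of-the-envelope check. Below is a minimal sketch of how a personal marginal mortality risk of roughly that size might be decomposed; the infection probability and infection fatality rate used are illustrative assumptions, not the actual inputs or method behind the figure.

```python
# Back-of-the-envelope decomposition of a personal marginal mortality risk.
# Both inputs are illustrative assumptions, not Caplan's actual figures.

p_infection_6mo = 0.05   # assumed probability of catching the virus over ~6 months
ifr = 0.002              # assumed infection fatality rate for a healthy middle-aged adult

marginal_death_risk = p_infection_6mo * ifr
print(f"Marginal death risk: {marginal_death_risk:.6f} "
      f"(about 1 in {round(1 / marginal_death_risk):,})")
# With these inputs: 0.000100, i.e. about 1 in 10,000 -- the same order of
# magnitude as the 1-in-12,000 figure discussed above.
```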

  8. The risk analysis is radically different for people with underlying health conditions.  Many of them are my friends.  To such friends: I fully support your decision to avoid me, but I am happy to flexibly accommodate you if you too detest the isolation.  I also urge you to take advantage of any opportunities you have to reduce your personal risk.   But it’s not my place to nag you to your face.

  9. What about high-risk strangers?  I’m happy to take reasonable measures to reduce their risk.  If you’re wearing a mask, I treat that as a request for extra distance, and I honor it.  But I’m not going to isolate myself out of fear of infecting high-risk people who won’t isolate themselves.

  10. Most smart people aren’t doing what I’m doing.  Shouldn’t I be worried?  Only slightly.  Even smart people are prone to herding and hysteria.  I’ve now spent three months listening to smart defenders of the conventional view.  Their herding and hysteria are hard to miss.  Granted, non-smart contrarians sound even worse.  But smart contrarians make the most sense of all.

  11. Even if I’m right, wouldn’t it be more prudent for me to act on my beliefs without publicizing them?  That’s probably what Dale Carnegie would advise, but if Dale were here, I’d tell him, “Candor on touchy topics is my calling and my business.  It’s worked well for me so far, and I shall stay the course.”

  12. I’ve long believed a strong version of (a) buy-and-hold is the best investment strategy, and (b) financial market performance is only vaguely related to objective economic conditions.  Conditions in March were so bleak that I set aside both of these beliefs and moved from 100% stocks to 90% bonds.  As a result of my excessive open-mindedness, my family has lost an enormous amount of money.  The situation is so weird that I’m going to wait until January to return to my normal investment strategy.  After that, I will never again deviate from buy-and-hold.  Never!

(20 COMMENTS)

EconLog June 1, 2020

A libertarian is a conservative who has been oppressed

When I was young, there was an old saying that a conservative is a liberal who has been mugged. I suspect that the debates between liberals and conservatives are especially fierce precisely because they are generally based on genetics and random life experiences, not rational thought.

Along these lines, a Politico article by Rich Lowry caught my eye:

The intellectual fashion among populists and religious traditionalists has been to attempt to forge a post-liberty or “post-liberal” agenda to forge a deeper foundation for the new Republican Party. Instead of obsessing over freedom and rights, conservatives would look to government to protect the common good.

This project, though, has been rocked by its first real-life encounter with governments acting to protect, as they see it, the common good.

One of its architects, the editor of the religious journal First Things, R.R. Reno, has sounded like one of the libertarians he so scorns during the crisis. First, he complained he might get shamed if he were to host a dinner party during the height of the pandemic, although delaying a party would seem a small price to pay for someone so intensely committed to the common good.

More recently, he went on a tirade against wearing masks. Reno is apparently fine with a much stronger government, as long as it never issues public-health guidance not to his liking. Then, it’s to the barricades for liberty, damn it.

Ouch!  Lowry and Reno are both conservatives, but I’m guessing they are not the best of friends.

PS:  Tom Wolfe’s version is pretty close to the sentiments in this post:

If a conservative is a liberal who’s been mugged, a liberal is a conservative who’s been arrested.

(23 COMMENTS)

EconLog June 1, 2020

A “Hodgepodge” of State-Based COVID-19 Rules May Be Just What the Doctor Ordered

At the peak of the COVID-19 pandemic, only seven states—Arkansas, Iowa, North and South Dakota, Nebraska, Utah, and Wyoming—had resisted statewide stay-at-home or lockdown regulations to contain the virus (the disease caused by a new strain of coronavirus). Even though those states have less than 10 percent of the country’s population, they continue to be taken to task by a chorus of federal and state medical experts, media commentators, and policymakers for failing to adopt lockdown regulations.

As I write this in early May, most states have begun loosening their COVID-19 containment controls in varying ways and to varying degrees. Only three states—California, New York, and Washington—seem likely to keep their lockdown orders in place into June. Critics, again, took to their media bullhorns and to the streets in protests with posters condemning the loosening of controls as putting the economy above human life. The critics argue that the loosening of controls will cause another surge in virus infections and deaths.

These critics argue that the states without full lockdown policies and the far larger number of states allowing businesses to reopen can be incubators for the virus, which can spread nationwide through porous borders and cause a second surge in virus infections and deaths. Consequently, the states loosening their containment policies are endangering Americans in the lockdown states who are dutifully sacrificing their jobs and opportunities to socialize with friends. State-based policymaking has created a “hodgepodge” of widely varying state and local containment (and loosening) rules[1]—for example, on where and when to wear face masks, the size of allowable social gatherings, the definition of “essential businesses,” and when and where businesses can reopen—all with different levels of enforcement that complicate doing business nationally.

The critics have strongly insisted that national rules that are uniform across all counties are necessary to suppress the deadly virus that is overtaxing the country’s healthcare system. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases and a prominent member of President Trump’s COVID-19 task force, confesses that he simply “can’t understand” why all states are not on the “same [regulatory] page.”

There is a straightforward reason to resist the pressure to nationalize the “coronavirus war”: states and local communities differ, most prominently in population density. But there is a much stronger reason to defer to state and local governments: When so much is unknown about a highly contagious disease, the country needs opportunities to try a variety of policy remedies, just as it needs to test medical remedies (vaccines and antibody therapies) for nonvictims, especially those in frontline healthcare jobs.

As intended by the American Founders, devolution of major swaths of policy making—including devising containment policies—to states allows the needed, and totally unavoidable, policy experimentation, based on varying local health and economic conditions best understood by state policymakers. Future research is then possible to discover which state policies are most effective in achieving the intertwined goals of minimizing infections and deaths and maximizing economic activity.

The Case Against National Containment Policies

Medical experts in high offices in the nation’s capital may know more about COVID-19 and its effects than anyone else, but there’s a whole lot they don’t yet know and can’t know. Not the least of these unknowns are the widely varying local circumstances across the nation’s 3,007 counties and what combinations of rules and regulations will have the best public health and economic outcomes for communities far from national political arenas. Experts at the national level also can’t know the tradeoffs between safety and economic losses that different groups of Americans may be willing to accept. And the reality of containment is stark: Progressively greater safety can come at escalating economic costs, measured in lost production, jobs, and income. Trade-offs are unavoidable.

Experts’ Limited Knowledge

One example of a knowledge problem is the experts’ insistence a few weeks ago that only infected people need to wear face masks. Now, everyone is urged at least to wear makeshift facial “coverings”—including bandanas—when going out, even if only to take a walk on a deserted beach, since asymptomatic victims may transmit the disease. Obviously, the pandemic response is an ongoing project based on experience, with varying policy experimentation included. New York officials, for instance, have concluded based on the state’s escalating deaths that they probably should have instituted stringent containment policies when California did, if not before.

The experts have insisted that all people maintain a “social distance” of at least six feet, as if that specific distance emerged strictly from science. But in democracies, public policies are necessarily influenced not only by the available science but by the public’s willingness to accept them. The six-foot social-distance rule was devised by COVID-19 experts who concluded that, according to COVID-19 task force member Deborah Birx, “it is clear that exposure occurs” when people are within six feet of an infected person for fifteen minutes. The distance is based on findings that the virus could be spread indoors up to six feet through water droplets expelled through coughs, sneezes, and even exhaled breaths.

But under different conditions, not all virus-laden droplets behave in the same way. For instance, based on a small-sample Chinese study in a Wuhan hospital ward,[2] the Centers for Disease Control and Prevention (CDC) concluded that the droplets may travel through the air indoors as far as 13 feet and even farther on the shoes of healthcare workers. Outdoors, some droplets may be shot as far as 10 yards or more and may linger for several minutes, depending on atmospheric conditions and on the force behind the coughs and sneezes. Thirty Finnish researchers simulated the potential spread of a “cloud” of virus-laden water droplets from a single cough or sneeze in a dramatic YouTube video hosted by the University of Helsinki.[3] As laboratory simulations of coughs have shown, even N-95 face masks are far from complete barriers to the spread of virus-laden droplets. Homemade face masks, made of thin bandana and t-shirt materials, can be porous, if not altogether ineffective.

The six-foot rule likely emerged as a compromise that experts and policy makers believed struck an appropriate balance between a social distance the public would likely follow and the number of infections and deaths people would accept. (German policymakers, on the other hand, struck a different compromise based on an assessment of different studies, and set the social-distance rule at 1.5 meters, or about five feet, maybe because the populace would accept more infections to avoid tighter social distancing.)

Obviously, the science behind the social-distancing rule depends on the particular studies available, the interpretation of the findings, and unavoidably, the subjective judgements of the tradeoffs involved. Perhaps, to match scientific findings with transparency, the six-foot rule could be presented as “you should stay at least six feet from all other people to lower the probability of infection to what we find to be acceptable levels, given the additional deaths that will result from not choosing a greater social distance.” I grant that the message is complicated, but that only validates a point easily overlooked in media reports: The experts say they are simply following the science, whereas in fact, policymaking is not so simple. Policies are beset with nonscientific considerations, for example, the pithiness of the policy measures.

Policymaking Under Uncertainty

As experts have said from the start, COVID-19 is caused by a strain of coronavirus that is new to epidemiology, which means humans’ immune systems are vulnerable because they have not yet developed antibodies and FDA-approved vaccines are unavailable. Moreover, the pandemic has been, and remains, enshrouded in uncertainties about the contagiousness and virulence of the virus, although it should be obvious to everyone by now that COVID-19 is deadly. At the same time, more than 95 percent of victims recover with no lasting health effects, and maybe a quarter (or even half, depending on the study) of infected people are asymptomatic but can still be transmitters.

The effect of various containment rules—whether social distancing or stringent lockdowns—on the economy remains largely a guesstimate; however, many economists—including those at the St. Louis Fed[4] and Moody’s—speculate that the economic destruction from the array of state lockdowns could exceed that of the Great Depression in peak unemployment rate (25 percent in the Great Depression versus 32 percent projected for May 2020) and decreased GDP (26 percent in the Depression versus 29 percent between the start of March and April 2020, with a far more dramatic drop possible by year’s end).

In these uncertain times, we need experimentation not only in labs to find vaccines and disease treatments, but also in political venues to devise effective policies. If federal mandates cut off all opportunities to try various containment policies on the part of state and local governments, a lot of potentially valuable information will be forever lost. Virus experts and the public badly need more empirical evidence regarding the relative public health and economic consequences of different policies, if for no other reason than that there is a very good chance that the virus may retreat in the summer only to re-emerge more virulent than ever in the fall or next year.

Containment Policy “Hodgepodge” as a Research Goldmine

With states calling their own policy shots, researchers will soon be able to assess how these various policies affect COVID-19 infections and deaths and the state economies. Such research findings could then inform future policymakers’ discussions of the inherent, unavoidable tradeoffs between public-health losses and economic damage.

Research Avenues

Researchers will have a field day with such data, knowing that their findings will not only help shape future pandemic policies, but also substantially boost their professional reputations and incomes. Here are just a few of the benefits:

  • Researchers will be able to estimate the lives saved and economic damage averted by different levels of social distancing policies and lockdowns, as well as different levels of enforcement. They will be able to assess the extent to which, say, California averted (or aggravated) virus deaths and increased (or decreased) economic damage with its early statewide lockdown. Surprise findings may arise, too, if studies show that Texas and Georgia, by delaying their lockdown policies, were able to devise more effective measures than California. Researchers will be able to assess whether and to what extent North Dakota’s and Arkansas’ refusals to institute lockdowns cost lives and lowered their states’ unemployment and real income losses compared to lockdown states.[5] By being among the first to reopen their economies, Texas and Georgia will allow researchers to assess the value (or lack of value) of their early, limited, and gradual decontrol policies, especially since California and New York seem intent on lagging all other states in reopening their economies.
  • Might researchers not learn that the health/economic cost tradeoffs of lockdowns are more favorable in urban areas (Manhattan) than in rural communities (Fargo)?
  • Might they also learn how those states with partial lockdowns or lax enforcement fared relative to those with strict enforcement (such as required, and not just recommended, use of face masks in public)?
  • With the retreat of the pandemic (or after the infections and death-rate curves begin to “flatten”), researchers might also estimate how various shortages of personal protective equipment (from N95 face masks to ventilators) in different states affected death rates among victims, their family members, and frontline caregivers and worked to extend or deepen the economic downturn.

The research questions from the pandemic are virtually endless, especially when tangential topics are considered. For instance, consider questions surrounding state minimum-wage, price gouging, and environmental policies:

  • Economists have long studied the employment and income effects of minimum wage policies, but almost always under “normal” economic circumstances. State minimum wages have varied widely over the last decades, but now researchers may be able to assess how states with relatively high minimums (California, Washington, and New York) fared in job and income losses during the lockdowns, as compared with states that have held to the $7.25 an hour federal minimum since 2009 (Texas and North Carolina).
  • Similarly, economists have long argued against price-gouging laws (such as a California law that makes it illegal for sellers to raise prices “too much”), most frequently on the grounds that such laws invariably lead to “panic buying” and empty store shelves. From the beginning of the pandemic, supplies of important personal protection equipment, most notably face masks and hand sanitizer, have been short, and shortages have extended to toilet paper and even rice, dried beans, and flour. Such shortages can be partially, if not totally, alleviated through price increases, which discourage panic buying and hoarding and induce suppliers to increase production.

  • An overlooked potential consequence of price-control laws is that they can be deadly during pandemics. When price increases are suppressed in the face of a pandemic-caused spike in demand, critically needed protections such as face masks and sanitation supplies are hoarded, and their production is tempered. Thus, fewer protections are available, resulting in more infections and, very likely, more deaths. Future research on the effects of various states’ differing price controls will allow for assessments of the extent to which the controls added to the severity of the pandemic.

  • Environmental researchers could be the big winners from the pandemic, which has caused a drastic reduction in greenhouse gas emissions (with easily observable effects in Wuhan and Los Angeles).

The research on cross-state outcomes can be fortified with research on the effects of cross-country containment policies. Several countries, most notably Sweden, have not to date instituted lockdowns, and those that have locked down have done so at different times and at different levels of stringency and enforcement. Countries also are relaxing their rules at different times.

Containment Policies Versus Herd Immunity Spreading

Faced with a new virus, the body’s immune system develops antibodies to fight the disease, and these newly created antibodies may remain an active defense against any new invasion of COVID-19. As COVID-19 infections spread and people recover or die, the susceptible population will shrink and the spread of the disease can slow. Through so-called “herd immunity,” diseases can be self-correcting, at least partially. And at this writing, preliminary evidence has emerged suggesting that a flattening of the infection and death-rate curves is underway at least in parts of the United States, as well as in China and Italy and other European countries, including Sweden. Already, Sweden’s experience with its strategy of deliberately allowing for herd immunity to contain the virus’ spread, at least partially, has begun to gain respect and adherents (as well as critics) among experts and pundits.
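
To make the herd-immunity mechanism concrete, the standard textbook approximation puts the threshold at 1 - 1/R0, where R0 is the basic reproduction number. The short sketch below illustrates that relationship with assumed R0 values; it is a generic illustration, not a calculation drawn from the studies cited in this article.

```python
# Textbook herd-immunity threshold: the immune share of the population needed so
# that each infection causes, on average, fewer than one new infection.
# The R0 values below are assumptions chosen for illustration only.

def herd_immunity_threshold(r0: float) -> float:
    """Immune share s at which the effective reproduction number R0 * (1 - s) falls to 1."""
    return 1 - 1 / r0

for r0 in (1.5, 2.5, 3.5):
    print(f"R0 = {r0}: about {herd_immunity_threshold(r0):.0%} of the population immune")
# For example, R0 = 2.5 implies a threshold of roughly 60%.
```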

Medical and policy experts have been quick to attribute the flattening to effective policies. In an NBC News interview, Dr. Fauci lowered the forecasted total COVID-19 deaths in the United States from a range of 100,000 to 240,000 to 60,000 (a drop of as much as 75 percent).[6] He concluded, “The real data are telling us it is highly likely we are having a definite positive effect by the mitigation things that we’re doing, this physical separation.” New York Governor Andrew Cuomo also pointed to the possible leveling of the state’s hospitalizations as a sign that mitigation policies were working.

Not so fast. The open question is not whether containment policies have had a positive effect, but how much of the flattening of the infection rate and any future downturn can be attributed to the policies. Other things must be considered. For example, the drop in forecasted deaths might be partially attributable to earlier forecasts being founded on models based on worst-case-scenario assumptions and limited data designed to trigger broad public support for containment policies. Neither Fauci nor Cuomo has mentioned (to date) that their “good news” on the pandemic peak could also be at least partially attributable to herd immunity. Such immunity could be substantial if the virus is indeed highly contagious, is spread by asymptomatic victims, and thus could have begun to spread as far back as November 2019. Focused research on herd immunity will surely inform policymakers’ future decisions on the breadth and stringency of containment and enforcement policies, as well as the tie between the percentage of the population with immunity and the suppression of the spread of infections.

Concluding Comments

Critics of the “hodgepodge” of state and local COVID-19 containment policies seem to remain confident that a single set of strict federal containment rules—especially if passed earlier this year—would have slowed the spread of the virus and reduced infections and deaths, primarily because all Americans would have been affected, not just about 90 percent. But that may be wishful thinking because the President and Congress had the power to impose a federal mandate but chose not to, or perhaps believed the required legislation would not pass.

When President Trump claimed “total” authority over states’ reopening policies, many argued persuasively that such a claim was unconstitutional. But if the President does not have the power to reopen state economies, then how does he have the constitutional authority to lock down the economy in the first place? Democracy often works slowly at all levels, but especially at the federal level because of various competing and conflicting political interests, as well as differences in how people assess threats, especially new and “historic” ones. Virus experts give the appearance that they follow “science” and use underlying “data” for guidance, a position that the California and New York governors have endorsed. The problem is that science does not always offer clear, indisputable policy directives.

Some may presume that a federal policy, if ever adopted, would be stricter than the collective impact of the “hodgepodge” of state regulations. Not necessarily so. Politics would still be at play in Washington, D.C., and private-sector and state lobbyists would work hard and with greater focus to ensure federal policies minimized damage to their interests. With a localized approach to containment policy, lobbyists have had to divide their time among all 50 states and numerous legislators, not just the 535 members of Congress accessible in a few large buildings in the Capitol.

The restaurant and retail interests would likely have pressed to be identified as “essential” for the country and, thereby, not subject to closure. Under a federal policy, some states—say, California and Washington State—might have been forced to accept less stringent containment policies just to achieve the required majorities in the House and Senate. Members of Congress from states with delayed, lax, and no containment policies would likely have worked just as diligently to delay a federal mandate and to pass less stringent regulations than some other states, such as California and Washington, might have preferred.

For more on these topics, see “Liberty in the Wake of Coronavirus,” by Aris Trantidis, Library of Economics and Liberty, May 4, 2020. See also the EconTalk podcast episodes Paul Romer on the COVID-19 Pandemic and Tyler Cowen on the COVID-19 Pandemic.

Critics who leap to object, “Lives should be saved at all economic costs,” must keep in mind that massive unemployment and income losses also can kill. For example, Oxford University researchers have estimated that during the Great Recession, European and North American suicides went up by 10,000. This time the increase could be far greater because the economic damage could be far greater. With families in lockdown and with growing financial stresses, news reports have surfaced that domestic violence calls to 18 police departments’ hotlines surged by as much as 35 percent during March alone, and these calls may have been fueled by the substantial surge in March alcohol sales (and moderated by a surge in prescriptions for depression and insomnia).

Policy makers need to know more about the impact of lockdowns on lives saved from virus containment and lives lost because of the economic damage done. The balance could catch some policy makers and experts by surprise.

My point is simple: With state-based policymaking, opportunities can emerge to test policies. But with an all-inclusive federal mandate policy, we could potentially lose a lot of scientific guidance on how better to deal with the COVID-19 pandemic and all future pandemics. Lives and the economy could hang in the balance on future research findings.


Footnotes

[1] See here for more: 2020 State and Local Government Responses to COVID-19. Stateside.com, May 28, 2020.

[2] See this CDC dispatch for more: https://wwwnc.cdc.gov/eid/article/26/7/20-0885_article

[3] Available online at: https://www.youtube.com/watch?v=WZSKoNGTR6Q

[4] See here: https://www.stlouisfed.org/on-the-economy/2020/march/back-envelope-estimates-next-quarters-unemployment-rate

[5] To date, one analyst has computed a simple correlation between states’ deaths per million population and the count of days to lockdown, concluding in the Wall Street Journal that there was “no correlation.” However, the issue of whether lockdowns saved lives needs further study, given that many variables not considered in this study could explain the spread of death rates across states.

[6] Available online at: https://www.bloomberg.com/news/articles/2020-04-09/fauci-says-u-s-virus-deaths-may-be-60-000-halving-projections


*Richard McKenzie is a professor (emeritus) in the Merage Business School at the University of California, Irvine, and author of A Brain-Focused Foundation for Economic Science (Palgrave Macmillan, 2018).

For more articles by Richard McKenzie, see the Archive.


(0 COMMENTS)

Here are the 10 latest posts from EconTalk.

EconTalk June 1, 2020

Sarah Carr on Charter Schools, Educational Reform, and Hope Against Hope

Journalist and author Sarah Carr talks about her book Hope Against Hope with EconTalk host Russ Roberts. Carr looked at three schools in New Orleans in the aftermath of Hurricane Katrina and chronicled their successes, failures, and the challenges facing educational reform in the poorest parts of America.

EconTalk May 25, 2020

Martin Gurri on the Revolt of the Public

Author Martin Gurri, Visiting Fellow at George Mason University’s Mercatus Center, talks about his book The Revolt of the Public with EconTalk host Russ Roberts. Gurri argues that a digital tsunami–the increase in information that the web provides–has destabilized authority and many institutions. He talks about the amorphous nature of recent populist protest movements around […]

EconTalk May 18, 2020

Robert Pondiscio on How the Other Half Learns

Author and teacher Robert Pondiscio of the Thomas B. Fordham Institute talks about his book How the Other Half Learns with EconTalk host Russ Roberts. Pondiscio shares his experience of being embedded in a Success Academy Charter School in New York City for a year–lessons about teaching, education policy, and student achievement.

EconTalk May 15, 2020

Paul Romer on the COVID-19 Pandemic

In this Bonus Episode of EconTalk, economist and Nobel Laureate Paul Romer discusses the coronavirus pandemic with EconTalk host Russ Roberts. Romer argues that the status quo of shutdown and fear of infection is unsustainable. Returning to normal requires an inexpensive, quick, and relatively painless test. Such tests are now available. The challenge is in […]

EconTalk May 11, 2020

Branko Milanovic on Capitalism, Alone

Economist and author Branko Milanovic of the Graduate Center, CUNY, talks about his book, Capitalism, Alone, with EconTalk host Russ Roberts. They discuss inequality, the challenge of corruption in the Chinese system, and Milanovic’s claim that in American capitalism, the texture of daily life is increasingly affected by the sharing economy and other opportunities.

EconTalk May 4, 2020

L.A. Paul on Vampires, Life Choices, and Transformation

Philosopher and author L.A. Paul talks about her book Transformative Experience with EconTalk host Russ Roberts. Paul explores the uncertainties that surround the transformative experiences that we choose and that happen to us without choosing. How should we think about the morality and personal impact of these kinds of experiences, especially when some decisions are […]

EconTalk April 27, 2020

Alan Lightman on Stardust, Meaning, Religion, and Science

Physicist and author Alan Lightman talks with EconTalk host Russ Roberts about the origins of the universe, meaning, transcendence, and the relationship between science and religion.

EconTalk April 20, 2020

Vinay Prasad on Cancer Drugs, Medical Ethics, and Malignant

Oncologist, author, and podcaster Vinay Prasad talks about his book Malignant with EconTalk host Russ Roberts. Prasad lays out the conflicts of interest and scientific challenges that make drugs that fight cancer so disappointing at times. The conversation looks at how policy changes might improve the incentives facing doctors, pharmaceutical companies, and patients.

EconTalk April 13, 2020

Ed Leamer on Manufacturing, Effort, and Inequality

Economist Ed Leamer of UCLA talks about manufacturing, effort, and inequality with EconTalk host Russ Roberts. The conversation draws on recent empirical work of Leamer’s on how measured inequality is affected by the work effort of Americans at different levels of education. The conversation ends with a discussion of how education can be transformed when […]

EconTalk April 6, 2020

Arnold Kling on the Three Languages of Politics, Revisited

Economist and author Arnold Kling talks about the revised edition of his book The Three Languages of Politics in front of a live audience at the Cato Institute, recorded in September of 2019. Kling talks about the changed political landscape in the United States and around the world and how his ideas have changed since […]

Here are the 10 latest posts from CEE.

CEE March 24, 2020

Harold Demsetz

Harold Demsetz made major contributions to the economics of property rights and to the economics of industrial organization. He also coined the term “the Nirvana approach.” Economists have altered it slightly but use it widely. Demsetz was one of the few top economists of his era to communicate almost entirely in words and not math. Demsetz also defended both economic freedom and civil liberties.

Drawing on anthropological research, Demsetz noted that, although the native Canadians (Canadians often call them First Nations people) in Labrador had property rights in the early 18th century, they did not have property rights in the mid-17th century. What changed? Demsetz argued that the advent of the fur trade in the late 17th century made it more valuable to establish property rights so that the beavers were not overtrapped. By contrast, native Americans on the southwestern plains of the United States did not establish property rights; Demsetz reasoned that it was because the animals they hunted wandered over wide tracts of land and, therefore, the cost of establishing property rights was prohibitive. One of Demsetz’s most important contributions was a 1967 article, “Toward a Theory of Property Rights.” In it, he argued that property rights tend to develop where the gains from defining and enforcing those rights exceed the costs. He found confirming evidence in the presence or absence of property rights among native Americans and native Canadians, and he dismissed the idea that they were primitive people who couldn’t understand or appreciate property rights. Instead, he argued, they developed property rights in areas of North America where the property was worth defending.

In the 1960s, the dominant view in the area of economics called industrial organization was that concentration in industries was bad because it led to monopoly. In the 1970s, Demsetz challenged that view. He argued that the kind of monopoly to worry about is caused by government regulation that prohibits firms from entering an industry. He pointed to the Civil Aeronautics Board’s restrictions on entry by new airlines and the Federal Communication Commission’s hobbling of cable TV as examples. He wrote, “The legal route of monopoly runs through Washington and the state capitals.” But, he argued, if a few firms achieved a large market share through economies of scale or through superior performance, we should not worry, and the antitrust officials should not go after such firms. As long as the government doesn’t restrict new competitors, firms with a large market share will face competition in the future.

In a 1969 article, “Information and Efficiency: Another Viewpoint,” Demsetz accused fellow economist Kenneth Arrow of taking the “Nirvana approach” and recommended instead a “comparative institutions approach.” He wrote, “[T]hose who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient.” Specifically, Arrow showed ways in which the free market might provide too little innovation, but then simply assumed that government intervention would get the economy closer to the optimum. Demsetz conceded that ideal government intervention might improve things, but he noted that Arrow, like many economists, had failed to show that actual government intervention would do so. Economists have slightly changed the label on Demsetz’s insight: they now refer to it as the “Nirvana fallacy.”

Another major Demsetz contribution was his thinking about natural monopoly, evidenced best in his 1968 article “Why Regulate Utilities?” In that article, Demsetz stated that the theory of natural monopoly “is deficient for it fails to reveal the logical steps that carry it from scale economies in production to monopoly price in the market place.” How so? Demsetz argued that competing providers could bid to be the single provider and that consumers, if well organized, could choose among competing providers. The competition among potential providers would prevent the winning provider from charging a monopoly price.

Economists often use negative externalities as a justification for government regulation. One standard example is pollution; in their actions, polluters do not take into account the damage imposed on others. Demsetz pointed out that governments also impose negative externalities. In the above-mentioned 1967 article on property rights, Demsetz wrote, “Perhaps one of the most significant cases of externalities is the extensive use of the military draft. The taxpayer benefits by not paying the full cost of staffing the armed services.” He added, “It has always seemed incredible to me that so many economists can recognize an externality when they see smoke but not when they see the draft.” Demsetz was a strong opponent of the draft.

One of Demsetz’s other contributions, co-authored with Armen A. Alchian, was his 1972 article “Production, Information Costs, and Economic Organization.” A 2011 article written by three Nobel Prize winners (Kenneth J. Arrow, Daniel L. McFadden, and Robert M. Solow) and three other economists (B. Douglas Bernheim, Martin S. Feldstein, and James M. Poterba) stated that this article was one of the top 20 articles published in the American Economic Review in the first 100 years of its existence. In it, Alchian and Demsetz proposed the idea that the reason to have firms is that team production is important and monitoring the productivity of team members is difficult. Therefore, they argued, firms, to be effective, must have people in the firm who monitor and who are residual claimants. These people, often, but not always, the owners, get some fraction of the profits of the firm and, therefore, have an incentive to monitor effectively. That helps solve the classic principal-agent problem.

In a famous 1933 book titled The Modern Corporation and Private Property, Adolf A. Berle and Gardiner C. Means had argued that diffusion of ownership in modern corporations gave managers of large corporations more control, shifting it from the owners. These managers, they argued, would use that control to benefit themselves. Demsetz and co-author Kenneth Lehn questioned that reasoning. They argued that owners would not give up control without getting something in return. If Berle and Means were correct, they wrote, then one should observe a lower rate of profit in firms with highly diffused ownership. But if Demsetz and Lehn were correct, one should see no such relationship because diffused ownership would happen where there were profitable reasons for it to happen. They wrote:

A decision by shareholders to alter the ownership structure of their firm from concentrated to diffuse should be a decision made in awareness of its consequences for loosening control over professional management. The higher cost and reduced profit that would be associated with this loosening in owner control should be offset by lower capital costs or other profit-enhancing aspects of diffuse ownership if shareholders choose to broaden ownership.

Demsetz and Lehn found “no significant relationship between ownership concentration and accounting profit rate,” just as they expected.

In a 2013 tribute to Demsetz’s co-author Alchian, economist Thomas Hubbard highlighted their 1972 article, writing:

This paper may be the most influential paper in the economics of organization, catalyzing the development of the field as we know it. It is the most-cited paper published in the AER [American Economic Review] in the past 40 years. (If one takes away finance and econometrics methods papers, it is the most-cited ‘economics’ paper, period.) It is truly a spectacular piece. It is a theory not only of firms’ boundaries, but also the firm’s hierarchical and financial structure.[1]

Demsetz was also an early defender of the rights of homosexuals. At the September 1978 Mont Pelerin Society meetings in Hong Kong, Demsetz criticized, on grounds of individual rights, the Briggs Initiative, which was on the November 1978 California ballot. This initiative would have banned homosexuals from teaching in public schools. The initiative was defeated, helped by the opposition of Demsetz’s fellow Californian Ronald Reagan.

For more on Demsetz’s life and work, see A Conversation with Harold Demsetz, an Intellectual Portrait at Econlib Videos.

Demsetz, a native of Chicago, earned his undergraduate degree in economics at the University of Illinois in 1953 and his Ph.D. in economics at Northwestern University in 1959. He taught at the University of Michigan from 1958 to 1960, at UCLA from 1960 to 1963, at the University of Chicago from 1963 to 1971, and then again at UCLA from 1971 until his retirement. In 2013, he was made a Distinguished Fellow of the American Economic Association.

In 1963, when Demsetz was on the UCLA faculty, a University of Chicago economist named Reuben Kessel asked him if he was happy there. Demsetz, sensing an offer in the works, answered, “Make me unhappy.” The University of Chicago did just that, and Demsetz moved to Chicago for eight productive years.

 

 

Selected Works

  1. “Minorities in the Marketplace.” North Carolina Law Review, Vol. 43, No. 2: 271-97.

  2. “Toward a Theory of Property Rights.” American Economic Review, Vol. 57, No. 2 (May 1967): 347-59.

  3. “Why Regulate Utilities?” Journal of Law and Economics, Vol. 11, No. 1 (April 1968): 55-65.

  4. (with Armen A. Alchian). “Production, Information Costs, and Economic Organization.” American Economic Review, Vol. 62, No. 5 (December 1972): 777-95.

  5. “Industry Structure, Market Rivalry, and Public Policy.” Journal of Law and Economics, Vol. 16, No. 1 (April 1973): 1-9.

  6. “Two Systems of Belief about Monopoly,” in Industrial Concentration: The New Learning, edited by H. J. Goldschmid, H. M. Mann, and J. F. Weston. Little, Brown.

  7. (with Kenneth Lehn). “The Structure of Corporate Ownership: Causes and Consequences.” Journal of Political Economy, Vol. 93, No. 6 (December 1985): 1155-1177.

  8. Ownership, Control, and the Firm. Cambridge, MA: Basil Blackwell.

  9. Efficiency, Competition, and Policy. Cambridge, MA: Basil Blackwell.


[1] Thomas N. Hubbard, “A Legend in Economics Passes,” Digitopoly, February 20, 2013. At: https://digitopoly.org/2013/02/20/a-legend-in-economics-passes/

(0 COMMENTS)

CEE July 19, 2019

Richard H. Thaler

 

Richard H. Thaler won the 2017 Nobel Prize in Economic Science for “his contributions to behavioral economics.”

In most of his work, Thaler has challenged the standard economist’s model of rational human beings. He has shown some of the ways in which people systematically depart from rationality and some of the decisions that result. He has used these insights to propose ways to help people save, and save more, for retirement. Thaler also advocates something called “libertarian paternalism.”

Economists generally assume that more choices are better than fewer choices. But if that were so, argues Thaler, people would be upset, not happy, when the host at a dinner party removes the pre-dinner bowl of cashews. Yet many of us are happy that it’s gone. Purposely taking away our choice to eat more cashews, he argues, makes up for our lack of self-control. This simple contradiction between the economists’ model of rationality and actual human behavior, plus many more that Thaler has observed, leads him to divide the population into “Econs” and “Humans.” Econs, according to Thaler, are people who are economically rational and fit the model completely. Humans are the vast majority of people.

Thaler (1980) noticed another anomaly in people’s thinking that is inconsistent with the idea that people are rational. He called it the “endowment effect.” People must be paid much more to give something up (their “endowment”) than they are willing to pay to acquire it. So, to take one of his examples from a survey, when people were asked how much they would have to be paid to accept an added mortality risk of one in one thousand, a typical response was $10,000. But when people were asked how much they would pay to reduce an existing risk of death by one in one thousand, a typical response was $200.

One of Thaler’s most-cited articles is De Bondt and Thaler (1985), “Does the Stock Market Overreact?” In that paper they compared the stocks of “losers” and “winners.” They defined losers as stocks that had recently dropped in value and winners as stocks that had recently increased in value, and their hypothesis was that people overreact to news, driving the prices of winners too high and the prices of losers too low. Consistent with that hypothesis, they found that the portfolio of losers outperformed the portfolio of winners.

One of the issues to which Thaler applied his thinking is that of saving for retirement. In his book Misbehaving, Thaler argues that if everyone were an Econ, it wouldn’t matter whether employers’ default option was not to sign up their employees for tax-advantaged retirement accounts and let them opt in or to sign them all up and let employees opt out. There are transactions costs associated with getting out of either default option, of course, but they are small relative to the stakes involved. For that reason, argued Thaler, either option should lead to about the same percentage of employees taking advantage of the program. Yet Brigitte C. Madrian and Dennis F. Shea found[1] that before a company they studied had tried automatic enrollment, only 49 percent of employees had joined the plan. When enrollment became the default, 84 percent of employees stayed enrolled. That is a large difference relative to what most economists would have expected.

Thaler and economist Shlomo Benartzi, arguing that people tend to be myopic and heavily discount the future, helped design a private pension plan to enable people to save more. Called Save More Tomorrow, it automatically increases the percent of their gross pay that people save in 401(k) plans every time they get a pay raise. That way, people can save more without ever having to cut their current consumption expenditures. Many “Econs” were presumably already doing that, but this plan helps Humans, as well. When a midsize manufacturing firm implemented their plan, participants, at the end of four annual raises, had almost quadrupled their saving rate.
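The mechanics can be illustrated with a small sketch. The salary, raise, and escalation numbers below are made-up illustrations, not the actual parameters of the Save More Tomorrow plan.

```python
# Illustrative sketch of a Save More Tomorrow-style escalator (made-up numbers).
# Each annual raise bumps the 401(k) contribution rate, so take-home pay never falls.

salary = 50_000.0          # starting annual salary (assumed)
contribution_rate = 0.035  # starting contribution rate (assumed)
raise_rate = 0.035         # 3.5% annual raise (assumed)
escalator = 0.02           # contribution rate rises 2 points at each raise (assumed)

take_home = salary * (1 - contribution_rate)
print(f"Year 0: saving rate {contribution_rate:.1%}, take-home ${take_home:,.0f}")

for year in range(1, 5):
    salary *= 1 + raise_rate
    contribution_rate += escalator          # escalation happens only at raise time
    new_take_home = salary * (1 - contribution_rate)
    assert new_take_home >= take_home       # consumption never has to be cut
    take_home = new_take_home
    print(f"Year {year}: saving rate {contribution_rate:.1%}, take-home ${take_home:,.0f}")
```

In this sketch the saving rate more than triples over four raises while take-home pay rises every year, which is the core of the design.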

In their book Nudge, Thaler, along with co-author Cass Sunstein, a law professor, used behavioral economics to argue for “nudging” people to make better decisions. In each person, they argued, are an impulsive Doer and a farsighted Planner. In the retirement saving example above, the Doer wants to spend now and the Planner wants to save for retirement. Which preferences should be taken account of in public policy?

As noted earlier, Thaler believes in “libertarian paternalism.” In Nudge, he and Sunstein lay out the concept. The basic idea is to have the government set paternalist rules as defaults but let people choose to opt out at low cost. One example is laws requiring motorcyclists to wear helmets. That is paternalism. How would they make it libertarian? They favorably cite New York Times columnist John Tierney’s proposal that motorcyclists who don’t want to wear helmets be required to take an extra driving course and show proof of health insurance.

In a review of Nudge, Thomas Leonard writes:

The irony is that behavioral economics, having attacked Homo Economicus as an empirically false description of human choice, now proposes, in the name of paternalism, to enshrine the very same fellow as the image of what people should want to be. Or, more precisely, what paternalists want people to be. For the consequence of dividing the self has been to undermine the very idea of true preferences. If true preferences don’t exist, the libertarian paternalist cannot help people get what they truly want. He can only make like an old fashioned paternalist, and give people what they should want.[2]

In some areas, Thaler seems to have departed from the view that long-term considerations should guide economic policy. A standard view among economists is that after a flood or hurricane, a government should refrain from imposing price controls on crucial items such as fresh water, food, or plywood. That way, goes the economic reasoning, suppliers in other parts of the country have an incentive to move goods to where they are needed most and buyers will be careful not to stock up as much immediately after the flood or hurricane. In 2012, when asked about a proposed anti-price-gouging law in Connecticut, Thaler answered succinctly, “Not needed. Big firms hold prices firm. ‘Entrepreneurs’ with trucks help meet supply. Are the latter covered? If so, [the proposed law is] bad.”[3] What he was getting at is that, to some extent, we get the best of both worlds. Companies like Wal-Mart, worried about their reputation with consumers, will refrain from price gouging but will stock up in advance; one-time entrepreneurs, not worried about their reputations, will supply high-priced items to people who want them badly. But in a Marketplace interview in September 2017,[4] Thaler said, “A time of crisis is a time for all of us to pitch in; it’s not a time for all of us to grab.” He seemed to have moved from the mainstream economists’ view to the popular view.

One relatively unexplored area in Thaler’s work is how government officials show the same irrationality that many of us show and the implications of that fact for government policy.

Thaler earned his Bachelor of Arts degree with a major in economics at Case Western Reserve University in 1965, his master’s degree in economics from the University of Rochester in 1970, and his Ph.D. in economics from the University of Rochester in 1974. He was a professor at the University of Rochester’s Graduate School of Management from 1974 to 1978 and a professor at Cornell University’s Johnson School of Management from 1978 to 1995. He has been a professor at the University of Chicago’s Booth School of Business since 1995.

 

 

Selected Works

  1. “Toward a Positive Theory of Consumer Choice.” Journal of Economic Behavior and Organization, Vol. 1, No. 1, pp. 39-60.

  2. (with Werner F. M. De Bondt). “Does the Stock Market Overreact?” Journal of Finance, Vol. 40, pp. 793-805.

  3. The Winner’s Curse: Paradoxes and Anomalies of Economic Life. Princeton University Press.

  4. (with Cass R. Sunstein). “Libertarian Paternalism.” American Economic Review, Vol. 93, No. 2, pp. 175-179.

  5. (with Shlomo Benartzi). “Save More Tomorrow™: Using Behavioral Economics to Increase Employee Saving.” Journal of Political Economy, Vol. 112, No. S1, pp. S164-87.

  6. (with Cass Sunstein). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.

  7. Misbehaving: The Making of Behavioral Economics. New York: W. W. Norton.

 

 

 

[1] Brigitte C. Madrian and Dennis F. Shea, “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior,” Quarterly Journal of Economics, Vol. CXVI, Issue 4, November 2001, pp. 1149-1187. At: https://www.ssc.wisc.edu/scholz/Teaching_742/Madrian_Shea.pdf

[2] Thomas Leonard, “Review of Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness.” Constitutional Political Economy 19(4): 356-360.

[3] http://www.igmchicago.org/surveys/price-gouging

[4] https://www.marketplace.org/shows/marketplace/09012017

(0 COMMENTS)

CEE May 28, 2019

William D. Nordhaus

 

William D. Nordhaus was co-winner, along with Paul M. Romer, of the 2018 Nobel Prize in Economic Science “for integrating climate change into long-run macroeconomic analysis.”

Starting in the 1970s, Nordhaus constructed increasingly comprehensive models of the interaction between the economy and additions of carbon dioxide to the atmosphere, along with its effects on global warming. Economists use these models, along with assumptions about various magnitudes, to compute the “social cost of carbon” (SCC). The idea is that past a certain point, additions of carbon dioxide to the atmosphere heat the earth and thus create a global negative externality. The SCC is the net cost that using that additional carbon imposes on society. While the warmth has some benefits in, for example, causing longer growing seasons and improving recreational alternatives, it also has costs, such as raising ocean levels and making some land uses obsolete. The SCC is the net of these costs and benefits and is measured at the current margin. (The “current margin” language is important because otherwise one can get the wrong impression that any use of carbon is harmful.) Nordhaus and others then use the SCC to recommend taxes on carbon. In 2017, Nordhaus computed the optimal tax to be $31 per ton of carbon dioxide. To put that into perspective, a $31 carbon tax would increase the price of gasoline by about 28 cents per gallon.
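As a rough check on that conversion, one can multiply the tax by the carbon dioxide emitted per gallon of gasoline. The 8.9 kilograms-per-gallon emissions factor used below is a commonly cited approximation, not a figure from Nordhaus.

```python
# Back-of-the-envelope check of the carbon-tax-to-gasoline-price conversion.
# Assumes roughly 8.9 kg of CO2 emitted per gallon of gasoline burned.

tax_per_tonne_co2 = 31.0          # dollars per metric ton of CO2
kg_co2_per_gallon = 8.9           # assumed emissions factor
tax_per_gallon = tax_per_tonne_co2 * kg_co2_per_gallon / 1000.0
print(f"Implied tax: about {100 * tax_per_gallon:.0f} cents per gallon")  # roughly 28 cents
```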

Nordhaus noted, though, that there is a large amount of uncertainty about the optimal tax. For the $31 estimate above, the actual optimal tax could be as little as $6 per ton or as much as $93.

Interestingly, according to Nordhaus’s model, setting too high a carbon tax can be worse than setting no carbon tax at all. According to the calibration of Nordhaus’s model in 2007, with no carbon tax and no other government controls, the present value of environmental damage plus abatement costs would be $22.59 trillion (in 2004 dollars). Nordhaus’s optimal carbon tax would have reduced damage but increased abatement costs, for a total of $19.52 trillion, an improvement of only $3.07 trillion. But the cost of a policy to limit the temperature increase to only 1.5°C would have been $37.03 trillion, which is $16.4 trillion more than the cost of the “do nothing” option. Those numbers will be different today, but what is not different is that the cost of doing nothing is substantially below the cost of limiting the temperature increase to only 1.5°C.

One item the Nobel committee did not mention is his demonstration that the price of light has fallen by many orders of magnitude over the last 200 years. He showed that the price of light in 1992, adjusted for inflation, was less than one tenth of one percent of its price in 1800. Failure to take this reduction fully into account, noted Nordhaus, meant that economists have substantially underestimated the real growth rate of the economy and the growth rate of real wages.
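Treating “less than one tenth of one percent” as roughly a factor-of-1,000 decline over those 192 years, a quick calculation suggests the implied average annual rate of price decline. The 1/1,000 ratio is an approximation used only for illustration.

```python
# Implied average annual decline in the real price of light, 1800-1992,
# if the 1992 price was about 1/1000 of the 1800 price (approximation).
years = 1992 - 1800
price_ratio = 1 / 1000
annual_decline = 1 - price_ratio ** (1 / years)
print(f"Average decline of about {annual_decline:.1%} per year")  # roughly 3.5%
```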

Nordhaus also did pathbreaking work on the distribution of gains from innovation. In a 2004 study he wrote:

Only a minuscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.

Nordhaus earned his B.A. degree at Yale University in 1963 and his Ph.D. in economics at MIT in 1967. From 1977 to 1979, he was a member of President Carter’s Council of Economic Advisers.

 

 


Selected Works

  1. “Economic Growth and Climate: The Case of Carbon Dioxide.” American Economic Review, Vol. 67, No. 1, pp. 341-346.

  2. “Do Real-Output and Real-Wage Measures Capture Reality? The History of Lighting Suggests Not,” in Timothy F. Bresnahan and Robert J. Gordon, eds., The Economics of New Goods. Chicago: University of Chicago Press, 1996.

  3. (with J. Boyer). Warming the World: Economic Models of Global Warming. Cambridge, MA: MIT Press.

  4. “Schumpeterian Profits in the American Economy: Theory and Measurement.” NBER Working Paper No. 10433, April 2004.

  5. “Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies.” NBER Working Paper No. 22933.

(0 COMMENTS)

CEE May 28, 2019

Paul M. Romer

 

In 2018, U.S. economist Paul M. Romer was co-recipient, along with William D. Nordhaus, of the Nobel Prize in Economic Science for “integrating technological innovations into long-run macroeconomic analysis.”

Romer developed “endogenous growth theory.” Before his work in the 1980s and early 1990s, the dominant economic model of economic growth was one that MIT economist Robert Solow developed in the 1950s. Even though Solow concluded that technological change was a key driver of economic growth, his own model made technological change exogenous. That is, technological change was not something determined in the model but was an outside factor. Romer made it endogenous.

There are actually two very different phases in Romer’s work on endogenous growth theory. Romer (1986) and Romer (1987) had an AK model. Real output was equal to A times K, where A is a positive constant and K is the amount of physical capital. The model assumes diminishing marginal returns to K, but assumes also that part of a firm’s investment in capital results in the production of new technology or human capital that, because it is non-rival and non-excludable, generates spillovers (positive externalities) for all firms. Because this technology is embodied in physical capital, as the capital stock (K) grows, there are constant returns to a broader measure of capital that includes the new technology. Modeling growth this way allowed Romer to keep the assumption of perfect competition, so beloved by economists.
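A minimal numerical sketch of an AK economy, with purely illustrative parameter values, shows why output grows at a constant rate (roughly sA minus the depreciation rate) when returns to broad capital do not diminish.

```python
# Minimal AK growth sketch: Y = A*K, with K accumulating out of savings.
# Parameter values are illustrative assumptions, not estimates.

A = 0.4        # productivity of broad capital
s = 0.25       # saving rate
delta = 0.05   # depreciation rate
K = 100.0

prev_Y = A * K
for year in range(1, 6):
    K = K + s * prev_Y - delta * K   # capital accumulation
    Y = A * K
    growth = Y / prev_Y - 1
    print(f"Year {year}: output growth {growth:.1%}")   # constant, about s*A - delta = 5%
    prev_Y = Y
```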

In Romer (1990), Romer rejected his own earlier model. Instead, he assumed that firms are monopolistically competitive. That is, industries are competitive, but many firms within a given industry have market power. Monopolistically competitive firms develop technology that they can exclude others from using. The technology is non-rival; that is, one firm’s use of the technology doesn’t prevent other firms from using it. Because they can exploit their market power by innovating, they have an incentive to innovate. It made sense, therefore, to think carefully about how to structure such incentives.

Consider new drugs. Economists estimate that the cost of successfully developing and bringing a new drug to market is about $2.6 billion. Once the formula is discovered and tested, another firm could copy the invention of the firm that did all the work. If that second firm were allowed to sell the drug, the first firm would probably not do the work in the first place. One solution is patents. A patent gives the inventor a monopoly for a fixed number of years during which it can charge a monopoly price. This monopoly price, earned over years, gives drug companies a strong incentive to innovate.

Another way for new ideas to emerge, notes Romer, is for governments to subsidize research and development.

The idea that technological change is not just an outside factor but itself is determined within the economic system might seem obvious to those who have read the work of Joseph Schumpeter. Why did Romer get a Nobel Prize for his insights? It was because Romer’s model didn’t “blow up.” Previous economists who had tried mathematically to model growth in a Schumpeterian way had failed to come up with models in which the process of growth was bounded.

To his credit, Romer lays out some of his insights on growth in words and very simple math. In the entry on economic growth in The Concise Encyclopedia of Economics, Romer notes the huge difference in long-run well-being that would result from raising the economic growth rate by only a few percentage points. The “rule of 72” says that the length of time over which a magnitude doubles can be computed by dividing the growth rate into 72. It actually should be called the rule of 70, but the math with 72 is slightly easier. So, for example, if an economy grows by 2 percent per year, it will take 36 years for its size to double. But if it grows by 4 percent per year, it will double in 18 years.
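For comparison, a short calculation contrasts the rule-of-70 shortcut with the exact doubling time, ln(2)/ln(1+g).

```python
import math

# Doubling time: exact ln(2)/ln(1+g) versus the rule-of-70 shortcut 70/g.
for growth_pct in (1, 2, 4, 7):
    g = growth_pct / 100
    exact = math.log(2) / math.log(1 + g)
    rule_of_70 = 70 / growth_pct
    print(f"{growth_pct}% growth: exact {exact:.1f} years, rule of 70 gives {rule_of_70:.1f}")
```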

Romer warns that policy makers should be careful about using endogenous growth theory to justify government intervention in the economy. In a 1998 interview he stated:

A lot of people see endogenous growth theory as a blanket seal of approval for all of their favourite government interventions, many of which are very wrong-headed. For example, much of the discussion about infrastructure is just wrong. Infrastructure is to a very large extent a traditional physical good and should be provided in the same way that we provide other physical goods, with market incentives and strong property rights. A move towards privatization of infrastructure provision is exactly the right way to go. The government should be much less involved in infrastructure provision.[1]

In the same interview, he stated, “Selecting a few firms and giving them money has obvious problems” and that governments “must keep from taxing income at such high rates that it severely distorts incentives.”

In 2000, Romer introduced Aplia, an online set of problems and answers that economics professors could assign to their students and easily grade. The upside is that students are more prepared for lectures and exams and can engage with their fellow students in economic experiments online. The downside of Aplia, according to some economics professors, is that students get less practice actually drawing demand and supply curves by hand.

In 2009, Romer started advocating “Charter Cities.” His idea was that many people are stuck in countries with bad rules that make wealth creation difficult. If, he argued, an outside government could start a charter city in a country that had bad rules, people in that country could move there. Of course, this would require the cooperation of the country with the bad rules and getting that cooperation is not an easy task. His primary example of such an experiment working is Hong Kong, which was run by the British government until 1997. In a 2009 speech on charter cities, Romer stated, “Britain, through its actions in Hong Kong, did more to reduce world poverty than all the aid programs that we’ve undertaken in the last century.”[2]

Romer earned a B.S. in mathematics in 1977, an M.A. in economics in 1978, and a Ph.D. in economics in 1983, all from the University of Chicago. He also did graduate work at MIT and Queen’s University. He has taught at the University of Rochester, the University of Chicago, UC Berkeley, and Stanford University, and is currently a professor at New York University.

He was chief economist at the World Bank from 2016 to 2018.

 

 

[1] “Interview with Paul M. Romer,” in Brian Snowdon and Howard R. Vane, Modern Macroeconomics: Its Origins, Development and Current State, Cheltenham, UK: Edward Elgar, 2005, p. 690.

[2] Paul Romer, “Why the world needs charter cities,” TEDGlobal 2009.

 


Selected Works

  1. “Increasing Returns and Long-Run Growth.” Journal of Political Economy, Vol. 94, No. 5, pp. 1002-1037.
  2. “Growth Based on Increasing Returns Due to Specialization.” American Economic Review, Papers and Proceedings, Vol. 77, No. 2, pp. 56-62.
  3. “Endogenous Technological Change.” Journal of Political Economy. Vol. 98, No. 5, S71-S102.
  4. “Mathiness in the Theory of Economic Growth.” American Economic Review, Vol. 105, No. 5, pp. 89-93.

 

(0 COMMENTS)

CEE March 13, 2019

Jean Tirole

 

In 2014, French economist Jean Tirole was awarded the Nobel Prize in Economic Sciences “for his analysis of market power and regulation.” His main research, in which he uses game theory, is in an area of economics called industrial organization. Economists studying industrial organization apply economic analysis to understanding the way firms behave and why certain industries are organized as they are.

From the late 1960s to the early 1980s, economists George Stigler, Harold Demsetz, Sam Peltzman, and Yale Brozen, among others, played a dominant role in the study of industrial organization. Their view was that even though most industries don’t fit the economists’ “perfect competition” model—a model in which no firm has the power to set a price—the real world was full of competition. Firms compete by cutting their prices, by innovating, by advertising, by cutting costs, and by providing service, just to name a few. Their understanding of competition led them to skepticism about much of antitrust law and most government regulation.

In the 1980s, Jean Tirole introduced game theory into the study of industrial organization, also known as IO. The key idea of game theory is that, unlike for price takers, firms with market power take account of how their rivals are likely to react when they change prices or product offerings. Although the earlier-mentioned economists recognized this, they did not rigorously use game theory to spell out some of the implications of this interdependence. Tirole did.

One issue on which Tirole and his co-author Jean-Jacques Laffont focused was “asymmetric information.” A regulator has less information than the firms it regulates. So, if the regulator guesses incorrectly about a regulated firm’s costs, which is highly likely, it could set prices too low or too high. Tirole and Laffont showed that a clever regulator could offset this asymmetry by constructing contracts and letting firms choose which contract to accept. If, for example, some firms can take measures to lower their costs and other firms cannot, the regulator cannot necessarily distinguish between the two types. The regulator, recognizing this fact, may offer the firms either a cost-plus contract or a fixed-price contract. The cost-plus contract will appeal to firms with high costs, while the fixed-price contract will appeal to firms that can lower their costs. In this way, the regulator maintains incentives to keep costs down.
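A toy version of that menu, with made-up numbers, shows the self-selection at work: the firm that can cut costs takes the fixed-price contract (and keeps its incentive to economize), while the high-cost firm takes cost-plus. This is only a sketch of the screening logic, not Laffont and Tirole’s actual model.

```python
# Toy version of the regulator's contract menu (illustrative numbers only).
# Cost-plus: reimburse realized cost plus a small fixed fee.
# Fixed-price: pay a lump sum regardless of realized cost.

FEE = 2.0          # fee under the cost-plus contract
FIXED_PRICE = 92.0 # payment under the fixed-price contract

def best_choice(base_cost, can_cut_cost, effort_cost=5.0, cost_saving=20.0):
    """Return the contract a firm would choose and the profit it earns."""
    cost_plus_profit = FEE  # cost is reimbursed, so no reason to exert effort
    if can_cut_cost:
        fixed_price_profit = FIXED_PRICE - (base_cost - cost_saving) - effort_cost
    else:
        fixed_price_profit = FIXED_PRICE - base_cost
    if fixed_price_profit > cost_plus_profit:
        return "fixed-price", fixed_price_profit
    return "cost-plus", cost_plus_profit

print(best_choice(base_cost=100.0, can_cut_cost=True))   # ('fixed-price', 7.0): keeps cutting costs
print(best_choice(base_cost=100.0, can_cut_cost=False))  # ('cost-plus', 2.0)
```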

Their insights are most directly applicable to government entities, such as the Department of Defense, in their negotiations with firms that provide highly specialized military equipment. Indeed, economist Tyler Cowen has argued that Tirole’s work is about principal-agent theory rather than about reining in big business per se. In the Department of Defense example, the Department is the principal and the defense contractor is the agent.

One of Tirole’s main contributions has been in the area of “two-sided markets.” Consider Google. It can offer its services at one price to users (one side) and offer its services at a different price to advertisers (the other side). The higher the price to users, the fewer users there will be and, therefore, the less money Google will make from advertising. Google has decided to set a zero price to users and charge for advertising. Tirole and co-author Jean-Charles Rochet showed that the decision about profit-maximizing pricing is complicated, and they use substantial math to compute such prices under various theoretical conditions. Although Tirole believes in antitrust laws to limit both monopoly power and the exercise of monopoly power, he argues that regulators must be cautious in bringing the law to bear against firms in two-sided markets. An example of a two-sided market is a manufacturer of videogame consoles. The two sides are game developers and game players. He notes that it is very common for companies in such markets to set low prices on one side of the market and high prices on the other. But, he writes, “A regulator who does not bear in mind the unusual nature of a two-sided market may incorrectly condemn low pricing as predatory or high pricing as excessive, even though these pricing structures are adopted even by the smallest platforms entering the market.”
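A minimal sketch, using assumed demand and advertising numbers rather than anything from Tirole and Rochet, shows how a platform’s profit-maximizing user price can be zero once revenue from the other side of the market is taken into account.

```python
# Two-sided pricing sketch (all numbers are illustrative assumptions).
# Users join according to a simple demand curve; each user generates
# advertising revenue for the platform on the other side of the market.

AD_REVENUE_PER_USER = 12.0   # assumed value of a user to advertisers
COST_PER_USER = 0.2          # assumed cost of serving a user

def profit(user_price):
    users = max(0.0, 100.0 - 10.0 * user_price)   # assumed user demand
    return (user_price - COST_PER_USER + AD_REVENUE_PER_USER) * users

best_price = max((p / 10 for p in range(0, 101)), key=profit)
print(f"Profit-maximizing user price: ${best_price:.2f}")   # 0.00 in this example
```

With a weaker advertising side (a smaller AD_REVENUE_PER_USER), the same search yields a positive user price, which is why the pricing structure depends on conditions on both sides.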

Tirole has brought the same kind of skepticism to some other related regulatory issues. Many regulators, for example, have advocated government regulation of interchange fees (IFs) in payment card associations such as Visa and MasterCard. But in 2003, Rochet and Tirole wrote that “given the [economics] profession’s current state of knowledge, there is no reason to believe that the IFs chosen by an association are systematically too high or too low, as compared with socially optimal levels.”

After winning the Nobel Prize, Tirole wrote a book for a popular audience, Economics for the Common Good. In it, he applied economics to a wide range of policy issues, laying out, among other things, the advantages of free trade for most residents of a given country and why much legislation and regulation causes negative unintended consequences.

Like most economists, Tirole favors free trade. In Economics for the Common Good, he noted that French consumers gain from freer trade in two ways. First, free trade exposes French monopolies and oligopolies to competition. He argued that two major French auto companies, Renault and Peugeot-Citroen, “sharply increased their efficiency” in response to car imports from Japan. Second, free trade gives consumers access to cheaper goods from low-wage countries.

In that same book, Tirole considered the unintended consequences of a hypothetical, but realistic, case in which a non-governmental organization, wanting to discourage killing elephants for their tusks, “confiscates ivory from traffickers.” In this hypothetical example, the organization can destroy the ivory or sell it. Destroying the ivory, he reasoned, would drive up the price. The higher price could cause poachers to kill more elephants. Another example he gave is of the perverse effects of price ceilings. Not only do they cause shortages, but also, as a result of these shortages, people line up and waste time in queues. Their time spent in queues wipes out the financial gain to consumers from the lower price, while also hurting the suppliers. No one wins and wealth is destroyed.

Also in that book, Tirole criticized the French government’s labor policies, which make it difficult for employers to fire people. He noted that this difficulty makes employers less likely to hire people in the first place. As a result, the unemployment rate in France was above 7 percent for over 30 years. The effect on young people has been particularly pernicious. When he wrote this book, the unemployment rate for French residents between 15 and 24 years old was 24 percent, and only 28.6 percent of those in that age group had jobs. This was much lower than the OECD average of 39.6 percent, Germany’s 46.8 percent, and the Netherlands’ 62.3 percent.

One unintended, but predictable, consequence of government regulations of firms, which Tirole pointed out in Economics for the Common Good, is to make firms artificially small. When a French firm with 49 employees hires one more employee, he noted, it is subject to 34 additional legal obligations. Not surprisingly, therefore, in a figure that shows the number of enterprises with various numbers of employees, a spike occurs at 47 to 49 employees.

In Economics for the Common Good, Tirole ranged widely over policy issues in France. In addressing the French university system, he criticized the system’s rejection of selective admission to university. He argued that such a system causes the least prepared students to drop out and concluded that “[O]n the whole, the French educational system is a vast insider-trading crime.”

Tirole is chairman of the Toulouse School of Economics and of the Institute for Advanced Study in Toulouse. A French citizen, he was born in Troyes, France and earned his Ph.D. in economics in 1981 from the Massachusetts Institute of Technology.


Selected Works

 

  1. (Co-authored with Jean-Jacques Laffont). “Using Cost Observation to Regulate Firms.” Journal of Political Economy, Vol. 94, No. 3 (Part I), June: 614-641.

  2. The Theory of Industrial Organization. MIT Press.

  3. (Co-authored with Drew Fudenberg). “Moral Hazard and Renegotiation in Agency Contracts.” Econometrica, Vol. 58, No. 6, November: 1279-1319.

  4. (Co-authored with Jean-Jacques Laffont). A Theory of Incentives in Procurement and Regulation. MIT Press.

  5. 2003. (Co-authored with Jean-Charles Rochet). “An Economic Analysis of the Determination of Interchange Fees in Payment Card Systems.” Review of Network Economics, Vol. 2, No. 2: 69-79.

  6. (Co-authored with Jean-Charles Rochet). “Two-Sided Markets: A Progress Report.” The RAND Journal of Economics, Vol. 37, No. 3, Autumn: 645-667.

  7. 2017. Economics for the Common Good. Princeton University Press.

 

(0 COMMENTS)

CEE November 30, 2018

The 2008 Financial Crisis

It was, according to accounts filtering out of the White House, an extraordinary scene. Hank Paulson, the U.S. treasury secretary and a man with a personal fortune estimated at $700m (£380m), had got down on one knee before the most powerful woman in Congress, Nancy Pelosi, and begged her to save his plan to rescue Wall Street.

    The Guardian, September 26, 20081

The financial crisis of 2008 was a complex event that took most economists and market participants by surprise. Since then, there have been many attempts to arrive at a narrative to explain the crisis, but none has proven definitive. For example, a Congressionally-chartered ten-member Financial Crisis Inquiry Commission produced three separate narratives: one supported by the six members appointed by the Democrats, one supported by three of the four members appointed by the Republicans, and a third written by the fourth Republican member, Peter Wallison.2

It is important to appreciate that the financial system is complex, not merely complicated. A complicated system, such as a smartphone, has a fixed structure, so it behaves in ways that are predictable and controllable. A complex system has an evolving structure, so it can evolve in ways that no one anticipates. We will never have a proven understanding of what caused the financial crisis, just as we will never have a proven understanding of what caused the first World War.

There can be no single, definitive narrative of the crisis. This entry can cover only a small subset of the issues raised by the episode.

Metaphorically, we may think of the crisis as a fire. It started in the housing market, spread to the sub-prime mortgage market, then engulfed the entire mortgage securities market and, finally, swept through the inter-bank lending market and the market for asset-backed commercial paper.

Home sales began to slow in the latter part of 2006. This soon created problems for the sector of the mortgage market devoted to making risky loans, with several major lenders—including the largest, New Century Financial—declaring bankruptcy early in 2007. At the time, the problem was referred to as the “sub-prime mortgage crisis,” confined to a few marginal institutions.

But by the spring of 2008, trouble was apparent at some Wall Street investment banks that underwrote securities backed by sub-prime mortgages. On March 16, commercial bank JP Morgan Chase acquired one of these firms, Bear Stearns, with help from loan guarantees provided by the Federal Reserve, the central bank of the United States.

Trouble then began to surface at all the major institutions in the mortgage securities market. By late summer, many investors had lost confidence in Freddie Mac and Fannie Mae, and the interest rates that lenders demanded from them were higher than what they could pay and still remain afloat. On September 7, the U.S. Treasury took these two GSEs into “conservatorship.”

Finally, the crisis hit the short-term inter-bank collateralized lending markets, in which all of the world’s major financial institutions participate. This phase began after government officials’ unsuccessful attempts to arrange a merger of investment bank Lehman Brothers, which declared bankruptcy on September 15. This bankruptcy caused the Reserve Primary money market fund, which held a lot of short-term Lehman securities, to mark down the value of its shares below the standard value of one dollar each. That created jitters in all short-term lending markets, including the inter-bank lending market and the market for asset-backed commercial paper in general, and caused stress among major European banks.

The freeze-up in the interbank lending market was too much for leading public officials to bear. Under intense pressure to act, Treasury Secretary Henry Paulson proposed a $700 billion financial rescue program. Congress initially voted it down, leading to heavy losses in the stock market and causing Secretary Paulson to plead for its passage. On a second vote, the measure, known as the Troubled Assets Relief Program (TARP), was approved.

In hindsight, within each sector affected by the crisis, we can find moral hazard, cognitive failures, and policy failures. Moral hazard (in insurance company terminology) arises when individuals and firms face incentives to profit from taking risks without having to bear responsibility in the event of losses. Cognitive failures arise when individuals and firms base decisions on faulty assumptions about potential scenarios. Policy failures arise when regulators reinforce rather than counteract the moral hazard and cognitive failures of market participants.

The Housing Sector

From roughly 1990 to the middle of 2006, the housing market was characterized by the following:

  • an environment of low interest rates, both in nominal and real (inflation-adjusted) terms. Low nominal rates create low monthly payments for borrowers. Low real rates raise the value of all durable assets, including housing.
  • prices for houses rising as fast as or faster than the overall price level
  • an increase in the share of households owning rather than renting
  • loosening of mortgage underwriting standards, allowing households with weaker credit histories to qualify for mortgages.
  • lower minimum requirements for down payments. A standard requirement of at least ten percent was reduced to three percent and, in some cases, zero. This resulted in a large increase in the share of home purchases made with down payments of five percent or less.
  • an increase in the use of new types of mortgages with “negative amortization,” meaning that the outstanding principal balance rises over time (see the sketch after this list).
  • an increase in consumers’ borrowing against their houses to finance spending, using home equity loans, second mortgages, and refinancing of existing mortgages with new loans for larger amounts.
  • an increase in the proportion of mortgages going to people who were not planning to live in the homes that they purchased. Instead, they were buying them to speculate. 3
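The negative-amortization sketch referenced in the list above uses assumed loan terms: when the required payment is set below the interest accruing each month, the loan balance grows rather than shrinks.

```python
# Negative amortization sketch (assumed terms): the monthly payment is set
# below the interest actually accruing, so the loan balance rises over time.

balance = 300_000.0
annual_rate = 0.06
minimum_payment = 1_200.0   # assumed teaser payment, below the ~$1,500 of monthly interest

for month in range(1, 13):
    interest = balance * annual_rate / 12
    balance += interest - minimum_payment   # the shortfall is added to principal
print(f"Balance after one year: ${balance:,.0f}")   # above the original $300,000
```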

These phenomena produced an increase in mortgage debt that far outpaced the rise in income over the same period. The trends accelerated in the three years just prior to the downturn in the second half of 2006.

The rise in mortgage debt relative to income was not a problem as long as home prices were rising. A borrower having difficulty finding the cash to make a mortgage payment on a house that had appreciated in value could either borrow more with the house as collateral or sell the house to pay off the debt.

But when house prices stopped rising late in 2006, households that had taken on too much debt began to default. This set in motion a reverse cycle: house foreclosures increased the supply of homes for sale; meanwhile, lenders became wary of extending credit, and this reduced demand. Prices fell further, leading to more defaults and spurring lenders to tighten credit still further.

During the boom, some people were speculating in non-owner-occupied homes, while others were buying their own homes with little or no money down. And other households were, in the vernacular of the time, “using their houses as ATMs,” taking on additional mortgage debt in order to finance consumption.

In most states in the United States, once a mortgage lender forecloses on a property, the borrower is not responsible for repayment, even if the house cannot be sold for enough to cover the loan. This creates moral hazard, particularly for property speculators, who can enjoy all of the profits if house prices rise but can stick lenders with some of the losses if prices fall.

One can see cognitive failure in the way that owners of houses expected home prices to keep rising at a ten percent rate indefinitely, even though overall inflation was less than half that amount.4 Also, many house owners seemed unaware of the risks of mortgages with “negative amortization.”

Policy failure played a big role in the housing sector. All of the trends listed above were supported by public policy. Because they wanted to see increased home ownership, politicians urged lenders to loosen credit standards. With the Community Reinvestment Act for banks and Affordable Housing Goals for Freddie Mac and Fannie Mae, they spurred traditional mortgage lenders to increase their lending to minority and low-income borrowers. When the crisis hit, politicians blamed lenders for borrowers’ inability to repay, and political pressure exacerbated the credit tightening that subsequently took place.

The Sub-prime Mortgage Sector

Until the late 1990s, few lenders were willing to give mortgages to borrowers with problematic credit histories. But sub-prime mortgage lenders emerged and grew rapidly in the decade leading up to the crisis. This growth was fueled by financial innovations, including the use of credit scoring to finely grade mortgage borrowers, and the use of structured mortgage securities (discussed in the next section) to make the sub-prime sector attractive to investors with a low tolerance for risk. Above all, it was fueled by rising home prices, which created a history of low default rates.

There was moral hazard in the sub-prime mortgage sector because the lenders were not holding on to the loans and, therefore, not exposing themselves to default risk. Instead, they packaged the mortgages into securities and sold them to investors, with the securities market allocating the risk.

Because they sold loans in the secondary market, profits at sub-prime lenders were driven by volume, regardless of the likelihood of default. Turning down a borrower meant getting no revenue. Approving a borrower meant earning a fee. These incentives were passed through to the staff responsible for finding potential borrowers and underwriting loans, so that personnel were compensated based on “production,” meaning the new loans they originated.

Although in theory the sub-prime lenders were passing on to others the risks that were embedded in the loans they were making, they were among the first institutions to go bankrupt during the financial crisis. This shows that there was cognitive failure in the management at these companies, as they did not foresee the house price slowdown or its impact on their firms.

Cognitive failure also played a role in the rise of mortgages that were underwritten without verification of the borrowers’ income, employment, or assets. Historical data showed that credit scores were sufficient for assessing borrower risk and that additional verification contributed little predictive value. However, it turned out that once lenders were willing to forgo these documents, they attracted a different set of borrowers, whose propensity to default was higher than their credit scores otherwise indicated.

There was policy failure in that abuses in the sub-prime mortgage sector were allowed to continue. Ironically, while the safety and soundness of Freddie Mac and Fannie Mae were regulated under the Department of Housing and Urban Development, which had an institutional mission to expand home ownership, consumer protection with regard to mortgages was regulated by the Federal Reserve Board, whose primary institutional missions were monetary policy and bank safety. Though mortgage lenders were setting up borrowers to fail, the Federal Reserve made little or no effort to intervene. Even those policy makers who were concerned about practices in the sub-prime sector believed that, on balance, sub-prime mortgage lending was helping a previously under-served set of households to attain home ownership.5

Mortgage Securities

A mortgage security consists of a pool of mortgage loans, the payments on which are passed through to pension funds, insurance companies, or other institutional investors looking for reliable returns with little risk. The market for mortgage securities was created by two government agencies, known as Ginnie Mae and Freddie Mac, established in 1968 and 1970, respectively.

Mortgage securitization expanded in the 1980s, when Fannie Mae, which previously had used debt to finance its mortgage purchases, began issuing its own mortgage-backed securities. At the same time, Freddie Mac was sold to shareholders, who encouraged Freddie to grow its market share. But even though Freddie and Fannie were shareholder-owned, investors treated their securities as if they were government-backed. This was known as an implicit government guarantee.

Attempts to create a market for private-label mortgage securities (PLMS) without any form of government guarantee were largely unsuccessful until the late 1990s. The innovations that finally got the PLMS market going were credit scoring and the collateralized debt obligation (CDO).

Before credit scoring was used in the mortgage market, there was no quantifiable difference between any two borrowers who were approved for loans. With credit scoring, the Wall Street firms assembling pools of mortgages could distinguish between a borrower with a very good score (750, as measured by the popular FICO system) and one with a more doubtful score (650).

Using CDOs, Wall Street firms were able to provide major institutional investors with insulation from default risk by concentrating that risk in other sub-securities (“tranches”) that were sold to investors who were more tolerant of risk. In fact, these basic CDOs were enhanced by other exotic mechanisms, such as credit default swaps, that reallocated mortgage default risk to institutions in which hardly any observer expected to find it, including AIG Insurance.
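A simplified two-tranche waterfall, with made-up numbers, illustrates the basic mechanism by which CDOs insulated senior investors: losses on the underlying pool are absorbed by the junior tranche before the senior tranche takes any loss.

```python
# Simplified two-tranche CDO waterfall (illustrative numbers only).
# Losses on the underlying pool hit the junior tranche first; the senior
# tranche is impaired only after the junior tranche is wiped out.

POOL = 100.0          # face value of the mortgage pool
SENIOR = 80.0         # senior tranche size
JUNIOR = 20.0         # junior tranche size

def tranche_losses(pool_loss):
    junior_loss = min(pool_loss, JUNIOR)
    senior_loss = min(max(pool_loss - JUNIOR, 0.0), SENIOR)
    return junior_loss, senior_loss

for loss in (5.0, 20.0, 35.0):
    j, s = tranche_losses(loss)
    print(f"Pool loss {loss:>4}: junior takes {j}, senior takes {s}")
```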

There was moral hazard in the mortgage securities market, as Freddie Mac and Fannie Mae sought profits and growth on behalf of shareholders, but investors in their securities expected (correctly, as it turned out) that the government would protect them against losses. Years before the crisis, critics grumbled that the mortgage giants exemplified privatized profits and socialized risks.6

There was cognitive failure in the assessment of default risk. Assembling CDOs and other exotic instruments required sophisticated statistical modeling. The most important driver of expectations for mortgage defaults is the path for house prices, and the steep, broad-based decline in home prices that took place in 2006-2009 was outside the range that some modelers allowed for.

Another source of cognitive failure is the “suits/geeks” divide. In many firms, the financial engineers (“geeks”) understood the risks of mortgage-related securities fairly well, but their conclusions did not make their way to the senior management level (“suits”).

There was policy failure on the part of bank regulators. Their previous adverse experience was with the Savings and Loan Crisis, in which firms that originated and retained mortgages went bankrupt in large numbers. This caused bank regulators to believe that mortgage securitization, which took risk off the books of depository institutions, would be safer for the financial system. For the purpose of assessing capital requirements for banks, regulators assigned a weight of 100 percent to mortgages originated and held by the bank, but assigned a weight of only 20 percent to the bank’s holdings of mortgage securities issued by Freddie Mac, Fannie Mae, or Ginnie Mae. This meant that banks needed to hold much more capital to hold mortgages than to hold mortgage-related securities; that naturally steered them toward the latter.
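The arithmetic behind that incentive can be sketched as follows, assuming the standard 8 percent minimum ratio of capital to risk-weighted assets; the 8 percent figure is an assumption here, not something stated in the text.

```python
# Capital required to hold $100 of mortgages directly versus $100 of
# agency mortgage-backed securities, assuming an 8% minimum capital ratio
# applied to risk-weighted assets (the 8% figure is an assumption here).

MIN_CAPITAL_RATIO = 0.08
EXPOSURE = 100.0

for asset, risk_weight in (("whole mortgages held by the bank", 1.00),
                           ("agency mortgage-backed securities", 0.20)):
    capital = EXPOSURE * risk_weight * MIN_CAPITAL_RATIO
    print(f"{asset}: ${capital:.2f} of capital per $100 of exposure")
```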

In 2001, regulators broadened the low-risk umbrella to include AAA-rated and AA-rated tranches of private-label CDOs. This ruling helped to generate a flood of PLMS, many of them backed by sub-prime mortgage loans.7

By using bond ratings as a key determinant of capital requirements, the regulators effectively put the bond rating agencies at the center of the process of creating private-label CDOs. The rating agencies immediately became subject to both moral hazard and cognitive failure. The moral hazard came from the fact that the rating agencies were paid by the issuers of securities, who wanted the most generous ratings possible, rather than being paid by the regulators, who needed more rigorous ratings. The cognitive failure came from the fact that the models the rating agencies used gave too little weight to potential scenarios of broad-based declines in house prices. Moreover, the banks that bought the securities were happy to see them rated AAA because the high ratings made the securities eligible for lower capital requirements on the part of the banks. Both sides, therefore, buyers and sellers, had bad incentives.

There was policy failure on the part of Congress. Officials in both the Clinton and Bush Administrations were unhappy with the risk that Freddie Mac and Fannie Mae represented to taxpayers. But Congress balked at any attempt to tighten regulation of the safety and soundness of those firms.8

The Inter-bank Lending Market

There are a number of mechanisms through which financial institutions make short-term loans to one another. In the United States, banks use the Federal Funds market to manage short-term fluctuations in reserves. Internationally, banks lend in what is known as the LIBOR market.

One of the least known and most important markets is for “repo,” which is short for “repurchase agreement.” As first developed, the repo market was used by government bond dealers to finance inventories of securities, just as an automobile dealer might finance an inventory of cars. A money-market fund might lend money for one day or one week to a bond dealer, with the loan collateralized by a low-risk long-term security.

In the years leading up to the crisis, some dealers were financing low-risk mortgage-related securities in the repo market. But when some of these securities turned out to be subject to price declines that took them out of the “low-risk” category, participants in the repo market began to worry about all repo collateral. Repo lending offers very low profit margins, and if an investor has to be very discriminating about the collateral backing a repo loan, it can seem preferable to back out of repo lending altogether. This, indeed, is what happened, in what economist Gary Gorton and others called a “run on repo.”9

Another element of institutional panic was “collateral calls” involving derivative financial instruments. Derivatives, such as credit default swaps, are like side bets. The buyer of a credit default swap is betting that a particular debt instrument will default. The seller of a credit default swap is betting the opposite.

In the case of mortgage-related securities, the probability of default seemed low prior to the crisis. Sometimes, buyers of credit default swaps were merely satisfying the technical requirements to record the underlying securities as AAA-rated. They could do this if they obtained a credit default swap from an institution that was itself AAA-rated. AIG was an insurance company that saw an opportunity to take advantage of its AAA rating to sell credit default swaps on mortgage-related securities. AIG collected fees, and its Financial Products division calculated that the probability of default was essentially zero. The fees earned on each transaction were low, but the overall profit was high because of the enormous volume. AIG’s credit default swaps were a major element in the expansion of shadow banking by non-bank financial institutions during the run-up to the crisis.

Late in 2005, AIG abruptly stopped writing credit default swaps, in part because its own rating had been downgraded below AAA earlier in the year for unrelated reasons. By the time AIG stopped selling credit default swaps on mortgage-related securities, it had outstanding obligations on $80 billion of underlying securities and was earning $1 billion a year in fees.10

Because AIG no longer had its AAA rating and because the underlying mortgage securities, while not in default, were increasingly shaky, provisions in the contracts that AIG had written allowed the buyers of credit default swaps to require AIG to provide protection in the form of low-risk securities posted as collateral. These “collateral calls” were like a margin call that a stock broker will make on an investor who has borrowed money to buy stock that subsequently declines in value. In effect, collateral calls were a run on AIG’s shadow bank.
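A stylized example, with assumed contract terms rather than AIG’s actual ones, shows how falling market values on the reference securities translate into collateral calls on the protection seller.

```python
# Stylized collateral call on a credit default swap (assumed terms).
# As the reference securities lose market value, the protection seller
# must post enough low-risk collateral to cover the mark-to-market gap.

NOTIONAL = 1_000.0       # protection written, in millions (assumed)
posted_collateral = 0.0

for market_value_pct in (100, 95, 85, 70):   # assumed path of reference security prices
    mark_to_market_gap = NOTIONAL * (100 - market_value_pct) / 100
    call = max(0.0, mark_to_market_gap - posted_collateral)
    posted_collateral += call
    print(f"Securities at {market_value_pct}% of face: collateral call of {call:,.0f}")
```

In practice the triggers also depended on the seller’s own credit rating, which is why AIG’s downgrade mattered so much; the sketch captures only the mark-to-market channel.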

These collateral calls were made when the crisis in the inter-bank lending market was near its height in the summer of 2008 and banks were hoarding low-risk securities. In fact, the shortage of low-risk securities may have motivated some of the collateral calls, as institutions like Deutsche Bank and Goldman Sachs sought ways to ease their own liquidity problems. In any event, AIG could not raise enough short-term funds to meet its collateral calls without trying to dump long-term securities into a market that had little depth to absorb them. It turned to Federal authorities for a bailout, which was arranged and creatively backed by the Federal Reserve, but at the cost of reducing the value of shares in AIG.

With repos and derivatives, there was moral hazard in that the traders and executives of the narrow units that engaged in exotic transactions were able to claim large bonuses on the basis of short-term profits. But the adverse long-term consequences were spread to the rest of the firm and, ultimately, to taxpayers.

There was cognitive failure in that the collateral calls were an unanticipated risk of the derivatives business. The financial engineers focused on the (remote) chances of default on the underlying securities, not on the intermediate stress that might emerge from collateral calls.

There was policy failure when Congress passed the Commodity Futures Modernization Act. This legislation specified that derivatives would not be regulated by either of the agencies with the staff most qualified to understand them. Rather than require oversight by the Securities and Exchange Commission or the Commodity Futures Trading Commission (which regulated market-traded derivatives), Congress decreed that the regulator responsible for overseeing each firm would evaluate its derivative position. The logic was that a bank that was using derivatives to hedge other transactions should have its derivative position evaluated in a larger context. But, as it happened, the insurance and bank regulators who ended up with this responsibility were not equipped to see the dangers at firms such as AIG.

There was also policy failure in that officials approved of securitization that transferred risk out of the regulated banking sector. While Federal Reserve officials were praising the risk management of commercial banks,11 risk was accumulating in the shadow banking sector (non-bank institutions in the financial system), including AIG insurance, money market funds, Wall Street firms such as Bear Stearns and Lehman Brothers, and major foreign banks. When problems in the shadow banking sector contributed to the freeze in inter-bank lending and in the market for asset-backed commercial paper, policy makers felt compelled to extend bailouts to satisfy the needs of these non-bank institutions for liquid assets.

Conclusion

In terms of the fire metaphor suggested earlier, in hindsight, we can see that the markets for housing, sub-prime mortgages, mortgage-related securities, and inter-bank lending were all highly flammable just prior to the crisis. Moral hazard, cognitive failures, and policy failures all contributed to the combustible mix.

The crisis also reflects a failure of the economics profession. A few economists, most notably Robert Shiller,12 warned that the housing market was inflated, as indicated by ratios of prices to rents that were high by historical standards. Also, when risk-based capital regulation was proposed in the wake of the Savings and Loan Crisis and the Latin American debt crisis, a group of economists known as the Shadow Financial Regulatory Committee warned that these regulations could be manipulated. They recommended, instead, greater use of senior subordinated debt at regulated financial institutions.13 Many economists warned about the incentives for risk-taking at Freddie Mac and Fannie Mae.14

But even these economists failed to anticipate the 2008 crisis, in large part because economists did not take note of the complex mortgage-related securities and derivative instruments that had been developed. Economists have a strong preference for parsimonious models, and they look at financial markets through a lens that includes only a few types of simple assets, such as government bonds and corporate stock. This approach ignores even the repo market, which has been important in the financial system for over 40 years, and, of course, it omits CDOs, credit default swaps and other, more recent innovations.

Financial intermediaries do not produce tangible output that can be measured and counted. Instead, they provide intangible benefits that economists have never clearly articulated. The economics profession has a long way to go to catch up with modern finance.


About the Author

Arnold Kling was an economist with the Federal Reserve Board and with the Federal Home Loan Mortgage Corporation before launching one of the first Web-based businesses in 1994. His most recent books are Specialization and Trade and The Three Languages of Politics. He earned his Ph.D. in economics from the Massachusetts Institute of Technology.


Footnotes

1

“A desperate plea – then race for a deal before ‘sucker goes down’” The Guardian, September 26, 2008. https://www.theguardian.com/business/2008/sep/27/wallstreet.useconomy1

 

2

The report and dissents of the Financial Crisis Inquiry Commission can be found at https://fcic.law.stanford.edu/

3

See Stefania Albanesi, Giacomo De Giorgi, and Jaromir Nosal 2017, “Credit Growth and the Financial Crisis: A New Narrative” NBER working paper no. 23740. http://www.nber.org/papers/w23740

 

4

Karl E. Case and Robert J. Shiller 2003, “Is there a Bubble in the Housing Market?” Cowles Foundation Paper 1089 http://www.econ.yale.edu/shiller/pubs/p1089.pdf

 

5

Edward M. Gramlich 2004, “Subprime Mortgage Lending: Benefits, Costs, and Challenges,” Federal Reserve Board speeches. https://www.federalreserve.gov/boarddocs/speeches/2004/20040521/

 

6

For example, in 1999, Treasury Secretary Lawrence Summers said in a speech, “Debates about systemic risk should also now include government-sponsored enterprises.” See Bethany McLean and Joe Nocera 2010, All the Devils are Here: The Hidden History of the Financial Crisis, Portfolio/Penguin Press. The authors write that Federal Reserve Chairman Alan Greenspan was also, like Summers, disturbed by the moral hazard inherent in the GSEs.

 

7

Jeffrey Friedman and Wladimir Kraus 2013, Engineering the Financial Crisis: Systemic Risk and the Failure of Regulation, University of Pennsylvania Press.

 

8

See McLean and Nocera, All the Devils are Here

 

9

Gary Gorton, Toomas Laarits, and Andrew Metrick 2017, “The Run on Repo and the Fed’s Response,” Stanford working paper. https://www.gsb.stanford.edu/sites/gsb/files/fin_11_17_gorton.pdf

 

10

Talking Points Memo 2009, “The Rise and Fall of AIG’s Financial Products Unit” https://talkingpointsmemo.com/muckraker/the-rise-and-fall-of-aig-s-financial-products-unit

 

11

Chairman Ben S. Bernanke 2006, “Modern Risk Management and Banking Supervision,” Federal Reserve Board speeches. https://www.federalreserve.gov/newsevents/speech/bernanke20060612a.htm

 

12

National Public Radio 2005, “Yale Professor Predicts Housing ‘Bubble’ Will Burst” https://www.npr.org/templates/story/story.php?storyId=4679264

 

13

Shadow Financial Regulatory Committee 2001, “The Basel Committee’s Revised Capital Accord Proposal” https://www.bis.org/bcbs/ca/shfirect.pdf

14

See the discussion in Viral V. Acharya, Matthew Richardson, Stijn Van Nieuwerburgh and Lawrence J. White 2011, Guaranteed to Fail: Fannie Mae, Freddie Mac, and the Debacle of Mortgage Finance, Princeton University Press.

 

(0 COMMENTS)

CEE September 18, 2018

Christopher Sims

 

Christopher Sims was awarded, along with Thomas Sargent, the 2011 Nobel Prize in Economic Sciences. The Nobel committee cited their “empirical research on cause and effect in the macroeconomy.” The economists who spoke at the press conference announcing the award emphasized Sargent’s and Sims’s analysis of the role of people’s expectations.

One of Sims’s earliest famous contributions was his work on money-income causality, which was cited by the Nobel committee. Money and income move together, but which causes which? Milton Friedman argued that changes in the money supply caused changes in income, noting that the supply of money often rises before income rises. Keynesians such as James Tobin argued that changes in income caused changes in the amount of money. Money seems to move first, but causality, said Tobin and others, still goes the other way: people hold more money when they expect income to rise in the future.

Which view is true? In 1972 Sims applied Clive Granger’s econometric test of causality. On Granger’s definition one variable is said to cause another variable if knowledge of the past values of the possibly causal variable helps to forecast the effect variable over and above the knowledge of the history of the effect variable itself. Implementing a test of this incremental predictability, Sims concluded “[T]he hypothesis that causality is unidirectional from money to income [Friedman’s view] agrees with the postwar U.S. data, whereas the hypothesis that causality is unidirectional from income to money [Tobin’s view] is rejected.”
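To see the mechanics of the test, here is a minimal sketch on synthetic data using the Python statsmodels library; it is not Sims's dataset or his exact specification, only an illustration of Granger's incremental-predictability idea:

# Minimal illustration of a Granger-causality test on synthetic data,
# in the spirit of Sims (1972); this is not his data or specification.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
money = np.zeros(n)
income = np.zeros(n)
for t in range(1, n):
    money[t] = 0.6 * money[t - 1] + rng.normal()
    # income responds to last period's money, so money "Granger-causes" income here
    income[t] = 0.5 * income[t - 1] + 0.4 * money[t - 1] + rng.normal()

# Column order matters: the test asks whether the SECOND column helps
# forecast the FIRST beyond the first column's own history.
data = pd.DataFrame({"income": income, "money": money})
results = grangercausalitytests(data[["income", "money"]], maxlag=4, verbose=False)
for lag, (tests, _) in results.items():
    print(lag, round(tests["ssr_ftest"][1], 4))  # small p-values: money helps forecast income

Reversing the column order runs the test in the other direction, which is exactly the comparison Sims made between the Friedman and Tobin hypotheses.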

Sims’s influential article “Macroeconomics and Reality” was a criticism of both the usual econometric interpretation of large-scale Keynesian econometric models and of Robert Lucas’s influential earlier criticism of these Keynesian models (the so-called Lucas critique). Keynesian econometricians had claimed that with sufficiently accurate theoretical assumptions about the structure of the economy, correlations among the macroeconomic variables could be used to measure the strengths of various structural connections in the economy. Sims argued that there was no basis for thinking that these theoretical assumptions were sufficiently accurate. Such so-called “identifying assumptions” were, Sims said, literally “incredible.” Lucas, on the other hand, had not rejected the idea of such identification. Rather he had pointed out that, if people held “rational expectations” – that is, expectations that, though possibly incorrect, did not deviate on average from what actually occurs in a correctable, systematic manner – then failing to account for them would undermine the stability of the econometric estimates and render the macromodels useless for policy analysis. Lucas and his New Classical followers argued that in forming their expectations people take account of the rules implicitly followed by monetary and fiscal policymakers; and, unless those rules were integrated into the econometric model, every time the policymakers adopted a new policy (i.e., new rules), the estimates would shift in unpredictable ways.

While rejecting the structural interpretation of large-scale macromodels, Sims did not reject the models themselves, writing: “[T]here is no immediate prospect that large-scale macromodels will disappear from the scene, and for good reason: they are useful tools in forecasting and policy analysis.” Sims conceded that the Lucas critique was correct in those cases in which policy regimes truly changed. But he argued that such regime changes were rare and that most economic policy was concerned with the implementation of a particular policy regime. For that purpose, the large-scale macromodels could be helpful, since what was needed for forecasting was a model that captured the complex interrelationships among variables and not one that revealed the deeper structural connections.

In the same article, Sims proposed an alternative to large-scale macroeconomic models, the vector autoregression (or VAR). In Sims’s view, the VAR had the advantages of the earlier macromodels, in that it could capture the complex interactions among a relatively large number of variables needed for policy analysis and yet did not rely on as many questionable theoretical assumptions. With subsequent developments by Sims and others, the VAR became a major tool of empirical macroeconomic analysis.
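A small VAR of the kind Sims proposed can be estimated in a few lines; the sketch below uses synthetic data and the Python statsmodels library purely as an illustration:

# Minimal VAR sketch on synthetic data, illustrating the tool Sims proposed;
# real applications would use actual macro series (output, prices, money, etc.).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 300
shocks = rng.normal(size=(n, 2))
y = np.zeros((n, 2))
for t in range(1, n):
    # Each variable depends on lags of both variables, as in a VAR(1).
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.1 * y[t - 1, 1] + shocks[t, 0]
    y[t, 1] = 0.2 * y[t - 1, 0] + 0.4 * y[t - 1, 1] + shocks[t, 1]

data = pd.DataFrame(y, columns=["output_growth", "inflation"])
results = VAR(data).fit(maxlags=4, ic="aic")   # lag length chosen by AIC
print(results.summary())
irf = results.irf(10)                          # impulse responses, 10 periods ahead
print(irf.irfs.shape)                          # (11, 2, 2) array of responses

The estimated coefficients capture the interrelationships among the variables without imposing the structural identifying assumptions Sims found incredible.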

Sims has also suggested that sticky prices are caused by “rational inattention,” an idea imported from electronic communications. Just as computers do not access information on the Internet infinitely fast (but rather, in bits per second), individual actors in an economy have only a finite ability to process information. This delay produces some sluggishness and randomness, and allows for more accurate forecasts than conventional models, in which people are assumed to be highly averse to change.

Sims’s recent work has focused on the fiscal theory of the price level, the view that inflation in the end is determined by fiscal problems—the overall amount of debt relative to the government’s ability to repay it—rather than by the split in government debt between base money and bonds. In 1999, Sims suggested that the fiscal foundations of the European Monetary Union were “precarious” and that a fiscal crisis in one country “would likely breed contagion effects in other countries.” The Greek financial crisis about a decade later seemed to confirm his prediction.

Christopher Sims earned his B.A. in mathematics in 1963 and his Ph.D. in economics in 1968, both from Harvard University. He taught at Harvard from 1968 to 1970, at the University of Minnesota from 1970 to 1990, at Yale University from 1990 to 1999, and at Princeton University from 1999 to the present. He has been a Fellow of the Econometric Society since 1974, a member of the American Academy of Arts and Sciences since 1988, a member of the National Academy of Sciences since 1989, President of the Econometric Society (1995), and President of the American Economic Association (2012). He has been a Visiting Scholar for the Federal Reserve Banks of Atlanta, New York, and Philadelphia off and on since 1994.


Selected Works

1972. “Money, Income, and Causality.” American Economic Review 62: 4 (September): 540-552.

1980. “Macroeconomics and Reality.” Econometrica 48: 1 (January): 1-48.

1990 (with James H. Stock and Mark W. Watson). “Inference in Linear Time Series Models with some Unit Roots.” Econometrica 58: 1 (January): 113-144.

1999. “The Precarious Fiscal Foundations of EMU.” De Economist 147: 4 (December): 415-436.

2003. “Implications of Rational Inattention.” Journal of Monetary Economics 50: 3 (April): 665–690.

(0 COMMENTS)

CEE June 28, 2018

Gordon Tullock

 

Gordon Tullock, along with his colleague James M. Buchanan, was a founder of the School of Public Choice. Among his contributions to public choice were his study of bureaucracy, his early insights on rent seeking, his study of political revolutions, his analysis of dictatorships, and his analysis of incentives and outcomes in foreign policy. Tullock also contributed to the study of optimal organization of research, was a strong critic of common law, and did work on evolutionary biology. He was arguably one of the ten or so most influential economists of the last half of the twentieth century. Many economists believe that Tullock deserved to share Buchanan’s 1986 Nobel Prize or even deserved a Nobel Prize on his own.

One of Tullock’s early contributions to public choice was The Calculus of Consent: Logical Foundations of Constitutional Democracy, co-authored with Buchanan in 1962. In that path-breaking book, the authors assume that people seek their own interests in the political system and then consider the results of various rules and political structures. One can think of their book as a political economist’s version of Montesquieu.

One of the most masterful sections of The Calculus of Consent is the chapter in which the authors, using a model formulated by Tullock, consider what good decision rules would be for agreeing to have someone in government make a decision for the collective. An individual realizes that if only one person’s consent is required, and he is not that person, he could have huge costs imposed on him. Requiring more people’s consent in order for government to take action reduces the probability that that individual will be hurt. But as the number of people required to agree rises, the decision costs rise. In the extreme, if unanimity is required, people can game the system and hold out for a disproportionate share of benefits before they give their consent. The authors show that the individual’s preferred rule would be one by which the costs imposed on him plus the decision costs are at a minimum. That preferred rule would vary from person to person. But, they note, it would be highly improbable that the optimal decision rule would be one that requires a simple majority. They write, “On balance, 51 percent of the voting population would not seem to be much preferable to 49 percent.” They suggest further that the optimal rule would depend on the issues at stake. Because, they note, legislative action may “produce severe capital losses or lucrative capital gains” for various groups, the rational person, not knowing his own future position, might well want strong restraints on the exercise of legislative power.
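The logic can be illustrated numerically. The cost curves in the sketch below are invented for illustration only; the point is simply that the sum of external costs and decision costs need not be minimized at a simple majority:

# Illustrative sketch of the Buchanan-Tullock optimal-majority calculation.
# The cost curves below are invented for illustration, not taken from the book.

def external_cost(share):
    # Expected cost imposed on the individual falls as more consent is required.
    return 100.0 * (1.0 - share) ** 2

def decision_cost(share):
    # Bargaining and holdout costs rise sharply as unanimity is approached.
    return 40.0 * share ** 3

shares = [k / 100.0 for k in range(1, 101)]
total = {s: external_cost(s) + decision_cost(s) for s in shares}
best = min(total, key=total.get)
print(f"Cost-minimizing consent share: {best:.0%}")  # about 70% here, not 51%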

Tullock’s part of The Calculus of Consent was a natural outgrowth of an unpublished manuscript written in the 1950s that later became his 1965 book, The Politics of Bureaucracy. Buchanan, reminiscing about that book, summed up Tullock’s approach and the book’s significance:

The substantive contribution in the manuscript was centered on the hypothesis that, regardless of role, the individual bureaucrat responds to the rewards and punishments that he confronts. This straightforward, and now so simple, hypothesis turned the whole post-Weberian quasi-normative approach to bureaucracy on its head. . . . The economic theory of bureaucracy was born.1

Buchanan noted in his reminiscence that Tullock’s “fascinating analysis” was “almost totally buried in an irritating personal narrative account of Tullock’s nine-year experience in the foreign service hierarchy.” Buchanan continued: “Then, as now, Tullock’s work was marked by his apparent inability to separate analytical exposition from personal anecdote.” Translation: Tullock learned from his experiences. As a Foreign Service officer with the U.S. State Department for nine years Tullock learned, up close and “personal,” how dysfunctional bureaucracy can be. In a later reminiscence, Tullock concluded:

A 90 per cent cut-back on our Foreign Service would save money without really damaging our international relations or stature.2

Tullock made many other contributions in considering incentives within the political system. Particularly noteworthy was his work on political revolutions and on dictatorships.

Consider, first, political revolutions. Any one person’s decision to participate in a revolution, Tullock noted, does not much affect the probability that the revolution will succeed. Therefore, each person’s actions do not much affect his expected benefits from revolution. On the other hand, a ruthless head of government can individualize the costs by heavily punishing those who participate in a revolution. So anyone contemplating participating in a revolution will be comparing heavy individual costs with small benefits that are simply his pro rata share of the overall benefits. Therefore, argued Tullock, for people to participate, they must expect some large benefits that are tied to their own participation, such as a job in the new government. That would explain an empirical regularity that Tullock noted—namely that “in most revolutions, the people who overthrow the existing government were high officials in that government before the revolution.”
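In expected-value terms, the argument can be put in a few lines of arithmetic (all numbers below are hypothetical):

# Tullock's participation logic in expected-value form; numbers are invented.
public_benefit_share  = 1_000      # value to me of living under the new regime
delta_win_probability = 0.000001   # how much my joining raises the odds of success
punishment_if_caught  = 500_000    # cost of prison or worse if the revolt fails
prob_caught           = 0.10
private_reward        = 0          # e.g., a ministry post promised to me personally

expected_gain = delta_win_probability * public_benefit_share + private_reward
expected_cost = prob_caught * punishment_if_caught
print(expected_gain - expected_cost)  # hugely negative unless private_reward is large

Only a large, personally contingent payoff, such as a post in the new government, flips the sign, which is Tullock's explanation for who actually leads revolutions.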

This thinking carried over to his work on autocracy. In Autocracy, Tullock pointed out that in most societies at most times, governments were not democratically elected but were autocracies: they were dictatorships or kingdoms. For that reason, he argued, analysts should do more to understand them. Tullock’s book was his attempt to get the discussion started. In a chapter titled “Coups and Their Prevention,” Tullock argued that one of the autocrat’s main challenges is to survive in office. He wrote: “The dictator lives continuously under the Sword of Damocles and equally continuously worries about the thickness of the thread.” Tullock pointed out that a dictator needs his countrymen to believe not that he is good, just, or ordained by God, but that those who try to overthrow him will fail.

Among modern economists, Tullock was the earliest discoverer of the concept of “rent seeking,” although he did not call it that. Before his work, the usual measure of the deadweight loss from monopoly was the part of the loss in consumer surplus that did not increase producer surplus for the monopolist. Consumer surplus is the maximum amount that consumers are willing to pay minus the amount they actually pay; producer surplus, also called “economic rent,” is the amount that producers get minus the minimum amount for which they would be willing to produce. Harberger3 had estimated that for the U.S. economy in the 1950s, that loss was very low, on the order of 0.1 percent of Gross National Product. In “The Welfare Costs of Tariffs, Monopolies, and Theft,” Tullock argued that this method understated the loss from monopoly because it did not take account of the investment of the monopolist—and of others trying to be monopolists—in becoming monopolists. These investments in monopoly are a loss to the economy. Tullock also pointed out that those who seek tariffs invest in getting those tariffs, and so the standard measure of the loss from tariffs understated the loss. His analysis, as the tariff example illustrates, applies more to firms seeking special privileges from government than to private attempts to monopolize via the free market because private attempts often lead, as if by an invisible hand, to increased competition.
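A simple numerical example, with invented figures, shows how much the rent-seeking loss can add to the conventional Harberger measure:

# Deadweight loss of monopoly: Harberger triangle vs. Tullock's rent-seeking loss.
# Linear-demand example with invented numbers.
competitive_price = 10.0
monopoly_price    = 12.0
competitive_qty   = 100.0
monopoly_qty      = 80.0

# Standard Harberger measure: lost surplus on the units no longer traded.
harberger_triangle = 0.5 * (monopoly_price - competitive_price) * (competitive_qty - monopoly_qty)

# Monopoly rent: the transfer that rent seekers may spend resources competing for.
monopoly_rent = (monopoly_price - competitive_price) * monopoly_qty

print(harberger_triangle)                  # 20.0
print(monopoly_rent)                       # 160.0: up to this much may be dissipated
print(harberger_triangle + monopoly_rent)  # 180.0 if the rent is fully dissipated

On Tullock's argument, the social cost of the monopoly or tariff can approach the larger figure, not the small triangle alone.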

One of Tullock’s most important insights in public choice was in a short article in 1975 titled “The Transitional Gains Trap.” He noted that even though rent seeking often leads to big gains for the rent seekers, those gains are capitalized in asset prices, which means that buyers of the assets make a normal return on the asset. So, for example, if the government requires the use of ethanol in gasoline, owners of land on which corn is grown will find that their land is worth more because of the regulatory requirement. (Ethanol in the United States is produced from corn.) They gain when the regulation is first imposed. But when they sell the land, the new owner pays a price equal to the present value of the stream of the net profits from the land. So the new owner doesn’t get a supra-normal rate of return from the land. In other words, the owner at the time that the regulation was imposed got “transitional gains,” but the new owner does not. This means that the new owner will suffer a capital loss if the regulation is removed and will fight hard to keep the regulation in place, arguing, correctly, that he paid for those gains. That makes repealing the regulation more difficult than otherwise. Tullock notes that, therefore, we should try hard to avoid getting into these traps because they are hard to get out of.
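The capitalization step is just a present-value calculation. Here is a minimal sketch with invented numbers:

# Capitalization of a regulatory rent into land prices (invented numbers).
extra_profit_per_year = 10_000.0   # added net income from the ethanol mandate
discount_rate = 0.05

# The land price rises by the present value of the perpetual rent stream.
capitalized_gain = extra_profit_per_year / discount_rate
print(capitalized_gain)            # 200000.0: windfall to the owner at enactment

# A later buyer pays the higher price and earns only a normal return...
normal_return = extra_profit_per_year / capitalized_gain
print(normal_return)               # 0.05

# ...so repeal imposes a capital loss on someone who never got the windfall.
print(-capitalized_gain)           # -200000.0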

Tullock was one of the few public choice economists to apply his tools to foreign policy. In Open Secrets of American Foreign Policy, he takes a hard-headed look at U.S. foreign policy rather than the romantic “the United States is the good guys” view that so many Americans take. For example, he wrote of the U.S. government’s bombing of Serbia under President Bill Clinton:

[T]he bombing campaign was a clear-cut violation of the United Nations Charter and hence, should be regarded as a war crime. It involved the use of military forces without the sanction of the Security Council and without any colorable claim of self-defense. Of course, it was not a first—we [the U.S. government] had done the same thing in Vietnam, Grenada and Panama.

Possibly Tullock’s most underappreciated contributions were in the area of methodology and the economics of research. About a decade after spending six months with philosopher Karl Popper at the Center for Advanced Studies in Palo Alto, Tullock published The Organization of Inquiry. In it, he considered why scientific discovery in both the hard sciences and economics works so well without any central planner, and he argued that centralized funding by government would slow progress. After arguing that applied science is generally more valuable than pure science, Tullock wrote:

Nor is there any real justification for the general tendency to consider pure research as somehow higher and better than applied research. It is certainly more pleasant to engage in research in fields that strike you as interesting than to confine yourself to fields which are likely to be profitable, but there is no reason why the person choosing the more pleasant type of research should be considered more noble.4

In Tullock’s view, a system of prizes for important discoveries would be an efficient way of achieving important breakthroughs. He wrote:

As an extreme example, surely offering a reward of $1 billion for the first successful ICBM would have resulted in both a large saving of money for the government and much faster production of this weapon.5

Tullock was born in Rockford, Illinois and was an undergrad at the University of Chicago from 1940 to 1943. His time there was interrupted when he was drafted into the U.S. Army. During his time at Chicago, though, he completed a one-semester course in economics taught by Henry Simons. After the war, he returned to the University of Chicago Law School, where he completed the J.D. degree in 1947. He was briefly with a law firm in 1947 before going into the Foreign Service, where he worked for nine years. He was an economics professor at the University of South Carolina (1959-1962), the University of Virginia (1962-1967), Rice University (1968-1969), the Virginia Polytechnic Institute and State University (1968-1983), George Mason University (1983-1987), the University of Arizona (1987-1999), and again at George Mason University (1999-2008). In 1966, he started the journal Papers in Non-Market Decision Making, which, in 1969, was renamed Public Choice.


Selected Works

 

1962. The Calculus of Consent. (Co-authored with James M. Buchanan.) Ann Arbor, Michigan: University of Michigan Press.

1965. The Politics of Bureaucracy. Washington, D.C.: Public Affairs Press.

1966. The Organization of Inquiry. Durham, North Carolina: Duke University Press.

1967. “The Welfare Costs of Tariffs, Monopolies, and Theft.” Western Economic Journal, 5:3 (June): 224-232.

1967. Toward a Mathematics of Politics. Ann Arbor, Michigan: University of Michigan Press.

1971. “The Paradox of Revolution.” Public Choice. Vol. 11. Fall: 89-99.

1975. “The Transitional Gains Trap.” Bell Journal of Economics, 6:2 (Autumn): 671-678.

1987. Autocracy. Hingham, Massachusetts: Kluwer Academic Publishers.

2007. Open Secrets of American Foreign Policy. New Jersey: World Scientific Publishing Co.

 


Footnotes

1

James M. Buchanan. 1987. “The Qualities of a Natural Economist.” In Charles K. Rowley (Ed.), Democracy and Public Choice. Oxford and New York: Basil Blackwell, 9-19.

2

Gordon Tullock. 2009. Memories of an Unexciting Life. Unfinished and unpublished manuscript. Tucson, 2009. Quoted in Charles K. Rowley and Daniel Houser. 2011. “The Life and Times of Gordon Tullock.” George Mason University, Department of Economics, Paper No. 11-56. December 20.

3

Arnold C. Harberger. 1954. “Monopoly and Resource Allocation.” American Economic Review 44(2): 77-87.

4

Tullock, The Organization of Inquiry, 1966, p. 14.

5

Tullock, The Organization of Inquiry, 1966, p. 168.

 

(0 COMMENTS)

Here are the 10 latest posts from Econlib.


Econlib June 4, 2020

A Humble State with No Motorcade

In many ways, the modern world, including economic freedom, was born from the fear of tyranny and the institutions (not always successful) to prevent it. In Power and Prosperity: Outgrowing Communist and Capitalist Dictatorships (Basic Books, 2000), famous economist Mancur Olson had interesting historical remarks about Italian city-states in early modern times:

Sometimes, when leading families or merchants organized a government for their city, they not only provided for some power sharing through voting but took pains to reduce the probability that the government’s chief executive could assume autocratic power. For a time in Genoa, for example, the chief administrator of the government had to be an outsider—and thus someone with no membership in any of the powerful families in the city. Moreover, he was constrained to a fixed term of office, forced to leave the city after the end of his term, and forbidden from marrying into any of the local families. In Venice, after a doge who attempted to make himself autocrat was beheaded for his offense, subsequent doges were followed in official processions by a sword-bearing symbolic executioner as a reminder of the punishment intended for any leader who attempted to assume dictatorial power. As the theory predicts, the same city-states also tended to have more elaborate courts, contracts, and property rights than most of the European kingdoms of the time. As is well known, these city-states also created the most advanced economies in Europe, not to mention the culture of the Renaissance.

This quote is from pp. 39-40 of Olson’s book. Part of it is reproduced at Liberty Tree quotes.

Instead of a bully state, we are in urgent need of a humble state where political leaders and bureaucrats know their place. I especially like Venice’s symbolic executioner, who could beneficially replace the motorcade or, at the very least, occupy the last limousine. (For more discussion of related issues, see my Econlog post “Praetorian Guards from Ancient Greece to Palm Beach or the Hamptons,” January 14, 2019.)

(2 COMMENTS)

Econlib June 3, 2020

Transaction Costs are the Costs of Engaging in Economic Calculation

This year marks the 100th anniversary of the publication of Ludwig von Mises’s seminal article, “Economic Calculation in the Socialist Commonwealth,” which fired the first salvo in what later became the socialist calculation debate. Though the contributions of F.A. Hayek to that debate, and to economic science more broadly, have been well recognized, what is somewhat forgotten today is that the fundamental contributions of another economist were also born out of the socialist calculation debate. I am referring to none other than Ronald Coase.

 

As Coase outlines in his Nobel Prize Address, he had been a student of Arnold Plant in the Department of Commerce at the LSE, and it was Plant who introduced him to Adam Smith’s invisible hand and the role that the price system plays in coordinating the allocation of resources to their most valued uses without central direction. Coase’s insights, like Mises’s, were motivated by the Bolsheviks’ attempt to implement central planning in Soviet Russia. As Coase writes, “Lenin had said that the economic system in Russia would be run as one big factory. However, many economists in the West maintained that this was an impossibility,” a claim first put forth by Mises in his 1920 article. “And yet there were factories in the West, and some of them were extremely large. How did one reconcile the views expressed by economists on the role of the pricing system and the impossibility of successful central economic planning with the existence of management and of these apparently planned societies, firms, operating within our own economy?” The answer put forth to this puzzle was what Coase referred to as the “costs of using the price mechanism” (Coase 1992, 715). This concept, which later came to be known as “transaction costs,” was first expounded in his seminal article, “The Nature of the Firm” (1937) and later developed in subsequent articles, “The Federal Communications Commission” (1959) and “The Problem of Social Cost” (1960). But it is interesting to note that Coase also states that “a large part of what we think of as economic activity is designed to accomplish what high transaction costs would otherwise prevent or to reduce transaction costs so that individuals can freely negotiate and we can take advantage of that diffused knowledge of which Hayek has told us” (1992: 716).

My point here is not to trace the historical origins of the parallel insights drawn by Mises and Coase, or by other economists working in the Austrian and transaction-cost traditions for that matter. Rather, what I wish to suggest here is that what Coase (not just Hayek) had been stressing in the lessons he drew from the socialist calculation debate cannot be fully appreciated without placing his contributions in the context of what Mises had claimed regarding the problem of economic calculation. Reframed within this context, I would argue that the concept of transaction costs can also be understood as the costs of engaging in economic calculation. However controversial my claim may seem, this reframing of transaction costs as the costs associated with economic calculation has a precedent that can be found not only in Coase, but also in more recent insights made by economists working in the Austrian tradition (see Baird 2000; Piano and Rouanet 2018).

How do transaction costs relate to the problem of economic calculation? According to Coase, the most “obvious” transaction cost is “that of discovering what the relevant prices are” (1937: 390). The costs of pricing a good (i.e. transaction costs) are based, fundamentally, on the costs of defining and enforcing property rights in order to create the institutional conditions necessary for establishing exchange ratios, hence prices, in the first place. This entails not only the costs of negotiating and drawing up contracts between trading partners, but also the costs of discovering who the relevant trading partners are, as well as the actual attributes, such as quality, of the good or service being exchanged.

Carl Dahlman (1979) argues that all such transaction costs can be subsumed under the umbrella of information costs, but such information is not something that can be obtained through active search alone, as if it were already “out there” and therefore acontextual. Rather, the very nature of such information is not just tacit and dispersed (Hayek 1945), but contextual (see Boettke 1998). The discovery of relevant trading partners, the valuable attributes of a good being exchanged, and the price to which trading partners agree emerges only within a context of exchangeable and enforceable private property rights. This last point is precisely what Ludwig von Mises meant in his claim that economic calculation under socialism is impossible! Outside the context of private property, subjectively held knowledge cannot be communicated as publicly held information without first establishing the terms of exchange in money prices to allocate resources to their most valued uses.

In his Presidential Address to the Society for the Development of Austrian Economics, published in The Review of Austrian Economics as “Alchian and Menger on Money,” Charles Baird (2000) best illustrates the point I’m making here. Carl Menger (1892) and Armen Alchian (1977) told distinct, though complementary, stories as to why money emerges spontaneously, namely to reduce transaction costs. Menger argued that money emerges to avoid the costs associated with the double coincidence of wants between exchange partners. Alchian, on the other hand, emphasized that money emerges to reduce the costs of calculating and pricing the value of the various attributes of a good, such as in comparing the quality of different diamonds. Money prices reduce the costs of pricing the quality of diamonds, thereby providing information, discovered by middlemen, to non-specialists about what kind of diamond they are purchasing (i.e. higher quality or lower quality). As Baird writes, “Menger’s story is incomplete. But so, too, is Alchian’s. On the other hand, both stories are complete on their own terms. Clearly what is needed is someone to put these two stories together” (2000: 119). Thus, reframing transaction costs from an Austrian perspective, money, firms, and other institutional arrangements emerge to reduce the costs associated with economic calculation.
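Menger's half of the story can be put in simple counting terms: with n goods and no money, traders must keep track of one exchange ratio for every pair of goods; with one good serving as money, n-1 money prices suffice. A small illustration follows (the counting argument is standard, not specific to Baird's paper):

# Counting the "prices" a trader must track with and without money.
def barter_ratios(n_goods):
    return n_goods * (n_goods - 1) // 2   # one exchange ratio per pair of goods

def money_prices(n_goods):
    return n_goods - 1                    # every other good priced in the money good

for n in (10, 100, 1000):
    print(n, barter_ratios(n), money_prices(n))
# 10 -> 45 vs 9; 100 -> 4950 vs 99; 1000 -> 499500 vs 999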

In a lecture written to honor F.A. Hayek in 1979, later published posthumously in The Review of Austrian Economics, James Buchanan boldly declared the following: “The diverse approaches of the intersecting schools [of economics] must be the bases for conciliation, not conflict. We must marry the property-rights, law-and-economics, public-choice, Austrian subjectivist approaches” (Buchanan 2015: 260). The link that “marries” these distinct schools, including the Austrian School, is the notion of transaction costs. However, this underlying link cannot be understood without first reframing, I would argue, the concept of transaction costs as the costs of engaging in economic calculation. The “marriage” of these intersecting schools, as Buchanan and others have suggested, highlights distinct aspects of the economic forces at work in the market process, as well as the alternative institutional arrangements that emerge to reduce the cost of transacting and thereby exploit the gains from productive specialization and exchange.

 

 


Rosolino Candela is a Senior Fellow in the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics, and Associate Director of Academic and Student Programs at the Mercatus Center at George Mason University.

 

 

References

Alchian, Armen A. 1977. “Why Money?” Journal of Money, Credit and Banking 9(1): 133–140.

Boettke, Peter J. 1998. “Economic Calculation: The Austrian Contribution to Political Economy.” Advances in Austrian Economics 5: 131–158.

Buchanan, James M. 2015. “NOTES ON HAYEK–Miami, 15 February, 1979.” The Review of Austrian Economics 28(3): 257–260.

Coase, Ronald H. 1937. “The Nature of the Firm.” Economica 4(16): 386–405.

Coase, Ronald H. 1959. “The Federal Communications Commission.” The Journal of Law & Economics 2: 1–40.

Coase, Ronald H. 1960. “The Problem of Social Cost.” The Journal of Law & Economics 3: 1–44.

Dahlman, Carl J. 1979. “The Problem of Externality.” The Journal of Law & Economics 22(1): 141–162.

Hayek, F.A. 1945. “The Use of Knowledge in Society.” American Economic Review 35(4): 519–530.

Menger, Karl. 1892. “On the Origin of Money.” The Economic Journal 2(6): 239–255.

Mises, Ludwig von. [1920] 1975. “Economic Calculation in the Socialist Commonwealth.” In F.A. Hayek (Ed.), Collectivist Economic Planning (pp. 87–130). Clifton, NJ: August M. Kelley.

Piano, Ennio E., and Louis Rouanet. 2018. “Economic Calculation and the Organization of Markets.” The Review of Austrian Economics, https://doi.org/10.1007/s11138-018-0425-4

(6 COMMENTS)

Econlib June 3, 2020

Five (More) Books: Economic, Political and Social Ethnography of Soviet Life

In my previous two posts, I offered recommendations for reading on the Russian Revolution and the Soviet economy. Today, I’d like to turn our attention to everyday life in the Soviet Union.

My most cherished comment on one of my books dealing with the Soviet system was from the then Department Chair of Economics at Moscow State University, who, upon reading my discussion of the contrast between how the system was supposed to work and how it really worked, wrote to me to tell me that my description fit perfectly with the daily life that he and his family had to endure.  I had done my job then.  I think the purpose of economic theory is to aid us in our task of making sense of the political economy of everyday life.  Not theorems and graphs on blackboards and in textbooks, but the lived reality out the window in the social settings we find ourselves exploring as social scientists and scholars.

Perhaps the best window into everyday life is through ethnographies, either loose ones such as those written by journalists, or rigorous ones written by social scientists.  Since this was always my intent, rather than merely writing down the history of the political leaders, I was from the beginning drawn to first-person accounts and the impact of the Soviet experience on the lives of ordinary citizens.  Obviously, given the weight I place on ideology and policy initiatives, I think you also have to pay attention to the official view and the official documents. But truth lies in the interaction between the official policy and the impact felt in the day-to-day life of people experiencing those policies.

 

The first work I would suggest then is Emma Goldman’s My Disillusionment in Russia (originally published in 1923) based on her time in Russia in 1920 and 1921.

Emma Goldman was a Russian anarchist who had emigrated with her family to the United States in 1885. She was arrested multiple times in the US for political activism and for distributing pamphlets advocating for social change. She was deported from the US under the Anarchist Exclusion Act, and she went to Finland and then eventually to Russia.  A revolutionary, Goldman was originally prone to view the Russian Revolution as a positive signal of world-wide revolution, but once inside the system her disillusionment began.  She witnessed first-hand an economic system that could not work, and the origins of political terror which were unspeakable to her.  One of my favorite scenes in her book is the description of a May Day parade which was supposed to exhibit great enthusiasm for the revolution but instead only revealed the thinly veiled pain and suffering of the people in the streets with forced shows of support for the regime.  Read this book and then watch the movie Reds, with a new appreciation for the scene where Emma Goldman confronts John Reed with the reality of the situation in 1921.

 

Sheila Fitzpatrick’s Everyday Stalinism (originally published in 1999) is a wonderful social history exploring the systems of survival that ordinary individuals developed to cope with the scarcity and repression of the Stalin regime.  The informal norms and networks that make life possible in such harsh conditions are unearthed in Fitzpatrick’s account.

 

Warren Nutter’s The Strange World of Ivan Ivanov (originally published in 1969) is an examination, in comparative terms, of the economic life of an ordinary individual and their family in the Soviet system as compared to the US. It is an eye-opening comparison, and one that anyone hoping to understand the impact of communism on the lives of those who had the misfortune of living under that system should read. Any romantic notion of a move from a “kingdom of necessity” to a “kingdom of freedom” will be quickly dispelled by a cursory reading of Goldman, Fitzpatrick, and Nutter.

 

There are two journalistic accounts from the Brezhnev years and from the Gorbachev years that I would suggest are foundational to get a window into the daily life of a Soviet citizen.  The first would be Hedrick Smith’s The Russians (originally published in 1975), which explained the underground market, the queuing system, and the samizdat culture (including jazz music).  The second is David Remnick’s Lenin’s Tomb (originally published in 1993), which details everyday life during the last days of the Soviet empire.  Both of these books give a fantastic bottom-up account of the day-to-day life of ordinary individuals struggling to survive and cope with the difficult conditions and changing circumstances as the Soviet system corrosively ate itself from within economically, politically, and socially.

 

 


Peter J. Boettke is University Professor of Economics & Philosophy, George Mason University, Fairfax, VA 22030.


As an Amazon Associate, Econlib earns from qualifying purchases.

(0 COMMENTS)

Econlib June 2, 2020

Morality is Broken

The recent episode of EconTalk with Martin Gurri was one of the most jarring in my memory, simply because of the timing of its release. The episode was recorded prior to the onset of the COVID-19 pandemic. As Gurri himself asked on Twitter, “Before the pandemic crisis, there was a revolt of the public. What are the odds that it won’t return, with renewed force, after the lockdown?”

And shortly after the episode’s release came the protests over the death of George Floyd in Minneapolis. Gurri describes The Revolt of the Public as “the global conflict between a public that won’t take ‘yes’ for an answer and elites who want to bring back the 20th century” (again, via Twitter).

It seemed Gurri was speaking about both of these incidents as I listened, even though that could not have been the case. So what lessons should we take from Gurri’s episode? He describes a digital earthquake, generating a tsunami of information, leading to increased social and political turbulence. How has this affected our institutions and their credibility? Since for this episode there are more questions than we can possibly ask in the space below, I’d like to encourage you to pose some questions this time. What would you like to discuss? Share your questions in the Comments, and let’s continue the conversation.

For those who would prefer a conversation starter, please use the prompts below. We love to hear from you.

 

 

1- Roberts suggests we fundamentally misunderstood the information revolution; how so? Why did just the profusion, the tsunami-like aspect of it, lead to such a dismantling of authority and credibility and trust?

 

2- There certainly were mass movements before the digital era. How is this one different? (The discussion of nihilism might be helpful here…)

 

3- Roberts asks Gurri if he’s worried about future of democracy. (He is.) What does Gurri mean when he speaks of “flattening the pyramid”? How might such flattening mitigate the “sectarian approach to politics” Gurri is so concerned with?

 

4- Roberts asks how we can make the world more democratic and as least vicious as we can. How would you answer that question? Are there any suggestions from Roberts and/or Gurri you could get behind? Explain.

 

5- What is your reaction to Gurri’s call for a new elite class, equivalent to a scientific class? What effects does Gurri hope this would have? To what extent can a new elite repair our broken political morality?

 

 

(0 COMMENTS)

Econlib June 2, 2020

Herd immunity was never a feasible option

Bryan Caplan has a post on Covid-19 that is full of sensible ideas. But I disagree with one of his claims:

18. Alex Tabarrok is wrong to state, “Social distancing, closing non-essential firms and working from home protect the vulnerable but these same practices protect workers in critical industries. Thus, the debate between protecting the vulnerable and protecting the economy is moot.” Moot?!  True, there is a mild trade-off between protecting the vulnerable and protecting the economy.  But if we didn’t care about the vulnerable at all, the disease would have already run its course and economic life would already have strongly rebounded.  Wouldn’t self-protection have stymied this?  Not if the government hadn’t expanded unemployment coverage and benefits, because most people don’t save enough money to quit their jobs for a couple of months.  With most of the workforce still on the job, fast exponential growth would have given us herd immunity long ago.  The death toll would have been several times higher, but that’s the essence of the trade-off between protecting the vulnerable and protecting the economy.

From my vantage point in Orange County, that just doesn’t seem feasible.  People here are taking quite aggressive steps to avoid getting the disease, and I believe that would be true regardless of which public policies were chosen by authorities.  Removing the lockdown will help the economy a bit, as would ending the enhanced unemployment insurance program.  But the previous (less generous) unemployment compensation program combined with voluntary social distancing is enough to explain the vast bulk of the depression we are in.

In many countries, the number of active cases is falling close to zero.  In those places, it will be possible to get people to return to service industries where human interaction is significant.  Speaking for myself, I’m unlikely to get a haircut, go to the dentist, go to a movie, eat in a crowded restaurant, or do many other such activities until there is a vaccine. (Although if I were single I’d be much more active.) If I were someone inclined to take cruises, I’d also stay away from that industry until there was a vaccine.  I’ll do much less flying, although I’d be willing to fly if highly motivated.  For now, I’ll focus on outdoor restaurants (fortunately quite plentiful in Orange County) and vacations by automobile. Universities are beginning to announce that classes will remain online in the fall.

If you think in terms of “near-zero cases” and “herd immunity” as the two paths to normalcy in the fall of this year, I’d say near-zero cases are much more feasible.  Lots of countries have done the former—as far as I know none have succeeded with the latter approach.  Unfortunately, America has botched this pandemic so badly (partly for reasons described by Bryan) that it will be very difficult to get the active caseload down to a level where consumers feel safe.
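For a rough sense of what the herd-immunity path implies, the standard textbook threshold is that a share of roughly 1 - 1/R0 of the population must be infected (or otherwise immune) before the epidemic stops growing on its own. The sketch below uses an assumed R0 and infection fatality rate purely for illustration; neither number comes from this post:

# Back-of-the-envelope herd-immunity arithmetic; R0 and IFR are assumptions.
population = 330_000_000        # rough U.S. population
r0 = 2.5                        # assumed basic reproduction number
infection_fatality_rate = 0.005 # assumed IFR of 0.5%

herd_immunity_share = 1.0 - 1.0 / r0          # 0.6 with these assumptions
infections_needed = herd_immunity_share * population
deaths_implied = infections_needed * infection_fatality_rate

print(f"Share infected to reach the threshold: {herd_immunity_share:.0%}")
print(f"Infections implied: {infections_needed:,.0f}")
print(f"Deaths implied at the assumed IFR: {deaths_implied:,.0f}")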

Don’t get me wrong, both the lockdown and the change in unemployment compensation create problems for the economy.  But they are not the decisive factor causing the current depression.  If the changes in the unemployment compensation program were made permanent, then at some point this would become the decisive factor causing a high unemployment rate.  But not yet.

BTW, I am not arguing that it wouldn’t be better if people had a more rational view of risks, as Bryan suggested in a more recent post.  This post is discussing the world as it is.

Here’s a selection of countries with 35-76 active cases (right column), followed by a group with less than ten.  Many are tiny countries and some have dubious data, but not all.

. . .

 

(22 COMMENTS)

Econlib June 2, 2020

Something to Learn from the Trump Presidency

The president of the United States tweeted a video of an alleged rioter (who, in all likelihood, is an American citizen, not a “Mexican rapist”) with the threatening comment:

“Anarchists, we see you!”

Is it for the president to identify suspects? So much for the ideal of the rule of law, it seems.

But my point is different and relates to the benefits of personal knowledge. I have always hoped that a journalist would, during a press conference, ask the president something like “Mr. President, what do you mean exactly by ‘socialism’?” Or, “Mr. President, what do you mean by ‘the extreme left’ and how does it differ from the left?”

Since Mr. Trump’s tweet of yesterday and his other recent references to “anarchists” as another type of scapegoat, my dream has changed. I would now propose questions like the following:

Mr. President, what is an anarchist? What does an anarchist believe?

Mr. President, do you think that Henry David Thoreau, Lysander Spooner, and Murray Rothbard were anarchists?

What about David Friedman?

Do you think that Anthony de Jasay was a conservative anarchist?

Of course, looters have to be stopped and arrested but different sorts of anarchists exist, just as there are different sorts of defenders of the state. Another idea for a question along those lines:

Mr. President, don’t you think that the so-called “anarchist” rioters and looters actually want more state power, just like the “extreme left” you attack?

The following question may be problematic for both Mr. Trump and the libertarians involved:

Mr. President, what do you think of the anarcho-capitalists who, during your 2016 election campaign, created a group called “Libertarians for Trump”?

More seriously, I suggest the Trump presidency has taught something important to those of us who define ourselves as libertarians or fellow travelers: knowledge is important, both in the sense of a minimal culture about what has been happening in the world until yesterday and in the sense of an intellectual capacity to learn. An ignorant disrupter is not sufficient to advance liberty; he is more likely to advance tyranny. If he appears to defend one libertarian cause—say, the Second Amendment—he will more probably bring it into disrepute.

(24 COMMENTS)

Econlib June 2, 2020

What I’m Doing

  1. The U.S. political system is deeply dysfunctional, especially during this crisis.  Power-hunger reigns in the name of Social Desirability Bias.  Fear of punishment aside, I don’t care what authorities say.  They should heed my words, not the other way around.

  2. Few private individuals are using quantitative risk analysis to guide their personal behavior.  Fear of personally antagonizing such people aside, I don’t care what they say either.

  3. I am extremely interested in listening to the rare individuals who do use quantitative risk analysis to guide personal behavior.  Keep up the good work, life-coach quants – with a special shout-out to Rob Wiblin.

  4. After listening, though, I shall keep my own counsel.  As long as I maintain my normal intellectual hygiene, my betting record shows that my own counsel is highly reliable.

  5. What does my own counsel say?  While I wish better information were available, I now know enough to justify my return to 90%-normal life.  The rest of my immediate family agrees.  What does this entail?  Above all, I am now happy to socialize in-person with friends.  I am happy to let my children play with other kids.  I am also willing to not only eat take-out food, but dine in restaurants.  I am pleased to accommodate nervous friends by socializing outdoors and otherwise putting them at ease.  Yet personally, I am at ease either way.

  6. I will still take precautions comparable to wearing a seat belt.  I will wear a mask and gloves to shop in high-traffic places, such as grocery stores.  I will continue to keep my distance from nervous and/or high-risk strangers.  Capla-Con 2020 will be delayed until winter at the earliest.  Alas.

  7. Tyler suggests that people like me “are worse at intertemporal substitution than I had thought.”  In particular:

It either will continue at that pace or it won’t.  Let’s say that pace continues (unlikely in my view, but this is simply a scenario, at least until the second wave).  That is an ongoing risk higher than other causes of death, unless you are young.  You don’t have to be 77 for it to be your major risk worry.

Death from coronavirus is plausibly my single-highest risk worry.  But it is still only a tiny share of my total risk, and the cost of strict risk reduction is high for me.  Avoiding everyone except my immediate family makes my every day much worse.  And intertemporal substitution is barely helpful.  Doubling my level of socializing in 2022 to compensate for severe isolation in 2020 won’t make me feel better.

Alternatively, let’s say the pace of those deaths will fall soon, and furthermore let’s say it will fall by a lot.  The near future will be a lot safer!  Which is all the more reason to play it very safe right now, because your per week risk currently is fairly high (in many not all parts of America).  Stay at home and wear a mask when you do go out.  If need be, make up for that behavior in the near future by indulging in excess.

Suppose Tyler found out that an accident-free car were coming in 2022.  Would he “intertemporally substitute” by ceasing driving until then?  I doubt it.  In any case, what I really expect is at least six more months of moderately elevated disease risk.  My risk is far from awful now – my best guess is that I’m choosing a 1-in-12,000 marginal increase in the risk of death from coronavirus.  But this risk won’t fall below 1-in-50,000 during the next six months, and moderate second waves are likely.  Bottom line: The risk is mild enough for me to comfortably face, and too durable for me to comfortably avoid.
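
For readers who want to see the arithmetic behind this kind of back-of-the-envelope comparison, here is a minimal sketch in Python. The 1-in-12,000 and 1-in-50,000 figures come from the paragraph above; the annual traffic-fatality risk is an assumed round number included only for scale, and the helper function is invented for this illustration.

```python
# Minimal sketch of the back-of-the-envelope risk comparison above.
# The 1-in-12,000 and 1-in-50,000 figures are from the post; the
# annual traffic-fatality risk (~1 in 8,000) is an assumed round
# number included only for scale, not a sourced estimate.

def to_micromorts(probability_of_death: float) -> float:
    """Express a probability of death in micromorts (1 micromort = a 1-in-1,000,000 chance)."""
    return probability_of_death * 1_000_000

risks = {
    "Marginal COVID risk accepted now": 1 / 12_000,
    "Expected COVID risk floor, next 6 months": 1 / 50_000,
    "Assumed annual traffic-fatality risk": 1 / 8_000,
}

for label, p in risks.items():
    print(f"{label}: about {to_micromorts(p):.0f} micromorts")
```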

  8. The risk analysis is radically different for people with underlying health conditions.  Many of them are my friends.  To such friends: I fully support your decision to avoid me, but I am happy to flexibly accommodate you if you too detest the isolation.  I also urge you to take advantage of any opportunities you have to reduce your personal risk.  But it’s not my place to nag you to your face.

  9. What about high-risk strangers?  I’m happy to take reasonable measures to reduce their risk.  If you’re wearing a mask, I treat that as a request for extra distance, and I honor it.  But I’m not going to isolate myself out of fear of infecting high-risk people who won’t isolate themselves.

  10. Most smart people aren’t doing what I’m doing.  Shouldn’t I be worried?  Only slightly.  Even smart people are prone to herding and hysteria.  I’ve now spent three months listening to smart defenders of the conventional view.  Their herding and hysteria are hard to miss.  Granted, non-smart contrarians sound even worse.  But smart contrarians make the most sense of all.

  11. Even if I’m right, wouldn’t it be more prudent for me to act on my beliefs without publicizing them?  That’s probably what Dale Carnegie would advise, but if Dale were here, I’d tell him, “Candor on touchy topics is my calling and my business.  It’s worked well for me so far, and I shall stay the course.”

  12. I’ve long believed a strong version of (a) buy-and-hold is the best investment strategy, and (b) financial market performance is only vaguely related to objective economic conditions.  Conditions in March were so bleak that I set aside both of these beliefs and moved from 100% stocks to 90% bonds.  As a result of my excessive open-mindedness, my family has lost an enormous amount of money.  The situation is so weird that I’m going to wait until January to return to my normal investment strategy.  After that, I will never again deviate from buy-and-hold.  Never!

(20 COMMENTS)

Econlib June 1, 2020

A libertarian is a conservative who has been oppressed

When I was young, there was an old saying that a conservative is a liberal who has been mugged. I suspect that the debates between liberals and conservatives are especially fierce precisely because they are generally based on genetics and random life experiences, not rational thought.

Along these lines, a Politico article by Rich Lowry caught my eye:

The intellectual fashion among populists and religious traditionalists has been to attempt to forge a post-liberty or “post-liberal” agenda to forge a deeper foundation for the new Republican Party. Instead of obsessing over freedom and rights, conservatives would look to government to protect the common good.

This project, though, has been rocked by its first real-life encounter with governments acting to protect, as they see it, the common good.

One of its architects, the editor of the religious journal First Things, R.R. Reno, has sounded like one of the libertarians he so scorns during the crisis. First, he complained he might get shamed if he were to host a dinner party during the height of the pandemic, although delaying a party would seem a small price to pay for someone so intensely committed to the common good.

More recently, he went on a tirade against wearing masks. Reno is apparently fine with a much stronger government, as long as it never issues public-health guidance not to his liking. Then, it’s to the barricades for liberty, damn it.

Ouch!  Lowry and Reno are both conservatives, but I’m guessing they are not the best of friends.

PS:  Tom Wolfe’s version is pretty close to the sentiments in this post:

If a conservative is a liberal who’s been mugged, a liberal is a conservative who’s been arrested.

(23 COMMENTS)

Econlib June 1, 2020

A “Hodgepodge” of State-Based COVID-19 Rules May be Just What the Doctor Ordered

At the peak of the COVID-19 pandemic, only seven states—Arkansas, Iowa, North and South Dakota, Nebraska, Utah, and Wyoming—had resisted statewide stay-at-home or lockdown regulations to contain the virus (a new strain of coronavirus). Although those states hold less than 10 percent of the country’s population, they continue to be pressed by a chorus of federal and state medical experts, media, and policy makers for their refusal to impose lockdowns.

As I write this in early May, most states have begun loosening, in varying ways and to varying degrees, their COVID-19 containment controls. Only three states—California, New York, and Washington—seem likely to keep their lockdown orders in place into June. Critics, again, took to their media bullhorns and to the streets in protests, with posters condemning the loosening of controls for putting the economy above human life. The critics argue that the loosening of controls will cause another surge in virus infections and deaths.

These critics argue that the states without full lockdown policies, and the far larger number of states allowing businesses to reopen, can be incubators for the virus, which can spread nationwide through porous borders and cause a second surge in virus infections and deaths. Consequently, the states loosening their containment policies are endangering Americans in the lockdown states who are dutifully sacrificing their jobs and opportunities to socialize with friends. State-based policymaking has created a “hodgepodge” of widely varying state and local containment (and loosening) rules[1]—for example, on where and when to wear face masks, the size of allowable social gatherings, the definition of “essential businesses,” and when and where businesses can reopen—all with different levels of enforcement, which complicates doing business nationally.

The critics have strongly insisted that national rules that are uniform across all counties are necessary to suppress the deadly virus that is overtaxing the country’s healthcare system. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases and a prominent member of President Trump’s COVID-19 task force, confesses that he simply “can’t understand” why all states are not on the “same [regulatory] page.”

There is a straightforward reason to resist the pressure to nationalize the “coronavirus war”: states and local communities differ, most prominently in population density. But there is a much stronger reason to defer to state and local governments: when so much is unknown about a highly contagious disease, the country needs opportunities to try a variety of policy remedies, just as it needs to test medical remedies (vaccines and antibody therapies) for nonvictims, especially those in frontline healthcare jobs.

As intended by the American Founders, devolution of major swaths of policy making—including devising containment policies—to states allows the needed, and totally unavoidable, policy experimentation, based on varying local health and economic conditions best understood by state policymakers. Future research is then possible to discover which state policies are most effective in achieving the intertwined goals of minimizing infections and deaths and maximizing economic activity.

The Case Against National Containment Policies

Medical experts in high offices in the nation’s capital may know more about COVID-19 and its effects than anyone else, but there is a whole lot they do not yet know and cannot know. Not the least of these unknowns are the widely varying local circumstances across the nation’s 3,007 counties and what combinations of rules and regulations will have the best public health and economic outcomes for communities far from national political arenas. Experts at the national level also can’t know the tradeoffs between safety and economic losses that different groups of Americans may be willing to accept. And the reality of containment is stark: progressively greater safety can come at escalating economic costs, measured in lost production, jobs, and income. Trade-offs are unavoidable.

Experts’ Limited Knowledge

One example of a knowledge problem is the experts’ insistence a few weeks ago that only infected people need to wear face masks. Now, everyone is urged at least to wear makeshift facial “coverings”—including bandanas—when going out, even if only to take a walk on a deserted beach, since asymptomatic victims may transmit the disease. Obviously, the pandemic response is an ongoing project based on experience, with varying policy experimentation included. New York officials, for instance, have concluded, based on the state’s escalating deaths, that they probably should have instituted stringent containment policies when California did, if not before.

The experts have insisted that all people maintain a “social distance” of at least six feet, as if that specific distance emerged strictly from science. But in democracies, public policies are necessarily influenced not only by the available science but also by the public’s willingness to accept them. The six-foot social-distance rule was devised by COVID-19 experts who concluded that, according to COVID-19 task force member Deborah Birx, “it is clear that exposure occurs” when people are within six feet of an infected person for fifteen minutes. The distance is based on findings that the virus can be spread indoors up to six feet through water droplets expelled by coughs, sneezes, and even exhaled breaths.

But under different conditions, virus-laden droplets may not all behave in the same way. For instance, based on a small-sample Chinese study in a Wuhan hospital ward,[2] the Centers for Disease Control and Prevention (CDC) concluded that the droplets may travel through the air indoors as far as 13 feet, and even farther on the shoes of healthcare workers. Outdoors, some droplets may be shot as far as 10 yards or more and may linger for several minutes, depending on atmospheric conditions and on the force behind the coughs and sneezes. Thirty Finnish researchers simulated the potential spread of a “cloud” of virus-laden water droplets from a single cough or sneeze in a dramatic YouTube video hosted by the University of Helsinki.[3] As laboratory simulations of coughs have shown, even N95 face masks are far from complete barriers to the spread of virus-laden droplets. Homemade face masks, made of thin bandana and t-shirt materials, can be porous, if not altogether ineffective.

The six-foot rule likely emerged as a compromise that experts and policy makers believed struck an appropriate balance between a social distance the public would likely follow and the number of infections and deaths people would accept. (German policymakers, on the other hand, struck a different compromise based on an assessment of different studies and set their social-distance rule at 1.5 meters, or about five feet, perhaps because the populace would accept more infections to avoid tighter social distancing.)

Obviously, the science behind the social-distancing rule depends on the particular studies available, the interpretation of the findings, and unavoidably, the subjective judgements of the tradeoffs involved. Perhaps, to match scientific findings with transparency, the six-foot rule could be presented as “you should stay at least six feet from all other people to lower the probability of infection to what we find to be acceptable levels, given the additional deaths that will result from not choosing a greater social distance.” I grant that the message is complicated, but that only validates a point easily overlooked in media reports: The experts say they are simply following the science, whereas in fact, policymaking is not so simple. Policies are beset with nonscientific considerations, for example, the pithiness of the policy measures.

Policymaking Under Uncertainty

As experts have said from the start, the virus behind COVID-19 is a new strain, unknown to epidemiology, which means humans’ immune systems are vulnerable because they have not yet developed antibodies and FDA-approved vaccines are unavailable. Moreover, the pandemic has been, and remains, enshrouded in uncertainties about the contagiousness and virulence of the virus, although it should be obvious to everyone by now that COVID-19 is deadly. At the same time, more than 95 percent of victims recover with no lasting health effects, and maybe a quarter (or even half, depending on the study) of infected people are asymptomatic but can still transmit the virus.

The effect of various containment rules—whether social distancing or stringent lockdowns—on the economy remains largely a guesstimate; however, many economists—including those at the St. Louis Fed[4] and Moody’s—speculate that the economic destruction from the array of state lockdowns could exceed that of the Great Depression in peak unemployment rate (25 percent in the Great Depression versus 32 percent projected for May 2020) and lost GDP (26 percent in the Depression versus 29 percent between the start of March and April 2020, with a far more dramatic drop possible by year’s end).

In these uncertain times, we need experimentation not only in labs to find vaccines and disease treatments, but also in political venues to devise effective policies. If federal mandates cut off all opportunities to try various containment policies on the part of state and local governments, a lot of potentially valuable information will be forever lost. Virus experts and the public badly need more empirical evidence regarding the relative public health and economic consequences of different policies, if for no other reason than that there is a very good chance that the virus may retreat in the summer only to re-emerge more virulent than ever in the fall or next year.

Containment Policy “Hodgepodge” as a Research Goldmine

With states calling their own policy shots, researchers will soon be able to assess how these various policies affect COVID-19 infections and deaths and the state economies. Such research findings could then inform future policymakers’ discussions of the inherent, unavoidable tradeoffs between reduced public health and economic damage.
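
As a rough illustration of what such cross-state comparisons could look like in practice, here is a minimal sketch of a state-level regression in Python. The variable names and the randomly generated data are invented purely for illustration; actual research would use real state-level outcomes and a far richer set of controls than the simple correlation noted in footnote 5.

```python
# A minimal sketch (not the author's method) of how researchers might relate
# state outcomes to containment policy while controlling for other variables
# that a simple two-way correlation ignores. The dataset is randomly generated
# for illustration only; real work would use actual state-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_states = 50
df = pd.DataFrame({
    "days_to_lockdown": rng.integers(0, 40, n_states),   # toy: days from first case to lockdown
    "pop_density": rng.uniform(10, 1200, n_states),       # toy: people per square mile
    "median_age": rng.uniform(35, 45, n_states),           # toy: years
})
# Toy outcome with arbitrary coefficients and noise, just to make the code run.
df["deaths_per_million"] = (
    5 * df["days_to_lockdown"] + 0.3 * df["pop_density"]
    + 10 * df["median_age"] + rng.normal(0, 50, n_states)
)

model = smf.ols(
    "deaths_per_million ~ days_to_lockdown + pop_density + median_age", data=df
).fit()
print(model.summary())
```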

Research Avenues

Researchers will have a field day with such data, knowing that their findings will not only help shape future pandemic policies, but also boost substantially their professional reputations and incomes. Here are just a few of the benefits:

  • Researchers will be able to estimate the lives saved and the economic damage averted by different levels of social-distancing policies and lockdowns, as well as different levels of enforcement. They will be able to assess the extent to which, say, California averted (or aggravated) virus deaths and increased (or decreased) economic damage with its early statewide lockdown. Surprise findings may arise, too, if studies show that Texas and Georgia, by delaying their lockdown policies, were able to devise more effective measures than California. Researchers will be able to assess whether and to what extent North Dakota’s and Arkansas’ refusals to institute lockdowns cost lives and lowered their states’ unemployment and real income losses compared to lockdown states.[5] By being among the first to reopen their economies, Texas and Georgia will allow researchers to assess the value (or lack of value) of their early, limited, and gradual decontrol policies, especially since California and New York seem intent on lagging all other states in reopening their economies.
  • Might researchers not learn that the health/economic cost tradeoffs of lockdowns are more favorable in urban areas (Manhattan) than in rural communities (Fargo)?
  • Might they also learn how states with partial lockdowns or lax enforcement fared relative to those with strict enforcement (such as required, and not just recommended, use of face masks in public)?
  • With the retreat of the pandemic (or after the infection and death-rate curves begin to “flatten”), researchers might also estimate how various shortages of personal protective equipment (from N95 face masks to ventilators) in different states affected death rates among victims, their family members, and frontline caregivers, and worked to extend or deepen the economic downturn.

The research questions from the pandemic are virtually endless, especially when tangential topics are considered. For instance, consider questions surrounding state minimum-wage, price-gouging, and environmental policies:

  • Economists have long studied the employment and income effects of minimum-wage policies, but almost always under “normal” economic circumstances. State minimum wages have varied widely over the last decades, but now researchers may be able to assess how states with relatively high minimums (California, Washington, and New York) fared in job and income losses during the lockdowns, as compared with states that have held to the $7.25-an-hour federal minimum since 2009 (Texas and North Carolina).
  • Similarly, economists have long argued against price-gouging laws (such as a California law that makes it illegal for sellers to raise prices “too much”), most frequently on the grounds that such laws invariably lead to “panic buying” and empty store shelves. From the beginning of the pandemic, supplies of important personal protective equipment, most notably face masks and hand sanitizer, have been short, and shortages have extended to toilet paper and even rice, dried beans, and flour. Such shortages can be partially, if not totally, alleviated through price increases, which discourage panic buying and hoarding and induce suppliers to increase production.
  • An overlooked potential consequence of price-control laws is that they can be deadly during pandemics. By preventing substantial price increases in the face of a pandemic-caused spike in demand, such laws encourage hoarding of critically needed protections such as face masks and sanitation supplies and temper their production. Thus, fewer protections are available, resulting in more infections and, very likely, more deaths. Future research on the effects of various states’ differing price controls will allow for assessments of the extent to which the controls added to the severity of the pandemic.
  • Environmental researchers could be the big winners from the pandemic, which has caused a drastic reduction in greenhouse gas emissions (with easily observable effects in Wuhan and Los Angeles).

The research on cross-state outcomes can be fortified with research on the effects of cross-country containment policies. Several countries, most notably Sweden, have not to date instituted lockdowns, and those that have locked down have done so at different times and at different levels of stringency and enforcement. Countries also are relaxing their rules at different times.

Containment Policies Versus Herd Immunity Spreading

Faced with a new virus, the body’s immune system develops antibodies to fight the disease, and these newly created antibodies may remain an active defense against any new invasion of COVID-19. As COVID-19 infections spread and people recover or die, the susceptible population will shrink and the spread of the disease can slow. Through so-called “herd immunity,” diseases can be self-correcting, at least partially. And at this writing, preliminary evidence has emerged suggesting that a flattening of the infection and death-rate curves is underway, at least in parts of the United States, as well as in China, Italy, and other European countries, including Sweden. Already, Sweden’s experience with its strategy of deliberately allowing herd immunity to contain the virus’s spread, at least partially, has begun to gain respect and adherents (as well as critics) among experts and pundits.
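
To make the mechanism concrete, the standard textbook illustration is the herd-immunity threshold, 1 - 1/R0, where R0 is the basic reproduction number of a virus: once a large enough share of the population is immune, each case infects, on average, fewer than one other person. The sketch below simply evaluates that formula; the R0 values are illustrative assumptions, not estimates from this article.

```python
# Illustration only: the classic herd-immunity threshold (1 - 1/R0) from
# basic epidemiology. The R0 values below are assumed for the sake of the
# example, not estimates of COVID-19's actual reproduction number.

def herd_immunity_threshold(r0: float) -> float:
    """Share of the population that must be immune for each case to infect, on average, fewer than one other person."""
    return 1.0 - 1.0 / r0

for r0 in (1.5, 2.0, 2.5, 3.0):
    print(f"R0 = {r0}: roughly {herd_immunity_threshold(r0):.0%} of the population immune")
```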

Medical and policy experts have been quick to attribute the flattening to effective policies. In an NBC News interview, Mr. Fauci lowered the forecasted total COVID-19 deaths in the United States from a range of 100,000 to 240,000 to 60,000 (a drop of as much as 75 percent).[6] He concluded, “The real data are telling us it is highly likely we are having a definite positive effect by the mitigation things that we’re doing, this physical separation.” New York Governor Andrew Cuomo also pointed to the possible leveling of the state’s hospitalizations as a sign that mitigation policies were working.

Not so fast. The open question is not whether containment policies have had a positive effect, but how much of the flattening of the infection rate and any future downturn can be attributed to the policies. Other things must be considered. For example, the drop in forecasted deaths might be partially attributable to earlier forecasts being founded on models based on worst-case-scenario assumptions and limited data, designed to trigger broad public support for containment policies. Neither Fauci nor Cuomo has mentioned (to date) that their “good news” on the pandemic peak could also be at least partially attributable to herd immunity. Such immunity could be substantial if the virus is indeed highly contagious, spread by asymptomatic victims, and thus could have begun to spread as far back as November 2019. Focused research on herd immunity will surely inform policymakers’ future decisions on the breadth and stringency of containment and enforcement policies, as well as the tie between the percentage of the population with immunity and the suppression of the spread of infections.

Concluding Comments

Critics of the “hodgepodge” of state and local COVID-19 containment policies seem to remain confident that a single set of strict federal containment rules—especially if passed earlier this year—would have slowed the spread of the virus and reduced infections and deaths, primarily because all Americans would have been affected, not just about 90 percent. But that may be wishful thinking, because the President and Congress had the power to impose a federal mandate but chose not to, or perhaps believed the required legislation would not pass.

When President Trump claimed “total” authority over states’ reopening policies, many argued persuasively that such a claim was unconstitutional. But if the President does not have the power to reopen state economies, then how does he have the constitutional authority to lock down the economy in the first place? Democracy often works slowly at all levels, but especially at the federal level, because of various competing and conflicting political interests, as well as differences in how people assess threats, especially new and “historic” ones. Virus experts give the appearance that they follow “science” and use underlying “data” for guidance, a position that the California and New York governors have endorsed. The problem is that science does not always offer clear, indisputable policy directives.

Some may presume that a federal policy, if ever adopted, would be stricter than the collective impact of the “hodgepodge” of state regulations. Not necessarily so. Politics would still be at play in Washington, D.C., and private-sector and state lobbyists would work hard, and with greater focus, to ensure that federal policies minimized damage to their interests. With a localized approach to containment policy, lobbyists have had to divide their time among all 50 states and numerous legislators, not just the 535 members of Congress accessible in a few large buildings on Capitol Hill.

The restaurant and retail interests would likely have pressed to be identified as “essential” for the country and, thereby, not subject to closure. Under a single federal policy, some states—say, California and Washington—might have been forced to accept less stringent containment policies than they preferred, just to achieve the required majorities in the House and Senate. Members of Congress from states with delayed, lax, or no containment policies would likely have worked just as diligently to delay a federal mandate and to pass less stringent regulations than states such as California and Washington might have preferred.

For more on these topics, see “Liberty in the Wake of Coronavirus,” by Aris Trantidis, Library of Economics and Liberty, May 4, 2020. See also the EconTalk podcast episodes Paul Romer on the COVID-19 Pandemic and Tyler Cowen on the COVID-19 Pandemic.

Critics who leap to object, “Lives should be saved at all economic costs,” must keep in mind that massive unemployment and income losses also can kill. For example, Oxford University researchers have estimated that during the Great Recession, European and North American suicides went up by 10,000. This time the increase could be far greater, because the economic damage could be far greater. With families in lockdown and with growing financial stresses, news reports have surfaced that domestic-violence calls to 18 police department hotlines surged by as much as 35 percent during March alone; these calls may have been fueled by the substantial surge in March alcohol sales (and moderated by a surge in prescriptions for depression and insomnia medications).

Policy makers need to know more about the impact of lockdowns on lives saved from virus containment and lives lost because of the economic damage done. The balance could catch some policy makers and experts by surprise.

My point is simple: with state-based policymaking, opportunities can emerge to test policies. But with an all-inclusive federal mandate, we could lose a great deal of scientific guidance on how better to deal with the COVID-19 pandemic and all future pandemics. Lives and the economy could hang on future research findings.


Footnotes

[1] See here for more: 2020 State and Local Government Responses to COVID-19. Stateside.com, May 28, 2020.

[2] See this CDC dispatch for more: https://wwwnc.cdc.gov/eid/article/26/7/20-0885_article

[3] Available online at: https://www.youtube.com/watch?v=WZSKoNGTR6Q

[4] See here: https://www.stlouisfed.org/on-the-economy/2020/march/back-envelope-estimates-next-quarters-unemployment-rate

[5] To date, one analyst has computed a simple correlation between states’ deaths per million population and the count of days to lockdown, concluding in the Wall Street Journal that there was “no correlation.” However, the issue of whether lockdowns saved lives needs further study, given that many variables not considered in this study could explain the spread of death rates across states.

[6] Available online at: https://www.bloomberg.com/news/articles/2020-04-09/fauci-says-u-s-virus-deaths-may-be-60-000-halving-projections


*Richard McKenzie is a professor (emeritus) in the Merage School of Business at the University of California, Irvine and author of A Brain-Focused Foundation for Economic Science (Boston: Palgrave Macmillan, 2018).

For more articles by Richard McKenzie, see the Archive.


(0 COMMENTS)
