Econlib

The Library of Economics and Liberty is dedicated to advancing the study of economics, markets, and liberty. Econlib offers a unique combination of resources for students, teachers, researchers, and aficionados of economic thought.

Econlib publishes three to four new economics articles and columns each month. The latest articles and columns are made available to the public on the first Monday of each month.

All Econlib articles and columns are written exclusively for the Library of Economics and Liberty, on various economics topics, by renowned professors, researchers, and journalists worldwide. All articles and columns remain online free of charge for public readership. Many articles and columns are discussed in comments and debate on our blog, EconLog.

EconLog

The Library of Economics and Liberty features the popular daily blog EconLog. Bloggers Bryan Caplan, David Henderson, Alberto Mingardi, Scott Sumner, Pierre Lemieux and guest bloggers write on topical economics of interest to them, illuminating subjects from politics and finance, to recent films and cultural observations, to history and literature.

EconTalk

The Library of Economics and Liberty carries the podcast EconTalk, hosted by Russ Roberts. The weekly talk show features one-on-one discussions with an eclectic mix of authors, professors, Nobel laureates, entrepreneurs, leaders of charities and businesses, and people on the street. The emphasis is on using topical books and the news to illustrate economic principles. Exploring how economics emerges in practice is a primary theme.

CEE

The Concise Encyclopedia of Economics features authoritative editions of classics in economics, and related works in history, political theory, and philosophy, complete with definitions and explanations of economics terms and ideas.

Visit the Library of Economics and Liberty

Recent Posts

Here are the 10 latest posts from EconLog.

EconLog June 23, 2019

Alan Reynolds on the Disappearing Middle Class, by David Henderson

At this link is a 22-minute talk that economist Alan Reynolds gave at Harvard in 1987. The other participants were John Kenneth Galbraith, Barry Bluestone, Lester Thurow, and Frank Levy.

There is so much in this one short talk that it is impossible to summarize without losing a lot of content. It’s also funny, and you can see that many people in his audience seemed to enjoy it too.

The topic is “The Disappearing Middle Class.”

Notice at about the 5:30 point, Lester Thurow seems to enjoy one of Alan’s zingers.

There’s a great pithy line from the 19:43 point to 19:55.

EconLog June 23, 2019

A Chicken Game Between Two Governments, by Pierre Lemieux

If President Trump called off an Iranian attack in order to save innocent civilians, he deserves praise. Render to God what is God’s, and to Caesar what is Caesar’s. But, as a Wall Street Journal editorial emphasized, there is something off in this story (“Iran Calls Trump’s Bluff,” June 21, 2019):

It’s important to understand how extraordinary this is. The Commander in Chief ordered ships and planes into battle but recalled them because he hadn’t asked in advance what the damage and casualties might be? While the planes were in the air, he asked, oh, by the way? This is hard to take at face value. …

More likely, he changed his mind because he had second thoughts about the military and political consequences of engaging in a conflict he promised as a candidate to avoid.

Strategic threats have long been modeled with the help of game theory. A classic is economist Thomas Schelling’s 1960 book, The Strategy of Conflict (Harvard University Press). In game-theoretic terms, if you play a chicken game without having credibly committed to staying in the middle of the road, expect to lose one way or another: the other player is likely not to swerve; and you will have to swerve yourself, or else a crash will occur.

EconLog June 22, 2019

Macro theory for all times and all places, by Scott Sumner

In 1936, John Maynard Keynes came up with a macro model that was a product of its time. That’s the wrong way to do macro. Models should be based on the empirical facts of all countries and all time periods.

If your macro model cannot explain why the US experienced a major deflation (with NGDP falling nearly 30%) from mid-1920 to mid-1921, the model is useless. If it cannot explain why industrial production suddenly fell by 30% after mid-1920, the model is useless. If your model cannot explain why the 1921 depression was followed by a very quick recovery, whereas the 1929-33 depression was followed by a long 8-year recovery, the model is useless. You need a truly “General Theory”, applicable to all times and all places.

Unfortunately, most macro models cannot explain these important facts, and many others.

My previous post on the US and Hong Kong Phillips Curves got some positive feedback:

Brian Donohue, Jun 21 2019 at 4:45pm: Very clever.

ChrisA, Jun 22 2019 at 3:49am: Agreed. With empirical data like this, who can deny the monetary framework is the right way to do macro-economics. The theory is sound but that doesn’t always transfer into empirics.

When I wrote the post I overlooked this point.  It seems obvious to me that macro models must be monetary models.  But lots of people disagree. So Chris is right: it does constitute a powerful argument for the monetary approach to macro, which many people still do not accept.  What other explanation is there for the difference between the US and Hong Kong Phillips curves?  Surely there is no alternative explanation as simple and elegant.

Think of the following two-part model:

NGDP = M*V(i, bank failures), where velocity is positively related to nominal interest rates and negatively related to the risk of bank failures.  (Since 2008, you need to add IOR.)

And unexpected NGDP shocks affect real output.

This is a simple but incredibly powerful framework for analyzing a wide range of macro issues.  Of course it’s just a framework; we still need to model all of the factors that impact aggregate supply.  But this framework can easily explain the stylized facts that I discussed above.  Thus the big decline in NGDP during 1920-21 was mostly caused by a huge decline in the monetary base.  Note that the monetary base does not even appear in most New Keynesian models.  What is their explanation for 1920-21?  The quick recovery was due to the fact that nominal hourly wages were more flexible in those days.  The contraction of 1929-33 was caused by a fall in the monetary base during 1929-30, and a fall in velocity during 1930-33, caused by lower interest rates and bank failures.  The slow recovery from the Great Depression was caused by the NIRA, the Wagner Act, and minimum wage laws, which sharply contracted the supply of labor, and also the contractionary monetary policy of 1937-38.
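
To make the structure concrete, here is a minimal sketch of the two-part framework in Python. The functional form of V and the output-gap coefficient are hypothetical stand-ins chosen for illustration, not Sumner’s; the point is only the mechanics: money and velocity determine NGDP, and NGDP surprises move real output.

```python
# Toy sketch of the two-part framework described above.
# The functional form of V and all parameter values are hypothetical.

def velocity(nominal_rate, bank_failure_risk, ior=0.0):
    """V rises with nominal rates and falls with bank-failure risk;
    interest on reserves (IOR) also lowers it (relevant since 2008)."""
    return 2.0 + 10.0 * (nominal_rate - ior) - 5.0 * bank_failure_risk

def ngdp(base, nominal_rate, bank_failure_risk, ior=0.0):
    """NGDP = M * V(i, bank failures)."""
    return base * velocity(nominal_rate, bank_failure_risk, ior)

def output_gap(actual, expected, alpha=0.5):
    """Unexpected NGDP shocks move real output relative to trend."""
    return alpha * (actual - expected) / expected

# Example: a sharp fall in the monetary base with unchanged expectations,
# loosely in the spirit of the 1920-21 episode discussed above.
expected = ngdp(base=100.0, nominal_rate=0.05, bank_failure_risk=0.0)
actual = ngdp(base=85.0, nominal_rate=0.05, bank_failure_risk=0.0)
print(f"NGDP shortfall: {actual / expected - 1:.1%}")             # -15.0%
print(f"Implied output gap: {output_gap(actual, expected):.1%}")  # -7.5%
```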

The framework also applies to modern times.  The deep recessions of 1982 and 2009 were caused by NGDP growth falling far below expectations, due to tight money policies.

(8 COMMENTS)

EconLog June 21, 2019

Does the Fed set monetary policy in Hong Kong?, by Scott Sumner

David Beckworth recently conducted an excellent interview of Mike Bird, which is full of fascinating observations. At one point they were discussing the Hong Kong currency board, which since 1983 has fixed the exchange rate between the Hong Kong and US dollars.  This exchange caught my eye:

Beckworth: Yeah. So, interestingly, the Fed sets monetary policy for Hong Kong effectively, right?

Bird: Yup. Absolutely, which is very noticeable in the Hong Kong housing market over the past 10 years or so.

I think that’s right.  But this doesn’t necessarily mean what you think it means.  While the US is setting the monetary policy in Hong Kong, that policy is nothing like the monetary policy it sets in the US.

Because of arbitrage, interest rates tend to be roughly the same in any two free market economies with fixed exchange rates and open capital markets.  Thus when the Fed moves its target short-term interest rate up and down, interest rates in Hong Kong tend to move in unison.

However, interest rates are not monetary policy.  What matters is the difference between the policy rate and the equilibrium interest rate.  For simplicity, let’s assume the equilibrium rate is the interest rate that maintains 2% inflation. (In practice, employment also affects the equilibrium rate.)  In the US, the Fed moves the policy rate up and down in an attempt to keep interest rates close to equilibrium.  This doesn’t work perfectly (inflation fell to 0% in 2009) but overall the Fed does a reasonably good job of keeping inflation close to 2%.

But the equilibrium interest rate in Hong Kong is very different from the equilibrium rate in the US.  As a result, there are often wide gaps between the actual interest rate in Hong Kong and the equilibrium rate.  This causes wild swings in inflation, from double digits to negative 4%.  These wild swings in monetary policy result in the Hong Kong economy being dominated by demand shocks, one of the three prerequisites for the Phillips Curve model to work.

Interestingly, Hong Kong also has the other two prerequisites for a stable Phillips Curve.  First, inflation expectations are stable due to the US dollar peg combined with the Fed’s 2% inflation target, despite big fluctuations in actual inflation.  And second, the natural rate of unemployment is stable due to Hong Kong’s relatively laissez-faire labor market regulations. Put the three together and you have one of the few countries where the Phillips Curve still holds.

We should all thank Hong Kong for being willing to perform a natural experiment as to what would happen if you arbitrarily used monetary policy to create large swings in actual inflation, in an economy with stable inflation expectations (roughly 2%) and a stable natural rate of unemployment (roughly 4%).

PS.  This also explains why the Phillips Curve is such a bad model.  There are very few countries where all three Phillips Curve prerequisites hold true.  And the US is not one of them.  The Fed should stop relying on the Phillips Curve.

(11 COMMENTS)

EconLog June 21, 2019

Henderson on Uwe Reinhardt’s Last Book, by David Henderson

Uwe Reinhardt was a well-known health economist at Princeton University who died in 2017. An outspoken advocate of government regulation of health insurance, he helped design the single-payer system adopted by Taiwan’s government.

Reinhardt’s last book is Priced Out: The Economic and Ethical Costs of American Health Care. In it, he argues that U.S. health care is too expensive, its administrative costs are too high, the U.S. tax system subsidizes health care for high-income people, and the government should increase the subsidy for health care for low-income people. He also expresses strong skepticism about requiring people to pay more out of pocket for their own health care, claiming it will not push consumers to price-shop for care.

Unfortunately, in the book Reinhardt biases his comparison of drug prices across countries and says nothing about the U.S. Food and Drug Administration’s role in causing high drug prices. In claiming that people won’t price-shop when their incentives are changed by higher deductibles, he uses one company’s experiment to generalize to the whole country. Yet he himself, with his advocacy of reference pricing, argues that people who have to pay out of pocket will price shop. In discussing the tax treatment of employer-provided health insurance, he likens taking advantage of the tax break to feeding at the public trough. An immigrant himself—first from Germany to Canada, and then from Canada to the United States—Reinhardt criticizes the hiring of immigrant doctors. One refreshing proposal, though, is his idea for letting people avoid the Affordable Care Act (ACA) and take responsibility for their own health insurance.

This is from David R. Henderson, “Reinhardt’s Last Book,” Regulation, Summer 2019. (Scroll down just past half-way.)

Some other highlights follow.

Reinhardt’s view on immigration of doctors

His language on another issue, immigration of doctors, is also disturbing. In the aforementioned C-SPAN interview, he stated that when doctors trained in other countries move to the United States, we are “robbing them of their physicians.” We’re not. When Reinhardt and I moved to the United States from Canada, the United States did not “rob” Canada of budding economists; we immigrated. The case with doctors is no different.

Is food less important than health care?

In her epilogue, [Tsung-Mei] Cheng points out that when Reinhardt was a child in Germany, he and his siblings had health care through the “social insurance” system that Chancellor Otto von Bismarck established back in 1883. She comments, “Germans may not always have had enough food in those years, but all had the health care they needed.” That comment is telling. There are tradeoffs. Would you rather spend a dollar on health care or on food, and who should get to choose? Cheng’s comment implies, and presumably her husband would have agreed, that it should be the government’s choice and the government should choose health care over food. Why that’s so is unclear. Elsewhere in the book, Reinhardt notes that health care contributes “no more than 10 percent to 20 percent of observed cross-country variations” in health status measures. For some people, especially poor people, food could easily be more important than health care.

A Reinhardt policy proposal that I like

I’ll end with a Reinhardt policy idea I like, about how government should deal with a perverse incentive that the ACA creates. The ACA requires insurers that sell individual health insurance policies to practice community rating. “Community rating” means that insurance companies charge the same premium to healthy people that they charge to unhealthy people. The result, he notes, is that a high percentage of healthy people will game the system, refraining from buying insurance until they are sick.

The usual solution that health policy analysts advocate for this is to require the uninsured to wait months or years before they can buy community-rated insurance. Reinhardt takes this idea a step further. He would require all American residents, at age 26, to buy community-rated insurance. If they refuse, they would not be allowed to buy community-rated insurance at any time in the future. They would instead go uninsured or buy insurance priced according to risk.

As long as government would not regulate the items that individual insurance would cover, this strikes me as a good proposal. I could imagine young people saving a few thousand dollars a year and then, when they get sick at, say, age 50, having tens of thousands of dollars to spend on health care. I could also imagine them doing what my daughter did before the ACA prohibited it: buy risk-priced health insurance with guaranteed renewability. That would be a large improvement on the current system.

I have one little regret. I read a few years ago that Uwe would be out in Monterey to give a talk at the Community Hospital of the Monterey Peninsula. (We locals call it CHOMP.) I emailed him to see if I could take him for coffee when he was here. He said yes but, unfortunately, cancelled just before the event. I would have liked to discuss these issues with him.

(1 COMMENT)

EconLog June 20, 2019

Murray on the Prevalence of “Poverty”, by Bryan Caplan

Charles Murray‘s Losing Ground contains a most surprising claim:

[P]overty did not simply climb upward on our national list of problems; it abruptly reappeared from nowhere.  In the prologue to this book, 1950 was described as a year in which poverty was not part of the discourse about domestic policy – indeed, as a year in which the very word “poverty” was seldom used.  The silence was not peculiar to 1950.  From the outset of the Second World War until 1962, little in the popular press, in political rhetoric, or in the published work of American scholars focused on poverty in America.

When Murray wrote these words, there was really no way to verify this claim.  When I read this passage, I was fairly incredulous.  Now, however, Google’s Ngram can check this quote in the blink of an eye.  Result:

How wrong I was.  Murray’s I-was-there account matches the data very closely.  Use of the word “poverty” was in slow decline until about 1960.  Then, in a few years, the prevalence of the word rose about 70%, before receding moderately in the early 1970s.  The word’s rise resumed in the mid-80s, right around the time of Losing Ground‘s publication.  Coincidence?  I think not.  Murray’s influence is so mighty, he single-handedly changed the words of the English-speaking world.
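
For readers who want to rerun the check, Google’s Ngram Viewer exposes an unofficial JSON endpoint behind the charting page. A rough sketch follows; the endpoint and its corpus parameter are undocumented and may change.

```python
# Rough sketch of reproducing the "poverty" Ngram check.
# books.google.com/ngrams/json is an unofficial, undocumented endpoint;
# the corpus identifier format has varied over time. Illustrative only.
import requests

resp = requests.get(
    "https://books.google.com/ngrams/json",
    params={
        "content": "poverty",
        "year_start": 1940,
        "year_end": 1990,
        "corpus": "en-2019",  # assumed corpus label; older codes were integers
        "smoothing": 3,
    },
    timeout=30,
)
resp.raise_for_status()
series = resp.json()[0]["timeseries"]  # one relative-frequency value per year
for year, freq in zip(range(1940, 1991), series):
    if year % 10 == 0:
        print(year, f"{freq:.6f}")
```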

EconLog June 20, 2019

Henderson on Cowen on Big Business, by David Henderson

Tyler Cowen’s latest book, Big Business: A Love Letter to an American Anti-Hero, is excellent. Cowen, an economics professor at George Mason University, makes a strong evidence-based case that big business in America is an important—probably the most important—contributor to our well-being.

In a heavily footnoted book with references to scores of high-quality articles and books, Cowen argues that:

  • Businesses are less deceptive than many other actors in society.
  • CEOs are not, in general, paid too much.
  • Most people like their jobs and often find them a safer haven than their homes.
  • Big business is not particularly monopolistic.
  • Big tech companies are not evil.
  • Wall Street and finance companies in general are responsible for much of our prosperity.
  • Cronyism by big business is not a major factor in government policy.

EconLog June 20, 2019

The Trump administration’s advice to Malaysia, by Scott Sumner

In a recent post I discussed John Cochrane’s reaction to the Treasury department’s criteria for “currency manipulation”. John’s post included a link to the report, which contains some quite odd recommendations that raise additional questions. Consider the following:

Malaysia’s external rebalancing in recent years is welcome, and the authorities should pursue appropriate policies to support a continuation of this trend, including by encouraging high-quality and transparent investment and ensuring sufficient social spending, which can help minimize precautionary saving.

This is bizarre on multiple levels:

  1. What business is it of the US government to give Malaysia advice on its economic policy regime?  Did they ask for advice?  How would our Congress feel if the Malaysian government instructed it on how to conduct our fiscal policy?

EconLog June 19, 2019

Giving USA Gets the Incentives Half Right, by David Henderson

“After reaching record-breaking levels of giving in 2017, American individuals and organizations continued their generous support of charitable institutions in 2018,” said Rick Dunham, chair of Giving USA Foundation and CEO of Dunham Company. “However, the environment for giving in 2018 was far more complex than most years, with shifts in tax policy and the volatility of the stock market. This is particularly true for the wide range of households that comprise individual giving and provide over two-thirds of all giving.”

A number of competing factors in the economic and public policy environments may have affected donors’ decisions in 2018, shifting some previous giving patterns. Many economic variables that shape giving, such as personal income, had relatively strong growth, while the stock market decline in late 2018 may have had a dampening effect. The policy environment also likely influenced some donors’ behavior. One important shift in the 2018 giving landscape is the drop in the number of individuals and households who itemize various types of deductions on their tax returns. This shift came in response to the federal tax policy change that doubled the standard deduction. More than 45 million households itemized deductions in 2016. Numerous studies suggest that number may have dropped to approximately 16 to 20 million households in 2018, reducing an incentive for charitable giving.

This is from Giving USA, “Giving USA 2019: Americans gave $427.71 billion to charity in 2018 amid complex year for charitable giving,” June 18, 2019.

Strikingly, although individual contributions to charity fell in real terms, they did not fall much. The report states:

Giving by individuals totaled an estimated $292.09 billion, declining 1.1% in 2018 (a decrease of 3.4%, adjusted for inflation).

Note in the second quoted paragraph above that the author at Giving USA understands that the doubling of the standard deduction caused many fewer people to itemize and that, therefore, the incentive to give to charity fell.

But there’s one incentive factor that the summary of the report totally misses. And this factor means that the steady state for future charitable giving by individuals is probably higher than the author at Giving USA expects. What is that factor?

HINT: Did people know before December 31, 2017 about the change in the tax code that would start in 2018?

(8 COMMENTS)

EconLog June 18, 2019

Ominous News from the San Francisco Fed, by David Henderson

And you thought the Fed was just about monetary policy.

The Federal Reserve Bank of San Francisco wants banks to get extra credit for making loans that help communities adapt to climate change and prepare for future natural disasters.

A paper released on Monday by researchers at the San Francisco Fed argues that banks should receive credit for climate-adaptation investments under the Community Reinvestment Act, which requires banks to lend to low- and moderate-income communities.

The report represents the latest in a series of small steps by Federal Reserve banks to recognize climate change as a threat to the U.S. financial system.

These are the opening three paragraphs of a news item in today’s Wall Street Journal. It’s Laura Kusisto, “San Francisco Fed Wants to Reward Banks for Combating Climate Change,” Wall Street Journal, June 18, 2019.

The Fed is already the central planner of the U.S. money supply. It now seems to be toying with the idea of becoming a central planner in other areas.

(8 COMMENTS)

Here are the 10 latest posts from EconTalk.

EconTalk June 24, 2019

Eric Topol on Deep Medicine

Cardiologist and author Eric Topol talks about his book Deep Medicine with EconTalk host Russ Roberts. Topol argues that doctors spend too little face-to-face time with patients, and the use of artificial intelligence and machine learning is a chance to emphasize the human side of medicine and to expand the power of human connection in […]

EconTalk June 17, 2019

Anja Shortland on Kidnap

Anja Shortland of King’s College London talks about her book Kidnap with EconTalk host Russ Roberts. Kidnapping is relatively common in parts of the world where government authority is weak. Shortland explores this strange, frightening, but surprisingly orderly world. She shows how the interaction between kidnappers, victims, and insurance companies creates a somewhat predictable set […]

EconTalk June 10, 2019

Bjorn Lomborg on the Costs and Benefits of Attacking Climate Change

Bjorn Lomborg, President of the Copenhagen Consensus Center, talks about the costs and benefits of attacking climate change with EconTalk host Russ Roberts. Lomborg argues that we should always be aware of tradeoffs and effectiveness when assessing policies to reduce global warming. He advocates for realistic solutions that consider the potential to improve human life […]

EconTalk June 3, 2019

Alain Bertaud on Cities, Planning, and Order Without Design

Urbanist and author Alain Bertaud of NYU talks about his book Order without Design with EconTalk host Russ Roberts. Bertaud explores the role of zoning and planning alongside the emergent factors that affect the growth of cities. He emphasizes the importance of cities as places for people to work and looks at how preferences and […]

EconTalk May 27, 2019

David Epstein on Mastery, Specialization, and Range

Journalist and author David Epstein talks about his book Range with EconTalk host Russ Roberts. Epstein explores the costs of specialization and the value of breadth in helping to create mastery in our careers and in life. What are the best backgrounds for solving problems? Can mastery be achieved without specialization at a young age? […]

EconTalk May 20, 2019

Mary Hirschfeld on Economics, Culture, and Aquinas and the Market

Author, economist, and theologian Mary Hirschfeld of Villanova University talks about her book, Aquinas and the Market, with EconTalk host Russ Roberts. Hirschfeld looks at the nature of our economic activity as buyers and sellers and whether our pursuit of economic growth and material well-being comes at a cost. She encourages a skeptical stance about […]

EconTalk May 13, 2019

Robert Burton on Being Certain

Neurologist and author Robert Burton talks about his book, On Being Certain, with EconTalk host Russ Roberts. Burton explores our need for certainty and the challenge of being skeptical about what our brain tells us must be true. Where does what Burton calls “the feeling of knowing” come from? Why can memory lead us astray? […]

EconTalk May 6, 2019

Mauricio Miller on Poverty, Social Work, and the Alternative

Poverty activist, social entrepreneur and author, Mauricio Miller, talks about his book The Alternative with EconTalk host Russ Roberts. Miller, a MacArthur genius grant recipient, argues that we have made poverty tolerable when we should be trying to make it more escapable. This is possible, he argues, if we invest in the poor and encourage […]

EconTalk April 29, 2019

Emily Oster on Cribsheet

Economist and author Emily Oster of Brown University talks about her book Cribsheet with EconTalk host Russ Roberts. Oster explores what the data and evidence can tell us about parenting in areas such as breastfeeding, sleep habits, discipline, vaccination, and food allergies. Oster often finds that commonly held views on some of these topics are […]

EconTalk April 22, 2019

Paul Romer on Growth, Cities, and the State of Economics

Nobel Laureate Paul Romer of New York University talks with EconTalk host Russ Roberts about the nature of growth, the role of cities in the economy, and the state of economics. Romer also reflects on his time at the World Bank and why he left his position there as Chief Economist.

Here are the 10 latest posts from CEE.

CEE May 28, 2019

William D. Nordhaus

William D. Nordhaus was co-winner, along with Paul M. Romer, of the 2018 Nobel Prize in Economic Science “for integrating climate change into long-run macroeconomic analysis.”

Starting in the 1970s, Nordhaus constructed increasingly comprehensive models of the interaction between the economy and additions of carbon dioxide to the atmosphere, along with its effects on global warming. Economists use these models, along with assumptions about various magnitudes, to compute the “social cost of carbon” (SCC). The idea is that past a certain point, additions of carbon dioxide to the atmosphere heat the earth and thus create a global negative externality. The SCC is the net cost that using that additional carbon imposes on society. While the warmth has some benefits in, for example, causing longer growing seasons and improving recreational alternatives, it also has costs such as raising ocean levels, making some land uses obsolete. The SCC is the net of these social costs and is measured at the current margin. (The “current margin” language is important because otherwise one can get the wrong impression that any use of carbon is harmful.) Nordhaus and others then use the SCC to recommend taxes on carbon. In 2017, Nordhaus computed the optimal tax to be $31 per ton of carbon dioxide. To put that into perspective, a $31 carbon tax would increase the price of gasoline by about 28 cents per gallon.
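
The gasoline figure is easy to check with back-of-the-envelope arithmetic. A sketch in Python, assuming the standard EPA estimate of roughly 8.9 kg of CO2 emitted per gallon of gasoline burned:

```python
# Back-of-the-envelope check of the 28-cents-per-gallon figure.
# Assumes ~8.9 kg of CO2 per gallon of gasoline (a standard EPA estimate).
TAX_PER_TON_CO2 = 31.00           # dollars per metric ton of CO2
TONS_CO2_PER_GALLON = 8.9 / 1000  # metric tons of CO2 per gallon

price_increase = TAX_PER_TON_CO2 * TONS_CO2_PER_GALLON
print(f"Price increase: ${price_increase:.2f} per gallon")  # about $0.28
```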

Nordhaus noted, though, that there is a large amount of uncertainty about the optimal tax. For the $31 tax above, the actual optimal tax could be as little as $6 per ton or as much as $93.

Interestingly, according to Nordhaus’s model, setting too high a carbon tax can be worse than setting no carbon tax at all. According to the calibration of Nordhaus’s model in 2007, with no carbon tax and no other government controls, the present value of environmental damages and abatement costs would be $22.59 trillion (in 2004 dollars). Nordhaus’s optimal carbon tax would have reduced damage but increased abatement costs, for a total of $19.52 trillion, an improvement of only $3.07 trillion. But the cost of a policy to limit the temperature increase to only 1.5°C would have been $37.03 trillion, which is $16.4 trillion more than the cost of the “do nothing” option. Those numbers will be different today, but what is not different is that the cost of doing nothing is substantially below the cost of limiting the temperature increase to only 1.5°C.

One item the Nobel committee did not mention is his demonstration that the price of light has fallen by many orders of magnitude over the last 200 years. He showed that the price of light in 1992, adjusted for inflation, was less than one tenth of one percent of its price in 1800. Failure to take this reduction fully into account, noted Nordhaus, meant that economists have substantially underestimated the real growth rate of the economy and the growth rate of real wages.

Nordhaus also did pathbreaking work on the distribution of gains from innovation. In a 2004 study he wrote:

Only a minuscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.

Nordhaus earned his B.A. degree at Yale University in 1963 and his Ph.D. in economics at MIT in 1967. From 1977 to 1979, he was a member of President Carter’s Council of Economic Advisers.



Selected Works

  1. 1977. “Economic Growth and Climate: The Case of Carbon Dioxide.” American Economic Review, Vol. 67, No. 1, pp. 341-346.

  2. 1996. “Do Real-Output and Real-Wage Measures Capture Reality? The History of Lighting Suggests Not,” in Timothy F. Bresnahan and Robert J. Gordon, editors, The Economics of New Goods. Chicago: University of Chicago Press.

  3. 2000. (with J. Boyer.) Warming the World: Economic Models of Global Warming. Cambridge, MA: MIT Press.

  4. 2004. “Schumpeterian Profits in the American Economy: Theory and Measurement,” NBER Working Paper No. 10433, April 2004.

  5. 2016. “Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies,” NBER Working Paper No. 22933.

(0 COMMENTS)

CEE May 28, 2019

Paul M. Romer

In 2018, U.S. economist Paul M. Romer was co-recipient, along with William D. Nordhaus, of the Nobel Prize in Economic Science for “integrating technological innovations into long-run macroeconomic analysis.”

Romer developed “endogenous growth theory.” Before his work in the 1980s and early 1990s, the dominant economic model of economic growth was one that MIT economist Robert Solow developed in the 1950s. Even though Solow concluded that technological change was a key driver of economic growth, his own model made technological change exogenous. That is, technological change was not something determined in the model but was an outside factor. Romer made it endogenous.

There are actually two very different phases in Romer’s work on endogenous growth theory. Romer (1986) and Romer (1987) had an AK model. Real output was equal to A times K, where A is a positive constant and K is the amount of physical capital. The model assumes diminishing marginal returns to K, but assumes also that part of a firm’s investment in capital results in the production of new technology or human capital that, because it is non-rival and non-excludable, generates spillovers (positive externalities) for all firms. Because this technology is embodied in physical capital, as the capital stock (K) grows, there are constant returns to a broader measure of capital that includes the new technology. Modeling growth this way allowed Romer to keep the assumption of perfect competition, so beloved by economists.
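
A standard textbook rendering of that structure (a reconstruction for illustration; Romer’s own notation differs) makes the spillover explicit:

```latex
% Illustrative AK-with-spillovers setup (not Romer's exact notation).
% Firm i faces diminishing returns to its own capital k_i, but the
% aggregate capital stock K, a stand-in for economy-wide knowledge,
% enters as a spillover:
\[
  y_i = A\,k_i^{\alpha}\,K^{1-\alpha}, \qquad 0 < \alpha < 1.
\]
% With identical firms, k_i = K, so aggregate output is
\[
  Y = A\,K,
\]
% i.e., constant returns to broad capital in the aggregate, even though
% each firm privately faces diminishing returns to its own capital.
```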

In Romer (1990), Romer rejected his own earlier model. Instead, he assumed that firms are monopolistically competitive. That is, industries are competitive, but many firms within a given industry have market power. Monopolistically competitive firms develop technology that they can exclude others from using. The technology is non-rival; that is, one firm’s use of the technology doesn’t prevent other firms from using it. Because they can exploit their market power by innovating, they have an incentive to innovate. It made sense, therefore, to think carefully about how to structure such incentives.

Consider new drugs. Economists estimate that the cost of successfully developing and bringing a new drug to market is about $2.6 billion. Once the formula is discovered and tested, another firm could copy the invention of the firm that did all the work. If that second firm were allowed to sell the drug, the first firm would probably not do the work in the first place. One solution is patents. A patent gives the inventor a monopoly for a fixed number of years during which it can charge a monopoly price. This monopoly price, earned over years, gives drug companies a strong incentive to innovate.

Another way for new ideas to emerge, notes Romer, is for governments to subsidize research and development.

The idea that technological change is not just an outside factor but itself is determined within the economic system might seem obvious to those who have read the work of Joseph Schumpeter. Why did Romer get a Nobel Prize for his insights? It was because Romer’s model didn’t “blow up.” Previous economists who had tried mathematically to model growth in a Schumpeterian way had failed to come up with models in which the process of growth was bounded.

To his credit, Romer lays out some of his insights on growth in words and very simple math. In the entry on economic growth in The Concise Encyclopedia of Economics, Romer notes the huge difference in long-run well-being that would result from raising the economic growth rate by only a few percentage points. The “rule of 72” says that the length of time over which a magnitude doubles can be computed by dividing the growth rate into 72. It actually should be called the rule of 70, but the math with 72 is slightly easier. So, for example, if an economy grows by 2 percent per year, it will take 36 years for its size to double. But if it grows by 4 percent per year, it will double in 18 years.
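
The arithmetic is easy to verify; a quick sketch comparing the rule-of-72 approximation with the exact doubling time (which uses ln 2 ≈ 0.693, the reason 70 is the more accurate constant):

```python
# Doubling times: rule-of-72 approximation vs. the exact formula.
import math

for growth_pct in (1, 2, 3, 4, 6, 8):
    approx = 72 / growth_pct
    exact = math.log(2) / math.log(1 + growth_pct / 100)
    print(f"{growth_pct}% growth: rule of 72 -> {approx:.1f} years, "
          f"exact -> {exact:.1f} years")
```

At 2 percent growth the rule gives 36 years against an exact 35; at 4 percent, 18 against 17.7, matching the examples in the text.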

Romer warns that policy makers should be careful about using endogenous growth theory to justify government intervention in the economy. In a 1998 interview he stated:

A lot of people see endogenous growth theory as a blanket seal of approval for all of their favourite government interventions, many of which are very wrong-headed. For example, much of the discussion about infrastructure is just wrong. Infrastructure is to a very large extent a traditional physical good and should be provided in the same way that we provide other physical goods, with market incentives and strong property rights. A move towards privatization of infrastructure provision is exactly the right way to go. The government should be much less involved in infrastructure provision.[1]

In the same interview, he stated, “Selecting a few firms and giving them money has obvious problems” and that governments “must keep from taxing income at such high rates that it severely distorts incentives.”

In 2000, Romer introduced Aplia, an on-line set of problems and answers that economics professors could assign to their students and easily grade. The upside is that students are more prepared for lectures and exams and can engage with their fellow students in economic experiments on line. The downside of Aplia, according to some economics professors, is that students get less practice actually manually drawing demand and supply curves.

In 2009, Romer started advocating “Charter Cities.” His idea was that many people are stuck in countries with bad rules that make wealth creation difficult. If, he argued, an outside government could start a charter city in a country that had bad rules, people in that country could move there. Of course, this would require the cooperation of the country with the bad rules and getting that cooperation is not an easy task. His primary example of such an experiment working is Hong Kong, which was run by the British government until 1997. In a 2009 speech on charter cities, Romer stated, “Britain, through its actions in Hong Kong, did more to reduce world poverty than all the aid programs that we’ve undertaken in the last century.”[2]

Romer earned a B.S. in mathematics in 1977, an M.A. in economics in 1978, and a Ph.D. in economics in 1983, all from the University of Chicago. He also did graduate work at MIT and Queen’s University. He has taught at the University of Rochester, the University of Chicago, UC Berkeley, and Stanford University, and is currently a professor at New York University.

He was chief economist at the World Bank from 2016 to 2018.


[1] “Interview with Paul M. Romer,” in Brian Snowdon and Howard R. Vane, Modern Macroeconomics: Its Origins, Development and Current State, Cheltenham, UK: Edward Elgar, 2005, p. 690.

[2] Paul Romer, “Why the world needs charter cities,” TEDGlobal 2009.



Selected Works

  1. 1986. “Increasing Returns and Long-Run Growth.” Journal of Political Economy, Vol. 94, No. 5, pp. 1002-1037.
  2. 1987. “Growth Based on Increasing Returns Due to Specialization.” American Economic Review, Papers and Proceedings, Vol. 77, No. 2, pp. 56-62.
  3. 1990. “Endogenous Technological Change.” Journal of Political Economy, Vol. 98, No. 5, pp. S71-S102.
  4. 2015. “Mathiness in the Theory of Economic Growth.” American Economic Review, Vol. 105, No. 5, pp. 89-93.


(0 COMMENTS)

CEE March 13, 2019

Jean Tirole

In 2014, French economist Jean Tirole was awarded the Nobel Prize in Economic Sciences “for his analysis of market power and regulation.” His main research, in which he uses game theory, is in an area of economics called industrial organization. Economists studying industrial organization apply economic analysis to understanding the way firms behave and why certain industries are organized as they are.

From the late 1960s to the early 1980s, economists George Stigler, Harold Demsetz, Sam Peltzman, and Yale Brozen, among others, played a dominant role in the study of industrial organization. Their view was that even though most industries don’t fit the economists’ “perfect competition” model—a model in which no firm has the power to set a price—the real world was full of competition. Firms compete by cutting their prices, by innovating, by advertising, by cutting costs, and by providing service, just to name a few. Their understanding of competition led them to skepticism about much of antitrust law and most government regulation.

In the 1980s, Jean Tirole introduced game theory into the study of industrial organization, also known as IO. The key idea of game theory is that, unlike for price takers, firms with market power take account of how their rivals are likely to react when they change prices or product offerings. Although the earlier-mentioned economists recognized this, they did not rigorously use game theory to spell out some of the implications of this interdependence. Tirole did.

One issue on which Tirole and his co-author Jean-Jacques Laffont focused was “asymmetric information.” A regulator has less information than the firms it regulates. So, if the regulator guesses incorrectly about a regulated firm’s costs, which is highly likely, it could set prices too low or too high. Tirole and Laffont showed that a clever regulator could offset this asymmetry by constructing contracts and letting firms choose which contract to accept. If, for example, some firms can take measures to lower their costs and other firms cannot, the regulator cannot necessarily distinguish between the two types. The regulator, recognizing this fact, may offer the firms either a cost-plus contract or a fixed-price contract. The cost-plus contract will appeal to firms with high costs, while the fixed-price contract will appeal to firms that can lower their costs. In this way, the regulator maintains incentives to keep costs down.
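
A toy version of that menu-of-contracts logic, with invented payment and cost numbers purely for illustration, shows how each firm type self-selects the contract the regulator intends for it:

```python
# Toy illustration of screening with a menu of contracts.
# All payment and cost figures are invented for illustration.

def cost_plus_profit(cost, margin=5.0):
    """Cost-plus contract: costs are reimbursed plus a fixed margin,
    so profit is the margin regardless of realized cost."""
    return margin

def fixed_price_profit(cost, price=120.0):
    """Fixed-price contract: the firm keeps any cost savings."""
    return price - cost

# Two firm types: one stuck with high costs, one able to cut costs.
firms = {"high-cost firm": 118.0, "cost-cutting firm": 90.0}

for name, cost in firms.items():
    options = {
        "cost-plus": cost_plus_profit(cost),
        "fixed-price": fixed_price_profit(cost),
    }
    choice = max(options, key=options.get)
    print(f"{name}: picks {choice} (profit {options[choice]:.0f})")
```

The high-cost firm takes the cost-plus contract; the firm that can lower its costs takes the fixed-price contract and keeps the savings, which is exactly the incentive the regulator wants to preserve.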

Their insights are most directly applicable to government entities, such as the Department of Defense, in their negotiations with firms that provide highly specialized military equipment. Indeed, economist Tyler Cowen has argued that Tirole’s work is about principal-agent theory rather than about reining in big business per se. In the Department of Defense example, the Department is the principal and the defense contractor is the agent.

One of Tirole’s main contributions has been in the area of “two-sided markets.” Consider Google. It can offer its services at one price to users (one side) and offer its services at a different price to advertisers (the other side). The higher the price to users, the fewer users there will be and, therefore, the less money Google will make from advertising. Google has decided to set a zero price to users and charge for advertising. Tirole and co-author Jean-Charles Rochet showed that the decision about profit-maximizing pricing is complicated, and they use substantial math to compute such prices under various theoretical conditions. Although Tirole believes in antitrust laws to limit both monopoly power and the exercise of monopoly power, he argues that regulators must be cautious in bringing the law to bear against firms in two-sided markets. An example of a two-sided market is a manufacturer of videogame consoles. The two sides are game developers and game players. He notes that it is very common for companies in such markets to set low prices on one side of the market and high prices on the other. But, he writes, “A regulator who does not bear in mind the unusual nature of a two-sided market may incorrectly condemn low pricing as predatory or high pricing as excessive, even though these pricing structures are adopted even by the smallest platforms entering the market.”

Tirole has brought the same kind of skepticism to some other related regulatory issues. Many regulators, for example, have advocated government regulation of interchange fees (IFs) in payment card associations such as Visa and MasterCard. But in 2003, Rochet and Tirole wrote that “given the [economics] profession’s current state of knowledge, there is no reason to believe that the IFs chosen by an association are systematically too high or too low, as compared with socially optimal levels.”

After winning the Nobel Prize, Tirole wrote a book for a popular audience, Economics for the Common Good. In it, he applied economics to a wide range of policy issues, laying out, among other things, the advantages of free trade for most residents of a given country and why much legislation and regulation causes negative unintended consequences.

Like most economists, Tirole favors free trade. In Economics for the Common Good, he noted that French consumers gain from freer trade in two ways. First, free trade exposes French monopolies and oligopolies to competition. He argued that two major French auto companies, Renault and Peugeot-Citroen, “sharply increased their efficiency” in response to car imports from Japan. Second, free trade gives consumers access to cheaper goods from low-wage countries.

In that same book, Tirole considered the unintended consequences of a hypothetical, but realistic, case in which a non-governmental organization, wanting to discourage killing elephants for their tusks, “confiscates ivory from traffickers.” In this hypothetical example, the organization can destroy the ivory or sell it. Destroying the ivory, he reasoned, would drive up the price. The higher price could cause poachers to kill more elephants. Another example he gave is of the perverse effects of price ceilings. Not only do they cause shortages, but also, as a result of these shortages, people line up and waste time in queues. Their time spent in queues wipes out the financial gain to consumers from the lower price, while also hurting the suppliers. No one wins and wealth is destroyed.

Also in that book, Tirole criticized the French government’s labor policies, which make it difficult for employers to fire people. He noted that this difficulty makes employers less likely to hire people in the first place. As a result, the unemployment rate in France was above 7 percent for over 30 years. The effect on young people has been particularly pernicious. When he wrote this book, the unemployment rate for French residents between 15 and 24 years old was 24 percent, and only 28.6 percent of those in that age group had jobs. This was much lower than the OECD average of 39.6 percent, Germany’s 46.8 percent, and the Netherlands’ 62.3 percent.

One unintended, but predictable, consequence of government regulations of firms, which Tirole pointed out in Economics for the Common Good, is to make firms artificially small. When a French firm with 49 employees hires one more employee, he noted, it is subject to 34 additional legal obligations. Not surprisingly, therefore, in a figure that shows the number of enterprises with various numbers of employees, a spike occurs at 47 to 49 employees.

In Economics for the Common Good, Tirole ranged widely over policy issues in France. In addressing the French university system, he criticized the system’s rejection of selective admission to university. He argued that such a system causes the least prepared students to drop out and concluded that “[O]n the whole, the French educational system is a vast insider-trading crime.”

Tirole is chairman of the Toulouse School of Economics and of the Institute for Advanced Study in Toulouse. A French citizen, he was born in Troyes, France and earned his Ph.D. in economics in 1981 from the Massachusetts Institute of Technology.


Selected Works

  1. 1986. (Co-authored with Jean-Jacques Laffont). “Using Cost Observation to Regulate Firms.” Journal of Political Economy. 94:3 (Part I). June: 614-641.

  2. 1988. The Theory of Industrial Organization. MIT Press.

  3. 1990. (Co-authored with Drew Fudenberg). “Moral Hazard and Renegotiation in Agency Contracts.” Econometrica. 58:6. November: 1279-1319.

  4. 1993. (Co-authored with Jean-Jacques Laffont). A Theory of Incentives in Procurement and Regulation. MIT Press.

  5. 2003. (Co-authored with Jean-Charles Rochet). “An Economic Analysis of the Determination of Interchange Fees in Payment Card Systems.” Review of Network Economics. 2:2: 69-79.

  6. 2006. (Co-authored with Jean-Charles Rochet). “Two-Sided Markets: A Progress Report.” The RAND Journal of Economics. 37:3. Autumn: 645-667.

  7. 2017. Economics for the Common Good. Princeton University Press.


(0 COMMENTS)

CEE November 30, 2018

The 2008 Financial Crisis

It was, according to accounts filtering out of the White House, an extraordinary scene. Hank Paulson, the U.S. treasury secretary and a man with a personal fortune estimated at $700m (£380m), had got down on one knee before the most powerful woman in Congress, Nancy Pelosi, and begged her to save his plan to rescue Wall Street.

    The Guardian, September 26, 2008.1

The financial crisis of 2008 was a complex event that took most economists and market participants by surprise. Since then, there have been many attempts to arrive at a narrative to explain the crisis, but none has proven definitive. For example, a Congressionally-chartered ten-member Financial Crisis Inquiry Commission produced three separate narratives, one supported by the members appointed by the Democrats, one supported by four members appointed by the Republicans, and a third written by the fifth Republican member, Peter Wallison.2

It is important to appreciate that the financial system is complex, not merely complicated. A complicated system, such as a smartphone, has a fixed structure, so it behaves in ways that are predictable and controllable. A complex system has an evolving structure, so it can evolve in ways that no one anticipates. We will never have a proven understanding of what caused the financial crisis, just as we will never have a proven understanding of what caused the first World War.

There can be no single, definitive narrative of the crisis. This entry can cover only a small subset of the issues raised by the episode.

Metaphorically, we may think of the crisis as a fire. It started in the housing market, spread to the sub-prime mortgage market, then engulfed the entire mortgage securities market and, finally, swept through the inter-bank lending market and the market for asset-backed commercial paper.

Home sales began to slow in the latter part of 2006. This soon created problems for the sector of the mortgage market devoted to making risky loans, with several major lenders—including the largest, New Century Financial—declaring bankruptcy early in 2007. At the time, the problem was referred to as the “sub-prime mortgage crisis,” confined to a few marginal institutions.

But by the spring of 2008, trouble was apparent at some Wall Street investment banks that underwrote securities backed by sub-prime mortgages. On March 16, commercial bank JP Morgan Chase acquired one of these firms, Bear Stearns, with help from loan guarantees provided by the Federal Reserve, the central bank of the United States.

Trouble then began to surface at all the major institutions in the mortgage securities market. By late summer, many investors had lost confidence in Freddie Mac and Fannie Mae, and the interest rates that lenders demanded from them were higher than what they could pay and still remain afloat. On September 7, the U.S. Treasury took these two government-sponsored enterprises (GSEs) into “conservatorship.”

Finally, the crisis hit the short-term inter-bank collateralized lending markets, in which all of the world’s major financial institutions participate. This phase began after government officials’ unsuccessful attempts to arrange a merger of investment bank Lehman Brothers, which declared bankruptcy on September 15. This bankruptcy caused the Reserve Primary money market fund, which held a lot of short-term Lehman securities, to mark down the value of its shares below the standard value of one dollar each. That created jitters in all short-term lending markets, including the inter-bank lending market and the market for asset-backed commercial paper in general, and caused stress among major European banks.

The freeze-up in the interbank lending market was too much for leading public officials to bear. Under intense pressure to act, Treasury Secretary Henry Paulson proposed a $700 billion financial rescue program. Congress initially voted it down, leading to heavy losses in the stock market and causing Secretary Paulson to plead for its passage. On a second vote, the measure, known as the Troubled Assets Relief Program (TARP), was approved.

In hindsight, within each sector affected by the crisis, we can find moral hazard, cognitive failures, and policy failures. Moral hazard (in insurance company terminology) arises when individuals and firms face incentives to profit from taking risks without having to bear responsibility in the event of losses. Cognitive failures arise when individuals and firms base decisions on faulty assumptions about potential scenarios. Policy failures arise when regulators reinforce rather than counteract the moral hazard and cognitive failures of market participants.

The Housing Sector

From roughly 1990 to the middle of 2006, the housing market was characterized by the following:

  • an environment of low interest rates, both in nominal and real (inflation-adjusted) terms. Low nominal rates create low monthly payments for borrowers. Low real rates raise the value of all durable assets, including housing.
  • prices for houses rising as fast as or faster than the overall price level
  • an increase in the share of households owning rather than renting
  • loosening of mortgage underwriting standards, allowing households with weaker credit histories to qualify for mortgages.
  • lower minimum requirements for down payments. A standard requirement of at least ten percent was reduced to three percent and, in some cases, zero. This resulted in a large increase in the share of home purchases made with down payments of five percent or less.
  • an increase in the use of new types of mortgages with “negative amortization,” meaning that the outstanding principal balance rises over time.
  • an increase in consumers’ borrowing against their houses to finance spending, using home equity loans, second mortgages, and refinancing of existing mortgages with new loans for larger amounts.
  • an increase in the proportion of mortgages going to people who were not planning to live in the homes that they purchased. Instead, they were buying them to speculate.3

These phenomena produced an increase in mortgage debt that far outpaced the rise in income over the same period. The trends accelerated in the three years just prior to the downturn in the second half of 2006.

The rise in mortgage debt relative to income was not a problem as long as home prices were rising. A borrower having difficulty finding the cash to make a mortgage payment on a house that had appreciated in value could either borrow more with the house as collateral or sell the house to pay off the debt.

But when house prices stopped rising late in 2006, households that had taken on too much debt began to default. This set in motion a reverse cycle: house foreclosures increased the supply of homes for sale; meanwhile, lenders became wary of extending credit, and this reduced demand. Prices fell further, leading to more defaults and spurring lenders to tighten credit still further.

During the boom, some people were speculating in non-owner-occupied homes, while others were buying their own homes with little or no money down. And other households were, in the vernacular of the time, “using their houses as ATMs,” taking on additional mortgage debt in order to finance consumption.

In most states in the United States, once a mortgage lender forecloses on a property, the borrower is not responsible for repayment, even if the house cannot be sold for enough to cover the loan. This creates moral hazard, particularly for property speculators, who can enjoy all of the profits if house prices rise but can stick lenders with some of the losses if prices fall.

One can see cognitive failure in the way that owners of houses expected home prices to keep rising at a ten percent rate indefinitely, even though overall inflation was less than half that amount.4 Also, many homeowners seemed unaware of the risks of mortgages with “negative amortization.”

Policy failure played a big role in the housing sector. All of the trends listed above were supported by public policy. Because they wanted to see increased home ownership, politicians urged lenders to loosen credit standards. With the Community Reinvestment Act for banks and Affordable Housing Goals for Freddie Mac and Fannie Mae, they spurred traditional mortgage lenders to increase their lending to minority and low-income borrowers. When the crisis hit, politicians blamed lenders for borrowers’ inability to repay, and political pressure exacerbated the credit tightening that subsequently took place.

The Sub-prime Mortgage Sector

Until the late 1990s, few lenders were willing to give mortgages to borrowers with problematic credit histories. But sub-prime mortgage lenders emerged and grew rapidly in the decade leading up to the crisis. This growth was fueled by financial innovations, including the use of credit scoring to finely grade mortgage borrowers, and the use of structured mortgage securities (discussed in the next section) to make the sub-prime sector attractive to investors with a low tolerance for risk. Above all, it was fueled by rising home prices, which created a history of low default rates.

There was moral hazard in the sub-prime mortgage sector because the lenders were not holding on to the loans and, therefore, not exposing themselves to default risk. Instead, they packaged the mortgages into securities and sold them to investors, with the securities market allocating the risk.

Because they sold loans in the secondary market, profits at sub-prime lenders were driven by volume, regardless of the likelihood of default. Turning down a borrower meant getting no revenue. Approving a borrower meant earning a fee. These incentives were passed through to the staff responsible for finding potential borrowers and underwriting loans, so that personnel were compensated based on “production,” meaning the new loans they originated.

Although in theory the sub-prime lenders were passing on to others the risks that were embedded in the loans they were making, they were among the first institutions to go bankrupt during the financial crisis. This shows that there was cognitive failure in the management at these companies, as they did not foresee the house price slowdown or its impact on their firms.

Cognitive failure also played a role in the rise of mortgages that were underwritten without verification of the borrowers’ income, employment, or assets. Historical data showed that credit scores were sufficient for assessing borrower risk and that additional verification contributed little predictive value. However, it turned out that once lenders were willing to forgo these documents, they attracted a different set of borrowers, whose propensity to default was higher than their credit scores otherwise indicated.

There was policy failure in that abuses in the sub-prime mortgage sector were allowed to continue. Ironically, while the safety and soundness of Freddie Mac and Fannie Mae were regulated under the Department of Housing and Urban Development, which had an institutional mission to expand home ownership, consumer protection with regard to mortgages was regulated by the Federal Reserve Board, whose primary institutional missions were monetary policy and bank safety. Though mortgage lenders were setting up borrowers to fail, the Federal Reserve made little or no effort to intervene. Even those policy makers who were concerned about practices in the sub-prime sector believed that, on balance, sub-prime mortgage lending was helping a previously under-served set of households to attain home ownership.5

Mortgage Securities

A mortgage security consists of a pool of mortgage loans, the payments on which are passed through to pension funds, insurance companies, or other institutional investors looking for reliable returns with little risk. The market for mortgage securities was created by two government agencies, known as Ginnie Mae and Freddie Mac, established in 1968 and 1970, respectively.

Mortgage securitization expanded in the 1980s, when Fannie Mae, which previously had used debt to finance its mortgage purchases, began issuing its own mortgage-backed securities. At the same time, Freddie Mac was sold to shareholders, who encouraged Freddie to grow its market share. But even though Freddie and Fannie were shareholder-owned, investors treated their securities as if they were government-backed. This was known as an implicit government guarantee.

Attempts to create a market for private-label mortgage securities (PLMS) without any form of government guarantee were largely unsuccessful until the late 1990s. The innovations that finally got the PLMS market going were credit scoring and the collateralized debt obligation (CDO).

Before credit scoring was used in the mortgage market, there was no quantifiable difference between any two borrowers who were approved for loans. With credit scoring, the Wall Street firms assembling pools of mortgages could distinguish between a borrower with a very good score (750, as measured by the popular FICO system) and one with a more doubtful score (650).

Using CDOs, Wall Street firms were able to provide major institutional investors with insulation from default risk by concentrating that risk in other sub-securities (“tranches”) that were sold to investors who were more tolerant of risk. In fact, these basic CDOs were enhanced by other exotic mechanisms, such as credit default swaps, that reallocated mortgage default risk to institutions in which hardly any observer expected to find it, including AIG Insurance.
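To make the tranching mechanics concrete, here is a stylized sketch in Python; the pool size, tranche sizes, and loss scenarios are invented for illustration and are not drawn from any actual deal.

```python
# Stylized CDO waterfall: pool losses are absorbed by the junior tranche
# first, so the senior tranche appears safe while losses stay moderate.
# All figures are hypothetical.
POOL = 100.0           # face value of the mortgage pool
JUNIOR = 20.0          # junior tranche: absorbs first losses
SENIOR = POOL - JUNIOR

def tranche_losses(pool_loss: float) -> tuple[float, float]:
    """Split a given pool loss between junior and senior investors."""
    junior_loss = min(pool_loss, JUNIOR)
    senior_loss = max(0.0, pool_loss - JUNIOR)
    return junior_loss, senior_loss

for loss in (5.0, 15.0, 30.0):
    j, s = tranche_losses(loss)
    print(f"pool loss {loss:5.1f} -> junior {j:5.1f}, senior {s:5.1f}")
```

Senior investors lose nothing until pool losses burn through the entire junior tranche, which is why broad, simultaneous declines in house prices, the scenario the models discounted, were so destructive.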

There was moral hazard in the mortgage securities market, as Freddie Mac and Fannie Mae sought profits and growth on behalf of shareholders, but investors in their securities expected (correctly, as it turned out) that the government would protect them against losses. Years before the crisis, critics grumbled that the mortgage giants exemplified privatized profits and socialized risks.6

There was cognitive failure in the assessment of default risk. Assembling CDOs and other exotic instruments required sophisticated statistical modeling. The most important driver of expectations for mortgage defaults is the path for house prices, and the steep, broad-based decline in home prices that took place in 2006-2009 was outside the range that some modelers allowed for.

Another source of cognitive failure is the “suits/geeks” divide. In many firms, the financial engineers (“geeks”) understood the risks of mortgage-related securities fairly well, but their conclusions did not make their way to the senior management level (“suits”).

There was policy failure on the part of bank regulators. Their previous adverse experience was with the Savings and Loan Crisis, in which firms that originated and retained mortgages went bankrupt in large numbers. This caused bank regulators to believe that mortgage securitization, which took risk off the books of depository institutions, would be safer for the financial system. For the purpose of assessing capital requirements for banks, regulators assigned a weight of 100 percent to mortgages originated and held by the bank, but assigned a weight of only 20 percent to the bank’s holdings of mortgage securities issued by Freddie Mac, Fannie Mae, or Ginnie Mae. This meant that banks needed to hold much more capital to hold mortgages than to hold mortgage-related securities; that naturally steered them toward the latter.
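The arithmetic behind that incentive is simple. The sketch below assumes the standard 8 percent minimum capital ratio of the Basel-era rules; the exposure amount is illustrative.

```python
# Capital required = minimum ratio x risk weight x exposure.
# The 8% minimum ratio is the standard Basel-era figure; the exposure is invented.
EXPOSURE = 100_000_000   # $100 million of mortgage credit risk
MIN_RATIO = 0.08

capital_whole_loans = MIN_RATIO * 1.00 * EXPOSURE   # 100% weight: $8.0 million
capital_agency_mbs = MIN_RATIO * 0.20 * EXPOSURE    # 20% weight:  $1.6 million

print(f"whole loans:   ${capital_whole_loans:,.0f}")
print(f"agency MBS:    ${capital_agency_mbs:,.0f}")
print(f"capital freed: ${capital_whole_loans - capital_agency_mbs:,.0f}")
```

Swapping whole loans for agency securities freed $6.4 million of capital per $100 million of exposure, even though the underlying credit risk had merely moved, not disappeared.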

In 2001, regulators broadened the low-risk umbrella to include AAA-rated and AA-rated tranches of private-label CDOs. This ruling helped to generate a flood of PLMS, many of them backed by sub-prime mortgage loans.7

By using bond ratings as a key determinant of capital requirements, the regulators effectively put the bond rating agencies at the center of the process of creating private-label CDOs. The rating agencies immediately became subject to both moral hazard and cognitive failure. The moral hazard came from the fact that the rating agencies were paid by the issuers of securities, who wanted the most generous ratings possible, rather than being paid by the regulators, who needed more rigorous ratings. The cognitive failure came from the fact that the models the rating agencies used gave too little weight to potential scenarios of broad-based declines in house prices. Moreover, the banks that bought the securities were happy to see them rated AAA because the high ratings made the securities eligible for lower capital requirements on the part of the banks. Both sides, therefore, buyers and sellers, had bad incentives.

There was policy failure on the part of Congress. Officials in both the Clinton and Bush Administrations were unhappy with the risk that Freddie Mac and Fannie Mae represented to taxpayers. But Congress balked at any attempt to tighten regulation of the safety and soundness of those firms.8

The Inter-bank Lending Market

There are a number of mechanisms through which financial institutions make short-term loans to one another. In the United States, banks use the Federal Funds market to manage short-term fluctuations in reserves. Internationally, banks lend in what is known as the LIBOR market.

One of the least known and most important markets is for “repo,” which is short for “repurchase agreement.” As first developed, the repo market was used by government bond dealers to finance inventories of securities, just as an automobile dealer might finance an inventory of cars. A money-market fund might lend money for one day or one week to a bond dealer, with the loan collateralized by a low-risk long-term security.

In the years leading up to the crisis, some dealers were financing low-risk mortgage-related securities in the repo market. But when some of these securities turned out to be subject to price declines that took them out of the “low-risk” category, participants in the repo market began to worry about all repo collateral. Repo lending offers very low profit margins, and if an investor has to be very discriminating about the collateral backing a repo loan, it can seem preferable to back out of repo lending altogether. This, indeed, is what happened, in what economist Gary Gorton and others called a “run on repo.”9
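A rough sketch of the funding mechanics follows; the haircut figures are illustrative, though Gorton and his coauthors document haircuts on some structured collateral rising by magnitudes of this order during the crisis.

```python
# A dealer financing a securities inventory in the overnight repo market:
# when lenders raise haircuts, funding evaporates even with no defaults.
PORTFOLIO = 100.0   # market value of securities to be financed

for haircut in (0.02, 0.10, 0.45):           # illustrative haircuts
    funding = PORTFOLIO * (1 - haircut)      # cash raised against the collateral
    gap = PORTFOLIO - funding                # must be financed some other way
    print(f"haircut {haircut:4.0%}: repo funding {funding:5.1f}, gap {gap:5.1f}")
```

A jump in haircuts works exactly like a partial withdrawal of deposits, which is why the episode is described as a run.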

Another element of institutional panic was “collateral calls” involving derivative financial instruments. Derivatives, such as credit default swaps, are like side bets. The buyer of a credit default swap is betting that a particular debt instrument will default. The seller of a credit default swap is betting the opposite.

In the case of mortgage-related securities, the probability of default seemed low prior to the crisis. Sometimes, buyers of credit default swaps were merely satisfying the technical requirements to record the underlying securities as AAA-rated. They could do this if they obtained a credit default swap from an institution that was itself AAA-rated. AIG was an insurance company that saw an opportunity to take advantage of its AAA rating to sell credit default swaps on mortgage-related securities. AIG collected fees, and its Financial Products division calculated that the probability of default was essentially zero. The fees earned on each transaction were low, but the overall profit was high because of the enormous volume. AIG’s credit default swaps were a major element in the expansion of shadow banking by non-bank financial institutions during the run-up to the crisis.

Late in 2005, AIG abruptly stopped writing credit default swaps, in part because its own rating had been downgraded below AAA earlier in the year for unrelated reasons. By the time AIG stopped selling credit default swaps on mortgage-related securities, it had outstanding obligations on $80 billion of underlying securities and was earning $1 billion a year in fees.10

Because AIG no longer had its AAA rating and because the underlying mortgage securities, while not in default, were increasingly shaky, provisions in the contracts that AIG had written allowed the buyers of credit default swaps to require AIG to provide protection in the form of low-risk securities posted as collateral. These “collateral calls” were like a margin call that a stock broker will make on an investor who has borrowed money to buy stock that subsequently declines in value. In effect, collateral calls were a run on AIG’s shadow bank.
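The margin-call analogy can be made concrete with a toy mark-to-market calculation; the notional amount and loss expectations here are invented, not AIG’s actual book.

```python
# Toy collateral call on a CDS seller: as the market's expected loss on the
# insured bonds rises, the seller must post low-risk collateral against the
# growing value of the protection it has written. Figures are hypothetical.
NOTIONAL = 10_000.0   # face value of protected securities

posted = 0.0
for expected_loss in (0.01, 0.05, 0.20):          # market's expected loss rate
    protection_value = NOTIONAL * expected_loss   # what the protection is now worth
    call = max(0.0, protection_value - posted)    # shortfall the seller must post
    posted += call
    print(f"expected loss {expected_loss:4.0%}: post {call:7,.0f} (total {posted:7,.0f})")
```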

These collateral calls were made when the crisis in the inter-bank lending market was near its height in the summer of 2008 and banks were hoarding low-risk securities. In fact, the shortage of low-risk securities may have motivated some of the collateral calls, as institutions like Deutsche Bank and Goldman Sachs sought ways to ease their own liquidity problems. In any event, AIG could not raise enough short-term funds to meet its collateral calls without trying to dump long-term securities into a market that had little depth to absorb them. It turned to Federal authorities for a bailout, which was arranged and creatively backed by the Federal Reserve, but at the cost of reducing the value of shares in AIG.

With repos and derivatives, there was moral hazard in that the traders and executives of the narrow units that engaged in exotic transactions were able to claim large bonuses on the basis of short-term profits. But the adverse long-term consequences were spread to the rest of the firm and, ultimately, to taxpayers.

There was cognitive failure in that the collateral calls were an unanticipated risk of the derivatives business. The financial engineers focused on the (remote) chances of default on the underlying securities, not on the intermediate stress that might emerge from collateral calls.

There was policy failure when Congress passed the Commodity Futures Modernization Act. This legislation specified that derivatives would not be regulated by either of the agencies with the staff most qualified to understand them. Rather than require oversight by the Securities and Exchange Commission or the Commodity Futures Trading Commission (which regulated market-traded derivatives), Congress decreed that the regulator responsible for overseeing each firm would evaluate its derivative position. The logic was that a bank that was using derivatives to hedge other transactions should have its derivative position evaluated in a larger context. But, as it happened, the insurance and bank regulators who ended up with this responsibility were not equipped to see the dangers at firms such as AIG.

There was also policy failure in that officials approved of securitization that transferred risk out of the regulated banking sector. While Federal Reserve officials were praising the risk management of commercial banks,11 risk was accumulating in the shadow banking sector (non-bank institutions in the financial system), including AIG insurance, money market funds, Wall Street firms such as Bear Stearns and Lehman Brothers, and major foreign banks. When problems in the shadow banking sector contributed to the freeze in inter-bank lending and in the market for asset-backed commercial paper, policy makers felt compelled to extend bailouts to satisfy the needs of these non-bank institutions for liquid assets.

Conclusion

In terms of the fire metaphor suggested earlier, in hindsight, we can see that the markets for housing, sub-prime mortgages, mortgage-related securities, and inter-bank lending were all highly flammable just prior to the crisis. Moral hazard, cognitive failures, and policy failures all contributed to the combustible mix.

The crisis also reflects a failure of the economics profession. A few economists, most notably Robert Shiller,12 warned that the housing market was inflated, as indicated by ratios of prices to rents that were high by historical standards. Also, when risk-based capital regulation was proposed in the wake of the Savings and Loan Crisis and the Latin American debt crisis, a group of economists known as the Shadow Regulatory Committee warned that these regulations could be manipulated. They recommended, instead, greater use of senior subordinated debt at regulated financial institutions.13 Many economists warned about the incentives for risk-taking at Freddie Mac and Fannie Mae.14

But even these economists failed to anticipate the 2008 crisis, in large part because economists did not take note of the complex mortgage-related securities and derivative instruments that had been developed. Economists have a strong preference for parsimonious models, and they look at financial markets through a lens that includes only a few types of simple assets, such as government bonds and corporate stock. This approach ignores even the repo market, which has been important in the financial system for over 40 years, and, of course, it omits CDOs, credit default swaps and other, more recent innovations.

Financial intermediaries do not produce tangible output that can be measured and counted. Instead, they provide intangible benefits that economists have never clearly articulated. The economics profession has a long way to go to catch up with modern finance.


About the Author

Arnold Kling was an economist with the Federal Reserve Board and with the Federal Home Loan Mortgage Corporation before launching one of the first Web-based businesses in 1994. His most recent books are Specialization and Trade and The Three Languages of Politics. He earned his Ph.D. in economics from the Massachusetts Institute of Technology.


Footnotes

1

“A desperate plea – then race for a deal before ‘sucker goes down’” The Guardian, September 26, 2008. https://www.theguardian.com/business/2008/sep/27/wallstreet.useconomy1

 

2

The report and dissents of the Financial Crisis Inquiry Commission can be found at https://fcic.law.stanford.edu/

3

See Stefania Albanesi, Giacomo De Giorgi, and Jaromir Nosal 2017, “Credit Growth and the Financial Crisis: A New Narrative” NBER working paper no. 23740. http://www.nber.org/papers/w23740

 

4

Karl E. Case and Robert J. Shiller 2003, “Is there a Bubble in the Housing Market?” Cowles Foundation Paper 1089. http://www.econ.yale.edu/~shiller/pubs/p1089.pdf

 

5

Edward M. Gramlich 2004, “Subprime Mortgage Lending: Benefits, Costs, and Challenges,” Federal Reserve Board speeches. https://www.federalreserve.gov/boarddocs/speeches/2004/20040521/

 

6

For example, in 1999, Treasury Secretary Lawrence Summers said in a speech, “Debates about systemic risk should also now include government-sponsored enterprises.” See Bethany McLean and Joe Nocera 2010, All the Devils Are Here: The Hidden History of the Financial Crisis. Portfolio/Penguin Press. The authors write that Federal Reserve Chairman Alan Greenspan was also, like Summers, disturbed by the moral hazard inherent in the GSEs.

 

7

Jeffrey Friedman and Wladimir Kraus 2013, Engineering the Financial Crisis: Systemic Risk and the Failure of Regulation, University of Pennsylvania Press.

 

8

See McLean and Nocera, All the Devils Are Here.

 

9

Gary Gorton, Toomas Laarits, and Andrew Metrick 2017, “The Run on Repo and the Fed’s Response,” Stanford working paper. https://www.gsb.stanford.edu/sites/gsb/files/fin_11_17_gorton.pdf

 

10

Talking Points Memo 2009, “The Rise and Fall of AIG’s Financial Products Unit” https://talkingpointsmemo.com/muckraker/the-rise-and-fall-of-aig-s-financial-products-unit

 

11

Chairman Ben S. Bernanke 2006, “Modern Risk Management and Banking Supervision,” Federal Reserve Board speeches. https://www.federalreserve.gov/newsevents/speech/bernanke20060612a.htm

 

12

National Public Radio 2005, “Yale Professor Predicts Housing ‘Bubble’ Will Burst” https://www.npr.org/templates/story/story.php?storyId=4679264

 

13

Shadow Financial Regulatory Committee 2001, “The Basel Committee’s Revised Capital Accord Proposal” https://www.bis.org/bcbs/ca/shfirect.pdf

14

See the discussion in Viral V. Acharya, Matthew Richardson, Stijn Van Nieuwerburgh and Lawrence J. White 2011, Guaranteed to Fail: Fannie Mae, Freddie Mac, and the Debacle of Mortgage Finance, Princeton University Press.

 


CEE September 18, 2018

Christopher Sims


Christopher Sims was awarded, along with Thomas Sargent, the 2011 Nobel Prize in Economic Sciences. The Nobel committee cited their “empirical research on cause and effect in the macroeconomy.” The economists who spoke at the press conference announcing the award emphasized Sargent’s and Sims’s analysis of the role of people’s expectations.

One of Sims’s earliest famous contributions was his work on money-income causality, which was cited by the Nobel committee. Money and income move together, but which causes which? Milton Friedman argued that changes in the money supply caused changes in income, noting that the supply of money often rises before income rises. Keynesians such as James Tobin argued that changes in income caused changes in the amount of money. Money seems to move first, but causality, said Tobin and others, still goes the other way: people hold more money when they expect income to rise in the future.

Which view is true? In 1972, Sims applied Clive Granger’s econometric test of causality. On Granger’s definition, one variable is said to cause another variable if knowledge of the past values of the possibly causal variable helps to forecast the effect variable over and above the knowledge of the history of the effect variable itself. Implementing a test of this incremental predictability, Sims concluded, “[T]he hypothesis that causality is unidirectional from money to income [Friedman’s view] agrees with the postwar U.S. data, whereas the hypothesis that causality is unidirectional from income to money [Tobin’s view] is rejected.”
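For readers who want to see the mechanics, here is a minimal sketch of a Granger-style test using the statsmodels library; the two series are simulated so that “money” leads “income,” standing in for, but not reproducing, Sims’s postwar U.S. data.

```python
# Granger-causality sketch: does lagged money growth help predict income
# growth beyond income's own history? Data are simulated, not Sims's.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
money_growth = rng.normal(size=n)
# Income growth responds to money growth with a two-period lag, plus noise.
income_growth = np.r_[rng.normal(size=2), money_growth[:-2]] + 0.5 * rng.normal(size=n)

df = pd.DataFrame({"income": income_growth, "money": money_growth})

# Tests whether the second column ("money") Granger-causes the first ("income").
grangercausalitytests(df[["income", "money"]], maxlag=4)
```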

Sims’s influential article “Macroeconomics and Reality” was a criticism of both the usual econometric interpretation of large-scale Keynesian econometric models and of Robert Lucas’s influential earlier criticism of these Keynesian models (the so-called Lucas critique). Keynesian econometricians had claimed that with sufficiently accurate theoretical assumptions about the structure of the economy, correlations among the macroeconomic variables could be used to measure the strengths of various structural connections in the economy. Sims argued that there was no basis for thinking that these theoretical assumptions were sufficiently accurate. Such so-called “identifying assumptions” were, Sims said, literally “incredible.” Lucas, on the other hand, had not rejected the idea of such identification. Rather he had pointed out that, if people held “rational expectations” – that is, expectations that, though possibly incorrect, did not deviate on average from what actually occurs in a correctable, systematic manner – then failing to account for them would undermine the stability of the econometric estimates and render the macromodels useless for policy analysis. Lucas and his New Classical followers argued that in forming their expectations people take account of the rules implicitly followed by monetary and fiscal policymakers; and, unless those rules were integrated into the econometric model, every time the policymakers adopted a new policy (i.e., new rules), the estimates would shift in unpredictable ways.

While rejecting the structural interpretation of large-scale macromodels, Sims did not reject the models themselves, writing: “[T]here is no immediate prospect that large-scale macromodels will disappear from the scene, and for good reason: they are useful tools in forecasting and policy analysis.” Sims conceded that the Lucas critique was correct in those cases in which policy regimes truly changed. But he argued that such regime changes were rare and that most economic policy was concerned with the implementation of a particular policy regime. For that purpose, the large-scale macromodels could be helpful, since what was needed for forecasting was a model that captured the complex interrelationships among variables and not one that revealed the deeper structural connections.

In the same article, Sims proposed an alternative to large-scale macroeconomic models, the vector autoregression (or VAR). In Sims’s view, the VAR had the advantages of the earlier macromodels, in that it could capture the complex interactions among a relatively large number of variables needed for policy analysis and yet did not rely on as many questionable theoretical assumptions. With subsequent developments by Sims and others, the VAR became a major tool of empirical macroeconomic analysis.
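A minimal VAR in that spirit can be fit with the same library and the simulated data frame from the previous sketch (df); nothing here is Sims’s own specification.

```python
# Fit a small vector autoregression: each variable is regressed on lags of
# every variable, with the lag length chosen by an information criterion.
from statsmodels.tsa.api import VAR

model = VAR(df)                       # df from the Granger sketch above
fit = model.fit(maxlags=4, ic="aic")
print(fit.summary())

# Impulse responses trace how a shock to one series propagates through the
# system -- the standard way VAR results are read in policy analysis.
irf = fit.irf(10)
```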

Sims has also suggested that sticky prices are caused by “rational inattention,” an idea imported from electronic communications. Just as computers do not access information on the Internet infinitely fast (but rather, in bits per second), individual actors in an economy have only a finite ability to process information. This delay produces some sluggishness and randomness, and allows for more accurate forecasts than conventional models, in which people are assumed to be highly averse to change.

Sims’s recent work has focused on the fiscal theory of the price level, the view that inflation in the end is determined by fiscal problems—the overall amount of debt relative to the government’s ability to repay it—rather than by the split in government debt between base money and bonds. In 1999, Sims suggested that the fiscal foundations of the European Monetary Union were “precarious” and that a fiscal crisis in one country “would likely breed contagion effects in other countries.” The Greek financial crisis about a decade later seemed to confirm his prediction.

Christopher Sims earned his B.A. in mathematics in 1963 and his Ph.D. in economics in 1968, both from Harvard University. He taught at Harvard from 1968 to 1970, at the University of Minnesota from 1970 to 1990, at Yale University from 1990 to 1999, and at Princeton University from 1999 to the present. He has been a Fellow of the Econometric Society since 1974, a member of the American Academy of Arts and Sciences since 1988, a member of the National Academy of Sciences since 1989, President of the Econometric Society (1995), and President of the American Economic Association (2012). He has been a Visiting Scholar for the Federal Reserve Banks of Atlanta, New York, and Philadelphia off and on since 1994.


Selected Works

1972. “Money, Income, and Causality.” American Economic Review 62:4 (September): 540-552.

1980. “Macroeconomics and Reality.” Econometrica 48:1 (January): 1-48.

1990 (with James H. Stock and Mark W. Watson). “Inference in Linear Time Series Models with Some Unit Roots.” Econometrica 58:1 (January): 113-144.

1999. “The Precarious Fiscal Foundations of EMU.” De Economist 147:4 (December): 415-436.

2003. “Implications of Rational Inattention.” Journal of Monetary Economics 50:3 (April): 665-690.


CEE June 28, 2018

Gordon Tullock


Gordon Tullock, along with his colleague James M. Buchanan, was a founder of the School of Public Choice. Among his contributions to public choice were his study of bureaucracy, his early insights on rent seeking, his study of political revolutions, his analysis of dictatorships, and his analysis of incentives and outcomes in foreign policy. Tullock also contributed to the study of optimal organization of research, was a strong critic of common law, and did work on evolutionary biology. He was arguably one of the ten or so most influential economists of the last half of the twentieth century. Many economists believe that Tullock deserved to share Buchanan’s 1986 Nobel Prize or even deserved a Nobel Prize on his own.

One of Tullock’s early contributions to public choice was The Calculus of Consent: Logical Foundations of Constitutional Democracy, co-authored with Buchanan in 1962. In that path-breaking book, the authors assume that people seek their own interests in the political system and then consider the results of various rules and political structures. One can think of their book as a political economist’s version of Montesquieu.

One of the most masterful sections of The Calculus of Consent is the chapter in which the authors, using a model formulated by Tullock, consider what good decision rules would be for agreeing to have someone in government make a decision for the collective. An individual realizes that if only one person’s consent is required, and he is not that person, he could have huge costs imposed on him. Requiring more people’s consent in order for government to take action reduces the probability that that individual will be hurt. But as the number of people required to agree rises, the decision costs rise. In the extreme, if unanimity is required, people can game the system and hold out for a disproportionate share of benefits before they give their consent. The authors show that the individual’s preferred rule would be one by which the costs imposed on him plus the decision costs are at a minimum. That preferred rule would vary from person to person. But, they note, it would be highly improbable that the optimal decision rule would be one that requires a simple majority. They write, “On balance, 51 percent of the voting population would not seem to be much preferable to 49 percent.” They suggest further that the optimal rule would depend on the issues at stake. Because, they note, legislative action may “produce severe capital losses or lucrative capital gains” for various groups, the rational person, not knowing his own future position, might well want strong restraints on the exercise of legislative power.
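The logic can be sketched numerically. In the toy calculation below, the two cost curves are invented for illustration; the book itself argues their shapes qualitatively rather than specifying functional forms.

```python
# Buchanan-Tullock calculus of consent: choose the share of voters whose
# agreement is required (k) to minimize expected external costs (imposed on
# dissenters) plus decision-making costs. Curves are illustrative guesses.
import numpy as np

k = np.linspace(0.01, 1.0, 1000)           # required share of consenting voters
external_cost = (1 - k) ** 2               # falls as more consent is required
decision_cost = np.exp(5 * k) / np.exp(5)  # rises steeply near unanimity
total = external_cost + decision_cost

best = k[np.argmin(total)]
print(f"cost-minimizing decision rule: {best:.0%} of voters")
```

With these particular curves the minimum lands well above a bare majority, illustrating the book’s point that nothing privileges the 51 percent rule.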

Tullock’s part of The Calculus of Consent was a natural outgrowth of an unpublished manuscript written in the 1950s that later became his 1965 book, The Politics of Bureaucracy. Buchanan, reminiscing about that book, summed up Tullock’s approach and the book’s significance:

The substantive contribution in the manuscript was centered on the hypothesis that, regardless of role, the individual bureaucrat responds to the rewards and punishments that he confronts. This straightforward, and now so simple, hypothesis turned the whole post-Weberian quasi-normative approach to bureaucracy on its head. . . . The economic theory of bureaucracy was born.1

Buchanan noted in his reminiscence that Tullock’s “fascinating analysis” was “almost totally buried in an irritating personal narrative account of Tullock’s nine-year experience in the foreign service hierarchy.” Buchanan continued: “Then, as now, Tullock’s work was marked by his apparent inability to separate analytical exposition from personal anecdote.” Translation: Tullock learned from his experiences. As a Foreign Service officer with the U.S. State Department for nine years Tullock learned, up close and “personal,” how dysfunctional bureaucracy can be. In a later reminiscence, Tullock concluded:

A 90 per cent cut-back on our Foreign Service would save money without really damaging our international relations or stature.2

Tullock made many other contributions in considering incentives within the political system. Particularly noteworthy was his work on political revolutions and on dictatorships.

Consider, first, political revolutions. Any one person’s decision to participate in a revolution, Tullock noted, does not much affect the probability that the revolution will succeed. Therefore, each person’s actions do not much affect his expected benefits from revolution. On the other hand, a ruthless head of government can individualize the costs by heavily punishing those who participate in a revolution. So anyone contemplating participating in a revolution will be comparing heavy individual costs with small benefits that are simply his pro rata share of the overall benefits. Therefore, argued Tullock, for people to participate, they must expect some large benefits that are tied to their own participation, such as a job in the new government. That would explain an empirical regularity that Tullock noted—namely that “in most revolutions, the people who overthrow the existing government were high officials in that government before the revolution.”

This thinking carried over to his work on autocracy. In Autocracy, Tullock pointed out that in most societies at most times, governments were not democratically elected but were autocracies: they were dictatorships or kingdoms. For that reason, he argued, analysts should do more to understand them. Tullock’s book was his attempt to get the discussion started. In a chapter titled “Coups and Their Prevention,” Tullock argued that one of the autocrat’s main challenges is to survive in office. He wrote: “The dictator lives continuously under the Sword of Damocles and equally continuously worries about the thickness of the thread.” Tullock pointed out that a dictator needs his countrymen to believe not that he is good, just, or ordained by God, but that those who try to overthrow him will fail.

Among modern economists, Tullock was the earliest discoverer of the concept of “rent seeking,” although he did not call it that. Before his work, the usual measure of the deadweight loss from monopoly was the part of the loss in consumer surplus that did not increase producer surplus for the monopolist. Consumer surplus is the maximum amount that consumers are willing to pay minus the amount they actually pay; producer surplus, also called “economic rent,” is the amount that producers get minus the minimum amount for which they would be willing to produce. Harberger3 had estimated that for the U.S. economy in the 1950s, that loss was very low, on the order of 0.1 percent of Gross National Product. In “The Welfare Cost of Tariffs, Monopolies, and Theft,” Tullock argued that this method understated the loss from monopoly because it did not take account of the investment of the monopolist—and of others trying to be monopolists—in becoming monopolists. These investments in monopoly are a loss to the economy. Tullock also pointed out that those who seek tariffs invest in getting those tariffs, and so the standard measure of the loss from tariffs understated the loss. His analysis, as the tariff example illustrates, applies more to firms seeking special privileges from government than to private attempts to monopolize via the free market because private attempts often lead, as if by an invisible hand, to increased competition.
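A back-of-the-envelope example shows how much the traditional measure could understate the loss. The demand and cost numbers below are mine, chosen for round results, not Tullock’s.

```python
# Harberger triangle vs. Tullock rectangle with linear demand P = 10 - Q
# and constant marginal cost of 2. All numbers are illustrative.
CHOKE, MC = 10.0, 2.0

q_comp = CHOKE - MC          # competitive output: price = MC  -> Q = 8
q_mono = (CHOKE - MC) / 2    # monopoly output: MR = MC        -> Q = 4
p_mono = CHOKE - q_mono      # monopoly price                  -> P = 6

harberger = 0.5 * (p_mono - MC) * (q_comp - q_mono)  # deadweight triangle = 8
tullock = (p_mono - MC) * q_mono                     # monopoly rent = 16

print(f"triangle {harberger}, rent {tullock}, total loss up to {harberger + tullock}")
```

If firms dissipate the entire rent competing for the monopoly privilege, the social loss is the rectangle plus the triangle: three times the traditional estimate in this example.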

One of Tullock’s most important insights in public choice was in a short article in 1975 titled “The Transitional Gains Trap.” He noted that even though rent seeking often leads to big gains for the rent seekers, those gains are capitalized in asset prices, which means that buyers of the assets make a normal return on the asset. So, for example, if the government requires the use of ethanol in gasoline, owners of land on which corn is grown will find that their land is worth more because of the regulatory requirement. (Ethanol in the United States is produced from corn.) They gain when the regulation is first imposed. But when they sell the land, the new owner pays a price equal to the present value of the stream of the net profits from the land. So the new owner doesn’t get a supra-normal rate of return from the land. In other words, the owner at the time that the regulation was imposed got “transitional gains,” but the new owner does not. This means that the new owner will suffer a capital loss if the regulation is removed and will fight hard to keep the regulation in place, arguing, correctly, that he paid for those gains. That makes repealing the regulation more difficult than otherwise. Tullock notes that, therefore, we should try hard to avoid getting into these traps because they are hard to get out of.
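A quick present-value calculation makes the trap visible; the profit figure and discount rate are invented for illustration.

```python
# Transitional gains trap: the regulation's annual gain is capitalized into
# the land price, so only the owner at enactment receives a windfall.
EXTRA_PROFIT = 10_000.0   # hypothetical annual gain from the ethanol mandate
R = 0.05                  # discount rate

windfall = EXTRA_PROFIT / R   # perpetuity value: a $200,000 rise in the land price
print(f"windfall to owner at enactment: ${windfall:,.0f}")

# A later buyer pays the higher price and earns only the normal return...
print(f"new owner's return on the premium: {EXTRA_PROFIT / windfall:.0%}")
# ...yet repeal would hand that buyer a $200,000 capital loss, so the buyer
# rationally lobbies to keep a program that yields no excess profit.
```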

Tullock was one of the few public choice economists to apply his tools to foreign policy. In Open Secrets of American Foreign Policy, he takes a hard-headed look at U.S. foreign policy rather than the romantic “the United States is the good guys” view that so many Americans take. For example, he wrote of the U.S. government’s bombing of Serbia under President Bill Clinton:

[T]he bombing campaign was a clear-cut violation of the United Nations Charter and hence, should be regarded as a war crime. It involved the use of military forces without the sanction of the Security Council and without any colorable claim of self-defense. Of course, it was not a first—we [the U.S. government] had done the same thing in Vietnam, Grenada and Panama.

Possibly Tullock’s most underappreciated contributions were in the area of methodology and the economics of research. About a decade after spending six months with philosopher Karl Popper at the Center for Advanced Studies in Palo Alto, Tullock published The Organization of Inquiry. In it, he considered why scientific discovery in both the hard sciences and economics works so well without any central planner, and he argued that centralized funding by government would slow progress. After arguing that applied science is generally more valuable than pure science, Tullock wrote:

Nor is there any real justification for the general tendency to consider pure research as somehow higher and better than applied research. It is certainly more pleasant to engage in research in fields that strike you as interesting than to confine yourself to fields which are likely to be profitable, but there is no reason why the person choosing the more pleasant type of research should be considered more noble.4

In Tullock’s view, a system of prizes for important discoveries would be an efficient way of achieving important breakthroughs. He wrote:

As an extreme example, surely offering a reward of $1 billion for the first successful ICBM would have resulted in both a large saving of money for the government and much faster production of this weapon.5

Tullock was born in Rockford, Illinois and was an undergrad at the University of Chicago from 1940 to 1943. His time there was interrupted when he was drafted into the U.S. Army. During his time at Chicago, though, he completed a one-semester course in economics taught by Henry Simons. After the war, he returned to the University of Chicago Law School, where he completed the J.D. degree in 1947. He was briefly with a law firm in 1947 before going into the Foreign Service, where he worked for nine years. He was an economics professor at the University of South Carolina (1959-1962), the University of Virginia (1962-1967), Rice University (1968-1969), the Virginia Polytechnic Institute and State University (1968-1983), George Mason University (1983-1987), the University of Arizona (1987-1999), and again at George Mason University (1999-2008). In 1966, he started the journal Papers in Non-Market Decision Making, which, in 1969, was renamed Public Choice.


Selected Works

 

1962 (with James M. Buchanan). The Calculus of Consent. Ann Arbor, Michigan: University of Michigan Press.

1965. The Politics of Bureaucracy. Washington, D.C.: Public Affairs Press.

1966. The Organization of Inquiry. Durham, North Carolina: Duke University Press.

1967. “The Welfare Costs of Tariffs, Monopolies, and Theft.” Western Economic Journal 5:3 (June): 224-232.

1967. Toward a Mathematics of Politics. Ann Arbor, Michigan: University of Michigan Press.

1971. “The Paradox of Revolution.” Public Choice 11 (Fall): 89-99.

1975. “The Transitional Gains Trap.” Bell Journal of Economics 6:2 (Autumn): 671-678.

1987. Autocracy. Hingham, Massachusetts: Kluwer Academic Publishers.

2007. Open Secrets of American Foreign Policy. New Jersey: World Scientific Publishing Co.

 


Footnotes

1

James M. Buchanan. 1987. “The Qualities of a Natural Economist.” In Charles K. Rowley, ed., Democracy and Public Choice. Oxford and New York: Basil Blackwell: 9-19.

2

Gordon Tullock. 2009. Memories of an Unexciting Life. Unfinished and unpublished manuscript, Tucson. Quoted in Charles K. Rowley and Daniel Houser. 2011. “The Life and Times of Gordon Tullock.” George Mason University Department of Economics Paper No. 11-56. December 20.

3

Arnold C. Harberger. 1954. “Monopoly and Resource Allocation.” American Economic Review 44(2): 77-87.

4

Tullock. 1966. P. 14.

5

Tullock. 1966. P. 168.

 


CEE February 5, 2018

Division of Labor

Division of labor combines specialization and the partition of a complex production task into several, or many, sub-tasks. Its importance in economics lies in the fact that a given number of workers can produce far more output using division of labor compared to the same number of workers each working alone. Interestingly, this is true even if those working alone are expert artisans. The production increase has several causes. According to Adam Smith, these include increased dexterity from learning, innovations in tool design and use as the steps are defined more clearly, and savings in wasted motion changing from one task to another.

Though the scientific understanding of the importance of division of labor is comparatively recent, the effects can be seen in most of human history. It would seem that exchange can arise only from differences in taste or circumstance. But division of labor implies that this is not true. In fact, even a society of perfect clones would develop exchange, because specialization alone is enough to reward advances such as currency, accounting, and other features of market economies.

In the early 1800s, David Ricardo developed a theory of comparative advantage as an explanation for the origins of trade. And this explanation has substantial power, particularly in a pre-industrial world. Assume, for example, that England is suited to produce wool, while Portugal is suited to produce wine. If each nation specializes, then total consumption in the world, and in each nation, is expanded. Interestingly, this is still true if one nation is better at producing both commodities: even the less productive nation benefits from specialization and trade.
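A small worked example may help; the labor costs below are invented (they are not Ricardo’s own figures), with Portugal standing in for the nation that is better at producing both goods.

```python
# Comparative advantage with hypothetical labor costs (hours per unit).
# Portugal is more productive in both goods, yet specialization still
# raises world output.
HOURS = {"Portugal": {"wine": 1, "wool": 2},
         "England":  {"wine": 6, "wool": 3}}
ENDOWMENT = 12  # hours of labor available in each country

# Autarky: each country splits its labor evenly between the two goods.
autarky = {c: {g: (ENDOWMENT / 2) / h for g, h in costs.items()}
           for c, costs in HOURS.items()}
wine_autarky = sum(out["wine"] for out in autarky.values())   # 7.0 units
wool_autarky = sum(out["wool"] for out in autarky.values())   # 5.0 units

# Trade: Portugal (comparative advantage in wine) first replaces the world's
# wine output, then makes wool; England specializes entirely in wool.
hours_left = ENDOWMENT - wine_autarky * HOURS["Portugal"]["wine"]
wool_trade = hours_left / HOURS["Portugal"]["wool"] + ENDOWMENT / HOURS["England"]["wool"]

print(f"wine: {wine_autarky} in both cases; wool: {wool_autarky} -> {wool_trade}")
```

Holding wine output fixed at its autarky level, specialization frees enough labor to raise world wool output from 5 to 6.5 units, a gain both countries can share through trade.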

In a world with industrial production based on division of labor, however, comparative advantage based on weather and soil conditions becomes secondary. Ricardo himself recognized this in his broader discussion of trade, as Meoqui points out. The reason is that division of labor produces a cost advantage where none existed before—an advantage based simply on specialization. Consequently, even in a world without comparative advantage, division of labor would create incentives for specialization and exchange.

Origins

The Neolithic Revolution, with its move to fixed agriculture and greater population densities, fostered specialization in both production of consumer goods and military protection. As Plato put it:

A State [arises] out of the needs of mankind; no one is self-sufficing, but all of us have many wants… Then, as we have many wants, and many persons are needed to supply them, one takes a helper… and another… [W]hen these partners and helpers are gathered together in one habitation the body of inhabitants is termed a State… And they exchange with one another, and one gives, and another receives, under the idea that the exchange will be for their good. (The Republic, Book II)

This idea of the city-state, or polis, as a nexus of cooperation directed by the leaders of the city is a potent tool for the social theorist. It is easy to see that the extent of specialization was limited by the size of the city: a clan has one person who plays on a hollow log with sticks; a moderately sized city might have a string quartet; and a large city could support a symphony.

One of the earliest sociologists, Muslim scholar Ibn Khaldun (1332-1406), also emphasized what he called “cooperation” as a means of achieving the benefits of specialization:

The power of the individual human being is not sufficient for him to obtain (the food) he needs, and does not provide him with as much food as he requires to live. Even if we assume an absolute minimum of food –that is, food enough for one day, (a little) wheat, for instance – that amount of food could be obtained only after much preparation such as grinding, kneading, and baking. Each of these three operations requires utensils and tools that can be provided only with the help of several crafts, such as the crafts of the blacksmith, the carpenter, and the potter. Assuming that a man could eat unprepared grain, an even greater number of operations would be necessary in order to obtain the grain: sowing and reaping, and threshing to separate it from the husks of the ear. Each of these operations requires a number of tools and many more crafts than those just mentioned. It is beyond the power of one man alone to do all that, or (even) part of it, by himself. Thus, he cannot do without a combination of many powers from among his fellow beings, if he is to obtain food for himself and for them. Through cooperation, the needs of a number of persons, many times greater than their own (number), can be satisfied. [From Muqaddimah (Introduction), First Prefatory Discussion in chapter 1; parenthetical expression in original in Rosenthal translation]

This sociological interpretation of specialization as a consequence of direction, limited by the size of the city, later motivated scholars such as Emile Durkheim (1858-1917) to recognize the central importance of division of labor for human flourishing.

Smith’s Insight

It is common to say that Adam Smith “invented” or “advocated” division of labor. Such claims are simply mistaken, on several grounds (see, for a discussion, Kennedy 2008). Smith described how decentralized market exchange fosters division of labor among cities or across political units, rather than just within them as previous thinkers had done. Smith had two key insights: First, division of labor would be powerful even if all human beings were identical, because differences in productive capacity are learned. Smith’s parable of the “street porter and the philosopher” illustrates the depth of this insight. As Smith put it:

[T]he very different genius which appears to distinguish men of different professions, when grown up to maturity, is not upon many occasions so much the cause, as the effect of the division of labour. The difference between the most dissimilar characters, between a philosopher and a common street porter, for example, seems to arise not so much from nature, as from habit, custom, and education. (WoN, V. 1, Ch 2; emphasis in original.)

Second, the division of labor gives rise to market institutions and expands the extent of the market. Exchange relations relentlessly push against borders and expand the effective locus of cooperation. The benefit to the individual is that first dozens, then hundreds, and ultimately millions, of other people stand ready to work for each of us, in ways that are constantly being expanded into new activities and new products.

Smith gives an example—the pin factory—that has become one of the central archetypes of economic theory. As Munger (2007) notes, Smith divides pin-making into 18 operations. But that number is arbitrary: labor is divided into the number of operations that fit the extent of the market. In a small market, perhaps three workers, each performing several different operations, could be employed. In a city or small country, as Smith saw, 18 different workers might be employed. In an international market, the optimal number of workers (or their equivalent in automated steps) would be even larger.

The interesting point is that there would be constant pressure on the factory to (a) expand the number of operations even more, and to automate them through the use of tools and other capital; and to (b) expand the size of the market served with consequently lower-cost pins so that the expanded output could be sold. Smith recognized this dynamic pressure in the form of what can only be regarded today as a theorem, the title of Chapter 3 in Book I of the Wealth of Nations: “That the Division of Labor is Limited by the Extent of the Market.” George Stigler treated this claim as a testable theorem in his 1951 article, and developed its insights in the context of modern economics.
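Stigler’s theorem lends itself to a toy model. The functional forms below are mine, chosen only to exhibit the mechanism, and are not from Smith or Stigler.

```python
# Toy version of "the division of labor is limited by the extent of the
# market": finer division raises output per worker, but each additional
# operation carries a fixed setup cost, so small markets stop subdividing early.
def best_division(market_size: float, setup: float = 5.0) -> int:
    def profit(k: int) -> float:
        output = 10 * k * k ** 0.5          # k workers, one operation each;
                                            # productivity rises with specialization
        revenue = min(output, market_size)  # cannot sell beyond the market
        return revenue - setup * k
    return max(range(1, 400), key=profit)

for market in (500, 5_000, 50_000):
    print(f"market {market:>6}: optimal number of operations = {best_division(market)}")
```

The optimal number of operations grows with the size of the market, just as Smith’s pin factory suggests: a few workers in a village, 18 in a city, still more in an international market.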

Still, the full importance of Smith’s insight was not recognized and developed until quite recently. James Buchanan presented the starkest description of the implications of Smith’s theory (James Buchanan and Yong Yoon, 2002). While the bases of trade and exchange can be differences in tastes or capacities, market institutions would develop even if such differences were negligible. The Smithian conception of the basis for trade and the rewards from developing market institutions is more general and more fundamental than the simple version implied by deterministic comparative advantage.

Division of labor is a hopeful doctrine. Nearly any nation, regardless of its endowment of natural resources, can prosper simply by developing a specialization. That specialization might be determined by comparative advantage, lying in climate or other factors, of course. But division of labor alone is sufficient to create trading opportunities and the beginnings of prosperity. By contrast, nations that refuse the opportunity to specialize, clinging to mercantilist notions of independence and economic self-sufficiency, doom themselves and their populations to needless poverty.


About the Author

Michael Munger is the Director of the PPE Program at Duke University.


Further Reading

Buchanan, James, and Yong Yoon. 2002. “Globalization as Framed by the Two Logics of Trade,” The Independent Review, 6(3): 399-405.

Durkheim, Emile. 1984. The Division of Labor in Society. New York: Macmillan.

Kennedy, Gavin. 2008. “Basic Errors About the Role of Adam Smith.” April 2: http://adamsmithslostlegacy.blogspot.com/2008/04/basic-errors-about-role-of-adam-smith.html

Khaldun, Ibn. 1377. Muqaddimah (Introduction). http://www.muslimphilosophy.com/ik/Muqaddimah/

Morales Meoqui, Jorge. 2015. “Ricardo’s Numerical Example versus Ricardian Trade Model: A Comparison of Two Distinct Notions of Comparative Advantage.” DOI: 10.13140/RG.2.1.2484.5527/1. https://www.researchgate.net/publication/283206070_Ricardos_numerical_example_versus_Ricardian_trade_model_A_comparison_of_two_distinct_notions_of_comparative_advantage

Munger, Michael. 2007. “I’ll Stick With These: Some Sharp Observations on the Division of Labor.” Indianapolis, Liberty Fund. http://www.econlib.org/library/Columns/y2007/Mungerpins.html

Plato, n.d. The Republic. Translated by Benjamin Jowett. http://classics.mit.edu/Plato/republic.html

Roberts, Russell. 2006. “Treasure Island: The Power of Trade. Part II. How Trade Transforms Our Standard of Living.” Indianapolis, Liberty Fund. http://www.econlib.org/library/Columns/y2006/Robertsstandardofliving.html

Smith, Adam. 1759/1853. (Revised Edition). The Theory of Moral Sentiments, New Edition. With a biographical and critical Memoir of the Author, by Dugald Stewart (London: Henry G. Bohn, 1853). 7/27/2015. http://oll.libertyfund.org/titles/2620

Smith, Adam. 1776/1904. An Inquiry into the Nature and Causes of the Wealth of Nations by Adam Smith, edited with an Introduction, Notes, Marginal Summary and an Enlarged Index by Edwin Cannan (London: Methuen, 1904). Vol. 1. 7/27/2015. http://oll.libertyfund.org/titles/237

Stigler, George. 1951. “The Division of Labor is Limited by the Extent of the Market.” Journal of Political Economy. 59(3): 185-193


CEE February 5, 2018

Hoover’s Economic Policies

When it was all over, I once made a list of New Deal ventures begun during Hoover’s years as Secretary of Commerce and then as president. . . . The New Deal owed much to what he had begun.1 —FDR advisor Rexford G. Tugwell

Many historians, most of the general public, and even many economists think of Herbert Hoover, the president who preceded Franklin D. Roosevelt, as a defender of laissez-faire economic policy. According to this view, Hoover’s dogmatic commitment to small government led him to stand by and do nothing while the economy collapsed in the wake of the 1929 stock market crash. The reality is quite different. Far from being a bystander, Hoover actively intervened in the economy, advocating and implementing policies that were quite similar to those that Franklin Roosevelt later implemented. Moreover, many of Hoover’s interventions, like those of his successor, caused the Great Depression to be “great”—that is, to last a long time.

Hoover’s early career

Hoover, a very successful mining engineer, thought that the engineer’s focus on efficiency could enable government to play a larger and more constructive role in the economy. In 1917, he became head of the wartime Food Administration, working to reduce American food consumption. Many Democrats, including FDR, saw him as a potential presidential candidate for their party in the 1920s. For most of the 1920s, Hoover was Secretary of Commerce under Republican Presidents Harding and Coolidge. As Commerce Secretary during the 1920-21 recession, Hoover convened conferences between government officials and business leaders as a way to use government to generate “cooperation” rather than individualistic competition. He particularly liked using the “cooperation” that was seen during wartime as an example to follow during economic crises. In contrast to Harding’s more genuine commitment to laissez-faire, Hoover began one 1921 conference with a call to “do something” rather than nothing. That conference ended with a call for more government planning to avoid future depressions, as well as using public works as a solution once they started.2 Pulitzer-Prize winning historian David Kennedy summarized Hoover’s work in the 1920-21 recession this way: “No previous administration had moved so purposefully and so creatively in the face of an economic downturn. Hoover had definitively made the point that government should not stand by idly when confronted with economic difficulty.”3 Harding, and later Coolidge, rejected most of Hoover’s ideas. This may well explain why the 1920-21 recession, as steep as it was, was fairly short, lasting 18 months.

Interestingly, though, in his role as Commerce Secretary, Hoover created a new government program called “Own Your Own Home,” which was designed to increase the level of homeownership. Hoover jawboned lenders and the construction industry to devote more resources to homeownership, and he argued for new rules that would allow federally chartered banks to do more residential lending. In 1927, Congress complied, and with this government stamp of approval and the resources made available by Federal Reserve expansionary policies through the decade, mortgage lending boomed. Not surprisingly, this program became part of the disaster of the depression, as bank failures dried up sources of funds, preventing the frequent refinancing that was common at the time, and high unemployment rates made the government-encouraged mortgages unaffordable. The result was a large increase in foreclosures.4

The Hoover presidency

Hoover did not stand idly by after the depression began. To fight the rapidly worsening depression, Hoover extended the size and scope of the federal government in six major areas: (1) federal spending, (2) agriculture, (3) wage policy, (4) immigration, (5) international trade, and (6) tax policy.

Consider federal government spending. (See Fiscal Policy.) Federal spending in the 1929 budget that Hoover inherited was $3.1 billion. He increased spending to $3.3 billion in 1930, $3.6 billion in 1931, and $4.7 billion and $4.6 billion in 1932 and 1933, respectively, a 48% increase over his four years. Because this was a period of deflation, the real increase in government spending was even larger: The real size of government spending in 1933 was almost double that of 1929.5 The budget deficits of 1931 and 1932 were 52.5% and 43.3% of total federal expenditures. No year between 1933 and 1941 under Roosevelt had a deficit that large.6 In short, Hoover was no defender of “austerity” and “budget cutting.”


Shortly after the stock market crash in October 1929, Hoover extended federal control over agriculture by expanding the reach of the Federal Farm Board (FFB), which had been created a few months earlier.7 The idea behind the FFB was to make government-funded loans to farm cooperatives and create “stabilization corporations” to keep farm prices up and deal with surpluses. In other words, it was a cartel plan. That fall, Hoover pushed the FFB into full action, lending to farmers all over the country and otherwise subsidizing farming in an attempt to keep prices up. The plan failed miserably, as subsidies encouraged farmers to grow more, exacerbating surpluses and eventually driving prices way down. As more farms faced dire circumstances, Hoover proposed the further anti-market step of paying farmers not to grow.
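A stylized supply-and-demand sketch shows why the cartel plan was self-defeating. The linear curves and all numbers below are invented for illustration; they are not estimates of actual farm markets.

    # Stylized illustration of the FFB price-support problem described above.
    # Linear supply and demand with hypothetical parameters.
    def quantity_demanded(price):
        return 100 - 40 * price

    def quantity_supplied(price):
        return 20 + 40 * price

    equilibrium_price = 1.00  # where 100 - 40p equals 20 + 40p
    support_price = 1.25      # an FFB-style floor held above equilibrium

    surplus = quantity_supplied(support_price) - quantity_demanded(support_price)
    print(f"Surplus at the support price: {surplus:.0f} units")  # 70 - 50 = 20

    # The stabilization corporations must absorb this recurring surplus;
    # once the stockpiles are eventually sold off, prices fall below
    # where they began, which is the outcome the text describes.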

On wages, Hoover revived the business-government conferences of his time at the Department of Commerce by summoning major business leaders to the White House several times that fall. He asked them to pledge not to reduce wages in the face of rising unemployment. Hoover believed, as did a number of intellectuals at the time, that high wages caused prosperity, even though the true causation is from capital accumulation to increased labor productivity to higher wages. He argued that if major firms cut wages, workers would not have the purchasing power they needed to buy the goods being produced. As most depressions involve falling prices, cutting wages to match falling prices would have kept purchasing power constant. What Hoover wanted amounted to an increase in real wages, as constant nominal wages would be able to purchase more goods at falling prices. Presumably out of fear of the White House or, perhaps, because it would keep the unions quiet, industrial leaders agreed to this proposal. The result was rapidly escalating unemployment, as firms quickly realized that they could not continue to employ as many workers when their output prices were falling and labor costs were constant.8
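The real-wage arithmetic at work here is simple, and a small sketch makes it concrete. The 10% deflation figure is hypothetical; only the direction of the effect matters.

    # Illustrative arithmetic behind the high-wage policy's effect.
    nominal_wage = 1.00    # held constant at Hoover's urging
    prices_before = 1.00
    prices_after = 0.90    # assume output prices fall 10%

    real_wage_before = nominal_wage / prices_before   # 1.00
    real_wage_after = nominal_wage / prices_after     # about 1.11

    print(f"Real wage rises by {real_wage_after / real_wage_before - 1:.0%}")

    # With real labor costs up roughly 11% while output prices fall,
    # firms economize on labor by laying workers off rather than
    # cutting pay, producing the unemployment described above.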

Of all of the government failures of the Hoover presidency—excluding the actions of the Federal Reserve between 1929 and 1932, over which he had little to no influence—his attempt to maintain wages was the most damaging. Had he truly believed in laissez-faire, Hoover would not have intervened in the private sector that way. Hoover’s high-wage policy was a clear example of his lack of confidence in the corrective forces of the market and his willingness to use governmental power to fight the depression.

Later in his presidency, Hoover did more than just jawbone to keep wages up. He signed two pieces of labor legislation that dramatically increased the role of government in propping up wages and giving monopoly protection to unions. In 1931, he signed the Davis-Bacon Act, which mandated that all federally funded or assisted construction projects pay the “prevailing wage” (i.e., the above-market-clearing union wage). The result of this move was to shut out non-union labor, especially immigrants and non-whites, and drive up costs to taxpayers. A year later, he signed the Norris-LaGuardia Act, whose five major provisions each enshrined special privileges for unions in the law, such as prohibiting judges from using injunctions to stop strikes and making union-free contracts unenforceable in federal courts.9 Hoover’s interventions into the labor market are further evidence of his rejection of laissez-faire.

Two other areas that Hoover intervened in aggressively were immigration and international trade. One of the lesser-known policy changes during his presidency was his near halt to immigration through an executive order in September 1930. His argument was that blocking immigration would preserve the jobs and wages of American citizens against competition from low-wage immigrants. Immigration fell to a mere 10 to 15% of the allowable quota of visas for the five-month period ending February 28, 1931. Once again, Hoover was unafraid to intervene in the economic decisions of the private sector by preventing the competitive forces of the global labor market from setting wages.10

Even those with only a casual knowledge of the Great Depression will be familiar with one of Hoover’s major policy mistakes—his promotion and signing of the Smoot-Hawley tariff in 1930. This law increased tariffs significantly on a wide variety of imported goods, creating the highest tariff rates in U.S. history. While economist Douglas Irwin has found that Smoot-Hawley’s effects were not as large as often thought, they still helped cause a decline in international trade, a decline that contributed to the worsening worldwide depression.

Most of these policies continued, and many expanded, throughout 1931, with the economy worsening each month. By the end of the year, Hoover decided that more drastic action was necessary, and on December 8, he addressed Congress and offered proposals that historian David Kennedy calls “Hoover’s second program,” and that have also been called “the Hoover New Deal.”11 His proposals included:

The Reconstruction Finance Corporation to lend tax dollars to banks, firms, and other institutions in need.

A Home Loan Bank to provide government help to the construction sector.

Congressional legalization of Hoover’s executive order that had blocked immigration.

Direct loans to state governments for spending on relief for the unemployed.

More aid to Federal Land Banks.

A Public Works Administration to better coordinate and expand federal public works.

More vigorous enforcement of antitrust laws to end “destructive competition” in a variety of industries, as well as support for work-sharing programs that would supposedly reduce unemployment.

On top of these spending proposals, most of which were approved in one form or another, Hoover proposed, and Congress approved, the largest peacetime tax increase in U.S. history. The Revenue Act of 1932 increased personal income taxes dramatically and also brought back a variety of excise taxes that had been used during World War I. The standard income-tax rate rose from a range of 1.5 to 5% to a range of 4 to 8%. On top of that increase, the Act placed a large surtax on higher-income earners, pushing the top rate from 25% to 63%. The Act also raised the corporate income tax, along with several taxes on other forms of income and wealth.
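Footnote 12 notes that these higher rates further depressed income-earning activity, shrinking the tax base and, with it, 1932 revenues. The sketch below illustrates that mechanism in stylized form; the responsiveness and base parameters are invented for illustration, not estimates from the article.

    # Stylized illustration: a higher tax rate can yield lower revenue
    # when the taxable base shrinks in response. Parameters are hypothetical.
    def tax_base(rate, base_at_zero=100.0, responsiveness=3.5):
        """Taxable income falls as the rate rises (simple linear response)."""
        return max(base_at_zero * (1 - responsiveness * rate), 0.0)

    def revenue(rate):
        return rate * tax_base(rate)

    old_rate, new_rate = 0.05, 0.25  # illustrative average rates
    print(f"Revenue at {old_rate:.0%}: {revenue(old_rate):.3f}")  # 0.05 * 82.5 = 4.125
    print(f"Revenue at {new_rate:.0%}: {revenue(new_rate):.3f}")  # 0.25 * 12.5 = 3.125

Whether an actual rate increase lands on the revenue-losing side of this trade-off is an empirical question; the footnoted claim is that the 1932 increase did.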

Whether or not Hoover’s prescriptions were the right medicine—and the evidence suggests that they were not—his programs were a fairly aggressive use of government to address the problems of the depression.12 These programs were hardly what one would expect from a man devoted to “laissez-faire” and accused of doing nothing while the depression worsened.

The views of contemporaries and modern historians

The myth of Hoover as a defender of laissez-faire persists, despite the fact that his contemporaries clearly understood that he made aggressive use of government to fight the recession. Indeed, Hoover’s own statements made clear that he recognized his aggressive use of intervention. The myth also persists in spite of the widespread recognition by modern historians that the Hoover presidency was anything but an era of laissez-faire.

According to Hoover’s Secretary of State, Henry Stimson, Hoover argued that balancing the budget was a mistake: “The President likened it to war times. He said in war times no one dreamed of balancing the budget. Fortunately we can borrow.”13 Hoover himself summarized his administration’s approach to the depression during a campaign speech in 1932:

We might have done nothing. That would have been utter ruin. Instead, we met the situation with proposals to private business and the Congress of the most gigantic program of economic defense and counter attack ever evolved in the history of the Republic. These programs, unparalleled in the history of depressions of any country and in any time, to care for distress, to provide employment, to aid agriculture, to maintain the financial stability of the country, to safeguard the savings of the people, to protect their homes, are not in the past tense—they are in action. . . . No government in Washington has hitherto considered that it held so broad a responsibility for leadership in such time.14

Some might dismiss this as campaign rhetoric, but as the other evidence indicates, Hoover was giving an accurate portrayal of his presidency. Indeed, Hoover’s profligacy was so clear that Roosevelt attacked it during the 1932 Presidential campaign.

Roosevelt’s own advisors understood that much of what they created during the New Deal owed its origins to Hoover’s policies, going as far back as his time at the Commerce Department in the 1920s. Thus the quote at the start of this article by Rex Tugwell, one of the academics at the center of FDR’s “brains trust.” Another member of the brains trust, Raymond Moley, wrote of that period:

When we all burst into Washington . . . we found every essential idea [of the New Deal] enacted in the 100-day Congress in the Hoover administration itself. The essentials of the NRA [National Recovery Administration], the PWA [Public Works Administration], the emergency relief setup were all there. Even the AAA [Agricultural Adjustment Act] was known to the Department of Agriculture. Only the TVA [Tennessee Valley Authority] and the Securities Act was [sic] drawn from other sources. The RFC [Reconstruction Finance Corporation], probably the greatest recovery agency, was of course a Hoover measure, passed long before the inauguration.15

Decades later, Tugwell, writing to Moley, said of Hoover: “[W]e were too hard on a man who really invented most of the devices we used.”16 Members of Roosevelt’s inner circle would have every reason to disassociate themselves from the policies of their predecessor; yet these two men recognized Hoover’s role as the father of the New Deal quite clearly.

Nor is this point lost on contemporary historians. In his authoritative history of the Great Depression era, David Kennedy admiringly wrote that Hoover’s 1932 program of activist policies helped “lay the groundwork for a broader restructuring of government’s role in many other sectors of American life, a restructuring known as the New Deal.”17 In a later discussion of the beginning of the Roosevelt administration, Kennedy observed (emphasis added):

Roosevelt intended to preside over a government even more vigorously interventionist and directive than Hoover’s. . . . [I]f Roosevelt had a plan in early 1933 to effect economic recovery, it was difficult to distinguish from many of the measures that Hoover, even if sometimes grudgingly, had already adopted: aid for agriculture, promotion of industrial cooperation, support for the banks, and a balanced budget. Only the last was dubious. . . . FDR denounced Hoover’s budget deficits.18

Conclusion

Despite overwhelming evidence to the contrary, from Hoover’s own beliefs to his actions as president to the observations of his contemporaries and modern historians, the myth of Herbert Hoover’s presidency as an example of laissez-faire persists. Of all the presidents up to and including him, Herbert Hoover was one of the most active interveners in the economy.


About the Author

Steven Horwitz is Distinguished Professor of Free Enterprise at Ball State University in Muncie, Indiana.


Footnotes

* This entry is adapted, with permission, from Steven Horwitz, “Herbert Hoover: Father of the New Deal,” Cato Institute Briefing Papers, No. 122, September 29, 2011, at: http://www.cato.org/publications/briefing-paper/herbert-hoover-father-new-deal

1. As quoted in Amity Shlaes, The Forgotten Man: A New History of the Great Depression. New York: Harper Collins, 2007, p. 149.

2. Murray N. Rothbard, America’s Great Depression (1963; Auburn, AL: Ludwig von Mises Institute, 2008), p. 192.

3. David M. Kennedy, Freedom From Fear: The American People in Depression and War, 1929-1945. New York: Oxford University Press, 1999, p. 48.

4. See Steven Malanga, “Obsessive Housing Disorder,” City Journal, 19 (2), Spring 2009.

5. Federal government spending data can be found at: http://www2.census.gov/prod2/statcomp/documents/CT1970p2-12.pdf

6. See the data and discussion in Jonathan Hughes and Louis P. Cain, American Economic History, 7th ed., Boston: Pearson, 2007, p. 487. Hughes and Cain also note of those deficits, “The expenditures were in large part the doing of the outgoing Hoover administration.”

7. See Kennedy, op. cit., pp. 43-44; Rothbard, op. cit., p. 228; and Gene Smiley, Rethinking the Great Depression, Chicago: Ivan R. Dee, 2002, p. 13.

8. See Lee Ohanian, “What – or Who – Started the Great Depression?” Journal of Economic Theory 144, 2009, pp. 2310-2335.

9. Chuck Baird, “Freeing Labor Markets by Reforming Union Laws,” June 2011, Downsizing DC, Cato Institute, available at http://www.downsizinggovernment.org/labor/reforming-labor-union-laws.

10. See “White House Statement on Government Policies To Reduce Immigration,” March 26, 1931, available at http://www.presidency.ucsb.edu/ws/index.php?pid=22581#axzz1V7klWwZu. That statement opens with an explicit link between the immigration policy and unemployment: “President Hoover, to protect American workingmen from further competition for positions by new alien immigration during the existing conditions of employment, initiated action last September looking to a material reduction in the number of aliens entering this country.”

11. Kennedy, op. cit., p. 83. The phrase “Hoover’s New Deal” is from the title of chapter 11 in Rothbard, op. cit.

12. Hoover’s higher tax rates backfired, as they further depressed income-earning activity, reducing the tax base, which in turn led to a fall in tax revenues for 1932.

13. As cited in Kennedy, op. cit., p. 79.

14. Herbert Hoover, “Address Accepting the Republican Presidential Nomination,” August 11, 1932.

15. Raymond Moley, “Reappraising Hoover,” Newsweek, June 14, 1948, p. 100.

16. Letter from Rexford G. Tugwell to Raymond Moley, January 29, 1965, Raymond Moley Papers, “Speeches and Writings,” Box 245-49, Hoover Institution on War, Revolution and Peace, Stanford University, Stanford, CA, as cited in Davis W. Houck, “Rhetoric as Currency: Herbert Hoover and the 1929 Stock Market Crash,” Rhetoric & Public Affairs 3, 2000, p. 174.

17. Kennedy, op. cit., p. 83.

18. Kennedy, op. cit., p. 118.

 


Here are the latest posts from Econlib.

Econlib June 22, 2019

Macro theory for all times and all places

In 1936, John Maynard Keynes came up with a macro model that was a product of its time. That’s the wrong way to do macro. Models should be based on the empirical facts of all countries and all time periods. If your macro model cannot explain why the US experienced a major deflation (with NGDP […]

Econlib June 21, 2019

Does the Fed set monetary policy in Hong Kong?

David Beckworth recently conducted an excellent interview of Mike Bird, which is full of fascinating observations. At one point they were discussing the Hong Kong currency board, which since 1983 has fixed the exchange rate between the Hong Kong and US dollars.  This exchange caught my eye: Beckworth: Yeah. So, interestingly, the Fed sets monetary […]

Econlib June 21, 2019

Henderson on Uwe Reinhardt’s Last Book

Uwe Reinhardt was a well-known health economist at Princeton University who died in 2017. An outspoken advocate of government regulation of health insurance, he helped design the single-payer system adopted by Taiwan’s government. Reinhardt’s last book is Priced Out: The Economic and Ethical Costs of American Health Care. In it, he argues that U.S. health […]

Econlib June 20, 2019

Murray on the Prevalence of “Poverty”

Charles Murray‘s Losing Ground contains a most surprising claim: [P]overty did not simply climb upward on our national list of problems; it abruptly reappeared from nowhere.  In the prologue to this book, 1950 was described as a year in which poverty was not part of the discourse about domestic policy – indeed, as a year in […]

Econlib June 20, 2019

Henderson on Cowen on Big Business

Tyler Cowen’s latest book, Big Business: A Love Letter to an American Anti-Hero, is excellent. Cowen, an economics professor at George Mason University, makes a strong evidence-based case that big business in America is an important—probably the most important—contributor to our well-being. In a heavily footnoted book with references to scores of high-quality articles and […]

Econlib June 20, 2019

The Trump administration’s advice to Malaysia

In a recent post I discussed John Cochrane’s reaction to the Treasury department’s criteria for “currency manipulation”. John’s post included a link to the report, which contains some quite odd recommendations that raise additional question marks. Consider the following: Malaysia’s external rebalancing in recent years is welcome, and the authorities should pursue appropriate policies to […]

Econlib June 19, 2019

Giving USA Gets the Incentives Half Right

  “After reaching record-breaking levels of giving in 2017, American individuals and organizations continued their generous support of charitable institutions in 2018,” said Rick Dunham, chair of Giving USA Foundation and CEO of Dunham Company. “However, the environment for giving in 2018 was far more complex than most years, with shifts in tax policy […]

Econlib June 18, 2019

Ominous News from the San Francisco Fed

And you thought the Fed was just about monetary policy. The Federal Reserve Bank of San Francisco wants banks to get extra credit for making loans that help communities adapt to climate change and prepare for future natural disasters. A paper released on Monday by researchers at the San Francisco Fed argues that banks should […]
