Econlib

The Library of Economics and Liberty is dedicated to advancing the study of economics, markets, and liberty. Econlib offers a unique combination of resources for students, teachers, researchers, and aficionados of economic thought.

Econlib publishes three to four new economics articles and columns each month, with the latest made available to the public on the first Monday of the month.

All Econlib articles and columns are written exclusively for the Library of Economics and Liberty, on a wide range of economics topics, by renowned professors, researchers, and journalists worldwide. All articles and columns remain online, free of charge, for public readership. Many are discussed in concurrent comments and debate on our blog EconLog.

EconLog

The Library of Economics and Liberty features the popular daily blog EconLog. Bloggers Bryan Caplan, David Henderson, Alberto Mingardi, Scott Sumner, Pierre Lemieux and guest bloggers write on topical economics of interest to them, illuminating subjects from politics and finance, to recent films and cultural observations, to history and literature.

EconTalk

The Library of Economics and Liberty carries the podcast EconTalk, hosted by Russ Roberts. The weekly talk show features one-on-one discussions with an eclectic mix of authors, professors, Nobel laureates, entrepreneurs, leaders of charities and businesses, and people on the street. The emphasis is on using topical books and the news to illustrate economic principles; exploring how economics emerges in practice is a primary theme.

CEE

The Concise Encyclopedia of Economics features authoritative editions of classics in economics, and related works in history, political theory, and philosophy, complete with definitions and explanations of economics terms and ideas.

Visit the Library of Economics and Liberty

Recent Posts

Here are the 10 latest posts from EconLog.

EconLog May 26, 2019

McArdle’s Confusion About Costs of Inputs, by David Henderson

One of the best economic journalists in the United States is Megan McArdle of the Washington Post. That makes her error in a recent WaPo article all the more striking. Don Boudreaux at Cafe Hayek has pointed out the error. But I want to do my own analysis because it’s a more general error that I see people make, and the first time I saw a friend make it was when I was 20 and really starting to “get” economics. Coincidentally, the error my friend made was when we were talking about taxicab economics and he was a cab driver.

In her analysis of the market for Uber and Lyft, here’s the key paragraph in which McArdle veers into bad economics:

The companies’ [Uber’s and Lyft’s] problems essentially boil down to this: The barriers to entry into the driving-people-around business are functionally nil. Whenever the profits in the market rise above a subsistence wage, more drivers will enter, thus competing those profits away.

Notice the last sentence, which is absolutely true as long as we specify what’s being held constant. Whenever the wages rise above the wages drivers could make in other uses of their time, then, if the other components of the job (driving versus other uses) are the same, that will cause more drivers to enter.

But that’s not a problem for Uber and Lyft. That’s good for them. All else equal, Uber and Lyft do better when drivers enter because it keeps down labor costs and gives the two companies more of a chance to make money. Uber and Lyft are not in the driving business; they’re in the transportation business, and drivers’ services are inputs into the production of transportation.

McArdle’s next paragraph proceeds along this erroneous path:

That was the problem taxi medallions had been designed to solve. Drivers still didn’t make much money, because all you needed to get started in the business was a driver’s license. But the people who owned the right to drive were able to make a tidy living, with almost no downside risk. That’s why New York City taxi medallions got so valuable: Two decades of low interest rates made them an attractive alternative to bonds offering yields in the low single digits.

Her thinking here is that restricting the number of cabs helped cab drivers. This is precisely wrong. Restricting the number of cabs hurt cab drivers but helped owners of medallions; the restrictions caused the medallions to be an artificially scarce resource.

To see the error more clearly, imagine–perish the thought–that the government handed out a certain number of “newspaper medallions” and that that restricted the number of newspapers. How would we know that it restricted them? By noticing that the price of a newspaper medallion was greater than zero. By McArdle’s reasoning, this should help newspaper columnists like her. But it wouldn’t. The restriction would reduce the demand for newspaper columnists and thus reduce the amount newspaper columnists are paid.

Back in 1970, when I was starting to get economics, a cab driver friend of mine in Winnipeg said that if the restriction on cabs were lifted, cab drivers would make less money. I made the same argument then that I’m making here. He didn’t have one of the scarce licenses (they didn’t call them medallions in Winnipeg), and so he would have lost no wealth once the restriction was lifted but would have gained as a driver. Of course, I’m assuming that the supply of labor was not, and in the current case is not, perfectly elastic (horizontal). So an increase in the demand for cab driver labor would drive up the price (wage) somewhat.
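
To see the quota logic in numbers, here is a minimal Python sketch; the linear supply and demand curves and all dollar figures are invented for illustration, not estimates of any actual taxi market:

    # Made-up linear curves for a city's taxi labor market.
    def demand_wage(L):   # what the L-th driver's work is worth to riders
        return 30 - 0.002 * L

    def supply_wage(L):   # wage needed to attract the L-th driver
        return 10 + 0.002 * L

    # Free entry: 30 - 0.002L = 10 + 0.002L  =>  L = 5000 drivers at $20/hr.
    L_free = 5000
    w_free = supply_wage(L_free)

    # A medallion quota caps the market at 3000 drivers. Competition for
    # the 3000 jobs bids the driver wage down to the supply price, and the
    # gap between demand price and wage becomes the medallion owner's rent.
    L_quota = 3000
    w_driver = supply_wage(L_quota)          # $16/hr paid to drivers
    rent = demand_wage(L_quota) - w_driver   # $8/hr per cab to medallion owners

    print(f"free entry: {L_free} drivers at ${w_free:.0f}/hr")
    print(f"with quota: {L_quota} drivers at ${w_driver:.0f}/hr, "
          f"plus ${rent:.0f}/hr of medallion rent")

Lifting the quota in this sketch raises the driver wage from $16 to $20 and wipes out the rent, which is exactly the point: the restriction helps medallion owners, not drivers.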

I pointed out this reasoning, by the way, in my biography of David Ricardo in The Concise Encyclopedia of Economics. See the last paragraph of the bio.

The application to farmers versus owners of farm land was a point that Thomas Hazlett made very clearly in the 1980s in an article or book review either in Reason or in the Wall Street Journal. I’ve forgotten which.


EconLog May 26, 2019

Fiscal austerity in Japan, by Scott Sumner

Because history is written by Keynesians, many people have a highly distorted view of fiscal policy.  They attribute the double-dip Eurozone recession (2011-13) to fiscal austerity, whereas it was actually caused by an extremely tight money policy at the ECB.  Or they think the Great Inflation of 1966-81 was triggered by an expansionary fiscal policy, whereas budget deficits during the 1960s were actually quite small.  And few people know that Japan’s long deflation (1993-2012) was accompanied by one of the most expansionary fiscal policies in all of world history (for a major peacetime economy).

Fortunately, Prime Minister Abe put an end to this fiscal madness when he took office at the beginning of 2013, and the Japanese economy has done better without the crazy levels of fiscal stimulus.  (Ironically, some now want to try something similar to the Japanese fiscal expansion of 1993-2012 in the US, despite the deflationary outcome in Japan.)

The Economist has an article discussing the Japanese sales tax increase scheduled for this October:

Among the pledges Shinzo Abe made in 2012, as he started his second stint as Japan’s prime minister, was to double the sales-tax rate. At 5% it was low by rich-country standards, and Japan’s public finances, battered by years of deficits, needed shoring up. But having gone part-way, to 8%, in 2014, he has twice put off finishing the job for fear of choking off a tentative economic recovery. That increase is now scheduled for October, and he is loth to delay a third time—so much so that he has said that only “an event with the magnitude of the Lehman Brothers shock” would deter him.

Everyone agrees that a higher sales tax is needed, but they differ on the wisdom of a speedy move. The previous hike provoked a sharp downturn. Now fresh signs of economic weakness are leading to fears of a repetition.

As is often the case, this article presents a misleading picture of the effect of fiscal austerity.  It’s not completely wrong; GDP growth did plunge sharply in the quarter immediately after Japan raised the national sales tax from 5% to 8%, but this was mostly due to Japanese consumers timing their purchases to avoid the tax.  It had little to no impact on the labor market.

There are good arguments on both sides of the sales tax issue.  I’d like to see them raise the tax to 10% in order to further reduce the budget deficit.  But there’s also an argument that more tax revenue would merely lead to more government spending.  On the other hand, it is not a good argument to claim that the tax increase would “provoke a sharp downturn.”  The labor market did fine in 2014 and it would continue to do fine after an October rate increase, as long as the BOJ doesn’t bungle monetary policy.

Because Japan has a falling population, their trend rate of RGDP growth is very low.  In recent years, the economy has occasionally experienced a negative quarter or two of GDP growth.  But these random fluctuations are not “recessions” in any meaningful sense of the term, as the labor market is not affected, even if they are called recessions in the media.

If you look at the unemployment rate graph, you can see 4 actual recessions, in 1993, 1998, 2001 and 2008-09.  In each case, the unemployment rate increased significantly.  That did not happen in 2014.  And the 1998 and 2001 recessions occurred despite highly expansionary fiscal policies.  Contrary to the claims of Keynesians, fiscal stimulus is no cure for “secular stagnation”.  And Abe’s policy of fiscal austerity has produced the strongest Japanese labor market in a quarter century.

We are told that voters hate inflation and hate tax increases, but Abe keeps winning landslide elections promising higher inflation and higher sales taxes.


EconLog May 24, 2019

Socialist Fantasies, by Sarah Skwire

Ludwig von Mises’s essay “Economic Calculation in the Socialist Commonwealth” references Aristophanes’ play The Birds and the medieval fantasy of the idyllic and work-free Land of Cockaigne when Mises notes of socialist planners that “Economics as such figures all too sparsely in the glamorous pictures painted by the Utopians. They invariably explain how, in the cloud-cuckoo lands of their fancy, roast pigeons will in some way fly into the mouths of the comrades, but they omit to show how this miracle is to take place.”[1]  Don Lavoie similarly points to the science-fictional/fantastical aspect of socialist planning discussions when he comments in Rivalry and Central Planning that “Details of future social life are not the province of economic science but of speculative literature.”[2]

Certainly one feels the fantasy novel being written in Michael Albert and Robert Hahnel’s article “Participatory Planning,” as they outline the precisely thought-out details of how their version of socialist planning will function...

EconLog May 23, 2019

The confusing terminology of monetary policy, by Scott Sumner

As if monetary policy is not confusing enough, the terminology is also ambiguous, with terms used in inconsistent ways. For instance, is the Fed targeting interest rates, or is it targeting inflation?  Consider the following flow chart, showing two possible monetary policy targets.  At the bottom you have the actual tools that the central bank can use to influence policy.  In the middle you have flexible market prices that they may try to control, with the long-run objective of stabilizing growth in NGDP or the price level (on top).

The term ‘instrument’ is sometimes used for variables that the central bank directly controls, like the monetary base, and at other times is used to describe the intermediate target of these open market operations, say the fed funds rate.  The term ‘target’ is sometimes used to describe the intermediate target of policy (say interest rates or exchange rates) and at other times is used to describe the goal of policy, say inflation or NGDP.
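
Since the flow chart itself is not reproduced here, the following short sketch reconstructs its three layers from the description above (my reconstruction, not Sumner’s original figure):

    # Policy flows from tools the central bank controls directly (bottom),
    # through flexible market prices it may try to steer (middle), to the
    # ultimate objectives (top). The ambiguous terms 'instrument' and
    # 'target' each get used for more than one layer.
    policy_chain = {
        "goals (top)":           ["price level", "NGDP growth"],
        "intermediate (middle)": ["fed funds rate", "exchange rate"],
        "tools (bottom)":        ["monetary base", "open market operations"],
    }
    for layer, examples in policy_chain.items():
        print(f"{layer:<24}{', '.join(examples)}")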

EconLog May 22, 2019

“Socialism”: The Provocative Equivocation, by Bryan Caplan

The socialists are back, but is it a big deal?  It’s tempting to say that it’s purely rhetorical.  Modern socialists don’t want to emulate the Soviet Union.  To them, socialism just means “Sweden,” right?  Even if their admiration for Sweden is unjustified, we’ve long known that the Western world contains millions of people who want their countries to be like Sweden.  Why should we care if Sweden-fans rebrand themselves as “socialists”?

My instinctive objection is that even using the term “socialism” is an affront to the many millions of living victims of Soviet-style totalitarian regimes.  Talking about “socialism” understandably horrifies them.  Since there are plenty of palatable synonyms for Swedish-type policies (starting with “Swedenism”!), selecting this particular label seems a breach of civility.

If this seems paranoid, what would you say about a new movement of self-styled “national socialists”?  Even if their policy positions were moderate, this brand needlessly terrifies lots of folks who have already suffered enough.

EconLog May 22, 2019

“Things have changed”, by Scott Sumner

The Wall Street Journal describes the views of Judy Shelton, one of the names mentioned for a position on the Fed’s Board of Governors:

She wrote critically in the weeks before that election about how the Fed’s low-rate policies were boosting wealthy investors and corporations at the expense of working Americans and retirees with fixed incomes.

On Tuesday, Ms. Shelton said she is no longer concerned about such perils because she believes the administration’s fiscal policies have boosted growth and productivity.

“Things have changed,” Ms. Shelton said in an interview with The Wall Street Journal, reconciling her earlier views with Mr. Trump’s current call for lower rates. She pointed to Mr. Trump’s tax and regulatory policies that she said have boosted growth without raising inflation as an example of a much-needed tool for supporting economic growth.

Higher economic growth is generally associated with higher interest rates, so I’m not sure I follow this argument.  This sort of reasoning seems extremely discretionary, and not in a good way. I favor a rules-based approach to monetary policy.  Yes, “things have changed”, but in a direction calling for higher interest rates.

EconLog May 21, 2019

George Warren, market monetarist, by Scott Sumner

Market monetarists like myself have criticized the “wait and see” approach used by many macroeconomists. This refers to the tendency of economists to watch how the economy plays out over time, after a new policy initiative. This technique is not reliable, as the economy is constantly being buffeted by all sorts of shocks, and it’s not easy to isolate the impact of any one shock. Instead, the immediate impact of policy announcements on market forecasts provides the best indication of whether stabilization policy is on target or off course.

In 1933, FDR gradually depreciated the dollar in order to boost prices and output. As the year progressed, opposition to the program increased. Even Keynes (who initially supported the policy) called for a halt in the dollar depreciation policy toward the end of 1933.  FDR’s critics made the mistake of assuming that we needed to “wait and see” how the policy would impact the economy.

In fact, FDR should have depreciated the dollar even further. One economist who did understand this was George Warren, the FDR advisor who had advocated the policy.  In my book on the Great Depression, I drew some parallels between this episode and modern policy errors:

The distinction between flexible (commodity) prices and a sticky overall price level is crucial to any understanding of Roosevelt’s policy. For instance, when Roosevelt decided to formally devalue the dollar in January 1934, many prominent economists such as E.W. Kemmerer predicted runaway inflation. Prices did rise modestly but remained well below pre-Depression levels throughout the 1930s. Pearson, Myers, and Gans quote Warren’s notes to the effect that when the summer of 1934 arrived without substantial increases in commodity prices:

The President (a) wanted more inflation and (b) assumed or had been led to believe that there was a long lag in the effect of depreciation. He did not understand—as many others did not then and do not now—the principle that commodity prices respond immediately to changes in the price of gold. (1957, p. 5664)

Warren understood that commodity markets in late January 1934 had already priced in the anticipated impact of the devaluation, and that commodity price indices were signaling that a gold price of $35/ounce was not nearly sufficient to produce the desired reflation. Most modern macroeconomists continue to make this mistake, taking a “wait and see” attitude toward initiatives such as “quantitative easing,” whereas the inflation expectations embedded in the Treasury Inflation-Protected Securities (TIPS) markets provided immediate evidence that the policy was nowhere near sufficient.

During 2009-10, many conservatives wrongly predicted that QE would be highly inflationary, and in 2012 many Keynesians wrongly predicted that the 2013 fiscal austerity would be highly contractionary.  Both groups could have avoided their errors by looking at market forecasts.
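
The market forecast in question can be read off bond prices with simple arithmetic: the “breakeven” inflation rate is roughly the nominal Treasury yield minus the TIPS yield of the same maturity. A minimal sketch, with invented yields rather than actual 2009 data:

    nominal_10yr = 0.033   # nominal 10-year Treasury yield (assumed)
    tips_10yr    = 0.018   # real 10-year TIPS yield (assumed)

    # Breakeven inflation: the inflation rate at which the two bonds
    # would pay the same, i.e. the market's implied inflation forecast.
    breakeven = nominal_10yr - tips_10yr
    print(f"market-implied expected inflation: {breakeven:.1%} per year")

    # If the central bank wants 2% inflation, a breakeven stuck well below
    # 2% is immediate evidence that policy is too tight; no need to wait.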

The majority of economic historians seem to view George Warren as a bit of a crackpot.  In fact, he was well ahead of the profession, and still is.

In macroeconomics, there’s (almost) nothing new under the sun, including market monetarism.


EconLog May 21, 2019

An Alternative Perspective on Anglo-American Economic Liberties, by Garreth Bloor

As economic freedom gains traction in different parts of the world, with the most marked successes of late in Asia, understanding how markets come about is tied closely to understanding limitations on arbitrary executive power.

Michael Patrick Leahy’s Covenant of Liberty: The Ideological Origins of the Tea Party Movement assesses the emergence of the Tea Party movement in the United States, which gained steam in 2008. Leahy argues that the US draws on a concept from the early North American colonies that merits a study of its own: that the English tradition of liberty in the colonists’ then-ancestral homeland was best exemplified in the period before the Norman Conquest of 1066.

EconLog May 21, 2019

GOT’s final season may have been disappointing, but not on politics, by Alberto Mingardi

There are a number of remarkable things about Game of Thrones. One is of course how millions of people are, synchronously, watching the series’ ending. This sort of collective TV viewing was once reserved for big sports matches, or perhaps for a few great rock concerts, like Live Aid.

Many people have commented on the last episodes, which are full of plot faults. Still, at least there is no Jar Jar Binks or young Anakin spoiling the franchise. The finale has disappointed many, but it made more sense than most – for reasons Ilya Somin, my GOT guru among many other things, highlights here.

One interesting twist in the last episodes is that they wanted (or perhaps needed, as this is the gist of the story) to get back to what made many love the series and the books at first: they are all about power. (Spoiler) The rapid evolution of Daenerys Targaryen from heroine to mad queen aimed to point to some sort of slippery slope in wanting power so badly and needing to consolidate it fast.

Power corrupts, and the thirst for power corrupts absolutely, if we may paraphrase Lord Acton.

There is much of that in Dany’s arc. It reminds us that power is something humans hold, and when they hold it they don’t stop being human.

Leaving aside all the problems with the plot (if a dragon is that powerful, why didn’t she use the other one so effectively before?), all her faults as a power holder can be boiled down to the fact that she is a fragile and fallible human being. She is prey to anxiety, insecurity, and anger. How does she react? Pretty much like all of us, but she has a dragon and an army, and that makes her reactions far more deadly than mine or yours.

At the end of the series, power is portrayed as a consuming passion, one that ruins a once well-intentioned girl. In the next-to-last episode, she becomes a mass murderer because she is weakened by paranoia, by a sense of urgency about grasping power before she enjoys a wide enough consensus, and by a very human desire for revenge. Her advisors have at times tried to prevent these developments, but they end up having conflicts of loyalty, allegiance to the crown being terribly demanding. On the other hand, particularly in the final episode, and particularly in Tyrion’s marvelous speech to Jon (one of the great, genuine exercises in political philosophy TV has ever given us), another theme emerges, or better, comes back.

Dany’s evolution has its roots in something older and deeper in the series. Part of her appeal (her appeal to fellow characters and to us watchers!) is due to the fact that she promised to liberate people. She wanted to break the wheel. She is a revolutionary leader bound to provide us justice on earth (well, or wherever Westeros is). And to do so she is ruthless and fully confident that such a noble end justifies any means. So she organized the crucifixion of the Great Masters of Meereen, and so she slaughters the people of King’s Landing.

To the watcher, the two things may look different. Perhaps Elizabeth Warren would have approved of her favorite character’s actions in the first instance (the bad guys were all awful slave owners), but not in the second (civilians, as we know well, have zero say in whatever happens in the Seven Kingdoms). Still, for Daenerys the two things are pretty much the same, so strong is her identification with justice and the dream of a better world.

On the one hand, this is one of the rare glimpses of modern politics in GOT. The story is set in a consistently pre-modern world: it is all about medieval politics, the notion of honor commonly upheld is consistently aristocratic (no time for honoring bourgeois virtues), and allegiances are basically a puzzle of mutual, personal loyalties. In shaping Daenerys, Martin needed something that empowered such a character to advance her claim to the Iron Throne, knowing that, though the daughter of the last king of the legitimate dynasty, she is in many ways an outsider. She was raised far away from court; she does not speak the language of kings and courtiers. And she is facing so many difficulties that she needs something stronger and purer than ambition to propel her. Here comes this version of politicized millenarianism. In a sense, Daenerys is far more honest than the revolutionary leaders we have known in the past: at least she is openly and clearly equating the triumph of justice with her personal triumph.

On the other hand, GOT proved good in pointing to some sort of iron law of power. Every brutality becomes, in the game of thrones, just a move necessary to bring about another. In themselves, none of the characters are actually that inhumane: even Cersei cares for her kids, as Tyrion reminds her (to no avail). But once they are in the game, they are influenced by “the wheel”, which turns them into monsters. The breaker of the wheel included.

In a sense, two images from the final episode are particularly telling. One is Drogon melting away the Iron Throne. Sure, it’s an animal’s rage. But in a sense, the kid knows what killed his mother, her obsession with gaining power, and metes out justice to it. The other is Bran being picked as king. No longer quite human, Bran is the closest thing to an omniscient being the series has produced. On top of that, his evolution into the three-eyed raven has apparently rid him of anything remotely akin to lust, lust for power included. So, here comes a (temporary) solution to all of Westeros’s political problems: a genuinely “neutral power” in royalty and a council of wise men who, changed by war and violence, will avoid more of both.

How long that can last before all-too-human passions prevail again is impossible to say. But GOT is perhaps the greatest attempt to remind the younger generations that humans tend to abuse power whenever they have it. In that respect, for all its flaws, I think it is noteworthy.

At the same time, our difficulty (mine included) in accepting the ending of the story, without Jon becoming king or Dany redeeming herself, and with the colorless Bran actually gaining the crown, is interesting too. I suppose the thing most watchers liked best was the triumph of Sansa, the most skillful political operator alive. The problem is that stories are so much about characters and personalities because we want and cherish heroes, the good guys smashing the bad guys, et cetera. And when a story ends with an – admittedly weird – attempt at de-personalizing power, creating a new institutional setting to solve problems, we are disappointed, because we are hard-wired to look for leadership. But it may well be that heroes and peace, or leaders and political stability, don’t really go well together.


EconLog May 20, 2019

Letter from an “Anti-School Teacher”, by Bryan Caplan

I recently received this email from a self-styled “anti-school teacher.”  Reprinted unchanged with permission of the author, Samuel Mosley.


Dear Professor Caplan,

My name is Samuel Mosley. I studied economics at Beloit College, where my advisor was a former graduate student of yours, Laura Grube.

I recently read The Case Against Education and it explained so much of what I see. Like many new graduates who do not know exactly what they want to do but want to do something that helps people, I became a teacher right after college. I have spent the last year teaching math at a high school in Chicago. Observing how unlikely it was that the decisions we make increase our students’ human capital, I wondered how school could be of benefit to the students. Your book helped me answer that question.

I was swayed to believe that education is overfunded. I began to view every decision made by my boss with the question “is this to add to our students’ human capital or to their signaling value?” Looking at the school from this framework, I have come to suspect that education is best understood as a game theory problem. Often, my bosses are faced with options where one option would be better for the students’ human capital and another would help the student send a more functional signal. The school I teach at invests time in signals (like AP Calculus) rather than in classes that would do more to enrich our students’ lives by cultivating their human capital (like AP Statistics). Because every school can choose to signal, we arrive at a Nash equilibrium in which students at none of the schools acquire human capital and the schools’ decisions to signal cancel each other out.

Assume schools can either set the average grade to B or C. Schools that set the average grade to C have higher standards, so students from those schools graduate college at a higher rate. Assume also that college admissions officers do not have perfect information about the standards of each high school, so they admit students from schools where a B is the average grade more often than students from schools where a C is the average grade.

Now, say Theoryville College only admits students from Row High School and Column High School. There are only 1000 spots available. Students from Row and Column only apply to Theoryville. Both schools have 1000 seniors. Theoryville accepts students evenly from the two schools if both have similar standards. If one school chooses lower standards (B), 700 of its students will get in and 300 from the other school. 45 percent of students from low-standards schools graduate college, while 55 percent of students from high-standards schools graduate. Assume the utility function for both high schools is the number of its students who complete college, with no penalty for having students go to college and leave degreeless in debt.  So, the game matrix can be expressed:

            Column: C     Column: B
Row: C      275, 275      165, 315
Row: B      315, 165      225, 225

This simple prisoners’ dilemma does not seem immediately relevant to the human capital vs. signaling debate, and it does not address the question of whether or not college builds human capital. I chose college completion as the utility function for simplicity. Schools of standard C produce graduates who are more ready for college. Schools of standard B produce graduates who appear more college-ready. Replace the idea of college readiness with “human capital,” and this becomes relevant. Signaling has become more profitable to schools, so they invest resources in signals when they could invest resources in human capital. This is a different claim from the claim that schools cannot produce human capital. My time at this job has convinced me that prisoners’ dilemmas like this one exist for course offerings, course placement, pass rates, and a number of other decisions schools face.
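
As a quick check on the arithmetic (a sketch added here for illustration, not part of the original letter), the following Python snippet recomputes the payoff matrix from the stated assumptions and shows why lax grading (B) is a dominant strategy:

    # 1000 seniors per school; Theoryville admits 1000 in total.
    # Same standards: 500 admitted from each school.
    # Different standards: 700 from the easy-grading (B) school, 300 from C.
    GRAD_RATE = {"C": 0.55, "B": 0.45}   # college completion rates

    def admitted(own, other):
        if own == other:
            return 500
        return 700 if own == "B" else 300

    def payoff(own, other):
        # utility = number of the school's students who finish college
        return round(admitted(own, other) * GRAD_RATE[own])

    for row in ("C", "B"):
        print([(payoff(row, col), payoff(col, row)) for col in ("C", "B")])
    # -> [(275, 275), (165, 315)]
    #    [(315, 165), (225, 225)]
    # B beats C against either opponent (315 > 275 and 225 > 165), so both
    # schools choose B and land at (225, 225): a prisoners' dilemma.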

Do you think it’s at all likely that schools would be better human capital factories given an incentive structure that accounts for the game theory problem? Do you think game theory is a useful framework for this problem?

Sincerely,

Samuel Mosley


Here are the 10 latest posts from EconTalk.

EconTalk May 27, 2019

David Epstein on Mastery, Specialization, and Range

Journalist and author David Epstein talks about his book Range with EconTalk host Russ Roberts. Epstein explores the costs of specialization and the value of breadth in helping to create mastery in our careers and in life. What are the best backgrounds for solving problems? Can mastery be achieved without specialization at a young age? […]


EconTalk May 20, 2019

Mary Hirschfeld on Economics, Culture, and Aquinas and the Market

Author, economist, and theologian Mary Hirschfeld of Villanova University talks about her book, Aquinas and the Market, with EconTalk host Russ Roberts. Hirschfeld looks at the nature of our economic activity as buyers and sellers and whether our pursuit of economic growth and material well-being comes at a cost. She encourages a skeptical stance about […]

EconTalk May 13, 2019

Robert Burton on Being Certain

Neurologist and author Robert Burton talks about his book, On Being Certain, with EconTalk host Russ Roberts. Burton explores our need for certainty and the challenge of being skeptical about what our brain tells us must be true. Where does what Burton calls “the feeling of knowing” come from? Why can memory lead us astray? […]

EconTalk May 6, 2019

Mauricio Miller on Poverty, Social Work, and the Alternative

Poverty activist, social entrepreneur and author, Mauricio Miller, talks about his book The Alternative with EconTalk host Russ Roberts. Miller, a MacArthur genius grant recipient, argues that we have made poverty tolerable when we should be trying to make it more escapable. This is possible, he argues, if we invest in the poor and encourage […]

EconTalk April 29, 2019

Emily Oster on Cribsheet

Economist and author Emily Oster of Brown University talks about her book Cribsheet with EconTalk host Russ Roberts. Oster explores what the data and evidence can tell us about parenting in areas such as breastfeeding, sleep habits, discipline, vaccination, and food allergies. Oster often finds that commonly held views on some of these topics are […]

EconTalk April 22, 2019

Paul Romer on Growth, Cities, and the State of Economics

Nobel Laureate Paul Romer of New York University talks with EconTalk host Russ Roberts about the nature of growth, the role of cities in the economy, and the state of economics. Romer also reflects on his time at the World Bank and why he left his position there as Chief Economist.

EconTalk April 15, 2019

Jill Lepore on Nationalism, Populism, and the State of America

Historian and author Jill Lepore talks about nationalism, populism, and the state of America with EconTalk host Russ Roberts. Lepore argues that we need a new Americanism, a common story we share and tell ourselves. Along the way, topics in the conversation include populism, the rise of globalization, and the challenge of knowing what is […]


EconTalk April 8, 2019

Robin Feldman on Drugs, Money, and Secret Handshakes

Law professor and author Robin Feldman of UC Hastings College of the Law talks about her book Drugs, Money, and Secret Handshakes with EconTalk host Russ Roberts. Feldman argues that the legal and regulatory environment for drug companies encourages those companies to seek drugs that extend their monopoly through the patent system often with insufficient […]

EconTalk April 1, 2019

Jacob Stegenga on Medical Nihilism

Philosopher and author Jacob Stegenga of the University of Cambridge talks about his book Medical Nihilism with EconTalk host Russ Roberts. Stegenga argues that many medical treatments either fail to achieve their intended goals or achieve those goals with many negative side effects. Stegenga argues that the approval process for pharmaceuticals, for example, exaggerates benefits […]

EconTalk March 25, 2019

Daniel Hamermesh on Spending Time

Economist and author Daniel Hamermesh of Barnard College and the Institute for the Study of Labor talks about his latest book, Spending Time, with EconTalk host Russ Roberts. Hamermesh explores how we treat time relative to money, how much we work and how that has changed over time, and the ways economists look at time, work, and leisure.

Here are the 10 latest posts from CEE.

CEE March 13, 2019

Jean Tirole


In 2014, French economist Jean Tirole was awarded the Nobel Prize in Economic Sciences “for his analysis of market power and regulation.” His main research, in which he uses game theory, is in an area of economics called industrial organization. Economists studying industrial organization apply economic analysis to understanding the way firms behave and why certain industries are organized as they are.

From the late 1960s to the early 1980s, economists George Stigler, Harold Demsetz, Sam Peltzman, and Yale Brozen, among others, played a dominant role in the study of industrial organization. Their view was that even though most industries don’t fit the economists’ “perfect competition” model—a model in which no firm has the power to set a price—the real world was full of competition. Firms compete by cutting their prices, by innovating, by advertising, by cutting costs, and by providing service, just to name a few. Their understanding of competition led them to skepticism about much of antitrust law and most government regulation.

In the 1980s, Jean Tirole introduced game theory into the study of industrial organization, also known as IO. The key idea of game theory is that, unlike for price takers, firms with market power take account of how their rivals are likely to react when they change prices or product offerings. Although the earlier-mentioned economists recognized this, they did not rigorously use game theory to spell out some of the implications of this interdependence. Tirole did.

One issue on which Tirole and his co-author Jean-Jacques Laffont focused was “asymmetric information.” A regulator has less information than the firms it regulates. So, if the regulator guesses incorrectly about a regulated firm’s costs, which is highly likely, it could set prices too low or too high. Tirole and Laffont showed that a clever regulator could offset this asymmetry by constructing contracts and letting firms choose which contract to accept. If, for example, some firms can take measures to lower their costs and other firms cannot, the regulator cannot necessarily distinguish between the two types. The regulator, recognizing this fact, may offer the firms either a cost-plus contract or a fixed-price contract. The cost-plus contract will appeal to firms with high costs, while the fixed-price contract will appeal to firms that can lower their costs. In this way, the regulator maintains incentives to keep costs down.
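
A minimal numerical sketch of that screening logic (my illustration; the payment level, costs, and effort disutility are invented numbers, not Laffont and Tirole’s model):

    PAYMENT   = 100.0   # the fixed-price contract pays a flat 100
    COST_HIGH = 100.0   # cost of the firm that cannot cut costs
    COST_LOW  = 80.0    # cost the efficient firm can reach with effort
    EFFORT    = 15.0    # the efficient firm's disutility of that effort

    def profit_cost_plus():
        # Cost-plus: realized cost is reimbursed, so profit is zero and
        # costly effort to cut costs never pays.
        return 0.0

    def profit_fixed_price(cost, effort=0.0):
        # Fixed price: the firm keeps any savings but bears effort costs.
        return PAYMENT - cost - effort

    # High-cost firm: fixed price yields 100 - 100 = 0, no better than
    # cost-plus, so it is content with cost-plus reimbursement.
    # Efficient firm: fixed price yields 100 - 80 - 15 = 5 > 0, so it
    # self-selects the fixed-price contract and exerts the effort.
    print(profit_fixed_price(COST_HIGH))          # 0.0
    print(profit_fixed_price(COST_LOW, EFFORT))   # 5.0

Each type reveals its private information through its contract choice, and the efficient firm still has a reason to keep costs down.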

Their insights are most directly applicable to government entities, such as the Department of Defense, in their negotiations with firms that provide highly specialized military equipment. Indeed, economist Tyler Cowen has argued that Tirole’s work is about principal-agent theory rather than about reining in big business per se. In the Department of Defense example, the Department is the principal and the defense contractor is the agent.

One of Tirole’s main contributions has been in the area of “two-sided markets.” Consider Google. It can offer its services at one price to users (one side) and offer its services at a different price to advertisers (the other side). The higher the price to users, the fewer users there will be and, therefore, the less money Google will make from advertising. Google has decided to set a zero price to users and charge for advertising. Tirole and co-author Jean-Charles Rochet showed that the decision about profit-maximizing pricing is complicated, and they use substantial math to compute such prices under various theoretical conditions. Although Tirole believes in antitrust laws to limit both monopoly power and the exercise of monopoly power, he argues that regulators must be cautious in bringing the law to bear against firms in two-sided markets. An example of a two-sided market is a manufacturer of videogame consoles. The two sides are game developers and game players. He notes that it is very common for companies in such markets to set low prices on one side of the market and high prices on the other. But, he writes, “A regulator who does not bear in mind the unusual nature of a two-sided market may incorrectly condemn low pricing as predatory or high pricing as excessive, even though these pricing structures are adopted even by the smallest platforms entering the market.”
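
A toy model can show why pricing one side at zero can be profit-maximizing. Here users dislike their own price while advertiser demand scales with the number of users; all curves and numbers are invented for illustration, not taken from Rochet and Tirole:

    def profit(p_user, p_ad):
        users = max(0.0, 100 - 40 * p_user)   # user demand falls in user price
        ads = max(0.0, users * (4 - p_ad))    # each user attracts ad demand
        return p_user * users + p_ad * ads    # zero marginal costs, for simplicity

    # Grid-search both prices in $0.10 steps.
    best = max(
        ((pu / 10, pa / 10) for pu in range(0, 31) for pa in range(0, 41)),
        key=lambda p: profit(*p),
    )
    print(best)   # -> (0.0, 2.0): give the service away to users,
                  #    earn everything on the advertiser side

A regulator looking at one side in isolation might call the $0 price predatory and the $2 price excessive, even though this structure is simply the profit-maximizing response to cross-side demand.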

Tirole has brought the same kind of skepticism to some other related regulatory issues. Many regulators, for example, have advocated government regulation of interchange fees (IFs) in payment card associations such as Visa and MasterCard. But in 2003, Rochet and Tirole wrote that “given the [economics] profession’s current state of knowledge, there is no reason to believe that the IFs chosen by an association are systematically too high or too low, as compared with socially optimal levels.”

After winning the Nobel Prize, Tirole wrote a book for a popular audience, Economics for the Common Good. In it, he applied economics to a wide range of policy issues, laying out, among other things, the advantages of free trade for most residents of a given country and why much legislation and regulation causes negative unintended consequences.

Like most economists, Tirole favors free trade. In Economics for the Common Good, he noted that French consumers gain from freer trade in two ways. First, free trade exposes French monopolies and oligopolies to competition. He argued that two major French auto companies, Renault and Peugeot-Citroen, “sharply increased their efficiency” in response to car imports from Japan. Second, free trade gives consumers access to cheaper goods from low-wage countries.

In that same book, Tirole considered the unintended consequences of a hypothetical, but realistic, case in which a non-governmental organization, wanting to discourage killing elephants for their tusks, “confiscates ivory from traffickers.” In this hypothetical example, the organization can destroy the ivory or sell it. Destroying the ivory, he reasoned, would drive up the price. The higher price could cause poachers to kill more elephants. Another example he gave is of the perverse effects of price ceilings. Not only do they cause shortages, but also, as a result of these shortages, people line up and waste time in queues. Their time spent in queues wipes out the financial gain to consumers from the lower price, while also hurting the suppliers. No one wins and wealth is destroyed.

Also in that book, Tirole criticized the French government’s labor policies, which make it difficult for employers to fire people. He noted that this difficulty makes employers less likely to hire people in the first place. As a result, the unemployment rate in France was above 7 percent for over 30 years. The effect on young people has been particularly pernicious. When he wrote this book, the unemployment rate for French residents between 15 and 24 years old was 24 percent, and only 28.6 percent of those in that age group had jobs. This was much lower than the OECD average of 39.6 percent, Germany’s 46.8 percent, and the Netherlands’ 62.3 percent.

One unintended, but predictable, consequence of government regulations of firms, which Tirole pointed out in Economics for the Common Good, is to make firms artificially small. When a French firm with 49 employees hires one more employee, he noted, it is subject to 34 additional legal obligations. Not surprisingly, therefore, in a figure that shows the number of enterprises with various numbers of employees, a spike occurs at 47 to 49 employees.

In Economics for the Common Good, Tirole ranged widely over policy issues in France. In addressing the French university system, he criticized the system’s rejection of selective admission to university. He argued that such a system causes the least prepared students to drop out and concluded that “[O]n the whole, the French educational system is a vast insider-trading crime.”

Tirole is chairman of the Toulouse School of Economics and of the Institute for Advanced Study in Toulouse. A French citizen, he was born in Troyes, France and earned his Ph.D. in economics in 1981 from the Massachusetts Institute of Technology.


Selected Works

 

1986. (Co-authored with Jean-Jacques Laffont). “Using Cost Observation to Regulate Firms.” Journal of Political Economy 94:3 (Part I). June: 614-641.

1988. The Theory of Industrial Organization. MIT Press.

1990. (Co-authored with Drew Fudenberg). “Moral Hazard and Renegotiation in Agency Contracts.” Econometrica 58:6. November: 1279-1319.

1993. (Co-authored with Jean-Jacques Laffont). A Theory of Incentives in Procurement and Regulation. MIT Press.

2003. (Co-authored with Jean-Charles Rochet). “An Economic Analysis of the Determination of Interchange Fees in Payment Card Systems.” Review of Network Economics 2:2: 69-79.

2006. (Co-authored with Jean-Charles Rochet). “Two-Sided Markets: A Progress Report.” The RAND Journal of Economics 37:3. Autumn: 645-667.

2017. Economics for the Common Good. Princeton University Press.

 


CEE November 30, 2018

The 2008 Financial Crisis

It was, according to accounts filtering out of the White House, an extraordinary scene. Hank Paulson, the U.S. treasury secretary and a man with a personal fortune estimated at $700m (£380m), had got down on one knee before the most powerful woman in Congress, Nancy Pelosi, and begged her to save his plan to rescue Wall Street.

    The Guardian, September 26, 2008.1

The financial crisis of 2008 was a complex event that took most economists and market participants by surprise. Since then, there have been many attempts to arrive at a narrative to explain the crisis, but none has proven definitive. For example, a Congressionally-chartered ten-member Financial Crisis Inquiry Commission produced three separate narratives, one supported by the members appointed by the Democrats, one supported by four members appointed by the Republicans, and a third written by the fifth Republican member, Peter Wallison.2

It is important to appreciate that the financial system is complex, not merely complicated. A complicated system, such as a smartphone, has a fixed structure, so it behaves in ways that are predictable and controllable. A complex system has an evolving structure, so it can evolve in ways that no one anticipates. We will never have a proven understanding of what caused the financial crisis, just as we will never have a proven understanding of what caused the First World War.

There can be no single, definitive narrative of the crisis. This entry can cover only a small subset of the issues raised by the episode.

Metaphorically, we may think of the crisis as a fire. It started in the housing market, spread to the sub-prime mortgage market, then engulfed the entire mortgage securities market and, finally, swept through the inter-bank lending market and the market for asset-backed commercial paper.

Home sales began to slow in the latter part of 2006. This soon created problems for the sector of the mortgage market devoted to making risky loans, with several major lenders—including the largest, New Century Financial—declaring bankruptcy early in 2007. At the time, the problem was referred to as the “sub-prime mortgage crisis,” confined to a few marginal institutions.

But by the spring of 2008, trouble was apparent at some Wall Street investment banks that underwrote securities backed by sub-prime mortgages. On March 16, commercial bank JP Morgan Chase acquired one of these firms, Bear Stearns, with help from loan guarantees provided by the Federal Reserve, the central bank of the United States.

Trouble then began to surface at all the major institutions in the mortgage securities market. By late summer, many investors had lost confidence in Freddie Mac and Fannie Mae, and the interest rates that lenders demanded from them were higher than what they could pay and still remain afloat. On September 7, the U.S. Treasury took these two GSEs into “conservatorship.”

Finally, the crisis hit the short-term inter-bank collateralized lending markets, in which all of the world’s major financial institutions participate. This phase began after government officials’ unsuccessful attempts to arrange a merger of investment bank Lehman Brothers, which declared bankruptcy on September 15. This bankruptcy caused the Reserve Primary money market fund, which held a lot of short-term Lehman securities, to mark down the value of its shares below the standard value of one dollar each. That created jitters in all short-term lending markets, including the inter-bank lending market and the market for asset-backed commercial paper in general, and caused stress among major European banks.

The freeze-up in the interbank lending market was too much for leading public officials to bear. Under intense pressure to act, Treasury Secretary Henry Paulson proposed a $700 billion financial rescue program. Congress initially voted it down, leading to heavy losses in the stock market and causing Secretary Paulson to plead for its passage. On a second vote, the measure, known as the Troubled Assets Relief Program (TARP), was approved.

In hindsight, within each sector affected by the crisis, we can find moral hazard, cognitive failures, and policy failures. Moral hazard (in insurance company terminology) arises when individuals and firms face incentives to profit from taking risks without having to bear responsibility in the event of losses. Cognitive failures arise when individuals and firms base decisions on faulty assumptions about potential scenarios. Policy failures arise when regulators reinforce rather than counteract the moral hazard and cognitive failures of market participants.

The Housing Sector

From roughly 1990 to the middle of 2006, the housing market was characterized by the following:

  • an environment of low interest rates, both in nominal and real (inflation-adjusted) terms. Low nominal rates create low monthly payments for borrowers. Low real rates raise the value of all durable assets, including housing.
  • prices for houses rising as fast as or faster than the overall price level
  • an increase in the share of households owning rather than renting
  • loosening of mortgage underwriting standards, allowing households with weaker credit histories to qualify for mortgages.
  • lower minimum requirements for down payments. A standard requirement of at least ten percent was reduced to three percent and, in some cases, zero. This resulted in a large increase in the share of home purchases made with down payments of five percent or less.
  • an increase in the use of new types of mortgages with “negative amortization,” meaning that the outstanding principal balance rises over time (see the numerical sketch after this list).
  • an increase in consumers’ borrowing against their houses to finance spending, using home equity loans, second mortgages, and refinancing of existing mortgages with new loans for larger amounts.
  • an increase in the proportion of mortgages going to people who were not planning to live in the homes that they purchased. Instead, they were buying them to speculate.3
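
Here is the numerical sketch of negative amortization referenced above; the loan terms are invented for illustration:

    balance = 300_000.00    # initial loan balance
    annual_rate = 0.06      # 6% note rate
    payment = 1_200.00      # scheduled payment, below the ~$1,500 interest due

    for month in range(12):
        interest = balance * annual_rate / 12
        balance += interest - payment      # the shortfall is added to principal

    print(f"balance after one year: ${balance:,.0f}")   # ~$303,700: the
    # borrower owes more than at origination despite a year of payments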

These phenomena produced an increase in mortgage debt that far outpaced the rise in income over the same period. The trends accelerated in the three years just prior to the downturn in the second half of 2006.

The rise in mortgage debt relative to income was not a problem as long as home prices were rising. A borrower having difficulty finding the cash to make a mortgage payment on a house that had appreciated in value could either borrow more with the house as collateral or sell the house to pay off the debt.

But when house prices stopped rising late in 2006, households that had taken on too much debt began to default. This set in motion a reverse cycle: house foreclosures increased the supply of homes for sale; meanwhile, lenders became wary of extending credit, and this reduced demand. Prices fell further, leading to more defaults and spurring lenders to tighten credit still further.

During the boom, some people were speculating in non-owner-occupied homes, while others were buying their own homes with little or no money down. And other households were, in the vernacular of the time, “using their houses as ATMs,” taking on additional mortgage debt in order to finance consumption.

In most states in the United States, once a mortgage lender forecloses on a property, the borrower is not responsible for repayment, even if the house cannot be sold for enough to cover the loan. This creates moral hazard, particularly for property speculators, who can enjoy all of the profits if house prices rise but can stick lenders with some of the losses if prices fall.

One can see cognitive failure in the way that homeowners expected home prices to keep rising at a ten percent rate indefinitely, even though overall inflation was less than half that amount.4 Also, many homeowners seemed unaware of the risks of mortgages with “negative amortization.”

Policy failure played a big role in the housing sector. All of the trends listed above were supported by public policy. Because they wanted to see increased home ownership, politicians urged lenders to loosen credit standards. With the Community Reinvestment Act for banks and Affordable Housing Goals for Freddie Mac and Fannie Mae, they spurred traditional mortgage lenders to increase their lending to minority and low-income borrowers. When the crisis hit, politicians blamed lenders for borrowers’ inability to repay, and political pressure exacerbated the credit tightening that subsequently took place.

The Sub-prime Mortgage Sector

Until the late 1990s, few lenders were willing to give mortgages to borrowers with problematic credit histories. But sub-prime mortgage lenders emerged and grew rapidly in the decade leading up to the crisis. This growth was fueled by financial innovations, including the use of credit scoring to finely grade mortgage borrowers, and the use of structured mortgage securities (discussed in the next section) to make the sub-prime sector attractive to investors with a low tolerance for risk. Above all, it was fueled by rising home prices, which created a history of low default rates.

There was moral hazard in the sub-prime mortgage sector because the lenders were not holding on to the loans and, therefore, not exposing themselves to default risk. Instead, they packaged the mortgages into securities and sold them to investors, with the securities market allocating the risk.

Because they sold loans in the secondary market, profits at sub-prime lenders were driven by volume, regardless of the likelihood of default. Turning down a borrower meant getting no revenue. Approving a borrower meant earning a fee. These incentives were passed through to the staff responsible for finding potential borrowers and underwriting loans, so that personnel were compensated based on “production,” meaning the new loans they originated.

Although in theory the sub-prime lenders were passing on to others the risks that were embedded in the loans they were making, they were among the first institutions to go bankrupt during the financial crisis. This shows that there was cognitive failure in the management at these companies, as they did not foresee the house price slowdown or its impact on their firms.

Cognitive failure also played a role in the rise of mortgages that were underwritten without verification of the borrowers’ income, employment, or assets. Historical data showed that credit scores were sufficient for assessing borrower risk and that additional verification contributed little predictive value. However, it turned out that once lenders were willing to forgo these documents, they attracted a different set of borrowers, whose propensity to default was higher than their credit scores otherwise indicated.

There was policy failure in that abuses in the sub-prime mortgage sector were allowed to continue. Ironically, while the safety and soundness of Freddie Mac and Fannie Mae were regulated under the Department of Housing and Urban Development, which had an institutional mission to expand home ownership, consumer protection with regard to mortgages was regulated by the Federal Reserve Board, whose primary institutional missions were monetary policy and bank safety. Though mortgage lenders were setting up borrowers to fail, the Federal Reserve made little or no effort to intervene. Even those policy makers who were concerned about practices in the sub-prime sector believed that, on balance, sub-prime mortgage lending was helping a previously under-served set of households to attain home ownership.5

Mortgage Securities

A mortgage security consists of a pool of mortgage loans, the payments on which are passed through to pension funds, insurance companies, or other institutional investors looking for reliable returns with little risk. The market for mortgage securities was created by two government agencies, known as Ginnie Mae and Freddie Mac, established in 1968 and 1970, respectively.

Mortgage securitization expanded in the 1980s, when Fannie Mae, which previously had used debt to finance its mortgage purchases, began issuing its own mortgage-backed securities. At the same time, Freddie Mac was sold to shareholders, who encouraged Freddie to grow its market share. But even though Freddie and Fannie were shareholder-owned, investors treated their securities as if they were government-backed. This was known as an implicit government guarantee.

Attempts to create a market for private-label mortgage securities (PLMS) without any form of government guarantee were largely unsuccessful until the late 1990s. The innovations that finally got the PLMS market going were credit scoring and the collateralized debt obligation (CDO).

Before credit scoring was used in the mortgage market, there was no quantifiable difference between any two borrowers who were approved for loans. With credit scoring, the Wall Street firms assembling pools of mortgages could distinguish between a borrower with a very good score (750, as measured by the popular FICO system) and one with a more doubtful score (650).

Using CDOs, Wall Street firms were able to provide major institutional investors with insulation from default risk by concentrating that risk in other sub-securities (“tranches”) that were sold to investors who were more tolerant of risk. In fact, these basic CDOs were enhanced by other exotic mechanisms, such as credit default swaps, that reallocated mortgage default risk to institutions in which hardly any observer expected to find it, including AIG Insurance.
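
A stylized sketch of the tranching mechanism (illustrative tranche shares only, not any actual deal’s structure): pool losses are absorbed junior-first, so the senior tranche is safe in mild scenarios but not in a broad house-price collapse:

    TRANCHES = [              # (name, share of pool), most junior first
        ("equity",    0.05),
        ("mezzanine", 0.15),
        ("senior",    0.80),
    ]

    def tranche_losses(pool_loss_rate):
        remaining = pool_loss_rate
        out = {}
        for name, size in TRANCHES:
            hit = min(size, remaining)        # this tranche absorbs what it can
            out[name] = round(hit / size, 4)  # fraction of the tranche wiped out
            remaining -= hit
        return out

    print(tranche_losses(0.04))  # {'equity': 0.8, 'mezzanine': 0.0, 'senior': 0.0}
    print(tranche_losses(0.25))  # {'equity': 1.0, 'mezzanine': 1.0, 'senior': 0.0625}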

There was moral hazard in the mortgage securities market, as Freddie Mac and Fannie Mae sought profits and growth on behalf of shareholders, but investors in their securities expected (correctly, as it turned out) that the government would protect them against losses. Years before the crisis, critics grumbled that the mortgage giants exemplified privatized profits and socialized risks.6

There was cognitive failure in the assessment of default risk. Assembling CDOs and other exotic instruments required sophisticated statistical modeling. The most important driver of expectations for mortgage defaults is the path for house prices, and the steep, broad-based decline in home prices that took place in 2006-2009 was outside the range that some modelers allowed for.

Another source of cognitive failure was the “suits/geeks” divide. In many firms, the financial engineers (“geeks”) understood the risks of mortgage-related securities fairly well, but their conclusions did not make their way to the senior management level (“suits”).

There was policy failure on the part of bank regulators. Their previous adverse experience was with the Savings and Loan Crisis, in which firms that originated and retained mortgages went bankrupt in large numbers. This caused bank regulators to believe that mortgage securitization, which took risk off the books of depository institutions, would be safer for the financial system. For the purpose of assessing capital requirements for banks, regulators assigned a weight of 100 percent to mortgages originated and held by the bank, but assigned a weight of only 20 percent to the bank’s holdings of mortgage securities issued by Freddie Mac, Fannie Mae, or Ginnie Mae. This meant that banks needed to hold much more capital to hold mortgages than to hold mortgage-related securities; that naturally steered them toward the latter.
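
To see how the weights bite, here is a minimal numerical sketch, assuming the 8 percent minimum capital ratio that applied under the risk-based capital rules of the period; the portfolio size is hypothetical.

```python
# Illustrative sketch of risk-weighted capital requirements, assuming the
# standard 8% minimum capital ratio of the era. Portfolio size is made up.
BASE_CAPITAL_RATIO = 0.08

def required_capital(exposure: float, risk_weight: float) -> float:
    """Capital a bank must hold against an asset position."""
    return exposure * risk_weight * BASE_CAPITAL_RATIO

position = 100_000_000  # $100 million, a hypothetical holding

whole_mortgages = required_capital(position, risk_weight=1.00)  # 100% weight
agency_mbs      = required_capital(position, risk_weight=0.20)  # 20% weight

print(f"Whole mortgages: ${whole_mortgages:,.0f} of capital")  # $8,000,000
print(f"Agency MBS:      ${agency_mbs:,.0f} of capital")       # $1,600,000
# The same credit exposure held as agency securities carries one fifth the
# capital charge, which is the regulatory incentive described above.
```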

In 2001, regulators broadened the low-risk umbrella to include AAA-rated and AA-rated tranches of private-label CDOs. This ruling helped to generate a flood of PLMS, many of them backed by sub-prime mortgage loans.7

By using bond ratings as a key determinant of capital requirements, the regulators effectively put the bond rating agencies at the center of the process of creating private-label CDOs. The rating agencies immediately became subject to both moral hazard and cognitive failure. The moral hazard came from the fact that the rating agencies were paid by the issuers of securities, who wanted the most generous ratings possible, rather than by the regulators, who needed more rigorous ratings. The cognitive failure came from the fact that the models that the rating agencies used gave too little weight to potential scenarios of broad-based declines in house prices. Moreover, the banks that bought the securities were happy to see them rated AAA because the high ratings made the securities eligible for lower capital requirements. Both sides of the market, buyers and sellers, therefore had bad incentives.

There was policy failure on the part of Congress. Officials in both the Clinton and Bush Administrations were unhappy with the risk that Freddie Mac and Fannie Mae represented to taxpayers. But Congress balked at any attempt to tighten regulation of the safety and soundness of those firms.8

The Inter-bank Lending Market

There are a number of mechanisms through which financial institutions make short-term loans to one another. In the United States, banks use the Federal Funds market to manage short-term fluctuations in reserves. Internationally, banks lend in what is known as the LIBOR market.

One of the least known and most important markets is for “repo,” which is short for “repurchase agreement.” As first developed, the repo market was used by government bond dealers to finance inventories of securities, just as an automobile dealer might finance an inventory of cars. A money-market fund might lend money for one day or one week to a bond dealer, with the loan collateralized by a low-risk long-term security.

In the years leading up to the crisis, some dealers were financing low-risk mortgage-related securities in the repo market. But when some of these securities turned out to be subject to price declines that took them out of the “low-risk” category, participants in the repo market began to worry about all repo collateral. Repo lending offers very low profit margins, and if an investor has to be very discriminating about the collateral backing a repo loan, it can seem preferable to back out of repo lending altogether. This, indeed, is what happened, in what economist Gary Gorton and others called a “run on repo.”9
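
The mechanics can be made concrete with a stylized repo trade. The haircut numbers below are invented; rising haircuts on suspect collateral are the mechanism behind the “run on repo” that Gorton and his co-authors describe.

```python
# A stylized repo trade with invented numbers. The lender advances cash
# against collateral worth more than the loan; the gap is the "haircut."
collateral_value = 100.0
calm_haircut = 0.02
loan = collateral_value * (1 - calm_haircut)   # lender advances $98

# In the panic, doubts about collateral pushed haircuts up sharply, so the
# same collateral financed far less cash, or none if lenders withdrew.
panic_haircut = 0.30
panic_loan = collateral_value * (1 - panic_haircut)  # only $70
print(loan, panic_loan)
```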

Another element of institutional panic was “collateral calls” involving derivative financial instruments. Derivatives, such as credit default swaps, are like side bets. The buyer of a credit default swap is betting that a particular debt instrument will default. The seller of a credit default swap is betting the opposite.

In the case of mortgage-related securities, the probability of default seemed low prior to the crisis. Sometimes, buyers of credit default swaps were merely satisfying the technical requirements to record the underlying securities as AAA-rated. They could do this if they obtained a credit default swap from an institution that was itself AAA-rated. AIG was an insurance company that saw an opportunity to take advantage of its AAA rating to sell credit default swaps on mortgage-related securities. AIG collected fees, and its Financial Products division calculated that the probability of default was essentially zero. The fees earned on each transaction were low, but the overall profit was high because of the enormous volume. AIG’s credit default swaps were a major element in the expansion of shadow banking by non-bank financial institutions during the run-up to the crisis.

Late in 2005, AIG abruptly stopped writing credit default swaps, in part because its own rating had been downgraded below AAA earlier in the year for unrelated reasons. By the time AIG stopped selling credit default swaps on mortgage-related securities, it had outstanding obligations on $80 billion of underlying securities and was earning $1 billion a year in fees.10

Because AIG no longer had its AAA rating and because the underlying mortgage securities, while not in default, were increasingly shaky, provisions in the contracts that AIG had written allowed the buyers of credit default swaps to require AIG to provide protection in the form of low-risk securities posted as collateral. These “collateral calls” were like a margin call that a stock broker will make on an investor who has borrowed money to buy stock that subsequently declines in value. In effect, collateral calls were a run on AIG’s shadow bank.

These collateral calls were made when the crisis in the inter-bank lending market was near its height in the summer of 2008 and banks were hoarding low-risk securities. In fact, the shortage of low-risk securities may have motivated some of the collateral calls, as institutions like Deutsche Bank and Goldman Sachs sought ways to ease their own liquidity problems. In any event, AIG could not raise enough short-term funds to meet its collateral calls without trying to dump long-term securities into a market that had little depth to absorb them. It turned to Federal authorities for a bailout, which was arranged and creatively backed by the Federal Reserve, but at the cost of reducing the value of shares in AIG.

With repos and derivatives, there was moral hazard in that the traders and executives of the narrow units that engaged in exotic transactions were able to claim large bonuses on the basis of short-term profits. But the adverse long-term consequences were spread to the rest of the firm and, ultimately, to taxpayers.

There was cognitive failure in that the collateral calls were an unanticipated risk of the derivatives business. The financial engineers focused on the (remote) chances of default on the underlying securities, not on the intermediate stress that might emerge from collateral calls.

There was policy failure when Congress passed the Commodity Futures Modernization Act. This legislation specified that derivatives would not be regulated by either of the agencies with the staff most qualified to understand them. Rather than require oversight by the Securities and Exchange Commission or the Commodity Futures Trading Commission (which regulated market-traded derivatives), Congress decreed that the regulator responsible for overseeing each firm would evaluate its derivative position. The logic was that a bank that was using derivatives to hedge other transactions should have its derivative position evaluated in a larger context. But, as it happened, the insurance and bank regulators who ended up with this responsibility were not equipped to see the dangers at firms such as AIG.

There was also policy failure in that officials approved of securitization that transferred risk out of the regulated banking sector. While Federal Reserve officials were praising the risk management of commercial banks,11 risk was accumulating in the shadow banking sector (non-bank institutions in the financial system), including AIG insurance, money market funds, Wall Street firms such as Bear Stearns and Lehman Brothers, and major foreign banks. When problems in the shadow banking sector contributed to the freeze in inter-bank lending and in the market for asset-backed commercial paper, policy makers felt compelled to extend bailouts to satisfy the needs of these non-bank institutions for liquid assets.

Conclusion

In terms of the fire metaphor suggested earlier, in hindsight, we can see that the markets for housing, sub-prime mortgages, mortgage-related securities, and inter-bank lending were all highly flammable just prior to the crisis. Moral hazard, cognitive failures, and policy failures all contributed to the combustible mix.

The crisis also reflects a failure of the economics profession. A few economists, most notably Robert Shiller,12 warned that the housing market was inflated, as indicated by ratios of prices to rents that were high by historical standards. Also, when risk-based capital regulation was proposed in the wake of the Savings and Loan Crisis and the Latin American debt crisis, a group of economists known as the Shadow Financial Regulatory Committee warned that these regulations could be manipulated. They recommended, instead, greater use of senior subordinated debt at regulated financial institutions.13 Many economists warned about the incentives for risk-taking at Freddie Mac and Fannie Mae.14

But even these economists failed to anticipate the 2008 crisis, in large part because economists did not take note of the complex mortgage-related securities and derivative instruments that had been developed. Economists have a strong preference for parsimonious models, and they look at financial markets through a lens that includes only a few types of simple assets, such as government bonds and corporate stock. This approach ignores even the repo market, which has been important in the financial system for over 40 years, and, of course, it omits CDOs, credit default swaps and other, more recent innovations.

Financial intermediaries do not produce tangible output that can be measured and counted. Instead, they provide intangible benefits that economists have never clearly articulated. The economics profession has a long way to go to catch up with modern finance.


About the Author

Arnold Kling was an economist with the Federal Reserve Board and with the Federal Home Loan Mortgage Corporation before launching one of the first Web-based businesses in 1994. His most recent books are Specialization and Trade and The Three Languages of Politics. He earned his Ph.D. in economics from the Massachusetts Institute of Technology.


Footnotes

1. “A desperate plea – then race for a deal before ‘sucker goes down’,” The Guardian, September 26, 2008. https://www.theguardian.com/business/2008/sep/27/wallstreet.useconomy1

2. The report and dissents of the Financial Crisis Inquiry Commission can be found at https://fcic.law.stanford.edu/

3. See Stefania Albanesi, Giacomo De Giorgi, and Jaromir Nosal 2017, “Credit Growth and the Financial Crisis: A New Narrative,” NBER working paper no. 23740. http://www.nber.org/papers/w23740

4. Karl E. Case and Robert J. Shiller 2003, “Is There a Bubble in the Housing Market?” Cowles Foundation Paper 1089. http://www.econ.yale.edu/shiller/pubs/p1089.pdf

5. Edward M. Gramlich 2004, “Subprime Mortgage Lending: Benefits, Costs, and Challenges,” Federal Reserve Board speeches. https://www.federalreserve.gov/boarddocs/speeches/2004/20040521/

6. For example, in 1999, Treasury Secretary Lawrence Summers said in a speech, “Debates about systemic risk should also now include government-sponsored enterprises.” See Bethany McLean and Joe Nocera 2010, All the Devils Are Here: The Hidden History of the Financial Crisis, Portfolio/Penguin Press. The authors write that Federal Reserve Chairman Alan Greenspan was also, like Summers, disturbed by the moral hazard inherent in the GSEs.

7. Jeffrey Friedman and Wladimir Kraus 2013, Engineering the Financial Crisis: Systemic Risk and the Failure of Regulation, University of Pennsylvania Press.

8. See McLean and Nocera, All the Devils Are Here.

9. Gary Gorton, Toomas Laarits, and Andrew Metrick 2017, “The Run on Repo and the Fed’s Response,” Stanford working paper. https://www.gsb.stanford.edu/sites/gsb/files/fin_11_17_gorton.pdf

10. Talking Points Memo 2009, “The Rise and Fall of AIG’s Financial Products Unit.” https://talkingpointsmemo.com/muckraker/the-rise-and-fall-of-aig-s-financial-products-unit

11. Chairman Ben S. Bernanke 2006, “Modern Risk Management and Banking Supervision,” Federal Reserve Board speeches. https://www.federalreserve.gov/newsevents/speech/bernanke20060612a.htm

12. National Public Radio 2005, “Yale Professor Predicts Housing ‘Bubble’ Will Burst.” https://www.npr.org/templates/story/story.php?storyId=4679264

13. Shadow Financial Regulatory Committee 2001, “The Basel Committee’s Revised Capital Accord Proposal.” https://www.bis.org/bcbs/ca/shfirect.pdf

14. See the discussion in Viral V. Acharya, Matthew Richardson, Stijn Van Nieuwerburgh, and Lawrence J. White 2011, Guaranteed to Fail: Fannie Mae, Freddie Mac, and the Debacle of Mortgage Finance, Princeton University Press.


CEE September 18, 2018

Christopher Sims


Christopher Sims was awarded, along with Thomas Sargent, the 2011 Nobel Prize in Economic Sciences. The Nobel committee cited their “empirical research on cause and effect in the macroeconomy.” The economists who spoke at the press conference announcing the award emphasized Sargent’s and Sims’s analysis of the role of people’s expectations.

One of Sims’s earliest famous contributions was his work on money-income causality, which was cited by the Nobel committee. Money and income move together, but which causes which? Milton Friedman argued that changes in the money supply caused changes in income, noting that the supply of money often rises before income rises. Keynesians such as James Tobin argued that changes in income caused changes in the amount of money. Money seems to move first, but causality, said Tobin and others, still goes the other way: people hold more money when they expect income to rise in the future.

Which view is true? In 1972 Sims applied Clive Granger’s econometric test of causality. On Granger’s definition one variable is said to cause another variable if knowledge of the past values of the possibly causal variable helps to forecast the effect variable over and above the knowledge of the history of the effect variable itself. Implementing a test of this incremental predictability, Sims concluded “[T]he hypothesis that causality is unidirectional from money to income [Friedman’s view] agrees with the postwar U.S. data, whereas the hypothesis that causality is unidirectional from income to money [Tobin’s view] is rejected.”
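
The logic of the test can be sketched on simulated data. The snippet below uses the grangercausalitytests routine from the statsmodels library as a convenient stand-in; it is not the procedure Sims ran on postwar U.S. data, and the series here are made up.

```python
# A minimal sketch of a Granger-causality test in the spirit of Sims (1972),
# run on simulated series in which money really does lead income.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 400
money = np.zeros(T)
income = np.zeros(T)
for t in range(1, T):
    money[t] = 0.5 * money[t - 1] + rng.normal()
    # Income responds to lagged money, so money "Granger-causes" income here.
    income[t] = 0.4 * income[t - 1] + 0.6 * money[t - 1] + rng.normal()

# Test whether lags of the second column (money) help predict the first
# column (income) beyond income's own history.
data = np.column_stack([income, money])
grangercausalitytests(data, maxlag=2)  # small p-values: money helps predict income
```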

Sims’s influential article “Macroeconomics and Reality” was a criticism both of the usual econometric interpretation of large-scale Keynesian econometric models and of Robert Lucas’s influential earlier criticism of these Keynesian models (the so-called Lucas critique). Keynesian econometricians had claimed that with sufficiently accurate theoretical assumptions about the structure of the economy, correlations among the macroeconomic variables could be used to measure the strengths of various structural connections in the economy. Sims argued that there was no basis for thinking that these theoretical assumptions were sufficiently accurate. Such so-called “identifying assumptions” were, Sims said, literally “incredible.” Lucas, on the other hand, had not rejected the idea of such identification. Rather he had pointed out that, if people held “rational expectations” – that is, expectations that, though possibly incorrect, did not deviate on average from what actually occurs in a correctable, systematic manner – then failing to account for them would undermine the stability of the econometric estimates and render the macromodels useless for policy analysis. Lucas and his New Classical followers argued that in forming their expectations people take account of the rules implicitly followed by monetary and fiscal policymakers; and, unless those rules were integrated into the econometric model, every time the policymakers adopted a new policy (i.e., new rules), the estimates would shift in unpredictable ways.

While rejecting the structural interpretation of large-scale macromodels, Sims did not reject the models themselves, writing: “[T]here is no immediate prospect that large-scale macromodels will disappear from the scene, and for good reason: they are useful tools in forecasting and policy analysis.” Sims conceded that the Lucas critique was correct in those cases in which policy regimes truly changed. But he argued that such regime changes were rare and that most economic policy was concerned with the implementation of a particular policy regime. For that purpose, the large-scale macromodels could be helpful, since what was needed for forecasting was a model that captured the complex interrelationships among variables and not one that revealed the deeper structural connections.

In the same article, Sims proposed an alternative to large-scale macroeconomic models, the vector autoregression (or VAR). In Sims’s view, the VAR had the advantages of the earlier macromodels, in that it could capture the complex interactions among a relatively large number of variables needed for policy analysis and yet did not rely on as many questionable theoretical assumptions. With subsequent developments by Sims and others, the VAR became a major tool of empirical macroeconomic analysis.
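
A toy VAR illustrates the idea. The sketch below fits a two-variable VAR to simulated series using the statsmodels library; real applications use actual macroeconomic data and more variables.

```python
# A toy vector autoregression (VAR) of the kind Sims proposed: each variable
# is regressed on lags of all variables, with no structural assumptions.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 400
y = np.zeros((T, 2))  # two made-up macro series
for t in range(1, T):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.2 * y[t - 1, 1] + rng.normal()
    y[t, 1] = 0.3 * y[t - 1, 0] + 0.4 * y[t - 1, 1] + rng.normal()

results = VAR(y).fit(maxlags=4, ic="aic")  # lag length chosen by AIC
print(results.summary())

# Impulse responses trace how a shock to one variable propagates to the
# others, the standard way VAR output is used in forecasting and policy work.
irf = results.irf(10)
print(irf.irfs.shape)
```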

Sims has also suggested that sticky prices are caused by “rational inattention,” an idea imported from electronic communications. Just as computers do not access information on the Internet infinitely fast (but rather, in bits per second), individual actors in an economy have only a finite ability to process information. This delay produces some sluggishness and randomness, and allows for more accurate forecasts than conventional models, in which people are assumed to be highly averse to change.

Sims’s recent work has focused on the fiscal theory of the price level, the view that inflation in the end is determined by fiscal problems—the overall amount of debt relative to the government’s ability to repay it—rather than by the split in government debt between base money and bonds. In 1999, Sims suggested that the fiscal foundations of the European Monetary Union were “precarious” and that a fiscal crisis in one country “would likely breed contagion effects in other countries.” The Greek financial crisis about a decade later seemed to confirm his prediction.

Christopher Sims earned his B.A. in mathematics in 1963 and his Ph.D. in economics in 1968, both from Harvard University. He taught at Harvard from 1968 to 1970, at the University of Minnesota from 1970 to 1990, at Yale University from 1990 to 1999, and at Princeton University from 1999 to the present. He has been a Fellow of the Econometric Society since 1974, a member of the American Academy of Arts and Sciences since 1988, a member of the National Academy of Sciences since 1989, President of the Econometric Society (1995), and President of the American Economic Association (2012). He has been a Visiting Scholar for the Federal Reserve Banks of Atlanta, New York, and Philadelphia off and on since 1994.


Selected Works

1972. “Money, Income, and Causality.” American Economic Review 62: 4 (September): 540-552.

1980. “Macroeconomics and Reality.” Econometrica 48: 1 (January): 1-48.

1990 (with James H. Stock and Mark W. Watson). “Inference in Linear Time Series Models with some Unit Roots.” Econometrica 58: 1 (January): 113-144.

1999. “The Precarious Fiscal Foundations of EMU.” De Economist 147: 4 (December): 415-436.

2003. “Implications of Rational Inattention.” Journal of Monetary Economics 50: 3 (April): 665-690.


CEE June 28, 2018

Gordon Tullock


Gordon Tullock, along with his colleague James M. Buchanan, was a founder of the School of Public Choice. Among his contributions to public choice were his study of bureaucracy, his early insights on rent seeking, his study of political revolutions, his analysis of dictatorships, and his analysis of incentives and outcomes in foreign policy. Tullock also contributed to the study of optimal organization of research, was a strong critic of common law, and did work on evolutionary biology. He was arguably one of the ten or so most influential economists of the last half of the twentieth century. Many economists believe that Tullock deserved to share Buchanan’s 1986 Nobel Prize or even deserved a Nobel Prize on his own.

One of Tullock’s early contributions to public choice was The Calculus of Consent: Logical Foundations of Constitutional Democracy, co-authored with Buchanan in 1962. In that path-breaking book, the authors assume that people seek their own interests in the political system and then consider the results of various rules and political structures. One can think of their book as a political economist’s version of Montesquieu.

One of the most masterful sections of The Calculus of Consent is the chapter in which the authors, using a model formulated by Tullock, consider what good decision rules would be for agreeing to have someone in government make a decision for the collective. An individual realizes that if only one person’s consent is required, and he is not that person, he could have huge costs imposed on him. Requiring more people’s consent in order for government to take action reduces the probability that that individual will be hurt. But as the number of people required to agree rises, the decision costs rise. In the extreme, if unanimity is required, people can game the system and hold out for a disproportionate share of benefits before they give their consent. The authors show that the individual’s preferred rule would be one by which the costs imposed on him plus the decision costs are at a minimum. That preferred rule would vary from person to person. But, they note, it would be highly improbable that the optimal decision rule would be one that requires a simple majority. They write, “On balance, 51 percent of the voting population would not seem to be much preferable to 49 percent.” They suggest further that the optimal rule would depend on the issues at stake. Because, they note, legislative action may “produce severe capital losses or lucrative capital gains” for various groups, the rational person, not knowing his own future position, might well want strong restraints on the exercise of legislative power.
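
The argument can be restated as a small optimization. In the sketch below the cost curves are invented for illustration; only their shapes (external costs falling, and decision costs rising, in the required share of consents) come from the book.

```python
# A toy version of the Buchanan-Tullock calculus: external costs fall and
# decision-making costs rise as the required share of consents grows. The
# functional forms and coefficients are made up purely for illustration.
def external_cost(k: float) -> float:
    return 100 * (1 - k) ** 2   # falls toward 0 as k approaches unanimity

def decision_cost(k: float) -> float:
    return 40 * k ** 3          # rises steeply near unanimity

shares = [i / 100 for i in range(1, 101)]
best = min(shares, key=lambda k: external_cost(k) + decision_cost(k))
print(f"cost-minimizing required majority: {best:.0%}")
# With these made-up curves the optimum is about 70%, echoing the authors'
# point that nothing singles out simple majority rule.
```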

Tullock’s part of The Calculus of Consent was a natural outgrowth of an unpublished manuscript written in the 1950s that later became his 1965 book, The Politics of Bureaucracy. Buchanan, reminiscing about that book, summed up Tullock’s approach and the book’s significance:

The substantive contribution in the manuscript was centered on the hypothesis that, regardless of role, the individual bureaucrat responds to the rewards and punishments that he confronts. This straightforward, and now so simple, hypothesis turned the whole post-Weberian quasi-normative approach to bureaucracy on its head. . . . The economic theory of bureaucracy was born.1

Buchanan noted in his reminiscence that Tullock’s “fascinating analysis” was “almost totally buried in an irritating personal narrative account of Tullock’s nine-year experience in the foreign service hierarchy.” Buchanan continued: “Then, as now, Tullock’s work was marked by his apparent inability to separate analytical exposition from personal anecdote.” Translation: Tullock learned from his experiences. As a Foreign Service officer with the U.S. State Department for nine years Tullock learned, up close and “personal,” how dysfunctional bureaucracy can be. In a later reminiscence, Tullock concluded:

A 90 per cent cut-back on our Foreign Service would save money without really damaging our international relations or stature.2

Tullock made many other contributions in considering incentives within the political system. Particularly noteworthy was his work on political revolutions and on dictatorships.

Consider, first, political revolutions. Any one person’s decision to participate in a revolution, Tullock noted, does not much affect the probability that the revolution will succeed. Therefore, each person’s actions do not much affect his expected benefits from revolution. On the other hand, a ruthless head of government can individualize the costs by heavily punishing those who participate in a revolution. So anyone contemplating participating in a revolution will be comparing heavy individual costs with small benefits that are simply his pro rata share of the overall benefits. Therefore, argued Tullock, for people to participate, they must expect some large benefits that are tied to their own participation, such as a job in the new government. That would explain an empirical regularity that Tullock noted—namely that “in most revolutions, the people who overthrow the existing government were high officials in that government before the revolution.”
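
A back-of-the-envelope version of this calculation, with invented numbers, shows why the pro rata share of the public benefit cannot motivate participation.

```python
# Tullock's participation arithmetic, with invented numbers. The collective
# payoff is diluted across the whole population, while punishment is personal.
prob_success = 0.10
public_benefit = 1_000_000.0    # total gain to society if the revolution wins
population = 1_000_000
expected_punishment = 10_000.0  # personal cost of joining a failed revolt
private_prize = 200_000.0       # e.g., a ministry in the new government

pro_rata_share = prob_success * public_benefit / population   # $0.10
payoff_without_prize = pro_rata_share - expected_punishment   # about -$10,000
payoff_with_prize = payoff_without_prize + prob_success * private_prize
print(payoff_without_prize, payoff_with_prize)  # -9999.9 vs +10000.1
# Only a large reward tied to personal participation flips the sign, which is
# consistent with revolutions being led by insiders positioned to claim one.
```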

This thinking carried over to his work on autocracy. In Autocracy, Tullock pointed out that in most societies at most times, governments were not democratically elected but were autocracies: they were dictatorships or kingdoms. For that reason, he argued, analysts should do more to understand them. Tullock’s book was his attempt to get the discussion started. In a chapter titled “Coups and Their Prevention,” Tullock argued that one of the autocrat’s main challenges is to survive in office. He wrote: “The dictator lives continuously under the Sword of Damocles and equally continuously worries about the thickness of the thread.” Tullock pointed out that a dictator needs his countrymen to believe not that he is good, just, or ordained by God, but that those who try to overthrow him will fail.

Among modern economists, Tullock was the earliest discoverer of the concept of “rent seeking,” although he did not call it that. Before his work, the usual measure of the deadweight loss from monopoly was the part of the loss in consumer surplus that did not increase producer surplus for the monopolist. Consumer surplus is the maximum amount that consumers are willing to pay minus the amount they actually pay; producer surplus, also called “economic rent,” is the amount that producers get minus the minimum amount for which they would be willing to produce. Harberger3 had estimated that for the U.S. economy in the 1950s, that loss was very low, on the order of 0.1 percent of Gross National Product. In “The Welfare Costs of Tariffs, Monopolies, and Theft,” Tullock argued that this method understated the loss from monopoly because it did not take account of the investment of the monopolist—and of others trying to be monopolists—in becoming monopolists. These investments in monopoly are a loss to the economy. Tullock also pointed out that those who seek tariffs invest in getting those tariffs, and so the standard measure of the loss from tariffs understated the loss. His analysis, as the tariff example illustrates, applies more to firms seeking special privileges from government than to private attempts to monopolize via the free market, because private attempts often lead, as if by an invisible hand, to increased competition.
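
A linear-demand example, with made-up numbers, separates the two losses: the conventional Harberger triangle and the monopoly rent that Tullock argued would also be dissipated.

```python
# A toy linear-demand monopoly. Demand P = 10 - Q and marginal cost MC = 4
# are invented numbers, chosen only to make the two losses easy to see.
a, b, mc = 10.0, 1.0, 4.0          # demand intercept/slope, marginal cost

q_comp = (a - mc) / b              # competitive output: 6.0
q_mono = (a - mc) / (2 * b)        # monopoly output (MR = MC): 3.0
p_mono = a - b * q_mono            # monopoly price: 7.0

harberger_triangle = 0.5 * (p_mono - mc) * (q_comp - q_mono)  # 4.5
tullock_rectangle = (p_mono - mc) * q_mono                    # 9.0

print(f"deadweight triangle: {harberger_triangle}")
print(f"rent that may be dissipated in rent seeking: {tullock_rectangle}")
# Tullock's point: if the rent itself is competed away in lobbying, the
# social loss includes the rectangle as well as the triangle.
```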

One of Tullock’s most important insights in public choice was in a short article in 1975 titled “The Transitional Gains Trap.” He noted that even though rent seeking often leads to big gains for the rent seekers, those gains are capitalized in asset prices, which means that buyers of the assets make a normal return on the asset. So, for example, if the government requires the use of ethanol in gasoline, owners of land on which corn is grown will find that their land is worth more because of the regulatory requirement. (Ethanol in the United States is produced from corn.) They gain when the regulation is first imposed. But when they sell the land, the new owner pays a price equal to the present value of the stream of the net profits from the land. So the new owner doesn’t get a supra-normal rate of return from the land. In other words, the owner at the time that the regulation was imposed got “transitional gains,” but the new owner does not. This means that the new owner will suffer a capital loss if the regulation is removed and will fight hard to keep the regulation in place, arguing, correctly, that he paid for those gains. That makes repealing the regulation more difficult than otherwise. Tullock notes that, therefore, we should try hard to avoid getting into these traps because they are hard to get out of.
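
The capitalization step is simple present-value arithmetic; the income gain and discount rate below are hypothetical.

```python
# A sketch of the capitalization logic behind the transitional gains trap.
extra_income = 50.0     # hypothetical annual gain created by the regulation
r = 0.05                # hypothetical discount rate

capitalized_gain = extra_income / r   # present value of the perpetuity
print(capitalized_gain)               # 1000.0, paid by any later buyer

# The original owner pockets the $1,000 when the land is sold; the buyer
# earns only a normal return, yet loses $1,000 if the rule is repealed.
```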

Tullock was one of the few public choice economists to apply his tools to foreign policy. In Open Secrets of American Foreign Policy, he takes a hard-headed look at U.S. foreign policy rather than the romantic “the United States is the good guys” view that so many Americans take. For example, he wrote of the U.S. government’s bombing of Serbia under President Bill Clinton:

[T]he bombing campaign was a clear-cut violation of the United Nations Charter and hence, should be regarded as a war crime. It involved the use of military forces without the sanction of the Security Council and without any colorable claim of self-defense. Of course, it was not a first—we [the U.S. government] had done the same thing in Vietnam, Grenada and Panama.

Possibly Tullock’s most underappreciated contributions were in the area of methodology and the economics of research. About a decade after spending six months with philosopher Karl Popper at the Center for Advanced Studies in Palo Alto, Tullock published The Organization of Inquiry. In it, he considered why scientific discovery in both the hard sciences and economics works so well without any central planner, and he argued that centralized funding by government would slow progress. After arguing that applied science is generally more valuable than pure science, Tullock wrote:

Nor is there any real justification for the general tendency to consider pure research as somehow higher and better than applied research. It is certainly more pleasant to engage in research in fields that strike you as interesting than to confine yourself to fields which are likely to be profitable, but there is no reason why the person choosing the more pleasant type of research should be considered more noble.4

In Tullock’s view, a system of prizes for important discoveries would be an efficient way of achieving important breakthroughs. He wrote:

As an extreme example, surely offering a reward of $1 billion for the first successful ICBM would have resulted in both a large saving of money for the government and much faster production of this weapon.5

Tullock was born in Rockford, Illinois, and was an undergraduate at the University of Chicago from 1940 to 1943. His time there was interrupted when he was drafted into the U.S. Army. During his time at Chicago, though, he completed a one-semester course in economics taught by Henry Simons. After the war, he returned to the University of Chicago Law School, where he completed the J.D. degree in 1947. He was briefly with a law firm in 1947 before going into the Foreign Service, where he worked for nine years. He was an economics professor at the University of South Carolina (1959-1962), the University of Virginia (1962-1967), Rice University (1968-1969), the Virginia Polytechnic Institute and State University (1968-1983), George Mason University (1983-1987), the University of Arizona (1987-1999), and again at George Mason University (1999-2008). In 1966, he started the journal Papers in Non-Market Decision Making, which, in 1969, was renamed Public Choice.


Selected Works


1962. The Calculus of Consent. (Co-authored with James M. Buchanan.) Ann Arbor, Michigan: University of Michigan Press.

1965. The Politics of Bureaucracy. Washington, D.C.: Public Affairs Press.

1966. The Organization of Inquiry. Durham, North Carolina: Duke University Press.

1967. “The Welfare Costs of Tariffs, Monopolies, and Theft,” Western Economic Journal, 5:3 (June): 224-232.

1967. Toward a Mathematics of Politics. Ann Arbor, Michigan: University of Michigan Press.

1971. “The Paradox of Revolution.” Public Choice. Vol. 11. Fall: 89-99.

1975. “The Transitional Gains Trap.” Bell Journal of Economics, 6:2 (Autumn): 671-678.

1987. Autocracy. Hingham, Massachusetts: Kluwer Academic Publishers.

2007. Open Secrets of American Foreign Policy. New Jersey: World Scientific Publishing Co.


Footnotes

1. James M. Buchanan. 1987. “The Qualities of a Natural Economist.” In Charles K. Rowley (ed.), Democracy and Public Choice. Oxford and New York: Basil Blackwell, 9-19.

2. Gordon Tullock. 2009. Memories of an Unexciting Life. Unfinished and unpublished manuscript. Tucson, 2009. Quoted in Charles K. Rowley and Daniel Houser, “The Life and Times of Gordon Tullock,” 2011. George Mason University, Department of Economics, Paper No. 11-56, December 20.

3. Arnold C. Harberger. 1954. “Monopoly and Resource Allocation.” American Economic Review 44(2): 77-87.

4. Tullock 1966, p. 14.

5. Tullock 1966, p. 168.


CEE February 4, 2018

Division of Labor

Division of labor combines specialization and the partition of a complex production task into several, or many, sub-tasks. Its importance in economics lies in the fact that a given number of workers can produce far more output using division of labor compared to the same number of workers each working alone. Interestingly, this is true even if those working alone are expert artisans. The production increase has several causes. According to Adam Smith, these include increased dexterity from learning, innovations in tool design and use as the steps are defined more clearly, and savings in wasted motion changing from one task to another.

Though the scientific understanding of the importance of division of labor is comparatively recent, the effects can be seen in most of human history. It would seem that exchange can arise only from differences in taste or circumstance. But division of labor implies that this is not true. In fact, even a society of perfect clones would develop exchange, because specialization alone is enough to reward advances such as currency, accounting, and other features of market economies.

In the early 1800s, David Ricardo developed a theory of comparative advantage as an explanation for the origins of trade. And this explanation has substantial power, particularly in a pre-industrial world. Assume, for example, that England is suited to produce wool, while Portugal is suited to produce wine. If each nation specializes, then total consumption in the world, and in each nation, is expanded. Interestingly, this is still true if one nation is better at producing both commodities: even the less productive nation benefits from specialization and trade.
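
Ricardo’s own four numbers (with his cloth relabeled as wool to match the example above) make the gains concrete; the labor endowments are chosen so that self-sufficiency is just feasible.

```python
# Ricardo's four numbers, relabeled so England makes wool (his cloth) and
# Portugal wine. Endowments are set so each country can just produce one
# unit of each good on its own. Portugal is more productive in both goods.
hours = {"England": {"wool": 100, "wine": 120},
         "Portugal": {"wool": 90,  "wine": 80}}   # labor-hours per unit
labor = {"England": 220, "Portugal": 170}          # total hours available

# Self-sufficiency: each country makes one unit of each good.
autarky_world = {"wool": 2.0, "wine": 2.0}

# Specialization by comparative advantage: England in wool, Portugal in wine.
specialized_world = {
    "wool": labor["England"] / hours["England"]["wool"],    # 2.2 units
    "wine": labor["Portugal"] / hours["Portugal"]["wine"],  # 2.125 units
}
print(autarky_world, specialized_world)  # world output of both goods rises
```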

In a world with industrial production based on division of labor, however, comparative advantage based on weather and soil conditions becomes secondary. Ricardo himself recognized this in his broader discussion of trade, as Meoqui points out. The reason is that division of labor produces a cost advantage where none existed before—an advantage based simply on specialization. Consequently, even in a world without comparative advantage, division of labor would create incentives for specialization and exchange.

Origins

The Neolithic Revolution, with its move to fixed agriculture and greater population densities, fostered specialization in both production of consumer goods and military protection. As Plato put it:

A State [arises] out of the needs of mankind; no one is self-sufficing, but all of us have many wants… Then, as we have many wants, and many persons are needed to supply them, one takes a helper… and another… [W]hen these partners and helpers are gathered together in one habitation the body of inhabitants is termed a State… And they exchange with one another, and one gives, and another receives, under the idea that the exchange will be for their good. (The Republic, Book II)

This idea of the city-state, or polis, as a nexus of cooperation directed by the leaders of the city is a potent tool for the social theorist. It is easy to see that the extent of specialization was limited by the size of the city: a clan has one person who plays on a hollow log with sticks; a moderately sized city might have a string quartet; and a large city could support a symphony.

One of the earliest sociologists, Muslim scholar Ibn Khaldun (1332-1406), also emphasized what he called “cooperation” as a means of achieving the benefits of specialization:

The power of the individual human being is not sufficient for him to obtain (the food) he needs, and does not provide him with as much food as he requires to live. Even if we assume an absolute minimum of food –that is, food enough for one day, (a little) wheat, for instance – that amount of food could be obtained only after much preparation such as grinding, kneading, and baking. Each of these three operations requires utensils and tools that can be provided only with the help of several crafts, such as the crafts of the blacksmith, the carpenter, and the potter. Assuming that a man could eat unprepared grain, an even greater number of operations would be necessary in order to obtain the grain: sowing and reaping, and threshing to separate it from the husks of the ear. Each of these operations requires a number of tools and many more crafts than those just mentioned. It is beyond the power of one man alone to do all that, or (even) part of it, by himself. Thus, he cannot do without a combination of many powers from among his fellow beings, if he is to obtain food for himself and for them. Through cooperation, the needs of a number of persons, many times greater than their own (number), can be satisfied. [From Muqaddimah (Introduction), First Prefatory Discussion in chapter 1; parenthetical expression in original in Rosenthal translation]

This sociological interpretation of specialization as a consequence of direction, limited by the size of the city, later motivated scholars such as Emile Durkheim (1858-1917) to recognize the central importance of division of labor for human flourishing.

Smith’s Insight

It is common to say that Adam Smith “invented” or “advocated” division of labor. Such claims are simply mistaken, on several grounds (see, for a discussion, Kennedy 2008). Smith described how decentralized market exchange fosters division of labor among cities or across political units, rather than just within them as previous thinkers had done. Smith had two key insights: First, division of labor would be powerful even if all human beings were identical, because differences in productive capacity are learned. Smith’s parable of the “street porter and the philosopher” illustrates the depth of this insight. As Smith put it:

[T]he very different genius which appears to distinguish men of different professions, when grown up to maturity, is not upon many occasions so much the cause, as the effect of the division of labour. The difference between the most dissimilar characters, between a philosopher and a common street porter, for example, seems to arise not so much from nature, as from habit, custom, and education. (WoN, V. 1, Ch 2; emphasis in original.)

Second, the division of labor gives rise to market institutions and expands the extent of the market. Exchange relations relentlessly push against borders and expand the effective locus of cooperation. The benefit to the individual is that first dozens, then hundreds, and ultimately millions, of other people stand ready to work for each of us, in ways that are constantly being expanded into new activities and new products.

Smith gives an example—the pin factory—that has become one of the central archetypes of economic theory. As Munger (2007) notes, Smith divides pin-making into 18 operations. But that number is arbitrary: labor is divided into the number of operations that fit the extent of the market. In a small market, perhaps three workers, each performing several different operations, could be employed. In a city or small country, as Smith saw, 18 different workers might be employed. In an international market, the optimal number of workers (or their equivalent in automated steps) would be even larger.
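
Smith’s reported figures make the productivity gain vivid: ten specialized workers made upwards of 48,000 pins a day, while he doubted a lone untrained worker could make twenty.

```python
# Smith's pin-factory arithmetic, using the figures he reports in the
# Wealth of Nations.
team_output, team_size = 48_000, 10   # pins per day, ten specialized workers
solo_output = 20                      # Smith's generous guess for one worker

per_worker_specialized = team_output / team_size   # 4,800 pins per worker
gain = per_worker_specialized / solo_output
print(f"productivity multiple from division of labor: {gain:.0f}x")  # 240x
```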

The interesting point is that there would be constant pressure on the factory to (a) expand the number of operations even more, and to automate them through the use of tools and other capital; and to (b) expand the size of the market served with consequently lower-cost pins so that the expanded output could be sold. Smith recognized this dynamic pressure in the form of what can only be regarded today as a theorem, the title of Chapter 3 in Book I of the Wealth of Nations: “That the Division of Labor is Limited by the Extent of the Market.” George Stigler treated this claim as a testable theorem in his 1951 article, and developed its insights in the context of modern economics.

Still, the full importance of Smith’s insight was not recognized and developed until quite recently. James Buchanan presented the starkest description of the implications of Smith’s theory (James Buchanan and Yong Yoon, 2002). While the bases of trade and exchange can be differences in tastes or capacities, market institutions would develop even if such differences were negligible. The Smithian conception of the basis for trade and the rewards from developing market institutions is more general and more fundamental than the simple version implied by deterministic comparative advantage.

Division of labor is a hopeful doctrine. Nearly any nation, regardless of its endowment of natural resources, can prosper simply by developing a specialization. That specialization might be determined by comparative advantage, lying in climate or other factors, of course. But division of labor alone is sufficient to create trading opportunities and the beginnings of prosperity. By contrast, nations that refuse the opportunity to specialize, clinging to mercantilist notions of independence and economic self-sufficiency, doom themselves and their populations to needless poverty.


About the Author

Michael Munger is the Director of the PPE Program at Duke University.


Further Reading

Buchanan, James, and Yong Yoon. 2002. “Globalization as Framed by the Two Logics of Trade,” The Independent Review, 6(3): 399-405.

Durkheim, Emile, 1984. Division of Labor in Society. New York: MacMillan.

Kennedy, Gavin. 2008. “Basic Errors About the Role of Adam Smith.” April 2: http://adamsmithslostlegacy.blogspot.com/2008/04/basic-errors-about-role-of-adam-smith.html

Khaldun, Ibn. 1377. Muqaddimah (Introduction). http://www.muslimphilosophy.com/ik/Muqaddimah/

Morales Meoqui, Jorge. 2015. “Ricardo’s Numerical Example Versus Ricardian Trade Model: A Comparison of Two Distinct Notions of Comparative Advantage.” DOI: 10.13140/RG.2.1.2484.5527/1. Link: https://www.researchgate.net/publication/283206070_Ricardos_numerical_example_versus_Ricardian_trade_model_A_comparison_of_two_distinct_notions_of_comparative_advantage

Munger, Michael. 2007. “I’ll Stick With These: Some Sharp Observations on the Division of Labor.” Indianapolis, Liberty Fund. http://www.econlib.org/library/Columns/y2007/Mungerpins.html

Plato, n.d. The Republic. Translated by Benjamin Jowett. http://classics.mit.edu/Plato/republic.html

Roberts, Russell. 2006. “Treasure Island: The Power of Trade. Part II. How Trade Transforms Our Standard of Living.” Indianapolis, Liberty Fund. http://www.econlib.org/library/Columns/y2006/Robertsstandardofliving.html

Smith, Adam. 1759/1853. (Revised Edition). The Theory of Moral Sentiments, New Edition. With a biographical and critical Memoir of the Author, by Dugald Stewart (London: Henry G. Bohn, 1853). 7/27/2015. http://oll.libertyfund.org/titles/2620

Smith, Adam. 1776/1904. An Inquiry into the Nature and Causes of the Wealth of Nations by Adam Smith, edited with an Introduction, Notes, Marginal Summary and an Enlarged Index by Edwin Cannan (London: Methuen, 1904). Vol. 1. 7/27/2015. http://oll.libertyfund.org/titles/237

Stigler, George. 1951. “The Division of Labor is Limited by the Extent of the Market.” Journal of Political Economy. 59(3): 185-193


CEE February 4, 2018

Hoover’s Economic Policies

When it was all over, I once made a list of New Deal ventures begun during Hoover’s years as Secretary of Commerce and then as president. . . . The New Deal owed much to what he had begun.1 —FDR advisor Rexford G. Tugwell

Many historians, most of the general public, and even many economists think of Herbert Hoover, the president who preceded Franklin D. Roosevelt, as a defender of laissez-faire economic policy. According to this view, Hoover’s dogmatic commitment to small government led him to stand by and do nothing while the economy collapsed in the wake of the 1929 stock market crash. The reality is quite different. Far from being a bystander, Hoover actively intervened in the economy, advocating and implementing policies that were quite similar to those that Franklin Roosevelt later implemented. Moreover, many of Hoover’s interventions, like those of his successor, caused the Great Depression to be “great”—that is, to last a long time.

Hoover’s early career

Hoover, a very successful mining engineer, thought that the engineer’s focus on efficiency could enable government to play a larger and more constructive role in the economy. In 1917, he became head of the wartime Food Administration, working to reduce American food consumption. Many Democrats, including FDR, saw him as a potential presidential candidate for their party in the 1920s. For most of the 1920s, Hoover was Secretary of Commerce under Republican Presidents Harding and Coolidge. As Commerce Secretary during the 1920-21 recession, Hoover convened conferences between government officials and business leaders as a way to use government to generate “cooperation” rather than individualistic competition. He particularly liked using the “cooperation” that was seen during wartime as an example to follow during economic crises. In contrast to Harding’s more genuine commitment to laissez-faire, Hoover began one 1921 conference with a call to “do something” rather than nothing. That conference ended with a call for more government planning to avoid future depressions, as well as using public works as a solution once they started.2 Pulitzer-Prize winning historian David Kennedy summarized Hoover’s work in the 1920-21 recession this way: “No previous administration had moved so purposefully and so creatively in the face of an economic downturn. Hoover had definitively made the point that government should not stand by idly when confronted with economic difficulty.”3 Harding, and later Coolidge, rejected most of Hoover’s ideas. This may well explain why the 1920-21 recession, as steep as it was, was fairly short, lasting 18 months.

Interestingly, though, in his role as Commerce Secretary, Hoover created a new government program called “Own Your Own Home,” which was designed to increase the level of homeownership. Hoover jawboned lenders and the construction industry to devote more resources to homeownership, and he argued for new rules that would allow federally chartered banks to do more residential lending. In 1927, Congress complied, and with this government stamp of approval and the resources made available by Federal Reserve expansionary policies through the decade, mortgage lending boomed. Not surprisingly, this program became part of the disaster of the depression, as bank failures dried up sources of funds, preventing the frequent refinancing that was common at the time, and high unemployment rates made the government-encouraged mortgages unaffordable. The result was a large increase in foreclosures.4

The Hoover presidency

Hoover did not stand idly by after the depression began. To fight the rapidly worsening depression, Hoover extended the size and scope of the federal government in six major areas: (1) federal spending, (2) agriculture, (3) wage policy, (4) immigration, (5) international trade, and (6) tax policy.

Consider federal government spending. (See Fiscal Policy.) Federal spending in the 1929 budget that Hoover inherited was $3.1 billion. He increased spending to $3.3 billion in 1930, $3.6 billion in 1931, and $4.7 billion and $4.6 billion in 1932 and 1933, respectively, a 48% increase over his four years. Because this was a period of deflation, the real increase in government spending was even larger: the real size of government spending in 1933 was almost double that of 1929.5 The budget deficits of 1931 and 1932 were 52.5% and 43.3% of total federal expenditures. No year between 1933 and 1941 under Roosevelt had a deficit that large.6 In short, Hoover was no defender of “austerity” and “budget cutting.”
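
A quick check of the “almost double” claim, using the nominal figures above and an assumed 25 percent fall in the price level between 1929 and 1933 (roughly the deflation of those years):

```python
# Rough check of the real-spending claim. The deflator is an illustrative
# assumption; the nominal figures are those given in the text.
nominal_1929 = 3.1        # billions of dollars
nominal_1933 = 4.6
price_level_1933 = 0.75   # 1929 = 1.00, assuming a 25% fall in prices

real_1933_in_1929_dollars = nominal_1933 / price_level_1933
print(real_1933_in_1929_dollars / nominal_1929)  # about 1.98, nearly double
```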




Shortly after the stock market crash in October 1929, Hoover extended federal control over agriculture by expanding the reach of the Federal Farm Board (FFB), which had been created a few months earlier.7 The idea behind the FFB was to make government-funded loans to farm cooperatives and create “stabilization corporations” to keep farm prices up and deal with surpluses. In other words, it was a cartel plan. That fall, Hoover pushed the FFB into full action, lending to farmers all over the country and otherwise subsidizing farming in an attempt to keep prices up. The plan failed miserably, as subsidies encouraged farmers to grow more, exacerbating surpluses and eventually driving prices way down. As more farms faced dire circumstances, Hoover proposed the further anti-market step of paying farmers not to grow.

On wages, Hoover revived the business-government conferences of his time at the Department of Commerce by summoning major business leaders to the White House several times that fall. He asked them to pledge not to reduce wages in the face of rising unemployment. Hoover believed, as did a number of intellectuals at the time, that high wages caused prosperity, even though the true causation is from capital accumulation to increased labor productivity to higher wages. He argued that if major firms cut wages, workers would not have the purchasing power they needed to buy the goods being produced. As most depressions involve falling prices, cutting wages to match falling prices would have kept purchasing power constant. What Hoover wanted amounted to an increase in real wages, as constant nominal wages would be able to purchase more goods at falling prices. Presumably out of fear of the White House or, perhaps, because it would keep the unions quiet, industrial leaders agreed to this proposal. The result was rapidly escalating unemployment, as firms quickly realized that they could not continue to employ as many workers when their output prices were falling and labor costs were constant.8
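
The real-wage arithmetic is straightforward; the 10 percent deflation below is illustrative.

```python
# Real-wage arithmetic behind Hoover's error, with illustrative numbers.
w_nominal = 1.00      # nominal wage, held fixed at Hoover's urging
price_level = 0.90    # prices fall 10% (deflation)

real_wage = w_nominal / price_level
print(f"real wage rises {real_wage - 1:.1%}")  # about +11.1%
# With output prices down 10% and labor costs unchanged, each worker is
# roughly 11% more expensive in real terms, hence the layoffs.
```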

Of all of the government failures of the Hoover presidency—excluding the actions of the Federal Reserve between 1929 and 1932, over which he had little to no influence—his attempt to maintain wages was the most damaging. Had he truly believed in laissez-faire, Hoover would not have intervened in the private sector that way. Hoover’s high-wage policy was a clear example of his lack of confidence in the corrective forces of the market and his willingness to use governmental power to fight the depression.

Later in his presidency, Hoover did more than just jawbone to keep wages up. He signed two pieces of labor legislation that dramatically increased the role of government in propping up wages and giving monopoly protection to unions. In 1931, he signed the Davis-Bacon Act, which mandated that all federally funded or assisted construction projects pay the “prevailing wage” (i.e., the above market-clearing union wage). The result of this move was to close out non-union labor, especially immigrants and non-whites, and drive up costs to taxpayers. A year later, he signed the Norris-LaGuardia Act, each of whose five major provisions enshrined special privileges for unions in the law, such as prohibiting judges from using injunctions to stop strikes and making union-free contracts unenforceable in federal courts.9 Hoover’s interventions into the labor market are further evidence of his rejection of laissez-faire.

Two other areas that Hoover intervened in aggressively were immigration and international trade. One of the lesser-known policy changes during his presidency was his near halt to immigration through an Executive Order in September 1930. His argument was that blocking immigration would preserve the jobs and wages of American citizens against competition from low-wage immigrants. Immigration fell to a mere 10 to 15% of the allowable quota of visas for the five-month period ending February 28, 1931. Once again, Hoover was unafraid to intervene in the economic decisions of the private sector by preventing the competitive forces of the global labor market from setting wages.10

Even those with only a casual knowledge of the Great Depression will be familiar with one of Hoover’s major policy mistakes—his promotion and signing of the Smoot-Hawley tariff in 1930. This law increased tariffs significantly on a wide variety of imported goods, creating the highest tariff rates in U.S. history. While economist Douglas Irwin has found that Smoot-Hawley’s effects were not as large as often thought, they still helped cause a decline in international trade, a decline that contributed to the worsening worldwide depression.

Most of these policies continued and many expanded throughout 1931, with the economy worsening each month. By the end of the year, Hoover decided that more drastic action was necessary, and on December 8, he addressed Congress and offered proposals that historian David Kennedy refers to as “Hoover’s second program,” and that has also been called “The Hoover New Deal.”11 His proposals included:

The Reconstruction Finance Corporation to lend tax dollars to banks, firms, and other institutions in need.

A Home Loan Bank to provide government help to the construction sector.

Congressional legalization of Hoover’s executive order that had blocked immigration.

Direct loans to state governments for spending on relief for the unemployed.

More aid to Federal Land Banks.

Creating a Public Works Administration that would both better coordinate Federal public works and expand them.

More vigorous enforcement of antitrust laws to end “destructive competition” in a variety of industries, as well as supporting work-sharing programs that would supposedly reduce unemployment.

On top of these spending proposals, most of which were approved in one form or another, Hoover proposed, and Congress approved, the largest peacetime tax increase in U.S. history. The Revenue Act of 1932 increased personal income taxes dramatically, but also brought back a variety of excise taxes that had been used during World War I. The higher income taxes involved an increase of the standard rate from a range of 1.5 to 5% to a range of 4 to 8%. On top of that increase, the Act placed a large surtax on higher-income earners, leading to a total tax rate of anywhere from 25 to 63%. The Act also raised the corporate income tax along with several taxes on other forms of income and wealth.

Whether or not Hoover’s prescriptions were the right medicine—and the evidence suggests that they were not—his programs were a fairly aggressive use of government to address the problems of the depression.12 These programs were hardly what one would expect from a man devoted to “laissez-faire” and accused of doing nothing while the depression worsened.

The views of contemporaries and modern historians

The myth of Hoover as a defender of laissez-faire persists, despite the fact that his contemporaries clearly understood that he made aggressive use of government to fight the recession. Indeed, Hoover’s own statements made clear that he recognized his aggressive use of intervention. The myth also persists in spite of the widespread recognition by modern historians that the Hoover presidency was anything but an era of laissez-faire.

According to Hoover’s Secretary of State, Henry Stimson, Hoover argued that balancing the budget was a mistake: “The President likened it to war times. He said in war times no one dreamed of balancing the budget. Fortunately we can borrow.”13 Hoover himself summarized his administration’s approach to the depression during a campaign speech in 1932:

We might have done nothing. That would have been utter ruin. Instead, we met the situation with proposals to private business and the Congress of the most gigantic program of economic defense and counter attack ever evolved in the history of the Republic. These programs, unparalleled in the history of depressions of any country and in any time, to care for distress, to provide employment, to aid agriculture, to maintain the financial stability of the country, to safeguard the savings of the people, to protect their homes, are not in the past tense—they are in action. . . . No government in Washington has hitherto considered that it held so broad a responsibility for leadership in such time.14

Some might dismiss this as campaign rhetoric, but as the other evidence indicates, Hoover was giving an accurate portrayal of his presidency. Indeed, Hoover’s profligacy was so clear that Roosevelt attacked it during the 1932 Presidential campaign.

Roosevelt’s own advisors understood that much of what they created during the New Deal owed its origins to Hoover’s policies, going as far back as his time at the Commerce Department in the 1920s. Thus the quote at the start of this article by Rex Tugwell, one of the academics at the center of FDR’s “brains trust.” Another member of the brains trust, Raymond Moley, wrote of that period:

When we all burst into Washington . . . we found every essential idea [of the New Deal] enacted in the 100-day Congress in the Hoover administration itself. The essentials of the NRA [National Recovery Administration], the PWA [Public Works Administration], the emergency relief setup were all there. Even the AAA [Agricultural Adjustment Act] was known to the Department of Agriculture. Only the TVA [Tennessee Valley Authority] and the Securities Act was [sic] drawn from other sources. The RFC [Reconstruction Finance Corporation], probably the greatest recovery agency, was of course a Hoover measure, passed long before the inauguration.15

Decades later, Tugwell, writing to Moley, said of Hoover: “[W]e were too hard on a man who really invented most of the devices we used.”16 Members of Roosevelt’s inner circle would have every reason to disassociate themselves from the policies of their predecessor; yet these two men recognized Hoover’s role as the father of the New Deal quite clearly.

Nor is this point lost on contemporary historians. In his authoritative history of the Great Depression era, David Kennedy admiringly wrote that Hoover’s 1932 program of activist policies helped “lay the groundwork for a broader restructuring of government’s role in many other sectors of American life, a restructuring known as the New Deal.”17 In a later discussion of the beginning of the Roosevelt administration, Kennedy observed (emphasis added):

Roosevelt intended to preside over a government even more vigorously interventionist and directive than Hoover’s. . . . [I]f Roosevelt had a plan in early 1933 to effect economic recovery, it was difficult to distinguish from many of the measures that Hoover, even if sometimes grudgingly, had already adopted: aid for agriculture, promotion of industrial cooperation, support for the banks, and a balanced budget. Only the last was dubious. . . . FDR denounced Hoover’s budget deficits.18

Conclusion

Despite overwhelming evidence to the contrary, from Hoover’s own beliefs to his actions as president to the observations of his contemporaries and modern historians, the myth of Herbert Hoover’s presidency as an example of laissez-faire persists. Of all the presidents up to and including him, Herbert Hoover was one of the most active interveners in the economy.


About the Author

Steven Horwitz is Distinguished Professor of Free Enterprise at Ball State University in Muncie, Indiana.


Footnotes

* This entry is adapted, with permission, from Steven Horwitz, “Herbert Hoover: Father of the New Deal,” Cato Institute Briefing Papers, No. 122, September 29, 2011, at: http://www.cato.org/publications/briefing-paper/herbert-hoover-father-new-deal

1. As quoted in Amity Shlaes, The Forgotten Man: A New History of the Great Depression. New York: Harper Collins, 2007, p. 149.

2. Murray N. Rothbard, America’s Great Depression (1963; Auburn, AL: Ludwig von Mises Institute, 2008), p. 192.

3. David M. Kennedy, Freedom From Fear: The American People in Depression and War, 1929-1945. New York: Oxford University Press, 1999, p. 48.

4. See Steven Malanga, “Obsessive Housing Disorder,” City Journal, 19 (2), Spring 2009.

5. Federal government spending data can be found at: http://www2.census.gov/prod2/statcomp/documents/CT1970p2-12.pdf

6. See the data and discussion in Jonathan Hughes and Louis P. Cain, American Economic History, 7th ed., Boston: Pearson, 2007, p. 487. Hughes and Cain also note of those deficits, “The expenditures were in large part the doing of the outgoing Hoover administration.”

7. See Kennedy op. cit., pp. 43-44; Rothbard op. cit., p. 228; and Gene Smiley, Rethinking the Great Depression, Chicago: Ivan R. Dee, 2002, p. 13.

8. See Lee Ohanian, “What – or Who – Started the Great Depression?” Journal of Economic Theory 144, 2009, pp. 2310-2335.

9. Chuck Baird, “Freeing Labor Markets by Reforming Union Laws,” June 2011, Downsizing the Federal Government, Cato Institute, available at http://www.downsizinggovernment.org/labor/reforming-labor-union-laws.

10. See “White House Statement on Government Policies To Reduce Immigration,” March 26, 1931, available at http://www.presidency.ucsb.edu/ws/index.php?pid=22581#axzz1V7klWwZu. That statement opens with an explicit link between the immigration policy and unemployment: “President Hoover, to protect American workingmen from further competition for positions by new alien immigration during the existing conditions of employment, initiated action last September looking to a material reduction in the number of aliens entering this country.”

11. Kennedy op. cit., p. 83. The phrase “Hoover’s New Deal” is from the title of chapter 11 in Rothbard, op. cit.

12. Hoover’s higher tax rates backfired, as they further depressed income-earning activity, reducing the tax base, which in turn led to a fall in tax revenues for 1932.

13. As cited in Kennedy op. cit., p. 79.

14. Herbert Hoover, “Address Accepting the Republican Presidential Nomination,” August 11, 1932.

15. Raymond Moley, “Reappraising Hoover,” Newsweek, June 14, 1948, p. 100.

16. Letter from Rexford G. Tugwell to Raymond Moley, January 29, 1965, Raymond Moley Papers, “Speeches and Writings,” Box 245-49, Hoover Institution on War, Revolution and Peace, Stanford University, Stanford, CA, as cited in Davis W. Houck, “Rhetoric as Currency: Herbert Hoover and the 1929 Stock Market Crash,” Rhetoric & Public Affairs 3, 2000, p. 174.

17. Kennedy, op. cit., p. 83.

18. Kennedy, op. cit., p. 118.


CEE February 4, 2018

Unemployment

Few economic indicators are of more concern to Americans than unemployment statistics. Reports that unemployment rates are dropping make us happy; reports to the contrary make us anxious. But just what do unemployment figures tell us? Are they reliable measures? What influences joblessness?

How Is Unemployment Defined and Measured?

Each month, the federal government’s Bureau of Labor Statistics randomly surveys sixty thousand individuals around the nation. If respondents say they are both out of work and seeking employment, they are counted as unemployed members of the labor force. Jobless respondents who have chosen not to continue looking for work are considered out of the labor force and therefore are not counted as unemployed. Almost half of all unemployment spells end because people leave the labor force. Ironically, those who drop out of the labor force—because they are discouraged, have household responsibilities, or are sick—actually make unemployment rates look better; the unemployment rate includes only people within the labor force who are out of work.
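To make the definition concrete, here is a minimal sketch in Python. The head counts are invented for illustration, and this is not the BLS’s actual estimation procedure, which uses survey weights and seasonal adjustment; the point is only why a discouraged worker who stops searching lowers the measured rate:

```python
# Headline unemployment rate: unemployed searchers divided by the
# labor force (employed + unemployed searchers). Jobless people who
# are not looking for work are outside the labor force entirely.
employed = 900
unemployed_searching = 100
discouraged = 20  # jobless, want work, but have stopped looking

labor_force = employed + unemployed_searching
print(f"measured rate: {unemployed_searching / labor_force:.1%}")  # 10.0%

# If the discouraged workers resumed searching, they would be counted,
# and the measured rate would rise even though no one found a job.
rate = (unemployed_searching + discouraged) / (labor_force + discouraged)
print(f"rate if they searched: {rate:.1%}")  # 11.8%
```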

Not all unemployment is the same. Unemployment can be long term or short term. It can be frictional, meaning someone is between jobs; or it may be structural, as when someone’s skills are no longer demanded because of a change in technology or an industry downturn.

Is Unemployment a Big Problem?

Some say there are reasons to think that unemployment in the United States is not a big problem. In June 2005, for example, 33.5 percent of all unemployed people were under the age of twenty-four, and presumably few of them were the main source of income for their families. One in six of the unemployed is a teenager. Moreover, the average duration of a spell of unemployment is short. In June 2005 it was 16.3 weeks. And the median spell of unemployment is even shorter. In June 2005 it was 7.0 weeks, meaning that half of all spells last 7.0 weeks or less.

On the basis of numbers like the above, many economists have thought that unemployment is not a very large problem. A few weeks of unemployment seems to them like just enough time for people to move from one job to another. Yet these numbers, though accurate, are misleading. Much of the reason why unemployment spells appear short is that many workers drop out of the labor force at least temporarily because they cannot find attractive jobs. Often two short spells of unemployment mean a long spell of joblessness because the person was unemployed for a short time, withdrew from the labor force, and then reentered the labor force.

And even if most unemployment spells are short, most weeks of unemployment are experienced by people who are out of work for a long time. To see why, consider the following example. Suppose that each week, twenty spells of unemployment lasting 1 week begin, and only one begins that lasts 20 weeks. Then the average duration of a completed spell of unemployment would be only 1.9 weeks (40 weeks of unemployment spread over twenty-one spells). But half of all unemployment (half of the total of 40 weeks that the twenty-one people are out of work) would be accounted for by the one spell lasting 20 weeks.
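A few lines of Python reproduce the example’s arithmetic:

```python
# The text's example: each week, twenty 1-week spells of unemployment
# begin and one 20-week spell begins.
spells = [1] * 20 + [20]  # completed spell durations, in weeks

print(f"average spell: {sum(spells) / len(spells):.1f} weeks")  # 1.9

# Weighting by weeks of unemployment rather than by spells:
print(f"share of weeks in the long spell: {20 / sum(spells):.0%}")  # 50%
```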

Something like this example applies in the real world. In June 2005, for example, 42.9 percent of the unemployed had been unemployed for less than five weeks, but 16.9 percent had been unemployed for six or more months.

What Causes Long-Term Unemployment?

To fully understand unemployment, we must consider the causes of recorded long-term unemployment. Empirical evidence shows that two causes are welfare payments and unemployment insurance. These government assistance programs contribute to long-term unemployment in two ways.

First, government assistance increases the measure of unemployment by prompting people who are not working to claim that they are looking for work even when they are not. The work-registration requirement for welfare recipients, for example, compels people who otherwise would not be considered part of the labor force to register as if they were a part of it. This requirement effectively increases the measure of unemployed in the labor force even though these people are better described as nonemployed—that is, not actively looking for work.

In a study using state data on registrants in Aid to Families with Dependent Children and food stamp programs, my colleague Kim Clark and I found that the work-registration requirement actually increased measured unemployment by about 0.5 to 0.8 percentage points. If this same relationship holds in 2005, this requirement increases the measure of unemployment by 750,000 to 1.2 million people. Without the condition that they look for work, many of these people would not be counted as unemployed. Similarly, unemployment insurance increases the measure of unemployment by inducing people to say that they are job hunting in order to collect benefits.

The second way government assistance programs contribute to long-term unemployment is by providing an incentive, and the means, not to work. Each unemployed person has a “reservation wage”—the minimum wage he or she insists on getting before accepting a job. Unemployment insurance and other social assistance programs increase that reservation wage, causing an unemployed person to remain unemployed longer.

Consider, for example, an unemployed person who is accustomed to making $15.00 an hour. On unemployment insurance this person receives about 55 percent of normal earnings, or $8.25 per lost work hour. If that person is in a 15 percent federal tax bracket and a 3 percent state tax bracket, he or she pays $1.49 in taxes per hour not worked and nets $6.76 per hour after taxes as compensation for not working. If that person took a job that paid $15.00 per hour, governments would take 18 percent for income taxes and 7.65 percent for Social Security taxes, netting him or her $11.15 per hour of work. Comparing the two payments, this person may decide that an hour of leisure is worth more than the extra $4.39 the job would pay. If so, this means that the unemployment insurance raises the person’s reservation wage to above $15.00 per hour.

Unemployment, therefore, may not be as costly for the jobless person as previously imagined. But as Harvard economist Martin Feldstein pointed out in the 1970s, the costs of unemployment to taxpayers are very great indeed. Take the example above of the individual who could work for $15.00 an hour or collect unemployment insurance of $8.25 per hour. The cost of unemployment to this unemployed person was only $4.39 per hour, the difference between the net income from working and the net income from not working. And as compensation for this cost, the unemployed person gained leisure, whose value could well be above $4.39 per hour. But other taxpayers as a group paid $8.25 in unemployment benefits for every hour the person was unemployed, and got back in taxes only $1.49 on this benefit. Moreover, they gave up $3.85 in lost tax and Social Security revenue that this person would have paid per hour employed at a $15.00 wage. Net loss to other taxpayers: $10.61 ($8.25 - $1.49 + $3.85) per hour. Multiply this by millions of people collecting unemployment, each missing hundreds of hours of work, and you get a cost to taxpayers in the billions.
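The figures in the last two paragraphs follow mechanically from the stated rates; the sketch below reproduces them. It implements this article’s illustrative example, not actual program rules:

```python
# Reproduce the unemployment-insurance example from the text.
wage = 15.00
ui_benefit = 0.55 * wage               # UI replaces ~55% of earnings: 8.25
ui_tax = (0.15 + 0.03) * ui_benefit    # federal + state tax on UI: 1.49
ui_net = ui_benefit - ui_tax           # net pay for not working: 6.76

work_tax_rate = 0.18 + 0.0765          # income tax + Social Security
work_net = wage * (1 - work_tax_rate)  # net pay for working: 11.15

print(f"worker's gain from taking the job: ${work_net - ui_net:.2f}")  # 4.39

# Taxpayers pay the benefit, recoup the tax on it, and lose the
# revenue the job would have generated.
taxpayer_loss = ui_benefit - ui_tax + wage * work_tax_rate
print(f"taxpayers' loss per hour unemployed: ${taxpayer_loss:.2f}")  # 10.61
```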

Unemployment insurance also extends the time a person stays off the job. Clark and I estimated that the existence of unemployment insurance almost doubles the number of unemployment spells lasting more than three months. If unemployment insurance were eliminated, the unemployment rate would drop by more than half a percentage point, which means that the number of unemployed people would fall by about 750,000. This is all the more significant in light of the fact that less than half of the unemployed receive insurance benefits, largely because many have not worked enough to qualify.

Another cause of long-term unemployment is unionization. High union wages that exceed the competitive market rate are likely to cause job losses in the unionized sector of the economy. Also, those who lose high-wage union jobs are often reluctant to accept alternative low-wage employment. Between 1970 and 1985, for example, a state with a 20 percent unionization rate, approximately the average for the fifty states and the District of Columbia, experienced an unemployment rate that was 1.2 percentage points higher than that of a hypothetical state that had no unions. To put this in perspective, 1.2 percentage points is about 60 percent of the increase in normal unemployment between 1970 and 1985.

There is no question that some long-term unemployment is caused by government intervention and unions that interfere with the supply of labor. It is, however, a great mistake (made by some conservative economists) to attribute most unemployment to government interventions in the economy or to any lack of desire to work on the part of the unemployed. Unemployment was a serious economic problem in the late nineteenth and early twentieth centuries prior to the welfare state and widespread unionization. Unemployment then, as now, was closely linked to general macroeconomic conditions. The Great Depression, when unemployment in the United States reached 25 percent, is the classic example of the damage that collapses in credit can do. Since then, most economists have agreed that cyclical fluctuations in unemployment are caused by changes in the demand for labor, not by changes in workers’ desires to work, and that unemployment in recessions is involuntary.

Even leaving aside cyclical fluctuations, a large part of unemployment is due to demand factors rather than supply. High unemployment in New England in the early 1990s, for example, was due to declines in computer and other industries in which New England specialized. High unemployment in northern California in the early 2000s was caused by the dot-com bust. The process of adjustment following shocks is long and painful, and recent research suggests that even temporary declines in demand can have permanent effects on unemployment, as workers who lose jobs are unable to sell their labor due to a loss of skills or for other reasons. Therefore, most economists who study unemployment support an active government role in training and retraining workers and in maintaining stable demand for labor.

The Natural Rate of Unemployment

Long before Milton Friedman and Edmund Phelps advanced the notion of the natural rate of unemployment (the lowest rate of unemployment tolerable without pushing up inflation), policymakers had contented themselves with striving for low, not zero, unemployment. Just what constitutes an acceptably low level of unemployment has been redefined over the decades. In the early 1960s an unemployment rate of 4 percent was considered both desirable and achievable. Over time, the unemployment rate drifted upward and, for the most part, has hovered around 7 percent. Lately, it has fallen to 5 percent. I suspect that some of the reduction in the apparent natural rate of unemployment in recent years has to do with reduced transitional unemployment, both because fewer people are between jobs and because they are between jobs for shorter periods. Union power has been eroded by domestic regulatory action and inaction, as well as by international competition. More generally, international competition has restrained wage increases in high-wage industries. Another factor making unemployment lower is a decline in the fraction of the unemployed who are supported by unemployment insurance.


About the Author

Lawrence H. Summers is Charles W. Eliot University Professor at Harvard University. He was previously the president of Harvard University. Before that, he was secretary of the U.S. Treasury.


Further Reading

Feldstein, Martin. “The Economics of the New Unemployment.” Public Interest 33 (Fall 1973): 3–42.

Feldstein, Martin. “Why Is Productivity Growing Faster?” NBER Working Paper no. 9530. National Bureau of Economic Research, Cambridge, Mass., 2003.

Friedman, Milton. “The Role of Monetary Policy.” American Economic Review 58 (March 1968): 1–17.

Hall, Robert. “Employment Fluctuations and Wage Rigidity.” Brookings Papers on Economic Activity 1 (1980): 91–141.

Summers, Lawrence H. Understanding Unemployment. Cambridge: MIT Press, 1990.

Summers, Lawrence H. “Why Is the Unemployment Rate So Very High Near Full Employment?” Brookings Papers on Economic Activity 2 (1986): 339–383.

Summers, Lawrence H., and Kim B. Clark. “Labor Market Dynamics and Unemployment: A Reconsideration.” Brookings Papers on Economic Activity 1 (1979): 13–60.


CEE February 4, 2018

Unintended Consequences

The law of unintended consequences, often cited but rarely defined, is that actions of people—and especially of government—always have effects that are unanticipated or unintended. Economists and other social scientists have heeded its power for centuries; for just as long, politicians and popular opinion have largely ignored it.

The concept of unintended consequences is one of the building blocks of economics. Adam Smith’s “invisible hand,” the most famous metaphor in social science, is an example of a positive unintended consequence. Smith maintained that each individual, seeking only his own gain, “is led by an invisible hand to promote an end which was no part of his intention,” that end being the public interest. “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner,” Smith wrote, “but from regard to their own self-interest.”

Most often, however, the law of unintended consequences illuminates the perverse unanticipated effects of legislation and regulation. In 1692 the English philosopher John Locke, a forerunner of modern economists, urged the defeat of a parliamentary bill designed to cut the maximum permissible rate of interest from 6 percent to 4 percent. Locke argued that instead of benefiting borrowers, as intended, it would hurt them. People would find ways to circumvent the law, with the costs of circumvention borne by borrowers. To the extent the law was obeyed, Locke concluded, the chief results would be less available credit and a redistribution of income away from “widows, orphans and all those who have their estates in money.”

In the first half of the nineteenth century, the famous French economic journalist Frédéric Bastiat often distinguished in his writing between the “seen” and the “unseen.” The seen were the obvious visible consequences of an action or policy. The unseen were the less obvious, and often unintended, consequences. In his famous essay “What Is Seen and What Is Not Seen,” Bastiat wrote:

There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen.1

Bastiat applied his analysis to a wide range of issues, including trade barriers, taxes, and government spending.

The first and most complete analysis of the concept of unintended consequences was done in 1936 by the American sociologist Robert K. Merton. In an influential article titled “The Unanticipated Consequences of Purposive Social Action,” Merton identified five sources of unanticipated consequences. The first two—and the most pervasive—were “ignorance” and “error.”

Merton labeled the third source the “imperious immediacy of interest.” By that he was referring to instances in which someone wants the intended consequence of an action so much that he purposefully chooses to ignore any unintended effects. (That type of willful ignorance is very different from true ignorance.) The Food and Drug Administration, for example, creates enormously destructive unintended consequences with its regulation of pharmaceutical drugs. By requiring that drugs be not only safe but efficacious for a particular use, as it has done since 1962, the FDA has slowed down by years the introduction of each drug. An unintended consequence is that many people die or suffer who would have been able to live or thrive. This consequence, however, has been so well documented that the regulators and legislators now foresee it but accept it.

“Basic values” was Merton’s fourth source of unintended consequences. The Protestant ethic of hard work and asceticism, he wrote, “paradoxically leads to its own decline through the accumulation of wealth and possessions.” His final case was the “self-defeating prediction.” Here he was referring to the instances when the public prediction of a social development proves false precisely because the prediction changes the course of history. For example, the warnings earlier in the twentieth century that population growth would lead to mass starvation helped spur scientific breakthroughs in agricultural productivity that have since made it unlikely that the gloomy prophecy will come true. Merton later developed the flip side of this idea, coining the phrase “the self-fulfilling prophecy.” In a footnote to the 1936 article, he vowed to write a book devoted to the history and analysis of unanticipated consequences. Although Merton worked on the book over the next sixty years, it remained uncompleted when he died in 2003 at age ninety-two.

The law of unintended consequences provides the basis for many criticisms of government programs. As the critics see it, unintended consequences can add so much to the costs of some programs that they make the programs unwise even if they achieve their stated goals. For instance, the U.S. government has imposed quotas on imports of steel in order to protect steel companies and steelworkers from lower-priced competition. The quotas do help steel companies. But they also make less of the cheap steel available to U.S. automakers. As a result, the automakers have to pay more for steel than their foreign competitors do. So a policy that protects one industry from foreign competition makes it harder for another industry to compete with imports.

Similarly, Social Security has helped alleviate poverty among senior citizens. Many economists argue, however, that it has carried a cost that goes beyond the payroll taxes levied on workers and employers. Martin Feldstein and others maintain that today’s workers save less for their old age because they know they will receive Social Security checks when they retire. If Feldstein and the others are correct, it means that less savings are available, less investment takes place, and the economy and wages grow more slowly than they would without Social Security.

The law of unintended consequences is at work always and everywhere. People outraged about high prices of plywood in areas devastated by hurricanes, for example, may advocate price controls to keep the prices closer to usual levels. An unintended consequence is that suppliers of plywood from outside the region, who would have been willing to supply plywood quickly at the higher market price, are less willing to do so at the government-controlled price. Thus results a shortage of a good where it is badly needed. Government licensing of electricians, to take another example, keeps the supply of electricians below what it would otherwise be, and thus keeps the price of electricians’ services higher than otherwise. One unintended consequence is that people sometimes do their own electrical work, and, occasionally, one of these amateurs is electrocuted.

One final sobering example is the case of the Exxon Valdez oil spill in 1989. Afterward, many coastal states enacted laws placing unlimited liability on tanker operators. As a result, the Royal Dutch/Shell group, one of the world’s biggest oil companies, began hiring independent ships to deliver oil to the United States instead of using its own forty-six-tanker fleet. Oil specialists fretted that other reputable shippers would flee as well rather than face such unquantifiable risk, leaving the field to fly-by-night tanker operators with leaky ships and iffy insurance. Thus, the probability of spills probably increased and the likelihood of collecting damages probably decreased as a consequence of the new laws.


About the Author

Rob Norton is an author and consultant and was previously the economics editor of Fortune magazine.


Further Reading

Bastiat, Frédéric. “What Is Seen and What Is Not Seen.” Online at: http://www.econlib.org/library/Bastiat/basEss1.html.

Hayek, Friedrich A. New Studies in Philosophy, Politics, Economics and the History of Ideas. Chicago: University of Chicago Press, 1978.

Merton, Robert K. Sociological Ambivalence and Other Essays. New York: Free Press, 1976.


Footnotes

1. Online at: http://www.econlib.org/library/Bastiat/basEss1.html.


CEE February 4, 2018

Urban Transportation

The defining trait of urban areas is density: of people, activities, and structures. The defining trait of urban transportation is the ability to cope with this density while moving people and goods. Density creates challenges for urban transportation because of crowding and the expense of providing infrastructure in built-up areas. It also creates certain advantages because of economies of scale: some transportation activities are cheaper when carried out in large volumes. These characteristics mean that two of the most important phenomena in urban transportation are traffic congestion and mass transit.

Traffic congestion imposes large costs, primarily in terms of lost time. (Economists measure the value of this time by examining situations in which people can trade time for money, such as by choosing different means of travel.) Researchers at the Texas Transportation Institute regularly estimate the costs of urban congestion; their estimate of annual congestion costs per capita in 2001 for seventy-five large U.S. metropolitan areas was $520, representing twenty-six hours of delay and forty-two gallons of fuel. This totals nearly $70 billion.1

But is the cost of congestion too high? Density dictates that we cannot expect to provide unencumbered road space for every person who might like it at 5:00 p.m. on a weekday—any more than one would expect to build a dormitory with a shower for every resident who wants to use one in the morning. Just as an architect might decide how many showers to provide for the dormitory, economists, by knowing how much people value their time and how much it costs to save time by increasing road capacity, can estimate the optimal amount of roadway capacity and the resulting level of congestion.

Virtually all economists agree that congestion in cities around the world is greater than this optimum. They also agree on the reason: driving in the rush hour is priced far below its real social cost. The social cost is the driver’s cost to himself plus the congestion imposed on other drivers. People often drive, therefore, even when the social cost is more than the trip is worth to them because they do not bear the cost of the congestion they cause. Whereas this social cost varies by time of day and location, the individual’s trip price (consisting of operating costs, fuel taxes, and the occasional toll) is more uniform. Even if the price covers the costs of providing road infrastructure, which it probably does not in U.S. cities, it is not serving the purpose of allocating road capacity at peak hours to those who value it most.
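The wedge between private and social cost can be made concrete with a standard textbook congestion model. The sketch below uses a Bureau of Public Roads-style travel-time function; the functional form is conventional, but every parameter value is an illustrative assumption, not a figure from this article:

```python
# Travel time rises with traffic: t(q) = t0 * (1 + a * (q / c)**b).
# All parameters are illustrative, not estimates.
t0 = 10.0        # free-flow travel time, minutes
a, b = 0.15, 4   # conventional BPR shape parameters
c = 2000.0       # nominal capacity, vehicles per hour
vot = 0.30       # value of travel time, dollars per minute

def travel_time(q):
    return t0 * (1 + a * (q / c) ** b)

def external_cost(q):
    # An extra driver slows all q drivers: the efficient (Pigouvian)
    # toll is q * dt/dq, converted to dollars at the value of time.
    dt_dq = t0 * a * b * (q / c) ** (b - 1) / c
    return vot * q * dt_dq

q = 2200.0  # peak flow above nominal capacity
print(f"cost the driver bears:  ${vot * travel_time(q):.2f}")
print(f"cost imposed on others: ${external_cost(q):.2f}")  # the efficient toll
```

Because the external term is invisible to the individual driver, trips worth less than their full social cost still get made; a toll equal to that term is what makes it visible.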

These observations lead directly to the frequent recommendation for “congestion pricing”: a system of prices that vary by time and location, designed to reduce congestion by encouraging people to shift their travel to less socially costly means, places, or times of day. Singapore has had congestion pricing since 1975. London adopted an ambitious pricing system in 2003, initially requiring five pounds (about eight U.S. dollars) to drive in its central area during weekdays. Singapore’s tolls are now collected electronically, and London’s through various off-site means, in both cases with enforcement by video recordings of license plates. In its first year, the London scheme appeared to have increased speeds to and in the central area by 15–20 percent and to have eliminated or diverted 67,500 weekday automobile trips there, with half of these shifting to public transit and another quarter diverting to less congested routes.2

A partial form of congestion pricing has recently been adopted in several U.S. locations. Known as “value pricing,” it applies only to a set of “express lanes” that are adjacent to an unpriced roadway. This scheme has the advantage that paying the price is voluntary, but also the disadvantage that congestion is eliminated for only a fraction of travelers and is even greater for the others than would be the case if the express lanes were opened to everyone. Value pricing has been in place on State Route 91 in the Los Angeles region since late 1995 and on Interstate 15 near San Diego since late 1996.3 Proposals have emerged for a nationwide network of such express lanes to replace the present system of intermittent carpool lanes.4

Since examples of congestion pricing are so few, the consequences of underpricing congested highways are far-reaching. People and businesses have rearranged themselves and their activities in time and place to lessen the impacts of congestion, probably leading to more spread-out land-use patterns (although the land-use impact cannot be precisely predicted from theory). Furthermore, public authorities have responded by building more roadway capacity, including very expensive, wide expressways designed to allow high speeds, even though peak-period users cannot maintain those speeds. The result is a more spread-out urban area with bigger roads than would evolve if congestion pricing were in place.

The effectiveness of building capacity to relieve urban congestion is limited not only by its high cost, but also by the phenomenon of “latent demand” or “induced demand.” Because many potential peak-hour trips are already deterred by the congestion itself, any success in reducing that congestion is partially undone by an influx of these previously latent trips from other routes, hours of the day, or travel modes. As a consequence, adding capacity may still provide considerable benefits by allowing more people to travel when and where they want to, but it will not necessarily reduce congestion. The same problem afflicts other anticongestion policies, such as employer carpooling incentives, mass transit improvements, and land-use controls; moreover, these policies usually provide only weak incentives to change travel behavior.

Now consider mass transit, where economies of scale are critical. Researchers who have compared the costs of serving passenger trips in a given travel corridor via various modes consistently find that automobiles are most economical at low passenger densities, bus transit at medium densities, and rail transit at very high densities. (There is some disagreement about exactly where these thresholds occur, but not about their existence.) As passenger density increases, it becomes worthwhile at some point to pay one driver to serve many passengers by carrying them in a single vehicle, and eventually to incur the high capital cost of building a rail line. However, many rail transit systems recently constructed in the United States are uneconomical because the passenger volumes they carry are too low.5 An attractive alternative in such cases is “bus rapid transit,” in which local bus transit is configured to offer rail-like service quality at costs between those typical of bus and rail. Bus rapid transit was pioneered in Brazil and also operates on selected corridors in Ottawa, Los Angeles, Seattle, Boston, and other cities.6

In addition to the transit agency’s costs, scale economies have another dimension—costs incurred by its users. People using mass transit first have to access a station or bus stop and wait for the vehicle to arrive. Even if they know the schedule, they have to adjust their plans to match it, which is a cost to them. The more transit lines there are in a given area and the more frequent the service, the lower is each user’s cost to reach the station and wait for a vehicle to arrive. Empirical evidence reveals that people care even more about avoiding time spent walking or waiting than about time spent inside a vehicle. So these access costs are quite significant, as are the scale economies that result when increased passenger density leads to greater route coverage and/or frequency of service.
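A short sketch illustrates these user-side scale economies. The half-headway expected wait assumes riders arrive at random, and the factor-of-two weight on walking and waiting time is a common rule of thumb consistent with the evidence just described; all numbers are illustrative:

```python
# Generalized cost of a transit trip, in equivalent in-vehicle minutes,
# weighting out-of-vehicle time more heavily than riding time.
def generalized_minutes(in_vehicle, walk, headway_minutes, weight=2.0):
    expected_wait = headway_minutes / 2  # random arrivals
    return in_vehicle + weight * (walk + expected_wait)

# Doubling frequency (headway 20 -> 10 minutes) halves the expected
# wait, so denser ridership that supports more frequent service
# lowers every existing rider's cost.
print(generalized_minutes(25, walk=8, headway_minutes=20))  # 61.0
print(generalized_minutes(25, walk=8, headway_minutes=10))  # 51.0
```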

Scale economies are behind proposals to use land-use regulation to bolster transit demand by creating areas of high-density residential, commercial, or industrial development. However, many analysts are skeptical about how effective a given measure would be and whether such “transit-oriented developments” can overcome the preferences for low-density living that accompany rising income levels.

Scale economies create a prima facie case for transit subsidies because the social cost of handling a passenger is lowered by the favorable effects on the average cost for everyone. Another argument for transit subsidies is to overcome the inefficiently low price on peak-hour highway travel, if congestion pricing is deemed infeasible. Countering these arguments is the well-documented tendency of transit subsidies to be partly absorbed in higher wages to transit workers, less efficient use of employees, and excessive capital expenditures. This problem could be alleviated by giving the subsidies in the form of fare discounts rather than as grants to transit agencies. If subsidies are justified because of economies of scale in transit, however, then they would be justified for the many other industries with scale economies: it is infeasible and probably unwise to subsidize them all.

Because of scale economies in mass transit, it makes sense to focus service on those few markets with potentially high passenger density, especially suburb-to-downtown commutes and local travel in densely populated low-income areas. Unfortunately, this dictum collides with the political balance typically achieved in metropolitan-wide transit systems, where every participating jurisdiction is eager to receive some service in return for its financial contribution.


Scale economies might make a case for highway subsidies as well, but it is even less clear-cut. Scale economies exist in construction of a given highway, but somewhat less so in an entire network because the cost of intersections rises more than proportionally to their capacity. Furthermore, because highways occupy a significant fraction of scarce urban land, expanding them drives up land prices and/or requires expensive mitigation measures, offsetting any scale economies in constructing them. On balance, there is probably not a strong case for subsidizing urban highway travel.

Today, government provides most urban transportation services and facilities, but this is not necessary, nor was it historically always the norm. Privately built and financed canals, and later “turnpikes,” were important in the industrialization of Britain in the eighteenth century and of the United States in the nineteenth century. And today, innovative private transit providers supply highly valued jitney service or specialized taxi service—sometimes illegally—in many cities around the globe, especially—but not exclusively—in the Third World. Ubiquitous private taxi fleets also play an important role in urban travel, and deregulating entry would bring down taxi fares substantially.

Private enterprise is making something of a comeback in infrastructure provision. A private company is completing Paris’s A86 ring road via tunnels under Versailles, financed by tolls. A similar proposal may break a thirty-year impasse over completing the final link in the Long Beach Freeway near Los Angeles. London is undertaking a controversial privatization of its subway system. In 2004, Texas solicited proposals for private construction and operation of new toll roads, and in 2005 Chicago privatized operation of its Skyway, an important segment of Interstate 90 bringing traffic into the city from the east.7

Evidence suggests that the private sector can carry out transportation activities more cheaply than the public sector can. Many experiments with the private sector have been motivated by huge subsidy increases or evident inefficiency of public sector operations. During the 1980s, all of Britain’s urban bus services outside London were privatized and the markets opened to free entry, resulting in cost savings but also some competitive problems. In most instances, some sort of regulation is needed to offset the market power that can accompany privatization. Success depends on the specifics of the situation and the details of any accompanying regulatory or franchising arrangements.

Urban transportation has historically had a dramatic influence on land-use patterns. Upon the invention of horse-drawn and then electric streetcars, “streetcar suburbs” quickly arose along newly laid tracks. Following World War II, widespread construction of express highways had a similar but even stronger effect, especially in the United States, causing development to spread more widely because automobiles relaxed the need for proximity to a transit line. These developments provided many desired amenities to residents, but also created problems. Whatever one’s judgment about the wisdom of those past decisions, the longevity of buildings makes such trends virtually impossible to reverse. In particular, a dispersed land-use pattern undermines the market potential of mass transit, making it ineffective as a means to counter the automobile’s dominance, even if promoting mass transit might have been a better policy in the first place.

Urban transportation is a vital part of economic activity and responds to well-designed economic policies. Much can be accomplished to improve urban life by using our basic knowledge of economic incentives.


About the Author

Kenneth A. Small, research professor and professor emeritus of economics at the University of California at Irvine, specializes in urban, transportation, and environmental economics—especially highway congestion, air pollution from motor vehicles, and travel demand. Professor Small was a coeditor of the international journal Urban Studies for five years and is now associate editor of Transportation Research B. He received the Distinguished Member Award of the Transportation and Public Utilities Group of the American Economic Association in 1999 and is a Fellow of the Regional Science Association International.


Further Reading

Introductory

Altshuler, Alan, and David Luberoff. Mega-Projects: The Changing Politics of Urban Public Investment. Washington, D.C.: Brookings Institution Press, 2003. Insightful description and analysis of the political changes behind the extraordinary increase in costs of the large U.S. urban infrastructure projects that started around 1970.

Arnott, Richard, and Kenneth Small. “The Economics of Traffic Congestion.” American Scientist 82 (1994): 446–455. Explanation of traffic paradoxes including induced demand, for a scientifically but not economically literate audience.

Downs, Anthony. Still Stuck in Traffic: Coping with Peak-Hour Traffic Congestion. Washington D.C.: Brookings Institution, 2004. A comprehensive look at numerous anticongestion!---- policies and their effectiveness, concluding largely that the only ones that are effective are politically infeasible.

Klein, Daniel B., Adrian T. Moore, and Binyam Reja. Curb Rights: A Foundation for Free Enterprise in Urban Transit. Washington, D.C.: Brookings Institution Press, 1997. Policy-oriented analysis of how the public sector can establish property rights to encourage successful private transit.

Meyer, John R., and José A. Gómez-Ibáñez. Autos, Transit, and Cities. Cambridge: Harvard University Press, 1981. A thorough analysis of urban transportation policy for an educated lay audience.

National Research Council. Curbing Gridlock: Peak-Period Fees to Relieve Traffic Congestion. Washington D.C.: National Academy Press, 1994. Full report by a study panel on congestion pricing. Volume 1 is the report, aimed at a general audience; volume 2 is a collection of commissioned papers.

Pickrell, Don H. “Rising Deficits and the Uses of Transit Subsidies in the United States.” Journal of Transport Economics and Policy 19 (1985): 281–298. Decomposes the dramatic increase in U.S. transit deficits into its sources, finding that about three-fourths of new subsidies were absorbed in higher costs.

White, Peter. “Deregulation of Local Bus Service in Great Britain: An Introductory Review.” Transport Reviews 15 (1995): 185–209. Reviews results of British bus deregulation of 1980s.

Winston, Clifford. “Government Failure in Urban Transportation.” Fiscal Studies 21 (2000): 403–425. A nontechnical summary of inefficiencies in U.S. urban transportation policy drawing on the UK privatization experiment for perspective.

Advanced

Gómez-Ibáñez, José A., William B. Tye, and Clifford Winston, eds. Essays in Transportation Economics and Policy: A Handbook in Honor of John R. Meyer. Washington, D.C.: Brookings Institution Press, 1999. Collection of essays, some technical, on analytical and issue-oriented topics. Can serve as introductory textbook.

Santos, Georgina, ed. Road Pricing: Theory and Evidence. Oxford: Elsevier, 2004. Collection of scholarly articles about congestion pricing and related topics.

Small, Kenneth A. Urban Transportation Economics. Reading, Pa.: Harwood Academic, 1992, 2d ed. 2005. Advanced textbook and reference book.

Small, Kenneth A., and José A. Gómez-Ibáñez. “Urban Transportation.” In Paul Cheshire and Edwin S. Mills, eds., Handbook of Regional and Urban Economics. Vol. 3. New York: North-Holland, 1999. Survey of selected topics, aimed at professional economists.

Winston, Clifford, and Chad Shirley. Alternate Route: Toward Efficient Urban Transportation. Washington, D.C.: Brookings Institution Press, 1998. Analysis of mass transit policy in the United States, with emphasis on quantifying the inefficiencies of transit and highway investment and pricing.


Footnotes

See David Schrank and Tim Lomax, 2003 Urban Mobility Report, available online at: http://mobility.tamu.edu/ums/.

See the Singapore Land Transport Authority Web site on electronic road pricing at: http://www.lta.gov.sg/motoring_matters/index_motoring_erp; and the Transport for London Web site on congestion charging at: http://www.tfl.gov.uk/tfl/cclondon/cc_intro. For other examples around the world, see the University of Minnesota’s Value Pricing Homepage at: http://www.hhh.umn.edu/centers/slp/projects/conpric/.

See the operators’ Web sites at http://www.91expresslanes.com/ and http://sandag.cog.ca.us/index.asp?classrd29fuseactionhome.classhome.

Robert W. Poole Jr. and C. Kenneth Orski, “HOT Networks: A New Plan for Congestion Relief and Better Transit,” Reason Public Policy Institute, Policy Study 305, February 2003, available online at: http://www.rppi.org/ps305.pdf.

See “The Public Purpose” Web site (http://www.publicpurpose.com/) for unabashedly critical and informative evaluations of many rail projects and other topics.

See the Bus Rapid Transit Policy Center Web site at: http://www.gobrt.org/; also Aaron Golub, “Brazil’s Buses: Simply Successful,” Access (University of California Transportation Center, Berkeley) 24 (2004), available online at: http://www.uctc.net/access/access.asp.

On privatization initiatives, see the periodicals Public Works Financing, and the Reason Foundation’s Privatization Watch (http://www.reason.org/pw.shtml), especially “Urban Toll Tunnels Solve Tough Problems” by Robert W. Poole Jr. (http://www.rppi.org/urbantolltunnels.html).


CEE February 4, 2018

Welfare

The U.S. welfare system would be an unlikely model for anyone designing a welfare system from scratch. The dozens of programs that make up the “system” have different (sometimes competing) goals, inconsistent rules, and overlapping groups of beneficiaries. Responsibility for administering the various programs is spread throughout the executive branch of the federal government and across many committees of the U.S. Congress. Responsibilities are also shared with state, county, and city governments, which actually deliver the services and contribute to funding.

The six programs most commonly associated with the “social safety net” are: (1) Temporary Assistance for Needy Families (TANF), (2) the Food Stamp Program (FSP), (3) Supplemental Security Income (SSI), (4) Medicaid, (5) housing assistance, and (6) the Earned Income Tax Credit (EITC). The federal government is the primary funder of all six, although TANF and Medicaid each require a 25–50 percent state funding match. The first five programs are administered locally (by the states, counties, or local federal agencies), whereas EITC operates as part of the regular federal tax system. Outside the six major programs are many smaller government-assistance programs (e.g., Special Supplemental Food Program for Women, Infants and Children [WIC]; general assistance [GA]; school-based food programs; and Low-Income Home Energy Assistance Program [LIHEAP]), which have extensive numbers of participants but pay quite modest benefits.

Welfare reform, brought about through the passage of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996, significantly altered the rules for delivering income support, but it was narrowly focused on one program. The 1996 law replaced Aid to Families with Dependent Children (AFDC) with TANF. SSI and food stamps were also affected, but to a much lesser extent.

Key Programs

The accompanying figures summarize trends in the coverage and expenses of the six major federal safety-net programs over the past three decades. Figure 1 shows the percentage of the American population receiving benefits from each program, and Figure 2 presents the share of federal expenditures spent on each program. The bars in Figure 1 also plot the percentage of Americans classified as being in poverty. In addition to highlighting the evolution of these U.S. welfare programs, the following discussion briefly describes the forms of benefits paid out by programs, along with eligibility criteria.


Figure 1  Percentage of Population Receiving Program Benefits

Temporary Assistance for Needy Families pays cash assistance to single-parent or unemployed two-parent families for a limited term. The program also significantly funds job training and child care as a means to discourage welfare dependency and encourage work.

The origins of TANF are in the Social Security Act of 1935, which established the Aid to Dependent Children (ADC) program. ADC enabled state governments to help single mothers who were widowed or abandoned by their husbands. It was originally designed to allow mothers to stay at home and take care of their children, providing cash benefits for the basic requirements of food, shelter, and clothing. The program was expanded in the 1950s and 1960s to provide cash assistance to needy children and families regardless of the reason for parental absence. This expansion coincided with renaming the program Aid to Families with Dependent Children. While AFDC was principally a federal program managed by the Department of Health and Human Services, it was administered through state-run welfare offices. Indeed, states were responsible for organizing the program, determining benefits, establishing income and resource limits, and setting actual benefit payments. With relatively little flexibility, an AFDC program in New York City looked a lot like its counterpart in Reno, Nevada, apart from differences in the maximum amount each state paid to a family for assistance. Funding for AFDC was shared between the federal and state governments, with the federal government covering a higher portion of AFDC benefit costs in states with lower-than-average per capita income. As with many other welfare programs, AFDC’s costs were not capped because the program was an “entitlement”—meaning that qualified families could not be refused cash assistance.

By the early 1990s, many policymakers were seeking alternatives to AFDC. Although the average monthly benefit in 1995 was only $376.70 per family and $132.64 per recipient, 40 percent of applicants remained on welfare for two years or longer. In response to this dependency, in 1996, Congress passed and President Bill Clinton signed the Personal Responsibility and Work Opportunity Reconciliation Act, which replaced AFDC with TANF. Under the new program, the federal government eliminated the entitlement to cash welfare, placed limits on the length of time families could collect benefits, and introduced work requirements. By law, a family cannot receive TANF benefits for more than a lifetime limit of five years, cumulative across welfare spells. Regarding work requirements, TANF mandated that at least 50 percent of recipients participate in “work” activities by 2002, with activities including employment, on-the-job training, vocational education, job search, and community service. Together, these activities must account for thirty hours per week for a single parent. Recipients who refuse to participate in work activities must be sanctioned, resulting in a loss of cash benefits. Enforcement of sanctions could include immediately suspending all cash payments, stopping support only after multiple episodes of noncompliance, or only partially reducing grants to families who fail to cooperate. States could, and in fact did, introduce more stringent requirements for families to work or participate in educational activities to qualify for cash payments. TANF cemented the primary emphasis on getting welfare recipients into jobs.


Figure 2  Program Spending as a Percentage of Federal Outlays

Figures 1 and 2 reveal that growth in neither costs nor enrollments motivated the passage of welfare reform in 1996. Program expenditures have accounted for less than 3 percent of the federal budget since 1975. The caseload remained relatively stable until the mid-1990s. After welfare reform, however, the welfare caseload and welfare spending as a percentage of government spending dropped sharply.

The Food Stamp Program, authorized as a permanent program in 1964, provides benefits to low-income households to buy nutritional, low-cost food. After 1974, Congress required all states to offer the program. Recipients use coupons and electronic benefits transfer (EBT) cards to purchase food at authorized retail stores. There are limitations on what items can be purchased with food stamps (e.g., they cannot be used to purchase cigarettes or alcohol). Recipients pay no tax on items purchased with food stamps. The federal government is entirely responsible for the rules and the complete funding of FSP benefits under the auspices of the Department of Agriculture’s Food and Nutrition Service (FNS). State governments, through local welfare offices, have primary responsibility for administering the Food Stamp Program. They determine eligibility, calculate benefits, and issue food stamp allotments.

Welfare reform imposed work requirements on recipients and allowed states to streamline administrative procedures for determining eligibility and benefits. Childless recipients between the ages of eighteen and fifty became ineligible for food stamps if they received benefits for more than three months while not working. According to Figure 1, the FSP caseload has included between 6 and 10 percent of the U.S. population, following a cyclical pattern before welfare reform: during recessions, the caseload percentage was higher. Welfare reform caused a decline in the FSP caseload percentage.

Supplemental Security Income, authorized by the Social Security Act in 1974, pays monthly cash benefits to needy individuals whose ability to work is restricted by blindness or disability. Families can also receive payments to support disabled children. Survivor’s benefits for children are authorized under Title II of the Social Security Act, not Title XVI, and are, therefore, not part of the SSI program. Although one cannot receive SSI payments and TANF payments concurrently, one can receive SSI and Social Security simultaneously. (In 2003, 35 percent of all SSI recipients also received Social Security benefits, and 57 percent of aged SSI recipients were Social Security beneficiaries.) The average SSI recipient received almost $5,000 in annual payments in 2003, with the average monthly federal payment being $417, and many state governments supplemented the basic SSI benefits with their own funds.

Welfare reforms and related immigration legislation in 1996–1997 sought to address three areas of perceived abuse in the SSI program. First, the legislation set up procedures to help ensure that SSI payments are not made to prison inmates. Second, the legislation eliminated benefits to less-disabled children, particularly children with behavioral problems rather than physical disorders. Finally, new immigrants were deemed ineligible for benefits prior to becoming citizens.

Medicaid became law in 1965, under the Social Security Act, to assist state governments in providing medical care to eligible needy persons. Medicaid provides health-care services to more than 49.7 million low-income individuals who fall into one or more of the following categories: aged, blind, disabled, members of families with dependent children, or certain other children and pregnant women. Medicaid is the largest government program providing medical and health-related services to the nation's poorest people, and it is the largest single funding source for nursing homes and institutions for people with intellectual disabilities.

Within federal guidelines, each state government is responsible for designing and administering its own program. Individual states determine persons covered, types and scope of benefits offered, and amounts of payments for services. Federal law requires that a single state agency be responsible for the administration of the Medicaid program; generally it is the state welfare or health agency. The federal government shares costs with states by means of an annually adjusted variable matching formula.
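To make "variable matching formula" concrete, the sketch below implements the statutory Federal Medical Assistance Percentage (FMAP), under which the federal share of Medicaid costs falls as a state's per capita income rises relative to the national average, with a floor of 50 percent and a ceiling of 83 percent. This is a simplified illustration with made-up income figures, not an official calculator.

```python
def fmap(state_pci: float, national_pci: float) -> float:
    """Federal Medical Assistance Percentage (simplified sketch):
    the federal share of Medicaid costs falls as a state's per
    capita income (PCI) rises, bounded between 50% and 83%."""
    share = 1.0 - 0.45 * (state_pci / national_pci) ** 2
    return min(max(share, 0.50), 0.83)

print(fmap(30_000, 30_000))  # 0.55  (state at the national average)
print(fmap(40_000, 30_000))  # 0.50  (high-income state; the floor binds)
print(fmap(20_000, 30_000))  # 0.80  (low-income state)
```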

The Medicaid program has more participants than any other major welfare program. More than 17 percent of the population received Medicaid benefits in 2002, up from about 10 percent in the 1970s and 1980s. Spending on Medicaid has risen steadily as a fraction of the federal budget, increasing from approximately 2 percent in 1975 to 13 percent in 2002. Total outlays for the Medicaid program in 2002 (federal and state) were $259 billion, and per capita expenditures on Medicaid beneficiaries averaged $4,291.

Housing assistance covers a broad range of efforts by federal and state governments to improve housing quality and to reduce housing costs for lower-income households. The Department of Housing and Urban Development (HUD) and the Federal Housing Administration (FHA) administer most federal housing programs. Under current programs, each resident pays approximately 30 percent of his or her income for rent.
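A hypothetical sketch of that 30-percent rule: the tenant contributes roughly 30 percent of monthly income toward rent, and the subsidy covers the gap up to the unit's rent. (Actual Section 8 rules use adjusted income and local payment standards; this is only an illustration.)

```python
def housing_subsidy(monthly_income: float, unit_rent: float) -> float:
    """Tenant pays about 30% of income toward rent; the subsidy
    covers the remainder (and is never negative)."""
    tenant_share = 0.30 * monthly_income
    return max(unit_rent - tenant_share, 0.0)

# A family with $1,000/month in income renting a $900/month unit
# (the rent assumed in Table 1's footnotes) pays about $300 and
# receives roughly $600/month in assistance.
print(housing_subsidy(1_000, 900))  # 600.0
```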

In terms of welfare policy, there are two principal types of housing assistance for low-income families: subsidized rent and public housing. The federal government has provided rental subsidies since the mid-1930s and today funds the HUD Section 8 voucher program. Local governments also commonly provide subsidized housing by requiring, through their building authorities, that a portion of new construction be made available to low-income families at below-market rents. Public housing (the actual provision of dwellings) is almost exclusively a federal program administered by local public housing agencies (PHAs), not private owner-managers. In contrast to the mid-1960s, public housing now accounts for only a small fraction of overall housing assistance.

The Earned Income Tax Credit (EITC), enacted in 1975, is a refundable tax credit for working Americans with low earnings. The credit increases family income by supplementing earnings up to a fixed level. The program was initially designed to offset the impact of Social Security taxes on low-income individuals and to encourage individuals to seek employment rather than rely on welfare benefits. Because the EITC is part of the regular federal income tax system, receiving benefits is private, unremarkable, and without stigma. In 2004, the EITC paid out $33.1 billion to approximately 18.6 million claimants, several billion dollars more than the amounts projected to be spent on other primary programs such as TANF and food stamps. The EITC is one of the few programs that effectively reach the eligible population: analysis of EITC claims in 1999 shows that 86 percent of eligible families with children received the credit. (In contrast, only 66 percent of eligible households with children received food stamp benefits in 1999.) Although the EITC is generally paid all at once as an annual refund, it can also be included with an employee's weekly, biweekly, or monthly paycheck.

Table 1. Benefits, Taxes, and Disposable Income for a Family of Four

Earnings ($) | TANF | Food Stamps | SSI | Sec. 8 Housing | Federal EITC | Health Care | Federal Payroll Taxes | Federal Income Tax | Taxes, EITC | Taxes, EITC, TANF, FSP | Taxes, EITC, TANF, FSP, Sec. 8
0 | 8,148 | 5,988 | 9,660 | 10,800 | 0 | MNP | 0 | 0 | 0 | 12,663 | 20,611
4,000 | 7,498 | 5,510 | 8,170 | 9,400 | 1,600 | MNP | -306 | 0 | 5,294 | 16,503 | 23,279
8,000 | 5,498 | 4,550 | 6,170 | 8,000 | 3,200 | MNP | -612 | 0 | 10,588 | 19,317 | 25,393
12,000 | 3,498 | 3,590 | 4,170 | 6,600 | 4,300 | MNP | -918 | 0 | 15,382 | 21,631 | 27,007
16,000 | 0 | 2,630 | 2,170 | 5,200 | 4,101 | Child 6–19 | -1,224 | 0 | 18,877 | 21,507 | 26,707
20,000 | 0 | 1,670 | 170 | 3,800 | 3,261 | Child 1–6 | -1,530 | 0 | 21,731 | 23,401 | 27,201
24,000 | 0 | 710 | 0 | 2,400 | 2,421 | Child 1–6 | -1,836 | -190 | 24,395 | 25,105 | 27,505

Columns two through seven are program payments; the two tax columns are tax costs (negative values are amounts paid); the last three columns report disposable income under the listed combination of taxes, credits, and programs.



Level of Benefits and Impacts on Work Incentives

How much do the above safety-net programs pay in benefits? Table 1 presents the benefit levels provided to a qualifying family whose annual earnings equal the amounts listed in the first column of the table. Calculations in this table assume that the family includes a father, a mother, and two children below the age of eighteen, and that this family lives in California.1 According to the row in the table for a family that earns $8,000 a year, this family would be eligible to receive $5,498 from TANF, $4,550 from food stamps, $6,170 from SSI, $8,000 in housing benefits from the Section 8 program, and $3,200 from the EITC, for a total of $27,418 in government assistance. Moreover, this family would qualify for Medicaid's Medically Needy Program (MNP), wherein all family members would receive zero-price health care. On reaching $16,000 in earnings, the family would qualify for Medicaid's Children Ages 6 to 19 Program (Child 6–19), which provides zero-price health care to all children in the family; and at $20,000 in earnings, the family would qualify for Medicaid's Children Ages 1 to 6 Program (Child 1–6), which offers zero-price health care to all children ages six and below.

To determine the disposable income available to a family, one adds the family's earnings to the payments it receives in program benefits and then subtracts the amounts paid in taxes. A family faces up to three categories of taxes: Social Security payroll taxes, federal income tax, and state income tax. The eighth and ninth columns of the table show the amounts a family of four must pay in payroll and federal income taxes at the various levels of earnings; the negative values in these columns indicate payments that subtract from income rather than add to it. The table does not include a column for state taxes because none are paid at any of the income levels considered. The last three columns of the table report a family's disposable income for each level of earnings, assuming participation in the programs listed in the associated column heading. The family that earns $8,000 receives $10,588 in disposable income for the year when it chooses not to participate in any welfare program and obtains benefits only from the EITC. This family's disposable income grows to $19,317 if it decides to take TANF and food stamps, and to $25,393 if it also chooses to obtain assistance for rent.2
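As a concrete check of this arithmetic, the following Python sketch (illustrative only, with figures taken from Table 1) reproduces the EITC-only disposable-income figure for the family earning $8,000:

```python
# Disposable income = earnings + benefits received - taxes paid.
# Figures are taken from Table 1's row for $8,000 in earnings.

earnings = 8_000
eitc = 3_200
payroll_tax = 612        # Social Security payroll tax owed
federal_income_tax = 0   # no federal income tax at this earnings level

disposable = earnings + eitc - payroll_tax - federal_income_tax
print(disposable)  # 10588, matching the "Taxes, EITC" column
```

Note that adding TANF and food stamps does not simply tack on the $5,498 and $4,550 listed in the table: as footnote 2 explains, TANF payments count as income when food stamp benefits are computed, which is why the table's "Taxes, EITC, TANF, FSP" figure is $19,317 rather than $20,636.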

Note, by looking at the "Taxes, EITC" column, that a family increases its disposable income by $5,294 when it raises its earnings from zero to $4,000. In this range of earnings, work is rewarded: because of the EITC supplement, the family's disposable income actually rises by more than the $4,000 in extra earnings. But if a family participating in TANF and food stamps raises its earnings from $12,000 to $16,000, its government benefits fall so much that its disposable income literally declines, by $124 (see the "Taxes, EITC, TANF, FSP" column). This happens because program benefits fall as earnings rise. Families facing these latter circumstances (earning $12,000) clearly have no incentive to increase their work effort, since they will see no enhancement of their spending power. If one alters the family's situation so that it also participates in housing programs, the last column shows that raising earnings from $12,000 to $24,000 yields merely $498 more in disposable income. Such features of our welfare system sharply reduce the returns to work and thereby discourage families from increasing their work effort. The U.S. welfare system enhances work incentives at low levels of earnings but discourages work thereafter. To counterbalance these disincentives, welfare reform in the mid-1990s introduced requirements that families work above specific thresholds in order to qualify for benefits. A quick way to see the disincentive is to compute the implicit marginal tax rate along the "Taxes, EITC, TANF, FSP" column, as in the sketch below.
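The following Python sketch (illustrative only; all figures from Table 1) computes the implicit marginal tax rate between successive earnings levels, defined as one minus the change in disposable income divided by the change in earnings. A rate above 100 percent means that earning more leaves the family with less.

```python
# Implicit marginal tax rates along Table 1's "Taxes, EITC, TANF, FSP" column.
#   rate = 1 - (change in disposable income) / (change in earnings)

earnings   = [0, 4_000, 8_000, 12_000, 16_000, 20_000, 24_000]
disposable = [12_663, 16_503, 19_317, 21_631, 21_507, 23_401, 25_105]

for (e0, e1), (d0, d1) in zip(zip(earnings, earnings[1:]),
                              zip(disposable, disposable[1:])):
    rate = 1 - (d1 - d0) / (e1 - e0)
    print(f"${e0:>6,} -> ${e1:>6,}: implicit marginal rate {rate:6.1%}")
```

Run against the table, the rate is about 4 percent over the first $4,000 of earnings but exceeds 100 percent between $12,000 and $16,000, where disposable income falls by $124.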

Future Directions

Welfare reform was enacted to promote self-sufficiency and to improve flexibility in the design of income-maintenance programs. To a large extent, these goals have been achieved. TANF has brought about substantial increases in the work activities of low-income families and enhanced states’ flexibility to create welfare systems unique to their constituencies. States are using the monies they are allocated in a more efficient manner—focusing on job readiness, child care, education, and work placement.

What other policy trends characterize the evolution of the welfare system in the United States today? Briefly, two key forces are changing the basic relationship between the government and welfare recipients across all programs.

First, welfare programs at all levels are being geared toward more work-related activities. Nearly every program gives priority to parents who show a willingness and commitment to work. At the same time, able-bodied adults who refuse to work now find themselves disqualified from many programs. The emphasis on work has only gained strength since 1996: proposals for the reauthorization of welfare reform generally push for stricter work requirements and a longer work week.

Second, there has been a movement from pure in-kind provision to voucher-based systems. In-kind provision represents government efforts both to fund services for the poor and to deliver them directly. Vouchers are being emphasized not only for shelter but also for food, health care, job training, and child care, among other services. A cash-equivalent voucher is provided directly to the person served, who then redeems it at any qualified or authorized provider. This approach brings some of the advantages of market-based economics to the provision of welfare: the recipient spends dollars on the things he or she wants most. A classic example is public housing. HUD provides the funding for most public housing, and local government housing authorities use it to buy or build publicly owned residential units. This is an inefficient use of funds that segregates low-income families into common facilities, typically duplicating housing that is widely available in the private market. Over the past decade, HUD and other government providers have opted to fund more voucher-based, Section 8-type housing, allowing recipients greater choice in where they live.

Although welfare reform has achieved success in a short amount of time, more reform is needed. Of the many government assistance programs, only one, TANF, has seen significant reform. The remaining programs (food stamps, SSI, housing assistance, Medicaid, and the EITC) are about as inflexible as ever and largely ignore what is going on in the rest of the system. Future policy initiatives are likely to push these programs in the direction set for TANF by the 1990s welfare reform, with the two trends above continuing to shape new reforms.

Does Welfare Help the Poor?

David Henderson

Economists believe that people tend to make decisions that benefit themselves, so the answer to the above question seems obvious. If welfare did not help the poor, then why would so many of them go on welfare? This self-interest among the poor could also explain a phenomenon noted by those who study welfare, namely that only about one-half to two-thirds of those who qualify for welfare programs are enrolled in them. Presumably, the others have decided that it is in their self-interest to refuse the money and keep the government from meddling in their lives.

So, while it seems clear that welfare helps the poor who accept welfare, that does not mean that welfare helps the poor generally. Two groups of poor people, not counted in the welfare statistics, are hurt by welfare. The first group consists of the future poor. Economists know that welfare is a disincentive to work, and, therefore, that its existence reduces an economy’s output. If even some of this output would have been used for research and development, and if this forgone R&D would have increased growth, then welfare hurts growth by reducing R&D. If the annual growth rate of GDP in the United States had been just one percentage point lower between 1885 and 2005, then the United States today would be no richer than Mexico. The main thing that helps all poor people in the long run is economic growth. Even though the 1920s are thought of as a decade of prosperity, by today’s standards almost all Americans in the 1920s were poor. Economic growth made almost all Americans richer than their counterparts of the 1920s. A reduction in economic growth, even a slight one, if compounded, causes more future poverty than would otherwise have been the case.
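The growth claim rests on simple compounding; here is a quick check of the arithmetic (the comparison to Mexico comes from the article's source, not from this calculation):

```python
# One percentage point of annual growth, compounded over 1885-2005.
years = 2005 - 1885          # 120 years
factor = 1.01 ** years
print(f"1.01^{years} = {factor:.2f}")  # about 3.30
```

That is, an economy growing one point slower for 120 years ends up with roughly 30 percent of the income per person it would otherwise have had.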

The second group hurt by U.S. welfare is poor foreigners. The welfare state acts as a magnet for poor immigrants to the United States. Because of this, there are various domestic pressures to limit immigration. Without the welfare state, the number of immigrants would likely rise substantially, meaning that many previously poor foreigners would become much richer. The welfare state limits this improvement.

Based on Tyler Cowen, “Does the Welfare State Help the Poor?” Social Philosophy and Policy 19, no. 1 (2002): 36–54.


About the Authors

Thomas MaCurdy is the Dean Witter Senior Fellow at the Hoover Institution and a professor of economics at Stanford University. He is a member of standing committees that advise the Congressional Budget Office, the U.S. Bureau of Labor Statistics, and the U.S. Census. Jeffrey M. Jones is a research fellow at the Hoover Institution. He was previously executive director of Promised Land Employment Service.


Further Reading

 

DeParle, Jason. American Dream: Three Women, Ten Kids, and a Nation’s Drive to End Welfare. New York: Viking Books, 2004.

Jones, Jeffrey, and Thomas MaCurdy. “How Not to Mess Up a Good Thing.” Hoover Digest, no. 2. Stanford, Calif.: Hoover Institution Press, 2003. Pp. 99–108. Online at: http://www.hooverdigest.org/032/jones.html.

MaCurdy, Thomas, and Frank McIntyre. Helping Working-Poor Families: Advantages of Wage-Based Tax Credits over the EITC and Minimum Wages. Washington, D.C.: Employment Policies Institute, 2004. Online at: http://www.epionline.org/studies/macurdy_04-2004.pdf.

Malanga, Steven. “The Myth of the Working Poor.” City Journal (Autumn 2004). New York: Manhattan Institute, 2004. Online at: http://www.city-journal.org/html/14_4_working_poor.html.

Murray, Charles. Losing Ground: American Social Policy 1950–1980. New York: Basic Books, 1984.

O’Neill, June, and M. Anne Hill. “Gaining Ground, Moving Up: The Change in the Economic Status of Single Mothers Under Welfare Reform.” Civic Report, no. 35 (March 2003). New York: Manhattan Institute, 2003. Online at: http://www.manhattan-institute.org/html/cr_35.htm.

Rector, Robert, and Patrick F. Fagan. “The Continuing Good News About Welfare Reform.” Backgrounder no. 1620. Washington, D.C.: Heritage Foundation, 2003. Online at: http://www.heritage.org/Research/Welfare/BG1620.cfm.

Tanner, Michael. The Poverty of Welfare: Fighting Poverty in Civil Society. Washington, D.C.: Cato Institute, 2003.

2004 Green Book. Washington, D.C.: Committee on Ways and Means, U.S. House of Representatives, 2004. Online at: http://waysandmeans.house.gov/Documents.asp?section=813.

 


Footnotes

1. To qualify for low-income assistance, the family must have less than two thousand dollars in financial and housing assets. For the calculation of housing benefits, Table 1 assumes that the family pays nine hundred dollars in rent per month. In some circumstances, eligible benefit levels may be affected by dual-enrollment restrictions (e.g., a family cannot receive TANF and SSI concurrently).

2. In the calculation of food stamps and housing benefits, payments from TANF count as income, which lowers those benefits below the amounts listed in the table. The program benefits listed in the first set of columns assume that the family participates only in that particular program.


Here are the 10 latest posts from Econlib.

Econlib May 26, 2019

McArdle’s Confusion About Costs of Inputs

One of the best economic journalists in the United States is Megan McArdle of the Washington Post. That makes her error in a recent WaPo article all the more striking. Don Boudreaux at Cafe Hayek has pointed out the error. But I want to do my own analysis because it’s a more general error that […]

The post McArdle’s Confusion About Costs of Inputs appeared first on Econlib.

Econlib May 26, 2019

Fiscal austerity in Japan

Because history is written by Keynesians, many people have a highly distorted view of fiscal policy.  They attribute the double-dip Eurozone recession (2011-13) to fiscal austerity, whereas it was actually caused by an extremely tight money policy at the ECB.  Or they think the Great Inflation of 1966-81 was triggered by an expansionary fiscal […]

The post Fiscal austerity in Japan appeared first on Econlib.

Econlib May 24, 2019

Socialist Fantasies

Ludwig von Mises’s essay “Economic Calculation in the Socialist Commonwealth” references Aristophanes’ play The Birds and the medieval fantasy of the idyllic and work-free Land of Cockaigne when Mises notes of socialist planners that “Economics as such figures all too sparsely in the glamorous pictures painted by the Utopians. They invariably explain how, in the […]

The post Socialist Fantasies appeared first on Econlib.

Econlib May 23, 2019

The confusing terminology of monetary policy

As if monetary policy were not confusing enough, the terminology is also ambiguous, with terms used in inconsistent ways. For instance, is the Fed targeting interest rates, or is it targeting inflation?  Consider the following flow chart, showing two possible monetary policy targets.  At the bottom you have the actual tools that the central bank […]

The post The confusing terminology of monetary policy appeared first on Econlib.

Econlib May 22, 2019

“Socialism”: The Provocative Equivocation

The socialists are back, but is it a big deal?  It’s tempting to say that it’s purely rhetorical.  Modern socialists don’t want to emulate the Soviet Union.  To them, socialism just means “Sweden,” right?  Even if their admiration for Sweden is unjustified, we’ve long known that the Western world contains millions of people who want […]

The post “Socialism”: The Provocative Equivocation appeared first on Econlib.

Econlib May 22, 2019

“Things have changed”

The Wall Street Journal describes the views of Judy Shelton, one of the names mentioned for a position on the Fed’s Board of Governors: She wrote critically in the weeks before that election about how the Fed’s low-rate policies were boosting wealthy investors and corporations at the expense of working Americans and retirees with […]

The post “Things have changed” appeared first on Econlib.

Econlib May 21, 2019

George Warren, market monetarist

Market monetarists like myself have criticized the “wait and see” approach used by many macroeconomists. This refers to the tendency of economists to watch how the economy plays out over time, after a new policy initiative. This technique is not reliable, as the economy is constantly being buffeted by all sorts of shocks, and it’s […]

The post George Warren, market monetarist appeared first on Econlib.

Econlib May 21, 2019

An Alternative Perspective on Anglo-American Economic Liberties

As economic freedom gains traction in different spheres of the world, with marked successes most notable in Asia of late, understanding how markets come about within a state of economic freedom is tied closely to limitations on arbitrary executive powers. Michael Patrick Leahy’s Covenant of Liberty: The Ideological Origins of the Tea Party Movement assesses […]

The post An Alternative Perspective on Anglo-American Economic Liberties appeared first on Econlib.

Econlib May 21, 2019

GOT’s final season may have been disappointing, but not on politics

There are a number of remarkable things about Game of Thrones. One is of course how millions of people are, synchronously, watching the series’ ending. This sort of collective TV viewing was once reserved for big sports matches, or perhaps for a few great rock concerts, like Live Aid. Many people have commented on the […]

The post GOT’s final season may have been disappointing, but not on politics appeared first on Econlib.

Econlib May 20, 2019

Letter from an “Anti-School Teacher”

I recently received this email from a self-styled “anti-school teacher.”  Reprinted unchanged with permission of the author, Samuel Mosley. Dear Professor Caplan, My name is Samuel Mosley. I studied economics at Beloit College, my advisor was a former graduate student of yours, Laura Grube. I recently read The Case Against Education and it explained so much of […]

The post Letter from an “Anti-School Teacher” appeared first on Econlib.
