Econlib

The Library of Economics and Liberty is dedicated to advancing the study of economics, markets, and liberty. Econlib offers a unique combination of resources for students, teachers, researchers, and aficionados of economic thought.

Econlib publishes three to four new economics articles and columns each month. The latest articles and columns are made available to the public on the first Monday of each month.

All Econlib articles and columns are written exclusively for the Library of Economics and Liberty by renowned professors, researchers, and journalists worldwide, on a wide range of economics topics. All articles and columns are retained online free of charge for public readership. Many are discussed in concurrent comments and debate on our blog EconLog.

EconLog

The Library of Economics and Liberty features the popular daily blog EconLog. Bloggers Bryan Caplan, David Henderson, Alberto Mingardi, Scott Sumner, Pierre Lemieux and guest bloggers write on topical economics of interest to them, illuminating subjects from politics and finance, to recent films and cultural observations, to history and literature.

EconTalk

The Library of Economics and Liberty carries the podcast EconTalk, hosted by Russ Roberts. The weekly talk show features one-on-one discussions with an eclectic mix of authors, professors, Nobel laureates, entrepreneurs, leaders of charities and businesses, and people on the street. The emphasis is on using topical books and the news to illustrate economic principles. Exploring how economics emerges in practice is a primary theme.

CEE

The Concise Encyclopedia of Economics features authoritative editions of classics in economics and related works in history, political theory, and philosophy, complete with definitions and explanations of economics terms and ideas.

Visit the Library of Economics and Liberty

Recent Posts

Here are the 10 latest posts from EconLog.

EconLog September 15, 2019

Samantha Power: The Woman of System, by David Henderson

 

We need to restore a constituency and a faith that we can have a productive foreign policy, and I think that part of what that will entail is putting diplomacy and burden sharing at the front of our messaging and of our packaging and of our actions. Right now, humanitarian intervention, if it happens — and it’s happening in different places around the world — but is much more likely to be done by regional organizations like the African Union than it is to be orchestrated by great powers.

And given the lagging public opinion, until some successful actions are prosecuted — depending, again, on the circumstances — that may be a wise thing. But that doesn’t mean that the United States doesn’t have a role, for example, in training and equipping the troops that are going into a Central African Republic or into a Mali or into places where lives can be saved.

This is from “Samantha Power on Learning How to Make a Difference,” Conversations with Tyler, Mercatus Center, September 11.

The quote reminded me of the great quote from Adam Smith about “The man of system.” In The Theory of Moral Sentiments, Smith wrote:

The man of system, on the contrary, is apt to be very wise in his own conceit; and is often so enamoured with the supposed beauty of his own ideal plan of government, that he cannot suffer the smallest deviation from any part of it. He goes on to establish it completely and in all its parts, without any regard either to the great interests, or to the strong prejudices which may oppose it. He seems to imagine that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board. He does not consider that the pieces upon the chess-board have no other principle of motion besides that which the hand impresses upon them; but that, in the great chess-board of human society, every single piece has a principle of motion of its own, altogether different from that which the legislature might chuse to impress upon it. If those two principles coincide and act in the same direction, the game of human society will go on easily and harmoniously, and is very likely to be happy and successful. If they are opposite or different, the game will go on miserably, and the society must be at all times in the highest degree of disorder.

Samantha Power is one of many “women of system.”

In that whole long interview, Tyler Cowen asks her not a single question about her strong advocacy of intervening in Libya when she was a high-level employee of President Obama’s National Security Council. That intervention was the major foreign policy disaster of the Obama administration, one that she, along with Secretary of State Hillary Clinton and then U.S. ambassador to the United Nations Susan Rice, had a role in pushing for.

I don’t recommend the interview; it’s too much of a puff piece. I do recommend Matt Welch’s much more hard-hitting review of Samantha Power’s book The Education of an Idealist. His review is aptly titled “The Corruptions of Power.”


EconLog September 15, 2019

Can innovation be sped up?, by Scott Sumner

One would think that there are public policies that would substantially accelerate the pace of innovation. I’m not so sure. I’ll address this issue in a roundabout fashion, starting with a discussion of innovation in some seemingly unrelated fields.

In 1969, we had just landed on the moon, the Boeing 747 was carrying 400 passengers at 600 mph, and cars sped down expressways at 70 mph. President Nixon would soon launch the war on cancer. Given the incredible progress in medicine and transport over the previous 50 years, people expected fantastic gains in manned space travel, air and ground transport, as well as a cure for cancer. Certainly within the next 50 years.  I was also guilty of excessive optimism, as several supersonic commercial airplanes were being developed at this time.

Of course, those dreams did not pan out. We fly and drive more slowly than in 1969. (Where’s my flying car?) There was impressive innovation, but mostly in areas that people didn’t expect, such as the internet.

Tyler Cowen recently linked to a post by Evert Cilliers, listing what he thought were the 10 best songwriters of all time. Five were born between 1885 and 1902, and the other five between 1940 and 1943. The reputation of the latter group rests almost entirely on their work from the 1960s. I don’t think many people in 1969 anticipated that the golden age of songwriting was over. (I didn’t.) And the golden age of cinema seemed to end around 1980 (except in Asia).

At this point, there are often two different responses:

  1. The fact that art goes in cycles of creativity shows that it is not a progressive field like physics. The scientists of today are better than the scientists of the past, whereas today’s artists are inferior.

  2. It’s just nostalgia to assume the best artists were in the past. Today’s scientists are better, our chess players are better, our athletes are better, so our artists and musicians must also be better than ever.

I’ll try to show that both views are wrong.

Let’s start with the first argument, which compares apples and oranges. It’s true that modern scientists know more than earlier scientists. But it’s also true that the theoretical physicists of the period from 1900 to 1969 were more creative and more productive than those of the period since the “standard model” was developed. (In 1969, people didn’t know that the golden age of physics was over.) In the same way, art experts today know more about art than art experts 100 years ago, but that doesn’t mean that today’s painters are more creative than those of 100 years ago.

It seems that in each field there are certain periods that are more “fertile,” periods when it’s easier to be a great innovator. If Thomas Edison had been born in 1347 instead of 1847, it’s unlikely he would have invented lots of neat consumer products. But that’s also true if he had been born in 1947. Is it a coincidence that 5 great songwriters were born between 1940 and 1943? I’ll try to show that it was not.

The second argument makes some sense.  Nostalgic people often claim that the athletes of the past were greater than today’s athletes, a claim that’s almost certainly false.  Nonetheless, it is very likely true that the arts really do have periods of creativity and decline, and this isn’t just baby boomer nostalgia.

Consider European painting between 1625 and 1725.  Those familiar with art history know that this 100-year period was extremely uneven.  During 1625-75, great artists such as Rubens, Velasquez, Vermeer, Rembrandt and Poussin produced many, many astounding masterpieces, perhaps the greatest flowering of the European visual arts.  And then . . . almost nothing of significance for 50 years.  This is far enough back that history’s verdict is likely to be final.

Nor can this 1625-75 golden age be attributed to coincidence. While I cited only 5 painters, the next tier was also far superior to that of 1675-1725. Behind Vermeer and Rembrandt, you had many other Dutch painters such as Frans Hals, Jacob Ruisdael, Pieter de Hooch, Gerard ter Borch and Carel Fabritius (a film was just made of one of his paintings). It’s the second tier that confirms it really was a fertile period for innovation in art, not just a coincidence. Of course, the same could be said about painting in the Renaissance, or the period from 1865-1920 (after which cinema took over). I’m no expert on music, but doesn’t the period from Bach to Beethoven also qualify?

There’s no point in discovering the Theory of Relativity twice, inventing the light bulb twice, writing another Beethoven-style symphony, directing another Godfather, or painting another Vermeer.  To get innovation, we need a development that opens up new opportunities.  The computer chip obviously qualifies.  Basic research in reading the human genome has allowed recent discoveries regarding the genetic links of various populations around the world, reinvigorating ancient history.

Can government speed up innovation in music, art, technology and basic physics?  For technology, some cite the government’s role in creating the internet.  But wouldn’t the private sector have tried to link up computers during the 1990s, once the PC revolution was in full swing?  If Edison had not invented the light bulb, wouldn’t someone else have done so at roughly the same time?  Indeed, someone else did so!  The US government tried to invent the airplane and failed.  But it didn’t matter because the Wright brothers succeeded, and did so privately.

If industrial policy can do anything useful, it would be to create the fundamental scientific and artistic discoveries that lead to many smaller discoveries.  But can it?  I’m skeptical that it can do very much.  In the arts, pop music was reinvigorated by hip hop once rock music had stagnated.  Does anyone think a government program could have invented rap music? Can government produce the next Einstein in physics?  I’m not saying there is nothing they could do (and they can certainly discourage innovation), but it remains to be proven that government can play a major role in producing fundamental innovation.

Innovation seems to occur when the time is ripe.  Since I’ve talked a lot about music, I’ll end with a great song from the 1960s: “You Can’t Hurry Love”.

Or innovation.

Here’s the Fabritius painting that led to the novel and the film.

And remember that Fabritius was a “minor” artist!

PS. The Cilliers article actually focuses on the top 8 songwriters, but later indicates that Jagger and Richards are #9 and #10, and well above the next tier.

PPS. It may well be true that today’s music and films are, on average, made with more skill and “craftsmanship,” by more talented people. But they are less innovative, less “great.”


EconLog September 13, 2019

Prohibitions: Frontiers of liberty & markets, by John Alcorn

For the next ten weeks, I will write a series of posts about prohibitions: frontiers of liberty and markets. This will be a topic discussion club, inspired by Bryan Caplan’s recent book club about the persistence of poverty. However, we won’t focus on one book. I will provide links here and there to short, ungated readings about facets of our general topic.

 

Let’s keep in mind two overarching questions: Why are some behaviors, which don’t intrinsically involve force or fraud, nonetheless outlawed?  And should they be?

 

We will tackle, in turn, prohibitions in the following areas:

  • Lifestyles & liberties:

—marriage of more than two persons

—guns

—drugs

  • Markets:

—sex

—kidneys for transplantation

—adoption

  • Information:

—blackmail

—prediction markets

—advertising

  • International migration

 

Before we get to specific prohibitions, we’ll touch on two foundations: (a) the general institutional framework for policy-making and (b) several standard rationales for prohibitions.

 

For our purposes, the institutional framework is constitutional democracy and capitalism.  I won’t get into prohibitions in the ancien régime, theocracies, totalitarian states, or anarchy.

 

My next post will be about constitutional democracy. If you would like a background reading, I recommend Bryan Caplan, “Majorities against Utility.”

 

 


John Alcorn is Principal Lecturer in Formal Organizations, Shelby Cullom Davis Endowment, Trinity College, Connecticut. He received his Ph.D. in History from Columbia University, with a dissertation about Social Strife in Sicily 1892-94: The Rise and Fall of Peasant Leagues on the Latifondo before the Great Emigration. Scruples about principles of historical inquiry and a stint teaching in Columbia’s ‘great books’ core curriculum led him to explore methodological individualism and the social sciences. As in the Dry Bones song, a concatenation of authors—Jon Elster, Diego Gambetta, Thomas C. Schelling, Robert Sugden, David Friedman, and Michael Munger—eventually brought him to discover EconTalk and EconLog. Along the way, research about the Sicilian mafia kindled a broader research interest in illegal behaviors and markets. He teaches a seminar about prohibitions and is writing a debate-format handbook on the topic.


EconLog September 13, 2019

The unsung success of Japan’s recent fiscal policy, by Scott Sumner

The media tends to dwell on bad news. Even when something is a smashing success, say Germany’s 2004 labor market reforms, the reporting is relentlessly downbeat. The same is true of Japan’s recent fiscal policy, which has finally brought the national debt under control. The debt-to-GDP ratio has leveled off at roughly 240% since the 2014 tax increase:

Better yet, this fiscal austerity was associated with an extremely strong labor market, not at all the “disaster” predicted by Keynesian economists:

But that doesn’t stop the media from continuing to insist that the fiscal austerity was a failure. Here’s the Financial Times, discussing the planned October increase in Japan’s national sales tax, from 8% to 10%:

After the disastrous economic impact in 2014, when the tax went up from 5 to 8 per cent, the government has prepared a series of countermeasures.

EconLog September 12, 2019

Michael Grossberg on Libertarian Futurism and Pretty Much Everything, by David Henderson

I’m not big on libertarian futurism or on science fiction. I’m not bragging. On the contrary, I think it’s due to my lack of imagination.

So when I recommend an interview on the Libertarian Futurist Society’s (LFS) Prometheus Blog, you can be fairly confident that it would interest not just libertarian futurists but also libertarians and maybe even an audience broader than that.

I highly recommend the blog’s interview with LFS founder Michael Grossberg. (I knew Michael briefly in the early to mid-1980s and we lost touch.) It covers lots of ground. Some highlights follow.

On current Democratic presidential candidate Marianne Williamson:

Yes, but back then Marianne was an aspiring actress, very talented and stylish, and a very smart student, in several of my advanced classes, including English. I cast Marianne and directed her in Love Street, a short psychedelic film I conceived about an LSD trip gone wrong during a high-school date. Her performance was excellent.

EconLog September 12, 2019

You’re All A Bunch of Socialists, by Bryan Caplan

A fun figure from Tetlock et al.’s “The Psychology of the Unthinkable.” Possible level of outrage ranges from 1 to 7, with 7 being the highest.

Background:

Participants were told that the goal of the study was to explore the attitudes that Americans have about what people should be allowed to buy and sell in competitive market transactions:

Imagine that you had the power to judge the permissibility and morality of each transaction listed below. Would you allow people to enter into certain types of deals? Do you morally approve or disapprove of those deals? And what emotional reactions, if any, do these proposals trigger in you?

EconLog September 11, 2019

9/11, Six Years Later, by David Henderson

 

George W. Bush has provided, and is providing, much of that pavement.

As you know if you’ve been paying attention to the news, this is the 18th anniversary of the murders of almost 3,000 people in Washington, D.C., New York City, and Pennsylvania.

I wrote a piece for antiwar.com 6 years after the event and, since I own the copyright, I’m reprinting it here:

9/11, Six Years Later, by David R. Henderson. Originally posted at Antiwar.com, September 10, 2007: https://original.antiwar.com/henderson/2007/09/10/911-six-years-later/

Introduction

It has been six years since the horrid attacks of Sept. 11, 2001. Throughout that time, George W. Bush has been president of the United States and, especially given the emphasis he has put on 9/11, it’s fair to judge him on how well he’s done on policies aimed at responding to 9/11. I will leave out, except tangentially, a judgment on his policies on civil liberties, not because I don’t have one, but because I want to focus on foreign policy.

So, how has George W. Bush done? In a word, badly. I’m not challenging his intentions. I believe Bush thought, like almost all Americans and, indeed, like a supermajority of the world’s population, that the attacks were horrific and unjustified. I also believe that he wanted to respond appropriately to the attacks. But I’m judging his responses. As one of my mentors, the late Milton Friedman, loved to say, “The road to hell is paved with good intentions.” George W. Bush has provided, and is providing, much of that pavement.

The Mistake on the First Day

As my co-author, Charles Hooper, and I put it in our book, Making Great Decisions in Business and Life, “When designing a building, the saying goes, all the really important mistakes are made on the first day.” The reason is simple. On the first day, you make decisions that affect every other decision after that. If you choose the wrong place to locate the building, for example, everything you do to build it will compound that mistake. This principle applies more generally. On any project or undertaking, the biggest mistakes will be made on the first day. On the first day, you choose a direction and, if you’ve chosen badly, the longer you move in that direction, the further you deviate from the right outcome.

George W. Bush made a big mistake on literally the first day. He actually started well. When he sat in that Florida classroom and chewed his cheek as he pondered the news he had just heard, I thought, “Good for him; he’s thinking about what he should do rather than acting impulsively.” But later that day he said, “Freedom itself was attacked this morning by a faceless coward … and freedom will be defended.” In other words, George W. Bush thought that the reason the United States was attacked was that the attackers hated our freedom. How did he reach this conclusion? He never told us. He made his statement within hours of the attack and never gave any evidence for this view. As far as we can tell, he never questioned this view, never sought evidence that might have challenged it, and acted on the assumption that this view was correct.

Notice the importance of his assumption. If we were attacked because of our freedom, then there are only two options if you want to avoid future attacks: (1) get rid of our freedom, or (2) attack those who want to plan future attacks. But if we were attacked for some other reason, then it’s important to understand this other reason. What could another reason be?

Believe it or not, the Department of Defense, in 1997, had identified another reason that had nothing to do with “our freedom” but a lot to do with the U.S. government’s interventions abroad. The Defense Science Board’s 1997 Summer Study Task Force on DoD Responses to Transnational Threats had written, four years before the attacks:

“As part of its global power position, the United States is called upon frequently to respond to international causes and deploy forces around the world. America’s position in the world invites attack simply because of its presence. Historical data show a strong correlation between U.S. involvement in international situations and an increase in terrorist attacks against the United States.” (Washington: U.S. Department of Defense, October 1997, vol. 1, Final Report, p. 15)

I had first seen this report referenced in a Cato Institute piece by Ivan Eland and had found it, and Eland’s piece, persuasive enough that I had put them on a reading list for a policy analysis class that I taught military officers at the Naval Postgraduate School. With the exception of the opening statement that “the United States is called upon,” as if the U.S. government doesn’t choose to get involved in other countries’ affairs, the Defense Science Board was making a good point.

I had made the same point in a talk in December 1996 to the U.S. Navy’s Strategic Studies Group when I commented on a paper by my Hoover colleague Henry Rowen, a former president of the RAND Corporation. I pointed out that Rowen had taken terrorism as a given, but that one should take a step back and ask why terrorism exists. I said:

“What leads the Irish Republican Army to put bombs in Britain? Why don’t they, for example, put bombs in Canada or Bangladesh? To ask the question is to answer it. They place the bomb where they think it will help influence the government that makes decisions most directly in the way of their goals, and the governments in the way of their goals are usually governments that intervene in their affairs.”

Then I concluded, “If you want to avoid acts of terrorism carried out against people in your country, avoid getting involved in the affairs of other countries.”

How does one choose between the competing explanations, George W. Bush’s explanation that the terrorists hated us for our freedom and my explanation that the terrorists were responding to U.S. government interventions in their countries? There are two ways to do so. The first is by reasoning. Is it plausible that people in one country would be so upset by the freedom of people in another country that they would be willing to commit suicide to strike those people? On the other hand, is it plausible that people who are outraged at a foreign government’s intervention in their countries, but who lack the military might to confront that government directly, would engage in terrorist methods to go after people in that foreign country? Which is more plausible? A good way to reason it out is to use the “put yourself in the other person’s shoes” approach. Since it’s hard to think of a country that is much freer than ours (although George W. Bush and most of the Republicans and Democrats in Congress are working on making ours less free), imagine the opposite. Imagine that a foreign government oppresses its people horribly and that you are someone who believes in freedom. How likely are you to want to be a suicide bomber in that country? Now imagine that a foreign government intervenes in our affairs, spending money to stir up opposition to our government, or even sets up military bases in our country. Imagine also that this foreign government has a great deal of military power and that the U.S. government has little. (You’ll have to use your imagination here.) This means that you can’t confront that government by normal military means. Did your probability of being a suicide bomber within that country’s borders increase? I would bet that for most readers it did.

The second way to judge the competing explanations is to look at evidence. Are the terrorists going around bombing other relatively free countries that don’t intervene in their affairs? If the answer is no, that suggests that freedom is not the crucial variable.

Yet, with the enormous stakes involved in getting the answer right, George W. Bush does not appear to have put much effort at all into thinking about it. Many people accuse George W. Bush of being a stupid man. But there’s a more plausible, and, indeed, more damning, explanation of his decisions: he is a relatively smart man but not a curious man. Call him Uncurious George.

And it wasn’t as if George W. Bush didn’t have time to think through these things. There was no timetable telling him that he had to act within x number of days. It’s true that many Americans, in their understandable anguish, wanted him to bomb someone. But so what? Given the stakes involved, it was worth thinking carefully at the start. And Bush does not appear to have done that. One piece of evidence, which is out in the open and was at the time, is that the weekend after the Tuesday terrorist attacks, Bush and his advisers were at Camp David putting together a strategy for attacking Afghanistan. [DRH update: Check out former Bush adviser Condoleezza Rice saying on the Colbert Report this week that the Saturday morning after the Tuesday attacks, they were sitting at Camp David planning the invasion of Afghanistan. She says that “we knew we had to do it.” No, they didn’t. There was no have to.]

Why Afghanistan? Everyone knows, right? Because Osama bin Laden was thought to be the planner of the 9/11 attacks, and he was thought to be in Afghanistan, a country run by the Taliban. But notice the slip in thinking. Even if we take as given all the facts so far in this paragraph, does it follow that one would want to make war on a government that harbors bin Laden? Wouldn’t it have made more sense to have pressured the Taliban to give up bin Laden? Of course, the White House tried to do that, but it hit a hurdle. The Taliban actually insisted, before putting its own forces at risk to try to capture bin Laden, that the U.S. government produce evidence that bin Laden was behind the attacks. Of all the nerve! Actually insisting on evidence! But the evidence was not forthcoming. Bush never presented it to them. Nor has he yet, as far as I can tell, presented it to us.

Consider the following dialogue between NBC interviewer Tim Russert and U.S. Secretary of State Colin Powell (from Meet the Press, Sept. 23, 2001):

Russert: Are you absolutely convinced that Osama bin Laden was responsible for this attack?

Secretary Powell: I am absolutely convinced that the al-Qaeda network, which he heads, was responsible for this attack. You know, it’s sort of al-Qaeda – the Arab name is for “the base.” It’s something like a holding company of terrorist organizations that are located in dozens of countries around the world, sometimes tightly controlled, sometimes loosely controlled. And at the head of that organization is Osama bin Laden.

So what we have to do in the first phase of this campaign is to go after al-Qaeda and to go after Osama bin Laden. But it is not just a problem in Afghanistan. It’s a problem throughout the world. That’s why we are attacking it with a worldwide coalition.

Russert: Will you release publicly a white paper which links him and his organization to this attack to put people at ease?

Secretary Powell: We are hard at work bringing all the information together, intelligence information, law enforcement information. And I think in the near future we will be able to put out a paper, a document that will describe quite clearly the evidence that we have linking him to this attack. But also, remember, he has been linked to earlier attacks against U.S. interests, and he’s already indicted for earlier attacks against United States.

But the very next day, Sept. 24, 2001, Ari Fleischer, President Bush’s press secretary, backtracked. The following conversation took place at Fleischer’s press briefing:

Question: Ari, yesterday Secretary Powell was very precise that he was going to put out a report on what we had on bin Laden that could be reported and not classified. Today, the president shot him down, and he’s been shot down many, many times by the administration [inaudible] indicating that he also retreated on the question of putting out a report. No, I’m wrong?

Fleischer: No, I think that there was just a misinterpretation of the exact words the secretary used on the Sunday shows and the secretary talked about that in a period of time – I think his word was “soon” – there would be some type of document that could be made available.

As you heard the secretary say today, he said, “As we are able, as it unclassifies,” and…

Question: Much more emphatic yesterday at the…

Fleischer: Well, I think he said the word “soon.”

As I was reminded today by a very knowledgeable official at the State Department, that’s called State Department soon. And so it’s fully consistent with what the president’s been saying and the secretary said.

You know, I mean, look, it shouldn’t surprise anybody, as soon as…

[Crosstalk]

Question: The American people thought soon meant soon.

Question: But is this a sign, Ari, that…

[Crosstalk]

Fleischer: Let me – I was getting there. I was answering.

What I was saying is it shouldn’t surprise anybody that as soon as the attack on our country took place the immediate reaction is that investigations begin. They begin with the intelligence agencies, they begin with domestic agencies, they begin with the regulator law enforcement authorities, and they start to collect a whole series of information.

Some of that information is going to end up in the form of grand jury information, which of course is subject to secrecy laws. Others, coming from intelligence services, is by definition going to be classified and will treated as such.

Over the course of time, will there be changes to that that can lead to some type of declassified document over whatever period of time? That has historically been the pattern, and I think that’s what the secretary is referring to.

Unless I’ve missed something, six years later we’re still waiting for the White House to release that information. Is it just possible that President Bush and those who work for him were never able to find the evidence?

Moreover, let’s say that George W. Bush had had evidence that Osama bin Laden was behind the attacks. Would that have justified attacking a whole country, with thousands of innocent people killed, just because that country’s government had refused to turn him over? Before you answer “yes,” consider one implication of a “yes” answer. For years now, the U.S. government has allowed Luis Posada Carriles to live in the United States and has refused to extradite him to Venezuela. Carriles is suspected of having bombed a Cuban airliner in 1976, a bombing that killed 73 people. So here’s my question: would you support a decision by Venezuela’s Hugo Chavez to attack the United States? If you wouldn’t, why wouldn’t you? And, more important, if you wouldn’t, how could you support a decision by George W. Bush to attack Afghanistan?

That’s the philosophical case against George W. Bush’s attack on Afghanistan. Now to the practical case: it hasn’t worked. After all of George W. Bush’s bluster about wanting Osama bin Laden dead or alive, he still has not delivered. But it’s even worse than that. Bush let himself be sidetracked by deciding to invade a country that had no apparent connection, and that he himself agreed had no connection, with the attacks on 9/11. That country, of course, is Iraq.

Bait and Switch

Despite George W. Bush’s apparently sincere concern about the victims of 9/11, he got sidetracked four and a half months later. In his Jan. 29, 2002, State of the Union speech, Bush talked about the “axis of evil.” This axis, he claimed, consisted of North Korea, Iraq, and Iran. There are so many things that are screwy about Bush’s claim. First, Iraq and Iran had been mortal enemies since at least the start of the Iran-Iraq war in 1980, which took about 300,000 Iranian lives and between 160,000 and 240,000 Iraqi lives. Claiming that there was an axis between these two countries was like claiming that the Hatfields and McCoys were part of an alliance. Moreover, there was no alliance between either North Korea and Iran or North Korea and Iraq. In short, while I can certainly accept that the governments of these countries were evil, especially the government of North Korea, there was no axis.

Second, despite the fact that he still hadn’t caught bin Laden, Bush talked as if he wanted to settle scores, real or imagined, with the governments of these three countries. This, even though he had no basis, and admitted he had no basis, for thinking there was any connection between those governments and 9/11. In other words, George W. Bush cynically engaged in a gigantic bait and switch. The bait was that he was going to talk about 9/11 and what to do about it. The switch was that he tried to stir the pot to get Americans upset at the governments of three countries, none of which had anything to do with 9/11.

I could discuss the Iraq war further, but it doesn’t fit in a discussion of 9/11. It simply has nothing to do with 9/11, as virtually everyone, hawk and dove, admits. But it certainly has drawn resources away from the search for bin Laden.

The bottom line is that, if we judge George W. Bush’s actions in responding to 9/11, he has failed miserably.

Intentions vs. Results

One of the most important things I learned from Milton Friedman and, indeed, from economics in general is that it is crucial to distinguish between intentions and results. Most of us economists, for example, don’t doubt the good intentions of many people who advocate increases in the minimum wage. But we insist on considering the consequences of such increases and putting much less weight on the wishes of those who advocate them. It would be nice if the world were so simple that increasing the minimum wage simply created better-paying jobs rather than destroying low-paying jobs for unskilled workers. But the fact that it would be nice is irrelevant. What matters are the actual consequences, and these are, on net, negative.

But few of my fellow economists, especially the free-market ones, have been willing to apply this distinction between intentions and results to foreign policy. Unlike many of my antiwar allies, especially those on the Left, I have little doubt that the world George W. Bush would like to see is somewhat similar to the world I would like to see. He would like the world to be full of vibrant, freedom-loving countries whose governments do not oppress them. But so what? We still need to judge him by his actions. His actions have not created such freedom-loving countries and have not even achieved the specific goal of capturing or killing bin Laden. At the same time, George W. Bush has become the oppressor-in-chief of Americans and of many others elsewhere in the world. To put it into perspective, think of the damage done by the minimum wage, multiply it by 1,000 times, and you will still probably underestimate the damage done to the world by George W. Bush’s foreign policies and anti-civil-liberties domestic policy. I condemn presidents who destroy our domestic economic freedom. I don’t draw a line, as the old cliché goes, at the water’s edge. Nor do I put a zero weight on civil liberties. Which is why I condemn President George W. Bush. His actions since Sept. 11, 2001, have been, with few exceptions, a disgrace.

Copyright 2007 by David R. Henderson. Requests for permission to reprint should be directed to the author or Antiwar.com.


EconLog September 11, 2019

Monopolize the Pretty Lies, by Bryan Caplan

Why do dictators deny people the right to speak freely?  The obvious response is, “The truth hurts.”  Dictators are bad, so if people can freely speak the truth, they will say bad things about the dictator.  This simultaneously wounds dictators’ pride and threatens their power, so dictators declare war on the truth.

But is this story right?  Consider: If you want to bring an incumbent dictator down, do you really want to be hamstrung by the truth?  It’s far easier – and more crowd-pleasing – to respond to a pack of official lies with your own pack of lies.  When the dictator claims, “I’ve made this the greatest country on earth,” you could modestly respond, “Face facts: we’re only 87th.”  Yet if it’s power you seek, you might as well lie back, “The dictator has destroyed our country – but this will be the greatest country on earth if we gain power.” Even more obviously, if the current dictator claims the sanction of God, the opposition doesn’t want to shrug, “Highly improbable.  How do you even know God exists?”  Instead, the opposition wants to roar, “No, God is on our side.  Our side!”

What then is the primary purpose of censorship?  It’s not to suppress the truth – which has little mass appeal anyway.  The primary purpose of censorship is to monopolize the pretty lies.  Only the powers-that-be can freely make absurdly self-aggrandizing claims.  Depending on the severity of the despotism, you may not have to echo the official lies.  But if you publicly defend alternative absurdly self-aggrandizing claims, the powers-that-be will crush you.

Why, though, do dictators so eagerly seek to monopolize the pretty lies?  In order to take full advantage of their subjects’ Social Desirability Bias.  Human beings like to say – and think – whatever superficially sounds good.  Strict censorship allows rulers to exploit this deep mental flaw.  If no one else can make absurd lies, a trite slogan like, “Let’s unite to fight for a fantastic future!” carries great force.  Truthful critics would have to make crowd-displeasing objections like, “Maybe competition will bring us a brighter future than unity,” “Who exactly are we fighting?,” or “Precisely how fantastic of a future are we talking about?”  A rather flaccid bid for power!  Existing rulers tremble far more when rebels bellow, “Join us to fight for a fantastic future!”

George Orwell has been a huge influence on me.  When you read his political novels, you often get the feeling that dictators fear the truth above all.  If only Winston Smith could take over the Ministry of Truth and tell all Oceania that it needlessly lives in poverty and fear.  In the broad scheme of things, however, unvarnished truth is only a minor threat to tyranny.  After all, rulers could respond to ironclad fact with a pile of demagoguery: “Smith is slandering our great country!”  “He’s a willing tool of Eurasia!”  Or even, “We’re not rich because the greatest country in the world is too proud to sell itself.”  The real threat to the regime would be a rival set of demagogues offering Utopia after a brief bloodbath sends a few wicked, treasonous leaders straight to the hell that they so richly deserve.

Doesn’t this imply that free speech is overrated?  Yes; I’ve said so before.  While I’d like to believe that free speech leads naturally to the triumph of truth, I see little sign of this.  Instead, politics looks to me like a Great Liars’ War.  Viable politicians defy literal truth in virtually every sentence.  They defy it with hyperbole.  They defy it with overconfidence.  They defy it with wishful thinking.  Dictators try to make One Big Political Lie mandatory.  Free speech lets a Thousand Political Lies Bloom.

Yes, freedom of speech lets me make these dour observations without fear.  I’m grateful for that.  Yet outside my Bubble, dour observations fall on deaf ears.  Psychologically normal humans crave pretty lies, so the Great Liars’ War never ends.


EconLog September 11, 2019

Power to the Children and Hail to the State!, by Pierre Lemieux

Three dangerous aspects of the relationship between children and political power are worth noting. First, children have been getting more and more political influence—that is, influence on making other people move under the threat of government guns. Greta Thunberg is their current face. The 16-year-old Swedish girl is waging an international campaign for environmental control that led her to sail (literally) to the shores of America without the carbon footprint of flying. She was already demonstrating when she was 15. In America and elsewhere, high school students regularly participate in demonstrations in order to try to impose their preferred political solutions on others—including on their parents who pay for their schooling. Parents bring their toddlers to demonstrations.

Wikipedia reports (and it’s worth reading the whole report, especially if you think that I am exaggerating):

Thunberg says she first heard about climate change in 2011, when she was 8 years old, and could not understand why so little was being done about it.

Incidentally, I am not focussing on the environmental issues. Other children demonstrate for other “social justice” causes. On the climate issue, I am rather agnostic, literally, in the sense of “a-gnostic”: I don’t know. But I do know the fantasies that have been peddled before about the environment (see my “Running Out of Everything,” a review of Paul Sabin’s book The Bet); and I observe that the environmentalist agenda is mainly about  increasing state control and power.

It is not the least absurdity of the whole situation that the demonstrating children are considered, by their adulated state itself, fragile snowflakes to be protected from alcohol, tobacco, vaping, ideas, and life in general. (They may buy ice cream, though.) They are also political hostages of faddish causes. “The children” are constantly invoked to justify depriving adults of their liberties, whether it is free speech, the Second Amendment, or smoking what adults want to smoke. This exploitation of children for political causes is a second aspect of the relation between children and political power.

I touched on this point in a recent Reason Foundation paper, “Consumer Surplus in the FDA’s Tobacco Regulations.” In officialdom’s Newspeak, tobacco includes vaping products containing no tobacco. (Perhaps “tobacco” is just a synonym for “evil”?) The Food and Drug Administration is fighting e-cigarettes under the excuse that some teenagers are taking advantage of the freedom of adults to vape. I criticized the FDA’s abuse of children:

“No child should use any tobacco products, including e-cigarettes,” [former FDA Commissioner Scott] Gottlieb declared. But it is as true, and quite certainly truer, that no “child” should take drugs, or have sex, or commit suicide or mass murder (according to FBI statistics, 7 of the 50 mass shooters in 2016-2017 were in their teens). Whatever the root causes of these problems, it is doubtful that the FDA or other federal agencies can solve them by further prohibitions. Many of the bad behaviors of teenagers are already forbidden by law.

The FDA calls indistinctly “youth,” “adolescent,” “child” or “kid” anybody from 12 through 17, but this group is not homogeneous. A 17-year-old, who can enroll in the army with his parents’ permission and is on the verge of having the right to vote (18 years of age at the federal level) and to reach the age of majority in many states, is certainly different from a 12-year-old child. Those whom the FDA considers “kids” can often be held criminally responsible for their actions. In many states, they can marry and in most states, they can be licensed to drive unsupervised from age 16 (the highest threshold is 17; the lowest, 14 and a half). The three-page statement issued by the FDA commissioner with the Advance Notice of Proposed Rulemaking (ANPRM) on flavors [in tobacco and e-cigarettes] contains the word “kids” seven times.

My paper further notes:

The ANPRM on flavors comes close to assimilating young adults (18 to 24 years old) to adolescents (who are also called “youth,” “children,” or “kids”).

In fashionable political discourse, then, children are presented either as role models to justify future tyranny or as little parentless incompetents, depending on how exactly their exploitation is required to advance state power. Granted, some statocrats (politicians and bureaucrats) and public-health or environmental crusaders might be consumed by good intentions, but as the German poet Johann Christian Hölderlin wrote (quoted in Friedrich Hayek’s The Road to Serfdom),

What has always made the state a hell on earth has been precisely that man has tried to make it his heaven.

A third relation between children and political power is different but no less troubling: adults themselves act like children when they beg the state to take care of them and restrict their freedom. In a 2005 Public Choice article, “Afraid to Be Free: Dependency as Desideratum,” James Buchanan (the famous Nobel prizewinning economist and political philosopher) called this phenomenon “parentalism,” the desire of citizens to be to the state what children are to their parents. He observed, disturbingly:

The thirst or desire for freedom, and responsibility, is perhaps not nearly so universal as so many post-Enlightenment philosophers have assumed.

Children, by definition, are not capable of freedom and responsibility, which is why they need parents. But one would expect parents to have and cultivate this capability. A seeming factoid reported in the Wall Street Journal (“FDA Warns Juul About Marketing Products as Safer Than Cigarettes,” September 9, 2019) illustrates how parents become children: a parent complained to the FDA that Juul (the e-cigarette manufacturer) had sold products to her child. In other words, a child complained to political authority that her child was not under her control.

You will also find on Wikipedia that little Greta Thunberg raised her parents well:

At home, Thunberg persuaded her parents to adopt several lifestyle choices to reduce their own carbon footprint, including giving up air travel and not eating meat.

In truth, the case of little Greta is sad if not tragic. She was diagnosed with Asperger syndrome. I know little about Asperger syndrome, but it does not appear to be an unmitigated catastrophe. Yet, perhaps in view of other factors, her father said (“The Swedish 15-year-old Who’s Cutting Class to Fight the Climate Crisis,” The Guardian, September 1, 2018):

We respect that she wants to make a stand. She can either sit at home and be really unhappy, or protest, and be happy.

Like other real children, Greta needs protection and guidance, not a flirtation with political power. Children need to be taught how to become free and responsible adults, not how to remain children. The FDA and other political authorities will teach them the wrong thing. They are too good an excuse for the state to expand its power.


EconLog September 10, 2019

Michael Trump or Donald Dukakis, by David Henderson

 

In researching my article “Donald Trump Vs. Adam Smith,” I came across Michael Dukakis’s speech to Moog Automotive, an auto parts firm near St. Louis. What I found striking is the uncanny resemblance between Dukakis’s and Donald Trump’s views on foreign trade and even on making America great again. Thus the title of this post.

Here’s the segment on C-SPAN. Missouri Congressman Dick Gephardt warms them up with his economic nationalism and then Dukakis continues the message.

Some highlights:

8:20: Gephardt wants an aggressive trade policy.

12:10: Gephardt wants to “Make America #1 Again.”

13:30: Dukakis wants to “Make America #1 Again.”

19:00: Dukakis says the president is commander in chief of the battle for America’s future. (Lawyer Dukakis must have been dozing the day they taught separation of powers in his Con Law class at the Harvard Law School.)

20:10: Dukakis decries the fact that so few consumer electronics are produced in America any more.

20:45: Dukakis decries the shift early in the Reagan administration from a trade surplus to a trade deficit.

21:50: Dukakis decries the fact that foreign owners are buying up America. Here’s where he made his gaffe, not understanding that the people he was talking to worked for a foreign-owned firm.

27:00: Dukakis insists that the U.S. open its borders only to those who open their borders to us.

 


Here are the 10 latest posts from EconTalk.

EconTalk September 9, 2019

Daron Acemoglu on Shared Prosperity and Good Jobs

Economist and author Daron Acemoglu of MIT discusses with EconTalk host Russ Roberts the challenge of shared prosperity and the policies that could bring about a more inclusive economy. Acemoglu argues for the importance of good jobs over redistribution and makes the case for the policies that could lead to jobs and opportunities across skill […]

EconTalk September 2, 2019

David Deppner on Leadership, Confidence, and Humility

Can a great leader or manager be humble in public? Or is exuding confidence, even when it may not be merited, a key part of leadership? In this episode of EconTalk, host Russ Roberts talks with David Deppner, CEO of Psyberware, about an email David sent Russ wondering how Russ might reconcile his passion for […]

EconTalk August 26, 2019

Andrew Roberts on Churchill and the Craft of Biography

Historian Andrew Roberts talks about the life of Winston Churchill and the art of biography with EconTalk host Russ Roberts. How did Churchill deal with the mistakes he inevitably made in a long career? Was he prescient or just the right man in the right place at the right time? Was he an alcoholic? Did […]

EconTalk August 19, 2019

Tyler Cowen on Big Business

Author and economist Tyler Cowen of George Mason University talks about his book, Big Business, with EconTalk host Russ Roberts. Cowen argues that big corporations in America are underrated and under-appreciated. He even defends the financial sector while adding some caveats along the way. This is a lively and contrarian look at a timely issue.

EconTalk August 12, 2019

Arthur Diamond on Openness to Creative Destruction

Arthur Diamond of the University of Nebraska at Omaha talks about his book, Openness to Creative Destruction, with EconTalk host Russ Roberts. Diamond sings the sometimes forgotten virtues of innovation and entrepreneurship and argues that they should be taught more prominently as a central part of economics.

EconTalk August 5, 2019

Andy Matuschak on Books and Learning

Software Engineer Andy Matuschak talks about his essay “Why Books Don’t Work” with EconTalk host Russ Roberts. Matuschak argues that most books rely on transmissionism, the idea that an author can share an idea in print and the reader will absorb it. And yet after reading a non-fiction book, most readers will struggle to remember […]

EconTalk July 29, 2019

Shoshana Zuboff on Surveillance Capitalism

Shoshana Zuboff of Harvard University talks about her book Surveillance Capitalism with EconTalk host Russ Roberts. Zuboff argues that the monetization of search engines and social networks by Google, Facebook, and other large tech firms threatens privacy and democracy.

EconTalk July 22, 2019

Chris Arnade on Dignity

Photographer, author, and former Wall St. trader Chris Arnade talks about his book, Dignity, with EconTalk host Russ Roberts. Arnade quit his Wall Street trading job and criss-crossed America photographing and getting to know the addicted and homeless who struggle to find work and struggle to survive. The conversation centers on what Arnade learned about […]

EconTalk July 15, 2019

Michael Brendan Dougherty on My Father Left Me Ireland

Author and journalist Michael Brendan Dougherty talks about his book My Father Left Me Ireland with EconTalk host Russ Roberts. Dougherty talks about the role of cultural and national roots in our lives and the challenges of cultural freedom in America. What makes us feel part of something? Do you feel American or just someone […]

EconTalk July 8, 2019

Arthur Brooks on Love Your Enemies

Economist and author Arthur Brooks talks about his book Love Your Enemies with EconTalk host Russ Roberts. Brooks argues that contempt is destroying our political conversations and it’s not good for us at the personal level either. Brooks makes the case for humility and tolerance. Along the way he discusses parenting, his past as professional […]

Here are the 10 latest posts from CEE.

CEE July 19, 2019

Richard H. Thaler

Richard H. Thaler won the 2017 Nobel Prize in Economic Science for “his contributions to behavioral economics.”

In most of his work, Thaler has challenged the standard economist’s model of rational human beings.  He showed some of the ways that people systematically depart from rationality and some of the decisions that resulted. He has used these insights to propose ways to help people save, and save more, for retirement. Thaler also advocates something called “libertarian paternalism.”

Economists generally assume that more choices are better than fewer choices. But if that were so, argues Thaler, people would be upset, not happy, when the host at a dinner party removes the pre-dinner bowl of cashews. Yet many of us are happy that it’s gone. Purposely taking away our choice to eat more cashews, he argues, makes up for our lack of self-control. This simple contradiction between the economists’ model of rationality and actual human behavior, plus many more that Thaler has observed, leads him to divide the population into “Econs” and “Humans.” Econs, according to Thaler, are people who are economically rational and fit the model completely. Humans are the vast majority of people.

CEE May 28, 2019

William D. Nordhaus

William D. Nordhaus was co-winner, along with Paul M. Romer, of the 2018 Nobel Prize in Economic Science “for integrating climate change into long-run macroeconomic analysis.”

Starting in the 1970s, Nordhaus constructed increasingly comprehensive models of the interaction between the economy and additions of carbon dioxide to the atmosphere, along with its effects on global warming. Economists use these models, along with assumptions about various magnitudes, to compute the “social cost of carbon” (SCC). The idea is that past a certain point, additions of carbon dioxide to the atmosphere heat the earth and thus create a global negative externality. The SCC is the net cost that using that additional carbon imposes on society. While the warmth has some benefits in, for example, causing longer growing seasons and improving recreational alternatives, it also has costs such as raising ocean levels and making some land uses obsolete. The SCC is the net of these social costs and is measured at the current margin. (The “current margin” language is important because otherwise one can get the wrong impression that any use of carbon is harmful.) Nordhaus and others then use the SCC to recommend taxes on carbon. In 2017, Nordhaus computed the optimal tax to be $31 per ton of carbon dioxide. To put that into perspective, a $31 carbon tax would increase the price of gasoline by about 28 cents per gallon.
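
As a rough check on that conversion, burning one gallon of gasoline emits roughly 8.9 kilograms of carbon dioxide (the EPA’s estimate is about 8,887 grams per gallon; that emissions factor is an outside assumption, not part of this entry). A minimal sketch of the arithmetic, in Python:

    # Rough conversion of a per-ton CO2 tax into a per-gallon gasoline tax.
    # Assumption (not from the entry): burning one gallon of gasoline emits
    # about 8.887 kg of CO2 (EPA estimate); "ton" is read as a metric ton.
    tax_per_ton = 31.00          # dollars per metric ton of CO2 (Nordhaus, 2017)
    co2_kg_per_gallon = 8.887    # kg of CO2 emitted per gallon of gasoline

    tax_per_gallon = tax_per_ton * co2_kg_per_gallon / 1000.0
    print(f"Implied gasoline tax: ${tax_per_gallon:.2f} per gallon")
    # Implied gasoline tax: $0.28 per gallon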

Nordhaus noted, though, that there is a large amount of uncertainty about the optimal tax. For the $31 tax above, the actual optimal tax could be as little as $6 per ton or as much as $93.

Interestingly, according to Nordhaus’s model, setting too high a carbon tax can be worse than setting no carbon tax at all. According to the calibration of Nordhaus’s model in 2007, with no carbon tax and no other government controls, the present value of environmental damage plus abatement costs would be $22.59 trillion (in 2004 dollars). Nordhaus’s optimal carbon tax would have reduced damage but increased abatement costs, for a total of $19.52 trillion, an improvement of only $3.07 trillion. But the cost of a policy to limit the temperature increase to only 1.5°C would have been $37.03 trillion, which is $16.4 trillion more than the cost of the “do nothing” option. Those numbers will be different today, but what is not different is that the cost of doing nothing is substantially below the cost of limiting the temperature increase to only 1.5°C.

One item the Nobel committee did not mention is his demonstration that the price of light has fallen by many orders of magnitude over the last 200 years. He showed that the price of light in 1992, adjusted for inflation, was less than one tenth of one percent of its price in 1800. Failure to take this reduction fully into account, noted Nordhaus, meant that economists had substantially underestimated the real growth rate of the economy and the growth rate of real wages.

Nordhaus also did pathbreaking work on the distribution of gains from innovation. In a 2004 study he wrote:

Only a minuscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.

Nordhaus earned his B.A. degree at Yale University in 1963 and his Ph.D. in economics at MIT in 1967. From 1977 to 1979, he was a member of President Carter’s Council of Economic Advisers.

Selected Works

  1. 1977. “Economic Growth and Climate: The Case of Carbon Dioxide.” American Economic Review, Vol. 67, No. 1, pp. 341-346.

  2. 1996. “Do Real-Output and Real-Wage Measures Capture Reality? The History of Lighting Suggests Not,” in Timothy F. Bresnahan and Robert J. Gordon, editors, The Economics of New Goods. Chicago: University of Chicago Press.

  3. 2000. (with J. Boyer.) Warming the World: Economic Models of Global Warming. Cambridge, MA: MIT Press.

  4. 2004. “Schumpeterian Profits in the American Economy: Theory and Measurement,” NBER Working Paper No. 10433, April.

  5. 2016. “Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies,” NBER Working Paper No. 22933.

CEE May 28, 2019

Paul M. Romer

In 2018, U.S. economist Paul M. Romer was co-recipient, along with William D. Nordhaus, of the Nobel Prize in Economic Science for “integrating technological innovations into long-run macroeconomic analysis.”

Romer developed “endogenous growth theory.” Before his work in the 1980s and early 1990s, the dominant economic model of economic growth was one that MIT economist Robert Solow developed in the 1950s. Even though Solow concluded that technological change was a key driver of economic growth, his own model made technological change exogenous. That is, technological change was not something determined in the model but was an outside factor. Romer made it endogenous.

There are actually two very different phases in Romer’s work on endogenous growth theory. Romer (1986) and Romer (1987) presented an AK model: real output equals A times K, where A is a positive constant and K is the amount of physical capital. The model assumes diminishing marginal returns to each individual firm’s capital, but it also assumes that part of a firm’s investment in capital results in the production of new technology or human capital that, because it is non-rival and non-excludable, generates spillovers (positive externalities) for all firms. Because this technology is embodied in physical capital, as the capital stock (K) grows, there are constant returns to a broader measure of capital that includes the new technology. Modeling growth this way allowed Romer to keep the assumption of perfect competition, so beloved by economists.
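
One minimal way to write the idea down (an illustrative formulation in standard notation, not Romer’s exact setup): each firm i produces with its own capital k_i but benefits from the economy-wide stock K through a spillover,

\[ y_i = A\,k_i^{\alpha} K^{1-\alpha}, \qquad 0 < \alpha < 1, \]

so every firm faces diminishing returns to its own capital. In a symmetric equilibrium with N identical firms, \(k_i = K/N\), and aggregate output is \(Y = A N^{1-\alpha} K\): linear in the aggregate capital stock, which is the “AK” form that sustains growth without exogenous technical change.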

In Romer (1990), Romer rejected his own earlier model. Instead, he assumed that firms are monopolistically competitive. That is, industries are competitive, but many firms within a given industry have market power. Monopolistically competitive firms develop technology that they can exclude others from using. The technology is non-rival; that is, one firm’s use of the technology doesn’t prevent other firms from using it. Because they can exploit their market power by innovating, they have an incentive to innovate. It made sense, therefore, to think carefully about how to structure such incentives.

Consider new drugs. Economists estimate that the cost of successfully developing and bringing a new drug to market is about $2.6 billion. Once the formula is discovered and tested, another firm could copy the invention of the firm that did all the work. If that second firm were allowed to sell the drug, the first firm would probably not do the work in the first place. One solution is patents. A patent gives the inventor a monopoly for a fixed number of years during which it can charge a monopoly price. This monopoly price, earned over years, gives drug companies a strong incentive to innovate.

Another way for new ideas to emerge, notes Romer, is for governments to subsidize research and development.

The idea that technological change is not just an outside factor but itself is determined within the economic system might seem obvious to those who have read the work of Joseph Schumpeter. Why did Romer get a Nobel Prize for his insights? It was because Romer’s model didn’t “blow up.” Previous economists who had tried mathematically to model growth in a Schumpeterian way had failed to come up with models in which the process of growth was bounded.

To his credit, Romer lays out some of his insights on growth in words and very simple math. In the entry on economic growth in The Concise Encyclopedia of Economics, Romer notes the huge difference in long-run well-being that would result from raising the economic growth rate by only a few percentage points. The “rule of 72” says that the length of time over which a magnitude doubles can be computed by dividing the growth rate into 72. It actually should be called the rule of 70, but the math with 72 is slightly easier. So, for example, if an economy grows by 2 percent per year, it will take 36 years for its size to double. But if it grows by 4 percent per year, it will double in 18 years.
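
The rule comes straight from compound-growth arithmetic: an economy growing at annual rate g doubles in t years, where

\[ (1+g)^t = 2 \quad\Longrightarrow\quad t = \frac{\ln 2}{\ln(1+g)} \approx \frac{0.693}{g} \approx \frac{70}{100\,g}, \]

using \(\ln(1+g)\approx g\) for small g. At 2 percent growth the exact doubling time is about 35 years (dividing into 72 gives the 36 above); at 4 percent, about 18.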

Romer warns that policy makers should be careful about using endogenous growth theory to justify government intervention in the economy. In a 1998 interview he stated:

A lot of people see endogenous growth theory as a blanket seal of approval for all of their favourite government interventions, many of which are very wrong-headed. For example, much of the discussion about infrastructure is just wrong. Infrastructure is to a very large extent a traditional physical good and should be provided in the same way that we provide other physical goods, with market incentives and strong property rights. A move towards privatization of infrastructure provision is exactly the right way to go. The government should be much less involved in infrastructure provision.[1]

In the same interview, he stated, “Selecting a few firms and giving them money has obvious problems” and that governments “must keep from taxing income at such high rates that it severely distorts incentives.”

In 2000, Romer introduced Aplia, an online set of problems and answers that economics professors could assign to their students and easily grade. The upside is that students are better prepared for lectures and exams and can engage with their fellow students in online economic experiments. The downside of Aplia, according to some economics professors, is that students get less practice actually drawing demand and supply curves by hand.

In 2009, Romer started advocating “Charter Cities.” His idea was that many people are stuck in countries with bad rules that make wealth creation difficult. If, he argued, an outside government could start a charter city in a country that had bad rules, people in that country could move there. Of course, this would require the cooperation of the country with the bad rules and getting that cooperation is not an easy task. His primary example of such an experiment working is Hong Kong, which was run by the British government until 1997. In a 2009 speech on charter cities, Romer stated, “Britain, through its actions in Hong Kong, did more to reduce world poverty than all the aid programs that we’ve undertaken in the last century.”[2]

Romer earned a B.S. in mathematics in 1977, an M.A. in economics in 1978, and a Ph.D. in economics in 1983, all from the University of Chicago. He also did graduate work at MIT and Queen’s University. He has taught at the University of Rochester, the University of Chicago, UC Berkeley, and Stanford University, and is currently a professor at New York University.

He was chief economist at the World Bank from 2016 to 2018.

[1] “Interview with Paul M. Romer,” in Brian Snowdon and Howard R. Vane, Modern Macroeconomics: Its Origins, Development and Current State, Cheltenham, UK: Edward Elgar, 2005, p. 690.

[2] Paul Romer, “Why the world needs charter cities,” TEDGlobal 2009.

Selected Works

  1. 1986. “Increasing Returns and Long-Run Growth.” Journal of Political Economy, Vol. 94, No. 5, pp. 1002-1037.
  2. 1987. “Growth Based on Increasing Returns Due to Specialization.” American Economic Review, Papers and Proceedings, Vol. 77, No. 2, pp. 56-62.
  3. 1990. “Endogenous Technological Change.” Journal of Political Economy, Vol. 98, No. 5, pp. S71-S102.
  4. 2015. “Mathiness in the Theory of Economic Growth.” American Economic Review, Vol. 105, No. 5, pp. 89-93.

CEE March 13, 2019

Jean Tirole

In 2014, French economist Jean Tirole was awarded the Nobel Prize in Economic Sciences “for his analysis of market power and regulation.” His main research, in which he uses game theory, is in an area of economics called industrial organization. Economists studying industrial organization apply economic analysis to understanding the way firms behave and why certain industries are organized as they are.

From the late 1960s to the early 1980s, economists George Stigler, Harold Demsetz, Sam Peltzman, and Yale Brozen, among others, played a dominant role in the study of industrial organization. Their view was that even though most industries don’t fit the economists’ “perfect competition” model—a model in which no firm has the power to set a price—the real world was full of competition. Firms compete by cutting their prices, by innovating, by advertising, by cutting costs, and by providing service, just to name a few. Their understanding of competition led them to skepticism about much of antitrust law and most government regulation.

In the 1980s, Jean Tirole introduced game theory into the study of industrial organization, also known as IO. The key idea of game theory is that, unlike for price takers, firms with market power take account of how their rivals are likely to react when they change prices or product offerings. Although the earlier-mentioned economists recognized this, they did not rigorously use game theory to spell out some of the implications of this interdependence. Tirole did.

One issue on which Tirole and his co-author Jean-Jacques Laffont focused was “asymmetric information.” A regulator has less information than the firms it regulates. So, if the regulator guesses incorrectly about a regulated firm’s costs, which is highly likely, it could set prices too low or too high. Tirole and Laffont showed that a clever regulator could offset this asymmetry by constructing contracts and letting firms choose which contract to accept. If, for example, some firms can take measures to lower their costs and other firms cannot, the regulator cannot necessarily distinguish between the two types. The regulator, recognizing this fact, may offer the firms either a cost-plus contract or a fixed-price contract. The cost-plus contract will appeal to firms with high costs, while the fixed-price contract will appeal to firms that can lower their costs. In this way, the regulator maintains incentives to keep costs down.
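
The self-selection logic can be made concrete with a toy calculation. Here is a minimal sketch, with invented numbers and payoff functions of my own rather than Laffont and Tirole’s actual model:

```python
# Toy menu-of-contracts (screening) sketch in the spirit of Laffont-Tirole.
# All numbers are invented for illustration.

def payoff_fixed_price(price, cost, effort_cost=0.0):
    # Fixed-price contract: the firm keeps the price minus its realized cost,
    # net of any cost it incurred to exert cost-reducing effort.
    return price - cost - effort_cost

def payoff_cost_plus(fee):
    # Cost-plus contract: all costs are reimbursed; the firm keeps only the fee.
    return fee

FIXED_PRICE = 100.0    # pays 100 regardless of realized cost
COST_PLUS_FEE = 5.0    # reimburses cost and pays a fee of 5

# An "efficient" firm can spend 10 on effort to cut its cost from 120 to 80;
# an "inefficient" firm is stuck at cost 120.
eff_fp = payoff_fixed_price(FIXED_PRICE, cost=80.0, effort_cost=10.0)   # 10
eff_cp = payoff_cost_plus(COST_PLUS_FEE)                                # 5
ineff_fp = payoff_fixed_price(FIXED_PRICE, cost=120.0)                  # -20
ineff_cp = payoff_cost_plus(COST_PLUS_FEE)                              # 5

print("efficient firm chooses:", "fixed-price" if eff_fp > eff_cp else "cost-plus")
print("inefficient firm chooses:", "fixed-price" if ineff_fp > ineff_cp else "cost-plus")
# The efficient type self-selects into the fixed-price contract (and cuts cost);
# the inefficient type takes cost-plus. The regulator never needs to observe
# which firm is which: the menu does the sorting.
```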

Their insights are most directly applicable to government entities, such as the Department of Defense, in their negotiations with firms that provide highly specialized military equipment. Indeed, economist Tyler Cowen has argued that Tirole’s work is about principal-agent theory rather than about reining in big business per se. In the Department of Defense example, the Department is the principal and the defense contractor is the agent.

One of Tirole’s main contributions has been in the area of “two-sided markets.” Consider Google. It can offer its services at one price to users (one side) and offer its services at a different price to advertisers (the other side). The higher the price to users, the fewer users there will be and, therefore, the less money Google will make from advertising. Google has decided to set a zero price to users and charge for advertising. Tirole and co-author Jean-Charles Rochet showed that the decision about profit-maximizing pricing is complicated, and they use substantial math to compute such prices under various theoretical conditions. Although Tirole believes in antitrust laws to limit both monopoly power and the exercise of monopoly power, he argues that regulators must be cautious in bringing the law to bear against firms in two-sided markets. An example of a two-sided market is a manufacturer of videogame consoles. The two sides are game developers and game players. He notes that it is very common for companies in such markets to set low prices on one side of the market and high prices on the other. But, he writes, “A regulator who does not bear in mind the unusual nature of a two-sided market may incorrectly condemn low pricing as predatory or high pricing as excessive, even though these pricing structures are adopted even by the smallest platforms entering the market.”
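
A toy calculation can show why. The demand functions and parameters below are stylized inventions of mine, far simpler than Rochet and Tirole’s analysis, but they reproduce the flavor: the profit-maximizing price on the side that attracts the other side can fall to zero or below cost.

```python
# Two-sided platform pricing sketch with stylized linear demands.
import itertools

def users(p_u):
    # Users respond only to the price they face.
    return max(0.0, 100 - 10 * p_u)

def advertisers(p_a, n_users):
    # Advertiser demand rises with the number of users on the platform.
    return max(0.0, 2 * n_users - 5 * p_a)

def profit(p_u, p_a, cost_per_user=1.0):
    n_u = users(p_u)
    n_a = advertisers(p_a, n_u)
    return (p_u - cost_per_user) * n_u + p_a * n_a

prices = [0.5 * i for i in range(41)]   # grid: 0.0, 0.5, ..., 20.0
best = max(itertools.product(prices, prices), key=lambda pq: profit(*pq))
print("(user price, ad price) =", best, "; profit =", profit(*best))
# With these numbers the platform prices users at 0.0 -- below the 1.0
# per-user cost -- because every extra user raises advertiser demand,
# and the money is made on the advertiser side (ad price 20.0).
```

A regulator looking only at the user side of this toy platform would see below-cost pricing and might call it predatory; looking only at the advertiser side, it would see a high price and might call it excessive. That is exactly Tirole’s warning.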

Tirole has brought the same kind of skepticism to some other related regulatory issues. Many regulators, for example, have advocated government regulation of interchange fees (IFs) in payment card associations such as Visa and MasterCard. But in 2003, Rochet and Tirole wrote that “given the [economics] profession’s current state of knowledge, there is no reason to believe that the IFs chosen by an association are systematically too high or too low, as compared with socially optimal levels.”

After winning the Nobel Prize, Tirole wrote a book for a popular audience, Economics for the Common Good. In it, he applied economics to a wide range of policy issues, laying out, among other things, the advantages of free trade for most residents of a given country and why much legislation and regulation causes negative unintended consequences.

Like most economists, Tirole favors free trade. In Economics for the Common Good, he noted that French consumers gain from freer trade in two ways. First, free trade exposes French monopolies and oligopolies to competition. He argued that two major French auto companies, Renault and Peugeot-Citroen, “sharply increased their efficiency” in response to car imports from Japan. Second, free trade gives consumers access to cheaper goods from low-wage countries.

In that same book, Tirole considered the unintended consequences of a hypothetical, but realistic, case in which a non-governmental organization, wanting to discourage killing elephants for their tusks, “confiscates ivory from traffickers.” In this hypothetical example, the organization can destroy the ivory or sell it. Destroying the ivory, he reasoned, would drive up the price. The higher price could cause poachers to kill more elephants. Another example he gave is of the perverse effects of price ceilings. Not only do they cause shortages, but also, as a result of these shortages, people line up and waste time in queues. Their time spent in queues wipes out the financial gain to consumers from the lower price, while also hurting the suppliers. No one wins and wealth is destroyed.

Also in that book, Tirole criticized the French government’s labor policies, which make it difficult for employers to fire people. He noted that this difficulty makes employers less likely to hire people in the first place. As a result, the unemployment rate in France was above 7 percent for over 30 years. The effect on young people has been particularly pernicious. When he wrote this book, the unemployment rate for French residents between 15 and 24 years old was 24 percent, and only 28.6 percent of those in that age group had jobs. This employment rate was much lower than the OECD average of 39.6 percent, Germany’s 46.8 percent, and the Netherlands’ 62.3 percent.

One unintended, but predictable, consequence of government regulations of firms, which Tirole pointed out in Economics for the Common Good, is to make firms artificially small. When a French firm with 49 employees hires one more employee, he noted, it is subject to 34 additional legal obligations. Not surprisingly, therefore, in a figure that shows the number of enterprises with various numbers of employees, a spike occurs at 47 to 49 employees.

In Economics for the Common Good, Tirole ranged widely over policy issues in France. In addressing the French university system, he criticized the system’s rejection of selective admission to university. He argued that such a system causes the least prepared students to drop out and concluded that “[O]n the whole, the French educational system is a vast insider-trading crime.”

Tirole is chairman of the Toulouse School of Economics and of the Institute for Advanced Study in Toulouse. A French citizen, he was born in Troyes, France and earned his Ph.D. in economics in 1981 from the Massachusetts Institute of Technology.


Selected Works

  1. 1986. (Co-authored with Jean-Jacques Laffont). “Using Cost Observation to Regulate Firms.” Journal of Political Economy, 94:3 (Part I), June: 614-641.

  2. 1988. The Theory of Industrial Organization. MIT Press.

  3. 1990. (Co-authored with Drew Fudenberg). “Moral Hazard and Renegotiation in Agency Contracts.” Econometrica, 58:6, November: 1279-1319.

  4. 1993. (Co-authored with Jean-Jacques Laffont). A Theory of Incentives in Procurement and Regulation. MIT Press.

  5. 2003. (Co-authored with Jean-Charles Rochet). “An Economic Analysis of the Determination of Interchange Fees in Payment Card Systems.” Review of Network Economics, 2:2: 69-79.

  6. 2006. (Co-authored with Jean-Charles Rochet). “Two-Sided Markets: A Progress Report.” The RAND Journal of Economics, 37:3, Autumn: 645-667.

  7. 2017. Economics for the Common Good. Princeton University Press.

CEE November 30, 2018

The 2008 Financial Crisis

It was, according to accounts filtering out of the White House, an extraordinary scene. Hank Paulson, the U.S. treasury secretary and a man with a personal fortune estimated at $700m (£380m), had got down on one knee before the most powerful woman in Congress, Nancy Pelosi, and begged her to save his plan to rescue Wall Street.

    The Guardian, September 26, 2008[1]

The financial crisis of 2008 was a complex event that took most economists and market participants by surprise. Since then, there have been many attempts to arrive at a narrative to explain the crisis, but none has proven definitive. For example, a Congressionally-chartered ten-member Financial Crisis Inquiry Commission produced three separate narratives, one supported by the members appointed by the Democrats, one supported by four members appointed by the Republicans, and a third written by the fifth Republican member, Peter Wallison.[2]

It is important to appreciate that the financial system is complex, not merely complicated. A complicated system, such as a smartphone, has a fixed structure, so it behaves in ways that are predictable and controllable. A complex system has an evolving structure, so it can evolve in ways that no one anticipates. We will never have a proven understanding of what caused the financial crisis, just as we will never have a proven understanding of what caused the First World War.

There can be no single, definitive narrative of the crisis. This entry can cover only a small subset of the issues raised by the episode.

Metaphorically, we may think of the crisis as a fire. It started in the housing market, spread to the sub-prime mortgage market, then engulfed the entire mortgage securities market and, finally, swept through the inter-bank lending market and the market for asset-backed commercial paper.

Home sales began to slow in the latter part of 2006. This soon created problems for the sector of the mortgage market devoted to making risky loans, with several major lenders—including the largest, New Century Financial—declaring bankruptcy early in 2007. At the time, the problem was referred to as the “sub-prime mortgage crisis,” confined to a few marginal institutions.

But by the spring of 2008, trouble was apparent at some Wall Street investment banks that underwrote securities backed by sub-prime mortgages. On March 16, commercial bank JP Morgan Chase acquired one of these firms, Bear Stearns, with help from loan guarantees provided by the Federal Reserve, the central bank of the United States.

Trouble then began to surface at all the major institutions in the mortgage securities market. By late summer, many investors had lost confidence in Freddie Mac and Fannie Mae, and the interest rates that lenders demanded from them were higher than what they could pay and still remain afloat. On September 7, the U.S. Treasury took these two government-sponsored enterprises (GSEs) into “conservatorship.”

Finally, the crisis hit the short-term inter-bank collateralized lending markets, in which all of the world’s major financial institutions participate. This phase began after government officials’ unsuccessful attempts to arrange a merger of investment bank Lehman Brothers, which declared bankruptcy on September 15. This bankruptcy caused the Reserve Primary money market fund, which held a lot of short-term Lehman securities, to mark down the value of its shares below the standard value of one dollar each. That created jitters in all short-term lending markets, including the inter-bank lending market and the market for asset-backed commercial paper in general, and caused stress among major European banks.

The freeze-up in the interbank lending market was too much for leading public officials to bear. Under intense pressure to act, Treasury Secretary Henry Paulson proposed a $700 billion financial rescue program. Congress initially voted it down, leading to heavy losses in the stock market and causing Secretary Paulson to plead for its passage. On a second vote, the measure, known as the Troubled Assets Relief Program (TARP), was approved.

In hindsight, within each sector affected by the crisis, we can find moral hazard, cognitive failures, and policy failures. Moral hazard (in insurance company terminology) arises when individuals and firms face incentives to profit from taking risks without having to bear responsibility in the event of losses. Cognitive failures arise when individuals and firms base decisions on faulty assumptions about potential scenarios. Policy failures arise when regulators reinforce rather than counteract the moral hazard and cognitive failures of market participants.

The Housing Sector

From roughly 1990 to the middle of 2006, the housing market was characterized by the following:

  • an environment of low interest rates, both in nominal and real (inflation-adjusted) terms. Low nominal rates create low monthly payments for borrowers. Low real rates raise the value of all durable assets, including housing.
  • prices for houses rising as fast as or faster than the overall price level
  • an increase in the share of households owning rather than renting
  • loosening of mortgage underwriting standards, allowing households with weaker credit histories to qualify for mortgages.
  • lower minimum requirements for down payments. A standard requirement of at least ten percent was reduced to three percent and, in some cases, zero. This resulted in a large increase in the share of home purchases made with down payments of five percent or less.
  • an increase in the use of new types of mortgages with “negative amortization,” meaning that the outstanding principal balance rises over time.
  • an increase in consumers’ borrowing against their houses to finance spending, using home equity loans, second mortgages, and refinancing of existing mortgages with new loans for larger amounts.
  • an increase in the proportion of mortgages going to people who were not planning to live in the homes that they purchased. Instead, they were buying them to speculate.[3]

These phenomena produced an increase in mortgage debt that far outpaced the rise in income over the same period. The trends accelerated in the three years just prior to the downturn in the second half of 2006.

The rise in mortgage debt relative to income was not a problem as long as home prices were rising. A borrower having difficulty finding the cash to make a mortgage payment on a house that had appreciated in value could either borrow more with the house as collateral or sell the house to pay off the debt.

But when house prices stopped rising late in 2006, households that had taken on too much debt began to default. This set in motion a reverse cycle: house foreclosures increased the supply of homes for sale; meanwhile, lenders became wary of extending credit, and this reduced demand. Prices fell further, leading to more defaults and spurring lenders to tighten credit still further.
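
A stylized example (my numbers, for illustration) shows how little room low-down-payment borrowers had: a buyer who puts 3 percent down on a $200,000 house borrows $194,000 and starts with $6,000 of equity. A 5 percent price decline, to $190,000, leaves the borrower owing $4,000 more than the house is worth; selling no longer pays off the debt, and default becomes the tempting option.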

During the boom, some people were speculating in non-owner-occupied homes, while others were buying their own homes with little or no money down. And other households were, in the vernacular of the time, “using their houses as ATMs,” taking on additional mortgage debt in order to finance consumption.

In most states in the United States, once a mortgage lender forecloses on a property, the borrower is not responsible for repayment, even if the house cannot be sold for enough to cover the loan. This creates moral hazard, particularly for property speculators, who can enjoy all of the profits if house prices rise but can stick lenders with some of the losses if prices fall.

One can see cognitive failure in the way that owners of houses expected home prices to keep rising at a ten percent rate indefinitely, even though overall inflation was less than half that amount.[4] Also, many house owners seemed unaware of the risks of mortgages with “negative amortization.”

Policy failure played a big role in the housing sector. All of the trends listed above were supported by public policy. Because they wanted to see increased home ownership, politicians urged lenders to loosen credit standards. With the Community Reinvestment Act for banks and Affordable Housing Goals for Freddie Mac and Fannie Mae, they spurred traditional mortgage lenders to increase their lending to minority and low-income borrowers. When the crisis hit, politicians blamed lenders for borrowers’ inability to repay, and political pressure exacerbated the credit tightening that subsequently took place.

The Sub-prime Mortgage Sector

Until the late 1990s, few lenders were willing to give mortgages to borrowers with problematic credit histories. But sub-prime mortgage lenders emerged and grew rapidly in the decade leading up to the crisis. This growth was fueled by financial innovations, including the use of credit scoring to finely grade mortgage borrowers, and the use of structured mortgage securities (discussed in the next section) to make the sub-prime sector attractive to investors with a low tolerance for risk. Above all, it was fueled by rising home prices, which created a history of low default rates.

There was moral hazard in the sub-prime mortgage sector because the lenders were not holding on to the loans and, therefore, not exposing themselves to default risk. Instead, they packaged the mortgages into securities and sold them to investors, with the securities market allocating the risk.

Because they sold loans in the secondary market, profits at sub-prime lenders were driven by volume, regardless of the likelihood of default. Turning down a borrower meant getting no revenue. Approving a borrower meant earning a fee. These incentives were passed through to the staff responsible for finding potential borrowers and underwriting loans, so that personnel were compensated based on “production,” meaning the new loans they originated.

Although in theory the sub-prime lenders were passing on to others the risks that were embedded in the loans they were making, they were among the first institutions to go bankrupt during the financial crisis. This shows that there was cognitive failure in the management at these companies, as they did not foresee the house price slowdown or its impact on their firms.

Cognitive failure also played a role in the rise of mortgages that were underwritten without verification of the borrowers’ income, employment, or assets. Historical data showed that credit scores were sufficient for assessing borrower risk and that additional verification contributed little predictive value. However, it turned out that once lenders were willing to forgo these documents, they attracted a different set of borrowers, whose propensity to default was higher than their credit scores otherwise indicated.

There was policy failure in that abuses in the sub-prime mortgage sector were allowed to continue. Ironically, while the safety and soundness of Freddie Mac and Fannie Mae were regulated under the Department of Housing and Urban Development, which had an institutional mission to expand home ownership, consumer protection with regard to mortgages was regulated by the Federal Reserve Board, whose primary institutional missions were monetary policy and bank safety. Though mortgage lenders were setting up borrowers to fail, the Federal Reserve made little or no effort to intervene. Even those policy makers who were concerned about practices in the sub-prime sector believed that, on balance, sub-prime mortgage lending was helping a previously under-served set of households to attain home ownership.[5]

Mortgage Securities

A mortgage security consists of a pool of mortgage loans, the payments on which are passed through to pension funds, insurance companies, or other institutional investors looking for reliable returns with little risk. The market for mortgage securities was created by two government agencies, known as Ginnie Mae and Freddie Mac, established in 1968 and 1970, respectively.

Mortgage securitization expanded in the 1980s, when Fannie Mae, which previously had used debt to finance its mortgage purchases, began issuing its own mortgage-backed securities. At the same time, Freddie Mac was sold to shareholders, who encouraged Freddie to grow its market share. But even though Freddie and Fannie were shareholder-owned, investors treated their securities as if they were government-backed. This was known as an implicit government guarantee.

Attempts to create a market for private-label mortgage securities (PLMS) without any form of government guarantee were largely unsuccessful until the late 1990s. The innovations that finally got the PLMS market going were credit scoring and the collateralized debt obligation (CDO).

Before credit scoring was used in the mortgage market, there was no quantifiable difference between any two borrowers who were approved for loans. With credit scoring, the Wall Street firms assembling pools of mortgages could distinguish between a borrower with a very good score (750, as measured by the popular FICO system) and one with a more doubtful score (650).

Using CDOs, Wall Street firms were able to provide major institutional investors with insulation from default risk by concentrating that risk in other sub-securities (“tranches”) that were sold to investors who were more tolerant of risk. In fact, these basic CDOs were enhanced by other exotic mechanisms, such as credit default swaps, that reallocated mortgage default risk to institutions in which hardly any observer expected to find it, including AIG Insurance.
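
A tiny simulation can illustrate both why the senior tranches looked safe and how a common house-price decline broke the models. Everything below (pool size, default probabilities, tranche sizes) is invented for illustration, not calibrated to actual CDOs:

```python
# Illustrative CDO tranche simulation: senior-tranche risk under independent
# defaults vs. defaults driven by a common "house-price bust" factor.
import random

random.seed(42)
N_LOANS = 100        # mortgages in the pool, one unit of principal each
P_DEFAULT = 0.05     # unconditional default probability per loan
JUNIOR_SIZE = 15     # junior tranche absorbs the first 15 units of loss
TRIALS = 20_000

def pool_defaults(correlated: bool) -> int:
    if correlated:
        # A single common draw: in a bust (10% of trials) every loan's
        # default probability jumps to 35%; the non-bust probability is set
        # so the unconditional default rate is still 5%.
        bust = random.random() < 0.10
        p = 0.35 if bust else (P_DEFAULT - 0.10 * 0.35) / 0.90
    else:
        p = P_DEFAULT
    return sum(random.random() < p for _ in range(N_LOANS))

for correlated in (False, True):
    impaired = sum(pool_defaults(correlated) > JUNIOR_SIZE for _ in range(TRIALS))
    label = "correlated (house-price factor)" if correlated else "independent defaults"
    print(f"{label}: senior tranche impaired in {impaired / TRIALS:.2%} of trials")
# Independent defaults cluster tightly around 5 per 100 loans, so losses
# essentially never reach the senior tranche. With the common factor, a bust
# pushes defaults toward 35 and the senior tranche is hit in roughly one
# trial in ten -- the same average default rate, a very different tail.
```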

There was moral hazard in the mortgage securities market, as Freddie Mac and Fannie Mae sought profits and growth on behalf of shareholders, but investors in their securities expected (correctly, as it turned out) that the government would protect them against losses. Years before the crisis, critics grumbled that the mortgage giants exemplified privatized profits and socialized risks.[6]

There was cognitive failure in the assessment of default risk. Assembling CDOs and other exotic instruments required sophisticated statistical modeling. The most important driver of expectations for mortgage defaults is the path for house prices, and the steep, broad-based decline in home prices that took place in 2006-2009 was outside the range that some modelers allowed for.

Another source of cognitive failure is the “suits/geeks” divide. In many firms, the financial engineers (“geeks”) understood the risks of mortgage-related securities fairly well, but their conclusions did not make their way to the senior management level (“suits”).

There was policy failure on the part of bank regulators. Their previous adverse experience was with the Savings and Loan Crisis, in which firms that originated and retained mortgages went bankrupt in large numbers. This caused bank regulators to believe that mortgage securitization, which took risk off the books of depository institutions, would be safer for the financial system. For the purpose of assessing capital requirements for banks, regulators assigned a weight of 100 percent to mortgages originated and held by the bank, but assigned a weight of only 20 percent to the bank’s holdings of mortgage securities issued by Freddie Mac, Fannie Mae, or Ginnie Mae. This meant that banks needed to hold much more capital to hold mortgages than to hold mortgage-related securities; that naturally steered them toward the latter.
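
To put rough numbers on the incentive (the 8 percent headline capital ratio is the Basel standard of the era; the portfolio sizes are my own illustration): holding $100 million of whole mortgages at a 100 percent risk weight required 0.08 × $100 million = $8 million of capital, while holding $100 million of agency mortgage securities at a 20 percent weight required only 0.08 × 0.20 × $100 million = $1.6 million.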

In 2001, regulators broadened the low-risk umbrella to include AAA-rated and AA-rated tranches of private-label CDOs. This ruling helped to generate a flood of PLMS, many of them backed by sub-prime mortgage loans.[7]

By using bond ratings as a key determinant of capital requirements, the regulators effectively put the bond rating agencies at the center of the process of creating private-label CDOs. The rating agencies immediately became subject to both moral hazard and cognitive failure. The moral hazard came from the fact that the rating agencies were paid by the issuers of securities, who wanted the most generous ratings possible, rather than by the regulators, who needed more rigorous ratings. The cognitive failure came from the fact that the models the rating agencies used gave too little weight to potential scenarios of broad-based declines in house prices. Moreover, the banks that bought the securities were happy to see them rated AAA because the high ratings made the securities eligible for lower capital requirements. Both sides, buyers and sellers alike, therefore had bad incentives.

There was policy failure on the part of Congress. Officials in both the Clinton and Bush administrations were unhappy with the risk that Freddie Mac and Fannie Mae represented to taxpayers. But Congress balked at any attempt to tighten regulation of the safety and soundness of those firms.[8]

The Inter-bank Lending Market

There are a number of mechanisms through which financial institutions make short-term loans to one another. In the United States, banks use the Federal Funds market to manage short-term fluctuations in reserves. Internationally, banks lend in what is known as the LIBOR market.

One of the least known and most important markets is for “repo,” which is short for “repurchase agreement.” As first developed, the repo market was used by government bond dealers to finance inventories of securities, just as an automobile dealer might finance an inventory of cars. A money-market fund might lend money for one day or one week to a bond dealer, with the loan collateralized by a low-risk long-term security.
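
A stylized repo transaction (my numbers): a money-market fund lends a dealer $98 million overnight against $100 million of securities posted as collateral, a 2 percent “haircut.” If doubts about the collateral push haircuts to 20 percent, the same securities support only $80 million of borrowing, and the dealer must shrink its positions or find funding elsewhere. Multiplied across the market, rising haircuts and withdrawn lenders are the run.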

In the years leading up to the crisis, some dealers were financing low-risk mortgage-related securities in the repo market. But when some of these securities turned out to be subject to price declines that took them out of the “low-risk” category, participants in the repo market began to worry about all repo collateral. Repo lending offers very low profit margins, and if an investor has to be very discriminating about the collateral backing a repo loan, it can seem preferable to back out of repo lending altogether. This, indeed, is what happened, in what economist Gary Gorton and others called a “run on repo.”[9]

Another element of institutional panic was “collateral calls” involving derivative financial instruments. Derivatives, such as credit default swaps, are like side bets. The buyer of a credit default swap is betting that a particular debt instrument will default. The seller of a credit default swap is betting the opposite.

In the case of mortgage-related securities, the probability of default seemed low prior to the crisis. Sometimes, buyers of credit default swaps were merely satisfying the technical requirements to record the underlying securities as AAA-rated. They could do this if they obtained a credit default swap from an institution that was itself AAA-rated. AIG was an insurance company that saw an opportunity to take advantage of its AAA rating to sell credit default swaps on mortgage-related securities. AIG collected fees, and its Financial Products division calculated that the probability of default was essentially zero. The fees earned on each transaction were low, but the overall profit was high because of the enormous volume. AIG’s credit default swaps were a major element in the expansion of shadow banking by non-bank financial institutions during the run-up to the crisis.

Late in 2005, AIG abruptly stopped writing credit default swaps, in part because its own rating had been downgraded below AAA earlier in the year for unrelated reasons. By the time AIG stopped selling credit default swaps on mortgage-related securities, it had outstanding obligations on $80 billion of underlying securities and was earning $1 billion a year in fees.[10]

Because AIG no longer had its AAA rating and because the underlying mortgage securities, while not in default, were increasingly shaky, provisions in the contracts that AIG had written allowed the buyers of credit default swaps to require AIG to provide protection in the form of low-risk securities posted as collateral. These “collateral calls” were like a margin call that a stock broker will make on an investor who has borrowed money to buy stock that subsequently declines in value. In effect, collateral calls were a run on AIG’s shadow bank.

These collateral calls were made when the crisis in the inter-bank lending market was near its height in the summer of 2008 and banks were hoarding low-risk securities. In fact, the shortage of low-risk securities may have motivated some of the collateral calls, as institutions like Deutsche Bank and Goldman Sachs sought ways to ease their own liquidity problems. In any event, AIG could not raise enough short-term funds to meet its collateral calls without trying to dump long-term securities into a market that had little depth to absorb them. It turned to Federal authorities for a bailout, which was arranged and creatively backed by the Federal Reserve, but at the cost of reducing the value of shares in AIG.

With repos and derivatives, there was moral hazard in that the traders and executives of the narrow units that engaged in exotic transactions were able to claim large bonuses on the basis of short-term profits. But the adverse long-term consequences were spread to the rest of the firm and, ultimately, to taxpayers.

There was cognitive failure in that the collateral calls were an unanticipated risk of the derivatives business. The financial engineers focused on the (remote) chances of default on the underlying securities, not on the intermediate stress that might emerge from collateral calls.

There was policy failure when Congress passed the Commodity Futures Modernization Act. This legislation specified that derivatives would not be regulated by either of the agencies with the staff most qualified to understand them. Rather than require oversight by the Securities and Exchange Commission or the Commodity Futures Trading Commission (which regulated market-traded derivatives), Congress decreed that the regulator responsible for overseeing each firm would evaluate its derivative position. The logic was that a bank that was using derivatives to hedge other transactions should have its derivative position evaluated in a larger context. But, as it happened, the insurance and bank regulators who ended up with this responsibility were not equipped to see the dangers at firms such as AIG.

There was also policy failure in that officials approved of securitization that transferred risk out of the regulated banking sector. While Federal Reserve officials were praising the risk management of commercial banks,[11] risk was accumulating in the shadow banking sector (non-bank institutions in the financial system), including AIG insurance, money market funds, Wall Street firms such as Bear Stearns and Lehman Brothers, and major foreign banks. When problems in the shadow banking sector contributed to the freeze in inter-bank lending and in the market for asset-backed commercial paper, policy makers felt compelled to extend bailouts to satisfy the needs of these non-bank institutions for liquid assets.

Conclusion

In terms of the fire metaphor suggested earlier, in hindsight, we can see that the markets for housing, sub-prime mortgages, mortgage-related securities, and inter-bank lending were all highly flammable just prior to the crisis. Moral hazard, cognitive failures, and policy failures all contributed to the combustible mix.

The crisis also reflects a failure of the economics profession. A few economists, most notably Robert Shiller,[12] warned that the housing market was inflated, as indicated by ratios of prices to rents that were high by historical standards. Also, when risk-based capital regulation was proposed in the wake of the Savings and Loan Crisis and the Latin American debt crisis, a group of economists known as the Shadow Regulatory Committee warned that these regulations could be manipulated. They recommended, instead, greater use of senior subordinated debt at regulated financial institutions.[13] Many economists warned about the incentives for risk-taking at Freddie Mac and Fannie Mae.[14]

But even these economists failed to anticipate the 2008 crisis, in large part because economists did not take note of the complex mortgage-related securities and derivative instruments that had been developed. Economists have a strong preference for parsimonious models, and they look at financial markets through a lens that includes only a few types of simple assets, such as government bonds and corporate stock. This approach ignores even the repo market, which has been important in the financial system for over 40 years, and, of course, it omits CDOs, credit default swaps and other, more recent innovations.

Financial intermediaries do not produce tangible output that can be measured and counted. Instead, they provide intangible benefits that economists have never clearly articulated. The economics profession has a long way to go to catch up with modern finance.


About the Author

Arnold Kling was an economist with the Federal Reserve Board and with the Federal Home Loan Mortgage Corporation before launching one of the first Web-based businesses in 1994. His most recent books are Specialization and Trade and The Three Languages of Politics. He earned his Ph.D. in economics from the Massachusetts Institute of Technology.


Footnotes

[1] “A desperate plea – then race for a deal before ‘sucker goes down’,” The Guardian, September 26, 2008. https://www.theguardian.com/business/2008/sep/27/wallstreet.useconomy1

[2] The report and dissents of the Financial Crisis Inquiry Commission are available at https://fcic.law.stanford.edu/

[3] Stefania Albanesi, Giacomo De Giorgi, and Jaromir Nosal, 2017, “Credit Growth and the Financial Crisis: A New Narrative,” NBER Working Paper No. 23740. http://www.nber.org/papers/w23740

[4] Karl E. Case and Robert J. Shiller, 2003, “Is There a Bubble in the Housing Market?” Cowles Foundation Paper 1089. http://www.econ.yale.edu/shiller/pubs/p1089.pdf

[5] Edward M. Gramlich, 2004, “Subprime Mortgage Lending: Benefits, Costs, and Challenges,” Federal Reserve Board speeches. https://www.federalreserve.gov/boarddocs/speeches/2004/20040521/

[6] For example, in 1999, Treasury Secretary Lawrence Summers said in a speech, “Debates about systemic risk should also now include government-sponsored enterprises.” See Bethany McLean and Joe Nocera, 2010, All the Devils Are Here: The Hidden History of the Financial Crisis, Portfolio/Penguin Press. The authors write that Federal Reserve Chairman Alan Greenspan was also, like Summers, disturbed by the moral hazard inherent in the GSEs.

[7] Jeffrey Friedman and Wladimir Kraus, 2013, Engineering the Financial Crisis: Systemic Risk and the Failure of Regulation, University of Pennsylvania Press.

[8] See McLean and Nocera, All the Devils Are Here.

[9] Gary Gorton, Toomas Laarits, and Andrew Metrick, 2017, “The Run on Repo and the Fed’s Response,” Stanford working paper. https://www.gsb.stanford.edu/sites/gsb/files/fin_11_17_gorton.pdf

[10] Talking Points Memo, 2009, “The Rise and Fall of AIG’s Financial Products Unit.” https://talkingpointsmemo.com/muckraker/the-rise-and-fall-of-aig-s-financial-products-unit

[11] Ben S. Bernanke, 2006, “Modern Risk Management and Banking Supervision,” Federal Reserve Board speeches. https://www.federalreserve.gov/newsevents/speech/bernanke20060612a.htm

[12] National Public Radio, 2005, “Yale Professor Predicts Housing ‘Bubble’ Will Burst.” https://www.npr.org/templates/story/story.php?storyId=4679264

[13] Shadow Financial Regulatory Committee, 2001, “The Basel Committee’s Revised Capital Accord Proposal.” https://www.bis.org/bcbs/ca/shfirect.pdf

[14] See the discussion in Viral V. Acharya, Matthew Richardson, Stijn Van Nieuwerburgh, and Lawrence J. White, 2011, Guaranteed to Fail: Fannie Mae, Freddie Mac, and the Debacle of Mortgage Finance, Princeton University Press.

CEE September 18, 2018

Christopher Sims

Christopher Sims was awarded, along with Thomas Sargent, the 2011 Nobel Prize in Economic Sciences. The Nobel committee cited their “empirical research on cause and effect in the macroeconomy.” The economists who spoke at the press conference announcing the award emphasized Sargent’s and Sims’s analysis of the role of people’s expectations.

One of Sims’s earliest famous contributions was his work on money-income causality, which was cited by the Nobel committee. Money and income move together, but which causes which? Milton Friedman argued that changes in the money supply caused changes in income, noting that the supply of money often rises before income rises. Keynesians such as James Tobin argued that changes in income caused changes in the amount of money. Money seems to move first, but causality, said Tobin and others, still goes the other way: people hold more money when they expect income to rise in the future.

Which view is true? In 1972 Sims applied Clive Granger’s econometric test of causality. On Granger’s definition one variable is said to cause another variable if knowledge of the past values of the possibly causal variable helps to forecast the effect variable over and above the knowledge of the history of the effect variable itself. Implementing a test of this incremental predictability, Sims concluded “[T]he hypothesis that causality is unidirectional from money to income [Friedman’s view] agrees with the postwar U.S. data, whereas the hypothesis that causality is unidirectional from income to money [Tobin’s view] is rejected.”
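
Granger’s test is easy to sketch with modern tools. The code below runs on simulated data in which “money” really does help predict “income” (the variable names and parameters are my own toy construction, not Sims’s 1972 data set):

```python
# Granger-causality sketch on simulated data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 400
money = np.zeros(T)
income = np.zeros(T)
for t in range(1, T):
    money[t] = 0.6 * money[t - 1] + rng.normal()
    # Income responds to last period's money, so money "Granger-causes" income.
    income[t] = 0.4 * income[t - 1] + 0.5 * money[t - 1] + rng.normal()

# statsmodels convention: test whether the SECOND column helps predict the FIRST.
data = np.column_stack([income, money])
results = grangercausalitytests(data, maxlag=2)
# Small p-values on the F-tests mean lagged money adds predictive power for
# income beyond income's own history; reversing the columns should not.
```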

Sims’s influential article “Macroeconomics and Reality” was a criticism both of the usual econometric interpretation of large-scale Keynesian econometric models and of Robert Lucas’s influential earlier criticism of these Keynesian models (the so-called Lucas critique). Keynesian econometricians had claimed that with sufficiently accurate theoretical assumptions about the structure of the economy, correlations among the macroeconomic variables could be used to measure the strengths of various structural connections in the economy. Sims argued that there was no basis for thinking that these theoretical assumptions were sufficiently accurate. Such so-called “identifying assumptions” were, Sims said, literally “incredible.” Lucas, on the other hand, had not rejected the idea of such identification. Rather, he had pointed out that if people held “rational expectations” – that is, expectations that, though possibly incorrect, did not deviate on average from what actually occurs in a correctable, systematic manner – then failing to account for them would undermine the stability of the econometric estimates and render the macromodels useless for policy analysis. Lucas and his New Classical followers argued that in forming their expectations people take account of the rules implicitly followed by monetary and fiscal policymakers; and, unless those rules were integrated into the econometric model, every time the policymakers adopted a new policy (i.e., new rules), the estimates would shift in unpredictable ways.

While rejecting the structural interpretation of large-scale macromodels, Sims did not reject the models themselves, writing: “[T]here is no immediate prospect that large-scale macromodels will disappear from the scene, and for good reason: they are useful tools in forecasting and policy analysis.” Sims conceded that the Lucas critique was correct in those cases in which policy regimes truly changed. But he argued that such regime changes were rare and that most economic policy was concerned with the implementation of a particular policy regime. For that purpose, the large-scale macromodels could be helpful, since what was needed for forecasting was a model that captured the complex interrelationships among variables and not one that revealed the deeper structural connections.

In the same article, Sims proposed an alternative to large-scale macroeconomic models, the vector autoregression (or VAR). In Sims’s view, the VAR had the advantages of the earlier macromodels, in that it could capture the complex interactions among a relatively large number of variables needed for policy analysis and yet did not rely on as many questionable theoretical assumptions. With subsequent developments by Sims and others, the VAR became a major tool of empirical macroeconomic analysis.
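
A reduced-form VAR is equally easy to sketch (again with simulated data and placeholder variable names; real applications use more variables and careful lag selection):

```python
# Fitting a two-variable VAR and computing impulse responses.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 400
A = np.array([[0.6, 0.0],
              [0.5, 0.4]])             # toy dynamics: money feeds into income
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

df = pd.DataFrame(y, columns=["money", "income"])
fit = VAR(df).fit(maxlags=2, ic="aic")  # lag length chosen by information criterion
print(fit.summary())
irf = fit.irf(10)                       # responses to innovations, 10 periods out
# irf.plot() would trace how an innovation in money propagates to income --
# the kind of exercise that made the VAR a workhorse of empirical macro.
```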

Sims has also suggested that sticky prices are caused by “rational inattention,” an idea imported from electronic communications. Just as computers do not access information on the Internet infinitely fast (but rather, in bits per second), individual actors in an economy have only a finite ability to process information. This delay produces some sluggishness and randomness, and allows for more accurate forecasts than conventional models, in which people are assumed to be highly averse to change.

Sims’s recent work has focused on the fiscal theory of the price level, the view that inflation in the end is determined by fiscal problems—the overall amount of debt relative to the government’s ability to repay it—rather than by the split in government debt between base money and bonds. In 1999, Sims suggested that the fiscal foundations of the European Monetary Union were “precarious” and that a fiscal crisis in one country “would likely breed contagion effects in other countries.” The Greek financial crisis about a decade later seemed to confirm his prediction.

Christopher Sims earned his B.A. in mathematics in 1963 and his Ph.D. in economics in 1968, both from Harvard University. He taught at Harvard from 1968 to 1970, at the University of Minnesota from 1970 to 1990, at Yale University from 1990 to 1999, and at Princeton University from 1999 to the present. He has been a Fellow of the Econometric Society since 1974, a member of the American Academy of Arts and Sciences since 1988, a member of the National Academy of Sciences since 1989, President of the Econometric Society (1995), and President of the American Economic Association (2012). He has been a Visiting Scholar for the Federal Reserve Banks of Atlanta, New York, and Philadelphia off and on since 1994.


Selected Works

  1. 1972. “Money, Income, and Causality.” American Economic Review 62:4 (September): 540-552.

  2. 1980. “Macroeconomics and Reality.” Econometrica 48:1 (January): 1-48.

  3. 1990. (with James H. Stock and Mark W. Watson). “Inference in Linear Time Series Models with Some Unit Roots.” Econometrica 58:1 (January): 113-144.

  4. 1999. “The Precarious Fiscal Foundations of EMU.” De Economist 147:4 (December): 415-436.

  5. 2003. “Implications of Rational Inattention.” Journal of Monetary Economics 50:3 (April): 665-690.

CEE June 28, 2018

Gordon Tullock

Gordon Tullock, along with his colleague James M. Buchanan, was a founder of the School of Public Choice. Among his contributions to public choice were his study of bureaucracy, his early insights on rent seeking, his study of political revolutions, his analysis of dictatorships, and his analysis of incentives and outcomes in foreign policy. Tullock also contributed to the study of optimal organization of research, was a strong critic of common law, and did work on evolutionary biology. He was arguably one of the ten or so most influential economists of the last half of the twentieth century. Many economists believe that Tullock deserved to share Buchanan’s 1986 Nobel Prize or even deserved a Nobel Prize on his own.

One of Tullock’s early contributions to public choice was The Calculus of Consent: Logical Foundations of Constitutional Democracy, co-authored with Buchanan in 1962. In that path-breaking book, the authors assume that people seek their own interests in the political system and then consider the results of various rules and political structures. One can think of their book as a political economist’s version of Montesquieu.

One of the most masterful sections of The Calculus of Consent is the chapter in which the authors, using a model formulated by Tullock, consider what good decision rules would be for agreeing to have someone in government make a decision for the collective. An individual realizes that if only one person’s consent is required, and he is not that person, he could have huge costs imposed on him. Requiring more people’s consent in order for government to take action reduces the probability that that individual will be hurt. But as the number of people required to agree rises, the decision costs rise. In the extreme, if unanimity is required, people can game the system and hold out for a disproportionate share of benefits before they give their consent. The authors show that the individual’s preferred rule would be one by which the costs imposed on him plus the decision costs are at a minimum. That preferred rule would vary from person to person. But, they note, it would be highly improbable that the optimal decision rule would be one that requires a simple majority. They write, “On balance, 51 percent of the voting population would not seem to be much preferable to 49 percent.” They suggest further that the optimal rule would depend on the issues at stake. Because, they note, legislative action may “produce severe capital losses or lucrative capital gains” for various groups, the rational person, not knowing his own future position, might well want strong restraints on the exercise of legislative power.
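
The argument compresses into one line of notation (my formalization of the book’s verbal argument, not Buchanan and Tullock’s own symbols): let C(k) be the expected external costs that collective action under a rule requiring the consent of k of N citizens imposes on an individual (falling in k, since broader consent makes it harder to exploit him), and let D(k) be the cost of reaching a decision under that rule (rising in k). The individual’s preferred rule solves

\[ k^{*} = \arg\min_{k \in \{1,\dots,N\}} \big[ C(k) + D(k) \big], \]

and nothing about the shapes of C and D makes k = 0.51N special.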

Tullock’s part of The Calculus of Consent was a natural outgrowth of an unpublished manuscript written in the 1950s that later became his 1965 book, The Politics of Bureaucracy. Buchanan, reminiscing about that book, summed up Tullock’s approach and the book’s significance:

The substantive contribution in the manuscript was centered on the hypothesis that, regardless of role, the individual bureaucrat responds to the rewards and punishments that he confronts. This straightforward, and now so simple, hypothesis turned the whole post-Weberian quasi-normative approach to bureaucracy on its head. . . . The economic theory of bureaucracy was born.1

Buchanan noted in his reminiscence that Tullock’s “fascinating analysis” was “almost totally buried in an irritating personal narrative account of Tullock’s nine-year experience in the foreign service hierarchy.” Buchanan continued: “Then, as now, Tullock’s work was marked by his apparent inability to separate analytical exposition from personal anecdote.” Translation: Tullock learned from his experiences. As a Foreign Service officer with the U.S. State Department for nine years Tullock learned, up close and “personal,” how dysfunctional bureaucracy can be. In a later reminiscence, Tullock concluded:

A 90 per cent cut-back on our Foreign Service would save money without really damaging our international relations or stature.2

Tullock made many other contributions in considering incentives within the political system. Particularly noteworthy was his work on political revolutions and on dictatorships.

Consider, first, political revolutions. Any one person’s decision to participate in a revolution, Tullock noted, does not much affect the probability that the revolution will succeed. Therefore, each person’s actions do not much affect his expected benefits from revolution. On the other hand, a ruthless head of government can individualize the costs by heavily punishing those who participate in a revolution. So anyone contemplating participating in a revolution will be comparing heavy individual costs with small benefits that are simply his pro rata share of the overall benefits. Therefore, argued Tullock, for people to participate, they must expect some large benefits that are tied to their own participation, such as a job in the new government. That would explain an empirical regularity that Tullock noted—namely that “in most revolutions, the people who overthrow the existing government were high officials in that government before the revolution.”
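A quick expected-value sketch shows why the pro rata benefit cannot, by itself, motivate participation. Every number below is an illustrative assumption.

```python
# Back-of-the-envelope expected values for joining a revolution.
# Every number here is an illustrative assumption.

prob_shift_from_joining = 1e-6   # one more participant barely moves the odds
pro_rata_share_of_gains = 500    # one's slice of the public benefit, in dollars
prob_punished = 0.10             # chance the regime singles you out
punishment = 50_000              # cost if it does, in dollars

expected_benefit = prob_shift_from_joining * pro_rata_share_of_gains
expected_cost = prob_punished * punishment

print(f"Expected pro rata benefit of joining: ${expected_benefit:.4f}")
print(f"Expected personal cost of joining:    ${expected_cost:,.0f}")
# Only a large private payoff conditional on joining (a ministry, say)
# can flip this comparison, which is Tullock's explanation for why
# revolutions are so often staged by insiders.
```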

This thinking carried over to his work on autocracy. In Autocracy, Tullock pointed out that in most societies at most times, governments were not democratically elected but were autocracies: they were dictatorships or kingdoms. For that reason, he argued, analysts should do more to understand them. Tullock’s book was his attempt to get the discussion started. In a chapter titled “Coups and Their Prevention,” Tullock argued that one of the autocrat’s main challenges is to survive in office. He wrote: “The dictator lives continuously under the Sword of Damocles and equally continuously worries about the thickness of the thread.” Tullock pointed out that a dictator needs his countrymen to believe not that he is good, just, or ordained by God, but that those who try to overthrow him will fail.

Among modern economists, Tullock was the earliest discoverer of the concept of “rent seeking,” although he did not call it that. Before his work, the usual measure of the deadweight loss from monopoly was the part of the loss in consumer surplus that did not increase producer surplus for the monopolist. Consumer surplus is the maximum amount that consumers are willing to pay minus the amount they actually pay; producer surplus, also called “economic rent,” is the amount that producers get minus the minimum amount for which they would be willing to produce. Harberger3 had estimated that for the U.S. economy in the 1950s, that loss was very low, on the order of 0.1 percent of Gross National Product. In “The Welfare Costs of Tariffs, Monopolies, and Theft,” Tullock argued that this method understated the loss from monopoly because it did not take account of the investment of the monopolist—and of others trying to be monopolists—in becoming monopolists. These investments in monopoly are a loss to the economy. Tullock also pointed out that those who seek tariffs invest in getting those tariffs, and so the standard measure of the loss from tariffs understated the loss. His analysis, as the tariff example illustrates, applies more to firms seeking special privileges from government than to private attempts to monopolize via the free market, because private attempts often lead, as if by an invisible hand, to increased competition.
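The gap between the two measures is easy to compute in a textbook linear-demand monopoly model. The sketch below is mine, with illustrative parameters; Tullock's argument is that the rectangle of monopoly rent, and not just the Harberger triangle, can be dissipated in the contest to win the monopoly.

```python
# A minimal sketch of Tullock's point about the cost of monopoly,
# using linear demand P = a - b*Q and constant marginal cost c.
# The parameter values are illustrative assumptions.

a, b, c = 100.0, 1.0, 20.0

q_comp = (a - c) / b            # competitive output (price = marginal cost)
q_mono = (a - c) / (2 * b)      # monopoly output (marginal revenue = c)
p_mono = a - b * q_mono         # monopoly price

harberger_triangle = 0.5 * (p_mono - c) * (q_comp - q_mono)
tullock_rectangle = (p_mono - c) * q_mono   # monopoly profit, i.e. the rent

print(f"Harberger triangle (standard deadweight loss): {harberger_triangle:.0f}")
print(f"Tullock rectangle (rent available to seek):    {tullock_rectangle:.0f}")
# If would-be monopolists compete away the rectangle in lobbying and
# lawyers, the social loss is the triangle plus up to the whole
# rectangle, far larger than the standard measure suggests.
```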

One of Tullock’s most important insights in public choice was in a short article in 1975 titled “The Transitional Gains Trap.” He noted that even though rent seeking often leads to big gains for the rent seekers, those gains are capitalized in asset prices, which means that buyers of the assets make a normal return on the asset. So, for example, if the government requires the use of ethanol in gasoline, owners of land on which corn is grown will find that their land is worth more because of the regulatory requirement. (Ethanol in the United States is produced from corn.) They gain when the regulation is first imposed. But when they sell the land, the new owner pays a price equal to the present value of the stream of the net profits from the land. So the new owner doesn’t get a supra-normal rate of return from the land. In other words, the owner at the time that the regulation was imposed got “transitional gains,” but the new owner does not. This means that the new owner will suffer a capital loss if the regulation is removed and will fight hard to keep the regulation in place, arguing, correctly, that he paid for those gains. That makes repealing the regulation more difficult than otherwise. Tullock concluded that we should therefore try hard to avoid getting into these traps, because they are so hard to get out of.
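A present-value calculation makes the trap concrete. The figures below are illustrative assumptions, not Tullock's.

```python
# A minimal present-value sketch of the "transitional gains trap".
# Illustrative assumptions: a regulation adds $10,000/year of net
# profit to a plot of corn land, and buyers discount at 5%.

extra_profit = 10_000      # annual rent created by the ethanol mandate
discount_rate = 0.05

# A perpetuity of extra profit is capitalized into the land price:
price_premium = extra_profit / discount_rate
print(f"Premium capitalized into land price: ${price_premium:,.0f}")

# The new owner pays the premium up front, so his return is normal:
annual_return = extra_profit / price_premium
print(f"New owner's return on the premium:   {annual_return:.0%}")
# ...exactly the market rate. Repeal the mandate and the new owner,
# who never pocketed the transitional gain, eats a $200,000 capital loss.
```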

Tullock was one of the few public choice economists to apply his tools to foreign policy. In Open Secrets of American Foreign Policy, he takes a hard-headed look at U.S. foreign policy rather than the romantic “the United States is the good guys” view that so many Americans take. For example, he wrote of the U.S. government’s bombing of Serbia under President Bill Clinton:

[T]he bombing campaign was a clear-cut violation of the United Nations Charter and hence, should be regarded as a war crime. It involved the use of military forces without the sanction of the Security Council and without any colorable claim of self-defense. Of course, it was not a first—we [the U.S. government] had done the same thing in Vietnam, Grenada and Panama.

Possibly Tullock’s most underappreciated contributions were in the area of methodology and the economics of research. About a decade after spending six months with philosopher Karl Popper at the Center for Advanced Studies in Palo Alto, Tullock published The Organization of Inquiry. In it, he considered why scientific discovery in both the hard sciences and economics works so well without any central planner, and he argued that centralized funding by government would slow progress. After arguing that applied science is generally more valuable than pure science, Tullock wrote:

Nor is there any real justification for the general tendency to consider pure research as somehow higher and better than applied research. It is certainly more pleasant to engage in research in fields that strike you as interesting than to confine yourself to fields which are likely to be profitable, but there is no reason why the person choosing the more pleasant type of research should be considered more noble.4

In Tullock’s view, a system of prizes for important discoveries would be an efficient way of achieving important breakthroughs. He wrote:

As an extreme example, surely offering a reward of $1 billion for the first successful ICBM would have resulted in both a large saving of money for the government and much faster production of this weapon.5

Tullock was born in Rockford, Illinois and was an undergrad at the University of Chicago from 1940 to 1943. His time there was interrupted when he was drafted into the U.S. Army. During his time at Chicago, though, he completed a one-semester course in economics taught by Henry Simons. After the war, he returned to the University of Chicago Law School, where he completed the J.D. degree in 1947. He was briefly with a law firm in 1947 before going into the Foreign Service, where he worked for nine years. He was an economics professor at the University of South Carolina (1959-1962), the University of Virginia (1962-1967), Rice University (1968-1969), the Virginia Polytechnic Institute and State University (1968-1983), George Mason University (1983-1987), the University of Arizona (1987-1999), and again at George Mason University (1999-2008). In 1966, he started the journal Papers in Non-Market Decision Making, which, in 1969, was renamed Public Choice.


Selected Works

 

1962: The Calculus of Consent. (Co-authored with James M. Buchanan.) Ann Arbor, Michigan: University of Michigan Press.

1965: The Politics of Bureaucracy. Washington, D.C.: Public Affairs Press.

1966: The Organization of Inquiry. Durham, North Carolina: Duke University Press.

1967: “The Welfare Costs of Tariffs, Monopolies, and Theft.” Western Economic Journal, 5:3 (June): 224-232.

1967: Toward a Mathematics of Politics. Ann Arbor, Michigan: University of Michigan Press.

1971: “The Paradox of Revolution.” Public Choice, 11 (Fall): 89-99.

1975: “The Transitional Gains Trap.” Bell Journal of Economics, 6:2 (Autumn): 671-678.

1987: Autocracy. Hingham, Massachusetts: Kluwer Academic Publishers.

2007: Open Secrets of American Foreign Policy. New Jersey: World Scientific Publishing Co.

 


Footnotes

1. James M. Buchanan. 1987. “The Qualities of a Natural Economist.” In Charles K. Rowley, ed., Democracy and Public Choice. Oxford and New York: Basil Blackwell: 9-19.

2. Gordon Tullock. 2009. Memories of an Unexciting Life. Unfinished and unpublished manuscript, Tucson. Quoted in Charles K. Rowley and Daniel Houser. 2011. “The Life and Times of Gordon Tullock.” George Mason University, Department of Economics, Paper No. 11-56. December 20.

3. Arnold C. Harberger. 1954. “Monopoly and Resource Allocation.” American Economic Review 44(2): 77-87.

4. Tullock 1966, p. 14.

5. Tullock 1966, p. 168.

 


CEE February 5, 2018

Division of Labor

Division of labor combines specialization and the partition of a complex production task into several, or many, sub-tasks. Its importance in economics lies in the fact that a given number of workers can produce far more output using division of labor compared to the same number of workers each working alone. Interestingly, this is true even if those working alone are expert artisans. The production increase has several causes. According to Adam Smith, these include increased dexterity from learning, innovations in tool design and use as the steps are defined more clearly, and savings in wasted motion changing from one task to another.

Though the scientific understanding of the importance of division of labor is comparatively recent, the effects can be seen in most of human history. It would seem that exchange can arise only from differences in taste or circumstance. But division of labor implies that this is not true. In fact, even a society of perfect clones would develop exchange, because specialization alone is enough to reward advances such as currency, accounting, and other features of market economies.

In the early 1800s, David Ricardo developed a theory of comparative advantage as an explanation for the origins of trade. And this explanation has substantial power, particularly in a pre-industrial world. Assume, for example, that England is suited to produce wool, while Portugal is suited to produce wine. If each nation specializes, then total consumption in the world, and in each nation, is expanded. Interestingly, this is still true if one nation is better at producing both commodities: even the less productive nation benefits from specialization and trade.
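A small worked example makes the claim concrete. The labor requirements below are illustrative assumptions in the spirit of Ricardo's England-Portugal case; note that Portugal is assumed to be more productive in both goods.

```python
# A minimal Ricardian sketch. The labor requirements (hours per unit)
# are illustrative assumptions; Portugal is assumed better at *both* goods.

labor = {
    "England":  {"wool": 100, "wine": 120},
    "Portugal": {"wool": 90,  "wine": 80},
}
basket = {"wool": 30, "wine": 40}  # world consumption target

# Autarky: each country produces half the basket for itself.
autarky_hours = sum(
    (basket[good] / 2) * labor[country][good]
    for country in labor for good in basket
)

# Comparative advantage: England's opportunity cost of wool (100/120 of
# a unit of wine) is below Portugal's (90/80), so England makes all the
# wool and Portugal all the wine.
specialized_hours = (basket["wool"] * labor["England"]["wool"]
                     + basket["wine"] * labor["Portugal"]["wine"])

print(f"Hours to produce the basket in autarky:   {autarky_hours:,.0f}")
print(f"Hours with specialization along comparative advantage: "
      f"{specialized_hours:,.0f}")
# The saved hours can produce extra wool and wine, so both nations can
# consume more, even though Portugal is better at everything.
```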

In a world with industrial production based on division of labor, however, comparative advantage based on weather and soil conditions becomes secondary. Ricardo himself recognized this in his broader discussion of trade, as Meoqui points out. The reason is that division of labor produces a cost advantage where none existed before—an advantage based simply on specialization. Consequently, even in a world without comparative advantage, division of labor would create incentives for specialization and exchange.

Origins

The Neolithic Revolution, with its move to fixed agriculture and greater population densities, fostered specialization in both production of consumer goods and military protection. As Plato put it:

A State [arises] out of the needs of mankind; no one is self-sufficing, but all of us have many wants… Then, as we have many wants, and many persons are needed to supply them, one takes a helper… and another… [W]hen these partners and helpers are gathered together in one habitation the body of inhabitants is termed a State… And they exchange with one another, and one gives, and another receives, under the idea that the exchange will be for their good. (The Republic, Book II)

This idea of the city-state, or polis, as a nexus of cooperation directed by the leaders of the city is a potent tool for the social theorist. It is easy to see that the extent of specialization was limited by the size of the city: a clan has one person who plays on a hollow log with sticks; a moderately sized city might have a string quartet; and a large city could support a symphony.

One of the earliest sociologists, Muslim scholar Ibn Khaldun (1332-1406), also emphasized what he called “cooperation” as a means of achieving the benefits of specialization:

The power of the individual human being is not sufficient for him to obtain (the food) he needs, and does not provide him with as much food as he requires to live. Even if we assume an absolute minimum of food –that is, food enough for one day, (a little) wheat, for instance – that amount of food could be obtained only after much preparation such as grinding, kneading, and baking. Each of these three operations requires utensils and tools that can be provided only with the help of several crafts, such as the crafts of the blacksmith, the carpenter, and the potter. Assuming that a man could eat unprepared grain, an even greater number of operations would be necessary in order to obtain the grain: sowing and reaping, and threshing to separate it from the husks of the ear. Each of these operations requires a number of tools and many more crafts than those just mentioned. It is beyond the power of one man alone to do all that, or (even) part of it, by himself. Thus, he cannot do without a combination of many powers from among his fellow beings, if he is to obtain food for himself and for them. Through cooperation, the needs of a number of persons, many times greater than their own (number), can be satisfied. [From Muqaddimah (Introduction), First Prefatory Discussion in chapter 1; parenthetical expression in original in Rosenthal translation]

This sociological interpretation of specialization as a consequence of direction, limited by the size of the city, later motivated scholars such as Emile Durkheim (1858-1917) to recognize the central importance of division of labor for human flourishing.

Smith’s Insight

It is common to say that Adam Smith “invented” or “advocated” division of labor. Such claims are simply mistaken, on several grounds (see, for a discussion, Kennedy 2008). Smith’s actual contribution was to describe how decentralized market exchange fosters division of labor among cities or across political units, not just within them, as earlier thinkers had assumed. Smith had two key insights: First, division of labor would be powerful even if all human beings were identical, because differences in productive capacity are learned. Smith’s parable of the “street porter and the philosopher” illustrates the depth of this insight. As Smith put it:

[T]he very different genius which appears to distinguish men of different professions, when grown up to maturity, is not upon many occasions so much the cause, as the effect of the division of labour. The difference between the most dissimilar characters, between a philosopher and a common street porter, for example, seems to arise not so much from nature, as from habit, custom, and education. (WoN, V. 1, Ch 2; emphasis in original.)

Second, the division of labor gives rise to market institutions and expands the extent of the market. Exchange relations relentlessly push against borders and expand the effective locus of cooperation. The benefit to the individual is that first dozens, then hundreds, and ultimately millions, of other people stand ready to work for each of us, in ways that are constantly being expanded into new activities and new products.

Smith gives an example—the pin factory—that has become one of the central archetypes of economic theory. As Munger (2007) notes, Smith divides pin-making into 18 operations. But that number is arbitrary: labor is divided into the number of operations that fit the extent of the market. In a small market, perhaps three workers, each performing several different operations, could be employed. In a city or small country, as Smith saw, 18 different workers might be employed. In an international market, the optimal number of workers (or their equivalent in automated steps) would be even larger.

The interesting point is that there would be constant pressure on the factory (a) to expand the number of operations still further and to automate them through the use of tools and other capital; and (b) to expand the size of the market served with the consequently lower-cost pins, so that the expanded output could be sold. Smith recognized this dynamic pressure in the form of what can only be regarded today as a theorem, the title of Chapter 3 in Book I of the Wealth of Nations: “That the Division of Labor is Limited by the Extent of the Market.” George Stigler treated this claim as a testable theorem in his 1951 article and developed its insights in the context of modern economics.
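Stigler's reformulation can be illustrated with a toy profit model. The functional forms below are my own assumptions, not Smith's or Stigler's: finer division of labor lowers the cost of each pin, but every extra operation adds fixed overhead, so the profit-maximizing degree of specialization rises with the quantity the market will absorb.

```python
# A minimal sketch of "the division of labor is limited by the extent
# of the market". Functional forms are illustrative assumptions only.

def profit(n, quantity, price=10.0):
    unit_cost = 10.0 / n ** 0.5   # finer division -> cheaper pins
    setup_cost = 50.0 * n         # more operations -> more overhead
    return quantity * (price - unit_cost) - setup_cost

for quantity in (200, 2_000, 20_000):   # size of the pin market
    best_n = max(range(1, 201), key=lambda n: profit(n, quantity))
    print(f"Market of {quantity:>6} pins: "
          f"profit-maximizing number of operations = {best_n}")
# The optimal n grows with the market, echoing Smith's 18-operation
# pin factory and Stigler's 1951 testable reformulation.
```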

Still, the full importance of Smith’s insight was not recognized and developed until quite recently. James Buchanan presented the starkest description of the implications of Smith’s theory (James Buchanan and Yong Yoon, 2002). While the bases of trade and exchange can be differences in tastes or capacities, market institutions would develop even if such differences were negligible. The Smithian conception of the basis for trade and the rewards from developing market institutions is more general and more fundamental than the simple version implied by deterministic comparative advantage.

Division of labor is a hopeful doctrine. Nearly any nation, regardless of its endowment of natural resources, can prosper simply by developing a specialization. That specialization might be determined by comparative advantage, lying in climate or other factors, of course. But division of labor alone is sufficient to create trading opportunities and the beginnings of prosperity. By contrast, nations that refuse the opportunity to specialize, clinging to mercantilist notions of independence and economic self-sufficiency, doom themselves and their populations to needless poverty.


About the Author

Michael Munger is the Director of the PPE Program at Duke University.


Further Reading

Buchanan, James, and Yong Yoon. 2002. “Globalization as Framed by the Two Logics of Trade.” The Independent Review, 6(3): 399-405.

Durkheim, Emile. 1984. The Division of Labor in Society. New York: Macmillan.

Kennedy, Gavin. 2008. “Basic Errors About the Role of Adam Smith.” April 2: http://adamsmithslostlegacy.blogspot.com/2008/04/basic-errors-about-role-of-adam-smith.html

Khaldun, Ibn. 1377. Muqaddimah (Introduction). http://www.muslimphilosophy.com/ik/Muqaddimah/

Morales Meoqui, Jorge. 2015. “Ricardo’s Numerical Example Versus Ricardian Trade Model: A Comparison of Two Distinct Notions of Comparative Advantage.” DOI: 10.13140/RG.2.1.2484.5527/1. https://www.researchgate.net/publication/283206070_Ricardos_numerical_example_versus_Ricardian_trade_model_A_comparison_of_two_distinct_notions_of_comparative_advantage

Munger, Michael. 2007. “I’ll Stick With These: Some Sharp Observations on the Division of Labor.” Indianapolis, Liberty Fund. http://www.econlib.org/library/Columns/y2007/Mungerpins.html

Plato, n.d. The Republic. Translated by Benjamin Jowett. http://classics.mit.edu/Plato/republic.html

Roberts, Russell. 2006. “Treasure Island: The Power of Trade. Part II. How Trade Transforms Our Standard of Living.” Indianapolis, Liberty Fund. http://www.econlib.org/library/Columns/y2006/Robertsstandardofliving.html

Smith, Adam. 1759/1853. The Theory of Moral Sentiments. Revised edition, with a biographical and critical Memoir of the Author by Dugald Stewart. London: Henry G. Bohn, 1853. Accessed 7/27/2015. http://oll.libertyfund.org/titles/2620

Smith, Adam. 1776/1904. An Inquiry into the Nature and Causes of the Wealth of Nations. Edited with an Introduction, Notes, Marginal Summary and an Enlarged Index by Edwin Cannan. London: Methuen, 1904. Vol. 1. Accessed 7/27/2015. http://oll.libertyfund.org/titles/237

Stigler, George. 1951. “The Division of Labor is Limited by the Extent of the Market.” Journal of Political Economy 59(3): 185-193.


Here are the 10 latest posts from Econlib.

Econlib September 8, 2019

Murphy on Henderson and Carbon Taxes

  There are several problems here. First, even if we agreed that government (as opposed to private) payments for tree-planting made sense, it doesn’t at all follow that the revenue should come from a carbon tax. In general, raising a dollar of revenue from a tax on carbon content hurts the economy more than raising […]

The post Murphy on Henderson and Carbon Taxes appeared first on Econlib.

Econlib September 8, 2019

Is Fed policy “premised importantly” on market monetarism being true?

Market monetarists favor a policy regime where the instrument setting creates a policy stance that the financial markets believe will achieve the Fed’s policy goal. Thus, if the Fed’s goal is 2% inflation, then the monetary base (or the fed funds target) should be set at a level that the market believes will result in […]

The post Is Fed policy “premised importantly” on market monetarism being true? appeared first on Econlib.

Econlib September 7, 2019

Author on British Inheritance Misses Two Important Points

  In a recent article titled “Is property inheritance widening the wealth gap?” author James Gordon points out that people in Britain who own houses will often leave them to their adult children and, thus, adult children of Brits with no houses will see a large gap between their wealth and those of their housing-endowed […]

The post Author on British Inheritance Misses Two Important Points appeared first on Econlib.

Econlib September 7, 2019

Is the State Your Father or Your Mother?

The problem of the relations between the state and the individual was illustrated by a short Twitter exchange with a frequent contradictor of mine. He tweeted: One of those general moral/political rules is the social contract, that Libertarians routinely deny … Or if there is such a contract, it is with the globe, thus justifying […]

The post Is the State Your Father or Your Mother? appeared first on Econlib.

Econlib September 6, 2019

Last Word from Karelis

Charles Karelis has the final word in our exchange.  Here’s Charles: Bryan, my responses to your last word. If you think utility is well-suited to analyzing life or death choices, you should include a utility analysis of life or death choices in your book. Having read Derek Parfit’s Reasons and Persons on length of life […]

The post Last Word from Karelis appeared first on Econlib.

Econlib September 6, 2019

Does price stickiness explain “lowflation”?

Here’s The Economist: A forthcoming paper by Diego Aparicio and Roberto Rigobon of the Massachusetts Institute of Technology helps make the point. Firms that sell thousands of different items do not offer them at thousands of different prices, but rather slot them into a dozen or two price points. Visit the website for h&m, a fashion […]

The post Does price stickiness explain “lowflation”? appeared first on Econlib.

Econlib September 6, 2019

Paul Romer Likes Anarchy and Thinks It’s Government

  Mr. Romer’s answer is to do with this moment what Burning Man does every summer: Stake out the street grid; separate public from private space; and leave room for what’s to come. Then let the free market take over. No market mechanism can ever create the road network that connects everyone. The government must […]

The post Paul Romer Likes Anarchy and Thinks It’s Government appeared first on Econlib.

Econlib September 5, 2019

Arnold Kling’s ex-Communist Mother’s Testimony

  Arnold Kling, an EconLog mainstay, posted this morning with a link to his mother’s fascinating testimony before a St. Louis meeting of the famous House Unamerican Activities Committee. His mother, Anne Ruth Kling, nee Yasgur, had been a member of the Communist Party during and slightly after World War II. That in itself I […]

The post Arnold Kling’s ex-Communist Mother’s Testimony appeared first on Econlib.

Econlib September 5, 2019

What Signals the Value of a Book: Sales versus Prizes

I nearly missed this intriguing article about a study done by three professors (two from English departments, and one who is the associate director of Stanford’s Literary Lab) that tries to determine the significance of winning or being nominated for a literary prize. To do so, they track the popularity of the book, gauged by […]

The post What Signals the Value of a Book: Sales versus Prizes appeared first on Econlib.

Econlib September 5, 2019

UBI: Some Early Experiments

The Universal Basic Income is only a tangential interest of mine.  Yet when I’ve debated it, I’ve been consistently impressed by how little the eager advocates try to teach me.*  Case in point: I learned more from reading three paragraphs in Kevin Lang’s Poverty and Discrimination than in my typical conversation with a UBI enthusiast: […]

The post UBI: Some Early Experiments appeared first on Econlib.
