Econlib

The Library of Economics and Liberty is dedicated to advancing the study of economics, markets, and liberty. Econlib offers a unique combination of resources for students, teachers, researchers, and aficionados of economic thought.

Econlib publishes three to four new economics articles and columns each month. The latest articles and columns are made available to the public on the first Monday of each month.

All Econlib articles and columns are written exclusively for the Library of Economics and Liberty on various economics topics by renowned professors, researchers, and journalists worldwide. All articles and columns remain online free of charge for public readership. Many are discussed in concurrent comments and debate on our blog, EconLog.

EconLog

The Library of Economics and Liberty features the popular daily blog EconLog. Bloggers Bryan Caplan, David Henderson, Alberto Mingardi, Scott Sumner, Pierre Lemieux, and guest bloggers write on economics topics of interest to them, illuminating subjects from politics and finance, to recent films and cultural observations, to history and literature.

EconTalk

The Library of Economics and Liberty carries the podcast EconTalk, hosted by Russ Roberts. The weekly talk show features one-on-one discussions with an eclectic mix of authors, professors, Nobel laureates, entrepreneurs, leaders of charities and businesses, and people on the street. The emphasis is on using topical books and the news to illustrate economic principles; exploring how economics emerges in practice is a primary theme.

CEE

The Concise Encyclopedia of Economics features authoritative editions of classics in economics and related works in history, political theory, and philosophy, complete with definitions and explanations of economics terms and ideas.

Visit the Library of Economics and Liberty

Recent Posts

Here are the 10 latest posts from EconLog.

EconLog September 18, 2020

What do models tell us?

Josh Hendrickson has a new post that defends the use of models that might in some respects be viewed as “unrealistic”. I agree with his general point about models, and also his specific defense of models that assume perfect competition. But I have a few reservations about some of his examples:

Ricardian Equivalence holds that governments should be indifferent between generating revenue from taxes or new debt issuances. This is a benchmark. The Modigliani-Miller Theorem states that the value of the firm does not depend on whether it is financing with debt or equity. Again, this is a benchmark. Regardless of what one thinks about the empirical validity of these claims, they provide useful benchmarks in the sense that they give us an understanding of when these claims are true and how to test them. By providing a benchmark for comparison, they help us to better understand the world.

With all that being said, a world without “frictions” is not always the correct counterfactual.

Taken as a whole, this statement is quite reasonable. But I would take slight issue with the first sentence, which is likely to mislead some readers. Ricardian Equivalence doesn’t actually tell the government how it “should” feel about the issue of debt vs. taxes, even if Ricardian Equivalence is true. Rather, it says something like the following:

If the government believes that debt issuance is less efficient than tax financed spending because people don’t account for future tax liabilities, that belief will not be accurate if people do account for future tax liabilities.

But even if people do anticipate the future tax burden created by the national debt, heavy public borrowing may still be less efficient than tax-financed spending because taxes are distortionary, and hence tax rates should be smoothed over time.
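
To make the tax-smoothing point concrete, here is a standard back-of-the-envelope formulation (my sketch, not anything in Hendrickson’s post): suppose the deadweight loss from taxation rises roughly with the square of the tax rate,

\[ \mathrm{DWL}(t) \approx k\,t^{2}, \qquad k > 0 . \]

Then, for a fixed two-period revenue requirement with \( t_1 + t_2 = 2\bar{t} \) (holding the tax base constant), total distortion satisfies

\[ k\left(t_1^{2} + t_2^{2}\right) \;\ge\; 2k\,\bar{t}^{2}, \]

with equality only when \( t_1 = t_2 = \bar{t} \), by convexity. So even if taxpayers fully anticipate future tax liabilities, borrowing that lets the government hold the tax rate steady through a temporary spending spike is more efficient than a temporarily high tax rate.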

I happen to believe Ricardian Equivalence is roughly true, but I still don’t believe the government should be indifferent between taxes and borrowing.  Similarly, I believe that rational expectations is roughly true, and yet also believe that monetary shocks have real effects due to sticky wages.  I believe that the Coase Theorem is true, but also believe that the allocation of resources depends on how legal liability is assigned (due to transactions costs).  Models generally don’t tell us what we should believe about a given issue; rather they address one aspect of highly complex problems.

Here’s Hendrickson on real business cycle theory:

Since the RBC model has competitive and complete markets, the inefficiency of business cycles can be measured by using the RBC as a benchmark. In addition, if your model does not add much insight relative to the RBC model, how valuable can it be?

[As an aside, I agree with Bennett McCallum that either the term ‘real business cycle model’ means a model where business cycles are not caused by nominal shocks interacting with sticky wages/prices, or else the term is meaningless.  There is nothing “real” about a model where nominal shocks cause business cycles.]

Do RBC models provide a useful benchmark for judging inefficiency?  Consider the following analogy:  “A model where there is no gravity provides a useful benchmark for airline industry inefficiency in a world with gravity.”  It is certainly true that airlines could be more fuel efficient in a world with no gravity, but it’s equally true that they have no way to make that happen.  I don’t believe that gravity-free models tell us much of value about airline industry efficiency.

In my view, the concept of efficiency is most useful as a policy counterfactual. Thus monetary policy A is inefficient if monetary policy B or C produces a better outcome in terms of some plausible metric such as utility or GDP or consumption. (I do understand that macro outcomes, especially utility, are hard to measure, but unless we have some ability to measure outcomes, no one could claim that South Korea is more successful than North Korea. I’m not that pessimistic about our knowledge of the world.)

In my view, you don’t measure inefficiency by comparing a sticky-price model featuring nominal shocks against a flexible-price RBC model; rather, you measure it by comparing two different types of monetary policy in a realistic model with sticky prices.

That’s not to say that there are not aspects of RBC models that are useful, and indeed some of those innovations might provide insights into thinking about what sort of fluctuation in GDP would be optimal.  But I don’t believe you can say anything about policy efficiency unless you first embed those RBC insights (on things like productivity shocks) into a sticky wage/price model, and then compare policy alternatives with that model.  I view sticky prices as 90% a given, much like gravity.  (The other 10% is things like minimum wage laws, which can be impacted by policy.)

PS.  Just to be clear, I agree with Hendrickson on the more important issues in his post.  My support for University of Chicago-style perfect competition models definitely puts me on “team Hendrickson”, especially when I consider the direction the broader profession is moving.

(7 COMMENTS)

EconLog September 18, 2020

Is Modern Democracy So Modern and How?

The Decline and Rise of Democracy, a new book by David Stasavage, a political scientist at New York University, reviews the history of democracy, from “early democracy” to “modern democracy.” I review the book in the just-out Fall issue of Regulation. One short quote from my review, on the plight of modern democracy in America:

[Stasavage] notes the “tremendous expansion of the ability of presidents to rule by executive order.” Presidential powers, he explains, “have sometimes been expanded by presidents who cannot be accused of having authoritarian tendencies, such as Barack Obama, only to have this expanded power then used by Donald Trump.” We could, of course, as well say that the new powers grabbed by Trump will likely be used by a future Democratic president “who cannot be accused of authoritarian tendencies,” or perhaps who might legitimately be so accused.

It is a book of history and political theory, not a partisan book. But the history of democracy has implications for today. An interesting one is how bureaucracy typically helped rulers prevent the development of democracy. Another quote from my review—Stasavage deals with imperial China, and I compare it with today’s America:

At the apogee of the Han dynasty, at the beginning of the first millennium CE, there was one bureaucrat for every 440 subjects in the empire. … In the United States, which is at the low end of government bureaucracies in the rich world, public employees at all levels of government translate into one bureaucrat for 15 residents (about one for 79 at the federal level only).

If you read my review in the paper version of Regulation, beware. I made an error in my estimate for the federal bureaucracy and the printed version says “37” instead of “79”. It is corrected in the electronic version. Mea culpa.

(1 COMMENTS)

EconLog September 18, 2020

Raghuram Rajan’s The Third Pillar

In his latest book, Raghuram Rajan, a chaired professor of finance at the University of Chicago’s Booth School of Business and former governor of the Reserve Bank of India, advocates what he calls “inclusive localism.” His basic idea is that there are three pillars of a good and productive society: the market, the state, and the community. He argues that the community, which is the third pillar, nicely balances the excesses of both the free market and the state.

Although there is a strong case to be made for the importance of the community, Rajan does not make it nearly as well as he could have. The Third Pillar contains many insights and important facts, but his argument for inclusive localism is half-hearted. He concedes far too much to the current large state apparatus and, in doing so, implicitly accepts that communities will be weak. Again and again in the book, when contemplating how to make local communities more powerful relative to federal governments, he fails to call for a massive reduction in state power. At times he accepts the state apparatus because he believes, often unjustifiably, in its goodness and effectiveness, and at times he accepts it because he seems to have a status quo bias.

Moreover, although he has better than the median economist’s understanding of the free market, he misses opportunities to point out how the market would straightforwardly solve some of the dilemmas he presents. Rajan also gets some important history wrong. And he makes too weak a case for free trade and favors ending child labor even in third-world countries where children and their families desperately need them to work.

This is from David R. Henderson, “An Unpersuasive Book with Some Encouraging Insights,” my review of Raghuram Rajan, The Third Pillar, Regulation, Fall 2020.

Rajan’s Misunderstanding of the Term “The Dismal Science”

In making his case that we can go too far in the direction of markets, Rajan writes, “Reverend Thomas Robert Malthus epitomized the heartless side of [classical] liberalism, when taken to its extreme.” Commenting on Malthus’s claim that disease, war, and famine would be natural checks on population growth, he writes, “No wonder historian Thomas Carlyle termed economics the ‘dismal science.’” But that is not why Carlyle coined the term. Carlyle, who defended slavery, called economics “dismal” precisely because the dominant economists of his day strongly opposed slavery. That is a big difference.

Rajan on Child Labor

One thing that is well established in economics is that child labor in very poor countries is a boon to children and their families. I made that point in Fortune in 1996 and Nobel economics prizewinner Paul Krugman made it in Slate in 1997. We both pointed out that children who work in “sweat shops” are virtually always better off than in their next best alternative. That next best alternative, if they are lucky, is a lower-paid job in agriculture or, if they are unlucky, picking through garbage or starving. Yet Rajan, who comes from a poor country, writes, “All countries should, of course, respect universal human rights, including refraining from using slave labor or child labor.” He is right on slave labor; he is horribly wrong on child labor. If he got his way, millions of poor children would suffer needlessly.

Rajan Has a Way with Words

One bright spot is Rajan’s refreshing way of expressing insights. For example, he sees a lot of problems with China’s unusual mixed economy and coins a beautiful phrase to describe it: “competitive cronyism.” And here is how he characterizes populism: “Populism, at its core, is a cry for help, sheathed in a demand for respect, and enveloped in the anger of those who feel they have been ignored.”

Read the whole thing.

(2 COMMENTS)

EconLog September 17, 2020

NASA Is Paying for Moon Rocks

NASA is creating financial incentives for private companies to market lunar resources. This could be a first step to developing lunar mining capabilities. The biggest benefit of the program, though, is precedent. It puts the U.S. government’s imprimatur on space commerce. Given the ambiguities in public international space law, this precedent has the potential to steer space policy and commerce in a pro-market direction.

This is a key paragraph in Alexander William Salter and David R. Henderson, “NASA is Paying for Moon Rocks. The Implications for Space Commerce are Huge,” AIER, September 16, 2020.

Read the whole thing. It’s short.

EconLog September 17, 2020

It’s Complicated: Grasping the Syllogism

A few weeks ago, I presented the following syllogism:

Issue X is complicated.

Perspective Y’s position on X is not complicated.

Therefore, Perspective Y is wrong about X.

 

Almost all of the comments were critical.  Some notable examples:

Dan:

As someone who used to live in San Francisco and was involved in YIMBY activism, this argument was used frustratingly often by NIMBYs: “The housing crisis is complicated and you can’t simplify it to econ 101, therefore just building more won’t help”. The NIMBYs, after criticizing YIMBYism for being econ 101, then never made an econ 102 argument.

The problem with this argument is that you can make yourself sound wise about anything by claiming that it’s complicated and simple solutions won’t work.

Salem:

How about:

Trade is complicated.

EconLog September 16, 2020

Case and Deaton on Deaths of Despair and the Future of Capitalism

In their recent book Deaths of Despair and the Future of Capitalism, Anne Case and Nobel economics prizewinner Angus Deaton, both emeritus economists at Princeton University, show that the death rate for middle-age whites without a college degree bottomed out in 1999 and has risen since. They attribute the increase to drugs, alcohol, and suicide. Their data on deaths are impeccable. They are careful not to attribute the deaths to some of the standard but problematic reasons people might think of, such as increasing inequality, poverty, or a lousy health care system. At the same time, they claim that capitalism, pharmaceutical companies, and expensive health insurance are major contributors to this despair.

The dust jacket of their book states, “Capitalism, which over two centuries lifted countless people out of poverty, is now destroying the lives of blue-collar America.” Fortunately, their argument is much more nuanced than the book jacket. But it is also, at times, contradictory. Their discussion of the health care system is particularly interesting both for its insights and for its confusions. In their last chapter, “What to Do?” the authors suggest various policies but, compared to the empirical rigor with which they established the facts about deaths by despair, their proposals are not well worked out. One particularly badly crafted policy is their proposal on the minimum wage.

This is from “Blame Capitalism?”, my review of Deaths of Despair and the Future of Capitalism, in Regulation, Fall 2020.

Another excerpt:

To understand what is behind the increase in the death rate, the authors look at state data and note that death rates increased in all but six states. The largest increases in mortality were in West Virginia, Kentucky, Arkansas, and Mississippi. The only states in which midlife white mortality fell much were California, New York, New Jersey, and Illinois. All four of the latter states, they note, have high levels of formal education. That fact leads them to one of their main “aha!” findings: the huge negative correlation between having a bachelor’s degree and deaths of despair.

To illustrate, they focus on Kentucky, a state with one of the lowest levels of educational attainment. Between the mid-1990s and 2015, Case and Deaton show, for white non-Hispanics age 45–54 who had a four-year college degree, deaths from suicide, drug overdose, or alcoholic liver disease stayed fairly flat at about 25–30 per 100,000. But for the same group without a college degree, deaths in the same categories zoomed from about 40 per 100,000 in the mid-1990s to a whopping 130 by 2015, over four times the rate for those with a college degree.

Why is a college degree so important? One big difference between those with and without a degree is the probability of being employed. In 2017, the U.S. unemployment rate was a low 3.6%. Of those with a bachelor’s degree or more, 84% of Americans age 25–64 were employed. By contrast, only 68% of those in the same age range who had only a high school degree were employed.

That leads to two questions. First, why are those without a college degree so much less likely to have jobs? Second, how does the absence of a degree lead to more suicide and drug and alcohol consumption? On the first question, the authors note that a higher percentage of jobs than in the past require higher skills and ability. Also, they write, “some jobs that were once open to nongraduates are now reserved for those with a college degree.”

I wish they had addressed this educational “rat race” in more detail. My Econlog blogging colleague Bryan Caplan, an economist at George Mason University, argues in his 2018 book The Case Against Education that a huge amount of the value of higher education is for people to signal to potential employers that they can finish a major project and be appropriately docile. To the extent he is right, government subsidies to higher education make many jobs even more off-limits to high school graduates. Yet, Case and Deaton do not cite Caplan’s work. Moreover, in their final chapter on what to do, they go the exact wrong way, writing, “Perhaps it is time to up our game to make college the norm?” That policy would further narrow the range of jobs available to nongraduates, making them even worse off.

On the second question—why absence of a degree leads to more deaths of despair—they cite a Gallup poll asking Americans to rate their lives on a scale from 0 (“the worst possible life you can imagine”) to 10 (“the best possible life you can imagine”). Those with a college degree averaged 7.3, while those with just a high school diploma averaged 6.6. That is not a large difference, a fact they do not note.

And note their novel argument for why improved health care, better entertainment through the internet, and more convenience don’t count in people’s real wages:

So, what are the culprits behind the deaths of those without college degrees? Case and Deaton blame the job market and health insurance. Jobs for those without college degrees do not pay as much and do not generally carry much prestige. And, as noted above, Case and Deaton mistakenly think that real wages for such jobs have fallen. Some economists, by adding nonmonetary benefits provided by employers and by noting the amazing goods we can buy with our wages such as cell phones, conclude that even those without a college degree are doing better. Case and Deaton reject that argument. They do not deny that health care now is better than it was 20 years ago, but they write that a typical worker is doing better now than then “only if the improvements—in healthcare, or in better entertainment through the internet, or in more convenience from ATMs—can be turned into hard cash by buying less of the good affected, or less of something else, a possibility that, however desirable, is usually not available.” They continue, “People may be happier as a result of the innovations, but while it is often disputed whether money buys happiness, we have yet to discover a way of using happiness to buy money.”

That thinking is stunning. Over many decades, economists have been accused, usually unjustly, of saying that only money counts. We have usually responded by saying, “No, what counts is utility, the satisfaction we get out of goods and services and life in general.” But now Case and Deaton dismiss major improvements in the happiness provided by goods and services by noting that happiness cannot be converted to money. That is a big step backward in economic thinking.

 

Read the whole thing.

(31 COMMENTS)

EconLog September 16, 2020

The Fed can create money

Here’s the Financial Times:

The Fed also has acknowledged it lacks the tools to solve all the problems in the economy, since it can only lend money, but not spend it to help businesses or households. And the Fed is acutely aware that its policies have done plenty to save financial markets from distress, but cannot deliver benefits as easily to low-income families and the unemployed.

That’s entirely false.  The Fed doesn’t just lend money; it can and does create money and also spend the new money on assets in order to boost NGDP and help businesses and households.  This policy delivers benefits to unemployed workers by reducing the unemployment rate.

The Fed is worried that the lack of a fiscal agreement will threaten the recovery and make its job harder. The US central bank does not want to be left alone in propping up the recovery.

Why?

This is good:

Some economists have suggested the Fed might tweak that to include a reference to an average 2 per cent inflation objective “over time” — reflecting its new policy framework.

Investors arguing for the new guidance to be rolled out this week say the Fed risks a loss of credibility if it does not act quickly to reinforce its monetary shift.

Today’s Fed meeting will be much more important than the typical meeting.  We will get some indication as to whether the Fed plans to obey the law—fulfill its mandate from Congress—or go sit in the corner and mope about the fact that fiscal policy is not all that it would prefer.

Bonus question: When the government lends money, is that policy expansionary? When the government borrows money, is that policy expansionary? Does the FT believe that the answer to both questions is yes?

(2 COMMENTS)

EconLog September 16, 2020

Hello Mind, Nice to Meet Ya

Universities at their best are places where reading, writing, speaking, and (hopefully) listening are carried out at the highest level. The core activity here is sharing words with other people. We share words—written, verbal, and non-verbal—to meet other minds, to learn and share experiences for the sake of mutual betterment. So, as teachers and students return to campus, I thought it might be fun to take a moment to reflect on WORDS, with a little inspiration from Vincent Ostrom, F. A. Hayek, and Stephen King.

Vincent Ostrom, maybe more than any other 20th-century political economist, emphasized the fact that language is a powerful tool. When we name what we experience by assigning words to objects and relationships, we generate “shared communities of understanding” (The Meaning of Democracy and the Vulnerability of Democracies: A Response to Tocqueville’s Challenge, p. 153). These words and the understanding they enable are how people share what they learn with others, including across generations. Through words, our experiences benefit others. I interpret this as similar to Hayek’s claim from The Constitution of Liberty that “civilization begins where the individual can benefit from more knowledge than he can himself acquire, and is able to cope with his ignorance by using knowledge which he does not possess.” Words—along with markets, culture, and law/rules of conduct—form the extended orders that make society possible.

In Stephen King’s On Writing—an intellectual memoir from a true master of words—he equates successful writing with being able to pull off the near-supernatural act of telepathy. Language is a vehicle through which we are able to either send or receive mental images that otherwise would remain electrical impulses with nowhere to go, trapped inside our own minds. He gives the following example of “telepathy in action”:

“Look, here’s a table covered with a red cloth. On it is a cage the size of a small fish aquarium. In the cage is a white rabbit, with a pink nose and pink-rimmed eyes. In its front paws is a carrot stub upon which it is contentedly munching. On its back, clearly marked in blue ink, is the numeral eight. Do we see the same thing? We’d have to get together and compare notes to make absolutely sure, but I think we do.”

The quote doesn’t do the chapter full justice, so I definitely recommend reading the whole thing, especially if you have an interest in writing as a craft. He goes on to explain that we might imagine very different details, but nearly everybody comes away with the same understanding of what is important about the description: the blue number eight on the rabbit’s back. This is the puzzle, the unexpected element that makes the information new and unites our attention around an idea. What I take from this is that there’s something about finding the right way to say something—precise but only to the point of usefulness, thorough yet focused, with some understanding of what the reader is bringing to the table—that makes it possible to get a message across in the way it was intended. That makes it possible for two minds to meet.

King’s conclusion is that “You must not come lightly to the blank page.” To write is to transmit ideas to other people’s minds. That’s a serious responsibility that can be carried out well or poorly, put to good use or ill. I can think of no reason why the same admonition should not apply to lectures, conversations, and video presentations.

Vincent Ostrom built on this idea. For Ostrom, language is created through the process of continued communication, and the language that is created then enters back into every aspect of our lives: “The learning, use, and alteration of language articulations is constitutional in character, applicable to the constitutive character of human personality, to patterns of human association, and to the aggregate structure of the conventions of language usage… the way languages are associated with institutions, goods, cultures, and personality attributes means that we find languages permeating all aspects of human existence” (pp. 171-72).

In other words, by embarking on the academic’s quest to use words better, we are all taking on a particularly important constitutive role. Global markets are made up of millions of buyers and sellers scattered around the world. Languages are made up of millions of people talking, reading, writing, listening, and—to borrow King’s analogy—making telepathic connections with each other in an attempt to connect words to better ideas, and better ideas to better lives. It might be an abstract quest, but it’s a noble one. Getting it right can make the world better; getting it wrong can make the world worse.

There are several dozen morals about the importance of the endeavor, of sticking to one’s principles, of mastering the fundamentals, etc. that can be drawn from this, and I don’t really want to moralize or pontificate more than I already have. So I’ll just end by saying that if you’re still reading, it was nice to meet your mind for a moment. I hope we’ll meet again soon.

Jayme Lemke is a Senior Research Fellow and Associate Director of Academic and Student Programs at the Mercatus Center at George Mason University and a Senior Fellow in the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics.

(2 COMMENTS)

EconLog September 16, 2020

The Great Reconciliation?

What is the best way to reconcile the results for these three polls?

How good is the following heuristic?

The resources you spend mitigating a problem should be directly proportional to its overall severity.

— Bryan Caplan (@bryan_caplan) August 18, 2020


Medically speaking, how bad is coronavirus compared to flu?

— Bryan Caplan (@bryan_caplan) August 17, 2020


How much time, inconvenience, and resources should we spend fighting coronavirus compared to flu?

— Bryan Caplan (@bryan_caplan) August 17, 2020


I’m tempted to just say “cognitive dissonance.” The effort heuristic makes great sense, and the medical estimate seems about right. But that in turn implies that past and current coronavirus efforts (public and private) are grossly excessive. Indeed, do we even spend five hours per year fighting flu? If so, why should we spend more than twenty-five hours per year fighting coronavirus? But almost no one feels comfortable with that relaxed attitude, hence the dissonance.

(23 COMMENTS)

EconLog September 16, 2020

The Logic of Protectionist Nationalists

It seems that economics and logic were not the strong fields of protectionist nationalists in college—or at least this is the case with the three lieutenant governors who published an op-ed in The Hill at the beginning of the summer. In the just-published Fall issue of Regulation (the electrons are still hot and the paper version has not yet hit the newsstands), I write:

The USMCA, the authors glowingly wrote, will “increase U.S. annual agricultural exports by $2.2 billion.” This crowing claim comes just a few lines after the statement that “agriculture is what puts food on the table, literally and metaphorically.” They better take their “metaphorically” very literally, because exported agricultural products actually take food away from American tables in order to feed foreigners.

No wonder that with this sort of coherence, protectionists think they can prove anything, including that the only benefit of free trade lies in exports.

For the free-market view of trade, have a look at my article. One question it answers: why do American exporters work for foreigners? I also review recent data on the impact of the 2018 tariff on the domestic price of washing machines.

 

(7 COMMENTS)

Here are the 10 latest posts from EconTalk.

EconTalk September 14, 2020

Robert Chitester on Milton Friedman and Free to Choose

Once upon a time, a man had an idea for a documentary on free-market ideas. Then that man was introduced to Milton Friedman. The result of their collaboration was a wildly successful book and PBS series, Free to Choose, capturing Friedman’s view of the world, how markets work, and the role of individual liberty in […]

EconTalk September 7, 2020

Margaret Heffernan on Uncharted

How do we prepare for a future that is unpredictable? That’s the question at the heart of Margaret Heffernan‘s new book, Uncharted: How to Navigate the Future. Heffernan is a professor at the University of Bath, but she is also a serial entrepreneur, a former CEO, and the author of five books on leadership, innovation, […]

EconTalk August 31, 2020

Matt Ridley on How Innovation Works

What’s the difference between invention and innovation? Could it be that innovation–the process of making a breakthrough invention available, affordable, and reliable–is actually the hard part? In this week’s EconTalk episode, author Matt Ridley talks about his book How Innovation Works with EconTalk host Russ Roberts. Ridley argues that we give too much credit to […]

EconTalk August 24, 2020

Franklin Zimring on When Police Kill

Franklin Zimring’s 2017 book, When Police Kill, starts with an alarming statistic: Roughly 1,000 Americans die each year at the hands of police. Zimring, criminologist and law professor at the University of California at Berkeley, talks about his book with EconTalk host Russ Roberts. Zimring argues that better policing practices can reduce the number of […]

EconTalk August 17, 2020

Michael Munger on the Future of Higher Education

In this 750th (!) episode, Duke University’s Michael Munger talks with EconTalk host Russ Roberts about whether the pandemic might create an opportunity for colleges and universities to experiment and innovate. Munger is Professor of Political Science, Economics and Public Policy at Duke. He believes “top” schools can emerge from the current period of uncertainty […]


EconTalk August 10, 2020

Ben Cohen on the Hot Hand

Journalist and author Ben Cohen talks about his book, The Hot Hand, with EconTalk host Russ Roberts. At times in sports and elsewhere in life, a person seems to be “on fire,” playing at an unusually high level. Is this real or an illusion? Cohen takes the listener through the scientific literature on this question […]

EconTalk August 3, 2020

John Kay and Mervyn King on Radical Uncertainty

John Kay and Mervyn King talk about their book, Radical Uncertainty, with EconTalk host Russ Roberts. This is a wide-ranging discussion based on the book looking at rationality, decision-making under uncertainty, and the economists’ view of the world.

EconTalk July 27, 2020

Nassim Nicholas Taleb on the Pandemic

Nassim Nicholas Taleb talks about the pandemic with EconTalk host Russ Roberts. Topics discussed include how to handle the rest of this pandemic and the next one, the power of the mask, geronticide, and soul in the game.

EconTalk July 20, 2020

Glenn Loury on Race, Inequality, and America

Economist and author Glenn Loury of Brown University talks about race in America with EconTalk host Russ Roberts.

EconTalk July 13, 2020

Josh Williams on Online Gaming, Blockchain, and Forte

Josh Williams, co-founder and CEO of the blockchain gaming company Forte, talks with EconTalk host Russ Roberts about the state of online gaming and the potential of a blockchain-based gaming platform to create market economies with property rights within online games.

Here are the 10 latest posts from CEE.

CEE July 1, 2020

Israel Kirzner

Israel Kirzner is a prominent member of the Austrian School of economics. His major contribution is his work on the meaning and importance of entrepreneurship.

Kirzner’s view is that mainstream neoclassical economics omits the role of the entrepreneur. The standard neoclassical models of markets, whether perfect competition, monopolistic competition, or monopoly, argues Kirzner, are equilibrium models. They omit the crucial role of the entrepreneur, which is to bring markets to equilibrium. In Kirzner’s view, which he and others refer to as a distinct viewpoint of the Austrian school of economics, the main characteristic of the entrepreneur is alertness. The entrepreneur is alert to price differences that others have not noticed and makes a profit by acting on this alertness. So, for example, the entrepreneur notices that goods selling for $10 in one market are fetching $15 in another market. He also notices that the shipping, insurance, and interest costs of buying where the good sells for $10 and selling where it sells for $15 are less than $5. So he buys in the cheaper market, sells in the dearer market, and makes a profit. As long as others are not aware of this difference, the entrepreneur continues to make money. But other entrepreneurs are also alert. When they notice the difference in prices, they seek to do what the first entrepreneur did. As they enter the market—buying in the cheaper market and selling in the dearer market—they drive the price of the good that they buy above $10 and drive the price where they are selling below $15. This continues until the price difference just covers the shipping, insurance, and interest costs.
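
Kirzner’s equilibrating process is easy to sketch in a few lines of code. The sketch below is mine, not Kirzner’s own formalism; the $2 transaction cost and the price-adjustment step are illustrative assumptions (any cost figure under the $5 spread works the same way).

# A minimal sketch of Kirznerian arbitrage: alert entrepreneurs trade between
# two markets until the remaining price spread just covers shipping,
# insurance, and interest costs.

def arbitrage(price_low, price_high, costs, step=0.5):
    """Each round of entry bids up the cheap market and pushes down the dear one."""
    rounds = 0
    while price_high - price_low > costs:
        price_low += step    # buying pressure in the cheap market
        price_high -= step   # selling pressure in the dear market
        rounds += 1
    return price_low, price_high, rounds

# Goods sell for $10 in one market and $15 in another; assume $2 of costs.
low, high, n = arbitrage(10.0, 15.0, costs=2.0)
print(f"after {n} rounds: buy at ${low}, sell at ${high}, spread ${high - low:.2f}")

After three rounds the spread has narrowed from $5 to the assumed $2 of costs and entry stops, which is exactly the stopping condition described above.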

In his 1973 book, Competition and Entrepreneurship, Kirzner finds similarities and differences between his view of entrepreneurship and that of Joseph Schumpeter. What they have in common is that the entrepreneur qua entrepreneur contributes “no factor services to production.” Kirzner elaborates, “What the entrepreneur contributes is merely the pure decision to direct these inputs into the process selected rather than into other processes.”

The main difference between Kirzner’s entrepreneur and Schumpeter’s is that Schumpeter’s entrepreneur upsets an existing equilibrium by introducing a new product or a new production technique, while for Kirzner, the entrepreneur “has an equilibrating influence.” Kirzner writes, “For me the important feature of entrepreneurship is not so much the ability to break away from routine as the ability to perceive new opportunities which others have not yet noticed.”

He elaborates:

Entrepreneurship for me is not so much the introduction of new products or new techniques of production as the ability to see where new products have become unsuspectedly valuable to consumers and where new methods of production have, unknown to others, become feasible. For me the function of the entrepreneur consists not of shifting the curves of cost or revenue which face him, but of noticing that they have in fact shifted. (italics in original)[1]

Kirzner’s entrepreneur, then, is essentially an arbitrageur, a point that Kirzner makes in comparing his view of the entrepreneur to that of his mentor Ludwig von Mises. He sees both the Misesian view and his own as “an ‘arbitrage’ theory of profit.” (italics in original) For Kirzner’s entrepreneur, something is sold at different prices in two different markets because of imperfect communication between participants in the two markets. But, Kirzner notes, it is not only arbitrage in the narrow sense of buying a good in one market and selling the identical good in the other market. It is also arbitrage in a wider sense: in the market for factors of production, “it appears as a bundle of inputs, and in the product market it appears as a consumption good.”

Kirzner’s view is that entrepreneurship is an inherent aspect of the competitive process. The term “process” is important because Kirzner sees competition as a process rather than as an end state.  But in the neoclassical model of perfect competition, which came to dominate economics in the 1920s, there was no process. Kirzner writes:

Competition, to the equilibrium price theorist, turned out to refer to a state of affairs into which so many competing participants have already entered that no room exists for additional entry (or other modification of existing market conditions). (italics in original)[2]

Kirzner notes that this end-state view of competition is very far from the view of the non-economist.

By viewing competition as a process, one can get a new perspective on various market phenomena that economists have commented on and analyzed for almost a century. Two that stand out are (1) the role of advertising and selling effort in general and (2) the alleged waste from competition.

Consider advertising and selling effort. The alert entrepreneur must also alert potential buyers to the presence and, ideally, the attractiveness of the items he’s selling. Doing so uses resources, but those resources are not wasted. Kirzner writes that “selling effort (including advertising) that alters the opportunities perceived by consumers constitutes an entirely normal avenue of competitive-entrepreneurial activity.” But such activity, he notes, would be unnecessary in a state of equilibrium because in that state, consumers already know all they need to know. Again, those who focus on an equilibrium, and not on the competitive process that gets us closer to equilibrium, will miss the value of advertising and other selling costs.

One criticism of free markets that has been made for many decades is that they result in wasteful duplication. When one firm already exists, according to this view, entry of another firm is wasteful. Kirzner answers:

The truth is that until the newly competing entrepreneur has tested his hunch about the lowest cost at which he can produce, we simply do not know what organization of industry is “best.” To describe the competitive process as wasteful because it corrects mistakes only after they occur seems similar to ascribing the ailment to the medicine which heals it, or even to blaming the diagnostic procedure for the disease it identifies.[3]

Although economists do not typically engage in moral philosophy, Kirzner has applied his theory of entrepreneurship to make a case for the justice of making profits. He writes:

The finders-keepers rule asserts that an unowned object becomes the justly owned property of the first person who, discovering its availability and its potential value, takes possession of it.[4]

Because the Kirznerian entrepreneur makes a profit by discovering a higher-valued use, argues Kirzner, the entrepreneur, by the finders-keepers rule, has a right to the profit he makes from acting on that discovery.

For more on Kirzner’s life and work, see A Conversation with Israel Kirzner, an Intellectual Portrait at Econlib Videos.

Kirzner was born in London, England. From 1947 to 1948, he attended the University of Cape Town in South Africa. From 1950 to 1951, he attended the University of London. He earned a B.A. at Brooklyn College, an M.B.A. from New York University (NYU), and a Ph.D. in economics from NYU in 1957. He was an assistant professor of economics at NYU from 1957 to 1961, an associate professor from 1961 to 1968, and a full professor from 1968 until he retired in 2001. In 1976, he founded a graduate study program in Austrian economics at NYU.

 


Selected Works

1960. The Economic Point of View. Kansas City: Sheed and Ward.

1973. Competition and Entrepreneurship. Chicago: University of Chicago Press.

1979. Perception, Opportunity, and Profit. Chicago: University of Chicago Press.

1989. Discovery, Capitalism, and Distributive Justice. New York: Basil Blackwell.

Footnotes

[1] Kirzner, Israel M., Competition and Entrepreneurship, Chicago: University of Chicago Press, 1973, p. 81.

[2] Kirzner, Competition and Entrepreneurship, p. 28.

[3] Kirzner, Competition and Entrepreneurship, p. 236.

[4] Kirzner, Israel M., Discovery, Capitalism, and Distributive Justice, New York: Basil Blackwell, 1989, p. 98.

(0 COMMENTS)

CEE March 24, 2020

Harold Demsetz

Harold Demsetz made major contributions to the economics of property rights and to the economics of industrial organization. He also coined the term “the Nirvana approach.” Economists have altered it slightly but use it widely. Demsetz was one of the few top economists of his era to communicate almost entirely in words and not math. Demsetz also defended both economic freedom and civil liberties.

Drawing on anthropological research, Demsetz noted that, although the native Canadians (Canadians often call them First Nations people) in Labrador had property rights in the early 18th century, they did not have property rights in the mid-17th century. What changed? Demsetz argued that the advent of the fur trade in the late 17th century made it more valuable to establish property rights so that the beavers were not overtrapped. By contrast, native Americans on the southwestern plains of the United States did not establish property rights; Demsetz reasoned that it was because the animals they hunted wandered over wide tracts of land and, therefore, the cost of establishing property rights was prohibitive. One of Demsetz’s most important contributions was a 1967 article, “Toward a Theory of Property Rights.” In it, he argued that property rights tend to develop where the gains from defining and enforcing those rights exceed the costs. He found confirming evidence in the presence or absence of property rights among native Americans and native Canadians, and he dismissed the idea that they were primitive people who couldn’t understand or appreciate property rights. Instead, he argued, they developed property rights in areas of North America where the property was worth defending.

In the 1960s, the dominant view in the area of economics called industrial organization was that concentration in industries was bad because it led to monopoly. In the 1970s, Demsetz challenged that view. He argued that the kind of monopoly to worry about is caused by government regulation that prohibits firms from entering an industry. He pointed to the Civil Aeronautics Board’s restrictions on entry by new airlines and the Federal Communications Commission’s hobbling of cable TV as examples. He wrote, “The legal route of monopoly runs through Washington and the state capitals.” But, he argued, if a few firms achieved a large market share through economies of scale or through superior performance, we should not worry, and the antitrust officials should not go after such firms. As long as the government doesn’t restrict new competitors, firms with a large market share will face competition in the future.

In a 1969 article, “Information and Efficiency: Another Viewpoint,” Demsetz accused fellow economist Kenneth Arrow of taking the “Nirvana approach” and recommended instead a “comparative institutions approach.” He wrote, “[T]hose who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient.” Specifically, Arrow showed ways in which the free market might provide too little innovation, but then simply assumed that government intervention would get the economy closer to the optimum. Demsetz conceded that ideal government intervention might improve things, but he noted that Arrow, like many economists, had failed to show that actual government intervention would do so. Economists have slightly changed the label on Demsetz’s insight: they now refer to it as the “Nirvana fallacy.”

Another major Demsetz contribution was his thinking about natural monopoly, evidenced best in his 1968 article “Why Regulate Utilities?” In that article, Demsetz stated that the theory of natural monopoly “is deficient for it fails to reveal the logical steps that carry it from scale economies in production to monopoly price in the market place.” How so? Demsetz argued that competing providers could bid to be the single provider and that consumers, if well organized, could choose among competing providers. The competition among potential providers would prevent the winning provider from charging a monopoly price.

Economists often use negative externalities as a justification for government regulation. One standard example is pollution; in their actions, polluters do not take into account the damage imposed on others. Demsetz pointed out that governments also impose negative externalities. In the above-mentioned 1967 article on property rights, Demsetz wrote, “Perhaps one of the most significant cases of externalities is the extensive use of the military draft. The taxpayer benefits by not paying the full cost of staffing the armed services.” He added, “It has always seemed incredible to me that so many economists can recognize an externality when they see smoke but not when they see the draft.” Demsetz was a strong opponent of the draft.

One of Demsetz’s other contributions, co-authored with Armen A. Alchian, was his 1972 article “Production, Information Costs, and Economic Organization.” A 2011 article written by three Nobel Prize winners—Kenneth J. Arrow, Daniel L. McFadden, and Robert M. Solow—and three other economists—B. Douglas Bernheim, Martin S. Feldstein, and James M. Poterba—stated that this article was one of the top 20 articles published in the American Economic Review in the first 100 years of its existence. In it, Alchian and Demsetz proposed the idea that the reason to have firms is that team production is important and monitoring the productivity of team members is difficult. Therefore, they argued, firms, to be effective, must have people in the firm who monitor and who are residual claimants. These people, often, but not always, the owners, get some fraction of the profits of the firm and, therefore, have an incentive to monitor effectively. That helps solve the classic principal-agent problem.

In a famous 1932 book titled The Modern Corporation and Private Property, Adolf A. Berle and Gardiner C. Means had argued that diffusion of ownership in modern corporations gave managers of large corporations more control, shifting it from the owners. These managers, they argued, would use that control to benefit themselves. Demsetz and co-author Kenneth Lehn questioned that reasoning. They argued that owners would not give up control without getting something in return. If Berle and Means were correct, they wrote, then one should observe a lower rate of profit in firms with highly diffused ownership. But if Demsetz and Lehn were correct, one should see no such relationship because diffused ownership would happen where there were profitable reasons for it to happen. They wrote:

A decision by shareholders to alter the ownership structure of their firm from concentrated to diffuse should be a decision made in awareness of its consequences for loosening control over professional management. The higher cost and reduced profit that would be associated with this loosening in owner control should be offset by lower capital costs or other profit-enhancing aspects of diffuse ownership if shareholders choose to broaden ownership.

Demsetz and Lehn found “no significant relationship between ownership concentration and accounting profit rate,” just as they expected.

In a 2013 tribute to Demsetz’s co-author Alchian, economist Thomas Hubbard highlighted their 1972 article, writing:

This paper may be the most influential paper in the economics of organization, catalyzing the development of the field as we know it. It is the most-cited paper published in the AER [American Economic Review] in the past 40 years. (If one takes away finance and econometrics methods papers, it is the most-cited ‘economics’ paper, period.) It is truly a spectacular piece. It is a theory not only of firms’ boundaries, but also the firm’s hierarchical and financial structure.[1]

Demsetz was also an early defender of the rights of homosexuals. At the September 1978 Mont Pelerin Society meetings in Hong Kong, he criticized, on grounds of individual rights, the Briggs Initiative, which was on the November 1978 California ballot and would have banned homosexuals from teaching in public schools. The initiative was defeated, helped by the opposition of Demsetz’s fellow Californian Ronald Reagan.

For more on Demsetz’s life and work, see A Conversation with Harold Demsetz, an Intellectual Portrait at Econlib Videos.

Demsetz, a native of Chicago, earned his undergraduate degree in economics at the University of Illinois in 1953 and his Ph.D. in economics at Northwestern University in 1959. He taught at the University of Michigan from 1958 to 1960, at UCLA from 1960 to 1963, at the University of Chicago from 1963 to 1971, and then again at UCLA from 1971 until his retirement. In 2013, he was made a Distinguished Fellow of the American Economic Association.

In 1963, when Demsetz was on the UCLA faculty, a University of Chicago economist named Reuben Kessel asked him if he was happy there. Demsetz, sensing an offer in the works, answered, “Make me unhappy.” The University of Chicago did just that, and Demsetz moved to Chicago for eight productive years.

 

 

Selected Works

1965. “Minorities in the Marketplace.” North Carolina Law Review, Vol. 43, No. 2: 271-97.

1967. “Toward a Theory of Property Rights.” American Economic Review, Vol. 57, No. 2 (May 1967): 347-59.

1968. “Why Regulate Utilities?” Journal of Law and Economics, Vol. 11, No. 1 (April 1968): 55-65.

1972 (with Armen A. Alchian). “Production, Information Costs, and Economic Organization.” American Economic Review, Vol. 62, No. 5 (December 1972): 777-95.

1973. “Industry Structure, Market Rivalry, and Public Policy.” Journal of Law and Economics, Vol. 16, No. 1 (April 1973): 1-9.

1974. “Two Systems of Belief about Monopoly.” In Industrial Concentration: The New Learning, edited by H. J. Goldschmid, H. M. Mann, and J. F. Weston. Little, Brown.

1985 (with Kenneth Lehn). “The Structure of Corporate Ownership: Causes and Consequences.” Journal of Political Economy, Vol. 93, No. 6 (December 1985): 1155-77.

1988. Ownership, Control, and the Firm. Cambridge, MA: Basil Blackwell.

1989. Efficiency, Competition, and Policy. Cambridge, MA: Basil Blackwell.


[1] Thomas N. Hubbard, “A Legend in Economics Passes,” Digitopoly, February 20, 2013. At: https://digitopoly.org/2013/02/20/a-legend-in-economics-passes/

(0 COMMENTS)

CEE July 19, 2019

Richard H. Thaler

 

Richard H. Thaler won the 2017 Nobel Prize in Economic Science for “his contributions to behavioral economics.”

In most of his work, Thaler has challenged the standard economist’s model of rational human beings.  He showed some of the ways that people systematically depart from rationality and some of the decisions that resulted. He has used these insights to propose ways to help people save, and save more, for retirement. Thaler also advocates something called “libertarian paternalism.”

Economists generally assume that more choices are better than fewer choices. But if that were so, argues Thaler, people would be upset, not happy, when the host at a dinner party removes the pre-dinner bowl of cashews. Yet many of us are happy that it’s gone. Purposely taking away our choice to eat more cashews, he argues, makes up for our lack of self-control. This simple contradiction between the economists’ model of rationality and actual human behavior, plus many more that Thaler has observed, leads him to divide the population into “Econs” and “Humans.” Econs, according to Thaler, are people who are economically rational and fit the model completely. Humans are the vast majority of people.

Thaler (1980) noticed another anomaly in people’s thinking that is inconsistent with the idea that people are rational. He called it the “endowment effect.” People must be paid much more to give something up (their “endowment”) than they are willing to pay to acquire it. So, to take one of his examples from a survey: when people were asked how much they would have to be paid to accept an added mortality risk of one in one thousand, a typical response was $10,000. But when asked how much they would pay to reduce an existing risk of death by one in one thousand, a typical response was $200.

One of Thaler’s most-cited articles is Werner F. M. De Bondt and Richard Thaler (1985). In that paper they compared the stocks of “losers” and “winners.” They defined losers as stocks that had recently dropped in value and winners as stocks that had recently increased in value, and their hypothesis was that people overreact to news, driving the prices of winners too high and the prices of losers too low. Consistent with that hypothesis, they found that the portfolio of losers outperformed the portfolio of winners.

One of the issues to which Thaler applied his thinking is saving for retirement. In his book Misbehaving, Thaler argues that if everyone were an Econ, it wouldn't matter whether employers' default was to leave employees out of tax-advantaged retirement accounts and let them opt in, or to sign them all up and let them opt out. There are transactions costs associated with getting out of either default option, of course, but they are small relative to the stakes involved. For that reason, argued Thaler, either default should lead to about the same percentage of employees taking advantage of the program. Yet Brigitte C. Madrian and Dennis F. Shea found[1] that before a company they studied tried automatic enrollment, only 49 percent of employees had joined the plan. When enrollment became the default, 84 percent of employees stayed enrolled. That is a large difference relative to what most economists would have expected.

Thaler and economist Shlomo Benartzi, arguing that people tend to be myopic and heavily discount the future, helped design a private pension plan to enable people to save more. Called Save More Tomorrow, it automatically increases the percent of their gross pay that people save in 401(k) plans every time they get a pay raise. That way, people can save more without ever having to cut their current consumption expenditures. Many “Econs” were presumably already doing that, but this plan helps Humans, as well. When a midsize manufacturing firm implemented their plan, participants, at the end of four annual raises, had almost quadrupled their saving rate.
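The arithmetic of the plan is easy to see in a minimal sketch (the pay, raise, and escalation numbers below are hypothetical, not the plan's actual parameters): because the contribution rate steps up only when pay does, take-home pay never falls.

```python
# Sketch of Save More Tomorrow-style auto-escalation (hypothetical numbers).
# The 401(k) contribution rate rises by 3 percentage points at each annual
# raise, so the saving rate climbs while take-home pay never declines.

pay = 50_000.0     # annual gross pay (hypothetical)
rate = 0.035       # initial contribution rate (hypothetical)
step = 0.03        # escalation per raise (hypothetical)
raise_pct = 0.035  # annual raise (hypothetical)

for year in range(5):
    take_home = pay * (1 - rate)
    print(f"Year {year}: pay ${pay:,.0f}, saving {rate:.1%}, take-home ${take_home:,.0f}")
    pay *= 1 + raise_pct           # the raise arrives...
    rate = min(rate + step, 0.15)  # ...and the saving rate steps up with it
```

In this sketch, four raises carry the saving rate from 3.5 percent to 15 percent, roughly the quadrupling observed at the manufacturing firm, without take-home pay ever dipping.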

In their book Nudge, Thaler, along with co-author Cass Sunstein, a law professor, used behavioral economics to argue for “nudging” people to make better decisions. In each person, they argued, are an impulsive Doer and a farsighted Planner. In the retirement saving example above, the Doer wants to spend now and the Planner wants to save for retirement. Which preferences should be taken account of in public policy?

As noted earlier, Thaler believes in "libertarian paternalism." In Nudge, he and Sunstein lay out the concept. The basic idea is to have the government set paternalist rules as defaults but let people opt out at low cost. Consider laws requiring motorcyclists to wear helmets: that is paternalism. What would make it libertarian paternalism? They favorably cite New York Times columnist John Tierney's proposal that motorcyclists who don't want to wear helmets be required to take an extra driving course and show proof of health insurance.

In a review of Nudge, Thomas Leonard writes:

The irony is that behavioral economics, having attacked Homo Economicus as an empirically false description of human choice, now proposes, in the name of paternalism, to enshrine the very same fellow as the image of what people should want to be. Or, more precisely, what paternalists want people to be. For the consequence of dividing the self has been to undermine the very idea of true preferences. If true preferences don’t exist, the libertarian paternalist cannot help people get what they truly want. He can only make like an old fashioned paternalist, and give people what they should want.[2]

In some areas, Thaler seems to have departed from the view that long-term considerations should guide economic policy. A standard view among economists is that after a flood or hurricane, a government should refrain from imposing price controls on crucial items such as fresh water, food, or plywood. That way, goes the economic reasoning, suppliers in other parts of the country have an incentive to move goods to where they are needed most, and buyers will be careful not to stock up as much immediately after the flood or hurricane. In 2012, when asked about a proposed anti-price-gouging law in Connecticut, Thaler answered succinctly, "Not needed. Big firms hold prices firm. 'Entrepreneurs' with trucks help meet supply. Are the latter covered? If so, [the proposed law is] bad."[3] What he was getting at is that, to some extent, we get the best of both worlds. Companies like Wal-Mart, worried about their reputation with consumers, will refrain from price gouging but will stock up in advance; one-time entrepreneurs, not worried about their reputations, will supply high-priced items to people who want them badly. But in a Marketplace interview in September 2017,[4] Thaler said, "A time of crisis is a time for all of us to pitch in; it's not a time for all of us to grab." He seemed to have moved from the mainstream economists' view to the popular view.

One relatively unexplored area in Thaler’s work is how government officials show the same irrationality that many of us show and the implications of that fact for government policy.

Thaler earned his Bachelor of Arts degree with a major in economics at Case Western Reserve University in 1965, his master's degree in economics from the University of Rochester in 1970, and his Ph.D. in economics from the University of Rochester in 1974. He was a professor at the University of Rochester's Graduate School of Management from 1974 to 1978 and a professor at Cornell University's Johnson School of Management from 1978 to 1995. He has been a professor at the University of Chicago's Booth School of Business since 1995.

 

 

Selected Works

1980. "Toward a Positive Theory of Consumer Choice." Journal of Economic Behavior and Organization, Vol. 1, No. 1, pp. 39-60.

1985 (with Werner F. M. De Bondt). "Does the Stock Market Overreact?" Journal of Finance, Vol. 40, pp. 793-805.

1992. The Winner's Curse: Paradoxes and Anomalies of Economic Life. Princeton University Press.

2003 (with Cass R. Sunstein). "Libertarian Paternalism." American Economic Review, Vol. 93, No. 2, pp. 175-179.

2004 (with Shlomo Benartzi). "Save More Tomorrow™: Using Behavioral Economics to Increase Employee Saving." Journal of Political Economy, Vol. 112, No. S1, pp. S164-87.

2008 (with Cass Sunstein). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.

2015. Misbehaving: The Making of Behavioral Economics. New York: W. W. Norton.

 

 

 

[1] Brigitte C. Madrian and Dennis F. Shea, “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior,” Quarterly Journal of Economics, Vol. CXVI, Issue 4, November 2001, pp. 1149-1187. At: https://www.ssc.wisc.edu/scholz/Teaching_742/Madrian_Shea.pdf

[2] Thomas Leonard, “Review of Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness.” Constitutional Political Economy 19(4): 356-360.

[3] http://www.igmchicago.org/surveys/price-gouging

[4] https://www.marketplace.org/shows/marketplace/09012017


CEE May 28, 2019

William D. Nordhaus


William D. Nordhaus was co-winner, along with Paul M. Romer, of the 2018 Nobel Prize in Economic Science “for integrating climate change into long-run macroeconomic analysis.”

Starting in the 1970s, Nordhaus constructed increasingly comprehensive models of the interaction between the economy and additions of carbon dioxide to the atmosphere, along with its effects on global warming. Economists use these models, along with assumptions about various magnitudes, to compute the "social cost of carbon" (SCC). The idea is that past a certain point, additions of carbon dioxide to the atmosphere heat the earth and thus create a global negative externality. The SCC is the net cost that using that additional carbon imposes on society. While the warmth has some benefits, such as longer growing seasons and improved recreational alternatives, it also has costs, such as raising ocean levels and making some land uses obsolete. The SCC nets these costs against the benefits and is measured at the current margin. (The "current margin" language is important because otherwise one can get the wrong impression that any use of carbon is harmful.) Nordhaus and others then use the SCC to recommend taxes on carbon. In 2017, Nordhaus computed the optimal tax to be $31 per ton of carbon dioxide. To put that into perspective, a $31 carbon tax would increase the price of gasoline by about 28 cents per gallon.
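The gasoline figure is simple arithmetic, given the standard estimate that burning a gallon of gasoline releases roughly 8.9 kilograms (0.0089 metric tons) of carbon dioxide:

$$\$31/\text{ton CO}_2 \times 0.0089\ \text{ton CO}_2/\text{gallon} \approx \$0.28/\text{gallon}.$$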

Nordhaus noted, though, that there is a large amount of uncertainty about the optimal tax. Around the $31 estimate above, the actual optimal tax could be as little as $6 per ton or as much as $93.

Interestingly, according to Nordhaus's model, setting too high a carbon tax can be worse than setting no carbon tax at all. According to the calibration of Nordhaus's model in 2007, with no carbon tax and no other government controls, the present value of environmental damages and abatement costs would be $22.59 trillion (in 2004 dollars). Nordhaus's optimal carbon tax would have reduced damages but increased abatement costs, for a total of $19.52 trillion, an improvement of only $3.07 trillion. But the cost of a policy to limit the temperature increase to 1.5°C would have been $37.03 trillion, which is $14.44 trillion more than the cost of the "do nothing" option. Those numbers will be different today, but what is not different is that the cost of doing nothing is substantially below the cost of limiting the temperature increase to 1.5°C.

One item the Nobel committee did not mention is his demonstration that the price of light has fallen by many orders of magnitude over the last 200 years. He showed that the price of light in 1992, adjusted for inflation, was less than one tenth of one percent of its price in 1800. Failure to take this reduction fully into account, noted Nordhaus, meant that economists have substantially underestimated the real growth rate of the economy and the growth rate of real wages.

Nordhaus also did pathbreaking work on the distribution of gains from innovation. In a 2004 study he wrote:

Only a minuscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.

Nordhaus earned his B.A. degree at Yale University in 1963 and his Ph.D. in economics at MIT in 1967. From 1977 to 1979, he was a member of President Carter’s Council of Economic Advisers.

 

 


Selected Works

1977. "Economic Growth and Climate: The Case of Carbon Dioxide." American Economic Review, Vol. 67, No. 1, pp. 341-346.

1996. "Do Real-Output and Real-Wage Measures Capture Reality? The History of Lighting Suggests Not." In Timothy F. Bresnahan and Robert J. Gordon, eds., The Economics of New Goods. Chicago: University of Chicago Press, 1996.

2000 (with J. Boyer). Warming the World: Economic Models of Global Warming. Cambridge, MA: MIT Press.

2004. "Schumpeterian Profits in the American Economy: Theory and Measurement." NBER Working Paper No. 10433, April 2004.

2016. "Projections and Uncertainties about Climate Change in an Era of Minimal Climate Policies." NBER Working Paper No. 22933.


CEE May 28, 2019

Paul M. Romer

In 2018, U.S. economist Paul M. Romer was co-recipient, along with William D. Nordhaus, of the Nobel Prize in Economic Science for “integrating technological innovations into long-run macroeconomic analysis.”

Romer developed “endogenous growth theory.” Before his work in the 1980s and early 1990s, the dominant economic model of economic growth was one that MIT economist Robert Solow developed in the 1950s. Even though Solow concluded that technological change was a key driver of economic growth, his own model made technological change exogenous. That is, technological change was not something determined in the model but was an outside factor. Romer made it endogenous.


There are actually two very different phases in Romer’s work on endogenous growth theory. Romer (1986) and Romer (1987) had an AK model. Real output was equal to A times K, where A is a positive constant and K is the amount of physical capital. The model assumes diminishing marginal returns to K, but assumes also that part of a firm’s investment in capital results in the production of new technology or human capital that, because it is non-rival and non-excludable, generates spillovers (positive externalities) for all firms. Because this technology is embodied in physical capital, as the capital stock (K) grows, there are constant returns to a broader measure of capital that includes the new technology. Modeling growth this way allowed Romer to keep the assumption of perfect competition, so beloved by economists.
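A minimal way to write down the production side of those early papers (a simplification, not Romer's exact specification) is to give each firm $i$ diminishing returns to its own capital $k_i$ while the aggregate stock $K$ generates spillovers:

$$y_i = A\,k_i^{\alpha}K^{1-\alpha}, \qquad 0 < \alpha < 1.$$

With $N$ identical firms and $K = Nk_i$, aggregate output is $Y = AN^{1-\alpha}K$, which is linear in $K$. Each firm faces diminishing private returns, consistent with perfect competition, but the social return to capital is constant, so growth need not peter out.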

In Romer (1990), Romer rejected his own earlier model. Instead, he assumed that firms are monopolistically competitive. That is, industries are competitive, but many firms within a given industry have market power. Monopolistically competitive firms develop technology that they can exclude others from using. The technology is non-rival; that is, one firm’s use of the technology doesn’t prevent other firms from using it. Because they can exploit their market power by innovating, they have an incentive to innovate. It made sense, therefore, to think carefully about how to structure such incentives.

Consider new drugs. Economists estimate that the cost of successfully developing and bringing a new drug to market is about $2.6 billion. Once the formula is discovered and tested, another firm could copy the invention of the firm that did all the work. If that second firm were allowed to sell the drug, the first firm would probably not do the work in the first place. One solution is patents. A patent gives the inventor a monopoly for a fixed number of years, during which it can charge a monopoly price. This monopoly price, earned over years, gives drug companies a strong incentive to innovate.

Another way for new ideas to emerge, notes Romer, is for governments to subsidize research and development.

The idea that technological change is not just an outside factor but itself is determined within the economic system might seem obvious to those who have read the work of Joseph Schumpeter. Why did Romer get a Nobel Prize for his insights? It was because Romer’s model didn’t “blow up.” Previous economists who had tried mathematically to model growth in a Schumpeterian way had failed to come up with models in which the process of growth was bounded.

To his credit, Romer lays out some of his insights on growth in words and very simple math. In the entry on economic growth in The Concise Encyclopedia of Economics, Romer notes the huge difference in long-run well-being that would result from raising the economic growth rate by only a few percentage points. The "rule of 72" says that the number of years it takes a magnitude to double can be computed by dividing the growth rate into 72. It actually should be called the rule of 70, but the math with 72 is slightly easier. So, for example, if an economy grows by 2 percent per year, it will take 36 years for its size to double. But if it grows by 4 percent per year, it will double in 18 years.
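The rule is an approximation to the exact doubling-time formula:

$$t_{\text{double}} = \frac{\ln 2}{\ln(1+g)} \approx \frac{0.693}{g} \approx \frac{70}{100\,g}.$$

At $g = 2$ percent the exact answer is 35 years (72/2 = 36 is close); at 4 percent it is about 17.7 years. The number 72 persists mainly because it divides evenly by 2, 3, 4, 6, 8, 9, and 12.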

Romer warns that policy makers should be careful about using endogenous growth theory to justify government intervention in the economy. In a 1998 interview he stated:

A lot of people see endogenous growth theory as a blanket seal of approval for all of their favourite government interventions, many of which are very wrong-headed. For example, much of the discussion about infrastructure is just wrong. Infrastructure is to a very large extent a traditional physical good and should be provided in the same way that we provide other physical goods, with market incentives and strong property rights. A move towards privatization of infrastructure provision is exactly the right way to go. The government should be much less involved in infrastructure provision.[1]

In the same interview, he stated, “Selecting a few firms and giving them money has obvious problems” and that governments “must keep from taxing income at such high rates that it severely distorts incentives.”

In 2000, Romer introduced Aplia, an online set of problems and answers that economics professors could assign to their students and easily grade. The upside is that students arrive better prepared for lectures and exams and can engage with their fellow students in economic experiments online. The downside of Aplia, according to some economics professors, is that students get less practice actually drawing demand and supply curves by hand.

In 2009, Romer started advocating “Charter Cities.” His idea was that many people are stuck in countries with bad rules that make wealth creation difficult. If, he argued, an outside government could start a charter city in a country that had bad rules, people in that country could move there. Of course, this would require the cooperation of the country with the bad rules and getting that cooperation is not an easy task. His primary example of such an experiment working is Hong Kong, which was run by the British government until 1997. In a 2009 speech on charter cities, Romer stated, “Britain, through its actions in Hong Kong, did more to reduce world poverty than all the aid programs that we’ve undertaken in the last century.”[2]

Romer earned a B.S. in mathematics in 1977, an M.A. in economics in 1978, and a Ph.D. in economics in 1983, all from the University of Chicago. He also did graduate work at MIT and Queen’s University. He has taught at the University of Rochester, the University of Chicago, UC Berkeley, and Stanford University, and is currently a professor at New York University.

He was chief economist at the World Bank from 2016 to 2018.

 

 

[1] “Interview with Paul M. Romer,” in Brian Snowdon and Howard R. Vane, Modern Macroeconomics: Its Origins, Development and Current State, Cheltenham, UK: Edward Elgar, 2005, p. 690.

[2] Paul Romer, “Why the world needs charter cities,” TEDGlobal 2009.

 


Selected Works

1986. "Increasing Returns and Long-Run Growth." Journal of Political Economy, Vol. 94, No. 5, pp. 1002-1037.
1987. "Growth Based on Increasing Returns Due to Specialization." American Economic Review, Papers and Proceedings, Vol. 77, No. 2, pp. 56-62.
1990. "Endogenous Technological Change." Journal of Political Economy, Vol. 98, No. 5, pp. S71-S102.
2015. "Mathiness in the Theory of Economic Growth." American Economic Review, Vol. 105, No. 5, pp. 89-93.

 


CEE March 13, 2019

Jean Tirole

In 2014, French economist Jean Tirole was awarded the Nobel Prize in Economic Sciences “for his analysis of market power and regulation.” His main research, in which he uses game theory, is in an area of economics called industrial organization. Economists studying industrial organization apply economic analysis to understanding the way firms behave and why certain industries are organized as they are.

From the late 1960s to the early 1980s, economists George Stigler, Harold Demsetz, Sam Peltzman, and Yale Brozen, among others, played a dominant role in the study of industrial organization. Their view was that even though most industries don’t fit the economists’ “perfect competition” model—a model in which no firm has the power to set a price—the real world was full of competition. Firms compete by cutting their prices, by innovating, by advertising, by cutting costs, and by providing service, just to name a few. Their understanding of competition led them to skepticism about much of antitrust law and most government regulation.

In the 1980s, Jean Tirole introduced game theory into the study of industrial organization, also known as IO. The key idea of game theory is that, unlike for price takers, firms with market power take account of how their rivals are likely to react when they change prices or product offerings. Although the earlier-mentioned economists recognized this, they did not rigorously use game theory to spell out some of the implications of this interdependence. Tirole did.

One issue on which Tirole and his co-author Jean-Jacques Laffont focused was “asymmetric information.” A regulator has less information than the firms it regulates. So, if the regulator guesses incorrectly about a regulated firm’s costs, which is highly likely, it could set prices too low or too high. Tirole and Laffont showed that a clever regulator could offset this asymmetry by constructing contracts and letting firms choose which contract to accept. If, for example, some firms can take measures to lower their costs and other firms cannot, the regulator cannot necessarily distinguish between the two types. The regulator, recognizing this fact, may offer the firms either a cost-plus contract or a fixed-price contract. The cost-plus contract will appeal to firms with high costs, while the fixed-price contract will appeal to firms that can lower their costs. In this way, the regulator maintains incentives to keep costs down.
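A toy version of such a menu, with made-up numbers (an illustration of the screening idea, not Laffont and Tirole's actual model), shows the separation at work: only the firm that can cut costs finds the fixed-price contract attractive.

```python
# Toy screening menu (hypothetical numbers, not Laffont-Tirole's model).
# Both firm types have a base cost of 100. The "efficient" type can cut its
# cost to 80 by exerting unobservable effort that costs it 10; the other cannot.
# Menu: cost-plus (cost reimbursed exactly, profit 0) or a fixed price of 95.

FIXED_PRICE = 95.0

def preferred_contract(can_cut_costs, base_cost=100.0, cut_to=80.0, effort_cost=10.0):
    """Return the (contract, profit) pair the firm prefers from the menu."""
    options = {"cost-plus": 0.0,  # reimbursed at cost, so zero profit
               "fixed-price, no effort": FIXED_PRICE - base_cost}
    if can_cut_costs:
        options["fixed-price, cut costs"] = FIXED_PRICE - cut_to - effort_cost
    return max(options.items(), key=lambda kv: kv[1])

print(preferred_contract(can_cut_costs=True))   # ('fixed-price, cut costs', 5.0)
print(preferred_contract(can_cut_costs=False))  # ('cost-plus', 0.0)
```

The efficient type keeps a small information rent (5), but the regulator pays 95 rather than 100 for its output, and the incentive to cut costs survives.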

Their insights are most directly applicable to government entities, such as the Department of Defense, in their negotiations with firms that provide highly specialized military equipment. Indeed, economist Tyler Cowen has argued that Tirole’s work is about principal-agent theory rather than about reining in big business per se. In the Department of Defense example, the Department is the principal and the defense contractor is the agent.

One of Tirole’s main contributions has been in the area of “two-sided markets.” Consider Google. It can offer its services at one price to users (one side) and offer its services at a different price to advertisers (the other side). The higher the price to users, the fewer users there will be and, therefore, the less money Google will make from advertising. Google has decided to set a zero price to users and charge for advertising. Tirole and co-author Jean-Charles Rochet showed that the decision about profit-maximizing pricing is complicated, and they use substantial math to compute such prices under various theoretical conditions. Although Tirole believes in antitrust laws to limit both monopoly power and the exercise of monopoly power, he argues that regulators must be cautious in bringing the law to bear against firms in two-sided markets. An example of a two-sided market is a manufacturer of videogame consoles. The two sides are game developers and game players. He notes that it is very common for companies in such markets to set low prices on one side of the market and high prices on the other. But, he writes, “A regulator who does not bear in mind the unusual nature of a two-sided market may incorrectly condemn low pricing as predatory or high pricing as excessive, even though these pricing structures are adopted even by the smallest platforms entering the market.”

Tirole has brought the same kind of skepticism to some other related regulatory issues. Many regulators, for example, have advocated government regulation of interchange fees (IFs) in payment card associations such as Visa and MasterCard. But in 2003, Rochet and Tirole wrote that “given the [economics] profession’s current state of knowledge, there is no reason to believe that the IFs chosen by an association are systematically too high or too low, as compared with socially optimal levels.”

After winning the Nobel Prize, Tirole wrote a book for a popular audience, Economics for the Common Good. In it, he applied economics to a wide range of policy issues, laying out, among other things, the advantages of free trade for most residents of a given country and why much legislation and regulation causes negative unintended consequences.

Like most economists, Tirole favors free trade. In Economics for the Common Good, he noted that French consumers gain from freer trade in two ways. First, free trade exposes French monopolies and oligopolies to competition. He argued that two major French auto companies, Renault and Peugeot-Citroen, “sharply increased their efficiency” in response to car imports from Japan. Second, free trade gives consumers access to cheaper goods from low-wage countries.

In that same book, Tirole considered the unintended consequences of a hypothetical, but realistic, case in which a non-governmental organization, wanting to discourage killing elephants for their tusks, “confiscates ivory from traffickers.” In this hypothetical example, the organization can destroy the ivory or sell it. Destroying the ivory, he reasoned, would drive up the price. The higher price could cause poachers to kill more elephants. Another example he gave is of the perverse effects of price ceilings. Not only do they cause shortages, but also, as a result of these shortages, people line up and waste time in queues. Their time spent in queues wipes out the financial gain to consumers from the lower price, while also hurting the suppliers. No one wins and wealth is destroyed.

Also in that book, Tirole criticized the French government's labor policies, which make it difficult for employers to fire people. He noted that this difficulty makes employers less likely to hire people in the first place. As a result, the unemployment rate in France was above 7 percent for over 30 years. The effect on young people has been particularly pernicious. When he wrote this book, the unemployment rate for French residents between 15 and 24 years old was 24 percent, and only 28.6 percent of those in that age group had jobs. This was much lower than the OECD average of 39.6 percent, Germany's 46.8 percent, and the Netherlands' 62.3 percent.

One unintended, but predictable, consequence of government regulations of firms, which Tirole pointed out in Economics for the Common Good, is to make firms artificially small. When a French firm with 49 employees hires one more employee, he noted, it is subject to 34 additional legal obligations. Not surprisingly, therefore, in a figure that shows the number of enterprises with various numbers of employees, a spike occurs at 47 to 49 employees.

In Economics for the Common Good, Tirole ranged widely over policy issues in France. In addressing the French university system, he criticized the system’s rejection of selective admission to university. He argued that such a system causes the least prepared students to drop out and concluded that “[O]n the whole, the French educational system is a vast insider-trading crime.”

Tirole is chairman of the Toulouse School of Economics and of the Institute for Advanced Study in Toulouse. A French citizen, he was born in Troyes, France and earned his Ph.D. in economics in 1981 from the Massachusetts Institute of Technology.


Selected Works

 

1986 (with Jean-Jacques Laffont). "Using Cost Observation to Regulate Firms." Journal of Political Economy, 94:3 (Part I), June: 614-641.

1988. The Theory of Industrial Organization. MIT Press.

1990 (with Drew Fudenberg). "Moral Hazard and Renegotiation in Agency Contracts." Econometrica, 58:6, November: 1279-1319.

1993 (with Jean-Jacques Laffont). A Theory of Incentives in Procurement and Regulation. MIT Press.

2003 (with Jean-Charles Rochet). "An Economic Analysis of the Determination of Interchange Fees in Payment Card Systems." Review of Network Economics, 2:2: 69-79.

2006 (with Jean-Charles Rochet). "Two-Sided Markets: A Progress Report." The RAND Journal of Economics, 37:3, Autumn: 645-667.

2017. Economics for the Common Good. Princeton University Press.

 


CEE November 30, 2018

The 2008 Financial Crisis

It was, according to accounts filtering out of the White House, an extraordinary scene. Hank Paulson, the U.S. treasury secretary and a man with a personal fortune estimated at $700m (£380m), had got down on one knee before the most powerful woman in Congress, Nancy Pelosi, and begged her to save his plan to rescue Wall Street.

    The Guardian, September 26, 2008.1

The financial crisis of 2008 was a complex event that took most economists and market participants by surprise. Since then, there have been many attempts to arrive at a narrative to explain the crisis, but none has proven definitive. For example, a Congressionally-chartered ten-member Financial Crisis Inquiry Commission produced three separate narratives, one supported by the members appointed by the Democrats, one supported by four members appointed by the Republicans, and a third written by the fifth Republican member, Peter Wallison.2

It is important to appreciate that the financial system is complex, not merely complicated. A complicated system, such as a smartphone, has a fixed structure, so it behaves in ways that are predictable and controllable. A complex system has an evolving structure, so it can evolve in ways that no one anticipates. We will never have a proven understanding of what caused the financial crisis, just as we will never have a proven understanding of what caused the First World War.

There can be no single, definitive narrative of the crisis. This entry can cover only a small subset of the issues raised by the episode.

Metaphorically, we may think of the crisis as a fire. It started in the housing market, spread to the sub-prime mortgage market, then engulfed the entire mortgage securities market and, finally, swept through the inter-bank lending market and the market for asset-backed commercial paper.

Home sales began to slow in the latter part of 2006. This soon created problems for the sector of the mortgage market devoted to making risky loans, with several major lenders—including the largest, New Century Financial—declaring bankruptcy early in 2007. At the time, the problem was referred to as the "sub-prime mortgage crisis" and was believed to be confined to a few marginal institutions.

But by the spring of 2008, trouble was apparent at some Wall Street investment banks that underwrote securities backed by sub-prime mortgages. On March 16, commercial bank JP Morgan Chase acquired one of these firms, Bear Stearns, with help from loan guarantees provided by the Federal Reserve, the central bank of the United States.

Trouble then began to surface at all the major institutions in the mortgage securities market. By late summer, many investors had lost confidence in Freddie Mac and Fannie Mae, and the interest rates that lenders demanded from them were higher than what they could pay and still remain afloat. On September 7, the U.S. Treasury took these two government-sponsored enterprises (GSEs) into "conservatorship."

Finally, the crisis hit the short-term inter-bank collateralized lending markets, in which all of the world’s major financial institutions participate. This phase began after government officials’ unsuccessful attempts to arrange a merger of investment bank Lehman Brothers, which declared bankruptcy on September 15. This bankruptcy caused the Reserve Primary money market fund, which held a lot of short-term Lehman securities, to mark down the value of its shares below the standard value of one dollar each. That created jitters in all short-term lending markets, including the inter-bank lending market and the market for asset-backed commercial paper in general, and caused stress among major European banks.

The freeze-up in the interbank lending market was too much for leading public officials to bear. Under intense pressure to act, Treasury Secretary Henry Paulson proposed a $700 billion financial rescue program. Congress initially voted it down, leading to heavy losses in the stock market and causing Secretary Paulson to plead for its passage. On a second vote, the measure, known as the Troubled Assets Relief Program (TARP), was approved.

In hindsight, within each sector affected by the crisis, we can find moral hazard, cognitive failures, and policy failures. Moral hazard (in insurance company terminology) arises when individuals and firms face incentives to profit from taking risks without having to bear responsibility in the event of losses. Cognitive failures arise when individuals and firms base decisions on faulty assumptions about potential scenarios. Policy failures arise when regulators reinforce rather than counteract the moral hazard and cognitive failures of market participants.

The Housing Sector

From roughly 1990 to the middle of 2006, the housing market was characterized by the following:

  • an environment of low interest rates, both in nominal and real (inflation-adjusted) terms. Low nominal rates create low monthly payments for borrowers. Low real rates raise the value of all durable assets, including housing.
  • prices for houses rising as fast as or faster than the overall price level
  • an increase in the share of households owning rather than renting
  • loosening of mortgage underwriting standards, allowing households with weaker credit histories to qualify for mortgages.
  • lower minimum requirements for down payments. A standard requirement of at least ten percent was reduced to three percent and, in some cases, zero. This resulted in a large increase in the share of home purchases made with down payments of five percent or less.
  • an increase in the use of new types of mortgages with “negative amortization,” meaning that the outstanding principal balance rises over time.
  • an increase in consumers’ borrowing against their houses to finance spending, using home equity loans, second mortgages, and refinancing of existing mortgages with new loans for larger amounts.
  • an increase in the proportion of mortgages going to people who were not planning to live in the homes that they purchased. Instead, they were buying them to speculate. 3

These phenomena produced an increase in mortgage debt that far outpaced the rise in income over the same period. The trends accelerated in the three years just prior to the downturn in the second half of 2006.

The rise in mortgage debt relative to income was not a problem as long as home prices were rising. A borrower having difficulty finding the cash to make a mortgage payment on a house that had appreciated in value could either borrow more with the house as collateral or sell the house to pay off the debt.

But when house prices stopped rising late in 2006, households that had taken on too much debt began to default. This set in motion a reverse cycle: house foreclosures increased the supply of homes for sale; meanwhile, lenders became wary of extending credit, and this reduced demand. Prices fell further, leading to more defaults and spurring lenders to tighten credit still further.

During the boom, some people were speculating in non-owner-occupied homes, while others were buying their own homes with little or no money down. And other households were, in the vernacular of the time, “using their houses as ATMs,” taking on additional mortgage debt in order to finance consumption.

In most states in the United States, once a mortgage lender forecloses on a property, the borrower is not responsible for repayment, even if the house cannot be sold for enough to cover the loan. This creates moral hazard, particularly for property speculators, who can enjoy all of the profits if house prices rise but can stick lenders with some of the losses if prices fall.

One can see cognitive failure in the way that homeowners expected home prices to keep rising at a ten percent annual rate indefinitely, even though overall inflation was less than half that amount.4 Also, many homeowners seemed unaware of the risks of mortgages with "negative amortization."
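A minimal sketch with hypothetical loan terms shows how a negative-amortization balance grows when the scheduled payment does not even cover the interest accruing each month:

```python
# Negative amortization, hypothetical terms: a $300,000 loan at 6% per year
# where the borrower pays only $1,200 a month, less than the roughly $1,500
# of monthly interest, so the shortfall is added to the principal.

balance = 300_000.0
monthly_rate = 0.06 / 12
payment = 1_200.0

for month in range(1, 37):
    interest = balance * monthly_rate  # interest accrued this month
    balance += interest - payment      # unpaid interest raises the balance
    if month % 12 == 0:
        print(f"After year {month // 12}: balance = ${balance:,.0f}")
# The balance climbs to roughly $311,800 after three years of on-time payments.
```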

Policy failure played a big role in the housing sector. All of the trends listed above were supported by public policy. Because they wanted to see increased home ownership, politicians urged lenders to loosen credit standards. With the Community Reinvestment Act for banks and Affordable Housing Goals for Freddie Mac and Fannie Mae, they spurred traditional mortgage lenders to increase their lending to minority and low-income borrowers. When the crisis hit, politicians blamed lenders for borrowers' inability to repay, and political pressure exacerbated the credit tightening that subsequently took place.

The Sub-prime Mortgage Sector

Until the late 1990s, few lenders were willing to give mortgages to borrowers with problematic credit histories. But sub-prime mortgage lenders emerged and grew rapidly in the decade leading up to the crisis. This growth was fueled by financial innovations, including the use of credit scoring to finely grade mortgage borrowers, and the use of structured mortgage securities (discussed in the next section) to make the sub-prime sector attractive to investors with a low tolerance for risk. Above all, it was fueled by rising home prices, which created a history of low default rates.

There was moral hazard in the sub-prime mortgage sector because the lenders were not holding on to the loans and, therefore, not exposing themselves to default risk. Instead, they packaged the mortgages into securities and sold them to investors, with the securities market allocating the risk.

Because they sold loans in the secondary market, profits at sub-prime lenders were driven by volume, regardless of the likelihood of default. Turning down a borrower meant getting no revenue. Approving a borrower meant earning a fee. These incentives were passed through to the staff responsible for finding potential borrowers and underwriting loans, so that personnel were compensated based on “production,” meaning the new loans they originated.

Although in theory the sub-prime lenders were passing on to others the risks that were embedded in the loans they were making, they were among the first institutions to go bankrupt during the financial crisis. This shows that there was cognitive failure in the management at these companies, as they did not foresee the house price slowdown or its impact on their firms.

Cognitive failure also played a role in the rise of mortgages that were underwritten without verification of the borrowers’ income, employment, or assets. Historical data showed that credit scores were sufficient for assessing borrower risk and that additional verification contributed little predictive value. However, it turned out that once lenders were willing to forgo these documents, they attracted a different set of borrowers, whose propensity to default was higher than their credit scores otherwise indicated.

There was policy failure in that abuses in the sub-prime mortgage sector were allowed to continue. Ironically, while the safety and soundness of Freddie Mac and Fannie Mae were regulated under the Department of Housing and Urban Development, which had an institutional mission to expand home ownership, consumer protection with regard to mortgages was regulated by the Federal Reserve Board, whose primary institutional missions were monetary policy and bank safety. Though mortgage lenders were setting up borrowers to fail, the Federal Reserve made little or no effort to intervene. Even those policy makers who were concerned about practices in the sub-prime sector believed that, on balance, sub-prime mortgage lending was helping a previously under-served set of households to attain home ownership.5

Mortgage Securities

A mortgage security consists of a pool of mortgage loans, the payments on which are passed through to pension funds, insurance companies, or other institutional investors looking for reliable returns with little risk. The market for mortgage securities was created by two government agencies, known as Ginnie Mae and Freddie Mac, established in 1968 and 1970, respectively.

Mortgage securitization expanded in the 1980s, when Fannie Mae, which previously had used debt to finance its mortgage purchases, began issuing its own mortgage-backed securities. At the same time, Freddie Mac was sold to shareholders, who encouraged Freddie to grow its market share. But even though Freddie and Fannie were shareholder-owned, investors treated their securities as if they were government-backed. This was known as an implicit government guarantee.

Attempts to create a market for private-label mortgage securities (PLMS) without any form of government guarantee were largely unsuccessful until the late 1990s. The innovations that finally got the PLMS market going were credit scoring and the collateralized debt obligation (CDO).

Before credit scoring was used in the mortgage market, there was no quantifiable difference between any two borrowers who were approved for loans. With credit scoring, the Wall Street firms assembling pools of mortgages could distinguish between a borrower with a very good score (750, as measured by the popular FICO system) and one with a more doubtful score (650).

Using CDOs, Wall Street firms were able to provide major institutional investors with insulation from default risk by concentrating that risk in other sub-securities (“tranches”) that were sold to investors who were more tolerant of risk. In fact, these basic CDOs were enhanced by other exotic mechanisms, such as credit default swaps, that reallocated mortgage default risk to institutions in which hardly any observer expected to find it, including AIG Insurance.
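A stripped-down, two-tranche waterfall with made-up numbers (real CDOs were far more elaborate) illustrates the mechanism:

```python
# Toy two-tranche CDO waterfall (hypothetical numbers).
# A $100m pool of mortgages funds a $20m junior tranche and an $80m senior
# tranche. Pool losses are absorbed by the junior tranche first; the senior
# tranche loses nothing until the junior is wiped out.

JUNIOR_SIZE = 20.0  # $ millions

def tranche_losses(pool_loss):
    junior = min(pool_loss, JUNIOR_SIZE)
    senior = max(pool_loss - JUNIOR_SIZE, 0.0)
    return junior, senior

for pool_loss in (5.0, 15.0, 30.0):  # pool losses in $ millions
    junior, senior = tranche_losses(pool_loss)
    print(f"pool loss ${pool_loss:.0f}m -> junior ${junior:.0f}m, senior ${senior:.0f}m")
```

Senior investors lose nothing until pool losses exceed $20 million, a scenario that models calibrated to ever-rising house prices treated as remote.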

There was moral hazard in the mortgage securities market, as Freddie Mac and Fannie Mae sought profits and growth on behalf of shareholders, but investors in their securities expected (correctly, as it turned out) that the government would protect them against losses. Years before the crisis, critics grumbled that the mortgage giants exemplified privatized profits and socialized risks.6

There was cognitive failure in the assessment of default risk. Assembling CDOs and other exotic instruments required sophisticated statistical modeling. The most important driver of expectations for mortgage defaults is the path for house prices, and the steep, broad-based decline in home prices that took place in 2006-2009 was outside the range that some modelers allowed for.

Another source of cognitive failure is the "suits/geeks" divide. In many firms, the financial engineers ("geeks") understood the risks of mortgage-related securities fairly well, but their conclusions did not make their way to the senior management level ("suits").

There was policy failure on the part of bank regulators. Their previous adverse experience was with the Savings and Loan Crisis, in which firms that originated and retained mortgages went bankrupt in large numbers. This caused bank regulators to believe that mortgage securitization, which took risk off the books of depository institutions, would be safer for the financial system. For the purpose of assessing capital requirements for banks, regulators assigned a weight of 100 percent to mortgages originated and held by the bank, but assigned a weight of only 20 percent to the bank’s holdings of mortgage securities issued by Freddie Mac, Fannie Mae, or Ginnie Mae. This meant that banks needed to hold much more capital to hold mortgages than to hold mortgage-related securities; that naturally steered them toward the latter.
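With the era's standard 8 percent minimum capital ratio (an assumption for this illustration), the weights cited above imply, per $100 of assets:

$$\underbrace{\$100 \times 100\% \times 8\% = \$8.00}_{\text{mortgages held directly}} \qquad \text{vs.} \qquad \underbrace{\$100 \times 20\% \times 8\% = \$1.60}_{\text{agency mortgage securities}}$$

Five times the capital for the same underlying credit exposure was a powerful nudge toward securities.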

In 2001, regulators broadened the low-risk umbrella to include AAA-rated and AA-rated tranches of private-label CDOs. This ruling helped to generate a flood of PLMS, many of them backed by sub-prime mortgage loans.7

By using bond ratings as a key determinant of capital requirements, the regulators effectively put the bond rating agencies at the center of the process of creating private-label CDOs. The rating agencies immediately became subject to both moral hazard and cognitive failure. The moral hazard came from the fact that the rating agencies were paid by the issuers of securities, who wanted the most generous ratings possible, rather than by the regulators, who needed more rigorous ratings. The cognitive failure came from the fact that the models that the rating agencies used gave too little weight to potential scenarios of broad-based declines in house prices. Moreover, the banks that bought the securities were happy to see them rated AAA because the high ratings made the securities eligible for lower capital requirements. Both sides, buyers and sellers, therefore had bad incentives.

There was policy failure on the part of Congress. Officials in both the Clinton and Bush Administrations were unhappy with the risk that Freddie Mac and Fannie Mae represented to taxpayers. But Congress balked at any attempt to tighten regulation of the safety and soundness of those firms.8

The Inter-bank Lending Market

There are a number of mechanisms through which financial institutions make short-term loans to one another. In the United States, banks use the Federal Funds market to manage short-term fluctuations in reserves. Internationally, banks lend in what is known as the LIBOR market.

One of the least known and most important markets is for “repo,” which is short for “repurchase agreement.” As first developed, the repo market was used by government bond dealers to finance inventories of securities, just as an automobile dealer might finance an inventory of cars. A money-market fund might lend money for one day or one week to a bond dealer, with the loan collateralized by a low-risk long-term security.
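For concreteness, here is a stylized overnight repo with hypothetical figures: a money-market fund lends a dealer $100 million against bond collateral at an annualized repo rate of 0.3 percent (money markets conventionally quote on a 360-day year) and is repaid the next morning:

$$\text{repurchase price} = \$100{,}000{,}000 \times \left(1 + 0.003 \times \tfrac{1}{360}\right) \approx \$100{,}000{,}833.$$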

In the years leading up to the crisis, some dealers were financing low-risk mortgage-related securities in the repo market. But when some of these securities turned out to be subject to price declines that took them out of the “low-risk” category, participants in the repo market began to worry about all repo collateral. Repo lending offers very low profit margins, and if an investor has to be very discriminating about the collateral backing a repo loan, it can seem preferable to back out of repo lending altogether. This, indeed, is what happened, in what economist Gary Gorton and others called a “run on repo.”9

Another element of institutional panic was “collateral calls” involving derivative financial instruments. Derivatives, such as credit default swaps, are like side bets. The buyer of a credit default swap is betting that a particular debt instrument will default. The seller of a credit default swap is betting the opposite.

In the case of mortgage-related securities, the probability of default seemed low prior to the crisis. Sometimes, buyers of credit default swaps were merely satisfying the technical requirements to record the underlying securities as AAA-rated. They could do this if they obtained a credit default swap from an institution that was itself AAA-rated. AIG was an insurance company that saw an opportunity to take advantage of its AAA rating to sell credit default swaps on mortgage-related securities. AIG collected fees, and its Financial Products division calculated that the probability of default was essentially zero. The fees earned on each transaction were low, but the overall profit was high because of the enormous volume. AIG’s credit default swaps were a major element in the expansion of shadow banking by non-bank financial institutions during the run-up to the crisis.

Late in 2005, AIG abruptly stopped writing credit default swaps, in part because its own rating had been downgraded below AAA earlier in the year for unrelated reasons. By the time AIG stopped selling credit default swaps on mortgage-related securities, it had outstanding obligations on $80 billion of underlying securities and was earning $1 billion a year in fees.10

Because AIG no longer had its AAA rating and because the underlying mortgage securities, while not in default, were increasingly shaky, provisions in the contracts that AIG had written allowed the buyers of credit default swaps to require AIG to provide protection in the form of low-risk securities posted as collateral. These “collateral calls” were like a margin call that a stock broker will make on an investor who has borrowed money to buy stock that subsequently declines in value. In effect, collateral calls were a run on AIG’s shadow bank.

These collateral calls were made when the crisis in the inter-bank lending market was near its height in the summer of 2008 and banks were hoarding low-risk securities. In fact, the shortage of low-risk securities may have motivated some of the collateral calls, as institutions like Deutsche Bank and Goldman Sachs sought ways to ease their own liquidity problems. In any event, AIG could not raise enough short-term funds to meet its collateral calls without trying to dump long-term securities into a market that had little depth to absorb them. It turned to Federal authorities for a bailout, which was arranged and creatively backed by the Federal Reserve, but at the cost of reducing the value of shares in AIG.

With repos and derivatives, there was moral hazard in that the traders and executives of the narrow units that engaged in exotic transactions were able to claim large bonuses on the basis of short-term profits. But the adverse long-term consequences were spread to the rest of the firm and, ultimately, to taxpayers.

There was cognitive failure in that the collateral calls were an unanticipated risk of the derivatives business. The financial engineers focused on the (remote) chances of default on the underlying securities, not on the intermediate stress that might emerge from collateral calls.

There was policy failure when Congress passed the Commodity Futures Modernization Act. This legislation specified that derivatives would not be regulated by either of the agencies with the staff most qualified to understand them. Rather than require oversight by the Securities and Exchange Commission or the Commodity Futures Trading Commission (which regulated market-traded derivatives), Congress decreed that the regulator responsible for overseeing each firm would evaluate its derivative position. The logic was that a bank that was using derivatives to hedge other transactions should have its derivative position evaluated in a larger context. But, as it happened, the insurance and bank regulators who ended up with this responsibility were not equipped to see the dangers at firms such as AIG.

There was also policy failure in that officials approved of securitization that transferred risk out of the regulated banking sector. While Federal Reserve officials were praising the risk management of commercial banks,11 risk was accumulating in the shadow banking sector (non-bank institutions in the financial system), including AIG insurance, money market funds, Wall Street firms such as Bear Stearns and Lehman Brothers, and major foreign banks. When problems in the shadow banking sector contributed to the freeze in inter-bank lending and in the market for asset-backed commercial paper, policy makers felt compelled to extend bailouts to satisfy these non-bank institutions' needs for liquid assets.

Conclusion

In terms of the fire metaphor suggested earlier, in hindsight we can see that the markets for housing, sub-prime mortgages, mortgage-related securities, and inter-bank lending were all highly flammable just prior to the crisis. Moral hazard, cognitive failures, and policy failures all contributed to the combustible mix.

The crisis also reflects a failure of the economics profession. A few economists, most notably Robert Shiller,12 warned that the housing market was inflated, as indicated by ratios of prices to rents that were high by historical standards. Also, when risk-based capital regulation was proposed in the wake of the Savings and Loan Crisis and the Latin American debt crisis, a group of economists known as the Shadow Financial Regulatory Committee warned that these regulations could be manipulated. They recommended, instead, greater use of senior subordinated debt at regulated financial institutions.13 Many economists warned about the incentives for risk-taking at Freddie Mac and Fannie Mae.14

But even these economists failed to anticipate the 2008 crisis, in large part because economists did not take note of the complex mortgage-related securities and derivative instruments that had been developed. Economists have a strong preference for parsimonious models, and they look at financial markets through a lens that includes only a few types of simple assets, such as government bonds and corporate stock. This approach ignores even the repo market, which has been important in the financial system for over 40 years, and, of course, it omits CDOs, credit default swaps and other, more recent innovations.

Financial intermediaries do not produce tangible output that can be measured and counted. Instead, they provide intangible benefits that economists have never clearly articulated. The economics profession has a long way to go to catch up with modern finance.


About the Author

Arnold Kling was an economist with the Federal Reserve Board and with the Federal Home Loan Mortgage Corporation before launching one of the first Web-based businesses in 1994. His most recent books are Specialization and Trade and The Three Languages of Politics. He earned his Ph.D. in economics from the Massachusetts Institute of Technology.


Footnotes

1. “A desperate plea – then race for a deal before ‘sucker goes down’,” The Guardian, September 26, 2008. https://www.theguardian.com/business/2008/sep/27/wallstreet.useconomy1

2. The report and dissents of the Financial Crisis Inquiry Commission can be found at https://fcic.law.stanford.edu/

3. See Stefania Albanesi, Giacomo De Giorgi, and Jaromir Nosal, 2017, “Credit Growth and the Financial Crisis: A New Narrative,” NBER Working Paper No. 23740. http://www.nber.org/papers/w23740

4. Karl E. Case and Robert J. Shiller, 2003, “Is There a Bubble in the Housing Market?” Cowles Foundation Paper 1089. http://www.econ.yale.edu/shiller/pubs/p1089.pdf

5. Edward M. Gramlich, 2004, “Subprime Mortgage Lending: Benefits, Costs, and Challenges,” Federal Reserve Board speeches. https://www.federalreserve.gov/boarddocs/speeches/2004/20040521/

6. For example, in 1999, Treasury Secretary Lawrence Summers said in a speech, “Debates about systemic risk should also now include government-sponsored enterprises.” See Bethany McLean and Joe Nocera, 2010, All the Devils Are Here: The Hidden History of the Financial Crisis, Portfolio/Penguin Press. The authors write that Federal Reserve Chairman Alan Greenspan was also, like Summers, disturbed by the moral hazard inherent in the GSEs.

7. Jeffrey Friedman and Wladimir Kraus, 2013, Engineering the Financial Crisis: Systemic Risk and the Failure of Regulation, University of Pennsylvania Press.

8. See McLean and Nocera, All the Devils Are Here.

9. Gary Gorton, Toomas Laarits, and Andrew Metrick, 2017, “The Run on Repo and the Fed’s Response,” Stanford working paper. https://www.gsb.stanford.edu/sites/gsb/files/fin_11_17_gorton.pdf

10. Talking Points Memo, 2009, “The Rise and Fall of AIG’s Financial Products Unit.” https://talkingpointsmemo.com/muckraker/the-rise-and-fall-of-aig-s-financial-products-unit

11. Ben S. Bernanke, 2006, “Modern Risk Management and Banking Supervision,” Federal Reserve Board speeches. https://www.federalreserve.gov/newsevents/speech/bernanke20060612a.htm

12. National Public Radio, 2005, “Yale Professor Predicts Housing ‘Bubble’ Will Burst.” https://www.npr.org/templates/story/story.php?storyId=4679264

13. Shadow Financial Regulatory Committee, 2001, “The Basel Committee’s Revised Capital Accord Proposal.” https://www.bis.org/bcbs/ca/shfirect.pdf

14. See the discussion in Viral V. Acharya, Matthew Richardson, Stijn Van Nieuwerburgh, and Lawrence J. White, 2011, Guaranteed to Fail: Fannie Mae, Freddie Mac, and the Debacle of Mortgage Finance, Princeton University Press.


Here are the 10 latest posts from Econlib.

Econlib September 18, 2020

What do models tell us?

Josh Hendrickson has a new post that defends the use of models that might in some respects be viewed as “unrealistic”. I agree with his general point about models, and also his specific defense of models that assume perfect competition. But I have a few reservations about some of his examples:

Ricardian Equivalence holds that governments should be indifferent between generating revenue from taxes or new debt issuances. This is a benchmark. The Modigliani-Miller Theorem states that the value of the firm does not depend on whether it is financing with debt or equity. Again, this is a benchmark. Regardless of what one thinks about the empirical validity of these claims, they provide useful benchmarks in the sense that they give us an understanding of when these claims are true and how to test them. By providing a benchmark for comparison, they help us to better understand the world.

With all that being said, a world without “frictions” is not always the correct counterfactual.

Taken as a whole, this statement is quite reasonable.  But I would slightly take issue with the first sentence, which is likely to mislead some readers.  Ricardian Equivalence doesn’t actually tell the government how it “should” feel about the issue of debt vs. taxes, even if Ricardian Equivalence is true.  Rather it says something like the following:

If the government believes that debt issuance is less efficient than tax financed spending because people don’t account for future tax liabilities, that belief will not be accurate if people do account for future tax liabilities.

But even if people do anticipate the future tax burden created by the national debt, heavy public borrowing may still be less efficient than tax-financed spending because taxes are distortionary, and hence tax rates should be smoothed over time.

I happen to believe Ricardian Equivalence is roughly true, but I still don’t believe the government should be indifferent between taxes and borrowing.  Similarly, I believe that rational expectations is roughly true, and yet also believe that monetary shocks have real effects due to sticky wages.  I believe that the Coase Theorem is true, but also believe that the allocation of resources depends on how legal liability is assigned (due to transactions costs).  Models generally don’t tell us what we should believe about a given issue; rather they address one aspect of highly complex problems.

Here’s Hendrickson on real business cycle theory:

Since the RBC model has competitive and complete markets, the inefficiency of business cycles can be measured by using the RBC as a benchmark. In addition, if your model does not add much insight relative to the RBC model, how valuable can it be?

[As an aside, I agree with Bennett McCallum that either the term ‘real business cycle model’ means a model where business cycles are not caused by nominal shocks interacting with sticky wages/prices, or else the term is meaningless.  There is nothing “real” about a model where nominal shocks cause business cycles.]

Do RBC models provide a useful benchmark for judging inefficiency?  Consider the following analogy:  “A model where there is no gravity provides a useful benchmark for airline industry inefficiency in a world with gravity.”  It is certainly true that airlines could be more fuel efficient in a world with no gravity, but it’s equally true that they have no way to make that happen.  I don’t believe that gravity-free models tell us much of value about airline industry efficiency.

In my view, the concept of efficiency is most useful when applied to policy counterfactuals.  Thus monetary policy A is inefficient if monetary policy B or C produces a better outcome in terms of some plausible metric such as utility or GDP or consumption.  (I do understand that macro outcomes are hard to measure, especially utility, but unless we have some ability to measure outcomes then no one could claim that South Korea is more successful than North Korea.  I’m not that pessimistic about our knowledge of the world.)
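
To make that decision rule concrete, here is a minimal Python sketch of the comparison just described. The policy names and outcome scores are hypothetical placeholders, not estimates from any actual model:

```python
# A minimal sketch, with hypothetical policies and scores, of the
# policy-counterfactual notion of inefficiency described above: policy A is
# inefficient if some alternative policy does better on a chosen metric.

def is_inefficient(outcomes, policy):
    """True if any alternative policy beats `policy` on the outcome metric."""
    return any(score > outcomes[policy]
               for name, score in outcomes.items() if name != policy)

# Hypothetical outcome scores (e.g., an NGDP-stability metric) for three rules.
outcomes = {"policy_A": 0.80, "policy_B": 0.92, "policy_C": 0.88}

print(is_inefficient(outcomes, "policy_A"))  # True: B and C both do better
print(is_inefficient(outcomes, "policy_B"))  # False: nothing beats B
```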

In my view, you don’t measure inefficiency by comparing a sticky-price model featuring nominal shocks against a flexible-price RBC model; rather, you measure efficiency by comparing two different types of monetary policies in a realistic model with sticky prices.

That’s not to say that there are not aspects of RBC models that are useful, and indeed some of those innovations might provide insights into thinking about what sort of fluctuation in GDP would be optimal.  But I don’t believe you can say anything about policy efficiency unless you first embed those RBC insights (on things like productivity shocks) into a sticky wage/price model, and then compare policy alternatives with that model.  I view sticky prices as 90% a given, much like gravity.  (The other 10% is things like minimum wage laws, which can be impacted by policy.)

PS.  Just to be clear, I agree with Hendrickson on the more important issues in his post.  My support for University of Chicago-style perfect competition models definitely puts me on “team Hendrickson”, especially when I consider the direction the broader profession is moving.


Econlib September 18, 2020

Is Modern Democracy So Modern and How?

The Decline and Rise of Democracy, a new book by David Stasavage, a political scientist at New York University, reviews the history of democracy, from “early democracy” to “modern democracy.” I review the book in the just-out Fall issue of Regulation. One short quote from my review about the plight of modern democracy in America:

[Stasavage] notes the “tremendous expansion of the ability of presidents to rule by executive order.” Presidential powers, he explains, “have sometimes been expanded by presidents who cannot be accused of having authoritarian tendencies, such as Barack Obama, only to have this expanded power then used by Donald Trump.” We could, of course, as well say that the new powers grabbed by Trump will likely be used by a future Democratic president “who cannot be accused of authoritarian tendencies,” or perhaps who might legitimately be so accused.

The book is a work of history and political theory, not a partisan book. But the history of democracy has implications for today. An interesting one is how bureaucracy typically helped rulers prevent the development of democracy. Another quote from my review—Stasavage deals with imperial China and I compare it with today’s America:

At the apogee of the Han dynasty, at the beginning of the first millennium CE, there was one bureaucrat for every 440 subjects in the empire. … In the United States, which is at the low end of government bureaucracies in the rich world, public employees at all levels of government translate into one bureaucrat for 15 residents (about one for 79 at the federal level only).

If you read my review in the paper version of Regulation, beware. I made an error in my estimate for the federal bureaucracy and the printed version says “37” instead of “79”. It is corrected in the electronic version. Mea culpa.
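
The ratios quoted above are easy to check with back-of-the-envelope arithmetic. Here is a minimal Python sketch; the population and employment totals below are my own rough assumptions (approximate 2020 magnitudes), not figures from the review or the book, so treat the output as illustrative only:

```python
# Back-of-the-envelope check of the bureaucrats-per-residents ratios above.
# All input figures are assumed, approximate 2020 magnitudes, not sourced data.

def residents_per_bureaucrat(population, public_employees):
    """How many residents there are for each public employee."""
    return population / public_employees

US_POPULATION = 330_000_000        # assumed ~330 million residents
ALL_PUBLIC_EMPLOYEES = 22_000_000  # assumed federal + state + local employees
FEDERAL_EMPLOYEES = 4_200_000      # assumed civilian, postal, and military

print(round(residents_per_bureaucrat(US_POPULATION, ALL_PUBLIC_EMPLOYEES)))  # ~15
print(round(residents_per_bureaucrat(US_POPULATION, FEDERAL_EMPLOYEES)))     # ~79
```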


Econlib September 18, 2020

Raghuram Rajan’s The Third Pillar

In his latest book, Raghuram Rajan, a chaired professor of finance at the University of Chicago’s Booth School of Business and former governor of the Reserve Bank of India, advocates what he calls “inclusive localism.” His basic idea is that there are three pillars of a good and productive society: the market, the state, and the community. He argues that the community, which is the third pillar, nicely balances the excesses of both the free market and the state.

Although there is a strong case to be made for the importance of the community, Rajan does not make it nearly as well as he could have. The Third Pillar contains many insights and important facts, but his argument for inclusive localism is half-hearted. He concedes far too much to the current large state apparatus and, in doing so, implicitly accepts that communities will be weak. Again and again in the book, when contemplating how to make local communities more powerful relative to federal governments, he fails to call for a massive reduction in state power. At times he accepts the state apparatus because he believes, often unjustifiably, in its goodness and effectiveness, and at times he accepts it because he seems to have a status quo bias.

Moreover, although he has better than the median economist’s understanding of the free market, he misses opportunities to point out how the market would straightforwardly solve some of the dilemmas he presents. Rajan also gets some important history wrong. And he makes too weak a case for free trade and favors ending child labor even in third-world countries where children and their families desperately need them to work.

This is from David R. Henderson, “An Unpersuasive Book with Some Encouraging Insights,” my review of Raghuram Rajan, The Third Pillar, Regulation, Fall 2020.

Rajan’s Misunderstanding of the Term “The Dismal Science”

In making his case that we can go too far in the direction of markets, Rajan writes, “Reverend Thomas Robert Malthus epitomized the heartless side of [classical] liberalism, when taken to its extreme.” Commenting on Malthus’s claim that disease, war, and famine would be natural checks on population growth, he writes, “No wonder historian Thomas Carlyle termed economics the ‘dismal science.’” But that is not why Carlyle coined the term. Instead, Carlyle called economics dismal precisely because the dominant economists of his day strongly opposed slavery. That is a big difference.

Rajan on Child Labor

One thing that is well established in economics is that child labor in very poor countries is a boon to children and their families. I made that point in Fortune in 1996 and Nobel economics prizewinner Paul Krugman made it in Slate in 1997. We both pointed out that children who work in “sweat shops” are virtually always better off than in their next best alternative. That next best alternative, if they are lucky, is a lower-paid job in agriculture or, if they are unlucky, picking through garbage or starving. Yet Rajan, who comes from a poor country, writes, “All countries should, of course, respect universal human rights, including refraining from using slave labor or child labor.” He is right on slave labor; he is horribly wrong on child labor. If he got his way, millions of poor children would suffer needlessly.

Rajan Has a Way with Words

One bright spot is Rajan’s refreshing way of expressing insights. For example, he sees a lot of problems with China’s unusual mixed economy and coins a beautiful phrase to describe it: “competitive cronyism.” And here is how he characterizes populism: “Populism, at its core, is a cry for help, sheathed in a demand for respect, and enveloped in the anger of those who feel they have been ignored.”

Read the whole thing.


Econlib September 17, 2020

NASA Is Paying for Moon Rocks

NASA is creating financial incentives for private companies to market lunar resources. This could be a first step to developing lunar mining capabilities. The biggest benefit of the program, though, is precedent. It puts the U.S. government’s imprimatur on space commerce. Given the ambiguities in public international space law, this precedent has the potential to steer space policy and commerce in a pro-market direction.

This is a key paragraph in Alexander William Salter and David R. Henderson, “NASA is Paying for Moon Rocks. The Implications for Space Commerce are Huge,” AIER, September 16, 2020.

Read the whole thing. It’s short.


Econlib September 17, 2020

It’s Complicated: Grasping the Syllogism

A few weeks ago, I presented the following syllogism:


Issue X is complicated.

Perspective Y’s position on X is not complicated.

Therefore, Perspective Y is wrong about X.


Almost all of the comments were critical.  Some notable examples:

Dan:

As someone who used to live in San Francisco and was involved in YIMBY activism, this argument was used frustratingly often by NIMBYs: “The housing crisis is complicated and you can’t simplify it to econ 101, therefore just building more won’t help”. The NIMBYs, after criticizing YIMBYism for being econ 101, then never made an econ 102 argument.

The problem with this argument is that you can make yourself sound wise about anything by claiming that it’s complicated and simple solutions won’t work.

Salem:

How about:

Trade is complicated.

Free traders’ positions on trade aren’t complicated.

Therefore free traders are wrong about trade.

There’s a big difference between an issue being complicated, and a position being complicated. It’s certainly possible to wisely address a complex issue in a simple way, particularly if your solution only has to satisfy one party. For example, “don’t get involved in that messy fight” is normally good advice.

Rob Weir:

The universe is complicated, full of cycles and epicycles, according to Ptolemaic astronomy.

Copernicus has a viewpoint that is not so complicated.

Ergo, Copernicus is wrong.

Notice, though, that my original argument targeted not simple conclusions, but simple perspectives. A conclusion is a summary; a perspective is a full story.  The point of my syllogism is not to dismiss simple answers, but simple thinking.  Let’s consider the three preceding examples in turn.

  1. “Radically deregulate housing in San Francisco” is a simple conclusion with which I agree.  However, if someone added, “In a free market, everyone would live in a mansion” or “Radical deregulation will end homelessness,” I’d still say their perspective is wrong because they neglect the subtleties of the issue.  Deregulation will lead to large price declines, but not large enough to give everyone a mansion.  And highly irresponsible behavior reliably leads to noticeable levels of homelessness even in the cheapest of neighborhoods.

  2. “Free international trade is, all things considered, the best trade policy,” again, is a simple conclusion with which I agree.  However, if someone went on to claim, “When countries impose trade barriers, they always lower the average living standards of their own people,” or “Free trade would make Africa as rich as the United States,” I’d say their perspective is wrong because they neglect the subtleties of the issue.  The terms-of-trade argument is valid, and Africa has a long list of economic woes unrelated to trade policy.

  3. “The Copernican model is true” is another simple conclusion with which I agree.  However, if someone also claimed, “The Ptolemaic model was predictively useless” or “Ptolemy was a fool,” I’d say their perspective is wrong because they neglect the subtleties of the issue.  The Ptolemaic model worked well, and Ptolemy was a genius.


If my syllogism isn’t intended to discredit simple conclusions, what’s the point?  To discredit simple thinkers, of course.  Life is too short to listen to everyone.  Indeed, life is too short to listen to 1% of all the people eager to speak.  So when someone has a simple perspective on a complicated issue, I ignore them.  And so should you.


Econlib September 16, 2020

Case and Deaton on Deaths of Despair and the Future of Capitalism

In their recent book Deaths of Despair and the Future of Capitalism, Anne Case and Nobel economics prizewinner Angus Deaton, both emeritus economists at Princeton University, show that the death rate for middle-age whites without a college degree bottomed out in 1999 and has risen since. They attribute the increase to drugs, alcohol, and suicide. Their data on deaths are impeccable. They are careful not to attribute the deaths to some of the standard but problematic reasons people might think of, such as increasing inequality, poverty, or a lousy health care system. At the same time, they claim that capitalism, pharmaceutical companies, and expensive health insurance are major contributors to this despair.

The dust jacket of their book states, “Capitalism, which over two centuries lifted countless people out of poverty, is now destroying the lives of blue-collar America.” Fortunately, their argument is much more nuanced than the book jacket. But it is also, at times, contradictory. Their discussion of the health care system is particularly interesting both for its insights and for its confusions. In their last chapter, “What to Do?” the authors suggest various policies but, compared to the empirical rigor with which they established the facts about deaths by despair, their proposals are not well worked out. One particularly badly crafted policy is their proposal on the minimum wage.

This is from “Blame Capitalism?”, my review of Deaths of Despair and the Future of Capitalism, in Regulation, Fall 2020.

Another excerpt:

To understand what is behind the increase in the death rate, the authors look at state data and note that death rates increased in all but six states. The largest increases in mortality were in West Virginia, Kentucky, Arkansas, and Mississippi. The only states in which midlife white mortality fell much were California, New York, New Jersey, and Illinois. All four of the latter states, they note, have high levels of formal education. That fact leads them to one of their main “aha!” findings: the huge negative correlation between having a bachelor’s degree and deaths of despair.

To illustrate, they focus on Kentucky, a state with one of the lowest levels of educational attainment. Between the mid-1990s and 2015, Case and Deaton show, for white non-Hispanics age 45–54 who had a four-year college degree, deaths from suicide, drug overdose, or alcoholic liver disease stayed fairly flat at about 25–30 per 100,000. But for that same group without a college degree, deaths in the same categories zoomed up from about 40 per 100,000 in the mid-1990s to a whopping 130 by 2015, over four times the rate for those with a college degree.

Why is a college degree so important? One big difference between those with and without a degree is the probability of being employed. In 2017, the U.S. unemployment rate was a low 3.6%. Among Americans age 25–64 with a bachelor’s degree or more, 84% were employed. By contrast, only 68% of those in the same age range who had only a high school degree were employed.

That leads to two questions. First, why are those without a college degree so much less likely to have jobs? Second, how does the absence of a degree lead to more suicide and drug and alcohol consumption? On the first question, the authors note that a higher percentage of jobs than in the past require higher skills and ability. Also, they write, “some jobs that were once open to nongraduates are now reserved for those with a college degree.”

I wish they had addressed this educational “rat race” in more detail. My Econlog blogging colleague Bryan Caplan, an economist at George Mason University, argues in his 2018 book The Case Against Education that a huge amount of the value of higher education is for people to signal to potential employers that they can finish a major project and be appropriately docile. To the extent he is right, government subsidies to higher education make many jobs even more off-limits to high school graduates. Yet, Case and Deaton do not cite Caplan’s work. Moreover, in their final chapter on what to do, they go the exact wrong way, writing, “Perhaps it is time to up our game to make college the norm?” That policy would further narrow the range of jobs available to nongraduates, making them even worse off.

On the second question—why absence of a degree leads to more deaths of despair—they cite a Gallup poll asking Americans to rate their lives on a scale from 0 (“the worst possible life you can imagine”) to 10 (“the best possible life you can imagine”). Those with a college degree averaged 7.3, while those with just a high school diploma averaged 6.6. That is not a large difference, a fact they do not note.

And note their novel argument for why improved health care, better entertainment through the internet, and more convenience don’t count in people’s real wages:

So, what are the culprits behind the deaths of those without college degrees? Case and Deaton blame the job market and health insurance. Jobs for those without college degrees do not pay as much and do not generally carry much prestige. And, as noted above, Case and Deaton mistakenly think that real wages for such jobs have fallen. Some economists, by adding nonmonetary benefits provided by employers and by noting the amazing goods we can buy with our wages such as cell phones, conclude that even those without a college degree are doing better. Case and Deaton reject that argument. They do not deny that health care now is better than it was 20 years ago, but they write that a typical worker is doing better now than then “only if the improvements—in healthcare, or in better entertainment through the internet, or in more convenience from ATMs—can be turned into hard cash by buying less of the good affected, or less of something else, a possibility that, however desirable, is usually not available.” They continue, “People may be happier as a result of the innovations, but while it is often disputed whether money buys happiness, we have yet to discover a way of using happiness to buy money.”

That thinking is stunning. Over many decades, economists have been accused, usually unjustly, of saying that only money counts. We have usually responded by saying, “No, what counts is utility, the satisfaction we get out of goods and services and life in general.” But now Case and Deaton dismiss major improvements in the happiness provided by goods and services by noting that happiness cannot be converted to money. That is a big step backward in economic thinking.


Read the whole thing.


Econlib September 16, 2020

The Fed can create money

Here’s the Financial Times:

The Fed also has acknowledged it lacks the tools to solve all the problems in the economy, since it can only lend money, but not spend it to help businesses or households. And the Fed is acutely aware that its policies have done plenty to save financial markets from distress, but cannot deliver benefits as easily to low-income families and the unemployed.

That’s entirely false.  The Fed doesn’t just lend money; it can and does create money and also spend the new money on assets in order to boost NGDP and help businesses and households.  This policy delivers benefits to unemployed workers by reducing the unemployment rate.

The Fed is worried that the lack of a fiscal agreement will threaten the recovery and make its job harder. The US central bank does not want to be left alone in propping up the recovery.

Why?

This is good:

Some economists have suggested the Fed might tweak that to include a reference to an average 2 per cent inflation objective “over time” — reflecting its new policy framework.

Investors arguing for the new guidance to be rolled out this week say the Fed risks a loss of credibility if it does not act quickly to reinforce its monetary shift.

Today’s Fed meeting will be much more important than the typical meeting.  We will get some indication as to whether the Fed plans to obey the law—fulfill its mandate from Congress—or go sit in the corner and mope about the fact that fiscal policy is not all that it would prefer.

Bonus question: When the government lends money is that policy expansionary?  When the government borrows money is that policy expansionary?  Does the FT believe that the answer to both questions is yes?


Econlib September 16, 2020

Hello Mind, Nice to Meet Ya

Universities at their best are places where reading, writing, speaking, and (hopefully) listening are carried out at the highest level. The core activity here is sharing words with other people. We share words—written, verbal, and non-verbal—to meet other minds, to learn and share experiences for the sake of mutual betterment. So, as teachers and students return to campus, I thought it might be fun to take a moment to reflect on WORDS, with a little inspiration from Vincent Ostrom, F. A. Hayek, and Stephen King.

Vincent Ostrom, maybe more than any other 20th-century political economist, emphasized the fact that language is a powerful tool. When we name what we experience by assigning words to objects and relationships, we generate “shared communities of understanding” (The Meaning of Democracy and the Vulnerability of Democracies: A Response to Tocqueville’s Challenge, p. 153). These words and the understanding they enable are how people share what they learn with others, including across generations. Through words, our experiences benefit others. I interpret this as similar to Hayek’s claim from The Constitution of Liberty that “civilization begins where the individual can benefit from more knowledge than he can himself acquire, and is able to cope with his ignorance by using knowledge which he does not possess.” Words—along with markets, culture, and law/rules of conduct—form the extended orders that make society possible.

In Stephen King’s On Writing—an intellectual memoir from a true master of words—he equates successful writing with being able to pull off the near-supernatural act of telepathy. Language is a vehicle through which we are able to either send or receive mental images that otherwise would remain electrical impulses with nowhere to go, trapped inside our own minds. He gives the following example of “telepathy in action”:

“Look, here’s a table covered with a red cloth. On it is a cage the size of a small fish aquarium. In the cage is a white rabbit, with a pink nose and pink rimmed eyes. In its front paws is a carrot stub upon which it is contentedly munching. On its back, clearly marked in blue ink, is the numeral eight. Do we see the same thing? We’d have to get together and compare notes to make absolutely sure, but I think we do.”

The quote doesn’t give the chapter full justice, so I definitely recommend reading the whole thing, especially if you have an interest in writing as a craft. He goes on to explain that we might imagine very different details, but nearly everybody comes away with the same understanding about what is important about the description: the blue number eight on the rabbit’s back. This is the puzzle, the unexpected element that makes the information new and unites our attention around an idea. What I take from this is that there’s something about finding the right way to say something—precise but only to the point of usefulness, thorough yet focused, with some understanding of what the reader is bringing to the table—that makes it possible to get a message across in the way it was intended. That makes it possible for two minds to meet.

King’s conclusion is that “You must not come lightly to the blank page.” To write is to transmit ideas to other people’s minds. That’s a serious responsibility that can be carried out well or poorly, put to good use or ill. I can think of no reason why the same admonition should not apply to lectures, conversations, and video presentations.

Vincent Ostrom built on this idea. For Ostrom, language is created through the process of continued communication, and the language that is created then enters back into every aspect of our lives: “The learning, use, and alteration of language articulations is constitutional in character, applicable to the constitutive character of human personality, to patterns of human association, and to the aggregate structure of the conventions of language usage… the way languages are associated with institutions, goods, cultures, and personality attributes means that we find languages permeating all aspects of human existence” (pp. 171–72).

In other words, by embarking on the academic’s quest to use words better, we are all taking on a particularly important constitutive role. Global markets are made up of millions of buyers and sellers scattered around the world. Languages are made up of millions of people talking, reading, writing, listening, and—to borrow King’s analogy—making telepathic connections with each other in an attempt to connect words to better ideas, and better ideas to better lives. It might be an abstract quest, but it’s a noble one. Getting it right can make the world better; getting it wrong can make the world worse.

There are several dozen morals about the importance of the endeavor, of sticking to one’s principles, of mastering the fundamentals, etc. that can be drawn from this, and I don’t really want to moralize or pontificate more than I already have. So I’ll just end by saying that if you’re still reading, it was nice to meet your mind for a moment. I hope we’ll meet again soon.


_Jayme Lemke is a Senior Research Fellow and Associate Director of Academic and Student Programs at the Mercatus Center at George Mason University and a Senior Fellow in the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics._



As an Amazon Associate, Econlib earns from qualifying purchases.


Econlib September 16, 2020

The Great Reconciliation?

What is the best way to reconcile the results for these three polls?

How good is the following heuristic?

The resources you spend mitigating a problem should be directly proportional to its overall severity.

— Bryan Caplan (@bryan_caplan) August 18, 2020


Medically speaking, how bad is coronavirus compared to flu?

— Bryan Caplan (@bryan_caplan) August 17, 2020


How much time, inconvenience, and resources should we spend fighting coronavirus compared to flu?

— Bryan Caplan (@bryan_caplan) August 17, 2020


I’m tempted to just say “cognitive dissonance.”  The effort heuristic makes great sense, and the medical estimate seems about right.  But that in turn implies that past and current coronavirus efforts (public and private) are grossly excessive.  Indeed, do we even spend five hours per year fighting flu?  If so, why should we spend more than twenty-five hours per year fighting coronavirus?  But almost no one feels comfortable with that relaxed attitude, hence the dissonance.
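
Read literally, the heuristic is a simple linear scaling rule. Here is a minimal Python sketch; the input numbers are the post’s own hypotheticals (five hours per year on flu, coronavirus five times as severe), not measured data:

```python
# A minimal sketch of the proportionality heuristic discussed above: effort
# spent mitigating a problem scales linearly with its severity relative to a
# baseline problem. Input numbers are the post's hypotheticals, not data.

def proportional_effort(baseline_effort_hours, severity_ratio):
    """Effort warranted on a problem `severity_ratio` times as bad as baseline."""
    return baseline_effort_hours * severity_ratio

flu_effort_hours = 5.0      # assumed hours per year spent fighting flu
covid_severity_ratio = 5.0  # assumed severity of coronavirus relative to flu

print(proportional_effort(flu_effort_hours, covid_severity_ratio))  # 25.0 hours
```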


Econlib September 16, 2020

The Logic of Protectionist Nationalists

It seems that economics and logic were not the strong fields of protectionist nationalists in college—or at least this is the case with the three lieutenant governors who published an op-ed in The Hill at the beginning of the summer. In the just-published Fall issue of Regulation (the electrons are still hot and the paper version has not yet hit the newsstands), I write:

The USMCA, the authors glowingly wrote, will “increase U.S. annual agricultural exports by $2.2 billion.” This crowing claim comes just a few lines after the statement that “agriculture is what puts food on the table, literally and metaphorically.” They had better take their “metaphorically” very literally because exported agricultural products actually take food away from American tables in order to feed foreigners.

No wonder that with this sort of coherence, protectionists think they can prove anything, including that the only benefit of free trade lies in exports.

For a fuller presentation of the free-market view of trade, have a look at my article. One question it answers is, Why do American exporters work for foreigners? I also review recent data on the impact of the 2018 tariff on the domestic price of washing machines.


