This is my archive


Bonds

Bond markets are important components of capital markets. Bonds are fixed-income financial assets—essentially IOUs that promise the holder a specified set of payments. The value of a bond, like the value of any other asset, is the present value of the income stream one expects to receive from holding the bond. This has several implications:

1. Bond prices vary inversely with market interest rates. Because the stream of promised payments usually is fixed no matter what subsequently happens to interest rates, higher rates reduce the present value of these promised payments, and thus the bond price.
2. The value of bonds falls when people come to expect higher inflation. The reason is that higher expected inflation raises market interest rates, and therefore reduces the present value of the fixed stream of promised payments.
3. The greater the uncertainty about whether the promised payments will be made (the risk that the issuer will default on the promised payments), the lower the expected payments to bondholders and the lower the value of the bond.
4. Bonds whose payments are subject to lower taxation provide investors with higher expected after-tax payments. Because investors are interested in after-tax income, such bonds sell for higher prices.

The major classes of bond issuers are the U.S. government, corporations, and municipal governments. The default risk and tax status differ from one kind of bond to another.

U.S. Government Bonds

The U.S. government is highly unlikely to default on promised payments to its bondholders because the government has the right to tax as well as the authority to print money. Thus, virtually all of the variation in the value of its bonds is due to changes in market interest rates. That is why most securities analysts use prices of U.S. government bonds to compute market interest rates. Because the U.S. government’s tax revenues rarely cover expenditures, it relies on debt financing for the balance. Moreover, on the occasions when the government does not have a budget deficit, it still sells new debt to refinance the old debt as it matures. Most of the debt sold by the U.S. government is marketable, meaning that it can be resold by its original purchaser. Marketable issues include treasury bills, treasury notes, and treasury bonds. The major nonmarketable federal debt sold to individuals is U.S. savings bonds.

Treasury bills have maturities of up to one year and are generally issued in denominations of $10,000. They do not have a stated coupon; that is, the government does not write a separate interest check to the owner. Instead, the U.S. Treasury sells these bills at a discount to their redemption value. The size of the discount determines the effective interest rate on the bill. For instance, a dealer might offer a bill with 120 days left until maturity at a yield of 7.48 percent. To translate this quoted yield into the price, one must “undo” the discount computation. Multiply 7.48 by 120/360 (the fraction of the conventional 360-day year employed in this market) to obtain 2.493, and subtract that from 100 to get 97.507. The dealer is offering to sell the bill for $97.507 per $100 of face value. (A short worked example of this computation appears below.)

Treasury notes and treasury bonds differ from treasury bills in several ways. First, their maturities generally are greater than one year. Notes have maturities of one to seven years, while bonds can be sold with any maturity, but their maturities at issue typically exceed five years.
Second, bonds and notes specify periodic interest (coupon) payments as well as a principal repayment. Third, they normally are registered, meaning that the government records the name and address of the current owner. When treasury notes or bonds are sold initially, their coupon rate is typically set so that they will sell at close to their face (par) value.

Yields on bills, notes, or bonds of different maturities usually differ. (The array of rates associated with bonds of different maturities is referred to as the term structure of interest rates.) Because investors can invest either in a long-term note or in a sequence of short-term bills, expectations about future short-term rates affect current long-term rates. Thus, if the market expects future short-term rates to exceed current short-term rates, then current long-term rates would exceed current short-term rates—the term structure would have a positive slope (see Figure 1). If, for example, the current short-term rate for a one-year T-bill is 5 percent, and the market expects the rate on a one-year T-bill sold one year from now to be 6 percent, then the current two-year rate must exceed 5 percent. If it did not, investors would expect to do better by buying one-year bills today and rolling them over into new one-year bills a year from now.

Savings bonds are offered only to individuals. Two types have been offered, both registered. Series E bonds are essentially discount bonds; investors receive no interest until the bonds are redeemed. Series H bonds pay interest semiannually. Unlike marketable government bonds, which have fixed interest rates, rates received by savings bond holders normally are revised when market rates change.

Some bonds—for instance, U.S. Treasury Inflation-Protected Securities (TIPS)—are indexed for inflation. If, for example, inflation were 10 percent per year, then the value of the bond would be adjusted to compensate for this inflation. If indexation were perfect, the change in expected payments due to inflation would exactly offset the inflation-caused change in market interest rates.

Figure 1

Corporate Bonds

Corporate bonds promise specified payments at specified dates. In general, the interest the bondholder receives is taxed as ordinary income. An issue of corporate bonds generally is covered by a trust indenture, a contract in which the issuer promises a trustee (typically a bank or trust company) that it will comply with the indenture’s provisions (or covenants). These include a promise of payment of principal and interest at stated dates, as well as other provisions such as limitations on the firm’s right to sell pledged property, limitations on future financing activities, and limitations on dividend payments. Potential lenders forecast the likelihood of default on a bond and require higher promised interest rates for higher forecasted default rates. (This difference in promised interest rates between low- and high-risk bonds of the same maturity is called a credit spread.) Bond-rating agencies (Moody’s and Standard and Poor’s, for example) provide an indication of the relative default risk of bonds with ratings that range from Aaa (the best quality) to C (the lowest). Bonds rated Baa and above typically are referred to as “investment grade.” Below-investment-grade bonds are sometimes referred to as “junk bonds.” Junk bonds can carry promised yields that are three to six percentage points higher than those of Aaa bonds.
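To make the arithmetic above concrete, here is a minimal Python sketch of the Treasury-bill discount computation and the two-year rate implied by the term-structure example, using the same numbers as the text. The function names are illustrative choices of our own, and the geometric-average form of the two-year rate reflects the pure expectations logic described above, ignoring any term or risk premium.

```python
# Worked illustration of two computations described in this article:
# (1) translating a Treasury-bill discount quote into a price, and
# (2) the two-year rate implied by expected one-year rates.

def tbill_price(discount_yield_pct, days_to_maturity, face=100.0):
    """Price per `face` of face value under the 360-day discount convention."""
    discount = discount_yield_pct * days_to_maturity / 360.0
    return face * (100.0 - discount) / 100.0

def implied_two_year_rate(one_year_now, one_year_expected_next_year):
    """Two-year rate that makes a two-year bond equivalent to rolling over
    two one-year bills (pure expectations, no term premium)."""
    return ((1 + one_year_now) * (1 + one_year_expected_next_year)) ** 0.5 - 1

# The 7.48 percent, 120-day bill from the text prices at about 97.507 per 100.
print(round(tbill_price(7.48, 120), 3))
# A 5 percent one-year rate plus an expected 6 percent rate next year imply
# a two-year rate of roughly 5.5 percent, above the current 5 percent.
print(round(implied_two_year_rate(0.05, 0.06), 4))
```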
In other words, junk bonds carry a credit spread of three hundred to six hundred basis points, a basis point being one one-hundredth of a percentage point. One way that corporate borrowers can influence the forecasted default rate is to agree to restrictive provisions or covenants that limit the firm’s future financing, dividend, and investment activities—making it more certain that cash will be available to pay interest and principal. With a lower anticipated probability of default, buyers are willing to offer higher prices for the bonds. Corporate officers, thus, must weigh the costs of the reduced flexibility from including the covenants against the benefits of lower interest rates.

Describing all the types of corporate bonds that have been issued would be difficult. Sometimes different names are employed to describe the same type of bond, and, infrequently, the same name will be applied to two quite different bonds. Standard types include the following:

• Mortgage bonds are secured by the pledge of specific property. If default occurs, the bondholders are entitled to sell the pledged property to satisfy their claims. If the sale proceeds are insufficient to cover their claims, they have an unsecured claim on the corporation’s other assets.
• Debentures are unsecured general obligations of the issuing corporation. The indenture will regularly limit issuance of additional secured and unsecured debt.
• Collateral trust bonds are backed by other securities (typically held by a trustee). Such bonds are frequently issued by a parent corporation pledging securities owned by a subsidiary.
• Equipment obligations (or equipment trust certificates) are backed by specific pieces of equipment (railroad rolling stock, aircraft, etc.).
• Subordinated debentures have a lower priority in bankruptcy than ordinary (unsubordinated) debentures. Junior claims are generally paid only after senior claims have been satisfied but rank ahead of preferred and common stock.
• Convertible bonds give the owner the option either to be repaid in cash or to exchange the bonds for a specified number of shares in the corporation.

Municipal Bonds

Historically, interest paid on bonds issued by state and local governments has been exempt from federal income taxes. Such interest may be exempt from state income taxes as well. For instance, the New York tax code exempts interest from bonds issued by New York and Puerto Rico municipalities. Because investors are interested in returns net of tax, municipal bonds generally have promised lower interest rates than other government bonds that have similar risk but that lack this attractive tax treatment. In 2003, the percentage difference (not the percentage point difference) between the yield on long-term U.S. government bonds and the yield on long-term municipals was about 10 percent. Thus, if an individual’s marginal tax rate were higher than 10 percent, the after-tax promised return would be higher from municipal bonds than from taxable government bonds. (Although this difference might appear small, there is a credit spread in municipals just as in corporates.) A simple numerical comparison of the after-tax returns appears below. Municipal bonds typically are designated as either general obligation bonds or revenue bonds. General obligation bonds are backed by the “full faith and credit” (and thus the taxing authority) of the issuing entity. Revenue bonds are backed by a specifically designated revenue stream, such as the revenues from a designated project, authority, or agency, or by the proceeds from a specific tax.
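The sketch below, a minimal Python illustration, makes the after-tax comparison concrete. Only the roughly 10 percent yield gap comes from the article; the yield levels and the tax rates are hypothetical.

```python
# Hypothetical comparison of a taxable government bond with a tax-exempt
# municipal bond whose promised yield is about 10 percent lower.

def after_tax_yield(promised_yield, marginal_tax_rate, tax_exempt=False):
    """Yield the holder keeps after paying tax at the marginal rate."""
    return promised_yield if tax_exempt else promised_yield * (1 - marginal_tax_rate)

treasury_yield = 0.050   # hypothetical taxable long-term government yield
muni_yield = 0.045       # about 10 percent lower, as in the 2003 comparison

for tax_rate in (0.00, 0.05, 0.15, 0.35):
    taxable = after_tax_yield(treasury_yield, tax_rate)
    exempt = after_tax_yield(muni_yield, tax_rate, tax_exempt=True)
    better = "municipal" if exempt > taxable else "taxable government"
    print(f"marginal rate {tax_rate:.0%}: {taxable:.2%} vs. {exempt:.2%} -> {better}")
```

As the text notes, the break-even point is a marginal tax rate of about 10 percent; investors taxed more heavily than that keep more from the lower-yielding municipal bond.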
Revenue bonds frequently are issued by agencies that plan to sell their services at prices that cover their expenses, including the promised payments on the debt. In such cases, the bonds are only as good as the enterprise that backs them. In 1983, for example, the Washington Public Power Supply System (WPPSS), which Wall Street quickly nicknamed “Whoops,” defaulted on $2.25 billion of bonds issued for its number four and number five nuclear power plants, leaving bondholders with much less than they had been promised. Industrial development bonds are used to finance the purchase or construction of facilities to be leased to private firms. Municipalities have used such bonds to subsidize businesses choosing to locate in their area by, in effect, giving them the benefit of loans at tax-exempt rates. Some municipal bonds are still sold in bearer form; that is, possession of the bond itself constitutes proof of ownership. Historically in the United States, most public bonds (government, corporate, and municipal) were bearer bonds. Now, the Internal Revenue Service requires bonds that pay taxable interest to be sold in registered form.

About the Author

Clifford W. Smith is the Epstein Professor of Finance at the William E. Simon Graduate School of Business Administration, University of Rochester. He is an advisory editor of the Journal of Financial Economics and an associate editor of the Journal of Derivatives, the Journal of Risk and Insurance, and the Journal of Financial Services Research.

Further Reading

Brealey, Richard A., and Stewart C. Myers. Principles of Corporate Finance. 7th ed. Boston: McGraw-Hill/Irwin, 2003.
Peavy, John W., and George H. Hempel. “The Effect of the WPPSS Crisis on the Tax-Exempt Bond Market.” Journal of Financial Research 10, no. 3 (1987): 239–247.
Sharpe, William F., Gordon J. Alexander, and Jeffrey V. Bailey. Investments. Upper Saddle River, N.J.: Prentice Hall, 1999.
Smith, Clifford W. Jr., and Jerold B. Warner. “On Financial Contracting: An Analysis of Bond Covenants.” Journal of Financial Economics 7, no. 3 (1979): 117–161.

Related Links

Robert P. Murphy, Government Debt and Future Generations. June 2015.
From the Web:
http://www.Moodys.com
http://www.Investinginbonds.com


Austrian School of Economics

The Austrian school of economics was founded in 1871 with the publication of Carl Menger’s Principles of Economics. Menger, along with William Stanley Jevons and Leon Walras, developed the marginalist revolution in economic analysis. Menger dedicated Principles of Economics to his German colleague William Roscher, the leading figure in the German historical school, which dominated economic thinking in German-language countries. In his book, Menger argued that economic analysis is universally applicable and that the appropriate unit of analysis is man and his choices. These choices, he wrote, are determined by individual subjective preferences and the margin on which decisions are made (see marginalism). The logic of choice, he believed, is the essential building block to the development of a universally valid economic theory. The historical school, on the other hand, had argued that economic science is incapable of generating universal principles and that scientific research should instead be focused on detailed historical examination. The historical school thought the English classical economists mistaken in believing in economic laws that transcended time and national boundaries. Menger’s Principles of Economics restated the classical political economy view of universal laws and did so using marginal analysis. Roscher’s students, especially Gustav Schmoller, took great exception to Menger’s defense of “theory” and gave the work of Menger and his followers, Eugen Böhm-Bawerk and Friedrich Wieser, the derogatory name “Austrian school” because of their faculty positions at the University of Vienna. The term stuck. Since the 1930s, no economists from the University of Vienna or any other Austrian university have become leading figures in the so-called Austrian school of economics. In the 1930s and 1940s, the Austrian school moved to Britain and the United States, and scholars associated with this approach to economic science were located primarily at the London School of Economics (1931–1950), New York University (1944–), Auburn University (1983–), and George Mason University (1981–). Many of the ideas of the leading mid-twentieth-century Austrian economists, such as Ludwig von Mises and F. A. Hayek, are rooted in the ideas of classical economists such as Adam Smith and David Hume, or early-twentieth-century figures such as Knut Wicksell, as well as Menger, Böhm-Bawerk, and Friedrich von Wieser. This diverse mix of intellectual traditions in economic science is even more obvious in contemporary Austrian school economists, who have been influenced by modern figures in economics. These include Armen Alchian, James Buchanan, Ronald Coase, Harold Demsetz, Axel Leijonhufvud, Douglass North, Mancur Olson, Vernon Smith, Gordon Tullock, Leland Yeager, and Oliver Williamson, as well as Israel Kirzner and Murray Rothbard. While one could argue that a unique Austrian school of economics operates within the economic profession today, one could also sensibly argue that the label “Austrian” no longer possesses any substantive meaning. In this article I concentrate on the main propositions about economics that so-called Austrians believe. The Science of Economics Proposition 1: Only individuals choose. Man, with his purposes and plans, is the beginning of all economic analysis. Only individuals make choices; collective entities do not choose.
The primary task of economic analysis is to make economic phenomena intelligible by grounding them in individual purposes and plans; the secondary task of economic analysis is to trace out the unintended consequences of individual choices. Proposition 2: The study of the market order is fundamentally about exchange behavior and the institutions within which exchanges take place. The price system and the market economy are best understood as a “catallaxy,” and thus the science that studies the market order falls under the domain of “catallactics.” These terms derive from the original Greek meanings of the word “katallaxy”—exchange and bringing a stranger into friendship through exchange. Catallactics focuses analytical attention on the exchange relationships that emerge in the market, the bargaining that characterizes the exchange process, and the institutions within which exchange takes place. Proposition 3: The “facts” of the social sciences are what people believe and think. Unlike the physical sciences, the human sciences begin with the purposes and plans of individuals. Where the purging of purposes and plans in the physical sciences led to advances by overcoming the problem of anthropomorphism, in the human sciences, the elimination of purposes and plans results in purging the science of human action of its subject matter. In the human sciences, the “facts” of the world are what the actors think and believe. The meaning that individuals place on things, practices, places, and people determines how they will orient themselves in making decisions. The goal of the sciences of human action is intelligibility, not prediction. The human sciences can achieve this goal because we are what we study, or because we possess knowledge from within, whereas the natural sciences cannot pursue a goal of intelligibility because they rely on knowledge from without. We can understand purposes and plans of other human actors because we ourselves are human actors. The classic thought experiment invoked to convey this essential difference between the sciences of human action and the physical sciences is a Martian observing the “data” at Grand Central Station in New York. Our Martian could observe that when the little hand on the clock points to eight, there is a bustle of movement as bodies leave these boxes, and that when the little hand hits five, there is a bustle of movement as bodies reenter the boxes and leave. The Martian may even develop a prediction about the little hand and the movement of bodies and boxes. But unless the Martian comes to understand the purposes and plans (the commuting to and from work), his “scientific” understanding of the data from Grand Central Station would be limited. The sciences of human action are different from the natural sciences, and we impoverish the human sciences when we try to force them into the philosophical/scientific mold of the natural sciences. Microeconomics Proposition 4: Utility and costs are subjective. All economic phenomena are filtered through the human mind. Since the 1870s, economists have agreed that value is subjective, but, following Alfred Marshall, many argued that the cost side of the equation is determined by objective conditions. Marshall insisted that just as both blades of a scissors cut a piece of paper, so subjective value and objective costs determine price (see microeconomics). But Marshall failed to appreciate that costs are also subjective because they are themselves determined by the value of alternative uses of scarce resources.
Both blades of the scissors do indeed cut the paper, but the blade of supply is determined by individuals’ subjective valuations. In deciding courses of action, one must choose; that is, one must pursue one path and not others. The focus on alternatives in choices leads to one of the defining concepts of the economic way of thinking: opportunity costs. The cost of any action is the value of the highest-valued alternative forgone in taking that action. Since the forgone action is, by definition, never taken, when one decides, one weighs the expected benefits of an activity against the expected benefits of alternative activities. Proposition 5: The price system economizes on the information that people need to process in making their decisions. Prices summarize the terms of exchange on the market. The price system signals to market participants the relevant information, helping them realize mutual gains from exchange. In Hayek’s famous example, when people notice that the price of tin has risen, they do not need to know whether the cause was an increase in demand for tin or a decrease in supply. Either way, the increase in the price of tin leads them to economize on its use. Market prices change quickly when underlying conditions change, which leads people to adjust quickly. Proposition 6: Private property in the means of production is a necessary condition for rational economic calculation. Economists and social thinkers had long recognized that private ownership provides powerful incentives for the efficient allocation of scarce resources. But those sympathetic to socialism believed that socialism could transcend these incentive problems by changing human nature. Ludwig von Mises demonstrated that even if the assumed change in human nature took place, socialism would fail because of economic planners’ inability to rationally calculate the alternative use of resources. Without private ownership in the means of production, Mises reasoned, there would be no market for the means of production, and therefore no money prices for the means of production. And without money prices reflecting the relative scarcities of the means of production, economic planners would be unable to rationally calculate the alternative use of the means of production. Proposition 7: The competitive market is a process of entrepreneurial discovery. Many economists see competition as a state of affairs. But the term “competition” invokes an activity. If competition were a state of affairs, the entrepreneur would have no role. But because competition is an activity, the entrepreneur has a huge role as the agent of change who prods and pulls markets in new directions. The entrepreneur is alert to unrecognized opportunities for mutual gain. By recognizing opportunities, the entrepreneur earns a profit. The mutual learning from the discovery of gains from exchange moves the market system to a more efficient allocation of resources. Entrepreneurial discovery ensures that a free market moves toward the most efficient use of resources. In addition, the lure of profit continually prods entrepreneurs to seek innovations that increase productive capacity. For the entrepreneur who recognizes the opportunity, today’s imperfections represent tomorrow’s profit.1 The price system and the market economy are learning devices that guide individuals to discover mutual gains and use scarce resources efficiently. Macroeconomics Proposition 8: Money is nonneutral. Money is defined as the commonly accepted medium of exchange. 
If government policy distorts the monetary unit, exchange is distorted as well. The goal of monetary policy should be to minimize these distortions. Any increase in the money supply not offset by an increase in money demand will lead to an increase in prices. But prices do not adjust instantaneously throughout the economy. Some price adjustments occur faster than others, which means that relative prices change. Each of these changes exerts its influence on the pattern of exchange and production. Money, by its nature, thus cannot be neutral. This proposition’s importance becomes evident in discussing the costs of inflation. The quantity theory of money stated, correctly, that printing money does not increase wealth. Thus, if the government doubles the money supply, any apparent gain in money holders’ ability to buy goods is offset by the doubling of prices. But while the quantity theory of money represented an important advance in economic thinking, a mechanical interpretation of the quantity theory underestimated the costs of inflationary policy. If prices simply doubled when the government doubled the money supply, then economic actors would anticipate this price adjustment by closely following money supply figures and would adjust their behavior accordingly. The cost of inflation would thus be minimal. But inflation is socially destructive on several levels. First, even anticipated inflation breaches a basic trust between the government and its citizens because government is using inflation to confiscate people’s wealth. Second, unanticipated inflation is redistributive as debtors gain at the expense of creditors. Third, because people cannot perfectly anticipate inflation and because the money is added somewhere in the system—say, through government purchase of bonds—some prices (the price of bonds, for example) adjust before other prices, which means that inflation distorts the pattern of exchange and production. Since money is the link for almost all transactions in a modern economy, monetary distortions affect those transactions. The goal of monetary policy, therefore, should be to minimize these monetary distortions, precisely because money is nonneutral.2 Proposition 9: The capital structure consists of heterogeneous goods that have multispecific uses that must be aligned. Right now, people in Detroit, Stuttgart, and Tokyo are designing cars that will not be purchased for a decade. How do they know how to allocate resources to meet that goal? Production is always for an uncertain future demand, and the production process requires different stages of investment ranging from the most remote (mining iron ore) to the most immediate (the car dealership). The values of all producer goods at every stage of production derive from the value consumers place on the product being produced. The production plan aligns various goods into a capital structure that produces the final goods in, ideally, the most efficient manner. If capital goods were homogeneous, they could be used in producing all the final products consumers desired. If mistakes were made, the resources would be reallocated quickly, and with minimal cost, toward producing the more desired final product. But capital goods are heterogeneous and multispecific; an auto plant can make cars, but not computer chips. The intricate alignment of capital to produce various consumer goods is governed by price signals and the careful economic calculations of investors.
If the price system is distorted, investors will make mistakes in aligning their capital goods. Once the error is revealed, economic actors will reshuffle their investments, but in the meantime resources will be lost.3 Proposition 10: Social institutions often are the result of human action, but not of human design. Many of the most important institutions and practices are not the result of direct design but are the by-product of actions taken to achieve other goals. A student in the Midwest in January trying to get to class quickly while avoiding the cold may cut across the quad rather than walk the long way around. Cutting across the quad in the snow leaves footprints; as other students follow these, they make the path bigger. Although their goal is merely to get to class quickly and avoid the cold weather, in the process they create a path in the snow that actually helps students who come later to achieve this goal more easily. The “path in the snow” story is a simple example of a “product of human action, but not of human design” (Hayek 1948, p. 7). The market economy and its price system are examples of a similar process. People do not intend to create the complex array of exchanges and price signals that constitute a market economy. Their intention is simply to improve their own lot in life, but their behavior results in the market system. Money, law, language, science, and so on are all social phenomena that can trace their origins not to human design, but rather to people striving to achieve their own betterment, and in the process producing an outcome that benefits the public.4 The implications of these ten propositions are rather radical. If they hold true, economic theory would be grounded in verbal logic and empirical work focused on historical narratives. With regard to public policy, severe doubt would be raised about the ability of government officials to intervene optimally within the economic system, let alone to rationally manage the economy. Perhaps economists should adopt the doctors’ creed: “First do no harm.” The market economy develops out of people’s natural inclination to better their situation and, in so doing, to discover the mutually beneficial exchanges that will accomplish that goal. Adam Smith first systematized this message in The Wealth of Nations. In the twentieth century, economists of the Austrian school of economics were the most uncompromising proponents of this message, not because of a prior ideological commitment, but because of the logic of their arguments. About the Author Peter J. Boettke is a professor of economics at George Mason University, where he is also the deputy director of the James M. Buchanan Center for Political Economy and a senior fellow at the Mercatus Center. He is the editor of the Review of Austrian Economics. Further Reading General Reading   Boettke, P., ed. The Elgar Companion to Austrian Economics. Brookfield, Vt.: Edward Elgar, 1994. Dolan, E., ed. The Foundations of Modern Austrian Economics. Mission, Kans.: Sheed and Ward, 1976. Available online at: http://www.econlib.org/library/NPDBooks/Dolan/dlnFMA.html   Classic Readings   Böhm-Bawerk, E. Capital and Interest. 3 vols. 1883. South Holland, Ill.: Libertarian Press, 1956. Available online at: http://www.econlib.org/library/BohmBawerk/bbCI.html Hayek, F. A. Individualism and Economic Order. Chicago: University of Chicago Press, 1948. Kirzner, I. Competition and Entrepreneurship. Chicago: University of Chicago Press, 1973. Menger, C. Principles of Economics. 1871. 
New York: New York University Press, 1976. Mises, L. von. Human Action: A Treatise on Economics. New Haven: Yale University Press, 1949. Available online at: http://www.econlib.org/library/Mises/HmA/msHmA.html O’ Driscoll, G., and M. Rizzo. The Economics of Time and Ignorance. Oxford: Basil Blackwell, 1985. Rothbard, M. Man, Economy and State. 2 vols. New York: Van Nostrand Press, 1962. Vaughn, K. Austrian Economics in America. Cambridge: Cambridge University Press, 1994.   History of the Austrian School of Economics   Boettke, P., and Peter Leeson. “The Austrian School of Economics: 1950–2000.” In Jeff Biddle and Warren Samuels, eds., The Blackwell Companion to the History of Economic Thought. London: Blackwell, 2003. Hayek, F. A. “Economic Thought VI: The Austrian School.” In International Encyclopedia of the Social Sciences. New York: Macmillan, 1968. Machlup, F. “Austrian Economics.” In Encyclopedia of Economics. New York: McGraw-Hill, 1982.   Footnotes 1. Entrepreneurship can be characterized by three distinct moments: serendipity (discovery), search (conscious deliberation), and seizing the opportunity for profit.   2. The search for solutions to this elusive goal generated some of the most innovative work of the Austrian economists and led to the development in the 1970s and 1980s of the literature on free banking by F. A. Hayek, Lawrence White, George Selgin, Kevin Dowd, Kurt Schuler, and Steven Horwitz.   3. Propositions 8 and 9 form the core of the Austrian theory of the business cycle, which explains how credit expansion by the government generates a malinvestment in the capital structure during the boom period that must be corrected in the bust phase. In contemporary economics, Roger Garrison is the leading expositor of this theory.   4. Not all spontaneous orders are beneficial and, thus, this proposition should not be read as an example of a Panglossian fallacy. Whether individuals pursuing their own self-interest generate public benefits depends on the institutional conditions within which they pursue their interests. Both the invisible hand of market efficiency and the tragedy of the commons are results of individuals striving to pursue their individual interests; but in one social setting this generates social benefits, whereas in the other it generates losses. New institutional economics has refocused professional attention on how sensitive social outcomes are to the institutional setting within which individuals interact. It is important, however, to realize that classical political economists and the early neoclassical economists all recognized the basic point of new institutional economists, and that it was only the mid-twentieth-century fascination with formal proofs of general competitive equilibrium, on the one hand, and the Keynesian preoccupation with aggregate variables, on the other, that tended to cloud the institutional preconditions required for social cooperation.   Related Links Steven Horwitz, The Five Best Introductory Books in Austrian Economics. EconLog, December 2019. Steven Horwitz, The Five (okay, ten) Essential Books in Austrian Economics. EconLog, December 2019. Boettke on Austrian Economics. EconTalk, December 2007. Steven Horwitz, Ludwig von Mises’s Socialism: A Still Timely Case Against Marx. October, 2018. Don Boudreaux on Macroeconomics and Austrian Business Cycle Theory. EconTalk, April 2009. Boettke on the Austrian Perspective on Business Cycles and Monetary Policy. EconTalk, January 2009. Edwin G. 
Dolan (ed.), The Foundations of Modern Austrian Economics. Norman Barry, The Tradition of Spontaneous Order. Laurence S. Moss (ed.), The Economics of Ludwig von Mises: Toward a Critical Reappraisal. Boettke on Mises. EconTalk, December 2010. Caldwell on Hayek. EconTalk, January 2011. Boudreaux on Reading Hayek. EconTalk, December 2012.


Bubbles

What Are Bubbles? In 1996, the fledgling Internet portal Yahoo.com made its stock-market debut. This was during a time of great excitement—as well as uncertainty—about the prosperous “new economy” that the rapidly expanding Internet promised. By the beginning of the year 2000, Yahoo shares were trading at $240 each.1 Exactly one year later, however, Yahoo’s stock sold for only $30 per share. A similar story could be told for many of Yahoo’s “dot-com” contemporaries—a substantial period of market-value growth during the late 1990s followed by a rapid decline as the twenty-first century approached. With the benefit of hindsight, many concluded that dot-com stocks were overvalued in the late 1990s, which created an “Internet bubble” that was doomed to burst. Thus, as this account implies, the definition of a bubble involves some characterization of the extent to which an asset is overvalued. Let us define the “fundamental value” of an asset as the present value of the stream of cash flows that its holder expects to receive. These cash flows include the series of dividends that the asset is expected to generate and the expected price of the asset when sold.2 In an efficient market, the price of an asset is equal to its fundamental value. For instance, if a stock is trading at a price below its fundamental value, savvy investors in the market will pounce on the profit opportunity by purchasing more shares of the stock. This will bid up the stock’s price until no further profits can be achieved—that is, until its price equals its fundamental value; the same mechanism works to correct stocks that are trading above their fundamental values. So, if an asset is persistently trading at a price higher than its fundamental value, we would say that its price exhibits a bubble and that the asset is overvalued by an amount equal to the bubble—the difference between the asset’s trading price and its fundamental value. This definition implies that if such bubbles persist, investors are irrational in their failure to profit from the “overpriced” asset. Thus, we refer to this type of bubble as an “irrational bubble.” Over the past few decades, economists have generated a compelling amount of evidence to suggest that asset markets are remarkably efficient. These markets comprise thousands of traders who constantly seek to exploit even the smallest profit opportunities. If irrational bubbles appear, investors can use a variety of market instruments (such as options and short positions) to quickly burst them and achieve profits by doing so. Yet episodes like those of the dot-com era suggest at least the possibility that asset prices might persistently deviate from their fundamental values. Is it, then, possible that the market may at any time succumb to the “madness of crowds”? To see how prices might persistently deviate from traditional market fundamentals, imagine that you are considering an investment in the publicly held firm Bootstrap Microdevices (BM), which is trading at fifty dollars per share. You know that BM will not declare any dividends and have ample reason to believe that one year from now BM will be trading at only ten dollars per share. Yet you also firmly believe that you can sell your BM shares in six months for one hundred dollars each. It would be entirely rational for you to purchase BM shares now and plan to sell them in six months.3 If you did so, you and those who shared your beliefs would be “riding a bubble” and would bid up the price of BM shares in the process. 
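As a minimal sketch of the definition above: the fundamental value is the discounted sum of expected dividends plus the discounted expected sale price, and any persistent gap between the trading price and that value is the bubble component. The numbers, the discount rate, and the variable names below are hypothetical, chosen only to illustrate the bookkeeping.

```python
# Fundamental value as defined above: present value of expected dividends
# plus present value of the expected sale price (hypothetical numbers).

def fundamental_value(expected_dividends, expected_sale_price, discount_rate):
    """Present value of a stream of expected dividends plus the expected
    price received when the asset is sold at the end of the stream."""
    pv_dividends = sum(d / (1 + discount_rate) ** t
                       for t, d in enumerate(expected_dividends, start=1))
    pv_sale = expected_sale_price / (1 + discount_rate) ** len(expected_dividends)
    return pv_dividends + pv_sale

value = fundamental_value([2.0, 2.0, 2.0], 40.0, 0.08)  # three $2 dividends, $40 sale, 8% rate
trading_price = 55.0                                     # hypothetical market price
print(round(value, 2))                  # about 36.91
print(round(trading_price - value, 2))  # bubble component, if the gap persists
```

Under these assumptions the fundamental value is about $36.91, so a market price of $55 that persisted would imply a bubble of roughly $18; a higher expected sale price is what allows a buyer, as in the Bootstrap Microdevices story, to justify paying more without any irrationality.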
The Bootstrap Microdevices example illustrates that if bubbles exist, they might be perpetuated in a manner that would be difficult to call irrational. The key to understanding this is in recalling that an asset’s fundamental value includes its expected price when sold. If investors rationally expect an asset’s selling price to increase, then including this in their assessment of the asset’s fundamental value would be justified. It is possible, then, that the price of such an asset could grow and persist even if the viability of its issuing company is unlikely to support these prices indefinitely. This situation can be called a “rational bubble.”4 Because market fundamentals are based on expectations of future events, bubbles can be identified only after the fact. For instance, it will be several years before we truly understand the impact of the Internet on our economy. It is possible that future innovations based on Internet technologies will fundamentally justify people’s decision to buy and hold Yahoo shares at $240 each. In this light, it would be difficult to condemn those who paid such a price for Yahoo shares at a time when Internet usage was growing exponentially. Can bubbles, rational or otherwise, exist? An ex post examination of history’s so-called famous first bubbles helps to answer this question. Famous First Bubbles The Tulip Bubble Tulip bulb speculation in seventeenth-century Holland is widely recounted as a classic example of how bubbles can be generated by the “madness of crowds.”5 In 1593, tulip bulbs arrived in Holland and subsequently became fashionable accessories for elite households. A handful of bulbs were infected with a virus known as mosaic, so named for the brilliant mosaic of colors exhibited by flowers from infected bulbs. These rare bulbs soon became symbols of their owners’ prominence and vehicles for speculation. In 1625, an especially rare type of infected bulb called Semper Augustus sold for two thousand guilders—about $23,000 in 2003 dollars. By 1627, at least one of these bulbs was known to have sold at today’s equivalent of $70,000. The growth in value of the Semper Augustus continued until a dramatic decline in early 1637, when the bulbs could not be sold for more than 10 percent of their peak value. The dramatic rise and fall of Semper Augustus prices, and the fortunes made and lost on them, exhibited the symptoms of a classic bubble. Yet economist Peter Garber provided compelling evidence that “tulipmania” did not generate a bubble. He argued that the dynamics of bulb prices during the tulip episode were typical of even today’s market for rare bulbs. It is important to note that the mosaic virus could not be systematically introduced to common bulb types. The only way to cultivate a prized Semper Augustus was to raise it from the offshoot bud of an infected mother bulb. Just as the fundamental value of a stock includes its expected stream of dividends, the fundamental value of a Semper Augustus included its expected stream of rare offspring. As the rare bulbs were introduced to the public, their growing popularity, combined with their limited supply, commanded a high price. This price was pushed up by speculators who hoped to profit from the bulb’s popularity by cultivating its valuable offspring. These offspring expanded the supply of bulbs, making them less rare, and thus less valuable. Perhaps tulips’ decreased popularity accelerated this downward trend in bulb prices.
Interestingly, a small quantity of prototype lily bulbs sold at a 1987 Netherlands flower auction for more than $900,000 in 2003 dollars, and their offspring now sell at a tiny fraction of this price; yet no one mentions “lilymania.” The Mississippi and South Sea Bubbles In 1717, John Law organized the Compagnie d’Occident to take over the French government’s trade monopolies in Louisiana and on Canadian beaver pelts. The company was later renamed the Compagnie des Indies following a series of mergers and acquisitions, including France’s Banque Royale, whose notes were guaranteed by the crown. Eventually the company acquired the right to collect all taxes and mint new coinage, and it funded these enterprises with a series of share issues at successively higher prices. Shares sold for five hundred livres each at the company’s onset, but their price increased to nearly ten thousand livres in October 1720 after these expansive moves. By September 1721, however, shares of the Compagnie des Indies fell back to their original value of five hundred livres. Meanwhile, in England, the South Sea Company, whose only notable asset at the time was a defunct trade monopoly with the Spanish colonies in South America, had its own expansion plans. The company’s goals were not as well defined as those of its French counterpart, but it managed to gain broad parliamentary support through a series of bribes and generous share allowances. In January 1720, South Sea shares sold for 120 pounds each. This price rose to 1,000 pounds by June of that year through a series of new issues. By October, however, prices fell to 290 pounds. Were Compagnie des Indies and South Sea Company shareholders riding bubbles? Peter Garber provided a detailed account of how market fundamentals, not irrational speculation or rational bubble dynamics, might have driven these price movements. The companies started with similar plans to finance their ventures by acquiring government debt in exchange for shares. This generated streams of government cash payments, at reduced interest rates, that could be used as leverage to finance each company’s commercial enterprises. With this came an extraordinary degree of visible privilege and support from their governments, extending all the way up to their royal families. The remarkable credibility of each company’s potential for profit and growth may well have justified their peak share prices based on market fundamentals. The decline of South Sea share prices began with Parliament’s passage of the Bubble Act in June 1720—an act that was intended to limit the expansion of the South Sea Company’s competitors. This placed downward price pressure on competitors’ shares that were largely bought on margin. A wave of selling, including South Sea shares, ensued in a scramble for liquidity to meet these margins. As prices continued to drop, Parliament turned against the company and liquidated its assets. In France, the fall of the Compagnie des Indies was more complex. At the peak of its market value, many investors wanted to convert their capital gains into the more tangible asset of gold. Of course, there was not enough gold in France at the time to satisfy all of these desires, just as there is not enough gold in the United States today to back each dollar. The Banque Royale intervened by fixing Compagnie share prices at nine thousand livres and exchanging its notes for Compagnie stock. 
Within a few months, France’s money supply was effectively doubled since Banque Royale notes were considered legal tender. A period of hyperinflation ensued, followed by the company’s stopgap deflationary efforts of reducing the fixed price of Compagnie shares to five thousand livres. Confidence in the company dissolved, and John Law was eventually removed from power. This brief account shows how each company’s rise and fall is traceable to events that were likely to change how investors fundamentally valued South Sea and Compagnie shares, contrary to what a bubble hypothesis would suggest. These companies were essentially performing large-scale financial experiments based on prospects for long-term growth. They ultimately failed, but they could well have shown enough promise to convince even the most incredulous investors of their potential for success. It would be difficult to characterize what may have been rational behavior ex ante as evidence of bubble formation. Indeed, John Law’s operations with the Banque Royale essentially attempted to expand French commerce by expanding France’s money supply. This monetary policy is one that an entire generation of Keynesian economists promoted more than two hundred years after the Mississippi and South Sea “bubbles.” Yet few economists, even those highly dismissive of Keynesian economics, are willing to call Keynesians irrational. The Modern Bubble Debate The jury is still out on whether or not bubbles can persist in modern asset markets. Debates continue among economists even on the existence of irrational or rational bubbles. And there is often confusion in trying to distinguish irrational bubbles from rational bubbles that might be generated by investors’ rational but flawed perceptions of market fundamentals. Most modern efforts focus on developing sophisticated statistical methods to detect bubbles, but none has enjoyed a consensus of support among economists. About the Author Seiji Steimetz is an economics professor at California State University at Long Beach. He was previously a senior consultant at Bates White LLC, an economic consulting firm. Further Reading Introductory   Garber, Peter. Famous First Bubbles: The Fundamentals of Early Manias. Cambridge: MIT Press, 2000. Mackay, Charles. Memoirs of Extraordinary Popular Delusions and the Madness of Crowds. London: Office of National Illustrated Library, 1852. Available online at: http://www.econlib.org/library/Mackay/macEx.html Malkiel, Burton. A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing. New York: Norton, 2003. Shiller, Robert. Irrational Exuberance. Princeton: Princeton University Press, 2000. Smant, David. “Famous First Bubbles or Bubble Myths Explained?” Available online at: http://www.few.eur.nl/few/people/smant/m-economics/bubbles.htm.   Advanced   Abreu, Dilip, and Markus Brunnermeier. “Bubbles and Crashes.” Econometrica 71 (2003): 173–204. Evans, George. “Pitfalls in Testing for Explosive Bubbles in Asset Prices.” American Economic Review 4 (1991): 922–930. Flood, Robert, and Robert Hodrick. “On Testing for Speculative Bubbles.” Journal of Economic Perspectives 4 (1990): 85–101. Garber, Peter. “Famous First Bubbles.” Journal of Economic Perspectives 4 (1990): 35–54. Garber, Peter. “Tulipmania.” Journal of Political Economy 3 (1989): 535–560. Shiller, Robert. “Speculative Prices and Popular Models.” Journal of Economic Perspective 4 (1990): 55–65. Stiglitz, Joseph. “Symposium on Bubbles.” Journal of Economic Perspectives 4 (1990): 13–18.   
Footnotes 1. This is a split-adjusted figure. The actual trading price at the time was $475 per share.   2. If the asset is to be held forever, its fundamental value is just the present value of its expected dividend stream since the present value of any dollar amount to be received an infinite number of years from now is zero.   3. In doing so, one might say that you were applying the “greater fool theory” to your investment decision, thereby building a “castle in the air.”   4. Economists often refer to these types of bubble conditions as “bootstrap equilibria.” High prices are thought to be held high by self-fulfilling prophecies, just as one might attempt to hold himself high off the ground by pulling up on his bootstraps.   5. This section is based primarily on the influential work of economist Peter Garber.   Related Links Eugene Fama, from the Concise Encyclopedia of Economics Fama on Finance. EconTalk, January 2012. Stock Market, from the Concise Encyclopedia of Economics Shiller on Housing and Bubbles. EconTalk, September 2008. Pedro Schwartz, Housing Bubbles…and the Laboratory. April 2015.


Behavioral Economics

How Behavioral Economics Differs from Traditional Economics All of economics is meant to be about people’s behavior. So, what is behavioral economics, and how does it differ from the rest of economics? Economics traditionally conceptualizes a world populated by calculating, unemotional maximizers that have been dubbed Homo economicus. The standard economic framework ignores or rules out virtually all the behavior studied by cognitive and social psychologists. This “unbehavioral” economic agent was once defended on numerous grounds: some claimed that the model was “right”; most others simply argued that the standard model was easier to formalize and practically more relevant. Behavioral economics blossomed from the realization that neither point of view was correct. The standard economic model of human behavior includes three unrealistic traits—unbounded rationality, unbounded willpower, and unbounded selfishness—all of which behavioral economics modifies. Nobel Memorial Prize recipient Herbert Simon (1955) was an early critic of the idea that people have unlimited information-processing capabilities. He suggested the term “bounded rationality” to describe a more realistic conception of human problem-solving ability. The failure to incorporate bounded rationality into economic models is just bad economics—the equivalent to presuming the existence of a free lunch. Since we have only so much brainpower and only so much time, we cannot be expected to solve difficult problems optimally. It is eminently rational for people to adopt rules of thumb as a way to economize on cognitive faculties. Yet the standard model ignores these bounds. Departures from rationality emerge both in judgments (beliefs) and in choices. The ways in which judgment diverges from rationality are extensive (see Kahneman et al. 1982). Some illustrative examples include overconfidence, optimism, and extrapolation. An example of suboptimal behavior involving two important behavioral concepts, loss aversion and mental accounting, is a mid-1990s study of New York City taxicab drivers (Camerer et al. 1997). These drivers pay a fixed fee to rent their cabs for twelve hours and then keep all their revenues. They must decide how long to drive each day. The profit-maximizing strategy is to work longer hours on good days—rainy days or days with a big convention in town—and to quit early on bad days. Suppose, however, that cabbies set a target earnings level for each day and treat shortfalls relative to that target as a loss. Then they will end up quitting early on good days and working longer on bad days. The authors of the study found that this is precisely what they do. Consider the second vulnerable tenet of standard economics, the assumption of complete self-control. Humans, even when we know what is best, sometimes lack self-control. Most of us, at some point, have eaten, drunk, or spent too much, and exercised, saved, or worked too little. Though people have these self-control problems, they are at least somewhat aware of them: they join diet plans and buy cigarettes by the pack (because having an entire carton around is too tempting). They also pay more withholding taxes than they need to in order to assure themselves a refund; in 1997, nearly ninety million tax returns paid an average refund of around $1,300. Finally, people are boundedly selfish. Although economic theory does not rule out altruism, as a practical matter economists stress self-interest as people’s primary motive. 
For example, the free-rider problems widely discussed in economics are predicted to occur because individuals cannot be expected to contribute to the public good unless their private welfare is thus improved. But people do, in fact, often act selflessly. In 1998, for example, 70.1 percent of all households gave some money to charity, the average dollar amount being 2.1 percent of household income.1 Likewise, 55.5 percent of the population age eighteen or more did volunteer work in 1998, with 3.5 hours per week being the average hours volunteered.2 Similar selfless behavior has been observed in controlled laboratory experiments. People often cooperate in prisoners’ dilemma games and turn down unfair offers in “ultimatum” games. (In an ultimatum game, the experimenter gives one player, the proposer, some money, say ten dollars. The proposer then makes an offer of x, equal to or less than ten dollars, to the other player, the responder. If the responder accepts the offer, he gets x and the proposer gets 10 − x. If the responder rejects the offer, then both players get nothing.) Standard economic theory predicts that proposers will offer a token amount (say twenty-five cents) and responders will accept, because twenty-five cents is better than nothing. But experiments have found that responders typically reject offers of less than 20 percent (two dollars in this example). Behavioral Finance If economists had been asked in the mid-1980s to name a discipline within economics to which bounded rationality was least likely to apply, finance would probably have been the one most often named. One leading economist called the efficient markets hypothesis (see definition below), which follows from traditional economic thinking, the best-established fact in economics. Yet finance is perhaps the branch of economics where behavioral economics has made the greatest contributions. How has this happened? Two factors contributed to the surprising success of behavioral finance. First, financial economics in general, and the efficient market hypothesis (see efficient capital markets) in particular, generated sharp, testable predictions about observable phenomena. Second, high-quality data are readily available to test these sharp predictions. The rational efficient markets hypothesis states that stock prices are “correct” in the sense that asset prices reflect the true or rational value of the security. In many cases, this tenet of the efficient market hypothesis is untestable because intrinsic values are not observable. In some special cases, however, the hypothesis can be tested by comparing two assets whose relative intrinsic values are known. Consider closed-end mutual funds (Lee et al. 1991). These funds are much like typical (open-end) mutual funds, except that to cash out of the fund, investors must sell their shares on the open market. This means that the market prices of closed-end funds are determined by supply and demand rather than set equal to the value of their assets by the fund managers, as in open-end funds. Because closed-end funds’ holdings are public, market efficiency would mean that the price of the fund should match the price of the underlying securities they hold (the net asset value, or NAV). Instead, closed-end funds typically trade at substantial discounts relative to their NAV, and occasionally at substantial premia. Most interesting from a behavioral perspective is that closed-end fund discounts are correlated with one another and appear to reflect individual investor sentiment.
(Individual investors rather than institutions are the primary owners of closed-end funds.) Lee and his colleagues found that discounts shrank in months when shares of small companies (also owned primarily by individuals) did well and in months when there was a lot of initial public offering (IPO) activity, indicating a “hot” market. Since these findings were predicted by behavioral finance theory, they move the research beyond the demonstration of an embarrassing fact (price not equal to NAV) toward a constructive understanding of how markets work. The second principle of the efficient market hypothesis is unpredictability. In an efficient market, it is not possible to predict future stock price movements based on publicly available information. Many early violations of this principle had no explicit link to behavior. Thus it was reported that small firms and “value firms” (firms with low price-to-earnings ratios) earned higher returns than other stocks with the same risk. Also, stocks in general, but especially stocks of small companies, have done well in January and on Fridays (but poorly on Mondays). An early study by Werner De Bondt and Richard Thaler (1985) was explicitly motivated by the psychological finding that individuals tend to overreact to new information. For example, experimental evidence suggested that people tended to underweight base rate data (or prior information) in incorporating new data. De Bondt and Thaler hypothesized that if investors behave this way, then stocks that perform quite well over a period of years will eventually have prices that are too high because people overreacting to the good news will drive up their prices. Similarly, poor performers will eventually have prices that are too low. This yields a prediction about future returns: past “winners” ought to underperform, while past “losers” ought to outperform the market. Using data for stocks traded on the New York Stock Exchange, De Bondt and Thaler found that the thirty-five stocks that had performed the worst over the past five years (the losers) outperformed the market over the next five years, while the thirty-five biggest winners over the past five years subsequently underperformed. Follow-up studies showed that these early results cannot be attributed to risk; by some measures the portfolio of losers was actually less risky than the portfolio of winners. More recent studies have found other violations of unpredictability that have the opposite pattern from that found by De Bondt and Thaler, namely underreaction rather than overreaction. Over short periods—for example, six months to one year—stocks display momentum: the stocks that go up the fastest for the first six months of the year tend to keep going up. Also, after many corporate announcements such as large earnings changes, dividend initiations and omissions, share repurchases, and splits, the price jumps initially on the day of the announcement and then drifts slowly upward for a year or longer (see Shleifer 2000 for a nice introduction to the field). Behavioral economists have also hypothesized that investors are reluctant to realize capital losses because doing so would mean that they would have to “declare” the loss to themselves. Hersh Shefrin and Meir Statman (1985) dubbed this hypothesis the “disposition effect.” Interestingly, the tax law encourages just the opposite behavior. 
Yet Terrance Odean (1998) found that in a sample of customers of a discount brokerage firm, investors were more likely to sell a stock that had increased in value than one that had decreased. While around 15 percent of all gains were realized, only 10 percent of all losses were realized. Odean showed, moreover, that the loser stocks that were held underperformed the gainer stocks that were sold. Saving If finance was held to be the field in which a behavioral approach was least likely, a priori, to succeed, saving had to be one of the most promising. Although the standard life-cycle model of savings abstracts from both bounded rationality and bounded willpower, saving for retirement is both a difficult cognitive problem and a difficult self-control problem. It is thus perhaps less surprising that a behavioral approach has been fruitful here. As in finance, progress has been helped by the combination of a refined standard theory with testable predictions and abundant data sources on household saving behavior. Suppose that Tom is a basketball player and therefore earns most of his income early in his life, while Ray is a manager who earns most of his income late in life. The life-cycle model predicts that Tom would save his early income to increase consumption later in life, while Ray would borrow against future income to increase consumption earlier in life. The data do not support this prediction. Instead, they show that consumption tracks income over individuals’ life cycles much more closely than the standard life-cycle model predicts. Furthermore, the departures from predicted behavior cannot be explained merely by people’s inability to borrow. James Banks, Richard Blundell, and Sarah Tanner (1998) showed, for example, that consumption drops sharply as individuals retire and their incomes drop because they have not saved enough for retirement. Indeed, many low- to middle-income families have essentially no savings. The primary cause of this lack of saving appears to be lack of self-control. One bit of evidence supporting this conclusion is that virtually all of Americans’ saving takes place in forms that are often called “forced savings”—for example, accumulating home equity by paying the mortgage and participating in pension plans. Coming full circle, individuals may impose another type of “forced” savings on themselves—high tax withholding—so that when the refund comes, they can buy something they might not have had the willpower to save up for. One of the most interesting research areas has been devoted to measuring the effectiveness of tax-advantaged savings programs such as individual retirement accounts (IRAs) and 401(k) plans. Consider the original IRA program of the early 1980s. This program provided tax subsidies for savings up to a threshold, often two thousand dollars per year. Because there was no tax incentive to save more than two thousand dollars per year, those saving more than the threshold should not have increased their total saving, but instead should have merely switched some money from a taxable account to the IRA. Yet, by some accounts, these programs appear to have generated substantial new savings. Some researchers argue that almost every dollar of savings in IRAs appears to represent new savings. In other words, people are not simply shifting their savings into IRAs and leaving their total behavior unchanged. Similar results are found for 401(k) plans. 
The behavioral explanation for these findings is that IRAs and 401(k) plans help solve self-control problems by setting up special mental accounts that are devoted to retirement savings. Households tend to respect the designated use of these accounts, and the tax penalty that must be paid if funds are removed prematurely bolsters people’s self-control.3 An interesting flip side to IRA and 401(k) programs is that these programs have generated far less than the full participation expected. Many eligible people do not participate, forgoing, in effect, a cash transfer from the government (and in some cases from their employer). Ted O’Donoghue and Matthew Rabin (1999) presented an explanation based on procrastination and hyperbolic discounting. Individuals typically show very sharp impatience for short-horizon decisions, but much more patience at long horizons. This behavior is often referred to as hyperbolic discounting, in contrast to the standard assumption of exponential discounting, in which patience is independent of horizon. In exponential models, people are equally patient at long and short horizons. O’Donoghue and Rabin argued that hyperbolic individuals will show exactly the low IRA participation that we observe. Though hyperbolic people will eventually want to participate in IRAs (because they are patient in the long run), something always comes up in the short run (where they are very impatient) that provides greater immediate reward. Consequently, they may indefinitely delay starting an IRA. If people procrastinate about joining the savings plan, then it should be possible to increase participation rates simply by lowering the psychic costs of joining. One simple way of accomplishing this is to switch the default option for new workers. In most companies, employees who become eligible for the 401(k) plan receive a form inviting them to join; to join, they have to send the form back and make some choices. The default option, therefore, is not to join. Several firms have made the seemingly inconsequential change of switching the default: employees are enrolled into the plan unless they explicitly opt out. This change often produces dramatic increases in savings rates. For example, in one company studied by Brigitte C. Madrian and Dennis F. Shea (2000), the employees who joined after the default option was switched were 50 percent more likely to participate than the workers in the year prior to the change. The authors also found that the default asset allocation—that is, the allocation the firm made among stocks, bonds, and so on if the employee made no explicit choice—had a strong effect on workers’ choices. The firm had made the default asset allocation 100 percent in a money market account, and the proportion of workers “selecting” this allocation soared. It is possible to go further and design institutions that help people make better choices, as defined by the people who choose. One successful effort along these lines is Richard Thaler and Shlomo Benartzi’s (2004) “Save More Tomorrow” program (SMarT). Under the SMarT plan, employers invite their employees to join a plan in which employees’ contribution rates to their 401(k) plan increase automatically every year (say, by two percentage points). The increases are timed to coincide with annual raises, so the employee never sees a reduction in take-home pay, thus avoiding loss aversion (at least in nominal terms). 
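The escalation arithmetic behind SMarT is easy to check. Below is a minimal sketch in Python using assumed numbers (a $50,000 starting salary, 3 percent annual raises, and contribution increases of two percentage points per raise; these figures are illustrative, not taken from Thaler and Benartzi's study). With these numbers, nominal take-home pay rises every year even as the saving rate more than triples.

```python
# Illustrative sketch of the Save More Tomorrow (SMarT) escalation logic.
# All numbers are assumptions for illustration, not figures from the study:
# $50,000 starting salary, 3% annual raises, contributions starting at 3.5%
# of pay and rising by two percentage points with each raise.

salary = 50_000.0
contribution_rate = 0.035
raise_rate = 0.03
step = 0.02

take_home = salary * (1 - contribution_rate)
print(f"Year 0: salary {salary:>9,.0f}  saving {contribution_rate:.1%}  take-home {take_home:>9,.0f}")

for year in range(1, 5):
    salary *= 1 + raise_rate           # the annual raise arrives first...
    contribution_rate += step          # ...then the contribution rate steps up
    new_take_home = salary * (1 - contribution_rate)
    assert new_take_home >= take_home  # nominal take-home pay never falls here
    take_home = new_take_home
    print(f"Year {year}: salary {salary:>9,.0f}  saving {contribution_rate:.1%}  take-home {take_home:>9,.0f}")
```

The loss-aversion point is visible in the output: each year's paycheck is slightly larger than the last in nominal terms, so the saver never experiences a cut, even though an ever-larger share of pay is being diverted to the 401(k).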
In the first company that adopted the SMarT plan, the participants who joined the plan increased their savings rates from 3.5 percent to 13.6 percent after four pay raises (Thaler and Benartzi 2004). About the Authors Richard H. Thaler is the Ralph and Dorothy Keller Distinguished Service Professor of Economics and Behavioral Science at the University of Chicago’s Graduate School of Business, where he is director of the Center for Decision Research. He is also a research associate at the National Bureau of Economic Research (NBER), where he codirects the behavioral economics project. Sendhil Mullainathan is a professor of economics at Harvard University and a research associate with the NBER. In 2002, he was awarded a grant from the MacArthur Fellows Program. Further Reading   Banks, James, Richard Blundell, and Sarah Tanner. “Is There a Retirement-Savings Puzzle?” American Economic Review 88, no. 4 (1998): 769–788. Camerer, Colin, Linda Babcock, George Loewenstein, and Richard H. Thaler. “Labor Supply of New York City Cabdrivers: One Day at a Time.” Quarterly Journal of Economics 112, no. 2 (1997): 407–441. Conlisk, John. “Why Bounded Rationality?” Journal of Economic Literature 34, no. 2 (1996): 669–700. De Bondt, Werner F. M., and Richard H. Thaler. “Does the Stock Market Overreact?” Journal of Finance 40, no. 3 (1985): 793–805. DeLong, Brad, Andrei Shleifer, Lawrence Summers, and Robert Waldman. “Noise Trader Risk in Financial Markets.” Journal of Political Economy 98, no. 4 (1990): 703–738. Kahneman, Daniel, and Amos Tversky. “Judgement Under Uncertainty: Heuristics and Biases.” Science 185 (1974): 1124–1131. Kahneman, Daniel, and Amos Tversky. “Prospect Theory: An Analysis of Decision Under Risk.” Econometrica 47, no. 2 (1979): 263–291. Kahneman, Daniel, Paul Slovic, and Amos Tversky. Judgement Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press, 1982. Laibson, David. “Golden Eggs and Hyperbolic Discounting.” Quarterly Journal of Economics 112, no. 2 (1997): 443–477. Lee, Charles M. C., Andrei Shleifer, and Richard H. Thaler. “Investor Sentiment and the Closed-End Fund Puzzle.” Journal of Finance 46, no. 1 (1991): 75–109. Madrian, Brigitte C., and Dennis F. Shea. “The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior.” Quarterly Journal of Economics 116, no. 4 (2000): 1149–1187. Odean, Terrance. “Are Investors Reluctant to Realize Their Losses?” Journal of Finance 53, no. 5 (1998): 1775–1798. O’Donoghue, Ted, and Matthew Rabin. “Procrastination in Preparing for Retirement.” In Henry Aaron, ed., Behavioral Dimensions of Retirement Economics. Washington, D.C.: Brookings Institution, 1999. Shefrin, Hersh, and Meir Statman. “The Disposition to Sell Winners Too Early and Ride Losers Too Long: Theory and Evidence.” Journal of Finance 40, no. 3 (1985): 777–790. Shleifer, Andrei. Inefficient Markets: An Introduction to Behavioral Finance. Clarendon Lectures. Oxford: Oxford University Press, 2000. Shleifer, Andrei, and Robert Vishny. “The Limits of Arbitrage.” Journal of Finance 52, no. 1 (1997): 35–55. Simon, Herbert A. “A Behavioral Model of Rational Choice.” Quarterly Journal of Economics 69 (February 1955): 99–118. Thaler, Richard H. “Mental Accounting and Consumer Choice.” Marketing Science 4, no. 3 (1985): 199–214. Thaler, Richard H., and Shlomo Benartzi. “Save More Tomorrow: Using Behavioral Economics to Increase Employee Saving.” Journal of Political Economy 112 (February 2004): S164–S187.   
Footnotes
* This article is a revision of a manuscript originally written as an entry in the International Encyclopedia of the Social and Behavioral Sciences.
1. Data are from the Chronicle of Philanthropy (1999), available online at: http://philanthropy.com/free/articles/v12/i01/1201whodonated.htm.
2. Data are from Independent Sector (2004), available online at: http://www.independentsector.org/programs/research/volunteer_time.html.
3. Some issues remain controversial. See the debate in the fall 1996 issue of the Journal of Economic Perspectives.
Related Links Richard Thaler on Libertarian Paternalism. EconTalk, November 2006. Phil Rosenzweig on Leadership, Decisions, and Behavioral Economics. EconTalk, April 2015. Rubinstein on Game Theory and Behavioral Economics. EconTalk, April 2011. Richard Epstein on Happiness, Inequality, and Envy. EconTalk, November 2008. Rosenberg on the Nature of Economics. EconTalk, September 2011. The Economics of Paternalism. EconTalk, September 2006. More EconTalk episodes on Behavioral Economics. Richard McKenzie, Market Competitiveness and Rationality: A Brain-Focused Perspective. October, 2019. Richard McKenzie, Of Diet Cokes and Brain-Focused Economics. March, 2018. Richard McKenzie, Predictably Rational or Predictably Irrational? January, 2010. Arnold Kling, Phools and Their Money. October, 2015.


Business Cycles

The United States and all other modern industrial economies experience significant swings in economic activity. In some years, most industries are booming and unemployment is low; in other years, most industries are operating well below capacity and unemployment is high. Periods of economic prosperity are typically called expansions or booms; periods of economic decline are called recessions or depressions. The combination of expansions and recessions, the ebb and flow of economic activity, is called the business cycle. Business cycles as we know them today were codified and analyzed by Arthur Burns and Wesley Mitchell in their 1946 book Measuring Business Cycles. One of Burns and Mitchell’s key insights was that many economic indicators move together. During an expansion, not only does output rise, but also employment rises and unemployment falls. New construction also typically increases, and inflation may rise if the expansion is particularly brisk. Conversely, during a recession, the output of goods and services declines, employment falls, and unemployment rises; new construction also declines. In the era before World War II, prices also typically fell during a recession (i.e., inflation was negative); since the 1950s prices have continued to rise during downturns, though more slowly than during expansions (i.e., the rate of inflation falls). Burns and Mitchell defined a recession as a period when a broad range of economic indicators falls for a sustained period, roughly at least half a year. Business cycles are dated according to when the direction of economic activity changes. The peak of the cycle refers to the last month before several key economic indicators—such as employment, output, and retail sales—begin to fall. The trough of the cycle refers to the last month before the same economic indicators begin to rise. Because key economic indicators often change direction at slightly different times, the dating of peaks and troughs is necessarily somewhat subjective. The National Bureau of Economic Research (NBER) is an independent research institution that dates the peaks and troughs of U.S. business cycles. Table 1 shows the NBER monthly dates for peaks and troughs of U.S. business cycles since 1890. Recent research has shown that the NBER’s reference dates for the period before World War I are not truly comparable with those for the modern era because they were determined using different methods and data. Figure 1 shows the unemployment rate since 1948, with periods that the NBER classifies as recessions shaded in gray. Clearly, a key feature of recessions is that they are times of rising unemployment. In many ways, the term “business cycle” is misleading. “Cycle” seems to imply that there is some regularity in the timing and duration of upswings and downswings in economic activity. Most economists, however, do not think there is. As Figure 1 shows, expansions and recessions occur at irregular intervals and last for varying lengths of time. For example, there were three recessions between 1973 and 1982, but then the 1982 trough was followed by eight years of uninterrupted expansion. The 1980 recession lasted just six months, while the 1981 recession lasted sixteen months. For describing the swings in economic activity, therefore, many modern economists prefer the term “short-run economic fluctuations” to “business cycle.”

Table 1. Business Cycle Peaks and Troughs in the United States, 1890–2004

Peak         Trough
July 1890    May 1891
Jan. 1893    June 1894
Dec. 1895    June 1897
June 1899    Dec. 1900
Sep. 1902    Aug. 1904
May 1907     June 1908
Jan. 1910    Jan. 1912
Jan. 1913    Dec. 1914
Aug. 1918    Mar. 1919
Jan. 1920    July 1921
May 1923     July 1924
Oct. 1926    Nov. 1927
Aug. 1929    Mar. 1933
May 1937     June 1938
Feb. 1945    Oct. 1945
Nov. 1948    Oct. 1949
July 1953    May 1954
Aug. 1957    Apr. 1958
Apr. 1960    Feb. 1961
Dec. 1969    Nov. 1970
Nov. 1973    Mar. 1975
Jan. 1980    July 1980
July 1981    Nov. 1982
July 1990    Mar. 1991
Mar. 2001    Nov. 2001

Causes of Business Cycles Just as there is no regularity in the timing of business cycles, there is no reason why cycles have to occur at all. The prevailing view among economists is that there is a level of economic activity, often referred to as full employment, at which the economy could stay forever. Full employment refers to a level of production in which all the inputs to the production process are being used, but not so intensively that they wear out, break down, or insist on higher wages and more vacations. When the economy is at full employment, inflation tends to remain constant; only if output moves above or below normal does the rate of inflation systematically tend to rise or fall. If nothing disturbs the economy, the full-employment level of output, which naturally tends to grow as the population increases and new technologies are discovered, can be maintained forever. There is no reason why a time of full employment has to give way to either an inflationary boom or a recession. Business cycles do occur, however, because disturbances to the economy of one sort or another push the economy above or below full employment. Inflationary booms can be generated by surges in private or public spending. For example, if the government spends a lot to fight a war but does not raise taxes, the increased demand will cause not only an increase in the output of war matériel, but also an increase in the take-home pay of defense workers. The output of all the goods and services that these workers want to buy with their wages will also increase, and total production may surge above its normal, comfortable level. Similarly, a wave of optimism that causes consumers to spend more than usual and firms to build new factories may cause the economy to expand more rapidly than normal. Recessions or depressions can be caused by these same forces working in reverse. A substantial cut in government spending or a wave of pessimism among consumers and firms may cause the output of all types of goods to fall. Another possible cause of recessions and booms is monetary policy. The Federal Reserve System strongly influences the size and growth rate of the money stock, and thus the level of interest rates in the economy. Interest rates, in turn, are a crucial determinant of how much firms and consumers want to spend. A firm faced with high interest rates may decide to postpone building a new factory because the cost of borrowing is so high. Conversely, a consumer may be lured into buying a new home if interest rates are low and mortgage payments are therefore more affordable. Thus, by raising or lowering interest rates, the Federal Reserve is able to generate recessions or booms.

Figure 1. Unemployment Rate and Recessions
Source: The data are from the Bureau of Labor Statistics. Note: The series graphed is the seasonally adjusted civilian unemployment rate for those age sixteen and over. The shaded areas indicate recessions.

This description of what causes business cycles reflects the Keynesian or new Keynesian view that cycles are the result of nominal rigidities. 
Only when prices and inflationary expectations are not fully flexible can fluctuations in overall demand cause large swings in real output. An alternative view, referred to as the new classical framework, holds that modern industrial economies are quite flexible. As a result, a change in spending does not necessarily affect real output and employment. For example, in the new classical view a change in the stock of money will change only prices; it will have no effect on real interest rates and thus on people’s willingness to invest. In this alternative framework, business cycles are largely the result of disturbances in productivity and tastes, not of changes in aggregate demand. The empirical evidence is strongly on the side of the view that deviations from full employment are often the result of spending shocks. Monetary policy, in particular, appears to have played a crucial role in causing business cycles in the United States since World War II. For example, the severe recessions of both the early 1970s and the early 1980s were directly attributable to decisions by the Federal Reserve to raise interest rates. On the expansionary side, the inflationary booms of the mid-1960s and the late 1970s were both at least partly due to monetary ease and low interest rates. The role of money in causing business cycles is even stronger if one considers the era before World War II. Many of the worst prewar depressions, including the recessions of 1908, 1921, and the Great Depression of the 1930s, were to a large extent the result of monetary contraction and high real interest rates. In this earlier era, however, most monetary swings were engendered not by deliberate monetary policy but by financial panics, policy mistakes, and international monetary developments. Historical Record of Business Cycles Table 2 shows the peak-to-trough decline in industrial production, a broad monthly measure of manufacturing and mining activity, in each recession since 1890. The industrial production series used was constructed to be comparable over time. Many other conventional macroeconomic indicators, such as the unemployment rate and real GDP, are not consistent over time. The prewar versions of these series were constructed using methods and data sources that tended to exaggerate cyclical swings. As a result, these conventional indicators yield misleading estimates of the degree to which business cycles have moderated over time.

Table 2. Peak-to-Trough Decline in Industrial Production

Year of NBER Peak    % Decline
1890                 −5.3
1893                 −17.3
1895                 −10.8
1899                 −10.0
1902                 −9.5
1907                 −20.1
1910                 −9.1
1913                 −12.1
1918                 −6.2
1920                 −32.5
1923                 −18.0
1926                 −6.0
1929                 −53.6
1937                 −32.5
1945                 −35.5
1948                 −10.1
1953                 −9.5
1957                 −13.6
1960                 −8.6
1969                 −7.0
1973                 −13.1
1980                 −6.6
1981                 −9.4
1990                 −4.1
2001                 −6.2

Source: The industrial production data for 1919–2004 are from the Board of Governors of the Federal Reserve System. The series before 1919 is an adjusted and smoothed version of the Miron-Romer index of industrial production. This series is described in the appendix to “Remeasuring Business Cycles” by Christina D. Romer. Note: The peak-to-trough decline is calculated using the actual peaks and troughs in the industrial production series. These turning points often differ from the NBER dates by a few months, and occasionally by as much as a year.

The empirical record on the duration and severity of recessions over time reflects the evolution of economic policy. 
The recessions of the pre–World War I era were relatively frequent and quite variable in size. This is consistent with the fact that before World War I, the government had little influence on the economy. Prewar recessions stemmed from a wide range of private-sector-induced fluctuations in spending, such as investment busts and financial panics, that were left to run their course. As a result, recessions occurred frequently, and some were large and some were small. After World War I the government became much more involved in managing the economy. Government spending and taxes as a fraction of GDP rose substantially in the 1920s and 1930s, and the Federal Reserve was established in 1914. Table 2 makes clear that the period between the two world wars was one of extreme volatility. The declines in industrial production in the recessions of 1920, 1929, and 1937 were larger than in any recessions in the pre– World War I and post–World War II periods. A key factor in these extreme fluctuations was the replacement, by the 1920s, of some of the private-sector institutions that had helped the U.S. economy weather prewar fluctuations with government institutions that were not yet fully functional. The history of the interwar era is perhaps best described as a painful learning period for the Federal Reserve. The downturn of the mid-1940s obviously reflects the effect of World War II. The war generated an incredible boom in economic activity, as production surged in response to massive government spending. The end of wartime spending led to an equally spectacular drop in industrial production as the economy returned to more normal levels of labor and capital utilization. Recessions in the early postwar era were of roughly the same average severity as those before World War I, although they were somewhat less frequent than in the earlier period and were more consistently of moderate size. The decreasing frequency of downturns reflects progress in economic policymaking. The Great Depression brought about large strides in the understanding of the economy and the capacity of government to moderate cycles. The Employment Act of 1946 mandated that the government use the tools at its disposal to stabilize output and employment. And indeed, economic policy since World War II has almost certainly counteracted some shocks and hence prevented some recessions. In the early postwar era, however, policymakers tended to carry expansionary policy too far, and in the process caused inflation to rise. As a result, policymakers, particularly the Federal Reserve, felt compelled to adopt contractionary policies that led to moderate recessions in order to bring inflation down. This boom-bust cycle was a common feature of the 1950s, 1960s, and 1970s. Recessions in the United States have become noticeably less frequent and severe since the mid-1980s. The nearly decade-long expansions of the 1980s and 1990s were interrupted by only very mild recessions in 1990 and 2001. Economists attribute this moderation of cycles to a number of factors, including the increasing importance of services (a traditionally stable sector of the economy) and a decline in adverse shocks, such as oil price increases and fluctuations in consumer and investor sentiment. Most economists believe that improvements in monetary policy, particularly the end of overexpansion followed by deliberate contraction, have been a significant factor as well.     
In addition to reductions in the frequency and severity of downturns over time, the effects of recessions on individuals in the United States and other industrialized countries almost surely have been lessened in recent decades. The advent of unemployment insurance and other social welfare programs means that recessions no longer wreak the havoc on individuals’ standards of living that they once did. About the Author Christina D. Romer is a professor of economics at the University of California, Berkeley, and co-director of the Program in Monetary Economics at the National Bureau of Economic Research. Further Reading   Burns, Arthur F., and Wesley C. Mitchell. Measuring Business Cycles. New York: National Bureau of Economic Research, 1946. Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867–1960. Princeton: Princeton University Press for NBER, 1963. Romer, Christina D. “Changes in Business Cycles: Evidence and Explanations.” Journal of Economic Perspectives 13 (Spring 1999): 23–44. Romer, Christina D. “Remeasuring Business Cycles.” Journal of Economic History 54 (September 1994): 573–609. Related Links Financial Crisis of 2008, from the Concise Encyclopedia of Economics. Bubbles, from the Concise Encyclopedia of Economics. Stock Market, from the Concise Encyclopedia of Economics. Edward C. Prescott biography, from the Concise Encyclopedia of Economics. Jan Tinbergen biography, from the Concise Encyclopedia of Economics. Ludwig von Mises biography, from the Concise Encyclopedia of Economics. Robert P. Murphy, The Importance of Capital in Economic Theory. May 2014. Boettke on the Austrian Perspective on Business Cycles and Monetary Policy. EconTalk, January 2009. Don Boudreaux on Macroeconomics and Austrian Business Cycle Theory. EconTalk, April 2009. Shlaes on the Great Depression. EconTalk, June 2007. Lucas on Growth, Poverty, and Business Cycles. EconTalk, February 2007. Ramey on Stimulus and Multipliers. EconTalk, October 2011. Gene Epstein on Gold, the Fed, and Money. EconTalk, June 2008. Robert Solow on Growth and the State of Economics. EconTalk, October 2014.


Auctions

When most people hear the word “auction,” they think of the open-outcry, ascending-bid (or English) auction. But this kind of auction is only one of many. Fundamentally, an auction is an economic mechanism whose purpose is the allocation of goods and the formation of prices for those goods via a process known as bidding. Depending on the properties of the bidders and the nature of the items to be auctioned, various auction structures may be either more efficient or more profitable to the seller than others. As with any well-designed economic mechanism, the designer assumes that individuals will act strategically and may hold private information relevant to the decision at hand. Auction design is a careful balance of encouraging bidders to reveal valuations, discouraging cheating or collusion, and maximizing revenues. William Vickrey first established the taxonomy of auctions based on the order in which the auctioneer quotes prices and the bidders tender their bids. He established four major (one-sided) auction types: (1) the ascending-bid (open, oral, or English) auction; (2) the descending-bid (Dutch) auction; (3) the first-price, sealed-bid auction; and (4) the second-price, sealed-bid (Vickrey) auction. The Four Basic Auction Types The most common type of auction, the English auction, is often used to sell art, wine, antiques, and other goods. In it, the auctioneer opens the bidding at a reserve price (which may be zero), the lowest price he is willing to accept for the item. Once a bidder has announced interest at that price, the auctioneer solicits further bids, usually raising the price by a predetermined bid increment. This continues until no one is willing to increase the bid any further, at which point the auction is closed and the final bidder receives the item at his bid price. Because the winner pays his bid, this type of auction is known as a first-price auction. The Dutch auction, also a first-price auction, is descending. That is, the auctioneer begins at a high price, higher than he believes the item will fetch, then decreases the price until a bidder finally calls out, “Mine!” The bidder then receives the item at the price at which he made the call. If multiple items are offered, the process continues until all items are sold. One of the primary advantages of Dutch auctions is speed. Since there are never more bids than there are items being auctioned, the process takes relatively little time. This is one reason they are used in places such as flower markets in Holland (hence the name “Dutch”). In the English and Dutch auctions, bidders receive information as others bid (or refrain from bidding). However, in the third type of auction, known as the first-price, sealed-bid auction, this is not the case. In this mechanism, each bidder submits a single bid in a sealed envelope. Then, all of the envelopes are opened and the highest bidder is announced, and he receives the item at his bid price. This type of auction is most often used for refinancing credit and foreign exchange, among other (primarily financial) venues. The fourth type is the second-price, sealed-bid auction, otherwise known as the Vickrey auction. As in the first-price, sealed-bid auction, bidders submit sealed envelopes in one round of bid submission. The highest bidder wins the item, but at the price offered by the second-highest bidder (or, in a multiple-item case, the highest unsuccessful bid). This type of auction is rarely used aside from setting the foreign exchange rates in some African countries. 
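To make the difference between the two sealed-bid formats concrete, here is a minimal sketch in Python that resolves one round of each from the same set of sealed bids; the bidder names and amounts are made up for illustration. In both formats the highest bidder wins, but the first-price winner pays his own bid while the Vickrey winner pays the runner-up's bid.

```python
# Minimal sketch: resolving sealed-bid auctions from a dict of bids.
# Bidder names and bid amounts are hypothetical.

def first_price(bids):
    """Highest bidder wins and pays his own bid."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def second_price(bids):
    """Highest bidder wins but pays the second-highest bid (Vickrey)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

sealed_bids = {"Ann": 120, "Bob": 95, "Carla": 140}

print(first_price(sealed_bids))   # ('Carla', 140): Carla pays her own bid
print(second_price(sealed_bids))  # ('Carla', 120): Carla pays Ann's bid
```

Because the Vickrey winner's payment does not depend on his own bid, changing that bid affects only whether he wins, not what he pays; this is why, as discussed below, bidders in second-price and English auctions are induced to bid their true valuations, while first-price and Dutch bidders shade their bids.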
Why So Many Auction Forms? One might think so many canonical auction forms unnecessary, that there is always a best choice that will yield the most surplus to the seller. In fact, under some strict assumptions, the revenue equivalence theorem (also due to Vickrey) states that all four auction types will result in an identical level of revenue to the seller. However, these assumptions regarding the nature of the item’s value and the risk attitudes of the bidders are very restrictive and rarely hold. The first assumption of the theorem is that the asset being auctioned has an independent, private value to all bidders. This assumption tends to hold when the item is for personal consumption, without thought toward resale, as might be the case for furniture, art, or wine. In this case, the value of the item is considered to be personal and independent of the value others might place on it (independent, private values). The assumption does not hold when bidders perceive a value of resale, either of the item itself or of a by-product of the item. Buying land for the rights to the oil that lies beneath it would be a good example. In this case, the value is common; that is, individual bids are predicated not only on personal valuation, but also on the valuation of prospective buyers. Each bidder tries to estimate the value of an object using the same known measurements (common values), but their conclusions may vary widely. In common-value environments, bidders may face the “winner’s curse.” If all of the bidders will eventually realize the same value from the item, then the primary differentiator between the bidders is their perception of that value. Absent special information about the item being purchased, the winner is the person with the largest positive error in his valuation, and, unless he is lucky, he will wind up losing money. The second assumption of the revenue equivalence theorem is that all bidders are risk-neutral. The strict definition of risk neutrality is: given the choice between a guaranteed return r and a gamble with expected return also equal to r, the bidder is completely indifferent. The bidder who prefers the guaranteed return is said to be “risk-averse,” while the bidder who prefers the gamble is said to be “risk-loving.” The style of auction a seller chooses depends on his judgment about which of these assumptions holds. If values are common rather than independent, the English auction yields higher seller revenue than the second-price, sealed-bid auction, which in turn yields higher revenues than the Dutch and first-price, sealed-bid auctions (which are tied). The rankings illustrate the strategic advantages of increased information. Because the English auction reveals all bids to all bidders, it permits dynamic updating of personal valuation. (If I see that others believe the real estate is worth more, I too may decide it is worth more.) In comparison, bidders, recognizing the winner’s curse, bid less aggressively in first-price, sealed-bid auctions and shade their bids downward. Similar reasoning applies to Dutch (descending) auctions. While the information is not updated in a second-price sealed-bid format, the winner pays the bid of the next-highest bidder, and so bidders raise bids, secure that they will not be disadvantaged if rival bids are lower. In fact, in both the first-price, sealed-bid auction and the Dutch auction, no information is revealed and the bidder pays the value of his bid. 
Therefore, in terms of revenue maximization, it does not matter which of these auctions a seller chooses; nor does it matter whether the bidders have private or common values. What about the role of risk aversion? In first-price, sealed-bid and Dutch auctions, risk aversion causes bidders to bid slightly higher than they might otherwise. Since they have only one chance to bid, fear of losing the item induces overbidding. In the English and Vickrey auctions, however, bidders are induced to bid their true valuation, regardless of risk attitudes. Once a seller has decided on which of the four basic auction forms to use, he can use many variations within the auction to further manipulate the outcome to maximize revenue. These mechanisms can have profound, and often counterintuitive, effects on bidding behavior—and therefore on outcomes. Among the available mechanisms are reserve prices, entry fees, invited bidders only, closing rules, lot sizes, proxy bidding, bidding increment rules, and postwin payment rules. Auction Success and Failure—An Example The 1994 U.S. Federal Communications Commission (FCC) auctions of wireless bandwidth provide a useful example of both the successes and the failures of auction design. The auction to allocate Personal Communications Service (PCS) spectrum had four primary goals: (1) to attain efficient allocation of spectrum, (2) to encourage rapid deployment and network build out, (3) to attain diversity of ownership, and (4) to raise revenue. Goals 1, 2, and 4 are met by any well-designed auction, as the winner is the one who values the item most. PCS licenses are a classic common-values good, in that they have a common, large, but uncertain value, triggering the winner’s curse. The FCC developed an elaborate network of rules to ensure the desired outcomes. To encourage price discovery, the auction was a multiround, ascending-bid, first-price auction. The many licenses available covered the entire United States, allowing major complementarities and substitutes in this market. To allow bidding that took this into account, the auctions were simultaneous, and no auction ended until they all did (every license was open until there were no more bids on any of them). Further, because the FCC wanted to discourage bidders from sitting on the sidelines until the very end, an activity rule was imposed. These and an elaborate network of other rules were carefully balanced to ensure the desired outcome. The result was great success in maximizing revenues. The 1994 FCC auction stumbled, however, in its goal of diversifying ownership. To achieve this goal, the FCC set aside two blocks (C and F) for entrepreneurs, female and minority-owned firms, and regional companies. To that end, the FCC took the carefully designed auction and changed it just a little bit. Bidders in these two special blocks received a 25 percent bid credit. That is, if they bid eighty dollars for an item, the bid was treated as if they had bid one hundred dollars. Further, their deposit requirement was just one-fifth of what the other winning bidders paid. Lastly, “diversity bids” were offered a generous installment payment plan. Bidders had a month to furnish 10 percent of the bid and owed no more principal until seven years later. The interest for this loan was charged at the T-bill rate. Unfortunately, this seemingly small change had disastrous effects. 
The payment policy created moral hazard (see insurance) by, in effect, providing bidders with low-cost insurance against big misestimates or drops in the value of the bandwidth. Since winning bidders had to make a down payment of only 10 percent, if, after seven years, the item turned out to be worth less than 90 percent of the bid price, then the purchaser could simply default. This is precisely what happened. Companies bought the licenses and invested 10 percent, and then declared bankruptcy when the license turned out to be worth less than 90 percent of the bid. Nearly every company that won a license in the C or F blocks in the 1994 auction either went bankrupt or was bought by a larger firm. In the end, the FCC’s ham-fisted pursuit of a noble goal destroyed this segment of the auction entirely. PCS auctions continue today, though they have been massively restructured. About the Author Leslie R. Fine is a scientist in the Information Dynamics Lab at HP Labs in Palo Alto, California. Further Reading   Ashenfelter, Orley. “How Auctions Work for Wine and Art.” Journal of Economic Perspectives 3 (1989): 23–26. Kagel, J. H. “Auctions: A Survey of Experimental Research.” In John H. Kagel and Alvin E. Roth, eds., The Handbook of Experimental Economics. Princeton: Princeton University Press, 1995. Pp. 1–86. Klemperer, P. D., ed. The Economic Theory of Auctions. Cheltenham, U.K.: Edward Elgar, 1999. McAfee, R. P., and J. McMillan. “Auctions and Bidding.” Journal of Economic Literature 25 (1987): 699–738. Milgrom, P. R. “Auctions and Bidding: A Primer.” Journal of Economic Perspectives 3 (1989): 3–22. Milgrom, P. R. “Putting Auction Theory to Work: The Simultaneous Ascending Auction.” Journal of Political Economy 108, no. 2 (2000): 245–272. Related Links Vernon Smith on Markets and Experimental Economics. EconTalk, May 2007.


Brand Names

Consumers pay a higher price for brand-name products than for products that do not carry an established brand name. Because this involves paying extra for what some consider an identical product that merely has been advertised and promoted, brand names may appear to be economically wasteful. This argument was behind the decision to eliminate all brand names on goods produced in the Soviet Union immediately after the 1917 Communist revolution. The problems this experiment caused—problems described by economist Marshall Goldman—suggest that brand names serve an important economic function. When the producers of products are not identified with brand names, a crucial element of the market mechanism cannot operate because consumers cannot use their past experience to know which products to buy and which not to buy. In particular, consumers can neither punish companies that supply low-quality products by stopping their purchases nor reward companies that supply high-quality products by increasing their purchases. Thus, when all brand names, including factory production marks, were eliminated in the Soviet Union, unidentified producers manufacturing indistinguishable products each had an incentive to supply lower-quality goods. And the inability to punish these producers created significant problems for consumers. Consumer reliance on brand names gives companies the incentive to supply high-quality products because they can take advantage of superior past performance to charge higher prices. Benjamin Klein and Keith Leffler (1981) showed that this price premium paid for brand-name products facilitates market exchange. A company that creates an established brand for which it can charge higher prices knows that if it supplies poor products and its future demand declines, it will lose the stream of income from the future price premium it would otherwise have earned on its sales. This decrease in future income amounts to a depreciation in the market value of the company’s brand-name. A company’s brand-name capital, therefore, is a form of collateral that ensures company performance. Companies without valuable brand names that are not earning price premiums on their products, on the other hand, have less to lose when they supply low-quality products and their demand falls. Therefore, while consumers may receive a direct benefit for the extra price they pay for brand-name products, such as the status of driving a BMW, the higher price also creates market incentives for companies with valuable brand names to maintain and improve product quality because they have something to lose if they perform poorly. Brand-name quality assurance is especially important when consumers lack complete information about product quality at the time of purchase. Companies may take advantage of this lack of information by shaving product quality, thereby lowering costs and increasing short-term profits. A company that takes such actions, however, will experience a decrease in its future demand, and therefore in its long-term profits. The greater the value of a company’s brand name—that is, the greater the present value of the extra profit a company earns on its sales—the more likely it is that this long-term negative effect on profits will outweigh any short-term positive effect and deter a policy of intentional quality deterioration. Moreover, the greater the value of a company’s brand name, the more likely the company is to take quality-control precautions. 
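The collateral logic can be put in simple present-value terms. The sketch below is a stylized illustration with assumed numbers, not a calculation from Klein and Leffler: it compares the one-time gain a firm could capture by secretly cutting quality with the discounted stream of price premiums it would forfeit once buyers stopped paying the premium.

```python
# Stylized sketch of the brand-name-capital argument (assumed numbers).
#   premium:    extra profit per year earned because of the brand name
#   cheat_gain: one-time cost saving from secretly degrading quality
#   r:          discount rate

def brand_name_capital(premium, r):
    """Present value of a perpetual stream of price premiums: premium / r."""
    return premium / r

def maintains_quality(premium, cheat_gain, r):
    # The firm keeps quality high only if the brand-name capital it would
    # lose by cheating exceeds the one-time gain from cheating.
    return brand_name_capital(premium, r) > cheat_gain

print(maintains_quality(premium=5.0, cheat_gain=40.0, r=0.10))  # True: 50 > 40
print(maintains_quality(premium=1.0, cheat_gain=40.0, r=0.10))  # False: 10 < 40
```

On this reading, the price premium is not waste: the larger the premium, and so the brand-name capital at stake, the more costly cheating becomes, which is the sense in which the premium underwrites quality.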
To protect its brand name, a company will want to make sure its consumers are satisfied. When it is difficult to determine the quality of a product before purchase and the consequences of poor quality are significant, it makes economic sense for consumers to rely on brand names and the company reputations associated with them. By paying more for a brand-name product in those circumstances, consumers are not acting irrationally. Consumers know that companies with established reputations for consistent high quality have more to lose if they do not perform well—namely, the loss of the ability to continue to charge higher prices. A company’s high reputation indicates not only that the company has performed well in the past, but also that it will perform well in the future because it has an economic incentive to maintain and improve the quality of its products. A consumer who pays a high price for a brand-name product is paying for the assurance of increased quality. When a company performs poorly, the brand-name, market-enforced sanction it faces is usually much greater than any court-enforced legal sanction it might face. Consider, for example, the case of defective Firestone tires on Ford Explorer sport-utility vehicles in 2000. Because consumers cannot ascertain the quality of tires by direct examination, they rely largely on the tire supplier’s brand name, which was badly damaged in this case. One day after Bridgestone (Firestone’s Japan-based parent company) announced the recall of the defective tires, Bridgestone’s stock price dropped nearly 20 percent; it continued to fall over the next three weeks as additional information about the problem was disclosed. Overall, this amounted to a decline of nearly 40 percent in Bridgestone’s stock-market value relative to the Nikkei general market index. Ford’s stock price did not drop initially, but eventually it fell about 18 percent relative to the S&P 500 index over the same period as information was revealed that Ford was aware of the possibility of tire failure more than a year before the tire recall. These stock-market declines amounted to losses of about $7 billion in Bridgestone’s market value and nearly $10 billion in Ford Motor Company’s market value—market measures of each company’s future lost profit caused by these events. These costs were substantially greater than the direct costs associated with the recall and liability litigation, estimated by Bridgestone at $754 million and by Ford at $590 million. Although these direct costs clearly were substantial, they were dwarfed by the brand-name market costs borne by Bridgestone and Ford, which were between some nine and seventeen times as large. Similar market effects occurred in 1993 when E. coli bacteria in the hamburger meat purchased by Jack-in-the-Box killed four people and sickened about five hundred. Although Jack-in-the-Box reacted quickly to the food poisoning and took actions to prevent its recurrence, its stock-market value fell by more than 30 percent when this information was disclosed, or more than double the direct litigation and recall costs. Even in cases where the problem is not strictly the company’s “fault,” such as the 1982 Tylenol tampering cases that led to seven poisoning deaths, the $2 billion (or more than 20 percent) decline in stock-market value borne by the producer, Johnson and Johnson, was almost ten times as great as the company’s direct recall and litigation costs. 
While the government regulates the quality of products, the regulatory cost that can be imposed on companies is generally a small fraction of the economic cost that the market imposes on poor-performing companies with established brand names. If those companies had lacked brand names, the economic punishment they suffered would have been much smaller. Because brand-name companies have a greater incentive to ensure high quality, consumers who buy brand-name products are necessarily paying for something: the added assurance that the company has taken the necessary measures to protect its reputation for quality. Therefore, even for purchases of a “standardized” product such as aspirin, where most suppliers purchase the basic ingredient, acetylsalicylic acid, from the same manufacturer, it may make sense for consumers to purchase a higher-priced brand-name product. Consumers are not ignorant or irrational when they buy an advertised brand-name aspirin rather than a non-brand-name product at a lower price. Bottled aspirin supplied by brand-name and “non-brand-name” producers may differ technologically in dissolve rate, shelf life, and other factors. But more important, the products differ economically. A lower-priced “nonbrand” aspirin is not economically equivalent to higher-priced brand-name aspirin, because a company selling aspirin under a valuable brand name has more to lose if something goes wrong. The brand-name aspirin supplier, therefore, has a greater economic incentive to take added precautions in producing the product. Similar economic forces are at work when multiple generic drug companies produce the same drug. Because pharmacies generally have an incentive to purchase the lowest-cost generic variant, each generic company has the incentive to lower costs, including reducing its quality-control efforts, subject only to imperfect FDA audits. When companies do not earn a large price premium on their products, the potential sanction the companies face for poor quality control is much lower than the economic cost borne by brand-name companies. Seen in this light, the question is not whether consumers are ignorant or irrational when they pay a higher price for a brand-name product, but whether they are paying too much for the additional quality assurance brand names necessarily provide. Even people who assume that all aspirin is alike spend some money on brand-name assurance since they do not buy “nonbrand” aspirin off the back of a pickup truck at a swap meet. Instead, they may buy “lower-brand-name” aspirin, such as aspirin carrying the brand name of a chain drugstore. It is significant, however, that consumers buy a much smaller share of such “lower-brand-name” aspirin when purchasing children’s aspirin than when buying adult-dosage aspirin. Many people decide, as evidenced by their behavior, that although they are willing to purchase less brand-name assurance for themselves, they want the higher-quality assurance for their children, for whom quality-control considerations may be more important. About the Author Benjamin Klein is professor emeritus of economics at UCLA and director, LECG, LLC. Further Reading   Goldman, Marshall. “Product Differentiation and Advertising: Some Lessons from the Soviet Experience.” Journal of Political Economy 68 (1960): 346–357. Klein, Benjamin, and Keith Leffler. “The Role of Market Forces in Assuring Contractual Performance.” Journal of Political Economy 89 (1981): 615–641. Mitchell, Mark. 
“The Impact of External Parties on Brand-Name Capital: The 1982 Tylenol Poisonings and Subsequent Cases.” Economic Inquiry 27, no. 4 (1989): 601–618. Related Links Advertising, from the Concise Encyclopedia of Economics. Consumer Protection, from the Concise Encyclopedia of Economics. Rory Sutherland on Alchemy. EconTalk, November 2019. Postrel on Style. EconTalk, November 2006. Morgan Rose, Shedding Light on Market Power. December 2002.


Balance of Payments

Few subjects in economics have caused so much confusion—and so much groundless fear—in the past four hundred years as the thought that a country might have a deficit in its balance of payments. This fear is groundless for two reasons: (1) there never is a deficit, and (2) it would not necessarily hurt anything if there was one. The balance-of-payments accounts of a country record the payments and receipts of the residents of the country in their transactions with residents of other countries. If all transactions are included, the payments and receipts of each country are, and must be, equal. Any apparent inequality simply leaves one country acquiring assets in the others. For example, if Americans buy automobiles from Japan, and have no other transactions with Japan, the Japanese must end up holding dollars, which they may hold in the form of bank deposits in the United States or in some other U.S. investment. The payments Americans make to Japan for automobiles are balanced by the payments Japanese make to U.S. individuals and institutions, including banks, for the acquisition of dollar assets. Put another way, Japan sold the United States automobiles, and the United States sold Japan dollars or dollar-denominated assets such as treasury bills and New York office buildings. Although the totals of payments and receipts are necessarily equal, there will be inequalities—excesses of payments or receipts, called deficits or surpluses—in particular kinds of transactions. Thus, there can be a deficit or surplus in any of the following: merchandise trade (goods), services trade, foreign investment income, unilateral transfers (foreign aid), private investment, the flow of gold and money between central banks and treasuries, or any combination of these or other international transactions. The statement that a country has a deficit or surplus in its “balance of payments” must refer to some particular class of transactions. As Table 1 shows, in 2004 the United States had a deficit in goods of $665.4 billion but a surplus in services of $48.8 billion. Many different definitions of the balance-of-payments deficit or surplus have been used in the past. Each definition has different implications and purposes. Until about 1973 attention was focused on a definition of the balance of payments intended to measure a country’s ability to meet its obligation to exchange its currency for other currencies or for gold at fixed exchange rates. To meet this obligation, countries maintained a stock of official reserves, in the form of gold or foreign currencies, that they could use to support their own currencies. A decline in this stock was considered an important balance-of-payments deficit because it threatened the ability of the country to meet its obligations. But that particular kind of deficit, by itself, was never a good indicator of the country’s financial position. The reason is that it ignored the likelihood that the country would be called on to meet its obligation and the willingness of foreign or international monetary institutions to provide support. After 1973, interest in official reserve positions as a measure of balance of payments greatly diminished as the major countries gave up their commitment to convert their currencies at fixed exchange rates. This reduced the need for reserves and lessened concern about changes in the size of reserves. Since 1973, discussions of “the” balance-of-payments deficit or surplus usually refer to what is called the current account. 
This account contains trade in goods and services, investment income earned abroad, and unilateral transfers. It excludes the capital account, which includes the acquisition or sale of securities or other property. Because the current account and the capital account add up to the total account, which is necessarily balanced, a deficit in the current account is always accompanied by an equal surplus in the capital account, and vice versa. A deficit or surplus in the current account cannot be explained or evaluated without simultaneous explanation and evaluation of an equal surplus or deficit in the capital account. A country is more likely to have a deficit in its current account the higher its price level, the higher its gross national product, the higher its interest rates, the lower its barriers to imports, and the more attractive its investment opportunities—all compared with conditions in other countries—and the higher its exchange rate. The effects of a change in one of these factors on the current account balance cannot be predicted without considering the effect on the other causal factors. For example, if the U.S. government increases tariffs, Americans will buy fewer imports, thus reducing the current account deficit. But this reduction will occur only if one of the other factors changes to bring about a decrease in the capital account surplus. If none of these other factors changes, the reduced imports from the tariff increase will cause a decline in the demand for foreign currency (yen, deutsche marks, etc.), which in turn will raise the value of the U.S. dollar (see foreign exchange). The increase in the value of the dollar will make U.S. exports more expensive and imports cheaper, offsetting the effect of the tariff increase. The net result is that the tariff increase brings no change in the current account balance.

Table 1. The U.S. Balance of Payments, 2004 (billions of dollars; + = surplus, − = deficit)

Goods                                      −665.4
Services                                   +48.8
Investment income                          +30.4
Balance on goods, services, and income     −587.2
Unilateral transfers                       −80.9
Balance on current account                 −668.1
Nonofficial capital*                       +270.6
Official reserve assets                    +397.5
Balance on capital account                 +668.1
Total balance                              0

* Includes statistical discrepancy.
Source: U.S. Department of Commerce, Survey of Current Business.

Contrary to the general perception, the existence of a current account deficit is not in itself a sign of bad economic policy or bad economic conditions. If the United States has a current account deficit, all this means is that the United States is importing capital. And importing capital is no more unnatural or dangerous than importing coffee. The deficit is a response to conditions in the country. It may be a response to excessive inflation, to low productivity, or to inadequate saving. It may just as easily occur because investments in the United States are secure and profitable. Furthermore, the conditions to which the deficit responds may be good or bad and may be the results of good or bad policy; but if there is a problem, it is in the underlying conditions and not in the deficit per se. During the 1980s there was a great deal of concern about the shift of the U.S. current account balance from a surplus of $5 billion in 1981 to a deficit of $161 billion in 1987. This shift was accompanied by an increase of about the same amount in the U.S. deficit in goods. 
Claims that this shift in the international position was causing a loss of employment in the United States were common, but that was not true. In fact, between 1981 and 1987, the number of people employed rose by more than twelve million, and employment as a percentage of population rose from 60 percent to 62.5 percent. Many people were also anxious about the other side of the accounts—the inflow of foreign capital that accompanied the current account deficit—fearing that the United States was becoming owned by foreigners. The inflow of foreign capital did not, however, reduce the assets owned by Americans. Instead, it added to the capital within the country. In any event, the amount was small relative to the U.S. capital stock. Measurement of the net amount of foreign-owned assets in the United States (the excess of foreign assets in the United States over U.S. assets abroad) is very uncertain. At the end of 1988, however, it was surely much less than 4 percent of the U.S. capital stock and possibly even zero. Later, there was fear of what would happen when the capital inflow slowed down or stopped. But after 1987 it did slow down and the economy adjusted, just as it had adjusted to the big capital inflow earlier, by a decline in the current account and trade deficits. These same concerns surfaced again in the late 1990s and early 2000s as the current account went from a surplus of $4 billion in 1991 to a deficit of $666 billion in 2004. The increase in the current account deficit, just as in the 1980s, was accompanied by an almost equal increase in the deficit in goods. Interestingly, the current account surpluses of 1981 and 1991 both occurred in the midst of a U.S. recession, and the large deficits occurred during U.S. economic expansions. This makes sense because U.S. imports are highly sensitive to U.S. economic conditions, falling more than proportionally when U.S. GDP falls and rising more than proportionally when U.S. GDP rises. Just as in the 1980s, U.S. employment expanded, with the U.S. economy adding more than twenty-one million jobs between 1991 and 2004. Also, employment as a percentage of population rose from 61.7 percent in 1991 to 64.4 percent in 2000 and, although it fell to 62.3 percent in 2004, was still modestly above its 1991 level. How about the issue of foreign ownership? By the end of 2003, Americans owned assets abroad valued at market prices of $7.86 trillion, while foreigners owned U.S. assets valued at market prices of $10.52 trillion. The net international investment position of the United States, therefore, was minus $2.66 trillion. This was only 8.5 percent of the U.S. capital stock.1 About the Author Herbert Stein, who died in 1999, was a senior fellow at the American Enterprise Institute in Washington, D.C., and was on the board of contributors of the Wall Street Journal. He was chairman of the Council of Economic Advisers under Presidents Richard Nixon and Gerald Ford. The editor, David R. Henderson, with the help of Kevin Hoover and Mack Ott, updated the data and added the last two paragraphs. Further Reading   Dornbusch, Rudiger, Stanley Fischer, and Richard Startz. Macroeconomics. 9th ed. New York: McGraw-Hill Irwin, 2003. For general concepts and theory, see pp. 298–332. Economic Report of the President. 2004. For good, clear reasoning about balance of payments, see pp. 239–264. Survey of Current Business. Online at: http://www.bea.gov/bea/pubs.htm (for current data).   Footnotes 1. If by capital stock we mean the net value of U.S.
fixed reproducible assets, which was $31.4 trillion in 2003. See Survey of Current Business, September 2004, online at: http://www.bea.gov/bea/ARTICLES/2004/09September/Fixed_Assets.pdf. Related Links Pedro Schwartz, Commercial Reprisals are a Mistake. July 2018. Don Boudreaux on Globalization and Trade Deficits. EconTalk, January 2008. Jacob Viner, Studies in the Theory of International Trade. Ludwig von Mises, The Theory of Money and Credit.


Antitrust

Origins Before 1890, the only “antitrust” law was the common law. Contracts that allegedly restrained trade (e.g., price-fixing agreements) often were not legally enforceable, but they did not subject the parties to any legal sanctions, either. Nor were monopolies illegal. Economists generally believe that monopolies and other restraints of trade are bad because they usually reduce total output, and therefore the overall economic well-being of producers and consumers (see monopoly). Indeed, the term “restraint of trade” indicates exactly why economists dislike monopolies and cartels. But the law itself did not penalize monopolies. The Sherman Act of 1890 changed all that by outlawing cartelization (every “contract, combination . . . or conspiracy” that was “in restraint of trade”) and monopolization (including attempts to monopolize). The Sherman Act defines neither the practices that constitute restraints of trade nor monopolization. The second important antitrust statute, the Clayton Act, passed in 1914, is somewhat more specific. It outlaws, for example, certain types of price discrimination (charging different prices to different buyers), “tying” (making someone who wants to buy good A buy good B as well), and mergers—but only when the effects of these practices “may be substantially to lessen competition or to tend to create a monopoly.” The Clayton Act also authorizes private antitrust suits and triple damages, and exempts labor organizations from the antitrust laws. Economists did not lobby for, or even support, the antitrust statutes. Rather, the passage of such laws is generally ascribed to the influence of populist “muckrakers” such as Ida Tarbell, who frequently decried the supposed ability of emerging corporate giants (“the trusts”) to increase prices and exploit customers by reducing production. One reason most economists were indifferent to the law was their belief that any higher prices achieved by the supposed anticompetitive acts were more than outweighed by the price-reducing effects of greater operating efficiency and lower costs. Interestingly, Tarbell herself conceded, as did “trustbuster” Teddy Roosevelt, that the trusts might be more efficient producers. Only recently have economists looked at the empirical evidence (what has happened in the real world) to see whether the antitrust laws were needed. The popular view that cartels and monopolies were rampant at the turn of the century now seems incorrect to most economists. Thomas DiLorenzo (1985) has shown that the trusts against which the Sherman Act supposedly was directed were, in fact, expanding output many times faster than overall production was increasing nationwide; likewise, the trusts’ prices were falling faster than those of all enterprises nationally. In other words, the trusts were doing exactly the opposite of what economic theory says a monopoly or cartel must do to reap monopoly profits. Anticompetitive Practices In referring to contracts “in restraint of trade,” or to arrangements whose effects “may be substantially to lessen competition or to tend to create a monopoly,” the principal antitrust statutes are relatively vague. There is little statutory guidance for distinguishing benign from malign practices. Thus, judges have been left to decide which practices run afoul of the antitrust laws.
An important judicial question has been whether a practice should be treated as “per se illegal” (i.e., devoid of redeeming justification, and thus automatically outlawed) or whether it should be judged by a “rule of reason” (its legality depends on how it is used and on its effects in particular situations). To answer such questions, judges sometimes have turned to economists for guidance. In the early years of antitrust, though, economists were of little help. They had not extensively analyzed arrangements such as tying, information sharing, resale price maintenance, and other commercial practices challenged in antitrust suits. But as the cases exposed areas of economic ignorance or confusion about different commercial arrangements, economists turned to solving the various puzzles. Indeed, analyzing the efficiency rationale for practices attacked in antitrust litigation has dominated the intellectual agenda of economists who study what is called industrial organization. Initially, economists concluded that unfamiliar commercial arrangements that were not explicable in a model of perfect competition must be anticompetitive. In the past forty years, however, economic evaluations of various practices have changed. Economists now see that the perfect competition model relies on assumptions—such as everyone having perfect information and zero transaction costs—that are inappropriate for analyzing real-world production and distribution problems. The use of more sophisticated assumptions in their models has led economists to conclude that many practices previously deemed suspect are not typically anticompetitive. This change in evaluations has been reflected in the courts. Per se liability has increasingly been superseded by rule-of-reason analysis reflecting the procompetitive potential of a given practice. Under the rule of reason, courts have become increasingly sophisticated in analyzing information and transaction costs and the ways that contested commercial practices can reduce them. Economists and judges alike are more sophisticated in several important areas. Vertical Contracts Most antitrust practitioners once believed that vertical mergers (i.e., one company acquiring another that is either a supplier or a customer) reduced competition. Today, most antitrust experts believe that vertical integration usually is not anticompetitive. Progress in this area began in the 1950s with work by Aaron Director and the Antitrust Project at the University of Chicago. Robert Bork, a scholar involved with this project (and later the federal judge whose unsuccessful nomination to the U.S. Supreme Court caused much controversy), showed that if firm A has monopoly power, vertically integrating with firm B (or acquiring B) does not increase A’s monopoly power in its own industry. Nor does it give A monopoly power in B’s industry if that industry was competitive in the first place. Lester Telser, also of the University of Chicago, showed in a famous 1960 article that manufacturers used resale price maintenance (“fair trade”) not to create monopoly at the retail level, but to stimulate nonprice competition among retailers. Since retailers operating under fair trade agreements could not compete by cutting price, noted Telser, they instead competed by demonstrating the product to uninformed buyers. If the product is a sophisticated one that requires explaining to prospective buyers, resale price maintenance can be a rational—and competitive—action by a manufacturer. 
The same rationale can account for manufacturers’ use of exclusive sales territories. This new knowledge about vertical contracts has had a large impact on judicial antitrust rulings. Horizontal Contracts Changes in the assessment of horizontal contracts (agreements among competing sellers in the same industry) have come more slowly. Economists remain almost unanimous in condemning all horizontal price-fixing. Many, however (e.g., Donald Dewey), have indicated that price-fixing may actually be procompetitive in some situations, a conclusion bolstered by Michael Sproul’s empirical finding that in industries where the government successfully sues against price-fixing, prices increase, rather than decrease, after the suit. At a minimum, as Peter Asch and Joseph Seneca have shown empirically, price-fixers have not earned higher-than-normal profits. Other practices that some people believed made it easier for competitors to fix prices have been shown to have procompetitive explanations. Sharing of information among competitors, for example, may not necessarily be a prelude to price-fixing; it can instead have an independent efficiency rationale. Perhaps the most important change in economists’ understanding has occurred in the area of mergers. Particularly with the work of Joe Bain and George Stigler in the 1950s, economists (and courts) inferred a lack of competition in markets simply from the fact that an industry had a high four-firm concentration ratio (the percentage of sales accounted for by the four largest firms in the industry). But later work by economists such as Yale Brozen and Harold Demsetz demonstrated that correlations between concentration and profits either were transitory or were due more to superior efficiency than to anticompetitive conduct. Their work followed that of Oliver Williamson, who showed that even if a merger caused a large increase in monopoly power, it would be efficient if it produced only slight cost reductions. As a result of this new evidence and new thinking, economists and judges no longer assume that concentration alone indicates monopoly. The various versions of the Department of Justice/Federal Trade Commission Merger Guidelines promulgated in the 1980s and revised in the 1990s have deemphasized concentration as a factor inviting government challenge of a merger. Nonmerger Monopolization Perhaps the most publicized monopolization case of recent years is the government’s case against Microsoft, which (see Liebowitz and Margolis 2001) rested on questionable empirical claims and resulted ultimately in victory for Microsoft on most of the government’s allegations. The failure of the government’s case reflects a general recent decline in the importance of monopolization cases. Worries about monopoly have progressively diminished with the realization that various practices traditionally thought to be monopolizing devices (including vertical contracts, as discussed above) actually have procompetitive explanations. Likewise, belief in the efficacy of predatory pricing—cutting price below cost—as a monopolization device has diminished. Work begun by John McGee in the late 1950s (also an outgrowth of the Chicago Antitrust Project) showed that firms are highly unlikely to use predatory pricing to create monopoly. That work is reflected in several recent Supreme Court opinions, such as that in Matsushita Electric Industrial Co. v.
Zenith Radio Corp., where the Court wrote, “There is a consensus among commentators that predatory pricing schemes are rarely tried, and even more rarely successful.” As older theories of monopolization have died, newer ones have been hatched. In the 1980s, economists began to lay out new monopolization models based on strategic behavior, often relying on game-theory constructs. They postulated that companies could monopolize markets by raising rivals’ costs (sometimes called “cost predation”). For example, if firm A competes with firm B and supplies inputs to both itself and to B, A could raise B’s costs by charging B a higher price. It remains to be seen whether economists will ultimately accept the proposition that raising a rival’s costs can be a viable monopolizing strategy, or how the practice will be treated in the courts. But courts have sometimes imposed antitrust liability on firms possessing supposedly “essential facilities” when they deny competitors access to those facilities. The recent era of antitrust reassessment has resulted in general agreement among economists that the most successful instances of cartelization and monopoly pricing have involved companies that enjoy the protection of government regulation of prices and government control of entry by new competitors. Occupational licensing and trucking regulation, for example, have allowed competitors to alter terms of competition and legally prevent entry into the market. Unfortunately, monopolies created by the federal government are almost always exempt from antitrust laws, and those created by state governments frequently are exempt as well. Municipal monopolies (e.g., taxicabs, utilities) may be subject to antitrust action but often are protected by statute. The Effects of Antitrust With the hindsight of better economic understanding, economists now realize that one undeniable effect of antitrust has been to penalize numerous economically benign practices. Horizontal and especially vertical agreements that are clearly useful, particularly in reducing transaction costs, have been (or for many years were) effectively banned. A leading example is the continued per se illegality of resale price maintenance. Antitrust also increases transaction costs because firms must hire lawyers and often must litigate to avoid antitrust liability. One of the most worrisome statistics in antitrust is that for every case brought by government, private plaintiffs bring ten. The majority of cases are filed to hinder, not help, competition. According to Steven Salop, formerly an antitrust official in the Carter administration, and Lawrence J. White, an economist at New York University, most private antitrust actions are filed by members of one of two groups. The most numerous private actions are brought by parties who are in a vertical arrangement with the defendant (e.g., dealers or franchisees) and who therefore are unlikely to have suffered from any truly anticompetitive offense. Usually, such cases are attempts to convert simple contract disputes (compensable by ordinary damages) into triple-damage payoffs under the Clayton Act. The second most frequent private case is that brought by competitors. Because competitors are hurt only when a rival is acting procompetitively by increasing its sales and decreasing its price, the desire to hobble the defendant’s efficient practices must motivate at least some antitrust suits by competitors. 
Thus, case statistics suggest that the anticompetitive costs from “abuse of antitrust,” as New York University economists William Baumol and Janusz Ordover (1985) referred to it, may actually exceed any procompetitive benefits of antitrust laws. The case for antitrust gets no stronger when economists examine the kinds of antitrust cases brought by government. As George Stigler (1982, p. 7), often a strong defender of antitrust, summarized, “Economists have their glories, but I do not believe that antitrust law is one of them.” In a series of studies done in the early 1970s, economists assumed that important losses to consumers from limits on competition existed, and constructed models to identify the markets where these losses would be greatest. Then they compared the markets where government was enforcing antitrust laws with the markets where government should enforce the laws if consumer well-being was the government’s paramount concern. The studies concluded unanimously that the size of consumer losses from monopoly played little or no role in government enforcement of the law. Economists have also examined particular kinds of antitrust cases brought by the government to see whether anticompetitive acts in these cases were likely. The empirical answer usually is no. This is true even in price-fixing cases, where the evidence indicates that the companies targeted by the government either were not fixing prices or were doing so unsuccessfully. Similar conclusions arise from studies of merger cases and of various antitrust remedies obtained by government; in both instances, results are inconsistent with antitrust’s supposed goal of consumer well-being. If public-interest rationales do not explain antitrust, what does? A final set of studies has shown empirically that patterns of antitrust enforcement are motivated at least in part by political pressures unrelated to aggregate economic welfare. For example, antitrust is useful to politicians in stopping mergers that would result in plant closings or job transfers in their home districts. As Paul Rubin documented, economists do not see antitrust cases as driven by a search for economic improvement. Rubin reviewed all articles written by economists that were cited in a leading industrial organization textbook (Scherer and Ross 1990) generally favorable to antitrust law. Per economists’ evaluations, more bad than good cases were brought. “In other words,” wrote Rubin, “it is highly unlikely that the net effect of actual antitrust policy is to deter inefficient behavior. . . . Factors other than a search for efficiency must be driving antitrust policy” (Rubin 1995, p. 61). What might those factors be? Pursuing a point suggested by Nobel laureate Ronald Coase (1972, 1988), William Shughart argued that economists’ support for antitrust derives considerably from their ability to profit personally, in the form of full-time jobs and lucrative part-time work as experts in antitrust matters: “Far from contributing to improved antitrust enforcement, economists have for reasons of self-interest actively aided and abetted the public law enforcement bureaus and private plaintiffs in using the Sherman, Clayton and FTC Acts to subvert competitive market forces” (Shughart 1998, p. 151). About the Author Fred S. McChesney is the Class of 1967 James B. Haddad Professor of Law at Northwestern University School of Law and a professor in the Kellogg School of Management at Northwestern. Further Reading   Asch, Peter, and J. J. Seneca.
“Is Collusion Profitable?” Review of Economics and Statistics 53 (February 1976): 1–12. Baumol, William J., and Janusz A. Ordover. “Use of Antitrust to Subvert Competition.” Journal of Law and Economics 28 (May 1985): 247–265. Bittlingmayer, George. “Decreasing Average Cost and Competition: A New Look at the Addyston Pipe Case.” Journal of Law and Economics 25 (October 1982): 201–229. Bork, Robert H. The Antitrust Paradox: A Policy at War with Itself. New York: Basic Books, 1978. Bork, Robert H. “Vertical Integration and the Sherman Act: The Legal History of an Economic Misconception.” University of Chicago Law Review 22 (Autumn 1954): 157–201. Brozen, Yale. “The Antitrust Task Force Deconcentration Recommendation.” Journal of Law and Economics 13 (October 1970): 279–292. Coase, R. H. “Industrial Organization: A Proposal for Research.” In V. Fuchs, ed., Economic Research: Retrospective and Prospect. Vol. 3. Cambridge, Mass.: National Bureau of Economic Research, 1972. Reprinted in R. H. Coase, The Firm, the Market and the Law. Chicago: University of Chicago Press, 1988. Coate, Malcolm B., Richard S. Higgins, and Fred S. McChesney. “Bureaucracy and Politics in FTC Merger Challenges.” Journal of Law and Economics 33 (October 1990): 463–482. Crandall, Robert W., and Clifford Winston. “Does Antitrust Policy Improve Consumer Welfare? Assessing the Evidence.” Journal of Economic Perspectives 17, no. 4 (2003): 3–26. Demsetz, Harold. “Industry Structure, Market Rivalry, and Public Policy.” Journal of Law and Economics 16 (April 1973): 1–9. Dewey, Donald. “Information, Entry and Welfare: The Case for Collusion.” American Economic Review 69 (September 1979): 588–593. DiLorenzo, Thomas J. “The Origins of Antitrust: An Interest-Group Perspective.” International Review of Law and Economics 5 (June 1985): 73–90. Liebowitz, Stan J., and Stephen E. Margolis. Winners, Losers and Microsoft. Rev. ed. Oakland, Calif.: Independent Institute, 2001. McGee, John S. “Predatory Price Cutting: The Standard Oil (N.J.) Case.” Journal of Law and Economics 1 (1958): 137–169. Rubin, Paul H. “What Do Economists Think About Antitrust? A Random Walk down Pennsylvania Avenue.” In Fred S. McChesney and William F. Shughart II, eds., The Causes and Consequences of Antitrust: The Public-Choice Perspective. Chicago: University of Chicago Press, 1995. Scherer, F. M., and David Ross. Industrial Market Structure and Economic Performance. 3d ed. Boston: Houghton Mifflin, 1990. Shughart, William F. II. “Monopoly and the Problem of the Economists.” In Fred S. McChesney, ed., Economic Inputs, Legal Outputs: The Role of Economists in Modern Antitrust. New York: Wiley, 1998. Shughart, William F. II, and Robert D. Tollison. “The Positive Economics of Antitrust Policy: A Survey Article.” International Review of Law and Economics 5 (June 1985): 39–57. Sproul, Michael F. “Antitrust and Prices.” Journal of Political Economy 101 (1993): 741–754. Stigler, George J. “The Economists and the Problem of Monopoly.” In Stigler, The Economist as Preacher and Other Essays. Chicago: University of Chicago Press, 1982. Pp. 38–54. Stigler, George J. “The Economists and the Problem of Monopoly.” American Economic Review Papers and Proceedings 72 (May 1982): 1–11. Telser, Lester G. “Why Should Manufacturers Want Fair Trade?” Journal of Law and Economics 3 (October 1960): 86–105. Williamson, Oliver E. “Economies as an Antitrust Defense: The Welfare Tradeoffs.” American Economic Review 58 (March 1968): 18–35.   Related Links Capitalism.
Concise Encyclopedia of Economics. David Henderson, Why Predatory Pricing is Highly Unlikely. Econlib, May 2017. Pierre Lemieux, In Defense of Google. Econlib, May 2015. Richard McKenzie, In Defense of Apple. Econlib, July 2012. Roger Noll on the Economics of Sports. EconTalk, August 2012. Boudreaux on Market Failure, Government Failure, and the Economics of Antitrust Regulation. EconTalk, October 2007.


Airline Deregulation

The 1978 Airline Deregulation Act partially shifted control over air travel from the political to the market sphere. The Civil Aeronautics Board (CAB), which had previously controlled entry, exit, and the pricing of airline services, as well as intercarrier agreements, mergers, and consumer issues, was phased out under the CAB Sunset Act and expired officially on December 31, 1984. The economic liberalization of air travel was part of a series of “deregulation” moves based on the growing realization that a politically controlled economy served no continuing public interest. U.S. deregulation has been part of a greater global airline liberalization trend, especially in Asia, Latin America, and the European Union. Network industries, which are critical to a modern economy, include air travel, railroads, electrical power, and telecommunications. The air travel sector is an example of a network industry involving both flows and a grid. The flows are the mobile system elements: the airplanes, the trains, the power, the messages, and so on. The grid is the infrastructure over which these flows move: the airports and air traffic control system, the tracks and stations, the wires and cables, the electromagnetic spectrum, and so on. Network efficiency depends critically on the close coordination of grid and flow operating and investment decisions. Under CAB regulation, investment and operating decisions were highly constrained. CAB rules limiting routes and entry and controlling prices meant that airlines were limited to competing only on food, cabin crew quality, and frequency. As a result, both prices and frequency were high, and load factors—the percentage of the seats that were filled—were low. Indeed, in the early 1970s load factors were only about 50 percent. The air transport market today is remarkably different. Because airlines compete on price, fares are much lower. Many more people fly, and flight frequency remains high, but load factors are much higher—74 percent in 2003, for example. Airline deregulation was a monumental event. Its effects are still being felt today, as low-cost carriers (LCCs) challenge the “legacy” airlines that were in existence before deregulation (American, United, Continental, Northwest, US Air, and Delta). Indeed, the airline industry is experiencing a paradigm shift that reflects the ongoing effects of deregulation. Although deregulation affected the flows of air travel, the infrastructure grid remains subject to government control and economic distortions. Thus, airlines were only partially deregulated. Benefits of Partial Deregulation Even the partial freeing of the air travel sector has had overwhelmingly positive results. Air travel has dramatically increased and prices have fallen. After deregulation, airlines reconfigured their routes and equipment, making possible improvements in capacity utilization. These efficiency effects democratized air travel, making it more accessible to the general public. Airfares, when adjusted for inflation, have fallen 25 percent since 1991, and, according to Clifford Winston and Steven Morrison of the Brookings Institution, are 22 percent lower than they would have been had regulation continued (Morrison and Winston 2000). Since passenger deregulation in 1978, airline prices have fallen 44.9 percent in real terms according to the Air Transport Association.
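Both measures cited here—load factors and inflation-adjusted fares—are straightforward to compute. The Python sketch below uses hypothetical numbers (not Air Transport Association or Bureau of Labor Statistics data) to show how a load factor and a real fare change would be calculated.

```python
# Hypothetical figures for illustration only; these are not Air Transport
# Association or Bureau of Labor Statistics data.

def load_factor(passenger_miles: float, available_seat_miles: float) -> float:
    """Share of available seat-miles actually occupied by paying passengers."""
    return passenger_miles / available_seat_miles

def real_fare_change(old_fare: float, new_fare: float,
                     old_price_index: float, new_price_index: float) -> float:
    """Percentage change in fares after deflating by a general price index."""
    old_real = old_fare / old_price_index
    new_real = new_fare / new_price_index
    return (new_real - old_real) / old_real * 100

# A carrier that fills 74 of every 100 seat-miles it offers:
print(f"Load factor: {load_factor(74_000, 100_000):.0%}")

# A fare that rises from $300 to $330 while the price index rises from 100 to 160
# has fallen substantially in real terms:
print(f"Real fare change: {real_fare_change(300, 330, 100, 160):+.1f}%")
```

With these definitions, a fare that rises in nominal dollars can still fall sharply once the general price level is taken into account, which is the sense in which post-deregulation fares are lower.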
Robert Crandall and Jerry Ellig (1997) estimated that when figures are adjusted for changes in quality and amenities, passengers save $19.4 billion per year from airline deregulation. These savings have been passed on to 80 percent of passengers, accounting for 85 percent of passenger miles. The real benefits of airline deregulation are being felt today as never before, with LCCs increasingly gaining market share. The dollar savings are a direct result of allowing airlines the freedom to innovate in routes and pricing. After deregulation, the airlines quickly moved to a hub-and-spoke system, whereby an airline selected some airport (the hub) as the destination point for flights from a number of origination cities (the spokes). Because the size of the planes used varied according to the travel on that spoke, and since hubs allowed passenger travel to be consolidated in “transfer stations,” capacity utilization (“load factors”) increased, allowing fare reduction. The hub-and-spoke model survives among the legacy carriers, but the LCCs—now 30 percent of the market—typically fly point to point. The network hubs model offers consumers more convenience for routes, but point-to-point routes have proven less costly for airlines to implement. Over time, the legacy carriers and the LCCs will likely use some combination of point-to-point and network hubs to capture both economies of scope and pricing advantages. The rigid fares of the regulatory era have given way to today’s price-competitive market. After deregulation, the airlines created highly complex pricing models that include the service quality/price sensitivity of various air travelers and offer differential fare/service quality packages designed for each. The new LCCs, however, have far simpler price structures—the product of consumers’ (especially business travelers’) demand for low prices, increased price transparency from online Web sites, and decreased reliance on travel agencies. As prices have decreased, air travel has exploded. The total number of passengers that fly annually has more than doubled since 1978. Travelers now have more convenient travel options with greater flight frequency and more nonstop flights. Fewer passengers must change airlines to make a connection, resulting in better travel coordination and higher customer satisfaction. Industry Problems after Deregulation Although the gains of economic liberalization have been substantial, fundamental problems plague the industry. Some of these problems are transitional, the massive adjustments required by the end of a half century of strict regulation. The regulated airline monopolies received returns on capital that were supposed to be “reasonable” (comparable to what a company might expect to receive in a competitive market), but these returns factored in high costs that often would not exist in a competitive market. For example, the airlines’ unionized workforce, established and strengthened under regulation and held in place by the Railway Labor Act, gained generous salaries and inefficient work rules compared with what would be expected in a competitive market. Problems remain in today’s market, especially with the legacy airlines. Health of the Industry The airlines have not found it easy to maintain profitability. The industry as a whole was profitable through most of the economic boom of the 1990s. As the national economy slowed in 2000, so did profitability for the legacy airlines.
Consumers became more price-sensitive and gravitated toward the lower-cost carriers. High labor costs and the network hub business model hurt legacy airlines’ competitiveness. Hub-and-spoke systems decreased unit costs but created high fixed costs that required larger terminals, investments in information technology systems, and intricate revenue management systems. The LCCs have thus far successfully competed on price due to lower hourly employee wages, higher productivity, and no pension deficits. It remains to be seen whether the LCC cost and labor structures will change over time. The Air Transport Association reports that the U.S. airline industry experienced net losses of $23.2 billion from 2001 through 2003, though the LCCs largely remained profitable. While the September 11, 2001, terrorist attack and its aftermath are a major factor in the industry’s hardships, they only accelerated an already developing trend within the industry. The industry was experiencing net operating losses for many reasons, including the mild recession, severe acute respiratory syndrome (SARS), and the increase in LCC services and the decline in business fares relied on by legacy carriers. Higher fuel prices, residual labor union problems, fears of terrorism, and the intrusive measures that government now uses to clear travelers through security checkpoints are further drags on the industry. Remaining Domestic Economic Controls As a form of regulation, antitrust laws inhibit post-deregulation restructuring efforts, making it harder to bring salaries and work rules into line with the realities of a competitive marketplace. The antitrust regulatory laws inhibit the restructuring of corporations and block needed consolidation; the antitrust authorities view with suspicion efforts to retain higher prices. Historically, the CAB had antitrust jurisdiction over airline mergers. When Congress disbanded the CAB in 1985, it temporarily transferred merger review authority to the Department of Transportation (DOT). In 1989, the Justice Department assumed merger review jurisdiction from the DOT that, when combined with its antitrust authority under the Sherman Act, makes it the primary antitrust regulator of the airline industry. The Justice Department has contested past merger proposals, including Northwest’s attempt to gain a controlling interest in Continental and the merger of United Airlines and US Airways. Antitrust law also applies to international alliances, arrangements that attempt to ameliorate restrictive foreign ownership and competition laws. While labor contracts, airport asset management, and other business practices are themselves high barriers to restructuring, these difficulties are magnified by antitrust regulatory hurdles. Cabotage restrictions, discussed below, also limit competition. Reservation Systems During the regulatory era, rates were determined politically and changed infrequently. The CAB had to approve every fare, limiting the airlines’ ability to react to demand changes and to experiment with discount fares. After deregulation, airlines were free to set prices and to change them frequently. That was possible only because the airlines had earlier created computer reservation systems (CRSs) capable of keeping track of the massive inventory of seats on flights over a several-month period. The early CRSs allowed the travel agent to designate an origin-destination pair and call up all available flights. 
The computer screen could show only a limited number of flights at one time, of course; thus, some rule was essential to rank-order the flights shown. CRSs were available only to travel agents and, beginning in 1984, were highly regulated to ensure open access to airlines that had not developed their own CRS system. The DOT regulations restricted private agreements for guaranteeing access. However, the growth of Internet travel sites and direct access to airline Web sites created new forms of competition to the airline reservation systems. Therefore, the DOT allowed the CRS regulations to expire in 2004. Problems with Political Control of the Grid A network can be efficient only if the flows and the grid interact smoothly. The massive expansion of air travel should have resulted in comparable expansions—either in the physical infrastructure or in more sophisticated grid management. Government management of the air travel grid has resulted in political compromises that cause friction with the smooth flow across the grid. Flight delays are increasing due to a lack of aviation infrastructure and the failure to allocate air capacity efficiently. The Air Transport Association estimates that delays cost airlines and passengers more than five billion dollars per year due to the increased costs for aircraft operation and ground personnel and loss of passengers’ time. The FAA predicts that the number of passengers will increase by 60 percent and that cargo volume will double by 2010. Airports Airport construction and expansion face almost insurmountable political and regulatory hurdles. The number of federal requirements associated with airport finances has grown considerably in recent years and is tied to the awarding of grants from the federal Airport Improvement Program (AIP). Since 1978, only one major airport has been constructed (in Denver), and only a few runways have been added at congested airports. Airport construction faces significant nonpolitical barriers, such as vocal “not in my back yard” (NIMBY) opposition and environmental noise and emissions considerations. Federal law restricts the fees airports charge air carriers to amounts that are “fair and reasonable.” These fee restrictions, although promoted as a way to provide nondiscriminatory access to all aircraft, limit an airport’s ability to recover costs for air carriers’ use of airfield and terminal facilities. Allowing airports more flexibility to price takeoffs and landings based on supply and demand would also help ease congestion at overburdened airports. Air Traffic Control Air traffic control involves the allocation of capacity and has a complex history of government management. Unfortunately, the Federal Aviation Administration (FAA), which manages air traffic control, made bad upgrading decisions. The advanced system funded by the FAA was more than a decade late and never performed as hoped. The result was that the airline expansion was not met by an expanded grid, and congestion occurred. Better technology for air traffic control will help efficient navigation and routings. Global Positioning System (GPS) navigation technology holds great promise for more precise flight paths, allowing for increased airplane traffic. Ultimately, however, a privately managed system that allows for better coordination of airline investment and operation decisions will be necessary to ease congestion. Air traffic control operation is a business function distinct from the regulation of air traffic safety. 
Using pricing mechanisms to allocate the scarce resource of air traffic capacity would reduce congestion and more efficiently allocate resources. Implementing cost-based structures by privatizing air traffic control is a controversial and politically daunting issue in the United States, but twenty-nine nations—including Canada—have already separated their traffic systems from their regulating agency. Air traffic control privatization will likely be driven by the decreasing ability of the Airport and Airways Trust Fund to deliver the necessary financial support. Currently, the FAA rations flights by delay on a first-come, first-served basis—a system that creates overcrowding during peak hours. A system based on pricing at rates determined by voluntary contractual arrangements of market participants, not government regulators, would reduce this overcrowding. One of the results would be the use of “congestion pricing,” such as rush hour surcharges or early bird discounts. Airport Access FAA rules that limit the number of hourly takeoffs and landings—called “slot” controls—were adopted in 1968 as a temporary measure to deal with congestion and delays at major airports. These artificial capacity limitations—known as the high density rule—still exist at JFK, LaGuardia, and Reagan National. However, limiting supply through governmental fiat is a crude form of demand management. Allowing increased capacity and congestion pricing, and allowing major airports to use their slots to favor larger aircraft, would lead to better results. Remaining International and Economic Rules International Competition “Open Skies” agreements are bilateral agreements between the United States and other countries to open the aviation market to foreign access and remove barriers to competition. They give airlines the right to operate air services from any point in the United States to any point in the other country, as well as to and from third countries. The United States has Open Skies agreements with more than sixty countries, including fifteen of the twenty-five European Union nations. Open Skies agreements have been successful at removing many of the barriers to competition and allowing airlines to have foreign partners, access to international routes to and from their home countries, and freedom from many traditional forms of economic regulation. A global industry would work better with a globally minded set of rules that would allow airlines from one country (or investors of any sort) to establish airlines in another country (the right of establishment) and to operate domestic services in the territory of another country (cabotage). However, these agreements still fail to approximate the freedoms that most industries have when competing in other global markets. National Ownership National ownership laws are an archaic barrier to a more competitive air travel sector. These rules seem to reflect a concern for national security, even though many industries as strategic as the airline industry do not have such restrictions. Federal law restricts the percentage of foreign ownership in air transportation. Only U.S.-registered aircraft can transport passengers and freight domestically. Airline citizenship registration is limited to U.S. citizens or permanent residents, partnerships in which all partners are U.S. citizens, or corporations registered in the United States in which the chief executive officer and two-thirds of the directors are U.S. citizens and where U.S. 
citizens hold or control 75 percent of the capital stock. Only U.S. citizens are able to obtain a certificate of public convenience and necessity, a prerequisite for operation as a domestic carrier. Additional Problems Resulting from the 9/11 Response After 9/11, safety and security regulation responsibilities were given to the new Transportation Security Administration (TSA) within the Department of Homeland Security. Created just months after 9/11, the TSA is an outgrowth of the belief that only the government can be entrusted to perform certain duties, especially those related to security. No one has clearly established that a government whose employees are difficult to fire, even for incompetence, will do better than a private employer who can more easily fire incompetent workers. In September 2001, Congress passed the Air Transportation Safety and System Stabilization Act, which authorized payments of up to five billion dollars in assistance to reimburse airlines for the postattack four-day shutdown of air traffic and attributable losses through the end of 2001. It also created and authorized the Air Transportation Stabilization Board (ATSB) to provide up to ten billion dollars in loan guarantees for airlines in need of emergency capital. While the ATSB risked the kind of mission creep that is inevitable in an industry subsidy program, the deadline for applications to the ATSB has passed. Of the ten billion dollars authorized by Congress for these loan guarantees, the board actually committed less than two billion. Conclusion Air travel is a network industry, but only its flow element—the airlines—is economically liberalized. The industry is still structurally adjusting to a more competitive situation and remains subject to a large number of regulations. The capital, work rules, and compensation practices of the airline industry still reflect almost fifty years of political protection and control. We are finally seeing the kinds of internal restructuring among airlines that were expected from deregulation. Yet, government still has much to do to ensure that the airline market will thrive in the future. The FAA is a command-and-control government agency ill-suited to providing air traffic control services to a dynamic industry. Landing slots and airport space should be allocated using market prices instead of through administrative fiat. International competition will increase, and rules regarding national ownership need to change accordingly. If the government deregulates the grid and transitions toward a market solution, the benefits of flow deregulation will increase, and costs for air travelers will fall even more. About the Authors Fred L. Smith Jr. is the president of, and Braden Cox is the technology counsel with, the Competitive Enterprise Institute, a free-market public policy group based in Washington, D.C. Further Reading   Bailey, Elizabeth E. “Airline Deregulation Confronting the Paradoxes.” Regulation: The Cato Review of Business and Government 15, no. 3. Available online at: http://www.cato.org/pubs/regulation/regv15n3/reg15n3-bailey.html. Button, Kenneth, and Roger Stough. Air Transport Networks: Theory and Policy Implications. Northampton, Mass.: Edward Elgar, 2000. Crandall, Robert, and Jerry Ellig. Economic Deregulation and Customer Choice. Fairfax, Va.: Center for Market Processes, George Mason University, 1997. Available online at: http://www.mercatus.org/repository/docLib/MC_RSP_RP-Dregulation_970101.pdf. Doganis, Rigas. The Airport Business. New York: Routledge, 1992.
Havel, Brian F. In Search of Open Skies: Law and Policy for a New Era in International Aviation. A Comparative Study of Airline Deregulation in the United States and the European Union. Boston: Kluwer Law International, 1997. Morrison, Steven A., and Clifford Winston. “The Remaining Role for Government Policy in the Deregulated Airline Industry.” In Sam Peltzman and Clifford Winston, eds., Deregulation of Network Industries: What’s Next? Washington, D.C.: AEI Brookings Joint Center for Regulatory Studies, 2000. Poole, Robert W. Jr., and Viggo Butler. Airline Deregulation: The Unfinished Revolution. December 1998. Available online at: http://cei.org/pdf/1451.pdf. Poole, Robert W. Jr., and Viggo Butler. How to Commercialize Air Traffic Control. Policy Study No. 278. Los Angeles: Reason Public Policy Institute, 2001. U.S. GAO. Airline Deregulation: Changes in Airfares, Service, and Safety at Small, Medium-Sized, and Large Communities. April 1996. Report online at: http://www.gao.gov/archive/1996/rc96079.pdf.   Related Links Antitrust. Concise Encyclopedia of Economics. Price Controls. Concise Encyclopedia of Economics. Robert P. Murphy, Ensuring- And Insuring- Airline Safety. Econlib, February 2011.
