Industrial Revolution and the Standard of Living

Between 1760 and 1860, technological progress, education, and an increasing capital stock transformed England into the workshop of the world. The industrial revolution, as the transformation came to be known, caused a sustained rise in real income per person in England and, as its effects spread, in the rest of the Western world. Historians agree that the industrial revolution was one of the most important events in history, marking the rapid transition to the modern age, but they disagree vehemently about many aspects of the event. Of all the disagreements, the oldest is over how the industrial revolution affected ordinary people, often called the working classes. One group, the pessimists, argues that the living standards of ordinary people fell, while another group, the optimists, believes that living standards rose.

At one time, behind the debate was an ideological argument between the critics (especially Marxists) and the defenders of free markets. The critics, or pessimists, saw nineteenth-century England as Charles Dickens’s Coketown or poet William Blake’s “dark, satanic mills,” with capitalists squeezing more surplus value out of the working class with each passing year. The defenders, or optimists, saw nineteenth-century England as the birthplace of a consumer revolution that made more and more consumer goods available to ordinary people with each passing year. The ideological underpinnings of the debate eventually faded, probably because, as T. S. Ashton pointed out in 1948, the industrial revolution meant the difference between the grinding poverty that had characterized most of human history and the affluence of the modern industrialized nations. No economist today seriously disputes the fact that the industrial revolution began the transformation that has led to extraordinarily high (compared with the rest of human history) living standards for ordinary people throughout the market industrial economies.

The standard-of-living debate today is not about whether the industrial revolution made people better off, but about when. The pessimists claim no marked improvement in standards of living until the 1840s or 1850s. Most optimists, by contrast, believe that living standards were rising by the 1810s or 1820s, or even earlier.

The most influential recent contribution to the optimist position (and the center of much of the subsequent standard-of-living debate) is a 1983 paper by Peter Lindert and Jeffrey Williamson that produced new estimates of real wages in England for the years 1755 to 1851. These estimates are based on money wages for workers in several broad categories, including both blue-collar and white-collar occupations. The authors’ cost-of-living index attempted to represent actual working-class budgets. Lindert and Williamson’s analysis produced two striking results. First, it showed that real wages grew slowly between 1781 and 1819. Second, after 1819, real wages grew rapidly for all groups of workers. For all blue-collar workers—a good stand-in for the working classes—the Lindert-Williamson index number for real wages rose from 50 in 1819 to 100 in 1851. That is, real wages doubled in just thirty-two years. Other economists challenged Lindert and Williamson’s optimistic findings. Charles Feinstein produced an alternative series of real wages based on a different price index. In the Feinstein series, real wages rose much more slowly than in the Lindert-Williamson series.
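To see what those index numbers imply, an index that rises from 50 to 100 over thirty-two years corresponds to average compound growth of roughly 2.2 percent per year. A minimal sketch of the arithmetic (the index values come from the text; the function name is ours):

```python
def implied_annual_growth(start_index: float, end_index: float, years: int) -> float:
    """Average compound annual growth rate implied by two index values."""
    return (end_index / start_index) ** (1 / years) - 1

# Lindert-Williamson real-wage index for blue-collar workers:
# 50 in 1819, 100 in 1851.
rate = implied_annual_growth(50, 100, 1851 - 1819)
print(f"{rate:.2%} per year")  # ~2.19% per year
```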
Other researchers have speculated that the largely unmeasured effects of environmental decay more than offset any gains in well-being attributable to rising wages. Wages were higher in English cities than in the countryside, but rents


Industrial Concentration

“Industrial concentration” refers to a structural characteristic of the business sector. It is the degree to which production in an industry—or in the economy as a whole—is dominated by a few large firms. Once assumed to be a symptom of “market failure,” concentration is, for the most part, seen nowadays as an indicator of superior economic performance. In the early 1970s, Yale Brozen, a key contributor to the new thinking, called the profession’s about-face on this issue “a revolution in economics.” Industrial concentration remains a matter of public policy concern even so.

The Measurement of Industrial Concentration

Industrial concentration was traditionally summarized by the concentration ratio, which simply adds the market shares of an industry’s four, eight, twenty, or fifty largest companies. In 1982, when new federal merger guidelines were issued, the Herfindahl-Hirschman Index (HHI) became the standard measure of industrial concentration. Suppose that an industry contains ten firms that individually account for 25, 15, 12, 10, 10, 8, 7, 5, 5, and 3 percent of total sales. The four-firm concentration ratio for this industry—the most widely used number—is 25 + 15 + 12 + 10 = 62, meaning that the top four firms account for 62 percent of the industry’s sales. The HHI, by contrast, is calculated by summing the squared market shares of all of the firms in the industry: 25² + 15² + 12² + 10² + 10² + 8² + 7² + 5² + 5² + 3² = 1,366. The HHI has two distinct advantages over the concentration ratio. It uses information about the relative sizes of all of an industry’s members, not just some arbitrary subset of the leading companies, and it weights the market shares of the largest enterprises more heavily. In general, the fewer the firms and the more unequal the distribution of market shares among them, the larger the HHI. Two four-firm industries, one containing equal-sized firms each accounting for 25 percent of total sales, the other with market shares of 97, 1, 1, and 1, have the same four-firm concentration ratio (100) but very different HHIs (2,500 versus 9,412). An industry controlled by a single firm has an HHI of 100² = 10,000, while the HHI for an industry populated by a very large number of very small firms would approach the index’s theoretical minimum value of zero.

Concentration in the U.S. Economy

According to the U.S. Department of Justice’s merger guidelines, an industry is considered “concentrated” if the HHI exceeds 1,800; it is “unconcentrated” if the HHI is below 1,000. Since 1982, HHIs based on the value of shipments of the fifty largest companies have been calculated and reported in the manufacturing series of the Economic Census.1 Concentration levels exceeding 1,800 are rare. The exceptions include glass containers (HHI = 2,959.9 in 1997), motor vehicles (2,505.8), and breakfast cereals (2,445.9). Cigarette manufacturing also is highly concentrated, but its HHI is not reported owing to the small number of firms in that industry, the largest four of which accounted for 89 percent of shipments in 1997. At the other extreme, the HHI for machine shops was 1.9 the same year. Whether an industry is concentrated hinges on how narrowly or broadly it is defined, both in terms of the product it produces and the extent of the geographic area it serves. The U.S. footwear manufacturing industry as a whole is very unconcentrated (HHI = 317 in 1997); the level of concentration among house slipper manufacturers is considerably higher, though (HHI = 2,053.4). Similarly, although
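Both measures are easy to compute directly from a list of market shares. A minimal sketch using the figures from the text (function names are ours; the thresholds are the merger-guideline bands cited above):

```python
def concentration_ratio(shares, n=4):
    """Sum of the n largest market shares, in percentage points."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares."""
    return sum(s ** 2 for s in shares)

def doj_band(h):
    """Merger-guideline bands cited in the text."""
    if h > 1800:
        return "concentrated"
    if h < 1000:
        return "unconcentrated"
    return "moderately concentrated"

shares = [25, 15, 12, 10, 10, 8, 7, 5, 5, 3]  # ten-firm example; sums to 100
print(concentration_ratio(shares))  # 62
print(hhi(shares))                  # 1366
print(doj_band(hhi(shares)))        # moderately concentrated
print(hhi([25, 25, 25, 25]))        # 2500: four equal-sized firms
print(hhi([97, 1, 1, 1]))           # 9412: one dominant firm
```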


Information

Since about 1970, an important strand of economic research, sometimes referred to as information economics, has explored the extent to which markets and other institutions process and convey information. Many of the problems of markets and other institutions result from costly information, and many of their features are responses to costly information. Many of the central theories and principles in economics are based on assumptions about perfect information. Among these, three stand out: efficiency, full employment of resources, and uniform prices.

Efficiency

At least since Adam Smith, most economists have believed that competitive markets are efficient, and that firms, in pursuing their own interests, enhance the public good “as if by an invisible hand.” A major achievement of economic science during the first half of the twentieth century was finding the precise sense in which that result is true. This result, known as the Fundamental Theorem of Welfare Economics, provides a rigorous analytic basis for the presumption that competitive markets allocate resources efficiently. In the 1980s economists made clear the hidden information assumptions underlying that theorem. They showed that in a wide variety of situations where information is costly (indeed, almost always), government interventions could make everyone better off if government officials had the right incentives. At the very least these results have undermined the long-standing presumption that markets are necessarily efficient.

Full Employment of Resources

A central result (or assumption) of standard economic theory is that resources are fully employed. The economy has a variety of mechanisms (savings and inventories provide buffers; price adjustments act as shock absorbers) that are supposed to dampen the effects of any shocks the economy experiences. In fact, for the past two hundred years economies have experienced large fluctuations, and there has been massive unemployment in the slumps. Though the Great Depression of the 1930s was the most recent prolonged and massive episode, the American economy suffered major recessions from 1979 to 1982, and many European economies experienced prolonged high unemployment rates during the 1980s.

Information economics has explained why unemployment may persist and why fluctuations are so large. The failure of wages to fall so that unemployed workers can find jobs has been explained by efficiency wage theories, which argue that the productivity of workers increases with higher wages (both because employees work harder and because employers can recruit a higher-quality labor force). If information about their workers’ output were costless, employers would not pay such high wages because they could costlessly monitor output and pay accordingly. But because monitoring is costly, employers pay higher wages to give workers an incentive not to shirk.

While efficiency wage theory helps explain why unemployment may persist, other theories that focus on the implications of imperfect information in the capital markets can help explain economic volatility. One strand of this theory focuses on the fact that many of the market’s mechanisms for distributing risk, which are critical to an economy’s ability to adjust to economic shocks, are imperfect because of costly information. Most notable in this respect is the failure of equity markets. In recent years less than 10 percent of new capital has been raised via equity markets. Information economics explains why.
First, issuers of equity generally know more about the value of the shares than buyers do, and are more inclined to sell when they think buyers are overvaluing their shares. But most potential buyers know that this incentive exists and, therefore, are wary of buying. Second, shareholders have only limited control over managers. Information about what management is doing, or should be doing, to maximize shareholder value is costly. Thus, shareholders often limit the amount of “free cash” managers have to play with by imposing sufficient debt burdens to put managers’ “backs to the wall.” Managers must then exert strong efforts to meet those debt obligations, and lenders will carefully scrutinize firms’ behavior.

The fact that firms cannot (or choose not to) raise capital via equity markets means that if firms wish to invest more than their cash flow allows—or if they wish to produce more than they can finance out of their current working capital—they must turn to credit markets, and to banks in particular. From the firm’s perspective, borrowing has one major disadvantage: it imposes a fixed obligation on the firm. If it fails to meet that obligation, the firm can go bankrupt. (By contrast, an all-equity firm cannot go bankrupt.) Firms normally take actions to reduce the likelihood of bankruptcy by acting in a risk-averse manner. Risk-averse behavior, in turn, has two important consequences. First, it means that a firm’s behavior is affected by its net-worth position. When its financial position is adversely affected, it cuts back on all its activities (since there is some risk associated with virtually all activities);


Information and Prices

Modern economists excel at identifying theoretical reasons why markets might fail. While these theories may temper uncritical views of the market, it is important to note that markets do, in fact, work incredibly well. Indeed, markets work so thoroughly and quietly that their success too often goes unnoticed. Consider that the number of different ways to arrange, even in a single dimension, a mere twenty items is far greater than the number of seconds in ten billion years. Now consider that the world contains trillions of different resources: my labor, iron ore, Hong Kong harbor, the stage at the Met, countless stands of pine trees, fertile Russian plains, orbiting satellites, automobile factories—the list is endless. The number of different ways to use, combine, and recombine these resources is unimaginably colossal. And almost all of these ways are useless. It would be a mistake, for example, to combine Arnold Schwarzenegger with medical equipment and have him perform brain surgery. Likewise, it would be a genuine shame to use the fruit of Chateau Petrus’s vines to make grape juice.

Only a tiny fraction of all the possible ways to allocate resources is useful. How can we discover these ways? Random chance clearly will not work. Nor will central planning—which is really just a camouflaged method of relying on random chance. It is impossible for a central planning body even to survey the full set of possible resource arrangements, much less to rank these according to how well each will serve human purposes. That citizens of modern market societies eat and bathe regularly; wear clean clothes; drive automobiles; fly to Rome, Italy, or Branson, Missouri, for holidays; and chat routinely on cell phones is powerful evidence that our economy is amazingly well arranged. An effective means must be at work to ensure that some of the relatively very few patterns of resource use that are beneficial are actually used (rather than any of the 99.9999999+ percent of resource-use patterns that would be either useless or calamitous). The decentralized price system is that means. Critical to its functioning is the institution of private property with its associated duties and rights, including the duty to avoid physically harming and taking other people’s property, and the right to exchange property and its fruits at terms agreed on voluntarily. Each person seeks to use every parcel of his property in ways that yield him maximum benefit, either by consuming it most effectively according to his own subjective judgment or by employing it most effectively (“profitably”) in production. Market prices are vital to making such decisions.

Vital Role of Prices

Market prices are vital because they condense, in as objective a form as possible, information on the value of alternative uses of each parcel of property. Nearly every parcel of property has alternative uses. For example, a plot of land can be used to site a pumpkin patch, a restaurant, a suite of physicians’ offices, or any of many other things. If this plot of land is to be used beneficially rather than wastefully, those responsible for deciding how it will be used must be able to determine the likely worth of each possible alternative. Making such determinations requires reliable information. And market prices are a marvelously compact and reliable source of such information. Offers on the land from potential buyers or renters combine with the current owner’s assessment of the value of the land to him to create a price for the land.
Each potential user values the land by at least as much as he is willing to bid. The more intense the bidding, the more likely that each bid will reflect the maximum value each bidder places on the land. Of course, the market prices of goods or services that can be produced with the land are an especially important source of information exploited by potential users of the land to determine how much each will bid. If the land’s current owner cannot use it in a way that promises him as much value as he can get by selling it, he will sell to the buyer offering the highest price. If a commercial developer purchases the land as a site for doctors’ offices, it is because this buyer observed that the rents for office space currently paid by physicians are sufficiently high to justify his purchase of the land, construction of the buildings, and purchase and assembly of all other inputs necessary to create a suite of medical offices.
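The combinatorial claim at the start of this article is easy to verify. A quick check (the twenty items and the ten-billion-year horizon are from the text):

```python
import math

orderings = math.factorial(20)  # ways to arrange 20 items in a single line
seconds = 10_000_000_000 * 365.25 * 24 * 60 * 60  # seconds in ten billion years

print(f"{orderings:.3e}")   # ~2.433e+18
print(f"{seconds:.3e}")     # ~3.156e+17
print(orderings > seconds)  # True: 20! is roughly 7.7 times larger
```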


Innovation

“Innovation”: creativity; novelty; the process of devising a new idea or thing, or improving an existing idea or thing. Although the word carries a positive connotation in American culture, innovation, like all human activities, has costs as well as benefits. These costs and benefits have preoccupied economists, political philosophers, and artists for centuries.

Nature and Effects

Innovation can turn new concepts into realities, creating wealth and power. For example, someone who discovers a cure for a disease has the power to withhold it, give it away, or sell it to others.1 Innovations can also disrupt the status quo, as when the invention of the automobile eliminated the need for horse-powered transportation. Joseph Schumpeter coined the term “creative destruction” to describe the process by which innovation causes a free market economy to evolve.2 Creative destruction occurs when innovations make long-standing arrangements obsolete, freeing resources to be employed elsewhere, leading to greater economic efficiency. For example, when a business manager installs a new machine that replaces manual laborers, the laborers who lose their jobs are now free to put their labor into another enterprise, resulting in more productivity. In fact, in many cases, the number of jobs available will actually increase because the machinery is introduced. Henry Hazlitt provides the example of cotton-spinning machinery introduced in England in the 1760s.3 At the time, the English textile industry employed some 7,900 people, and many workers protested the introduction of machinery out of fear for their livelihoods. But in 1787 there were 320,000 workers in the English textile industry. Although the introduction of machinery caused temporary discomfort to some workers, the machinery increased the aggregate wealth of society by decreasing the cost of production.

Amazingly, concerns over technology and job loss in the textile industry continue today. One report notes that the introduction of new machinery in American textile mills between 1972 and 1992 coincided with a greater than 30 percent decrease in the number of textile jobs. However, that decrease was offset by the creation of new jobs. The authors conclude that “there is substantial entry into the industries, job creation rates are high, and productivity dynamics suggest surviving plants have emerged all the stronger while it has been the less productive plants that have exited.”4

According to Schumpeter, the process of technological change in a free market consists of three parts: invention (conceiving a new idea or process), innovation (arranging the economic requirements for implementing an invention), and diffusion (whereby people observing the new discovery adopt or imitate it). These stages can be observed in the history of several famous innovations. The Xerox photocopier was invented by Chester Carlson,5 a patent attorney frustrated by the difficulty of copying legal documents.6 After several years of tedious work, Carlson and a physicist friend successfully photocopied a phrase on October 22, 1938. But industry and government were not interested in further development of the invention. In 1944, the nonprofit Battelle Corporation,7 dedicated to helping inventors, finally showed interest. It and the Haloid Company (later called Xerox) invested in further development. Haloid announced the successful development of a photocopier on October 22, 1948, but the first commercially available copier was not sold until 1950.
After another $16 million was invested in developing the photocopier concept, the Xerox 914 became the first simple push-button plain-paper copier. An immense success, it earned Carlson more than $150 million.8 In the following years, competing firms began selling copiers, and other inventions, such as the fax machine, adapted the technology.


Health Care

Is Health Care Different?

Health care is different from other goods and services: the health care product is ill-defined, the outcome of care is uncertain, large segments of the industry are dominated by nonprofit providers, and payments are made by third parties such as the government and private insurers. Many of these factors are present in other industries as well, but in no other industry are they all present. It is the interaction of these factors that tends to make health care unique. Even so, it is easy to make too much of the distinctiveness of the health care industry. Various players in the industry—consumers and providers, to name two—respond to incentives just as in other industries.

Federal and state governments are major health care spenders. Together they account for 46 percent of national health care expenditures; nearly three-quarters of this is attributable to Medicare and Medicaid. Private health insurance pays for more than 35 percent of spending, and out-of-pocket consumer expenditures account for another 14 percent.1 Traditional national income accounts substantially understate the role of government spending in the health care sector. Most Americans under age sixty-five receive their health insurance through their employers. This form of employee compensation is not subject to income or payroll taxes, and as a result, the tax code subsidizes employer purchase of employee health insurance. The Joint Economic Committee of the U.S. Congress estimated that in 2002, the federal tax revenue forgone as a result of this tax “subsidy” equaled $137 billion.2

Risk and Insurance

Risk of illness and the attendant cost of care lead to the demand for health insurance. Conventional economics argues that the probability of purchasing health insurance will be greater when the consumer is particularly risk averse, when the potential loss is large, when the probability of loss is neither too large nor too small, and when incomes are lower. The previously mentioned tax incentive for the purchase of health insurance increases the chances that health insurance will be purchased. Indeed, the presence of a progressive income tax system implies that higher-income consumers will buy even more insurance. The 2002 Current Population Survey reports that nearly 83 percent of the under-age-sixty-five population in the United States had health insurance. More than three-quarters of these people had coverage through an employer, fewer than 10 percent purchased coverage on their own, and the remainder had coverage through a government program. Virtually all of those aged sixty-five and older had coverage through Medicare. Nonetheless, approximately 43.3 million Americans did not have health insurance in 2002.3

The key effect of health insurance is to lower the out-of-pocket price of health services. Consumers purchase goods and services up to the point where the marginal benefit of the item is just equal to the value of the resources given up. In the absence of insurance a consumer may pay sixty dollars for a physician visit. With insurance the consumer is responsible for paying only a small portion of the bill, perhaps only a ten-dollar copay. Thus, health insurance gives consumers an incentive to use health services that have only a very small benefit even if the full cost of the service (the sum of what the consumer and the insurer must pay) is much greater. This overuse of medical care in response to an artificially low price is an example of “moral hazard” (see insurance).
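The overuse incentive can be made concrete with a stylized numeric sketch. The sixty-dollar visit price and ten-dollar copay come from the text; the linear marginal-benefit schedule and its parameters are purely hypothetical:

```python
def visits_demanded(price, max_benefit=100.0, step=10.0):
    """Visits consumed when each successive visit is worth 'step' dollars less,
    starting from a first visit worth 'max_benefit' dollars (hypothetical)."""
    return max(0.0, (max_benefit - price) / step)

FULL_PRICE = 60.0  # resource cost of a physician visit (from the text)
COPAY = 10.0       # out-of-pocket price with insurance (from the text)

q_uninsured = visits_demanded(FULL_PRICE)  # 4.0 visits
q_insured = visits_demanded(COPAY)         # 9.0 visits

# The extra five visits are each worth between $10 and $60 to the patient
# but cost $60 apiece to produce: the moral hazard wedge.
print(q_uninsured, q_insured)
```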
Strong evidence of the moral hazard from health insurance comes from the RAND Health Insurance Experiment, which randomly assigned families to health insurance plans with various coinsurance and deductible amounts. Over the course of the study, those required to pay none of the bill used 37 percent more physician services than those who paid 25 percent of the bill. Those with “free care” used 67 percent more than those who paid virtually all of the bill. Prescription drugs were about as price sensitive as physician services. Hospital services were less price sensitive, but ambulatory mental health services were substantially more responsive to lower prices than were physician visits.4

Is the Spending Worth It?

National health care spending in 2002 was $1.55 trillion, 14.9 percent of GDP. By comparison, the manufacturing sector constituted only 12.9 percent of GDP. Adjusted for inflation, health care spending in the United States increased by nearly 102 percent over the 1993-2002 period. Hospital services reflect 31 percent of spending; professional services, 22 percent; and drugs, medical supplies, and equipment reflect nearly 14 percent. David Cutler and Mark McClellan note that between 1950 and 1990 the present value of per person medical spending in the United States increased by $35,000 and life expectancy increased by seven years. An additional year of life is conventionally valued at $100,000, and so, using a 3 percent real interest rate, the present value of the extra years is $135,000. Thus the extra spending on medical care is worth the cost if medical spending accounts for more than one-quarter ($35,000/$135,000) of the increase in longevity. Researchers have found that the substantial improvements in the treatment of heart attacks and low-birth-weight births over this period account, just by themselves, for one-quarter of the overall mortality reduction. Thus, the increased health spending seems to have been worth the cost.5 This does not mean that there is no moral hazard. Much spending is on things that have no effect on mortality and little effect on quality of life, and these are encouraged when the patient pays only a fraction of the bill.

Taxes and Employer-Sponsored Health Insurance

There are three reasons why most people under age sixty-five get their health insurance through an employer. First, employed people, on average, are healthier than those who are unemployed; therefore, they have fewer insurance claims. Second, the sales and administrative costs of group policies are lower. Third, health insurance premiums paid by an employer are not taxed. Thus, employers and their employees have a strong incentive to substitute broader and deeper health insurance coverage for money wages. Someone in the 27 percent federal income tax bracket, paying 5 percent state income tax and 7.65 percent in Social Security and Medicare taxes, would find that an extra dollar of employer-sponsored health insurance effectively costs him less than sixty-one cents.

Workers, not employers, ultimately pay for the net-of-taxes cost of employer-sponsored health insurance. Employees are essentially paid the value of what they produce. Compensation can take many forms: money wages, vacation days, pensions, and health insurance coverage. If health insurance is added to the compensation bundle or if the health insurance becomes more expensive, something else must be removed from the bundle. Perhaps the pension plan is reduced; perhaps a wage increase is smaller than it otherwise would have been.
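The sixty-one-cent figure is just the complement of the combined marginal tax rates. A quick check using the rates from the text:

```python
def net_cost_per_premium_dollar(federal=0.27, state=0.05, payroll=0.0765):
    """After-tax wages forgone per dollar of untaxed employer-paid premiums."""
    return 1.0 - (federal + state + payroll)

print(f"${net_cost_per_premium_dollar():.4f}")  # $0.6035 -- under sixty-one cents
```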
A recent study demonstrates the effects of rising insurance premiums on wages and other benefits in a large firm. This firm provided employees with wages and “benefits credits” that they could spend on health insurance, pensions, vacation days, and so on. Workers could trade wages for additional benefits credits, and vice versa. Health insurance premiums on all plans increased each year. When all health insurance premiums increased, the workers switched to relatively less expensive health plans, took fewer other benefits, and reduced their take-home pay. A 10 percent increase in health insurance premiums led to increased insurance expenditures of only 5.2 percent because many workers shifted to relatively cheaper health plans offered by the employer. The bulk of these higher expenditures (71 percent) was paid for with lower take-home pay; 29 percent, by giving up some other benefits.6 Thus, if insurance premiums increased, on average, by $200, the typical worker spent $104 more on coverage and paid for this by reducing take-home pay by $74 and giving up $30 in other benefits.

These so-called compensating wage differentials, reductions in wages due to higher nonwage benefits, have important policy implications. They imply, for example, that a governmental requirement that all employers provide health insurance will result in lower wages for the affected workers.

Growth and Effects of Managed Care

The health care industry has undergone fundamental changes since 1990 as a result, in large part, of the growth of managed care. As recently as 1993, 49 percent of insured workers had coverage through a conventional insurance plan; in 2002 only 5 percent did so. The rest were in health maintenance organizations (HMOs), preferred provider organizations (PPOs), or other forms of managed care. Unlike conventional insurance plans, managed care plans provide coverage only for care received from a selected set of providers in a community. The basic idea of managed care is to limit the moral hazard that comes from overuse of health care, thus keeping insurance premiums lower than otherwise and potentially making the insured person, his employer, and the insurance company better off. An HMO typically provides coverage only if the care is delivered by a member of its hospital, physician, or pharmacy panel. PPOs allow subscribers to use nonpanel providers, but only if the subscriber pays a higher out-of-pocket price. Conventional plans allow subscribers to use any licensed provider in the community, usually for the same out-of-pocket price.

Managed care changed the nature of competition among providers. Prior to the growth of managed care, hospitals competed for patients (and their physicians) by providing higher-quality care, more amenities, and more services. This so-called medical arms race resulted in the unusual economic circumstance that more hospitals in a market resulted in higher, not lower, prices. Conventional insurers (as well as government programs) essentially paid providers on a cost basis. The more that was spent, the more that was received. So providers rationally competed along dimensions that mattered. Managed care changed this by the use of “selective contracting.” Not every provider in the community got a contract from the managed care plan. Contracts were awarded based on quality, amenities, services, and price.
Research has demonstrated that in the presence of selective contracting, the usual laws of economics apply: the presence of more providers in a market results in lower prices, more idle capacity results in lower prices, and a larger market share on the part of an insurer results in lower prices paid to providers. As a consequence, health care costs increased less rapidly than they otherwise would have, and health care markets have become much more competitive.7

Managed care savings have been called illusory. The plans have been accused of enrolling healthier individuals and providing less intense care. It is true that managed care plans disproportionately attract healthier subscribers. If this were all there was to managed care, the differences in costs between managed care and conventional coverage would be illusory. However, a 2001 study demonstrates that the innovation offered by managed care is its ability to negotiate lower prices. The authors examined the mix of enrollees, the service intensity, and the prices paid for care among Massachusetts public employees in conventional and HMO plans. The focus was on enrollees with one of eight medical conditions. Across these eight conditions, the HMOs had per capita plan costs that were $107 lower, on average. Fifty-one percent of the difference was attributable to the younger, healthier individuals the HMOs enrolled; 5 percent was attributable to less-intense treatments; and 45 percent of the difference was attributable to lower negotiated prices. The conventional plan paid more than $72,600, on average, for coronary artery bypass graft surgery, while the HMO plans in the study, on average, paid less than $52,000.8

Selective contracting arguably led to the slower rate of increase in health insurance premiums through the mid-1990s. Since that time insurance premiums have increased more rapidly. Health economists believe that this change is a result of consumers’ unwillingness to accept the limited provider choice that comes with selective contracting, as well as from the reduction in competition that has resulted from consolidation in the health care industry.

Government-Provided Health Insurance

Medicare is a federal tax-subsidy program that provides health insurance for some forty million persons aged sixty-five and older in the United States. Medicare Part A, which provides hospital and limited nursing home care, is funded by payroll taxes imposed on both employees and employers. Part B covers physician services. Beneficiaries pay 25 percent of these costs through a monthly premium; the other 75 percent of Part B costs is paid from general tax revenues. Part C, now called “Medicare Advantage,” allows beneficiaries to join Medicare managed care plans. These plans are paid from Part A and Part B revenues. Part D is the new Medicare prescription drug program enacted in 2003 but not fully implemented until 2006.

In 1983 Medicare began paying hospitals on a diagnosis-related group (DRG) basis; that is, payments were made for more than five hundred specific inpatient diagnoses. Prior to DRGs, hospitals were paid on an allowable-cost basis. The DRG system changed the economic incentives facing hospitals, reduced the average length of stay, and reduced Medicare expenditures relative to the old system. Medicare also began paying physicians based on a fee schedule derived from a resource-based relative value scale (RBRVS) that ranks procedures based on their complexity, effort, and practice costs.
As such, the RBRVS harkens back to the discredited labor theory of value (see marxism). Medicare payments, therefore, do not necessarily reflect market prices and are likely to over- or underpay providers relative to a market or competitive bidding approach. Thus, it is not surprising that physicians have argued that the system pays less than costs and that some have begun to refuse to accept new Medicare patients. Moreover, the Medicare program effectively prohibits physicians from accepting payments higher than the fee schedule from Medicare beneficiaries. The result is a system of price controls that will result in shortages whenever the fee schedule is below the market-clearing price.

Medicaid, a federal-state health care program for the poor, covers more than forty million people. The federal government pays 50-85 percent of the cost of the program, depending on the relative per capita income of the state. States have considerable flexibility in determining eligibility and the extent of coverage within broad federal guidelines. Medicaid is essentially three distinct programs—one for low-income pregnant women and children, one for the disabled, and one for nursing home care for the elderly. Approximately 47 percent of recipients are children, but the aged and disabled receive more than 70 percent of the payments. Much of this is due to nursing home expenditures; Medicaid provides approximately 40 percent of nursing home revenue.

State governments have gamed the system to obtain federal matching Medicaid funds. A state would tax a hospital or nursing home based on Medicaid days of care or the number of licensed beds. It would then match the taxes with federal matching dollars at a ratio of two to one or three to one, and essentially return the taxed dollars to the provider. When the federal government said this was not permissible, the states dropped the taxes and asked for “provider contributions” from the hospitals, nursing homes, and so on. Most states used the new federal money for health care services. Others simply reduced general fund expenditures by the amount of the new federal dollars—essentially using federal Medicaid dollars to fund road construction and other state functions. Neither “taxes” nor “contributions” may now be used. The states do, however, funnel state mental health and other state health program dollars through Medicaid to take advantage of the matching grants. The expansion of the Medicaid program, particularly for children, also has had the effect of crowding out private coverage. One estimate suggests that for every two new Medicaid children enrolled, one child lost private coverage.9

Regulation and the Health Care Market

The health care industry is one of the most heavily regulated industries in the United States. These regulations stem from efforts to ensure quality, to facilitate the government’s role as purchaser of care, and to respond to provider efforts to increase the demand for their services. Hospitals and nursing homes are licensed by the state and must comply with quality and staffing requirements to maintain eligibility for participation in federal programs. Physicians and other health professionals are licensed by the states. Prescription drugs and medical devices are regulated by the Food and Drug Administration (see pharmaceuticals: economics and regulation). Some state governments require government permission before allowing a hospital or nursing home to be built or extensively changed.
All of the above regulations restrict supply and raise the price of health care; interestingly, those who lobby for such regulations are medical providers, not consumers, presumably because they want to limit competition. Some state governments limit the extent to which managed care plans may selectively contract with providers. All state governments have imposed laws governing the content of insurance packages and the factors that may be used to determine insurance rates. While these regulations may enhance quality, they also impose costs that raise the price of health insurance and increase the number of uninsured. In testimony before the Joint Economic Committee of the Congress, one analyst reported the annual net cost of regulation in the health care industry to be $128 billion.10

Industry Structure

In 2002, there were 4,949 nonfederal short-term hospitals in the United States. Over the last decade the hospital sector has been consolidating: the number of hospitals declined by 6.4 percent, and hospital beds per capita declined by more than 18 percent.11 In addition, the sector has been reorganizing itself into systems of hospitals that are commonly owned or managed. Nearly 46 percent of hospitals were part of a system in 2002, up from only 32 percent in 1994. The hospital sector has long been dominated by not-for-profit organizations. Only 14.4 percent of the industry is legally for-profit; this ratio has been constant for the last decade. There is some evidence that the consolidation and reorganization have been a reaction to the competition generated by the selective contracting actions of managed care. In 2001, the average cost of a stay at a government hospital was $7,400—24 percent more than at a private for-profit hospital. A study released in 2000 found that for-profit hospitals offer better-quality care.12

There were 272 private sector physicians per 100,000 population in the United States in 2002, an 8 percent increase since 1993, but a decline since 2000. There has been a steady decline in the proportion of physicians in solo practice; by 2001 more than three-quarters of physicians were in group practice or were employees.13 Physicians have been accused of inducing demand for their services because of the information asymmetry they hold relative to their patients. However, this argument has lost much of its impact in the last decade. Physicians’ inflation-adjusted average income has declined. Primary care physician incomes declined by 6.4 percent between 1995 and 1999, and specialist income declined by 4 percent.14

Industry Outlook

The industry is faced with rising health care costs and an increasing number of uninsured. In the private sector the cost increases have led to an interest in consumer-directed health care. The idea is to provide health insurance payments only for expenditures in excess of a high deductible. The expectation is that consumers who must pay the full price for most health services will buy such services only when the expected benefits are at least equal to the full costs. Others see the reemergence of more aggressive selective contracting by managed care firms as a way to keep costs under control. The government is expected to be more aggressive in promoting competition among providers as well. The retirement of the baby boom generation will put more pressure on Medicare. Indeed, the Medicare trustees reported in 2004 that the costs of the Medicare program will exceed those of Social Security by 2024.
Medicare Part A—hospital coverage—is estimated to be unable to cover its expenses starting in 2019.15 Interestingly, the 5 percent of Medicare fee-for-service beneficiaries who die each year account for one-fourth of all Medicare inpatient expenditures.16 Tax increases, benefit reductions, and/or wholesale reform of the program will have to occur. The number of uninsured will increase if health insurance continues to become more expensive. Some have proposed expansions of existing public programs; others have proposed “refundable” tax credits as a means of subsidizing targeted groups.17 Still others argue for reductions in regulations and a greater reliance on consumer-directed health plans as a means of lowering costs and expanding insurance coverage (see health insurance).

The Inefficiency of Socialized Medicine
Patricia M. Danzon

Although other countries with more centralized government control over health budgets appear to have controlled costs more successfully, that does not mean that they have produced a more efficient result. In any case, reported statistics may be misleading. Efficient resource allocation requires that resources be spent on medical care as long as the marginal benefit exceeds the marginal cost. Marginal benefits are very hard to measure, but certainly they include more subjective values than the crude measures of morbidity and mortality that are widely used in international comparisons.

In addition to forgone benefits, government health care systems have hidden costs. Any insurance system, public or private, must raise revenues, pay providers, control moral hazard, and bear some nondiversifiable risk. In a private insurance market such as that in the United States, the costs of performing these functions can be measured by insurance overhead costs of premium collection, claims administration, and return on capital. Public monopoly insurers must also perform these functions, but their costs tend to be hidden and do not appear in health expenditure accounts. Tax financing entails deadweight costs that have been estimated at more than seventeen cents per dollar raised—far higher than the 1 percent of premiums required by private insurers to collect premiums. The use of tight physician fee schedules gives doctors incentives to reduce their own time and other resources per patient visit; patients must therefore make multiple visits to receive the same total care. But these hidden patient time costs do not appear in standard measures of health care spending.

Both economic theory and a careful review of the evidence that goes beyond simple accounting measures suggest that a government monopoly of financing and provision achieves a less efficient allocation of resources to medical care than would a well-designed private market system. The performance of the current U.S. health care system does not provide a guide to the potential functioning of a well-designed private market system. Cost and waste in the current U.S. system are unnecessarily high because of tax and regulatory policies that impede efficient cost control by private insurers, while at the same time the system fails to provide for universal coverage.

Excerpt from Patricia M. Danzon, “Health Care Industry,” in David R. Henderson, ed., The Fortune Encyclopedia of Economics (New York: Warner Books, 1993), 679-680.

About the Author

Michael A. Morrisey is a professor of health economics in the School of Public Health and director of the Lister Hill Center for Health Policy at the University of Alabama at Birmingham.
Further Reading

Dranove, David. The Economic Evolution of American Health Care: From Marcus Welby to Managed Care. Princeton: Princeton University Press, 2000.
Morrisey, Michael A. “Competition in Hospital and Health Insurance Markets: A Review and Research Agenda.” Health Services Research 36, no. 1, pt. 2 (2001): 191-221.
Morrisey, Michael A. Cost Shifting in Health Care: Separating Evidence from Rhetoric. Washington, D.C.: AEI Press, 1994.
Pauly, Mark V. Health Benefits at Work: An Economic and Political Analysis of Employment-Based Health Insurance. Ann Arbor: University of Michigan Press, 2000.
Pauly, Mark V., and John S. Hoff. Responsible Tax Credits for Health Insurance. Washington, D.C.: AEI Press, 2002.

Footnotes

1. Katharine Levit et al., “Health Spending Rebound Continues in 2002,” Health Affairs 23, no. 1 (2004): 147-159.
2. U.S. Congress, Joint Economic Committee, “How the Tax Exclusion Shaped Today’s Private Health Insurance Market,” December 17, 2003.
3. Paul Fronstin, “Sources of Health Insurance and Characteristics of the Uninsured: Analysis of the March 2003 Current Population Survey,” EBRI Issue Brief, no. 264 (Washington, D.C.: Employee Benefit Research Institute, 2003).
4. Joseph P. Newhouse et al., Free for All? Lessons from the RAND Health Insurance Experiment (Cambridge: Harvard University Press, 1993).
5. David A. Cutler and Mark McClellan, “Is Technology Change in Medicine Worth It?” Health Affairs 20, no. 5 (2001): 11-29.
6. Dana P. Goldman, N. Sood, and Arlene A. Leibowitz, “The Reallocation of Compensation in Response to Health Insurance Premium Increases,” NBER Working Paper no. 9540, National Bureau of Economic Research, Cambridge, Mass., 2003.
7. Michael A. Morrisey, “Competition in Hospital and Health Insurance Markets: A Review and Research Agenda,” Health Services Research 36, no. 1, pt. 2 (2001): 191-221.
8. Daniel Altman et al., “Enrollee Mix, Treatment Intensity, and Cost in Competing Indemnity and HMO Plans,” Journal of Health Economics 22, no. 1 (2003): 23-45.
9. David Cutler and Jonathan Gruber, “Medicaid and Private Health Insurance: Evidence and Implications,” Health Affairs 16, no. 1 (1997): 194-200.
10. Christopher J. Conover, Testimony before the Joint Economic Committee, U.S. Congress, May 13, 2004.
11. American Hospital Association, Hospital Statistics 2004 (Chicago: AHA, 2004).
12. Mark McClellan and Douglas Staiger, “Comparing Hospital Quality at For-Profit and Not-for-Profit Hospitals,” NBER Working Paper no. 7324, National Bureau of Economic Research, Cambridge, Mass., 2000.
13. Kaiser Family Foundation, Trends and Indicators in the Changing Health Care Marketplace, 2004 Update, May 19, 2004, online at: http://www.kff.org/insurance/7031/index.cfm.
14. Marie C. Reed and Paul B. Ginsburg, Behind the Times: Physician Income, 1995-1999, Center for Studying Health System Change, Data Bulletin 24, March 2003.
15. Centers for Medicare and Medicaid Services, 2004 Annual Report of the Board of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds, March 23, 2004, online at: http://www.cms.hhs.gov/publications/trusteesreport/2004/secib.asp.
16. Amber E. Barnato, Mark B. McClellan, Christopher R. Kagay, and Alan M. Garber, “Trends in Inpatient Treatment Intensity Among Medicare Beneficiaries at the End of Life,” Health Services Research 39, no. 2 (2004): 363-376.
17. Mark V. Pauly and John S. Hoff, Responsible Tax Credits for Health Insurance (Washington, D.C.: AEI Press, 2002).


Great Depression

A worldwide depression struck countries with market economies at the end of the 1920s. Although the Great Depression was relatively mild in some countries, it was severe in others, particularly in the United States, where, at its nadir in 1933, 25 percent of all workers and 37 percent of all nonfarm workers were completely out of work. Some people starved; many others lost their farms and homes. Homeless vagabonds sneaked aboard the freight trains that crossed the nation. Dispossessed cotton farmers, the “Okies,” stuffed their possessions into dilapidated Model Ts and migrated to California in the false hope that the posters about plentiful jobs were true. Although the U.S. economy began to recover in the second quarter of 1933, the recovery largely stalled for most of 1934 and 1935. A more vigorous recovery commenced in late 1935 and continued into 1937, when a new depression occurred. The American economy had yet to fully recover from the Great Depression when the United States was drawn into World War II in December 1941. Because of this agonizingly slow recovery, the entire decade of the 1930s in the United States is often referred to as the Great Depression.

The Great Depression is often called a “defining moment” in the twentieth-century history of the United States. Its most lasting effect was a transformation of the role of the federal government in the economy. The long contraction and painfully slow recovery led many in the American population to accept and even call for a vastly expanded role for government, though most businesses resented the growing federal control of their activities. The federal government took over responsibility for the elderly population with the creation of Social Security and gave the involuntarily unemployed unemployment compensation. The Wagner Act dramatically changed labor negotiations between employers and employees by promoting unions and acting as an arbiter to ensure “fair” labor contract negotiations. All of this required an increase in the size of the federal government. During the 1920s, there were, on average, about 553,000 paid civilian employees of the federal government. By 1939 there were 953,891 paid civilian employees, and there were 1,042,420 in 1940. In 1928 and 1929, federal receipts on the administrative budget (the administrative budget excludes any amounts received for or spent from trust funds and any amounts borrowed or used to pay down the debt) averaged 3.80 percent of GNP, while expenditures averaged 3.04 percent of GNP. In 1939, federal receipts were 5.50 percent of GNP, while federal expenditures had more than tripled to 9.77 percent of GNP. These figures provide an indication of the vast expansion of the federal government’s role during the depressed 1930s.

The Great Depression also changed economic thinking. Because many economists and others blamed the depression on inadequate demand, the Keynesian view that government could and should stabilize demand to prevent future depressions became the dominant view in the economics profession for at least the next forty years. Although an increasing number of economists have come to doubt this view, the general public still accepts it. Interestingly, given the importance of the Great Depression in the development of economic thinking and economic policy, economists do not completely agree on what caused it. Recent research by Peter Temin, Barry Eichengreen, David Glasner, Ben Bernanke, and others has led to an emerging consensus on why the contraction began in 1928 and 1929.
There is less agreement on why the contraction phase was longer and more severe in some countries and why the depression lasted so long in some countries, particularly the United States.

The Great Depression that began at the end of the 1920s was a worldwide phenomenon. By 1928, Germany, Brazil, and the economies of Southeast Asia were depressed. By early 1929, the economies of Poland, Argentina, and Canada were contracting, and the U.S. economy followed in the middle of 1929. As Temin, Eichengreen, and others have shown, the larger factor that tied these countries together was the international gold standard. By 1914, most developed countries had adopted the gold standard, with a fixed exchange rate between the national currency and gold—and therefore between national currencies. In World War I, European nations went off the gold standard to print money, and the resulting price inflation drove large amounts of the world’s gold to banks


Health Insurance

The Birth of the “Blues”

In the 1930s and 1940s, a competitive market for health insurance developed in many places in the United States. Typically, premiums tended to reflect risks, and insurers aggressively monitored claims to keep costs down and prevent abuses. Following World War II, however, the market changed radically. Hospitals had created Blue Cross in 1939, and doctors started Blue Shield at about the same time. Under pressure from hospital and physician organizations, the “Blues” won competitive advantages from state governments and special discounts from medical providers. Once the Blues had used these advantages to gain a monopoly, the medical community was in a position to refuse to deal with commercial insurers unless they adopted many of the same practices followed by the Blues. The federal government also later adopted some of these practices through its Medicare (for the elderly) and Medicaid (for the poor) programs.1

Cost-Plus Finance

Four characteristics of Blue Cross/Blue Shield health insurance fundamentally shaped the way Americans paid for health care in the postwar period.

First, hospitals were reimbursed on a cost-plus basis. If Blue Cross patients accounted for 40 percent of a hospital’s total patient days, Blue Cross was expected to pay for 40 percent of the hospital’s total costs. If Medicare patients accounted for one-third of patient days, Medicare paid one-third of the total costs. Other insurers reimbursed hospitals in much the same way. For the most part, physicians and hospital managers were free to incur costs as they saw fit. The role of insurers was to pay the bills with few questions asked.

Second, the philosophy of the Blues was that health insurance should cover all medical costs—even routine checkups and diagnostic procedures. The early Blue plans had no deductibles and no copayments; insurers paid the total bill, and patients and physicians made choices with little interference from insurers. Therefore, health insurance was not really insurance; it was prepayment for the consumption of medical care.

Third, the Blues priced their policies based on “community rating.” In the early days, this meant that everyone in a given geographical area was charged the same price for health insurance, regardless of age, sex, occupation, or any other factor related to differences in real health risks. Even though a sixty-year-old can be expected to incur four times the health care costs of a twenty-five-year-old, for example, both paid the same premium. In this way, higher-risk people were undercharged and lower-risk people were overcharged.

Fourth, instead of pricing their policies to generate reserves that would pay bills not presented until future years (as life insurers and property and casualty insurers do), the Blues adopted a pay-as-you-go approach to insurance. This meant that each year’s premium income paid for that year’s health care costs. If a policyholder developed an illness that required treatment over several years, in each successive year insurers had to collect additional premiums from all policyholders to pay those additional costs.

Even though most health care and most health insurance were provided privately, the U.S. health care system developed into a regulated, institutionalized market dominated by nonprofit bureaucracies. Such a market is very different from a truly competitive market.
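The cross-subsidy under community rating (the third characteristic above) is easy to see in a stylized calculation. The four-to-one cost ratio comes from the text; the dollar amounts and pool composition are hypothetical:

```python
# Hypothetical expected annual costs; the 4x ratio is from the text.
YOUNG_COST = 500.0         # expected cost, twenty-five-year-old (assumed)
OLD_COST = 4 * YOUNG_COST  # expected cost, sixty-year-old (4x, per the text)
N_YOUNG, N_OLD = 70, 30    # assumed pool composition

# Community rating: everyone pays the pooled average cost.
community_premium = (N_YOUNG * YOUNG_COST + N_OLD * OLD_COST) / (N_YOUNG + N_OLD)

print(community_premium)               # 950.0
print(community_premium - YOUNG_COST)  # +450.0: low risks overcharged
print(community_premium - OLD_COST)    # -1050.0: high risks undercharged
```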
Indeed, the primary reason that the medical community created the Blues was to avoid the consequences of a competitive market—including vigorous price competition and careful oversight of provider behavior by third-party payers.

One area where consumers immediately notice that the medical marketplace is different is hospital prices. Even today, most patients cannot find out in advance what a routine surgical procedure will cost them. When discharged, they receive lengthy itemized bills that even physicians find difficult to understand. Thus, the buyers of hospital services (the patients) cannot discover the price before buying and cannot understand the charges afterward.

Contrast this experience with the market for cosmetic surgery. Because neither public nor private insurance any longer covers cosmetic surgery, patients pay with their own funds. And even though many parties are involved in supplying the service (physician, nurse, anesthetist, and the hospital), patients are quoted a single package price in advance. Moreover, during the past decade, the real price of cosmetic surgery actually fell, while prices of other medical services rose faster than the rate of inflation. Consumers spending their own money have achieved something that few health insurers have.2

Managed Care

For all its faults, the cost-plus approach to health care finance worked tolerably well until the establishment of Medicare and Medicaid in 1965. These two programs unleashed a tidal wave of new demand. Partly in response, an era of technological innovation emerged, with opportunities to spend expanding in every direction. Since there were no market-based mechanisms to deal with these pressures, double-digit annual increases in health care spending were inevitable.

The system began to unravel in the 1970s and 1980s. Large employers began to manage their own health care plans, started paying hospitals based on set charges rather than on costs, and negotiated price discounts. Through the Medicare program, the federal government began paying hospitals fixed prices for surgical procedures (the Prospective Payment System). Health maintenance organizations (HMOs) emerged as competitors to traditional fee-for-service insurance. In 1980, fewer than ten million people were enrolled in HMOs. Today, more than seventy-four million are, about one in four Americans. Three-fourths of all employees with health insurance are covered by some type of managed care.3

What difference has this change made? First of all, it has meant fewer choices for patients and doctors. Only a few years ago, a person with private health insurance could see any doctor, enter any hospital, or (with a prescription) obtain any drug. Today, things are different. In general, patients must choose from a list of approved doctors covered by their health plans. And because employers switch health plans and employees often switch jobs, long-term relationships between patients and physicians are hard to form. Moreover, many people cannot see a specialist without a referral from a “gatekeeper” family physician or even get treatment at a hospital emergency room without prior (telephone) approval from their managed care organization. Patients who fail to follow the rules may have to pay part or all of the bill out of their own pockets. Under managed care, doctors’ choices have been curtailed even more than patients’ choices.
Not long ago, most doctors ordered tests, prescribed drugs, admitted patients to hospitals or referred them to specialists, and performed procedures based on their own experience and professional judgment. No longer. Now doctors who want to be on the “approved” list must agree to practice medicine based on a health plan’s guidelines. For most doctors, the guidelines mean fewer tests, fewer referrals, and fewer hospital admissions.

By the end of the 1990s, though, managed care plans faced a backlash from patients and doctors. Politicians threatened to create a patients’ bill of rights. In response, the plans began to loosen their control over patient access to specialists and expensive treatments, and the rate of increase in health care costs began to rise.

Consumer-Driven Health Care

As the twenty-first century began, many large employers and some large health insurers became convinced that a market-based solution was the answer to U.S. health care problems. Consumer-driven health care (CDHC), defined narrowly, refers to health plans in which employees have personal health accounts from which they pay medical expenses directly. The phrase is sometimes used more loosely to refer to defined-contribution health plans, under which employees receive a fixed-dollar contribution from an employer to choose among various plans. Those who opt for plans with rich benefits pay more of their own money in addition to the employer’s contribution, while those who choose bare-bones health plans contribute less of their own money.
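The defined-contribution arrangement is easy to state as arithmetic. A minimal sketch, with hypothetical premiums and a hypothetical employer contribution (none of these figures come from the article):

```python
# Defined-contribution health plan: the employer contributes a fixed dollar
# amount; the employee pays whatever the chosen plan costs beyond that.
# All figures are hypothetical.
EMPLOYER_CONTRIBUTION = 4_000  # fixed annual employer contribution, dollars

plans = {"bare-bones": 3_500, "standard": 5_000, "rich benefits": 7_500}
for name, premium in plans.items():
    employee_share = max(0, premium - EMPLOYER_CONTRIBUTION)
    print(f"{name}: premium ${premium:,}, employee pays ${employee_share:,}")
```

The design works at the margin: an employee who upgrades from the standard plan to the rich one bears the full extra cost, so the choice is made with the employee’s own money.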
As early as 1996, a federal pilot program allowed the self-employed and employees of small businesses to have tax-free Medical Savings Accounts (MSAs) in conjunction with high-deductible health insurance.4 In 2002, a U.S. Treasury Department ruling allowed large companies to implement similar plans, called Health Reimbursement Arrangements (HRAs).5 And as of January 1, 2004, all nonelderly Americans who have high-deductible health insurance can also have Health Savings Accounts (HSAs).6 Regardless of the acronym, the idea behind all these efforts is much the same: to empower individual patients and encourage them to make the tough choices between health care and other uses of their money. The proponents expect to unleash into the medical marketplace an army of savvy consumers who can compare prices, investigate quality, bargain for services, and so on. Among the expected responses of suppliers are “focus factories”—highly efficient producers who specialize in treating only a few diseases. Yet even if consumer-driven health care performs as well as advertised, five serious problems with the health care system remain.

Problem One: Medicare and Medicaid

While change has been rapid in the private sector, government programs have been slow to evolve. Medicare today still resembles the Blue Cross plan that it copied forty years ago. That is why the program does not cover prescription drugs, although a partial drug benefit is being phased in. Medicare will pay to amputate the leg of a diabetic but will not pay for the chronic care that would have made the amputation unnecessary. It will pay for hospitalization for a stroke victim but will not pay for drugs that might have prevented the stroke in the first place. Medicaid, whose program specifics differ from state to state, exhibits similar inefficiencies. One-third of Medicare dollars go to patients in the last two years of life; and because Medicare is use-it-or-lose-it, the only way to get more benefits is to consume more care.

There has been some movement toward private-sector options. Roughly one in six seniors is enrolled in a private-sector HMO; under Medicaid, it is close to one in two. However, no HSA accounts are available in either program, other than a very limited pilot program for the chronically disabled.7 These two enormously expensive programs are the fastest growing at the state and federal levels. Medicare costs one thousand dollars for every person in the country, or roughly four thousand dollars for a family of four. Medicaid costs even more. As a result, many families pay more in taxes for other people’s health insurance than they pay for their own.

Problem Two: Private Sector Spending

Medical research has pushed the boundaries of what doctors can do for us in every direction. As a result, we could probably spend the entire gross domestic product on health care in useful ways:8

• The Cooper Clinic in Dallas now offers a comprehensive checkup (with a full body scan) for about $2,500. If everyone in America took advantage of this opportunity, the U.S. annual health care bill would increase by one-half.

• More than nine hundred diagnostic tests can be done on blood alone, and one does not need too much imagination to justify, say, $5,000 worth of tests each year. But if everyone did that, U.S. health care spending would double.

• Americans purchase nonprescription drugs almost twelve billion times a year, and almost all of these purchases are for self-medication. If everyone sought a physician’s advice before making such purchases, we would need twenty-five times the current number of primary care physicians.

• Some 1,100 tests can be done on our genes to determine whether we have a predisposition toward one disease or another. At, say, $1,000 a test, it would cost more than $1 million for a patient to run the full gamut. But if every American did so, the total cost would be about thirty times the nation’s total output of goods and services.

Note that these are all examples of information collection; carrying them out would not cure a single disease or treat an actual illness. If, in the process of performing all these tests, something that warranted treatment were found, spending would be higher still. (These magnitudes are checked in the sketch below.)

The spread of HSAs will encourage people to make choices between health care and other uses of money, but HSAs are designed mainly for small-dollar expenses. A possible solution for high-dollar expenses is to adopt the casualty model of insurance familiar to homeowners and automobile owners. Insurance pays for the repair of a hail-damaged roof, but the homeowners are usually free to upgrade (or downgrade), and roof repairers function as the homeowners’ agents rather than as agents of the insurers.9
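The bulleted claims are easy to verify with round numbers. A minimal sketch; the population, total health spending, and GDP figures below are my own approximations for the period in which the article was written, not numbers taken from it:

```python
# Back-of-envelope check of the Problem Two examples (assumed round figures).
POPULATION = 290e6        # approximate U.S. population, circa 2004
HEALTH_SPENDING = 1.6e12  # approximate annual U.S. health spending, dollars
GDP = 11e12               # approximate U.S. GDP, dollars

checkups = 2_500 * POPULATION
print(f"Universal $2,500 checkups: {checkups / HEALTH_SPENDING:.0%} of health spending")

blood_work = 5_000 * POPULATION
print(f"Universal $5,000 blood work: {blood_work / HEALTH_SPENDING:.0%} of health spending")

gene_tests = 1_100 * 1_000 * POPULATION
print(f"Universal gene testing: {gene_tests / GDP:.0f} times GDP")
```

Run with these inputs, the script reproduces the article’s magnitudes: roughly half of health spending for the checkups, roughly a doubling for the blood work, and roughly thirty times GDP for the gene tests.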
Problem Three: Lack of Health Insurance

About forty-five million Americans do not have health insurance, and that number, though not the percentage of the population, has been rising.10 Approximately 75 percent of episodes without health care coverage are over within one year, and about 91 percent are over within two years. Less than 3 percent (2.5 percent) last longer than three years.11 At least four government policies have contributed to this problem and made it much worse than it needs to be.

The first is the tax law. Most people with private health insurance receive it as an untaxed fringe benefit. Middle-income employees effectively avoid a 25 percent income tax, a 15.3 percent payroll tax for Social Security and Medicare (half of which is paid by employers), and perhaps another 5 or 6 percent in state and local income taxes—a combined rate of roughly 46 percent. Thus, almost half of every dollar spent on health insurance through employers is a cost to government. In contrast, most of the uninsured do not have access to tax-subsidized insurance. To become insured, they must first pay taxes and then purchase the insurance with what is left over.12

A second source of the problem is the extensive system of free care for uninsured people who cannot pay their medical bills. Several studies estimate that we are spending about one thousand dollars per uninsured person per year in unreimbursed medical care, a practice that clearly rewards people who are uninsured by choice.13 A sensible solution would be to use the free-care money to subsidize (say, through a tax credit) private health insurance premiums for the uninsured. However, the local governments that maintain the health care safety net do not have that option.

A third source of the problem is state government regulation, including laws that mandate what must be covered under health insurance plans. Under these laws, insurers are required to cover services ranging from acupuncture to in vitro fertilization, and providers ranging from chiropractors to naturopaths. Coverage for heart transplants is mandated in Georgia, and for liver transplants in Illinois. Mandates cover marriage counseling in California, pastoral counseling in Vermont, and sperm bank deposits in Massachusetts. Studies estimate that as many as one in four uninsured people have been priced out of the market by such regulations.14

A fourth problem (discussed below) is that legislation has made it increasingly easy for people to obtain insurance after they get sick.

Problem Four: Lack of Portability

One disadvantage of employer-based insurance is that employees must switch health plans whenever they switch employers. In the old fee-for-service days, this defect imposed less of a hardship because employees were generally free to see any doctor under any plan. Today, however, changing jobs often means changing doctors as well. For an employee or family member with a health problem, that means no continuity of care. Individually owned insurance that travels with employees as they move from job to job would allow employees to establish long-term relationships both with insurers and with doctors. Yet portable health insurance is largely impossible under federal tax and employee benefit laws. The reason: in order to get tax-subsidized insurance, most people must obtain it through an employer, but employers are not allowed to buy individually owned insurance for their employees with pretax dollars.

Problem Five: Lack of Actuarially Priced Insurance

An increasingly common feature of insurance markets is “guaranteed issue” regulation, which forces insurers to sell to all applicants, no matter how sick or how well they are. Perversely, this practice, when combined with community rating, encourages healthy people to avoid high premiums and stay uninsured. After all, why buy health insurance today if you know you can buy it for the same price after you get sick? Under “pure” community rating, insurers charge the same price to every policyholder, regardless of age, sex, or any other indicator of health risk.
Despite the fact that health costs for a sixty-year-old male are typically three to four times as high as those for a twenty-five-year-old male, both pay the same premium. “Modified” community rating allows for price differences based on age and sex, but not on health status. Ironically, many large corporations community-rate insurance premiums for their own employees, even though they are not required to do so by law. To the extent that employees pay part of the premiums for these plans, the premiums tend to be the same for everyone, regardless of expected costs. Whether in the marketplace or inside a corporation, distortions in prices produce distortions in results. People who are overcharged tend to underinsure; people who are undercharged tend to overinsure. In general, people cannot make rational choices about risk if risks are not accurately priced.
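To see the mispricing at work, here is a minimal sketch of an actuarially fair premium versus a community rate for a hypothetical two-group pool (the costs and head counts are assumptions, not figures from the article):

```python
# Actuarially fair premiums equal each group's expected annual cost;
# a community rate charges everyone the pool-wide average instead.
pool = {
    "twenty-five-year-old men": (1_000, 700),  # (expected cost, head count)
    "sixty-year-old men": (3_500, 300),        # assumed figures
}

total_cost = sum(cost * n for cost, n in pool.values())
total_heads = sum(n for _, n in pool.values())
community_rate = total_cost / total_heads

print(f"Community rate: ${community_rate:,.0f}")
for group, (fair, _) in pool.items():
    gap = community_rate - fair
    verdict = "overcharged" if gap > 0 else "undercharged"
    print(f"{group}: fair premium ${fair:,}, {verdict} by ${abs(gap):,.0f}")
```

In this pool the young are overcharged by $750 a year and the old undercharged by $1,750, which is precisely the incentive for healthy people to drop coverage that the passage describes.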
About the Author

John C. Goodman is the president of the National Center for Policy Analysis, a Dallas-based think tank. In 1988 he won the Duncan Black Award for the best article in public choice economics.

Further Reading

Goodman, John C. Regulation of Medical Care: Is the Price Too High? San Francisco: Cato Institute, 1980.
Goodman, John C., and Gerald L. Musgrave. Patient Power: Solving America’s Health Care Crisis. Washington, D.C.: Cato Institute, 1992.
Goodman, John C., Gerald L. Musgrave, and Devon M. Herrick. Lives at Risk: Single-Payer National Health Insurance Around the World. Lanham, Md.: Rowman and Littlefield, 2004.
Herzlinger, Regina E., ed. Consumer-Driven Health Care: Implications for Providers, Payers, and Policymakers. San Francisco: John Wiley and Sons, 2004.
Herzlinger, Regina E. Market-Driven Health Care: Who Wins, Who Loses in the Transformation of America’s Largest Service Industry. Cambridge, Mass.: Perseus Books, 1999.
Pauly, Mark V., and John S. Hoff. Responsible Tax Credits for Health Insurance. Washington, D.C.: AEI Press, 2002.

Footnotes

1. John C. Goodman and Gerald L. Musgrave, Patient Power: Solving America’s Health Care Crisis (Washington, D.C.: Cato Institute, 1992); and John C. Goodman, Regulation of Medical Care: Is the Price Too High? (San Francisco: Cato Institute, 1980).
2. Devon Herrick, “Why Are Health Costs Rising?” Brief Analysis no. 437, National Center for Policy Analysis, May 7, 2003.
3. “The InterStudy Competitive Edge 13.1, Part II: HMO Industry Report,” InterStudy Publications, 2002.
4. NCPA Staff, “A Brief History of Health Savings Accounts,” Brief Analysis no. 481, National Center for Policy Analysis, August 13, 2004.
5. Devon Herrick, “Health Reimbursement Arrangements: Making a Good Deal Better,” Brief Analysis no. 438, National Center for Policy Analysis, May 8, 2003.
6. John C. Goodman, “Health Savings Accounts Will Revolutionize American Health Care,” Brief Analysis no. 464, National Center for Policy Analysis, January 15, 2004.
7. Arkansas and Florida tested a program, often referred to as “cash and counseling,” whereby selected Medicaid home care patients were allowed to control a portion of the funds used to pay their home care providers. Providers were more attentive to the needs of patients who controlled the funds, and the patients found the program beneficial; even those who subsequently dropped out had positive things to say about it. For instance, see Leslie Foster et al., “Improving the Quality of Medicaid Personal Assistance Through Consumer Direction,” Health Affairs, Web Exclusive W3-162, March 26, 2003.
8. John C. Goodman, Gerald L. Musgrave, and Devon M. Herrick, Lives at Risk: Single-Payer National Health Insurance Around the World (Lanham, Md.: Rowman and Littlefield, 2004).
9. John C. Goodman, “Designing Health Insurance for the Information Age,” in Regina E. Herzlinger, ed., Consumer-Driven Health Care: Implications for Providers, Payers, and Policymakers (San Francisco: John Wiley and Sons, 2004).
10. Carmen DeNavas-Walt, Bernadette D. Proctor, and Robert J. Mills, “Income, Poverty, and Health Insurance Coverage in the United States: 2003,” Current Population Reports P60-226, U.S. Census Bureau, August 2004.
11. Robert J. Mills and Shailesh Bhandari, “Health Insurance Coverage in the United States: 2002,” Current Population Reports P60-223, U.S. Census Bureau, September 2003, figure 7.
12. Mark V. Pauly, Health Benefits at Work: An Economic and Political Analysis of Employment-Based Health Insurance (Ann Arbor: University of Michigan Press, 1997).
13. Jack Hadley and John Holahan, “How Much Medical Care Do the Uninsured Use, and Who Pays for It?” Health Affairs, Web Exclusive, February 12, 2003. Also see Texas State Comptroller’s Office, “Texas Estimated Health Care Spending on the Uninsured,” Texas Comptroller of Public Accounts, State of Texas, 1999, online at: www.window.state.tx.us/uninsure.
14. John C. Goodman and Gerald L. Musgrave, “Freedom of Choice in Health Insurance,” NCPA Policy Report no. 134, National Center for Policy Analysis, November 1988; and Gail A. Jensen and Michael A. Morrisey, “Mandated Benefit Laws and Employer-Sponsored Health Insurance,” Health Insurance Association of America, January 1999.


Government Growth

A modern government is not a single, simple thing. It consists of many institutions, agencies, and activities and includes many separate actors—legislators, administrators, judges, and various ordinary employees. These actors act somewhat independently and even, at times, at cross-purposes. Because government is complex, no single measure suffices to capture its true “size.” Each of the commonly employed measures has serious shortcomings and can sometimes be misleading. Nevertheless, the various measures reveal at least something about the size of government.

The most common measure used by economists is government expenditure as a percentage of gross domestic product (GDP). Sometimes net national product or national income is used instead; either makes a more defensible denominator. Table 1 sketches the long-run growth of government in six countries in terms of this measure.

As the table shows, government expenditures have grown enormously during the past century. As late as 1913, for example, even in a group of seventeen economically advanced countries, government expenditures averaged only about 13 percent of GDP. At most (in Austria, France, and Italy), they came to just 17 percent, and for the United States they were less than 8 percent. Taxation and government employment were at similarly low levels. In contrast, by 1996, government expenditures in the same seventeen countries had reached nearly 46 percent of GDP. Sweden’s were the highest, at more than 64 percent, and U.S. expenditures reached more than 32 percent.1 Taxation, government employment, and other aspects of government had expanded similarly. Moreover, governments have vastly increased the scope and societal penetration of their regulation in ways that spending measures do not reflect. In the United States, for example, private individuals and firms spend hundreds of billions of dollars each year to comply with government regulations aimed at reducing air and water pollution, lowering health risks, and eliminating workplace discrimination against women and members of various ethnic and other protected groups.

Table 1. Government Expenditure as a Percentage of GDP

Year         France  Germany  Sweden  Japan  United Kingdom  United States
Circa 1870     12.6     10.0       —    8.8             9.4            7.3
1913           17.0     14.8    10.4    8.3            12.7            7.5
1920           27.6     25.0    10.9   14.8            26.2           12.1
1937           29.0     34.1    16.5   25.4            30.0           19.7
1960           34.6     32.4    31.0   17.5            32.2           27.0
1980           46.1     47.9    60.1   32.0            43.0           31.4
1990           49.8     45.1    59.1   31.3            39.9           32.8
1996           55.0     49.1    64.1   35.9            43.0           32.4

Source: Vito Tanzi and Ludger Schuknecht, Public Spending in the 20th Century: A Global Perspective (New York: Cambridge University Press, 2000), p. 6.
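The table’s own figures convey how large the long-run change has been. A minimal sketch that computes, from the numbers transcribed above, the multiple by which each country’s expenditure share rose between 1913 and 1996:

```python
# Growth in government expenditure as a share of GDP, 1913 vs. 1996,
# using the figures from Table 1 above (percent of GDP).
shares = {
    "France": (17.0, 55.0),
    "Germany": (14.8, 49.1),
    "Sweden": (10.4, 64.1),
    "Japan": (8.3, 35.9),
    "United Kingdom": (12.7, 43.0),
    "United States": (7.5, 32.4),
}
for country, (y1913, y1996) in shares.items():
    print(f"{country}: {y1913}% -> {y1996}% of GDP ({y1996 / y1913:.1f}x)")
```

Every country in the table at least tripled its share over the period; Sweden’s rose more than sixfold.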
Though imperfect, the measures illustrated in the table reflect the size of government. However, they have only a rough association with its scope—that is, the number of separate matters the government tries to influence or control. Over the very long run, governments have increased in both size and scope, but in any particular short period the two measures may diverge widely. Likewise, we need to consider, apart from either size or scope, the government’s power—its authority and capacity to bring coercive force to bear effectively. Again, over the very long run, most governments have increased in size, scope, and power, but the three dimensions have grown at different rates in particular short intervals. To some extent, governments may substitute growth in one dimension for growth in another; they may augment, say, their scope or power rather than their size. Eventually, however, an increase in one dimension tends to lead to increases in the others. Passage of the Social Security Act in 1935, for example, increased the power and scope of the U.S. government, but not until two decades later did the operation of the Social Security system begin to have a major effect on the magnitude of federal spending.

Crises and the Growth of Government

Superimposed on the century-long trend toward bigger government—measured by size, scope, and power—are several episodes of extraordinarily rapid growth associated with crises, especially the two world wars and the Great Depression. Although much of the wartime expansion of government was reversed when the wars ended, not all of it was, and thus each episode had a “ratchet effect,” lifting the size of government to a permanently higher level. Wartime expansions of government power tended to become lodged in the statute books, administrative decisions, and judicial rulings, and these legacies fostered the growth of government even during peacetime. New York City’s rent controls, for example, date from World War II (see rent control).

Moreover, crisis-driven changes in the prevailing ideology supported greater long-term growth of government. Many who had opposed “big government” in the United States as late as 1930, for example, became convinced by the fifteen years of activist government during the Great Depression and World War II that government should play a much larger role in economic affairs. One upshot was the Employment Act of 1946, by which the federal government pledged itself to continuing management of the national economy.

Crises promoted the growth of government in other countries as well. During both world wars, all the belligerents adopted extraordinary measures of economic control to mobilize resources and place them at the government’s disposal for war purposes. These measures included price, wage, and rent controls; inflationary increases in the money stock; physical allocations of raw materials and commodities; conscription of labor; industrial takeovers; rationing of consumer goods and transportation services; financial and exchange controls; vast increases in government spending and employment; and increased tax rates and the imposition of new kinds of taxation. On each occasion, the war left institutional and ideological legacies that promoted the subsequent resort to similar measures even during peacetime. As Bruce Porter wrote, “The mass state, the regulatory state, the welfare state—in short, the collectivist state that reigns in Europe today—is an offspring of the total warfare of the industrial age.”2

The Great Depression elicited similar responses, especially in the United States under Franklin D. Roosevelt’s New Deal. Many current welfare-state and regulatory institutions—the Social Security system, the Securities and Exchange Commission, the National Labor Relations Act, to name but a few—originated with the New Deal. Later crises, such as the social and political turbulence associated with the civil rights revolution and the Vietnam War between the early 1960s and the early 1970s, likewise contributed significantly to the expansion of the welfare state, spawning, for example, Medicare, Medicaid, and a host of welfare, antidiscrimination, and environmental regulatory programs that remain in force today.

Trends and crises interact. Because trends bring about the particular preconditions on which each crisis bursts, they affect how each crisis unfolds.
And because each crisis leaves the long-run condition of the economic and political systems altered, it affects later trends. Although many economists have dismissed crisis events as “aberrations” or statistical “outliers” in the growth of government, this practice is a serious mistake: the long run, after all, is nothing more than a series of short runs.

Structural Changes Promoting the Growth of Government

In the nineteenth century, a number of interrelated “modernizing” changes began to accelerate: industrialization, urbanization, the relative decline of agricultural output and employment, and a variety of significant improvements in transportation and communication. As these developments proceeded, masses of people, though made better off in the long run, experienced tremendous changes in their way of life. In response, they sought government assistance in order to gain from, or at least to minimize their losses from, the social and economic transformations that swept them along.

The ongoing structural changes altered the perceived costs and benefits of collective action for all sorts of latent special-interest groups. Thus, for example, the gathering of large workforces in urban factories, mills, and commercial facilities created greater potential for the successful organization of labor unions and working-class political parties. New means of transportation and communication—the railroad and the telegraph, later the telephone and the automobile—reduced the costs of organizing agrarian protest movements and agrarian populist political parties. Urbanization created new demands for government provision of infrastructure such as paved streets, lighting, sewerage, and pure water supply. All such events tended to alter the configuration of political power, encouraging, enlarging, and strengthening various special-interest groups.

The structural transformations, in addition to increasing the demand for government, also increased the supply. When, for example, more people received their income in pecuniary payments traceable in business accounts, as opposed to unrecorded farm income in kind, governments found it easier to collect income taxes.

The modern welfare state is often seen as originating in Imperial Germany in the 1880s, when Otto von Bismarck established compulsory accident, sickness, and old-age insurance to divert workers from revolutionary socialism and to purchase their loyalty to the Kaiser’s regime. The lesson was not lost on governments elsewhere, and by 1914 most other Western European countries had enacted similar programs. The U.S. government caught up in 1935, after such policies had been adopted at state and local levels earlier in the twentieth century.

From the mid-nineteenth century onward, collectivist ideologies of various stripes, especially certain forms of socialism, gained greater intellectual and popular followings. Traditional conservatism and classical liberalism increasingly fell out of favor and, with a lag, suffered losses in their political influence. By the early twentieth century, the intellectual cutting edge in all the economically advanced countries had become more or less socialistic (in the United States, in greater part, “Progressive”). The masses, too, had become more supportive of various socialist or Progressive schemes, from regulation of railway rates to municipal operation of utilities to outright takeovers of industry on a national scale.
Not until the 1970s did this collectivist ideological tide begin to turn, and even now collectivism remains the reigning mode of thought for most intellectuals and political leaders. In the United States, politicians who call themselves conservative today would have been regarded as socialists a century ago. Indeed, quadrennial Socialist Party candidate Norman Thomas announced in 1956 that he would no longer run for president because even the Republican Party had adopted all of his socialist proposals. And the scope of government has grown enormously since 1956.

Political developments mirrored the changes in the economy and in the dominant ideology. Throughout the nineteenth and twentieth centuries, democracy tended to gain ground. The franchise was widened, and more popular parties, including frankly socialist parties and labor parties closely allied with the unions, gained greater representation in legislatures at all levels of government—more so, however, in Europe than in the United States. Everywhere, the trend toward universal manhood suffrage and, eventually, women’s suffrage became seemingly irresistible. We might note that even Adolf Hitler came to power via the ballot box.

Modernizing economic transformation, collectivist ideological drift, and democratic political reconfiguration tended to bring about a changing balance of forces that favored, not always but as a rule, increases in the size, scope, and power of government.

Where We Stand

For more than half a century, the political economy of the economically advanced countries has been rife with interest groups seeking policies that make government bigger. The old fundamental checks on such growth—vestigial allegiance to classical liberal ideology and, in the United States, a Constitution long understood as placing limits on the government’s role in economic life—have more or less dissolved as significant obstacles. Intellectual developments during the past thirty years, however, have revived the classical liberals’ hope that, ultimately, they may be able to stem the ongoing growth of government that now seems to be an inherent aspect of the workings of the modern political economy.

About the Author

Robert Higgs is senior fellow in political economy at the Independent Institute and editor of the Independent Review.

Further Reading

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.
Higgs, Robert. “Eighteen Problematic Propositions in the Analysis of the Growth of Government.” Review of Austrian Economics 5 (1991): 3-40.
Higgs, Robert. “The Ongoing Growth of Government in the Economically Advanced Countries.” Advances in Austrian Economics 8 (2005): 279-300.
Holcombe, Randall G. From Liberty to Democracy: The Transformation of American Government. Ann Arbor: University of Michigan Press, 2002.
Jouvenel, Bertrand de. On Power: The Natural History of Its Growth. Indianapolis: Liberty Fund, 1993. Originally published in French in 1945.
Mueller, Dennis C. “The Size of Government.” In Dennis C. Mueller, Public Choice II. New York: Cambridge University Press, 1989. Pp. 320-347.
Porter, Bruce. War and the Rise of the State: The Military Foundations of Modern Politics. New York: Free Press, 1994.
Tanzi, Vito, and Ludger Schuknecht. Public Spending in the 20th Century: A Global Perspective. New York: Cambridge University Press, 2000.
Twight, Charlotte. Dependent on D.C.: The Rise of Federal Control over the Lives of Ordinary Americans. New York: Palgrave, 2002.
Footnotes

1. Vito Tanzi and Ludger Schuknecht, Public Spending in the 20th Century: A Global Perspective (New York: Cambridge University Press, 2000), pp. 6-7.
2. Bruce Porter, War and the Rise of the State: The Military Foundations of Modern Politics (New York: Free Press, 1994), p. 192.


Immigration

Immigration is once again a major component of demographic change in the United States. Since 1940, the number of legal immigrants has increased at a rate of one million per decade. By 2002, approximately one million legal
