
What’s wrong with macro?

In my previous post, I looked at the development of modern macroeconomics. Several commenters responded by discussing what they thought was wrong with macro. Here I’ll put in my own two cents, and then explain how my views relate to those of my commenters.

In my view, the biggest problem with modern macro is the treatment of monetary policy. Simply put, modern macroeconomists don’t know what monetary policy is. Of course I’m slightly exaggerating. Economists have a general idea as to what the term “monetary policy” means, but the concept remains frustratingly vague. Exactly what is monetary policy? I feel so strongly about this issue that I recently published an entire book on the subject.

Here are a couple of excellent comments from the previous post:

Thomas Hutcheson: For me the biggest failing of macro modeling is leaving out (leaving implicit) the Fed’s reaction function. If the modeler guesses right, then the model can be useful in examining other variables, fiscal “policy” for example or shocks to the price of petroleum.

Michael Sandifer: The most disappointing thing about the development of macroeconomics to me, as a non-economist, is the lack of progress at understanding the intersection of macroeconomics and finance, particularly with regard to liquid asset markets.

What is the fiscal multiplier? Without knowing the Fed’s reaction function, that’s not even a coherent question. Suppose the Fed were an NGDP targeter. If fiscal policy changed, then the Fed would adjust its interest-rate target so that NGDP growth would be expected to remain on target. In that case, in what sense is there any “fiscal multiplier”? Even when interest rates were stuck at zero (as in late 2012), the Fed adjusted other tools like forward guidance and QE in order to offset the impact of fiscal austerity. If you argue that the fiscal multiplier is the effect of fiscal policy on GDP holding monetary policy constant, then you must define what you mean by “monetary policy”.
The money supply? The nominal interest rate? The real interest rate? The gap between the policy rate and the natural interest rate? There are lots of possibilities.

In my view, there is only one useful definition of the stance of monetary policy—the expected future level of the nominal aggregate being targeted. If NGDP is the target (which is my preference), then the future expected level of NGDP measures the current stance of monetary policy. Sandifer is correct that macroeconomists pay too little attention to financial asset prices. Any efficient estimate of future expected NGDP would involve at least some financial asset prices. In a perfect world, we’d have a highly liquid NGDP futures market. The price of NGDP futures would represent the current stance of monetary policy—and would move around in real time. Even without this market, we can look at a wide range of other asset prices (including things like TIPS spreads) and infer a rough estimate of the market forecast for NGDP growth.

Macroeconomics will never become a mature field until we get serious about defining the stance of monetary policy. Step one is to abandon interest rates as an indicator of policy and move on to more promising alternatives—especially market forecasts of future NGDP.
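To make the back-of-the-envelope inference concrete, here is a minimal Python sketch. The yields and the real-growth estimate are made-up illustrative numbers, not real market data, and the identity used (expected NGDP growth as breakeven inflation plus expected real growth) is only a rough approximation of the kind of inference described above:

```python
# Rough sketch: inferring a market NGDP growth forecast from asset prices.
# All inputs below are hypothetical illustrative numbers.

def tips_spread(nominal_yield: float, tips_yield: float) -> float:
    """Breakeven inflation: gap between nominal and inflation-indexed yields."""
    return nominal_yield - tips_yield

def expected_ngdp_growth(breakeven_inflation: float,
                         expected_real_growth: float) -> float:
    """Approximate expected NGDP growth as expected inflation
    plus an estimate of expected real GDP growth."""
    return breakeven_inflation + expected_real_growth

# Hypothetical 5-year yields, in percent:
nominal = 4.5   # nominal Treasury yield
real = 2.2      # TIPS yield
breakeven = tips_spread(nominal, real)
forecast = expected_ngdp_growth(breakeven, 1.8)  # assumed real growth estimate

print(f"Breakeven inflation: {breakeven:.1f}%")
print(f"Rough expected NGDP growth: {forecast:.1f}%")
```

A liquid NGDP futures market would make this inference unnecessary: the futures price itself would be the forecast.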


An Economics Lesson from the Console Wars

Recently, the head of Microsoft’s Xbox gaming division, Phil Spencer, gained some attention when he acknowledged that Xbox has lost the so-called “console wars” against Sony and Nintendo. He went on to suggest that the current difficulties facing the Xbox business stem from the fact that they “lost the worst generation to lose in the Xbox One generation, where everybody built their digital library of games.” Even if you, dear reader, are not as interested in video games as I am (or at least as I was, when I actually had time for them), there’s an important lesson about economics and markets to be taken from this.

First of all, why did Xbox lose the console wars, and particularly, why did they fail so badly in the Xbox One generation? One gaming journalist points out that while “the [previous generation system] Xbox 360 had serious momentum behind it, the Xbox One stumbled out of the gate with its plans for absolutely bizarre, anti-consumer online requirements and used game restrictions.” The Xbox One would require users to maintain an internet connection in order to play games, even games that didn’t have any online features. You couldn’t simply loan a game to a friend by giving them the game disc – the system wouldn’t work with that. Buying used games had similar restrictions.

Additionally, the Xbox One came bundled with a voice and movement tracking system called the Kinect, which in theory would allow extra features to be built into games but in practice was poorly supported by game developers and not of particular interest to most gamers. And since the Kinect had to be purchased with the system, the Xbox One was $100 more expensive than Sony’s competing PlayStation 4. Most fundamentally, in the big event when Microsoft revealed the system, they pitched the Xbox One as a digital hub for all home entertainment needs, with its actual status as a game console treated as just a secondary feature.
But gamers didn’t want a game system that treated gaming as a secondary feature, and people who were primarily interested in a home media setup weren’t looking for a gaming console. In short, Microsoft tried to convince consumers to want the product that Microsoft made. Sony, by contrast, focused on making the product that consumers wanted. And as a result, the PlayStation 4 dominated the Xbox One, both in sales and in the esteem of the gaming community. While I suspect Phil Spencer is right that people building up libraries of digital games had a long-lasting impact, another factor he doesn’t mention is that by trying to get gamers to buy the system Microsoft wanted to make, instead of making the system consumers would want to buy, Microsoft dealt the Xbox brand a major credibility hit among the gaming community – and gamers can hold grudges for a long time.

So what’s the important economic lesson I see in this? It’s an example that shows how, in a market, the consumer is king, and the wealth of corporations gives them little meaningful power. If gamers didn’t want to buy an Xbox, there wasn’t a single thing Microsoft could do to make them. And Microsoft is a much bigger company than either Sony or rival gaming company Nintendo. According to some quick Googling, at present, Nintendo is worth about $50 billion while Sony is worth about $100 billion. Microsoft, by contrast, is worth about $2.3 trillion. That’s over twenty times what Sony is worth, and more than forty times the value of Nintendo. But both Sony and Nintendo have significantly outperformed Microsoft in the gaming industry. Despite all their wealth and resources, Microsoft can’t simply buy success for their gaming system on the market. All of their considerable resources do them no good when customers are free to spend their money elsewhere, and when even much smaller competitors can entice those customers away by giving the consumer something they want more.


Whatever We Are Doing is Wrong

You didn’t like the blue one?

Eric Boehm, in “Taylor Swift, Junk Fees, and the ‘Happy Meal’ Fallacy,” Reason, October 2023, does a nice job of explaining the case for, in some instances, charging separately for components of a purchase rather than bundling them. In his State of the Union address, President Biden discussed the pressing issue of whether airlines should charge an air fare plus extra charges for various special features, or should charge a price for a bundle that doesn’t allow people to choose the individual components. Biden, in his wisdom, proposed the latter and, strangely, claimed that it would save people money. Boehm writes:

Consider the budget airlines that currently offer low fares but charge additional fees for picking seats, bringing bags (sometimes even carry-on luggage), and getting in-flight snacks. If those airlines have to bundle all those costs together for every flyer, passengers who want to travel light, are willing to sit anywhere, and can go 90 minutes without a snack will have to pay more so that other travelers can avoid paying for those things à la carte. The second group won’t pay less; they’ll just pay it all upfront. Meanwhile, a low-cost option will disappear for the first group. Instead of being able to evaluate tradeoffs—should I save money even if that means I don’t get to sit with my traveling companions?—consumers would face a market with fewer choices.

Well said. Reading it made me recall an earlier issue on which a life arranger from a different political party, John McCain, proposed forced unbundling. In 2013, he proposed legislation to force cable companies to let people choose specific components separately rather than charging for a bundle. What do Biden and McCain have in common? They think they know better than the individuals and companies buying and selling, and they’re so confident about it that they want (or, in McCain’s case, wanted) to use force to get their way.
Whatever we’re doing, according to them, is wrong. It reminds me of the joke in How to Be a Jewish Mother by Dan Greenburg. (I lost my copy in my 2007 fire and so I’m going by memory here.) The Jewish mother brings home two nice shirts, one red and one blue, that she gives as a gift to her son. Wanting to please her, he comes down to dinner wearing the red one. The Jewish mother says: “You didn’t like the blue one?” This analogy is a little unfair to the Jewish mother: she just complained and didn’t use force.


Macroeconomics at 100

Has macroeconomics progressed over the past 100 years, or are we merely treading water? There are good arguments for both sides. Before turning to macroeconomics, I’ll begin with an analogy from the field of urban planning. Then I’ll argue that macro looks a lot better if we view it as a series of “critiques”, not a series of models.

In the middle of the 20th century, city planners favored replacing messy old urban neighborhoods with modern high rises and expressways. Here’s Le Corbusier’s plan for central Paris:

Today, those models of urban planning seem almost dystopian. What endures are the critiques of the modernist project, such as the work of Jane Jacobs. I believe that macroeconomics has followed a broadly similar path. We’ve developed lots of highly technical models that have not proved to be very useful, and a bunch of critiques that have proven quite useful.

Many people would choose 1936 as the beginning of modern macroeconomics, as this is when Keynes published his General Theory. I believe 1923 is a more appropriate date. This is partly because Keynes published his best book on macro in 1923 (the Tract on Monetary Reform), but mostly because this was the year that Irving Fisher published his model of the business cycle, which he called a “dance of the dollar.” [BTW, here’s how Brad DeLong described Keynes’s Tract on Monetary Reform: “This may well be Keynes’s best book. It is certainly the best monetarist economics book ever written.” Bob Hetzel reminded me that 1923 is also the year when the Fed’s annual report first recognized that monetary policy influences the business cycle, and they began trying to mitigate the problem. And it was the year that the German hyperinflation was ended with a currency reform.]

The following graph (from a second version of the paper in 1925) shows Fisher’s estimate of output relative to trend (T) and a distributed lag of monthly inflation rates (P). Many economists regard this as the first important Phillips Curve model.
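Fisher’s P series was a weighted average of current and past monthly inflation rates, with more recent months counting for more. As a rough illustration, here is a minimal Python sketch of such a distributed lag; the geometric weighting scheme is an assumption chosen for simplicity, not Fisher’s actual specification:

```python
# Illustrative distributed lag of monthly inflation rates, the kind of
# smoothed series Fisher's "P" represents. The geometric decay weights
# are a simplifying assumption, not Fisher's own weighting scheme.

def distributed_lag(inflation: list[float], decay: float = 0.8) -> list[float]:
    """Weighted average of current and past inflation rates, with older
    months discounted geometrically; weights are renormalized to sum to 1."""
    out = []
    for t in range(len(inflation)):
        weights = [decay ** k for k in range(t + 1)]
        total = sum(w * inflation[t - k] for k, w in enumerate(weights))
        out.append(total / sum(weights))
    return out

monthly_inflation = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]  # made-up monthly data
print(distributed_lag(monthly_inflation))
```

The smoothed series lags the raw inflation series, which is what lets a plot like Fisher’s show inflation leading output relative to trend.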
Fisher argued that causation went from nominal shocks to real output, which is quite different from the “NAIRU” approach to the Phillips Curve more often used by modern macroeconomists—which sees a strong labor market causing inflation. Today, people tend to underestimate the sophistication of pre-Keynesian macroeconomics, mostly because they used a very different theoretical framework, which makes it hard for modern economists to understand what they were doing. In fact, views on core issues have changed less than many people assume. During the 1920s, most elite macroeconomists assumed that business cycles occurred because nominal shocks impacted employment due to wage and price stickiness. Many prominent economists favored a policy of either price level stabilization (Fisher and Keynes) or nominal income stabilization (Hayek).

The subsequent Keynesian revolution led to a number of important changes in macroeconomics. In my view, four factors played a key role in the Keynesian revolution (which might also be termed the modernist revolution in macro):

1. Very high unemployment in the 1930s made the economy seem inherently unstable—in need of government stabilization policy.
2. Near zero interest rates during the 1930s made monetary policy seem ineffective.
3. Increases in the size of government made fiscal policy seem more powerful.
4. A move from the gold exchange standard to fiat money made the Phillips Curve seem to offer policy options—“tradeoffs”.

While I believe that the implications of these changes were misunderstood, they nonetheless had a profound impact on the direction of macroeconomics. There was a belief that we could construct models of the economy that would allow policymakers to tame the business cycle. Most people are familiar with the story of how Keynesian macroeconomics overreached in the 1960s, leading to high inflation. This led to a series of important policy critiques. Milton Friedman was the key dissident in the 1960s.
He argued:

1. The Phillips Curve does not provide a reliable guide to policy trade-offs.
2. Policy should follow rules, not discretion.
3. Interest rates are not a reliable policy indicator.
4. Fiscal austerity is not an effective solution to inflation.

But Friedman’s positive program for policy (money supply targeting) fared less well, and is now rejected by most macroeconomists. Bob Lucas built on the work of Friedman, and developed the “Lucas critique” of using econometric models to determine public policy. Unless the models were built up from fundamental microeconomic foundations, the predictions would not be robust when the policy regime shifted. As with Friedman, Lucas was more effective as a critic than as an architect of models with policy implications. It proved quite difficult to create plausible equilibrium models of the business cycle. New Keynesians had a bit more luck by adding wage and price stickiness to Lucasian rational expectations models, but even those models were unable to produce robust policy implications. Here the problem is not so much that we are unable to come up with plausible models; rather, we have many such models, and we have no way of knowing which model is correct. In practice, the real world probably exhibits many different types of wage and price stickiness, making the macroeconomy too complex for any single model to provide a roadmap for policymakers.

Paul Krugman’s 1998 paper (“It’s Baaack . . .”) provides another example where the critique is the most effective part of the model. Krugman argues that a central bank needs to “promise to be irresponsible” when stuck in a liquidity trap, although it’s hard to know exactly how much inflation would be appropriate. The paper is most effective in showing the limitations of traditional policy recommendations such as printing money (QE) at the zero lower bound.
Just as the work of Friedman and Lucas can be viewed as a critique of Keynesianism, Krugman’s 1998 paper is (among other things) a critique of the positive program of traditional monetarism.

That’s not to say there’s been no progress. Back in 1975, Friedman argued that over the past few hundred years all we had really done in macroeconomics is go “one derivative beyond Hume”. Thus Friedman’s famous Natural Rate model went one derivative beyond Fisher’s 1923 model. There’s no question that when compared to the economists of 1923, we now have a more sophisticated understanding of the implications of changes in the trend rate of inflation/NGDP growth. That’s not because we are smarter; rather, it reflects the fact that an extra derivative didn’t seem that important under a gold standard where long run trend inflation was roughly zero.

When I started out doing research, I bought into the claims that we were making “progress” in developing models of the macroeconomy. Over time, we might expect better and better models, capable of providing useful advice to policymakers. After the fiasco of 2008, I realized that the emperor had no clothes. Economists as a whole were not equipped with a consensus model capable of providing useful policy advice. Economists were all over the map in their policy recommendations. If we actually had been making progress, we would not have revived the tired old debates of the 1930s. Even if the quality of academic publications is higher than ever in a technical sense, the content seems less interesting than in the past. Maybe we expected too much.

More recently, high inflation has led to a revival of 1970s-era inflation models that I had assumed were long dead. You see discussion of “wage-price spirals”, of “greedflation”, or of the need for tax increases to rein in inflation. And just as in the 1920s, you have some economists advocating price level targets while others endorse NGDP targeting.
Going forward, I’d expect to see a greater role for market indicators such as financial derivatives linked to important macroeconomic variables. In other words, like most macroeconomists I see future developments as validating my current views.

So how should we think about the progress in macro over the past century? Here are a few observations:

1. Both in the 1920s and today, economists have concluded that certain types of shocks have a big impact on the business cycle. The name given to these shocks varies over time, including “demand shocks”, “nominal shocks” and “monetary shocks”, but all describe a broadly similar concept. Then and now, economists believe that sticky wages and prices help to explain why these shocks have real effects. In addition, economists have always recognized that supply shocks such as wars and droughts can impact aggregate output. So there is certainly some important continuity in macroeconomics.

2. Economists have made enormous progress in developing highly technical general equilibrium models of the business cycle. But it’s not clear what we are to do with these models. Forecasting? Economists continue to be unable to forecast the business cycle. Indeed it’s not clear that there has been any progress in our business cycle forecasting ability since the 1920s. Policy implications? Today, macroeconomists tend to favor policies such as inflation/price level targeting, or NGDP targeting. Back in the 1920s, the most distinguished macroeconomists had similar views. What’s changed is that this view is now much more widely held. Back in the 1920s, many real world policymakers were skeptical of price level or NGDP targets, instead relying on the supposedly automatic stabilizing properties of the gold standard, which had been degraded by WWI.

3. The shift from a gold exchange standard to a pure fiat money regime allows a much wider range of monetary policy choices, such as different trend rates of inflation. Fiscal policy has become more important. These changes made policymakers much more ambitious, perhaps too ambitious.

In my view, the greatest progress in macro is a series of “critiques” of policy recommendations during the second half of the 20th century. Friedman and Lucas provided an important critique of mid-20th century Keynesian ideas, and Krugman’s 1998 paper pointed to problems with monetarist assumptions about the effectiveness of QE at the zero lower bound. A series of critiques sounds less impressive than a successful positive program to tame the business cycle. But it is a mistake to discount the importance of these ideas. They have helped to steer policy in a better direction, even as many problems remain unsolved.


Zombie Welfare Functions

In a previous post, I challenged James Broughel’s recent suggestion that libertarians should re-evaluate their allegiance to the legacy of James Buchanan. There I focused on Broughel’s claims regarding Buchanan’s radical subjectivism. In this piece, I turn to the implications for welfare economics.

In his piece, Broughel wants to raise the zombie idea of social welfare functions, both in his critique of Buchanan and in an earlier Econlib piece. To see where these arguments go awry, it is helpful to review why mid-20th century economists were trying to construct plausible social welfare functions. Consider Figure 1. Society faces tradeoffs in the production of various goods, such as guns and butter. (Buchanan would already hate this way of formulating the economic problem.) The concave curve is the set of possible efficient allocations of scarce productive resources. As long as society is somewhere on that curve, one cannot have more guns without giving up more butter or vice versa.[1] Suppose one buys, based on some modeling assumptions, that markets might get us in the neighborhood of the frontier. Some well-crafted policy interventions might be able to get us the rest of the way. That still does not tell us where on the frontier we would like to go. The concept of economic efficiency cannot tell us whether it is better to be at point A, B, or C.

Economists sought plausible social welfare functions to figure out where on the frontier to go. The point of a social welfare function is to rank possible states of the social world, even producing rankings among efficient states. This would allow economic science to say something about matters of distribution as well as efficiency without invoking interpersonal comparisons of utility. If markets could get us to efficiency and democracy to distributive justice, we would have a powerful defense of the liberal order. Enter Kenneth Arrow.
Arrow posits that a social welfare function should exhibit the same sort of rationality that economists typically posit of individual choosers. One feature of such rationality is transitivity: if A is preferred to B, and B to C, then A should be preferred to C. One way to get such rationality at the level of social orderings is to appoint a dictator. As long as the dictator is rational, the ranking of social states will be rational. For Arrow, this is an unattractive answer to the question that social welfare functions were meant to solve. Arrow sought an aggregation procedure that would start from individual preferences and generate a set of rational social preferences.

One constraint Arrow places on such a welfare function is the “independence of irrelevant alternatives” (IIA), which is meant to secure the transitivity condition noted above. Majority rule cannot ensure this condition, because of what has long been called the Condorcet Paradox or Condorcet Cycle. If voters with the preference orderings in Figure 2 confront pairwise choices, A defeats B and B defeats C, but C defeats A. There is no determinate will of the majority. If this is true for the trivial case of three voters and three options, it becomes even more likely as we increase the number of voters and the issues they might care about. What is now called Arrow’s impossibility theorem shows that there is no collective decision procedure that satisfies the IIA alongside the other assumptions.

Back to Broughel. He argues that IIA is a bad assumption:

Suppose you’re torn between going out to party tonight and staying home to study for an exam. At face value, partying appears to be the more enjoyable option. However, a third option, like “getting into a good college,” may depend on your decision. Clearly, this third choice—the future consequence—ranks higher among your preferred alternatives than the immediate option to party. This is despite the fact that partying, when taken independently of future consequences, seems to be the preferred choice over studying.

Broughel has attempted to engineer a situation in which a third “option” is in fact relevant. But getting into college is not a third option. It is a further consequence of the second option that would naturally inform the decision as to whether to party or not. There is of course no possibility of a paradox with only two options and one rational decision maker. The next paragraph is even more problematic:

Herein lies the problem with Arrow’s impossibility theorem: it overlooks long-term consequences and focuses too much on immediate gratification. This is troubling because much of today’s welfare analysis, including cost-benefit analysis, is based, at least indirectly, on Arrow’s theorem, leading economic theory broadly and public policy specifically to take a short-sighted approach.

Arrow’s theorem does not entail that only immediate consequences should count in preference orderings. In fact, Arrow says the exact opposite. One of the other assumptions he makes is “universal domain”: all logically possible individual preference orderings are allowed.[2] This would include preferences for benefits that only manifest in the long term. And Arrow’s theorem is in no way the foundation of modern cost-benefit analysis. Cost-benefit analysis concerns efficiency. Specifically, empirical cost-benefit analysis relies on the concept of Kaldor-Hicks efficiency since monetary outlays are measurable. Social welfare functions are mathematical constructs meant to rank Pareto efficient outcomes. Arrow spends an entire chapter of Social Choice and Individual Values arguing against the Kaldor-Hicks approach as a satisfactory social welfare function, so it is supremely odd to claim that his work is the basis of cost-benefit analysis.

Broughel’s next argumentative move is no better. He makes similar claims in his articles on Arrow and Buchanan.
Ironically, rejecting the use of any social welfare function at all, as some libertarians do, also implies rejecting the market process—which itself is guided by a social welfare function of sorts. Arrow himself acknowledges as much in his book that presents the impossibility theorem, Social Choice and Individual Values, when he concludes that “the market mechanism does not create a rational social choice.” Strangely, most libertarians have failed to heed the lesson. Buchanan’s dismissal of the reasonableness of the social welfare function concept altogether likely contributed to many libertarians accepting Arrow’s theorem in a knee-jerk fashion. Yet, the market process itself operates under the guidance of a particular social welfare function (as Arrow understood, despite Buchanan arguing the opposite). Thus, libertarians who accept Buchanan and Arrow’s ideas inadvertently reject the process underlying the market, which forms the foundation of modern civilization.

Note the utterly strange claim that the market process is guided by a social welfare function. This is somewhat like saying that individual markets are guided by supply and demand diagrams. Not so. Supply and demand diagrams are a model of how markets work. But Broughel’s claim is even stranger than this, because a social welfare function is a normative rather than positive construct. Its purpose is not predictive but evaluative. This comes across like a bizarre version of Hegelianism[3]: the social welfare function is realizing itself through the market process. That is some funky metaphysics.

Perhaps Broughel means something different, though. Perhaps he means that markets must be judged by whether they produce rational social choices in accord with a social welfare function. This is the only way I can make sense of the claim that Arrow understood the importance of social welfare functions to the defense of markets. To which I respond: why?
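Before turning to an alternative, note that the Condorcet cycle discussed above is easy to verify directly. Here is a short sketch using the classic three-voter profile (the standard textbook example, which the orderings in Figure 2 follow):

```python
# The Condorcet cycle: three voters, cyclical preference profile,
# pairwise majority rule yields an intransitive "social preference".

# Each ballot ranks options from most to least preferred.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a majority of ballots rank x above y."""
    votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return votes > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} beats {y}: {majority_prefers(x, y)}")
# A beats B, and B beats C, yet C beats A: no option survives every
# pairwise contest, so there is no determinate will of the majority.
```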
Let me propose an alternative: social welfare functions were never that important to begin with. There is no reason to believe that the emergent properties of an economic or political system will conform to some set of rationalistic criteria derived from a model of individual decision making. Neither democracy nor markets aggregate preferences into a coherent ordering of social states. So what? This is, of course, Buchanan’s original point that Broughel links to. Arrow was simply wrong to claim that a system is justified to the extent that it approximates rationality. For Broughel’s claim to be true, it must be the case that only a social welfare function is capable of underwriting normative support for markets (or democracy). But there are many alternative normative standards that could provide such support. Efficiency. Innovation. Discovery. Coordination. Natural rights. Basic rights. Public reason. Economic growth. Social morality. Virtue. Or mere agreement. One might be forgiven for believing that virtually any other normative standard is more useful than social welfare functions for judging economic and political institutions.

Description in Deficit

Broughel’s final beef with Buchanan concerns Buchanan’s view on deficits as burdening future generations:

…the reality is that current resources in the form of land, labor and capital must be marshalled to “finance” any increase in government expenditure. In that sense, larger deficits are “paid for” today and do not necessarily burden our children and grandchildren.

Here Broughel echoes an argument initially made by Abba Lerner. Scarce resources cannot be literally borrowed from the future. The “social cost” of deficit financing is always paid today.
Wealth is transferred from taxpayers to bond holders, but the scarce resources expended are the same whether public expenditure is financed through taxation or debt.[4] Buchanan’s rejoinder is that future taxpayers do suffer a utility loss from having to transfer resources to bondholders. Regardless of the wisdom of the government spending, future taxpayers would be even better off if that spending had been financed with present taxation. In essence, Lerner is focused on the objective side of the ledger and Buchanan on the subjective side. Both insights are straightforward and difficult if not impossible to dispute.

Karen Vaughn and Richard Wagner reconcile Lerner’s and Buchanan’s views on debt, along with Robert Barro’s concept of Ricardian Equivalence. The key insight of Barro’s view is likewise straightforward and uncontroversial: deficits are future taxes. Deficits thus reduce the present discounted value of assets held by individuals in the present. Reconciling these three views requires recognizing that individuals are heterogeneous. Some have children, some do not. Some like their children more than others. There will thus be variation in intergenerational altruism. For those with lower levels of intergenerational altruism, deficit spending is a lower cost method for financing government expenditures. Deficit financing thus represents a transfer of wealth from those with high to those with low intergenerational altruism. It is a method of changing who pays for any given government expenditure. If some individuals who have political influence on the margin have less than complete intergenerational altruism, deficit financing also results in increasing the net amount of government spending. Wagner has further developed this point in later work. Many decisions about fertility, marginal tax rates, and methods of servicing debt necessarily lie in the future.
Deficit financing thus serves as a means of obscuring who ultimately bears the burden of government spending. Broughel’s critique of Buchanan here is not based on an error but rather on an incomplete picture. The only statement of his concerning deficits that I take substantive issue with is this:

Moreover, the government can, in its unique position, perpetually roll over its obligations, thereby avoiding ever having to “pay back” some debts. (Granted, this is contingent on obligations not ballooning out of control.)

Though he does not take it as far as others, this Lerneresque view is very close to that used by proponents of Modern Monetary Theory. Scarce resources used in government consumption or investment must come from somewhere. The ability to roll over debt does not make government spending into a magic goodies creator. We must keep in mind all three aspects of deficits—scarce resources, future utility losses, and Ricardian equivalence—in order to grasp the full consequences of deficit financing.

Concluding Thoughts

No thinker is above critique. There are many arguments in Buchanan that, in my view, simply do not work. But Broughel has failed to identify any substantive flaws in his thinking. Radical subjectivism does not obscure the losses imposed by government policies. Buchanan’s thoughts on debt do not imply that scarce resources can be shifted into the future. And social welfare functions should be left in the ground where Arrow and Buchanan buried them.

One final point of clarification: Buchanan’s work should not be judged by how useful it is for libertarianism. It should be judged on its own merits, whatever conclusions they point to. Paraphrasing Peter Boettke, libertarianism is a conversation for children. Liberal political economy is an adult conversation that freely engages with normative considerations, especially about the value of liberty.
But it does not prejudge arguments as to whether they conclude that “state bad, market good.” Buchanan’s work is and will remain a vital and central contribution to that conversation.

[1] Alternatively, one could interpret this as a Pareto frontier, in which case it measures preference satisfaction for guns and butter. The same point applies.

[2] For those who hold out hope for social welfare functions, universal domain is actually the best point of attack.

[3] It goes without saying that all versions of Hegelianism are bizarre.

[4] There is a slight qualification in Lerner’s analysis: debt owed to parties external to the country can meaningfully be said to impose a burden. I set this aside to address more important considerations.

Adam Martin is Political Economy Research Fellow at the Free Market Institute and an assistant professor of agricultural and applied economics in the College of Agricultural Sciences and Natural Resources at Texas Tech University. For more articles by Adam Martin, see the Archive.


Background for Tonight’s Republican Debate

You don’t have to study federal budget data closely to know that the only way to reduce the huge budget deficits over the next 10 years, while avoiding tax increases, is to cut the growth rate of spending. The Congressional Budget Office estimates that federal spending in 2023 will come in at a whopping 24.2 percent of GDP, up from an average of 21.0 percent between 1993 and 2022. So a good way to judge various Republican and Democratic candidates for the presidential nomination, if they were governors, is to examine their record on state government spending. Doing so leads to two interesting conclusions. First, there are important differences among the Republicans who were or are governors. Second, although the old saying is that there’s not a dime’s worth of difference between Republicans and Democrats, on spending there sometimes are billions of dollars of difference. In fact, all the Republican governors or former governors performed better than the Democratic governors.

These are the opening two paragraphs of David R. Henderson, “How to Judge Governors Running for President,” Institute for Policy Innovation, TaxBytes, September 27, 2023.

The ranking, measured by average annual growth in state government spending and average annual per capita growth in government spending, is, from best to worst:

1. Mike Pence
2. Asa Hutchinson
3. Doug Burgum
4. Chris Christie
5. Ron DeSantis
6. Nikki Haley
7. Gretchen Whitmer
8. Gavin Newsom

The data are from my fellow Canuck Chris Edwards, “Governors Running for President,” Cato at Liberty, July 20, 2023.

One thing to note that I didn’t note because of my word constraint: the top 4 are fairly tightly clustered. DeSantis and Haley are outliers. One positive surprise is Chris Christie. He was governor in a “blue” state and probably had to fight a legislature more than did the 3 governors above him. The biggest negative surprise, to me, at least, was Ron DeSantis. He had a legislature that should have been easy to work with on spending.
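An “average annual growth” measure of the kind used to rank the governors is typically computed as a compound annual growth rate between the first and last budget years of a term. Here is a minimal sketch with made-up spending figures (not the actual Cato data):

```python
def avg_annual_growth(start_spending, end_spending, years):
    """Compound average annual growth rate between two spending levels."""
    return (end_spending / start_spending) ** (1 / years) - 1

# Hypothetical: a state's budget rises from $50B to $60B over 4 budget years.
growth = avg_annual_growth(50.0, 60.0, 4)
print(f"{growth:.2%}")  # prints 4.66%
```

The per capita version of the measure would simply divide each year’s spending by that year’s state population before applying the same formula, which is why the two rankings can differ in fast-growing states.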


The Agency to Act

In the original version of a now classic thought experiment, five people are about to be killed by a runaway trolley. Would you divert the trolley knowing that your choice will kill a single innocent bystander? That thought experiment is the seed for this conversation between EconTalk host Russ Roberts and fan fav Mike Munger. It gets even more interesting when Munger claims that Adam Smith had an elegant solution for this problem- a problem not articulated until the 20th century!

Part of the challenge of the trolley problem is the question of using cost-benefit analysis to make life or death decisions. As Roberts expresses the challenge of the trolley problem, it’s the difference between killing and allowing to die. For Philippa Foot, the mid 20th century philosopher who formally posed the trolley problem, it’s the doctrine of double effect- when you’re faced with a choice between two alternatives which you did not create. I’ll not ask how you would solve the trolley problem, but I will ask how you feel about some of the permutations Roberts and Munger discuss. We hope you’ll share your responses to the prompts below in the comments; we love to hear from you.

1- Roberts suggests that the reason we’re so uncomfortable with thought experiments like the trolley problem is that they pose a moral dilemma. That is, if we operated by a strictly utilitarian calculus, the “answer” would be obvious (assuming the original trolley problem- not a baby on the tracks, etc.). Why do we feel there’s a difference between the two options in terms of moral agency?

2- The trolley problem is, of course, a hypothetical puzzle. Munger describes his class discussions on the movie Oppenheimer, emphasizing that the United States’ decision to drop the atomic bomb was NOT a hypothetical. After listening to the conversation, do you think the US was morally justified in using atomic weapons? Why or why not?
How does your answer take into account the distinction between killing and “not allowing to die?” (And if you’ve still got some ethical muscle left, why do you think the firebombing of Tokyo was treated differently than the atomic bomb, as Roberts asks?)

3- Roberts and Munger discuss Adam Smith’s famous example of the Chinese earthquake, which Munger points to as Smith’s version of the trolley problem. How is the earthquake example analogous to the trolley problem? What is Smith’s “solution,” according to Munger? What’s the “second part of the story” that commentators often leave out, and why is this important to Smith’s solution? Does his solution depend on this second part? Explain.

4- I pose this next question with great trepidation… And that’s because I think Russ is wrong on the point he makes about externalities (53:27). He says, in response to Munger’s reading from the Book of Smith:

If Smith were literally right in that passage–if that passage was accurate–we would not worry about externalities in economics. We would just say, ‘Well, they’re not important, because people internalize them because of the man in the breast.’ They would never pollute or litter, or do things that harmed other people, because they would be aware that they were putting themselves forward.

Why do you think that I think Russ is wrong? Would there really be no externalities if Smith’s impartial spectator always worked? Why or why not?

5- Roberts closes by asking Munger, “Do you think capitalism and the commercial society encourage us to put ourselves before others? Do you believe that we are coarsened by the competitive nature of capitalism and tend to frequently ask, ‘What’s in it for me’?” How does Munger respond? How would you respond? In what ways does your response differ from Munger’s, and why?


We Can’t Collect Economic Information

It turns out that economists who have stressed the significance of the knowledge problem and the importance of information had it all wrong. At least, that’s the case if the socialist writer Nathan Robinson is to be believed. On Twitter (yes, I refuse to call the platform X, and you can’t make me), Robinson explains his solution:

Who knew the answer could be so simple? Unfortunately, while there is some virtue to be found in simplicity, this goes too far and ends up being woefully simplistic.

The first mistake here is similar to the one made by Daron Acemoglu when he suggested that the knowledge problem Hayek was focused on could be solved with a sufficiently powerful supercomputer. Both treat information as a static thing that just exists out there somewhere, perhaps in people’s heads, and all you need to do is collect that existing information and properly compute or aggregate it. But economic information isn’t something with its own independently active existence, just waiting to be collected and computed in the right way. Economic information exists only as part of the process that generates it. You can’t simply collect information and then use it to decide how to carry out economic activity, because the information itself doesn’t exist until after economic activity has generated it.

This is also true of the “information” that is in people’s heads. One reason is that simply asking people for information about what they want (perhaps through polls or when casting a vote) will often result in them giving you an expressive preference, rather than an instrumental preference. The “information” you get by “asking people” is often contradicted by the information you would get by observing what people actually choose – and as the old saw goes, actions speak louder than words. A deeper problem is that the information isn’t clearly available in pre-existing form even in our own heads, not even to ourselves.
As James Buchanan put it in his book The Logical Foundations of Constitutional Liberty, “Individuals do not act so as to maximize utilities, described in independently-existing functions. They confront genuine choices, and the sequence of decisions taken may be conceptualized, ex post, (after the choices), in terms of ‘as if’ functions that are maximized. But those ‘as if’ functions are, themselves, generated in the choosing process, not separately from such process.” As a consequence, Buchanan goes on to explain, “The potential participants do not know until they enter the process what their own choices will be.”

Here’s a straightforward example of what Buchanan is describing in practice, taken from my own personal experience. You would think that I could easily answer the question of “Do I want a PlayStation 5 console?” For a while, the answer to that seemed like an obvious yes to me. If asked, I would have certainly said yes – indeed, I was known to mention it from time to time without anyone needing to ask me at all! But in the first few years after the console was released, it was all but impossible to get without a combination of good luck and good timing. But some retailers would let you sign up on a waiting list, and once they got some in stock, a random selection of people on the waiting list would receive an invite to buy the console. One day, I actually got such an email. I immediately clicked on the link to buy the console. But then I stopped. I waited. Did I actually want one? I wasn’t so sure all of a sudden. I could certainly afford it – money wasn’t my constraint. But I started to consider another variable – time. When would I actually have time to play any games? I started thinking about what the opportunity costs were. Spending less time with my kids in order to make time for video games was a nonstarter for me. Cutting into my reading time was also off the table as far as I was concerned.
I do weight training five days a week along with six days of cardio exercise – but especially as middle age sets in, keeping up on my fitness has only become more important to me, so I wasn’t willing to cut back on my exercise time. As I thought about it, I realized that there would only be small, intermittent pockets of time where I would ever be able to play any games – and as this thought process was carried out, new “information in my head” was generated, and I decided that I didn’t actually want a new video game console. But you couldn’t have gotten that information out of my head by simply asking me for it, as Robinson’s simplistic take would suggest. Even I didn’t have that information in my head, until the actual act of choosing was carried out to generate it.

Now, it might have worked out differently. Suppose when the time came, I immediately bought the console and never regretted it. Would that mean that in this case, the information really was there “in my head” and could have been accurately collected by asking me? No. What it means is that the answer you would have gotten by asking me was an accurate prediction of what the information would turn out to be. But it doesn’t always work out that way. What people say they want and what they actually choose when the time comes are very often different from each other. And I’m sure, dear reader, if you introspect a bit, you can come up with examples from your own life where you were surprised by your own choices when the time came to decide.

For his part, Karl Marx very much insisted on seeing the world as a series of ongoing processes, rather than a collection of static things.
Marx’s theory and analysis of those processes was irreparably deficient, but he deserves some credit for at least being ahead of many of his modern-day followers, who see the world in terms of snapshots and outcomes, rather than ongoing processes, or who make the mistake of treating information as nothing more than merely collectable data points.


Trustbusters in Wonderland

When you think about it, the story is even more fantastic than Lewis Carroll’s Alice in Wonderland. A government—politicians and bureaucrats—which, among other problems, is mired in deficit and debt is suing two companies, Google and Amazon, with which nobody is forced to do business, while the same government is, just to take an example, forcing future taxpayers to finance its vote purchases with trillions of dollars of expenditures, not to mention its continuous misleading advertising. Each of these companies has to supply free and voluntary consumers with services, often free of charge, lest it be dethroned by competitors. Add that each (and perhaps especially Amazon) deserves a Medal of Freedom for arguably contributing to culture and knowledge more than any government has done.

There is another irony difficult to reproduce in any rabbit hole one can imagine. It is probably true that both companies are owned and managed if not manned by fans of state power who probably thought all along that “progressive” politicos were on their side. Any serious economist or student of Leviathan, I think, could have told them. Whether the lesson is learned or not is an interesting question.

Google has been sued by the Department of Justice’s Antitrust Division, supervised by Assistant Attorney General Jonathan Kanter, while Amazon is sued by the Federal Trade Commission, chaired by Lina Khan. In my Regulation article on “Bidenomics,” I note:

Biden’s nomination of Lina Khan, a 32‐year‐old Harvard Law School professor, as head of the Federal Trade Commission marked an attempt at expanding government antitrust action. Confirmed with the help of 21 Republican votes in the Senate, Khan has embarked on a crusade to extend the reach of existing law by developing new legal theories and filing more lawsuits. Mere “bigness” and high technology seem to be her chief concerns, rather than the longstanding consumer‐welfare standard.
The antitrust division of the Department of Justice has moved in the same direction. Large corporations are seen as the problem simply because of their size, though there appear to be no parallel concerns about big trade unions and big government. …

The international news magazine The Economist, despite being generally favorable to antitrust laws, has argued that the new crusade against bigness and high tech is unproductive. It threatens American research and development, of which one‐fourth is done by the five largest high‐tech firms. With nearly Schumpeterian accents, the venerable magazine defended the idea that “dealmaking, even involving big firms, is a vital part of healthy capitalism,” which the new antitrust warriors in D.C. do not seem to understand.

Let’s hope that the courts will stop the comedy.


Who Won the Caplan/Brook Debate?

When I read that my friend and former co-blogger Bryan Caplan would be debating Yaron Brook on anarcho-capitalism (Bryan is an anarchist; Yaron is not), I wanted Bryan to win. It’s not because I’m an anarchist; I’m not. But Bryan has brought me way closer to his position on many big issues, including open borders. So I hoped that he would make a good argument that would bring me closer to his position on anarchism.

Then, when I read the actual proposition being debated, I was sure Bryan would win. The proposition is: Anarcho-capitalism would definitely be a complete disaster for humanity. With both “definitely” and “complete” in there, that’s a particularly strong statement. Before the debate, I said on Facebook that with that formulation, Bryan was almost sure to win. What if you could show that anarcho-capitalism would be only a 50% disaster? Bryan would win. Bryan explained that it was Yaron who chose the wording. I said that I wasn’t surprised because Yaron likes to state things boldly; I think that reflects the influence of Ayn Rand, although maybe Yaron was that way before he had ever heard of Ayn Rand.

Bryan’s opening argument is very good and he did bring me closer to anarchism. But then Bryan said this:

If you claim that anarcho-capitalism would be a complete disaster for humanity if it were tried today, I agree.

My friend and co-author Charley Hooper and I were talking about this yesterday and we both agreed that this one statement cinches the win for–Yaron Brook. If even the person debating the issue admits the other side’s point, it’s game over. And Bryan admitted Yaron’s point.

That doesn’t mean you shouldn’t read Bryan’s opening statement and/or watch the whole debate. You can lose a debate but still bring people closer to your viewpoint. And that’s what Bryan did with me. By the way, the traditional way of judging who won is to say who moved more people to his side.
By that measure, for example, I won big-time in my debate on lockdowns with Justin Wolfers in April 2020. But I don’t know how the numbers went on Caplan/Brook.
