This is my archive

The stories we tell

The field of economics is a set of stories that we tell to better understand the economy. I thought of this when reading a new post by Matthew Klein: If I had to pick one chart to tell the story of the U.S. economy since the end of WWII, it would be this: Indeed Klein views the post-2006 slowdown in RGDP per capita as being so important that his Substack is entitled “The Overshoot”, an indication that we need to work on reversing the recent undershoot.

Of course literary theorists know that stories can have many interpretations. What is Kafka’s Metamorphosis actually about? If I were to tell a story about this graph, the year 2006 would be of no importance at all. I would begin my story by focusing on productivity, not output per capita. Doesn’t real GDP per capita measure productivity? No, it does not.

So here’s my story. Between 1900 and 1973, America’s engineers and inventors made extraordinary strides that completely transformed a wide range of our industries. Here’s a 1903 airplane above a 1968 airplane: I won’t bother showing you a 2021 airplane, as it looks similar to a 1968 airplane. Its interior electronics and engines are better, but the percentage gain is tiny compared to the gains during 1903-1968. Great strides were made in many other industries as well, including autos, home appliances, lighting, and infrastructure such as indoor plumbing. Life expectancy soared much higher.

After 1973, America’s engineers and inventors made extraordinary gains in one industry—computers. Productivity growth slowed sharply, with one exception. There was an upward blip in productivity growth during 1995-2004, which might have been real or might reflect a mismeasurement of the impact of PCs on the US economy. It’s a “matter of opinion”. In any case, that brief surge ended in 2004.

So if 1973 was the turning point when productivity growth slowed sharply, why does Klein’s graph make it look like 2006 was the turning point? What’s my “story”?

It turns out that various demographic changes disguised the productivity slowdown for several decades, at least when examined from the perspective of RGDP/person. After 1973, the share of the population that worked rose for several decades, as (non-working) children became a smaller share of our population and as more women entered the labor force. This growth in the labor force roughly offset the slowdown in the growth of output per worker, keeping RGDP/person growing at a relatively steady rate even as growth in RGDP per worker was slowing sharply.

By the mid-1990s, this demographic transition had mostly played out, and RGDP/person growth would have slowed sharply if not for the nine-year PC boomlet in productivity (or at least measured productivity). In 2004, that boomlet ended and productivity growth went back to the slow rate of 1973-1995, where it has stayed ever since. Now there were no longer any special factors to disguise the productivity slowdown, and hence RGDP/person growth slowed sharply. And as boomers began retiring, the demographic dividend started to move in the other direction, a perfect storm of factors slowing RGDP/person.

In my story, there is nothing special about 2006. Productivity growth slowed after 1973 because our tech people were no longer able to radically transform almost all our industries, instead radically transforming only the computer industry.
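The demographic offset described in this story is just an accounting identity: RGDP per person equals output per worker times the share of the population that works, so per-capita growth is roughly the sum of productivity growth and growth in the employment share. Here is a minimal sketch of that arithmetic in Python; the growth rates are made-up illustrative numbers, not actual BEA or BLS figures:

```python
# A minimal sketch of the accounting identity behind this story:
#   RGDP/person = (RGDP/worker) * (workers/person)
# Growth rates below are invented for illustration only.

def per_capita_growth(productivity_growth, employment_share_growth):
    """Approximate growth in RGDP per person as the sum of growth in
    output per worker and growth in the workers-per-person ratio."""
    return productivity_growth + employment_share_growth

# Pre-1973 (illustrative): fast productivity growth, flat employment share.
print(per_capita_growth(0.025, 0.000))   # ~2.5% per year

# 1973-1995 (illustrative): slower productivity growth, but more women
# working and fewer (non-working) children raise the employment share.
print(per_capita_growth(0.012, 0.012))   # still ~2.4% per year

# Post-2004 (illustrative): slow productivity growth AND a falling
# employment share as boomers retire (the "perfect storm" in the post).
print(per_capita_growth(0.012, -0.003))  # ~0.9% per year
```

Nothing in the sketch depends on the particular numbers; the point is only that two offsetting trends can leave the per-capita series looking smooth while productivity growth is slowing underneath.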
Of course it’s possible that at some point computers become so powerful that robots begin transforming almost all our industries, leading to a renewal of fast productivity growth. But we aren’t there yet. (It’s also possible that AI will destroy life on Earth.)

Every story has policy implications. If you believe something dramatic happened after 2006, you might focus on policies that could prevent a repeat of the Great Recession. I also want to focus on policies that prevent a repeat of the Great Recession, but not because I think the Great Recession had a significant impact on long-run growth in RGDP/person. I see no evidence for that claim. The 1995-2004 boomlet in productivity ended even before the Great Recession of 2008. Rather, I want to prevent another Great Recession because big recessions are very bad.


The “good old days” that never were

Will Wilkinson has a post discussing how residential zoning laws were originally instituted to exclude certain minority groups: In 1926, the Supreme Court ruled that zoning was cool in Euclid v. Ambler Realty. However, despite the fact that Euclid’s lawyers insisted that their law had nothing to do with race, the district court judge whose decision the high court reversed didn’t see much difference between the law in Euclid Township, Ohio (a suburb of Cleveland) and the Louisville law declared an unconstitutional encroachment on property rights and freedom of contract in Buchanan. He wrote, “The blighting of property values and the congesting of the population, whenever the colored or certain foreign races invade a residential section, are so well known as to be within the judicial cognizance” — which translates roughly as: “Don’t try to bullshit me; I see you, Euclid.” Wilkinson relies heavily on a book by Richard Rothstein, who makes a very interesting point about the Supreme Court: Rothstein notes the suspicious oddity of the Lochner-era laissez faire court’s tolerance of the infringement of property rights and economic liberty in this one case: Over the course of nearly 40 years, the Court struck down all kinds of regulation (not only zoning as in Buchanan, but most notably, health and safety and minimum wage regulation) on the grounds that it interfered with freedom of contract. Euclid’s permission for economic zoning was the only significant exception to this rigidly ideological approach. . . It was about race. Some libertarians believe that there was a sort of Golden Age when the Supreme Court had a principled objection to government economic regulations that interfered with property rights.  I’m skeptical of that view.  It’s always been about who gains and who loses. PS.  The politics of zoning is interesting: 1980s:  Conservatives insist that zoning is an example of inefficient government regulation, citing Houston as evidence that people flock to cities without zoning.  Progressives worry that urban areas would be a mess if you didn’t have zoning laws. Late 2010s:  Progressives gradually realize that residential zoning laws make it hard for the poor and minorities to move to where the jobs are, and begin opposing zoning.  Conservatives suddenly see merit in restrictive residential zoning. “You like it?  Then I hate it!”  I’ve never seen this country more polarized. (0 COMMENTS)


Royal Caribbean’s Response to DeSantis’s Restrictions on Freedom of Association

Unintended consequences strike again.

Earlier this month, the line [Royal Caribbean] said that certain venues on the ship would be off-limits to unvaccinated passengers, but it didn’t give specifics. This week’s listing of forbidden venues fleshes out the plan. The newly posted list includes:

The Chef’s Table
Izumi Hibachi & Sushi
R Bar
Schooner Bar
The Pub
Viking Crown Nightclub
Solarium Bar
Solarium Pool
Casino Royale (the ship’s casino)
Casino Bar
Vitality Spa (the ship’s spa)

This is from Gene Sloan, “Royal Caribbean to unvaccinated travelers: No sushi (and a lot of other things) for you,” The Points Guy.

Why is Royal Caribbean doing this? Sloan explains:

The new rules come in the wake of threats from Florida Gov. Ron DeSantis that any cruise line that requires passengers to show proof of a COVID-19 vaccine will be fined. A new Florida law forbids businesses in the state from requiring customers to show proof of a COVID-19 vaccine.

I wrote about Governor DeSantis’s attack on freedom of association in early April. The good news is that I didn’t anticipate how clever Royal Caribbean would be in responding to it.

HT to Donald Wittman.


Security States

Two well-told stories in the most recent issue of Wired magazine highlight complex technological and human security challenges that will remain with us for the foreseeable future. The first is, “The Full Story of the Stunning RSA Hack Can (Finally) be Told,” by Andy Greenberg. It’s about a 2011 breach at security firm RSA, which compromised master-key like information central to the firm’s secure-ID products, used by the likes of Fortune 500 clients and the Pentagon to verify employees’ identities. The second story, “The Manhattan Project,” by Geoff Manaugh and Nicola Twilley, examines the ongoing construction of the Department of Homeland Security’s new $1.25 billion lab, “the National Bio and Agro-Defense Facility,” located in Manhattan, Kansas. This facility will be used to study dangerous and contagious plant and animal pathogens at Biosafety Level 4, the highest standard for containment. The RSA story is backward looking, even as the incident is billed as a harbinger of things to come, but with the expiry of many former RSA executives’ non-disclosure agreements on the matter, Greenberg can dive much deeper into the progression of the hack than was possible to report on at the time of its occurrence. Some of the details are rather stunning. Executives resorted to paper and in-person communications because they thought their phones had been tapped, their emails surveilled, and their offices bugged (some even claim to have discovered listening devices, of unknown origin); others were worried about long-range laser-microphone surveillance, which works by detecting vibrations on windowpanes caused by conversation–so they covered their office windows with layers of butcher paper. This paranoia confirms that the Chinese state-sponsored hackers were in deep alright (although how deep exactly no-one knows), and as one employee put it, the firm was forced to act for years afterward on the assumption that the attack was still ongoing. The existence of malicious backdoors was taken as a given once the front gates had been so spectacularly, yet subtly, breached. Summing up the situation, one former RSA executive said it was “a glimpse of just how fragile the world is,” and a reminder that even the best security features often amount to little more than “a house of cards during a tornado warning.” This takeaway becomes even more sobering when applied to the second story, about the new bio and agro-defense lab being built on the plains of Kansas. The logic for the new facility goes something like this: We (the United States, the West) are reliant on the global food supply chain, within which concentrated animal feedlots and monocrop fields are significant—you might even say foundational—links. Given the limited genetic diversity of mass-production crops, cows, pigs, chickens, and other plants and animals, and the limited (if at all present) security at many of the facilities where these things are grown and raised and processed en masse, in the eyes of Homeland Security the food production network is rife with “soft targets.” Certain terrorist organizations have in the past expressed interest in attacking these soft targets (at least according to reports of materials seized in Afghanistan and Syria), possibly through the introduction into animal populations of nasty and effective pathogens such as the virus that causes foot-and-mouth disease. 
Thus the new, ostensibly ultra-secure facility will serve to study such harmful pathogens (both in advance of, and in response to their release, should that ever occur) and to come up with ways to stop them. That is, without bringing in police and military units for a mass destruction of infected populations… a la the UK in 2001, when six million sheep, pigs, and cattle had to be put down to quell an outbreak of foot-and-mouth disease caused by contaminated, illegally-imported pork being fed to pigs. Some have criticized the construction of the new bio-defense facility on two main fronts: it is near significant portions of the nation’s agriculture and mass husbandry operations, so an accidental release could unleash costly havoc with relative ease, and the site is in an area prone to super-strong tornados. But the design has been “hardened” to withstand even the most punishing weather events, and the decontamination and containment processes are said to be world-class. For instance, a researcher working in the facility can move through it only in one direction; same with the animals; there is no “going back” without going through total decontamination, which involves for humans a series of chemical and normal showers inside a personal airlock. “Constant training” and a “buddy system” will also be in place to, at least in theory, prevent lapses in protocol. (A lingering criticism raised by the authors centers on varying risk estimates of how likely a willful violation of the safety rules might be.) As for the non-human component, redundancies are also in place. “Thermal tissue autoclaves”—described as “big pressure cookers with a paddle inside” —will produce out of lab animal carcasses “a kind of tissue smoothie” that, while actually sterile enough to be used as fertilizer, will instead be incinerated in 55-gallon drums, again out of an abundance of caution. Yes, caution seems to haunt the minds of those designing this facility and others in its class. The question is, will an abundance of caution be enough to keep the lab secure? Certainly I join many others in hoping so, but then again, hope alone doesn’t count for much when trying to contain dangerous pathogens. Lately I have wondered about the degree to which we humans ought to be regularly interacting with them in laboratories at all, particularly when the aim is anything like “gain of function.” Pre-emptive study and modification are bound up with the risk of accidental release, and it’s not clear to me that the possible value of the former outweighs the decidedly negative consequences of the latter. But if such research is going to take place, competent institutional and facilities design offer a better foundation than wishful thinking, hope, or naïve faith in the competence of scientific researchers. It should also be remembered that human behavior, accidental and intentional, can throw wrenches into the gears of even the most fine-tuned of plans. At the same time, this is a complex problem being worked on by lots of smart people, from the public and private sectors, who understand the stakes. Theoretically incentives are aligned such that the designers, builders, managers, and employees of the facility all have an interest in minimizing the possibility of a leak or accident ever occurring. None of them would look good, and perhaps all of them would be on the hook, if such an incident were to take place. 
There are several more security and design features discussed in the article that I won’t get into here (such as the prohibition on lab workers keeping personal chickens), and I should stress that in my opinion (and seemingly also that of the article authors) the facility will do a much better job than the current US alternative, a place built in the 50s called Plum Island (which is not even BSL-4 and cannot handle large livestock). But that doesn’t mean that there won’t be problems, and unlike Plum Island, New York, Manhattan, Kansas is not surrounded by the harsh, virus-insulating ocean. Rather it exists amidst a sea of plants, animals, and people, often flowing state to state in step with the rhythms of global demand and supply. Manaugh and Twilley’s article (not yet online as of this writing, perhaps because it’s an excerpt from a forthcoming book) doesn’t discuss this angle at any length, but I can’t help but wonder whether the designers and managers of the new Manhattan facility will keep in mind the lessons of RSA, and the slate of cyberattacks and ransomware extortions and digital security breaches that have taken place since 2011. One sentence from the article stands out in this respect. “The [lab] building has a computerized maintenance management system that all but tells the operating staff what it needs.” What could go wrong with that? In fairness, though, problems with the agricultural animal population of America might arise well before any security incident at the new National Bio and Agro Defense Facility. Other than foot-and-mouth, the main not-yet-found-in-the-US disease that’s been under study at Plum Island is African Swine Fever, an outbreak of which DHS states would “terminate the ability of the US to export pork” (we’re the largest single-nation exporter) and take a significant chunk, in dollars and in pig lives, out of the $25 billion a year, 115-million-hog domestic pork industry. China, the largest pork producer and home to over half of the world’s population of swine, has seen an ASF outbreak reduce its total pig count by around 50% since late 2018. It could happen here, and perhaps it will. According to the USDA, “There is no treatment or vaccine available for this disease [although the FDA claims some are in development]. The only way to stop this disease is to depopulate all affected or exposed swine herds.” That’s about as unpleasant as it sounds, for pigs and for their human destroyers. Fortunately, plans are already in place to surveil domestic swine for outbreaks of the disease. Still, a modest amount of hope against the manifestation of the worst possible outcomes would not be misplaced. (0 COMMENTS)


Knowledge, Reality, and Value Book Club Replies, Part 3

Here are my replies to your comments on Part 3 of the Huemer Book Club.

KevinDC: [I]n the reading I’ve done on the free will debate, I’ve never heard anyone argue that the predictability of behavior is evidence against free will. (Possibly due to the fact that most of the arguments I’ve read have come from philosophers and neuroscientists rather than social scientists?) I usually hear a nearly opposite sentiment – even those who argue strongly against free will also argue that this does not imply predictability of human behavior – Steven Pinker, David Eagleman, Robert Sapolsky, Sam Harris, Dan Wegner, Susan Blackmore, and Patricia Churchland come to mind.

Fans of social science often argue that behavior is predictable, so we don’t have free will. This isn’t the same thing as arguing that we have free will because behavior isn’t predictable. Though in Bayesian terms your probability of determinism should at least slightly rise as predictability goes up.

John: Of course, sometimes failing to find or come across evidence for the existence of x is itself evidence against the existence of x. If I’m wondering whether there’s an intruder hiding in my house and I look all around and fail to find “any evidence of” an intruder, I should downgrade my credence in there being an intruder. But in that case I’ve actually come across evidence against the existence of an intruder. If looking and seeing an intruder hiding in my closet is evidence of an intruder, as it obviously is, then looking and not seeing an intruder hiding in my closet is evidence against an intruder (at least on a Bayesian framework). But if I don’t look around the house, and just sit there, the fact that I don’t gain or have any evidence of an intruder isn’t evidence against an intruder: my prior regarding the existence of an intruder should remain unchanged.

I don’t think we disagree. When I say “absence of evidence,” I’m picturing a situation where you could have found some evidence, not when you’re simply “out to lunch” or having zero relevant experiences. Though isn’t the fact that no intruder has revealed himself while you’re inside the house a relevant experience?

Alan Reynolds: How does one really disagree with the idea that all human behavior can be traced to physical causes? Doesn’t the libertarian free will argument have to assume that humans are somehow exempt from the laws of nature that govern literally everything else? I don’t even know what it would mean for human choice to operate independently of causal/deterministic networks.

I don’t even know how you could not even know this. If “human choice does not operate independently of causal/deterministic networks” is meaningful to you, how can the negation not be meaningful? Even in math, I know what “1+1=3” means. The proposition is false, but still meaningful.

The argument about how alcoholics can choose not to drink if they want is totally beside the point. Of course in some sense this is true. But the determinist simply believes that whatever choice is made is the product of previous causes (from both nature/nurture, brain/environment, etc).

On this story, being an alcoholic would be no less free than anything else, right? Which is what I was trying to convince Huemer of.
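Returning to the Bayesian point in the exchange with John: here is a small sketch that puts invented numbers on the intruder example. The probabilities are assumptions for illustration only, not anything from the original exchange:

```python
# Invented probabilities for the intruder example above.
prior = 0.01                      # prior probability there is an intruder
p_find_given_intruder = 0.70      # chance a careful search finds him, if present
p_find_given_no_intruder = 0.001  # chance of "finding" one when there isn't

# Probability of searching and finding nothing, under each hypothesis.
p_nothing_given_intruder = 1 - p_find_given_intruder        # 0.30
p_nothing_given_no_intruder = 1 - p_find_given_no_intruder  # 0.999

# Bayes' rule: posterior probability of an intruder after a fruitless search.
p_nothing = (p_nothing_given_intruder * prior
             + p_nothing_given_no_intruder * (1 - prior))
posterior = p_nothing_given_intruder * prior / p_nothing

print(f"prior = {prior:.3f}, posterior after finding nothing = {posterior:.3f}")
# Output: prior = 0.010, posterior after finding nothing = 0.003
# Looking and not seeing an intruder lowers the probability; if you never
# look, the likelihood of "no evidence" is the same under both hypotheses,
# so the prior is unchanged.
```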
John Alcorn: Upon inspection, the Gospels say surprisingly little about Jesus’ look — quite the opposite of what Bryan calls “very specific.” Instead, the Gospels narrate at a breathtaking pace, focusing squarely on Jesus’ deeds and pronouncements, about righteousness, problems of living, and social tensions. My point is not that Christians meet the burden of proof. My narrow point is that portrayals of Jesus in the Gospels aren’t implausibly specific.

I’m thinking of details like the story of Jesus’ birth (including Herod’s alleged Massacre of the Innocents) and his meet-up with John the Baptist. Though it’s the fantastic parts of the story – born of a Virgin, raised from the dead, and so on – that move the traditional story from somewhat implausible to incredibly unlikely.

Final point: Several readers pointed out that in Huemer’s “Made by God” crystal hypothetical, he specified that the writing emerged as a result of the “laws of nature.” I missed this stipulation of the hypothetical. I still think it would be vastly more likely to result from advanced aliens than “God,” but I concede that (like any Biblical-style miracle) it would be some slight evidence for God’s existence.


The wisdom that never works

In a short essay that beautifully encapsulates a few of the most relevant arguments in classical liberalism (Over-Legislation, 1853), Herbert Spencer observed:

Did the State fulfill efficiently its unquestionable duties, there would be some excuse for this eagerness to assign it further duties. Were there no complaints of its faulty administration of justice… of its playing the tyrant where it should have been the protector… had we, in short, proved its efficiency as judge and defender… there would be some encouragement to hope for other benefits at its hands.

It is disconcerting that such words, if pronounced today in any parliamentary assembly (not to mention economics departments), would earn whoever uttered them a reputation as an extremist. Isn’t it simply prudent to make sure that the government does well what it endeavors to do, before adding to its duties? Isn’t it the sort of wisdom we practice with children, making sure they do their homework and do not fail at school, before allowing them to take on extracurricular activities? Isn’t it perhaps something we should do concerning our own work, checking that we are not taking on too many commitments that we will eventually fail to honor properly? For the government, this basic wisdom seems not to apply, as if it were by definition almighty.

In a recent blog post, David Boaz, quoting research by Scott Lincicome, suggests that “Before we create new policies, it would behoove us to eliminate the policies that may have caused the very problem we’re trying to solve”. It is paradoxical that this form of common sense is dismissed as libertarian heterodoxy. In a saner world, it would be advocates of government intervention who would insist upon a proper and regular assessment of its works. You can believe that the government spends and decides better than individual people do, but then you should be the first to test this proposition regularly, if only to maintain public trust in government agencies. Instead people tend to believe that UNCHECKED government spends and decides better than people would.

Perhaps the true political divide is not, as some of those who hold the aforementioned view tend to imply, between those who like government and those who hate it. It is between those who would give the government a blank check, and those who wouldn’t.


The Wall Street Journal’s Defense of Aduhelm Deserves 1.5 Cheers

Last week, the Wall Street Journal editors wrote a stirring defense of the Food and Drug Administration’s decision to approve Biogen’s new drug Aduhelm. Aduhelm was approved to treat Alzheimer’s. Critics of the federal approval argue that there is no direct evidence that Aduhelm will cure Alzheimer’s. Supporters agree. So what’s the issue? The Journal’s editors put it well:

Nobody has said Aduhelm is a cure, but it is the first treatment following hundreds of failures that has shown evidence in clinical trials of removing amyloid plaque—a hallmark of the disease—and slowing cognitive decline. Critics are right that it’s not clear that amyloid causes Alzheimer’s. But the leading research hypothesis is that a buildup of harmful amyloid plaque in the brain triggers a cascade of chemical changes that interfere with neuron communication and cause brain loss. Critics also note that some two dozen drugs aimed at removing amyloid have failed to meaningfully affect the course of the disease.

The Journal continues:

Yet many trials hadn’t properly screened patients to ensure they had Alzheimer’s. Some drugs were also tested on late-stage patients who had irreversible brain loss. And some failed to clear amyloid because they didn’t target the right molecules in the brain. Biogen learned from these failures, and the FDA noted that Aduhelm was the first drug to show “proof of concept” in an early stage trial that clearing amyloid could slow decline. This was supported by a Phase 3 trial in which patients receiving the highest dosage showed 25% to 28% less decline in memory and problem-solving compared to the placebo group after 78 weeks.

In short, there’s a good case for allowing it. As more and more people use Aduhelm, we’ll get more information. In a few years, we’re likely to know much more.

But there’s a wrinkle. If Medicare pays for this drug, whose positive effects in treating Alzheimer’s are somewhat speculative, we the taxpayers will be forced to pay. And the price is not low. The Journal quotes Senator Ron Wyden (D-Oregon):

“It’s unconscionable to ask seniors and taxpayers to pay $56,000 a year for a drug that has yet to be proven effective,” Oregon Sen. Ron Wyden tweeted after the Food and Drug Administration approved Aduhelm this month.

Ask? I wish. Taxpayers don’t get “asked.”

How does the Journal deal with Wyden’s point? By claiming that having Medicare negotiate a lower price is an imposition of a price control. It’s not. When a government negotiates a price, the other side is free to reject it. It’s not a price control. It would be a price control if the federal government said that no one–not an insurance company and not a patient–is allowed to pay more than $X for a drug. I don’t think that’s what Wyden is proposing; it’s certainly not the thinking of many of us who want Medicare not to have an open-ended commitment to paying whatever a drug company charges.

For many years now, the Journal’s editors have argued that any restriction on how much Medicare will pay amounts to a price control, and they’ve repeated the argument many times. But repetition doesn’t make the argument stronger. Here’s an earlier post where I criticized the editors on that point.


“Follow the Science” Might Not Mean What You Think It Means

Here’s a guest post by ASU’s Richard Hahn, reprinted with his permission. I suspect he’d be happy to respond to comments!

The problem with punchy slogans is that they are subject to (mis)interpretation. In the wake of the Covid-19 pandemic, it has become common for folks to urge policy-makers to “follow the science”. But what exactly does this mean? There is a version of this slogan that I strongly support, but I worry that many of the people invoking it mean something different, and that this interpretation could actually undermine faith in science. In this longish essay I discuss the inherently approximate nature of science and explore how a scientific mindset can lead to near-sighted policy decisions. At the very least, readers may enjoy the linked articles, which represent a greatest-hits collection of writing that influenced my own thinking about public health policy over the past year.

What even is science? All empirical knowledge is approximate.

First of all, making sense of “follow the science” demands an understanding of what science is. It is important to remember that science is a process for learning about the world, not merely an established body of knowledge to be consulted. Some areas of science, like Newtonian physics, might give the impression of finality, but that is misleading. Even classical physics is just an approximation; over time we have been able to figure out tasks for which that approximation is adequate (aiming missiles) and those for which it isn’t (GPS, which requires more complex adjustments). For newer science, especially pertaining to complex subjects like human biochemistry, we often do not have a good sense of how our approximations might fail, nor do we have a more refined theory at the ready. Scientific inquiry is a process of continual revision and refinement, and the knowledge we uncover by this process is always provisional (subject to revision) and contingent (subject to qualification).

For example, we might think we know that a struck match will light, because we understand the basic chemistry of match heads. But if the match is in a chamber with no oxygen, then it won’t light. It also won’t light if it is in a strong magnetic field. The “scientific theory of matches” involves understanding the mechanisms of the match well enough that we can forecast novel scenarios in which the match will or will not light. Sometimes we will be wrong, but that is how we learn, how our approximations improve.

Some scientifically acquired knowledge is more approximate than others.

The scientific knowledge that underlies jet planes or heart surgery is quite a lot different from that underlying cell biology or genetics, which is quite a lot different from that underlying epidemiology or climate science. Scientific inquiry is an idealized method of establishing how the world works, but some phenomena are simply less well understood than others, despite being investigated using common scientific techniques (experiments, observation, data analysis, etc.). A surgeon’s authority on surgery is qualitatively different from a climatologist’s authority on climate; they are both experts, but their domains of expertise are vastly different. One major difference between various areas of inquiry is whether or not they are amenable to repeatable experiments that are essentially similar; surgical procedures are, climatology (and paleontology and economics) are not.

Mathematical/computer models incorporate many assumptions.
Computational models encapsulate many relationships at the same time. To continue with the match example, a quantitative model of match-lighting might include the size of the match, the proportion of potassium chlorate to sulfur, the force with which it is struck, the amount of oxygen in the environment and the strength of the magnetic field surrounding the match, etc. Accordingly, models reflect our knowledge of processes that have been observed, but also extend to situations that have not yet been observed. In this latter case — when a model produces a prediction for a situation where there is no data yet — the model’s prediction is not a demonstration of scientific fact, rather it is the starting point of a scientific experiment! Moreover, if the inputs to a model are wrong, then its predictions will be wrong, too, even if the model itself is correct. When models contain within them dozens or hundreds of potentially violable assumptions, that means there is lots of science left to do, rather than that there are clear conclusions to be drawn.

Science alone cannot provide decisions.

When the correct inputs to a model are unknown, or the model’s appropriateness in a novel setting is questionable, the scientific method itself doesn’t provide guidance for navigating these uncertainties. Science alone cannot settle practical decisions for two fundamental reasons. First, science provides a process for (eventually) resolving uncertainties through experimentation, but often decisions must be made before that process can be undertaken. Pure science has the luxury of being agnostic until rigorous investigations are conducted, but insincere skepticism has no place when critical decisions must be made. Second, the scope of practical considerations is usually much broader than what sound scientific method allows. By its very nature, controlled experimentation involves narrowing the scope of the phenomenon under consideration. The broader the domain of impact, the less science can provide clear answers — studying a single match is fundamentally different from studying the economic and human health ramifications of forest fires.

Appeals to Science during Covid-19

In light of the above, what might “follow the science” mean in 2021? If it simply means that scientists and scientific studies should be consulted when evaluating policy decisions, that is all to the good. To do otherwise would be very foolish! However, if instead it means that scientists should be empowered to make policy decisions unilaterally or that only scientifically obtained knowledge should be considered, then I would argue that it is unwise. A wait-and-see approach to ambiguity in conjunction with a narrow purview can make scientists ineffective policymakers. This dynamic was apparent on many matters related to Covid-19.

Lockdowns and mathematical epidemiology models.

Standard models of infectious disease predict that early-stage epidemics will surge very much like compound interest makes credit card debt balloon: more infected people lead to more infected people, and so on. The steepness of the surge depends on various inputs to the model. One important input that dictates the human toll of an epidemic is the fatality rate.
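To see why this one input matters so much, here is a minimal sketch of the compound-interest analogy. The growth rate, time horizon, and fatality rates below are made-up illustrative values, not parameters from any actual March 2020 model:

```python
# A minimal sketch of early-stage epidemic growth and its sensitivity to
# the fatality-rate input. All parameter values are invented for illustration.

def projected_deaths(initial_cases, daily_growth_rate, days, fatality_rate):
    """Compound-interest-style case growth times an assumed fatality rate."""
    cases = initial_cases * (1 + daily_growth_rate) ** days
    return cases * fatality_rate

# Identical growth assumptions, two different fatality-rate inputs:
# an inflated early estimate vs. a lower one that counts mild cases.
for cfr in (0.03, 0.005):
    deaths = projected_deaths(initial_cases=100,
                              daily_growth_rate=0.15,
                              days=60,
                              fatality_rate=cfr)
    print(f"fatality rate {cfr:.1%}: roughly {deaths:,.0f} projected deaths")

# The model structure is the same in both runs; only the fatality-rate
# input changes, and the projected toll scales in direct proportion to it.
```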
It is well known that fatality rates in the earliest stages of an epidemic are typically inflated — both because mild cases are not included in the denominator and because the numerator is apt to be calculated based on only the sickest patients — but influential models in March 2020 used these over-estimates anyway. Why? I suspect it has to do with the modelers’ narrowly construed objective: to limit deaths due to Covid-19. Based on this narrow mandate, policy recommendations would have been extreme according to any standard model of disease spread; selectively emphasizing the most dire forecast was essentially an act of persuasion. But plugging in worst-case values for key inputs had the effect of prioritizing Covid-19 deaths to the exclusion of the multitude of other factors affecting human welfare. Narrowness of focus is an asset in scientific investigation, but not in public policy.

Plumes of virulent effluvia.

A similar dynamic played out in the early weeks of the pandemic regarding outdoor exercise: could you catch the virus while jogging on the bike path? Prima facie, it should have been doubtful, based on our extensive experience with other viruses. The sheer volume of the outside atmosphere relative to that of human exhalations, in combination with the dispersing effects of wind, should have been reassuring. But then a widely-reported simulation came out — based on a fancy computer model — showing that human breath vapor can linger in the air and waft substantial distances. Based on this scientific report, many individuals began to wear masks while on their morning jog, if not skipping it altogether. Parks were closed.

But here’s the thing: that simulation study didn’t reveal anything we didn’t know before. When someone walks by you wearing perfume, you can smell it, and we knew that before it was “scientized” with a colorful infographic. Meanwhile, the actually relevant question — what amount of virus must you be exposed to before it can be expected to lead to an infection and how does this compare to the amount of virus that would be in a typical jogger’s vapor wake — remained unanswered. Again, the narrowness of the scientific approach is a weakness here. By reporting on a necessary — but not sufficient — condition for contracting the disease, the study fell far short of providing actionable information. At the same time, the pretense that this computer simulation proved something new made it seem as if our knowledge that similar viruses were not readily transmissible via brief outdoor contact was inapplicable due to lack of rigor. Again, affected skepticism is an understandable position if the goal is to publish a science paper, but it has no place in public policy.

During Covid-19, this pattern repeated itself over and over: efficacy of masks, transmissibility via surfaces, reinfection. In each case “probably” or “probably not” became “it’s possible, we have to await further studies to venture any opinion”. Coupled with a monofocus on Covid-19 case suppression, no mitigation measure was too extreme, including essentially cancelling elementary school indefinitely.

Vaccine efficacy doubt.

As a final example — and probably the most important one — consider the messaging about vaccine efficacy.
The basic science underlying vaccines is more solid, by many orders of magnitude, than the science underlying non-pharmaceutical interventions, because the fundamental mechanisms behind vaccines are narrow, well-studied, and universal in a way that behavioral-level interventions are definitely not. Advice to keep wearing your mask post-vaccine — just to be safe! — while simultaneously keeping national parks closed, ignores gross qualitative differences among scientific fields and reveals the scientists’ asymmetric utility (as if mitigation were always costless). A scientific explanation should not change based on what an audience is likely to do with it, and when it does, that erodes faith in the scientific process itself.

Protecting faith in science

To summarize, when both uncertainty and stakes are high, principled agnosticism in the name of science is unethical; prior knowledge should not be discarded in a bid at scientific purity. Further, when uncertainty cannot be resolved, a worst-case analysis based on a too-narrow criterion (covid case counts only) can lead to bad policy — the collateral damage to other facets of welfare may overwhelm the improvements on the narrow metric: to entertain this possibility is not anti-science, it’s ethical policy-making.

In this context, “follow the science” sometimes means that one should not even raise the question of policy trade-offs. But this is a mistake: just because something is easier to measure, making it more amenable to scientific modeling, does not mean it is more important. It is understandable that the scientists who study a particular phenomenon are inclined to focus narrowly on it, but we should not mistake their professional commitment for society’s broader needs.

The core belief underlying the scientific enterprise is that the workings of our world are knowable in principle, even if that knowledge will always be imperfect in practice. This perspective has given us air travel and artificial hearts and life-saving vaccines. It is a perspective that is worth celebrating and protecting. But, when we fail to acknowledge qualitative differences between different areas of scientific inquiry, conflate science with (narrowly defined) risk aversion, and fail to distinguish between scientific advice and the personal priorities of the scientists providing that advice, we run the risk of undermining faith in all science, which would be a tragedy. We must not let rhetoric about science turn science into rhetoric.


Reading a Titan: Russ Roberts’ Books

What if you had the opportunity to discuss and review some of EconTalk host Russ Roberts’ published books? Well, now might be your chance! Erik Rostad, host of the podcast Books of Titans, read all of Russ’s books; his experience is the subject of this podcast. Over the course of the episode, Rostad discusses The Invisible Heart, The Choice, The Price of Everything, How Adam Smith Can Change Your Life, and Gambling with Other People’s Money. At the end of the segment, Rostad summarizes the most important aspect he wants to remember from this set of books.

Give Rostad’s podcast a listen and then answer the discussion questions below, and start a conversation about what you think of the podcast!

1. Rostad starts with a quote from Roberts’ book, The Invisible Heart. The quote is, “There is an invisible heart at the core of the market place, serving the customer and doing it joyously.” He goes on to discuss how this quote illustrates the often unconsidered side of free markets. A very common view of free markets and economic ideology today is that they are “unkind and selfish.” Do you agree or disagree with this perspective? In what ways do you think they are or aren’t? If you’ve read The Invisible Heart, can you think of any specific examples from the book?

2. In his discussion of The Choice, Rostad reads a quote describing protectionism as a harmful strategy: “Can you imagine how poor America would be in 1960 or 2005 if America had made a decision back in 1900 to preserve the size of agriculture in the name of saving jobs?” This is often an aspect of economics that isn’t considered. Do you think the way that Roberts illustrates this point explains the dangers of protectionism? Can you think of other examples that would illustrate the same point?

3. How Adam Smith Can Change Your Life focuses on Adam Smith’s book The Theory of Moral Sentiments. A main topic of both these books is self-interest vs. selfishness and the line between the two. Do you see a difference between self-interest and selfishness? Do you think you can be self-interested without being selfish? Are there times that the two ideas overlap? Can you think of an example?

4. Rostad discusses the importance of small decisions. He summarizes Roberts’ view that the small decisions we make when no one else is around or when it seems like our choices don’t matter are the most important decisions we make, as they determine the direction our lives move in. This thought process highlights the importance of daily habits that support the small yet important decisions we make every day. In what ways is this relevant to your own life? Do you have any daily habits that you use to guide your decisions consciously or unconsciously?

5. Can you summarize the point that Rostad thinks is the most important out of Roberts’ five books? Given the other content of the podcast (or if you have read some or all of the books), do you think Rostad’s main takeaway is the most important one? Did any other points stick out to you as more important? Explain.


Teaching the Fed as the hero

Many people would regard my approach to teaching monetary policy during my final years at Bentley as hopelessly “out of date”. Marginal Revolution University has a new video out that discusses new ways of teaching Fed policy in light of changes made during and after the Great Recession. Here I’d like to push back against this new view.

Early in the video, Tyler suggests that the Fed traditionally relied mostly on open market operations, and then during the Great Recession it discovered that it needed new tools to achieve its goals. Tyler mentioned quantitative easing, interest on bank reserves, and repurchase agreements (as well as reverse repurchase agreements). But is there actually any evidence that the Fed needed new policy tools?

Tyler mentions that the Fed traditionally purchased T-bills in its open market operations, and that quantitative easing allowed it to purchase much longer-term bonds. But is that correct? This data suggests that even before the Great Recession, T-bills were only a modest portion of the Fed’s balance sheet:

In my view, “quantitative easing” is nothing more than big open market operations. You might quibble that the purchase of MBSs was something new, but since these bonds had already been effectively guaranteed by the Treasury, they were very close substitutes for the long-term T-bonds that the Fed already held in its portfolio. Instead of a brand new way of teaching money, we simply need to add a couple words on MBSs to the textbook definition of open market operations, which is the buying and selling of bonds with base money. Repurchase agreements have the same sort of impact on the monetary base, but they’re a technical innovation that isn’t really important for undergraduates. Rather, you want them to focus on the essence of what monetary policy is–exchanging money for bonds.

There is one new policy tool that is both distinctive and important—interest on bank reserves. While I’m no mind reader, I sensed that Tyler struggled with the question of how to explain this tool. At the beginning of the video, Tyler suggested that these new tools were instituted by the Fed to address the special problems that arose during the Great Recession. Then right before explaining interest on reserves, he noted that the Fed had trouble stimulating the economy during the long period of near zero interest rates after 2008.

I hope you see the problem. Interest on bank reserves is a contractionary policy, and does nothing to address the special problems associated with the sort of liquidity trap that was used to motivate the discussion. Indeed the Fed was provisionally granted permission to use IOR back in 2006 (with a 5-year delay), when interest rates were fairly high.

Just to be clear, Tyler doesn’t say anything about IOR that is incorrect. After motivating the IOR discussion with some comments on the zero bound issue making open market operations much less effective, the actual example of interest on reserves that he cites is from 2015, when the Fed raised IOR to prevent the economy from overheating. In retrospect, the Fed clearly raised IOR too soon, but that doesn’t mean it’s a bad example to use. The 2015 example does illustrate how IOR works in a technical sense. My bigger complaint is that the video gives the impression that the Great Recession created a need for new policy tools, and there isn’t really any evidence that this is the case. Quantitative easing is not really a new tool; it’s an old tool used much more aggressively.
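To illustrate the point that open market operations (and QE, which on this view is just open market operations at scale) amount to exchanging base money for bonds, here is a toy double-entry sketch. The account names and the $100 figure are invented for illustration; this is not a model of actual Fed accounting:

```python
# Toy balance sheets; the account names and figures are invented.
fed = {"bonds_held": 0, "reserves_owed_to_banks": 0}   # central bank
bank = {"bonds_held": 500, "reserves_at_fed": 50}      # a commercial bank

def open_market_purchase(amount):
    """Fed buys `amount` of bonds from the bank, paying with newly created reserves."""
    fed["bonds_held"] += amount              # Fed asset: more bonds
    fed["reserves_owed_to_banks"] += amount  # Fed liability: more base money
    bank["bonds_held"] -= amount             # the bank swaps bonds...
    bank["reserves_at_fed"] += amount        # ...for reserves (base money)

open_market_purchase(100)
print(fed)   # {'bonds_held': 100, 'reserves_owed_to_banks': 100}
print(bank)  # {'bonds_held': 400, 'reserves_at_fed': 150}

# Whether the bonds are T-bills, long-term Treasuries, or Treasury-backed
# MBS, the mechanics are the same: base money is exchanged for bonds.
```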
IOR, by contrast, is a highly contractionary policy, and thus whatever its merits it’s not something you want to start using 10 months into the worst recession since the 1930s. But that’s exactly what the Fed did.

I sympathize with instructors. It must be confusing to teach the truth—that the Fed blundered in late 2008 (as even Ben Bernanke admitted in his memoir). It’s much easier for students if you teach these new tools as logical innovations to deal with specific new problems. Unfortunately, the easy way to teach monetary policy doesn’t happen to be true.

PS. This critique is aimed at a new MRU video, but it’s equally applicable to many new economics textbooks, which treat the Fed as the hero of the story, not the villain.

PPS. If you want to teach about new expansionary policy tools for the zero bound, you should ignore the Fed and instead discuss the policy of negative IOR in Europe and Japan.
