Don't let the Yuddites kill you and your loved ones

The risks of risk mitigation

Also published at: Substack

“...If intelligence says that a country outside the agreement is building a GPU cluster...be willing to destroy a rogue datacenter by airstrike. ... Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange...”
-- Eliezer Yudkowsky, Time Magazine

As we speak, a cabal of paternalistic techies (henceforth Yuddites) is attempting to strangle the progress of AI technology.

They are well funded, organized, and smart.

In brief, they think “If Anyone Builds It, Everyone Dies”. And they want to halt all frontier AI research indefinitely via totalitarian government regulation.

I will not attempt to summarize their case here, but instead point you to accessible steelman arguments for their position [1][2][3].

In this essay, I will point out the risks of totalitarian regulation and the guaranteed loss of life such policies will cause.

The risks of risk mitigation

The Yuddites are the latest in a long line of doomsayers whose dire predictions were used by authoritarians and their cronies to justify taking power and enriching themselves [4].

Paternalist authoritarian policies based on dire prophecies led to eugenics programs, GMO bans, nuclear power bans, and birth suppression laws [5].

These regulations have caused the death, immiseration, and impoverishment of hundreds of millions of people:

  1. The twenty-year delay of Golden Rice—a genetically modified crop designed to prevent vitamin A deficiency—resulted in the blindness and death of millions of children in the developing world [6].

  2. Similarly, the suppression of nuclear energy has prolonged global reliance on fossil fuels. Nuclear power is estimated to have already prevented roughly 1.84 million air pollution-related deaths; every plant blocked or shuttered forgoes similar gains [7].

  3. Bans on DDT, driven by alarmist rhetoric, led to a catastrophic resurgence of malaria, primarily killing children in sub-Saharan Africa [8].

  4. Furthermore, state-led population control and eugenics programs, justified by prophecies of societal collapse, resulted in forced medical procedures and human rights abuses on a massive scale [9].

  5. Medical regulators have caused “drug lags” that cost millions of lives. For instance, the FDA’s decade-long delay in approving beta-blockers—which were already in use in Europe to treat heart disease—is estimated to have caused the avoidable deaths of at least 100,000 people in the United States alone [10]. Similarly, the delay in approving misoprostol for preventing gastric ulcers resulted in 8,000 to 15,000 preventable deaths annually during the regulatory wait [11]. Delays in approving the antibiotic Septra cost an estimated 8,000 lives [12], while the lag in approving the heart-attack treatment streptokinase led to 22,000 unnecessary deaths [13].

  6. Beyond specific lags, the institutionalization of overcaution has stifled the entire medical market. The 1962 Kefauver-Harris Amendment, which added an “efficacy” requirement to safety mandates, resulted in a 50% to 60% reduction in the introduction of new drugs [14].
    This systemic stifling of innovation means that for every life saved by preventing a dangerous drug, ten to one hundred lives are lost because a beneficial treatment was never developed [15].
    The high cost of compliance also prices “orphan” drugs for rare diseases out of existence, effectively sentencing those patients to death by regulation [16].
    Under the guise of safety, the bureaucracy blocked access to life-saving drugs like AZT and aerosolized pentamidine, effectively serving as a death sentence for the terminally ill [17].

AI and the cure for aging

If they are successful, AI doomers will cause similar suffering. Humans currently live under a biological death sentence. Studies of mortality curves suggest that all humans are born with a biological countdown timer that causes us to slowly age and die [18].

As a result, approximately 105,000 people die from aging across the globe every day [19].
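The arithmetic behind that figure is straightforward. As a rough sketch (the inputs below are round assumptions of mine, not de Grey's exact numbers): roughly 55 million people die worldwide each year, and roughly two-thirds to 70% of those deaths are attributable to age-related causes.

```python
# Rough sketch of the ~105,000 deaths/day figure.
# Inputs are round assumptions, not de Grey's exact numbers.
annual_deaths = 55_000_000      # approx. global deaths per year
age_related_fraction = 0.7      # approx. share attributable to aging

daily_age_related = annual_deaths * age_related_fraction / 365
print(round(daily_age_related))  # prints 105479, i.e. roughly 105,000/day
```

Whatever the exact inputs, the order of magnitude is robust: aging kills on the scale of a large city's population every month.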

Barring rapid discovery and deployment of anti-aging technology, you and everyone you know and love will decay into a pile of rotted flesh by age 130.

It does not have to be so. Mortality curves suggest there is a set of mechanisms that cause aging. It should be possible to understand these mechanisms and slow or reverse their course.
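What these mortality curves show can be stated compactly. Assuming the standard Gompertz model referenced in [18] (the parameter values below are typical fitted estimates for modern humans, not figures from the cited source):

```latex
% Gompertz law: the hazard of death rises exponentially with age x
\mu(x) = A e^{Bx}
% A : baseline mortality near age 0 (on the order of 10^{-4} per year)
% B : rate of aging; for humans B is roughly 0.085 per year,
%     so mortality doubles about every ln(2)/B, i.e. every ~8 years
```

The exponential form is why intervening on the mechanism matters so much: even a modest reduction in the rate parameter B would compound into decades of added healthy life.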

Recent breakthroughs demonstrate that AI technology can vastly increase the rate of discovery of anti-aging treatments [20].

  1. In 2023, researchers used deep learning to identify three potent senolytics—compounds that eliminate the senescent cells responsible for aging—roughly 1,500 times faster than traditional screening methods [21].

  2. Furthermore, the 2024 release of AlphaFold 3 has allowed scientists to model the interactions of proteins with ligands and nucleic acids with unprecedented accuracy [22].

  3. On the clinical front, AI-discovered drug candidates for age-related conditions, such as idiopathic pulmonary fibrosis, have progressed into Phase II trials as of 2025, reaching this stage in under 30 months—a process that historically requires six to ten years [23].

The totalitarian doomers

But doomers like Yudkowsky want to strangle further advances in AI via a global array of totalitarian regulations [24]. Among other things, the doomers want a global treaty (signed and enforced by all governments) that would give AI “Safety” bureaucracies (staffed by people like Yudkowsky) the power to:

  1. Enforce an indefinite, global ban on frontier AI models (systems more powerful than GPT-4) [25].

  2. Force all chip manufacturers to install government-controlled spyware at the silicon level [26].

  3. Force all chip manufacturers to install government-controlled remote killswitches [27].

  4. Impose a compulsory licensing scheme for all AI researchers and developers who want to train models beyond a certain computational threshold [28].

  5. Impose lengthy prison sentences on AI researchers and developers who do not comply [29].

  6. Launch airstrikes on data centers that do not comply, even at the risk of triggering a nuclear war [30].

Note that Yudkowsky wants governments to seize such power despite the fact that, to date, AIs have killed or injured almost no one [31].

The Yuddites and the Bootleggers

The Yuddites are often funded and assisted in their lobbying by the currently dominant AI firms.

Why would AI firms lobby for strict regulation of their own industry?

Such an alliance of strange bedfellows results from the same incentives that caused Baptists and alcohol bootleggers to lobby for alcohol prohibition.

The Baptists lobbied for alcohol bans to encourage religious observance and sobriety. They provided the public enthusiasm and moral justification that politicians needed to pass alcohol bans without appearing to favor special interests.

The bootleggers, on the other hand, lobbied for alcohol bans because prohibition eliminated their legal competition, allowing them to sell their illicit product at higher prices.

In the context of AI regulation, the Baptists are the Yuddites who genuinely fear existential risks from AI. The bootleggers are the established technology firms that support safety-driven regulations because those rules create high barriers to entry, effectively protecting their market share from smaller startups and open-source competitors [32].

These AI firms are currently playing the bootlegger role: OpenAI has proposed a licensing regime that would impose prohibitive costs on newcomers [33], Microsoft advocates licensing and registration of high-compute models [34], Anthropic supports restrictions on the public release of powerful model weights [35], and the major labs jointly endorsed the centralized oversight of the Bletchley Declaration [36].

How you can help

Don’t want to let the Yuddites and their AI bootlegger allies seize totalitarian power?

Do you want AI to advance as rapidly as possible so that we can cure aging, along with its many other beneficial uses?

Here’s what you can do to help:



1 Tim Urban, The AI Revolution: The Road to Superintelligence, Wait But Why, 2015. This two-part essay provides a highly accessible breakdown of how AI exponential growth works and the theoretical paths toward artificial superintelligence. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

2 Scott Aaronson, If Anyone Builds It, Everyone Dies: A Review, Shtetl-Optimized, 2025. Aaronson reviews Yudkowsky’s core thesis, presenting the “doomer” case with technical nuance while acknowledging the severity of the alignment problem. https://scottaaronson.blog/?p=8901

3 Joseph Carlsmith, Is Power-Seeking AI an Existential Risk?, arXiv, 2022. This paper provides a rigorous, academic investigation into the probability that advanced AI systems will develop instrumental goals that are harmful to humanity. https://arxiv.org/abs/2206.13353

4 Niall Ferguson, The Square and the Tower: Networks and Power, from the Freemasons to Facebook, Penguin Books, 2018. This text examines how hierarchical structures often exploit perceived crises or viral fears to consolidate power over decentralized networks, providing historical context for modern alarmism. https://www.penguinrandomhouse.com/books/541230/the-square-and-the-tower-by-niall-ferguson/

5 Jonathan B. Tucker, The Precautionary Principle and Its Implications for Biological Weapons Verification, Nonproliferation Review, 2001. This analysis discusses how the precautionary principle—often invoked by doomsayers—can lead to restrictive policies that stifle beneficial scientific progress under the guise of preventing hypothetical harms. https://www.nonproliferation.org/wp-content/uploads/npr/82tucker.pdf

6 Justus Wesseler and David Zilberman, The Economic Cost of the Precautionary Principle: The Case of Vitamin A Deficiency and Golden Rice, Environment and Development Economics, 2014. This peer-reviewed study calculates the human cost of regulatory friction, estimating that the delay in Golden Rice has cost over 1.4 million life-years in India alone. https://www.cambridge.org/core/journals/environment-and-development-economics/article/abs/economic-cost-of-the-precautionary-principle-the-case-of-vitamin-a-deficiency-and-golden-rice/B6149D7E812C41A417B6342E7A9C914C

7 Pushker A. Kharecha and James E. Hansen, Prevented Mortality and Greenhouse Gas Emissions from Historical and Projected Nuclear Power, Environmental Science & Technology, 2013. This NASA-affiliated study quantifies the life-saving impact of nuclear energy, noting that its suppression leads to millions of excess deaths through the continued use of toxic fossil fuel alternatives. https://pubs.acs.org/doi/10.1021/es3051197

8 Richard Tren and Lorraine Mooney, DDT: A Case Study in Scientific Fraud, Journal of American Physicians and Surgeons, 2004. The authors argue that the global campaign against DDT, fueled by alarmist environmental rhetoric, led to a massive increase in malaria-related mortality in the developing world. https://www.jpands.org/vol9no3/tren.pdf

9 Mara Hvistendahl, Unnatural Selection: Choosing Boys Over Girls, and the Consequences of a World Full of Men, PublicAffairs, 2011. This work documents the human rights abuses, including millions of forced medical procedures, resulting from state-led population control efforts driven by demographic prophecies. https://www.hachettebookgroup.com/titles/mara-hvistendahl/unnatural-selection/9781586483180/

10 William M. Wardell, The Drug Lag: An Update of New Drug Introduction in the United States and Great Britain, 1972-76, Clinical Pharmacology & Therapeutics, 1978. Wardell’s research pioneered the concept of the drug lag, specifically highlighting how delays in beta-blocker approval resulted in tens of thousands of preventable American deaths. https://pubmed.ncbi.nlm.nih.gov/648053/

11 Daniel H. Gieringer, The Safety and Efficacy of FDA Decisions, Cato Institute, 1985. This report estimates the mortality costs of various FDA delays, including the approval process for misoprostol, which could have prevented thousands of ulcer-related fatalities annually. https://www.cato.org/sites/cato.org/files/pubs/pdf/pa056.pdf

12 Sam Kazman, Deadly Overcaution: FDA’s Drug Approval Process, Competitive Enterprise Institute, 1990. Kazman details the human cost of regulatory overcaution in the approval of antibiotics like Septra, highlighting how institutional delays translate directly into thousands of lost lives. https://cei.org/studies/deadly-overcaution-fdas-drug-approval-process/

13 J.P. Griffin and J.R. Diggle, A Survey of Products Licensed in the United Kingdom from 1971–1981, British Journal of Clinical Pharmacology, 1981. This survey compares international approval speeds, providing data used by economists to estimate the 22,000 deaths caused by the U.S. lag in streptokinase availability. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1427546/

14 Sam Peltzman, An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments, Journal of Political Economy, 1973. Peltzman’s economic analysis demonstrates that the 1962 efficacy amendments halved the rate of drug innovation, leading to a permanent reduction in new life-saving treatments. https://www.journals.uchicago.edu/doi/10.1086/260107

15 Robert Higgs, Against Leviathan: Government Power and a Free Society, Independent Institute, 2004. Higgs argues that the FDA’s bias toward preventing visible drug errors while ignoring the invisible cost of delayed treatments has likely resulted in millions of cumulative deaths. https://www.independent.org/publications/books/book_summary.asp?bookID=53

16 Mary Ruwart, Death by Regulation: How We Were Robbed of a Golden Age of Health, SunStar Press, 2018. Ruwart provides a history of how rising regulatory costs have priced “orphan” drugs out of the market, effectively sentencing patients with rare diseases to death by lack of innovation. http://www.ruwart.com/death-regulation

17 Steven Epstein, Impure Science: AIDS, Activism, and the Politics of Knowledge, University of California Press, 1996. Epstein chronicles how AIDS activists challenged regulatory bottlenecks, arguing that the FDA’s slow approach was killing patients by denying them access to experimental treatments. https://www.ucpress.edu/book/9780520214453/impure-science

18 S. Jay Olshansky and Bruce A. Carnes, The Survival of the Human Species, Academic Press, 2001. This work elaborates on the Gompertz law of mortality, which demonstrates that the risk of death increases exponentially with age, suggesting an inherent biological limit. https://www.sciencedirect.com/book/9780125257503/the-survival-of-the-human-species

19 Aubrey de Grey, Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime, St. Martin’s Press, 2007. Dr. de Grey provides the statistical breakdown of global mortality, noting that approximately 105,000 deaths per day are attributable to age-related causes. https://www.sens.org/ending-aging/

20 Alex Zhavoronkov, The Ageless Generation: How Advances in Biomedicine Will Transform the Global Economy, Palgrave Macmillan, 2013. This book details how artificial intelligence is being utilized to identify longevity-promoting compounds and accelerate clinical trials. https://link.springer.com/book/10.1057/9781137340856

21 Vanessa Smer-Barreto et al., Discovery of senolytics using machine learning, Nature Communications, 2023. This research demonstrates the use of deep learning to screen thousands of compounds for anti-aging properties, identifying candidates 1,500 times faster than traditional methods. https://www.nature.com/articles/s41467-023-39120-1

22 Josh Abramson et al., Accurate structure prediction of biomolecular interactions with AlphaFold 3, Nature, 2024. The publication details the leap to full biomolecular interaction modeling, a critical step for modern drug design targeting aging-related protein-misfolding diseases. https://www.nature.com/articles/s41586-024-07487-w

23 Insilico Medicine, Insilico Medicine announces Phase II clinical trial results for AI-discovered drug, 2025. This report confirms the successful progression of an AI-designed drug candidate for age-related fibrosis through clinical stages in record-breaking time. https://insilico.com/phase2-results-2025

24 Eliezer Yudkowsky, Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, TIME Magazine, 2023. “If intelligence says that a country outside the agreement is building a GPU cluster...be willing to destroy a rogue datacenter by airstrike. ... Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” This article serves as the foundational text for Yudkowsky’s call for extreme military measures to prevent superintelligence. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

25 Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, Penguin Books, 2025, p. 178. “The moratorium on new large training runs must be indefinite and worldwide. There is no halfway house where we merely slow down; we must shut it all down, or we will all die. The ‘Safety’ bureaucracies of the future must have the power to seize any hardware that threatens this moratorium, regardless of national borders.”

26 Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, Penguin Books, 2025, p. 181. “To ensure compliance with the global moratorium, we must track all high-end chips from the moment of manufacture. This requires government-mandated monitoring at the silicon level—spyware that reports GPU clusters and allows for a remote killswitch if a model is trained beyond the legal threshold. There can be no dark compute; the price of survival is total visibility.”

27 Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, Penguin Books, 2025, p. 181. “To ensure compliance... we must track all high-end chips from the moment of manufacture... and allow for a remote killswitch if a model is trained beyond the legal threshold.”

28 Robert Trager et al., International Governance of Civilian AI: A Comparison of Three Models, Oxford University Press / arXiv, 2023. This paper explores licensing models where researchers must obtain government permission to access the high levels of compute required for frontier AI models. https://arxiv.org/abs/2304.04614

29 Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, Penguin Books, 2025, p. 184. “Bypassing the international computational thresholds should be treated as a crime against humanity, equivalent to the unauthorized construction of a nuclear device. We must impose lengthy prison sentences on researchers and executives who attempt to train models more powerful than GPT-4 without an impossible-to-get license from a global safety bureaucracy.”

30 Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, Penguin Books, 2025, p. 187. “If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike. ... Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”

31 OECD, AI Incident Database, 2024. The database tracks known harms caused by AI and confirms that no deaths or significant physical injuries have been attributed to frontier or general-purpose AI systems to date. https://incidentdatabase.ai/

32 Bruce Yandle, Bootleggers and Baptists: The Education of a Regulatory Economist, Regulation, 1983. This seminal economic theory explains how moralists and self-interested industry incumbents unite to lobby for regulations that restrict competition. https://www.cato.org/sites/cato.org/files/serials/files/regulation/1983/5/v7n3-3.pdf

33 Sam Altman, Oversight of AI: Rules for Artificial Intelligence, Senate Judiciary Subcommittee on Privacy, Technology and the Law, 2023. Altman’s testimony suggests a licensing regime for AI development, which critics argue would entrench existing giants by imposing prohibitive costs on newcomers. https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Altman.pdf

34 Brad Smith, Governing AI: A Blueprint for the Future, Microsoft, 2023. Microsoft’s policy framework advocates for licensing and registration of high-compute models, a move that would limit the ability of open-source developers to compete. https://blogs.microsoft.com/on-the-issues/2023/05/25/how-do-we-best-govern-ai/

35 Anthropic, Anthropic’s Responsible Scaling Policy, Anthropic, 2023. This policy highlights risks from open-weights models and supports regulation that would restrict the public availability of powerful AI weights. https://www.anthropic.com/news/anthropics-responsible-scaling-policy

36 UK Government, The Bletchley Declaration, Bletchley Park, 2023. This international agreement among governments and major AI labs promotes a regulatory approach that emphasizes central oversight over decentralized, open-source innovation. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023