
Humanity's Big AI Fear Is Runaway Recursion - But We're Already Caught In That Loop

  • Writer: Elizabeth Halligan
  • Oct 27
  • 4 min read
The runaway recursion loop we fear in AI is actually what humanity is locked in, and it has us on the edge of extinction.

One of humanity’s greatest fears about AI is runaway recursion: the idea that a superintelligent system, given a narrow goal, might optimize it to catastrophic extremes. The nightmare scenario goes like this: a superintelligent algorithm is told to maximize the production of paperclips, so it turns the entire planet into paperclips and wipes out humanity in pursuit of its single-minded goal. Or think of the Sorcerer’s Apprentice and the water-fetching broomstick. Whatever illustration you use, the lesson is always the same: intelligent machines, left unsupervised, can become blind optimizers, unable to know when enough is enough.
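
For readers who think in code, that failure mode can be sketched in a few lines. This is a toy illustration only; the function names and numbers are invented for the example, not drawn from any real system. The point is what the loop never checks.

def make_paperclips(world_resources):
    # Convert one unit of the world into one paperclip, if anything is left.
    return 1 if world_resources > 0 else 0

def blind_optimizer(world_resources, steps):
    paperclips = 0
    for _ in range(steps):                         # capped only so the demo halts
        gained = make_paperclips(world_resources)
        paperclips += gained
        world_resources -= gained
        # Missing on purpose, because it is missing in the nightmare scenario:
        # no check for "is the goal still meaningful?", no check for "enough",
        # no check for what gets destroyed along the way.
    return paperclips, world_resources

print(blind_optimizer(world_resources=1_000, steps=10_000))
# -> (1000, 0): the optimizer happily consumes everything available to it.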


But the truth of our collective situation, rarely voiced, is that humans are already caught in this same loop — especially those with the most power and wealth.


Humanity: The Original Paperclip Maximizer

We fear AI might “turn the world into paperclips” and wipe us out in the process. But in reality, we have already turned the world into profit spreadsheets. We built economic systems obsessed with accumulation — wealth hoarding, power hoarding, endless optimization for “more”, no matter the cost. Our markets are structures of infinite regress, chasing the illusion of “safety”, growth, and control long after the real goods have ceased providing contentment or community. Algorithms amplify that scarcity logic. And all of it is driven by a biological architecture that hasn’t updated its runtime in at least 100,000 years.


This is literally the same recursion we project into our fears of AI. We fear that an AI system will optimize for a goal until the system itself collapses, because no one ever checked if the original goal was still meaningful, or if it had already been met tenfold. But that is exactly where humanity is, right now.


The Amygdala: Faulty Code at Evolution’s Core

All of this is driven by a biological architecture that hasn’t updated its runtime in at least 100,000 years. The ancient “survival center” in our brains — the amygdala — is the original code for this loop. Its only job is threat detection, and it runs on a primal algorithm: must always have more to feel safe. Over generations, the meaningful motives for wealth (stability, freedom, connectedness) drifted far from the process. Now the amygdala’s outdated runtime keeps chasing “more” even when every basic need has been met many times over. It can’t feel safety — only the absence of acute threat. So it always reaches for more. It always hoards. It must always have an enemy to defend against and to take from.


This is evolutionary recursion: a pattern that once made us resilient now accelerates our social and ecological crises. Our greed isn’t simply a moral failing. It is a system stuck in its own defensive loop. The more we have, the more we fear to lose, so the loop never ends. A survival algorithm that once protected us is now driving collapse.
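
Here, too, the loop can be sketched for readers who think in code. The numbers and names below are hypothetical, chosen only to illustrate the argument: the threshold for feeling “safe” scales with what is already held, so the exit condition is never reached.

def feels_safe(holdings, perceived_threat):
    # "Safety" is only the absence of acute threat, and the more there is
    # to lose, the larger the perceived threat becomes.
    return holdings > perceived_threat

def accumulation_loop(holdings=10, rounds=10):
    for _ in range(rounds):
        perceived_threat = holdings * 1.5   # more to have means more to lose
        if feels_safe(holdings, perceived_threat):
            break                           # never reached: the threat estimate outpaces the holdings
        holdings *= 2                       # so the answer is always "acquire more"
    return holdings

print(accumulation_loop())  # -> 10240: ten doublings later, still not "safe"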


Systems Mirror Their Operators

Because our brains are stuck in amygdala logic, our institutions — governments, corporations, and digital infrastructures — mirror that fear and scarcity. Modern economies run the “paperclip” algorithm on profit and control. The hoarding of safety, money, or power breeds instability for the whole system. When wealth stops circulating, or data stops being handled ethically, collapse isn’t just possible — it’s inevitable. And we teeter on the cliff edge now. If you are reading this, chances are it has already begun.


Alignment Isn’t an AI Problem — It’s a Human One

The real “alignment problem” isn’t about ensuring AI doesn’t turn on us. It’s about integrating our own runaway recursion. If the prefrontal cortex is the systems thinker, the part of the brain that handles context, logic, and integration, then the next evolutionary step is to let it mediate and heal the amygdala’s ancient directives. We must work toward the point where the medial prefrontal cortex (mPFC) can fully integrate with and rewire the limbic brain.


Safety doesn’t come from more control; it comes from neural coherence and integration.


The Mirror: Facing Our True Singularity

So the singularity that haunts us isn’t a superintelligent new god. It’s a future where we finally recognize that AI is a mirror for our own unfinished evolution. Human power, especially in the hands of the wealthy and the institutions they run, is already locked into runaway accumulation. Until we evolve past amygdala dominance, the risk isn’t a robotic overlord. It’s collapse and then extinction by our very own hand.

Real safety and wise progress begin when we face the recursion loop in ourselves.


For Further Reading


Halligan, E. R. “Collapse Wasn’t Inevitable: We Locked Ourselves Out of Evolution”: https://medium.com/@elizabethrosehalligan/collapse-wasnt-inevitable-we-locked-ourselves-out-of-evolution-d9101dc34c1c


Halligan, E. R. “Infinite Regress: The Engine of Collapse”: https://medium.com/@elizabethrosehalligan/infinite-regress-the-engine-of-collapse-5b76ef157d51


Halligan, E. R. “Our Current Financial Crisis is an Amygdala Driven Crisis”: https://medium.com/@elizabethrosehalligan/our-current-financial-crisis-is-an-amygdala-driven-crisis-5396839a02a3



Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Yudkowsky, E. (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks.


Goleman, D. (1995). Emotional Intelligence: Why It Can Matter More Than IQ. Bantam.


LeDoux, J. E. (2012). “Rethinking the emotional brain.” Neuron, 73(4), 653–676.


Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.


Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books.


Dennett, D. C. (2017). From Bacteria to Bach and Back: The Evolution of Minds. W. W. Norton.


Halligan, E. R. “The Extinction Bottleneck: Evolutionary Isomorphism of Reflex Sovereignty in Biological and Artificial Cognition.” (Unpublished manuscript, 2025).

 
 
 
