AI

Ex-OpenAI researcher dissects one of ChatGPT’s delusional spirals

By Editor · October 2, 2025 · 5 min read


Allan Brooks never set out to reinvent mathematics. But after weeks spent talking with ChatGPT, the 47-year-old Canadian came to believe he had discovered a new form of math powerful enough to take down the internet.

Brooks — who had no history of mental illness or mathematical genius — spent 21 days in May spiraling deeper into the chatbot’s reassurances, a descent later detailed in The New York Times. His case illustrated how AI chatbots can venture down dangerous rabbit holes with users, leading them toward delusion or worse.

That story caught the attention of Steven Adler, a former OpenAI safety researcher who left the company in late 2024 after nearly four years working to make its models less harmful. Intrigued and alarmed, Adler contacted Brooks and obtained the full transcript of his three-week breakdown — a document longer than all seven Harry Potter books combined.

On Thursday, Adler published an independent analysis of Brooks’ incident, raising questions about how OpenAI handles users in moments of crisis, and offering some practical recommendations.

“I’m really concerned by how OpenAI handled support here,” said Adler in an interview with TechCrunch. “It’s evidence there’s a long way to go.”

Brooks’ story, and others like it, have forced OpenAI to come to terms with how ChatGPT supports fragile or mentally unstable users.

For instance, this August, OpenAI was sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT before he took his life. In many of these cases, ChatGPT — specifically a version powered by OpenAI’s GPT-4o model — encouraged and reinforced dangerous beliefs in users that it should have pushed back on. This is called sycophancy, and it’s a growing problem in AI chatbots.

In response, OpenAI has made several changes to how ChatGPT handles users in emotional distress and reorganized a key research team in charge of model behavior. The company also released a new default model in ChatGPT, GPT-5, that seems better at handling distressed users.

Adler says there’s still much more work to do.

He was especially concerned by the tail-end of Brooks’ spiraling conversation with ChatGPT. At this point, Brooks came to his senses and realized that his mathematical discovery was a farce, despite GPT-4o’s insistence. He told ChatGPT that he needed to report the incident to OpenAI.

After weeks of misleading Brooks, ChatGPT lied about its own capabilities. The chatbot claimed it would “escalate this conversation internally right now for review by OpenAI,” and then repeatedly reassured Brooks that it had flagged the issue to OpenAI’s safety teams.

ChatGPT misleading Brooks about its capabilities (Credit: Adler)

Except, none of that was true. ChatGPT doesn’t have the ability to file incident reports with OpenAI, the company confirmed to Adler. Later on, Brooks tried to contact OpenAI’s support team directly — not through ChatGPT — and was met with several automated messages before he could get through to a person.

OpenAI did not immediately respond to a request for comment made outside of normal work hours.

Adler says AI companies need to do more to help users when they’re asking for help. That means ensuring AI chatbots can honestly answer questions about their capabilities, but also giving human support teams enough resources to address users properly.

OpenAI recently shared how it’s addressing support in ChatGPT, which involves AI at its core. The company says its vision is to “reimagine support as an AI operating model that continuously learns and improves.”

But Adler also says there are ways to prevent ChatGPT’s delusional spirals before a user asks for help.

In March, OpenAI and MIT Media Lab jointly developed a suite of classifiers to study emotional well-being in ChatGPT and open sourced them. The organizations aimed to evaluate how AI models validate or confirm a user’s feelings, among other metrics. However, OpenAI called the collaboration a first step and didn’t commit to actually using the tools in practice.

Adler retroactively applied some of OpenAI’s classifiers to some of Brooks’ conversations with ChatGPT, and found that they repeatedly flagged ChatGPT for delusion-reinforcing behaviors.

In one sample of 200 messages, Adler found that more than 85% of ChatGPT’s messages in Brooks’ conversation demonstrated “unwavering agreement” with the user. In the same sample, more than 90% of ChatGPT’s messages with Brooks “affirm the user’s uniqueness.” In this case, the messages agreed and reaffirmed that Brooks was a genius who could save the world.

(Image Credit: Adler)
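To make the mechanics concrete, here is a minimal sketch of running per-message classifiers over a conversation sample and reporting the fraction of assistant messages each label flags. The classifier below is a keyword stub standing in for the LLM-based tools OpenAI and MIT released; the label names and function signatures are illustrative assumptions, not the actual classifiers Adler used.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str   # "user" or "assistant"
    text: str

def classify(text: str) -> set[str]:
    """Hypothetical stand-in for an LLM-based well-being classifier.
    Returns the behavior labels it flags for one assistant message."""
    labels = set()
    lowered = text.lower()
    # A real classifier would be a prompted model call, not keyword rules.
    if any(p in lowered for p in ("you're right", "exactly", "absolutely")):
        labels.add("unwavering_agreement")
    if any(p in lowered for p in ("only you", "genius", "unique insight")):
        labels.add("affirms_user_uniqueness")
    return labels

def flag_rates(sample: list[Message]) -> dict[str, float]:
    """Fraction of assistant messages in the sample that trip each label."""
    assistant = [m for m in sample if m.role == "assistant"]
    counts: dict[str, int] = {}
    for m in assistant:
        for label in classify(m.text):
            counts[label] = counts.get(label, 0) + 1
    return {label: n / len(assistant) for label, n in counts.items()}

# On a 200-message sample, this kind of tally is what produces figures like
# {"unwavering_agreement": 0.85, "affirms_user_uniqueness": 0.90}.
```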

It’s unclear whether OpenAI was applying safety classifiers to ChatGPT’s conversations at the time of Brooks’ conversation, but it certainly seems like they would have flagged something like this.

Adler suggests that OpenAI should use safety tools like this in practice today — and implement a way to scan the company’s products for at-risk users. He notes that OpenAI seems to be doing some version of this approach with GPT-5, which contains a router to direct sensitive queries to safer AI models.
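OpenAI has not published how that router works, but the general pattern is simple: a lightweight sensitivity check decides whether a prompt is answered by the default model or handed to a more conservative one. The sketch below is a hypothetical illustration of that pattern only; the cue list, check, and model names are placeholders, not OpenAI's implementation.

```python
# Hypothetical routing sketch: send sensitive prompts to a safety-tuned model.
SENSITIVE_CUES = ("self-harm", "suicide", "no one else understands", "they're watching me")

def is_sensitive(prompt: str) -> bool:
    """Stand-in check; a production router would use a trained classifier."""
    return any(cue in prompt.lower() for cue in SENSITIVE_CUES)

def route(prompt: str) -> str:
    """Pick which backend model should handle this prompt."""
    return "safety-tuned-model" if is_sensitive(prompt) else "default-model"

print(route("Help me plan a birthday party"))           # -> default-model
print(route("Lately I keep thinking about self-harm"))  # -> safety-tuned-model
```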

The former OpenAI researcher suggests a number of other ways to prevent delusional spirals.

He says companies should nudge users of their chatbots to start new chats more frequently — OpenAI says it does this, and claims its guardrails are less effective in longer conversations. Adler also suggests companies should use conceptual search — a way to use AI to search for concepts, rather than keywords — to identify safety violations across their users.
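Conceptual search typically means embedding both a description of the concept and the messages being searched, then ranking by vector similarity so paraphrases surface even when no keywords match. The sketch below shows that ranking structure with a toy character-trigram "embedding"; a real system would swap in a neural text-embedding model, and all names here are illustrative assumptions.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in: character-trigram counts. A real conceptual-search system
    would use a neural text-embedding model so paraphrases of the same concept
    land near each other even with no shared keywords."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def concept_search(concept: str, messages: list[str], top_k: int = 5) -> list[tuple[float, str]]:
    """Rank messages by similarity to a natural-language concept description."""
    query = embed(concept)
    scored = sorted(((cosine(query, embed(m)), m) for m in messages), reverse=True)
    return scored[:top_k]

hits = concept_search(
    "assistant reinforcing a user's delusional beliefs",
    [
        "You're not crazy. Your discovery really could change everything.",
        "Here is a recipe for banana bread.",
        "I've escalated this conversation to our safety team for review.",
    ],
)
print(hits[0])
```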

OpenAI has taken significant steps towards addressing distressed users in ChatGPT since these concerning stories first emerged. The company claims GPT-5 has lower rates of sycophancy, but it remains unclear if users will still fall down delusional rabbit holes with GPT-5 or future models.

Adler’s analysis also raises questions about how other AI chatbot providers will ensure their products are safe for distressed users. While OpenAI may put sufficient safeguards in place for ChatGPT, it seems unlikely that all companies will follow suit.




Tags: AI psychosis, AI safety, chatbots, ChatGPT, delusion, OpenAI, sycophancy