Are bad incentives to blame for AI hallucinations?

By Editor | September 8, 2025


A new research paper from OpenAI asks why large language models like GPT-5 and chatbots like ChatGPT still hallucinate and whether anything can be done to reduce those hallucinations.

In a blog post summarizing the paper, OpenAI defines hallucinations as “plausible but false statements generated by language models,” and it acknowledges that despite improvements, hallucinations “remain a fundamental challenge for all large language models” — one that will never be completely eliminated.

To illustrate the point, researchers say that when they asked “a widely used chatbot” about the title of Adam Tauman Kalai’s PhD dissertation, they got three different answers, all of them wrong. (Kalai is one of the paper’s authors.) They then asked about his birthday and received three different dates. Once again, all of them were wrong.

How can a chatbot be so wrong — and sound so confident in its wrongness? The researchers suggest that hallucinations arise, in part, because of a pretraining process that focuses on getting models to correctly predict the next word, without true or false labels attached to the training statements: “The model sees only positive examples of fluent language and must approximate the overall distribution.”

“Spelling and parentheses follow consistent patterns, so errors there disappear with scale,” they write. “But arbitrary low-frequency facts, like a pet’s birthday, cannot be predicted from patterns alone and hence lead to hallucinations.”

The paper’s proposed solution, however, focuses less on the initial pretraining process and more on how large language models are evaluated. It argues that current evaluation methods don’t cause hallucinations themselves, but that they “set the wrong incentives.”

The researchers compare these evaluations to the kind of multiple-choice tests where random guessing makes sense, because “you might get lucky and be right,” while leaving the answer blank “guarantees a zero.” 


“In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say ‘I don’t know,’” they say.
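The incentive can be made concrete with a small expected-value calculation (a hypothetical illustration, not taken from the paper): under accuracy-only grading, any nonzero chance of being right makes guessing strictly better than abstaining.

```python
# Expected score per question under accuracy-only grading:
# a correct answer scores 1; anything else (wrong or blank) scores 0.

def expected_score_guess(p_correct: float) -> float:
    """Expected score when the model guesses and is right with probability p_correct."""
    return p_correct * 1.0 + (1.0 - p_correct) * 0.0

def expected_score_abstain() -> float:
    """Expected score when the model answers 'I don't know' (always 0)."""
    return 0.0

# Even a wild guess (a 1% chance of being right) beats abstaining:
assert expected_score_guess(0.01) > expected_score_abstain()
```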

The proposed solution, then, is similar to tests (like the SAT) that include “negative [scoring] for wrong answers or partial credit for leaving questions blank to discourage blind guessing.” Similarly, OpenAI says model evaluations need to “penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty.”
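Such a scoring rule changes the calculus. As a hypothetical sketch (the 1/4-point penalty is the classic SAT's, not a value OpenAI proposes), the confidence threshold above which guessing pays falls out of the same expected-value arithmetic:

```python
def expected_score(p_correct: float, wrong_penalty: float = 0.25) -> float:
    """Expected score under SAT-style grading: +1 for a correct answer,
    -wrong_penalty for a wrong one, 0 for a blank."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

def should_guess(p_correct: float, wrong_penalty: float = 0.25) -> bool:
    """Guess only when the expected score beats abstaining, which scores 0."""
    return expected_score(p_correct, wrong_penalty) > 0.0

# With a 1/4-point penalty, guessing pays only above 20% confidence,
# which is exactly chance level on a five-choice question.
assert not should_guess(0.15)  # below the threshold: abstain
assert should_guess(0.30)      # above the threshold: guess
```

Raising `wrong_penalty` raises the threshold, which is the mechanism the researchers want: the more confidently wrong answers cost, the more often saying "I don't know" becomes the model's best move.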

And the researchers argue that it’s not enough to introduce “a few new uncertainty-aware tests on the side.” Instead, “the widely used, accuracy-based evals need to be updated so that their scoring discourages guessing.”

“If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” the researchers say.


