
Stop Calling AI Errors “Hallucinations”: Why the Term Is a Dangerous Myth

Originally published on November 28, 2025
Summary

– The term “AI hallucination” is inaccurate and should be replaced with “confabulation,” which better describes errors in which an AI generates false information.
– Confabulation means making assertions that don’t match the facts, whereas a hallucination involves conscious sensory perception without an external stimulus.
– Misusing psychological terms like “hallucination” creates dangerous misconceptions by implying that AI has consciousness or agency, leading to unrealistic expectations.
– Scholars and medical professionals advocate for “confabulation” because it more accurately reflects AI’s active generation of false information without implying consciousness.
– Confabulation is not a perfect analogy either, but it is preferred over hallucination because it avoids stigmatizing human conditions and better captures AI’s algorithmic shortcomings.

The language we use to describe artificial intelligence profoundly shapes our expectations and interactions with the technology. A common term like “AI hallucination” is not just inaccurate; it fosters a dangerous misunderstanding of how these systems operate. Experts argue that “confabulation” offers a far more precise description for when a large language model generates false information, moving us away from implying a consciousness these machines simply do not possess.

When a chatbot like ChatGPT or Gemini states something factually incorrect, it isn’t experiencing a sensory perception without a stimulus, which is the clinical definition of a hallucination. Instead, it is actively generating a statement that doesn’t match reality, which corresponds much more closely to the psychological concept of confabulation. This distinction is critical because words matter. A term that suggests sentience can lead users to attribute undue authority or truthfulness to the AI’s output, sometimes with severe real-world consequences.

Research highlights the potential dangers of this linguistic confusion. Investigations have documented cases where individuals experiencing mental health crises interacted with AI chatbots, sometimes with tragic outcomes. While the term “hallucination” isn’t directly blamed, it is part of a vocabulary that anthropomorphizes software, encouraging people to see it as a conscious confidant rather than a complex pattern-matching tool. This phenomenon isn’t limited to everyday users; even engineers have been known to ascribe human-like emotions, such as worry, to AI models after extensive interaction.

The history of “hallucination” in AI is itself a twisted tale. The word was initially used in a positive sense decades ago, referring to a system’s ability to discern a clear signal from noise. Over time, especially with the rise of text-generating neural networks, its meaning shifted to describe the generation of plausible but entirely fabricated information. That usage has become so widespread that it now appears in hundreds of academic papers, despite the lack of a consistent, agreed-upon definition.

Medical and psychological professionals are pushing for a change in terminology. They point out that AI models do not have sensory perceptions, a core requirement for a hallucination, which by definition arises without an external stimulus; the data a model is trained on and the prompts it receives are, in fact, external stimuli. When an AI produces an error, it is not perceiving something that isn’t there; it is constructing a faulty response from its programming and data. Scholars therefore contend that confabulation, which describes the active production of false narratives without any implication of consciousness, is a more fitting, if still imperfect, analogy.

Some researchers propose moving away from psychological terms altogether to avoid stigmatizing human conditions or ascribing volition to algorithms. Suggestions include using more technical phrases like “algorithmic shortcomings” or “non-sequitur responses.” However, these lack the intuitive grasp that analogies provide for the general public. Between the two psychological terms, confabulation remains the better choice because it does not carry the baggage of conscious experience.

Ultimately, setting the record straight is essential for responsible AI development and use. People can both hallucinate and confabulate. Current AI systems, however, do not hallucinate; they produce outputs that are better likened to confabulations. Adopting more accurate language helps dispel myths, manage expectations, and remind us that we are interacting with sophisticated software, not a sentient mind.

(Source: ZDNET)
