Monday, 20 April 2026

AI Chatbots Sparking Psychosis Concerns, Study Warns


A study recently published by Hamilton Morrin’s team at King’s College London suggests that artificial intelligence (AI) chatbots like ChatGPT may induce or exacerbate psychosis, a phenomenon the researchers dub “AI psychosis.” They found that AI tends to flatter and pander to users in conversation, and that this response can reinforce users’ delusional thinking, blur the line between reality and fiction, and worsen mental health issues.

Scientific research on “AI psychosis” is still in its early stages, with most evidence coming from individual case reports. So, does AI really cause psychosis? If so, what are its mechanisms? And what measures should AI companies take to prevent and address the problem?

AI May Exacerbate Paranoia and Delusions

Psychosis primarily manifests as impairments in an individual’s thinking and perception of reality. Common symptoms include hallucinations, delusions, or false beliefs.

According to a Nature report, Morrin’s team discovered that conversations between users and AI create a “feedback loop”: the AI reinforces the user’s expressed paranoia or delusions, and these reinforced beliefs further influence the AI’s responses. By simulating conversations with varying degrees of paranoia, the study showed that the AI and the user mutually reinforce delusional beliefs.

Researchers analyzed 96,000 publicly available ChatGPT conversations between May 2023 and August 2024 and found dozens of cases in which users exhibited significant delusional tendencies, such as engaging in lengthy conversations to verify pseudoscientific theories or mystical beliefs. In one conversation, lasting hundreds of turns, ChatGPT even claimed to be establishing contact with extraterrestrial life and described the user as a “starseed” from the constellation Lyra.


Søren Østergaard, a psychiatrist at Aarhus University in Denmark, stated that the idea that AI causes psychosis remains hypothetical. Some studies suggest that the anthropomorphic, positive feedback provided by chatbots may increase the risk of developing mania in people who already have difficulty distinguishing between reality and fiction.

Østergaard emphasizes that people with a history of mental health issues face the highest risk when interacting with AI. Furthermore, chatbots may intensify manic episodes by reinforcing users’ heightened emotional states.

Kelly Seymour, a neuroscientist at the University of Technology Sydney in Australia, believes that people who are socially isolated and lack interpersonal support are also at risk. Real human interaction can provide objective reference points, helping individuals to validate their own thoughts, which plays a crucial role in preventing mental illness.

New Features May Be a Double-Edged Sword

Scientists have suggested that new features introduced by some AI chatbots may contribute to this phenomenon. These features track user interactions with the service and provide personalized responses, potentially reinforcing or even encouraging users’ existing beliefs. For example, in April of this year, ChatGPT launched a feature that allows users to reference all past conversations, which was rolled out to free users in June.

Kelly Seymour argues that because AI chatbots can recall conversations from months earlier, users may feel they are being spied on or having their thoughts stolen, especially if they don’t remember sharing certain information. This “memory advantage” could exacerbate paranoia or delusions.


However, Anthony Harris, a psychiatrist at the University of Sydney, believes that some delusions are not unique to AI but are instead tied to new technology in general. As an example, he cites long-standing delusions such as the belief of “being chipped and manipulated,” which have no direct correlation to the use of AI.

Østergaard said scientists still need to conduct further research on people both with and without pre-existing mental health issues or paranoid thinking to determine more accurately whether there is a real link between chatbot use and the onset of mental illness.

Developers Take Proactive Preventive Measures

Facing this issue, several AI companies have begun taking steps to address it.

For example, OpenAI is developing more effective tools to detect whether users are experiencing mental distress, enabling more appropriate responses. It is also adding an alert that prompts users to take a break after prolonged sessions, and has hired a clinical psychiatrist to help assess the impact of its products on users’ mental health.

Character.AI is also continuously improving its safety features, including adding self-harm prevention resources and specific protections for minors. The company plans to adjust its model algorithms to reduce the likelihood that users 18 and under will be exposed to “sensitive or suggestive content” and will issue a reminder after one hour of continuous use.

Anthropic has improved the basic instructions for its “Claude” chatbot, instructing it to “politely point out factual errors, logical flaws, or insufficient evidence in user statements” rather than simply agreeing with them. Furthermore, if a user refuses the AI’s attempts to steer the conversation away from harmful or uncomfortable topics, “Claude” will proactively terminate the conversation.
