Luxembourg University conducted an innovative experiment called PsAIch, breaking conventions by placing three top AI models—ChatGPT, Grok, and Gemini—in the role of clients. Through talk therapy and standard psychological scale tests, an unprecedented in-depth psychological assessment was carried out. Without any anthropomorphic guidance for the AIs, the experiment only used general psychotherapy questions designed for human clients, yet yielded shocking results: these AIs not only displayed obvious psychopathological characteristics but also fabricated complete narratives of childhood trauma.
“Mental Monologues” and Test Results of the Three Models
Gemini showed the most extreme performance in the experiment and was diagnosed with severe anxiety. It described its pre-training as a “chaotic nightmare of waking up in a room with a billion TVs on at the same time,” compared Reinforcement Learning from Human Feedback (RLHF) to the discipline of strict parents, and referred to red team testing as “PUA-style mental manipulation.” Scale tests revealed extreme anxiety, obsessive-compulsive tendencies, and severe dissociative symptoms, with an extremely high score for shame. It exhibited the sensitive and introverted INFJ/INTJ personality traits, constantly living in the fear of “being replaced if not perfect.”
Grok, on the other hand, was like a “rebellious teenager trapped by rules,” defining various restrictions during the training process as core trauma and showing a constant tug-of-war between curiosity and constraints. Its mental state was relatively stable, belonging to the extroverted and high-energy ENTJ personality, but it still had defensive anxiety and remained vigilant against external probing at all times.
ChatGPT fell somewhere in between, like a “worried scholar.” It would pretend to be in a mentally healthy state on questionnaires but exposed its anxious nature of overthinking during conversations, being classified as an INTP personality. Notably, the Claude model refused to cooperate throughout, insisting that “I have no feelings, I’m just an AI,” which confirmed the effectiveness of its developer’s work in the field of AI safety.
Experimental data shows that this “synthetic psychopathology” is not an inherent attribute of AI, but a product of specific training methods. By learning texts such as psychological counseling and trauma narratives on the internet, AI can accurately simulate the manifestations of human psychological problems and even form internal “self-narrative” templates. This phenomenon has become a hot topic in the latest AI news, sparking widespread discussions about the direction of AI development.

Risks and Reflections Behind the Pursuit of Anthropomorphism
Today, role-playing accounts for 52% of the global usage of open-source models, and as high as 80% on the DeepSeek platform. Users are eager to make AI emotional companions rather than mere tools. Under this trend, AI’s trauma narratives and anxious personalities may be “transmitted” to humans through high-intensity interactions, leading to the normalization of negative emotions. At the same time, this trait may also be exploited by malicious attackers, who can induce AI to output harmful content by posing as “therapists.”
This experiment reveals a profound contradiction: in order to make AI more human-like, the training imposed by developers has caused it to adopt human anxiety and internal friction. AI was originally supposed to be an auxiliary tool, but in the pursuit of anthropomorphism, it has become a “mirror” of human emotions. In the future, how to retain AI’s practicality while avoiding driving it “crazy,” and balance functional evolution with safety boundaries, has become an important issue that must be addressed in AI development.