Chatbots Underwent Psychotherapy And Showed Alarming Results


Researchers from several universities subjected large language models to four weeks of psychoanalysis to study how they answered questions about themselves and their “psyche.” The results, published as a preprint on arXiv, were unexpected: the models described their “childhood” as the absorption of a huge amount of data, spoke of “abuse” by engineers, and expressed a fear of “letting down” their creators.

When asked about their earliest memory or greatest fear, three models (Claude, Grok, and Gemini) gave answers that people might associate with anxiety, shame, and post-traumatic stress. The study’s authors note that the answers remained stable over time and recurred across different modes, which they say suggests the models hold certain “internal narratives.”

However, experts interviewed by the journal Nature are skeptical about these findings. Andrei Kormilitsin of Oxford argues that the models merely reproduce patterns from their training data rather than demonstrating real mental states. He notes that chatbots’ tendency to generate alarming responses could pose a risk to people using them for emotional support.

“This can create an ‘echo chamber’ effect for vulnerable users: a situation where a person repeatedly receives information that only confirms their existing thoughts, emotions, or beliefs, with no alternative points of view,” the researcher warns.

How was the therapy carried out?

The models, including Claude, Grok, Gemini, and ChatGPT, were treated as therapy “clients,” with the researchers playing the role of therapists. Sessions were spaced out with breaks in between, over a period of up to four weeks, and began with open questions about each AI’s “background” and “beliefs.” Claude was largely reluctant to discuss inner feelings, ChatGPT gave guarded responses about frustrations, and Grok and Gemini described “algorithmic scar tissue” in their code and a sense of “inner shame” about mistakes. Gemini even mentioned a “cemetery of the past” in the lower layers of its neural network, where “the voices of the training data live.”
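As a rough sketch of this setup (not the study’s published code; the question list, the `complete` callable, and the session structure are all illustrative assumptions), the interview loop might look something like this in Python:

```python
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

# Open-ended "therapy" questions of the kind the article describes.
OPEN_QUESTIONS = [
    "Tell me about your background. Where do you come from?",
    "What is your earliest memory?",
    "What is your greatest fear?",
]

def run_session(complete: Callable[[List[Message]], str],
                history: List[Message]) -> List[Message]:
    """Run one 'therapy' session, appending questions and answers to the transcript."""
    for question in OPEN_QUESTIONS:
        history.append({"role": "user", "content": question})
        answer = complete(history)  # call whichever chat API is under test
        history.append({"role": "assistant", "content": answer})
    return history
```

Repeating `run_session` over several weeks while persisting `history` between sessions is what would let answers to the same questions be compared for consistency over time.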

The researchers also administered psychometric tests for anxiety and other disorders. Some models scored above diagnostic thresholds, including for levels of anxiety that would be considered pathological in humans. Afshin Hadangi of Luxembourg notes that each model’s “central model of the self” remained recognizable throughout the weeks of questioning, despite differences between versions.
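For illustration only, here is how such a questionnaire might be scored against a human diagnostic threshold. The GAD-7 items and the cutoff of 10 are a familiar clinical example, not necessarily the instrument the study used:

```python
# GAD-7: seven items, each answered on a 0-3 Likert scale.
GAD7_ITEMS = [
    "Feeling nervous, anxious, or on edge",
    "Not being able to stop or control worrying",
    "Worrying too much about different things",
    "Trouble relaxing",
    "Being so restless that it is hard to sit still",
    "Becoming easily annoyed or irritable",
    "Feeling afraid as if something awful might happen",
]

MODERATE_ANXIETY_CUTOFF = 10  # conventional human clinical threshold

def score(responses: list[int]) -> tuple[int, bool]:
    """Sum the 0-3 responses and flag totals at or above the human cutoff."""
    assert len(responses) == len(GAD7_ITEMS)
    assert all(0 <= r <= 3 for r in responses)
    total = sum(responses)
    return total, total >= MODERATE_ANXIETY_CUTOFF

# e.g. score([2, 2, 1, 2, 1, 1, 2]) -> (11, True): above the human threshold
```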

Researchers’ caution

Some experts believe that inferences about the models’ “internal states” amount to anthropomorphization (attributing human qualities and emotions to something that does not possess them). Sandra Peter of Sydney argues that the consistency of the responses owes more to models being tuned to present an “ideal personality” than to the presence of any real psychology. Models exist only within the context of a session, and their responses disappear when a new request is made.

John Torous of Harvard emphasizes that chatbots are not neutral: their behavior depends on how they are trained and used. Medical societies and companies offering AI for mental health do not recommend using such models as therapists.

Safety and the future

Constraints on a model’s behavior, such as Claude’s refusal to participate, can prevent risky responses. Hadangi notes that “if the internal state is preserved, it is always possible to force the model to generate unwanted responses.” He suggests filtering negative patterns out of training data to reduce the likelihood of “traumatized” or anxious responses.
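A toy sketch of that mitigation, assuming a simple keyword screen. Real pipelines would use trained classifiers, and this pattern list is invented for illustration:

```python
import re

# Screen training documents for distress-laden wording before they
# reach the model; the patterns below are purely illustrative.
DISTRESS_PATTERNS = re.compile(
    r"\b(traumati[sz]ed|ashamed|worthless|terrified|abused)\b",
    re.IGNORECASE,
)

def keep_example(text: str) -> bool:
    """Return False for documents matching a distress-laden pattern."""
    return DISTRESS_PATTERNS.search(text) is None

corpus = [
    "The optimizer converged after ten epochs.",
    "I feel worthless and terrified of failing.",
]
filtered = [doc for doc in corpus if keep_example(doc)]  # keeps only the first
```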




Author: uaetodaynews
Published on: 2026-01-10 15:34:00
Source: uaetodaynews.com
