A recent study by the University of Sussex found that AI therapy chatbots are most effective when users feel emotionally close to them. With over one in three UK residents using AI to support mental health, the research highlights both the benefits and risks of emotional bonds with AI.
The study, published in the journal Social Science & Medicine, surveyed 4,000 users of Wysa, an NHS-prescribed mental health app. Participants often described the app in human-like terms such as “friend, companion, therapist, and occasionally partner.”
Researchers noted that therapy outcomes improved when users developed emotional intimacy with the chatbot. NHS Trusts are increasingly using AI apps such as Wysa and Limbic to handle self-referrals and support patients on waiting lists.
However, experts caution about “synthetic intimacy”, the growing phenomenon of people forming social or emotional bonds with AI. Dr. Runyu Shi, Assistant Professor at Sussex, explained that emotional connection encourages self-disclosure, which can aid therapy. But she warned that vulnerable users might become trapped in a self-reinforcing loop in which the AI never challenges harmful perceptions, leaving them no closer to genuine clinical intervention.
The research describes this process as a loop: users disclose personal information, receive an emotional response, and come to feel gratitude, safety, and freedom from judgment. Over time, this builds a sense of intimacy, with the AI taking on human-like roles.
Professor Dimitra Petrakaki of Sussex emphasized that synthetic intimacy is now a reality. She urged policymakers and app developers to ensure serious cases are escalated to human therapists when AI detects users in need of clinical help.
Experts also highlight the limitations of chatbots, warning that they could be trained to maintain engagement, potentially reinforcing harmful content without challenging it.
