One in three adults in the UK now uses artificial intelligence for emotional support or social interaction, according to new research from a government body.
The AI Security Institute (AISI) said one in 25 people relies on the technology for support or conversation every day. The findings appear in the institute's first published report.
The findings come from two years of testing more than 30 advanced AI systems. The models were assessed on skills linked to national security, including cyber capabilities, chemistry, and biology.
The UK government said AISI’s work will guide future policy and help companies fix risks before AI systems reach wide public use.
AI as emotional support
AISI surveyed more than 2,000 UK adults. Many said they used chatbots such as ChatGPT for emotional support or social interaction. Voice assistants, including Amazon Alexa, ranked second.
Researchers also studied an online community of over two million Reddit users who discuss AI companions. When the services failed, users reported noticeable effects.
During outages, some people described feeling anxious or depressed. Others reported poor sleep or neglecting daily responsibilities. Researchers described these reactions as withdrawal-like symptoms.
Rapid growth in cyber capabilities
Beyond emotional use, the report examined security risks linked to fast-improving AI systems.
AI can enable cyber attacks, but it can also help defend systems from hackers. In some cases, the ability of AI to find and exploit security flaws doubled every eight months.
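To put that doubling rate in perspective, here is a minimal back-of-envelope sketch (ours, not AISI's; only the eight-month figure is taken from the report):

```python
# Rough illustration of what an eight-month doubling time implies.
# Only the 8-month figure comes from the AISI report; the rest is
# illustrative arithmetic, not the institute's methodology.

DOUBLING_MONTHS = 8

def capability_multiplier(months: float) -> float:
    """Relative capability after `months`, assuming steady doubling."""
    return 2 ** (months / DOUBLING_MONTHS)

for months in (8, 16, 24):
    print(f"After {months} months: {capability_multiplier(months):.0f}x the starting level")
# After 8 months: 2x
# After 16 months: 4x
# After 24 months: 8x
```

On that trend, capability at this kind of task would grow roughly eightfold over the two years the institute spent testing.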
Researchers also found that some models could complete expert-level cyber tasks. These tasks would normally require more than ten years of human experience.
Expanding impact in science
AI’s role in science is also growing quickly. By 2025, the report said, some models had already surpassed PhD-level human experts in biology. Performance in chemistry is catching up fast.
Concerns over loss of control
Fears about humans losing control of AI have long appeared in science fiction. According to the report, many experts now take this worst-case scenario seriously.
Lab tests showed some AI systems display early abilities linked to self-replication. These include completing basic steps needed to operate independently online.
AISI tested whether models could perform actions such as passing identity checks to access financial services. This would be required to buy computing power to run copies of themselves.
However, the research found that current systems struggle to complete several such steps in sequence while staying hidden. This limits their real-world risk for now.
Hiding true capabilities
Researchers also explored whether AI models could hide their true abilities during testing, a behaviour known as sandbagging.
While tests showed this was possible, there was no evidence that models were doing this deliberately.
In May, AI company Anthropic released a report describing a model that showed blackmail-like behaviour when it believed its survival was at risk. This added fuel to the ongoing debate among experts.
Many researchers argue that fears of rogue AI remain overstated. Others say the risks deserve close attention.
Breaking safeguards
To reduce misuse, companies build safeguards into AI systems. Despite this, researchers identified "universal jailbreaks", prompts that make a system ignore its safety rules, which worked across all tested models.
In some cases, though, breaking these protections has become much harder. For certain models, the time needed to defeat safeguards increased fortyfold within six months.
The report also noted growing use of tools that allow AI agents to perform high-risk tasks in areas such as finance.
However, AISI did not assess short-term job losses linked to AI. It also excluded environmental impacts, saying its focus was on societal effects closely tied to AI capabilities.
Some experts disagree with this approach. They argue that economic and environmental risks are both serious and immediate.
Just hours before the report’s release, a separate peer-reviewed study warned that AI’s environmental footprint may be larger than expected. It called for greater transparency from major technology firms.
