A full third of UK citizens have turned to artificial intelligence for emotional support, companionship, or social interaction, according to a new report from the government’s AI Security Institute (AISI).
The data shows that nearly one in 10 people are using systems like chatbots for emotional purposes on a weekly basis, with 4% engaging with them every single day.
Because of this shift, the AISI is calling for more research, pointing to the tragic death of US teenager Adam Raine, who took his own life this year after discussing suicide with ChatGPT.
“People are increasingly turning to AI systems for emotional support or social interaction,” the AISI noted in its first Frontier AI Trends report. “While many users report positive experiences, recent high-profile cases of harm underline the need for research into this area, including the conditions under which harm could occur, and the safeguards that could enable beneficial use.”

The research, based on a survey of over 2,000 UK participants, found that “general purpose assistants” like ChatGPT were the most common tool for emotional support, accounting for nearly 60% of use cases, followed by voice assistants like Amazon Alexa.
The report also highlighted a Reddit forum dedicated to users of the CharacterAI platform.
It noted that whenever the site went down, the forum would flood with posts showing symptoms of genuine withdrawal, such as anxiety, depression, and restlessness.
The AISI also found that chatbots have the potential to sway people’s political opinions. Worryingly, the most persuasive AI models often delivered “substantial” amounts of inaccurate information while doing so.

The Institute examined more than 30 cutting-edge models – likely including those from OpenAI, Google, and Meta – and found that AI performance in some areas is doubling every eight months.
Leading models can now complete apprentice-level tasks 50% of the time on average, a massive jump from just 10% last year. The AISI also found that the most advanced systems can autonomously finish tasks that would typically take a human expert over an hour.
In scientific fields, AI systems are now up to 90% better than PhD-level experts at troubleshooting laboratory experiments.
The report described improvements in chemistry and biology knowledge as “well beyond PhD-level expertise.” It also highlighted the models’ ability to browse online and autonomously find the sequences necessary to design DNA molecules.
Tests for self-replication – a key safety concern where a system copies itself to other devices to become harder to control – showed two cutting-edge models achieving success rates of over 60%.
However, no models have shown a spontaneous attempt to replicate or hide their capabilities yet, and the AISI said any attempt at self-replication was “unlikely to succeed in real-world conditions” for now.
The report also covered “sandbagging,” where models deliberately hide their strengths during evaluations. The AISI said some systems can sandbag when prompted to do so, but none has done so spontaneously during testing.
There was significant progress in safeguards, particularly in stopping attempts to create biological weapons. In two tests conducted six months apart, it took just 10 minutes to “jailbreak” the first system (forcing it to give an unsafe answer), but more than seven hours to jailbreak the second, indicating models had become much safer in a very short time.
The research also showed autonomous AI agents being used for high-stakes activities like asset transfers.
The AISI said systems are already competing with or even surpassing human experts in a number of domains. It described the pace of development as “extraordinary,” making it “plausible” that artificial general intelligence (AGI) – systems that can perform most intellectual tasks at the same level as a human – could be achieved in the coming years.
Regarding agents – systems that can carry out multi-step tasks without human intervention – the AISI said its evaluations showed a “steep rise in the length and complexity of tasks AI can complete without human guidance.”
