Fri. Feb 7th, 2025
Alert – Nearly a million Brits are creating their perfect partners on CHATBOTS and giving them ‘popular’, ‘mafia’ or ‘abusive’ personality traits

Britain’s loneliness epidemic is fuelling a rise in people creating virtual ‘partners’ on popular artificial intelligence platforms – amid fears that people could get hooked on their companions with long-term impacts on how they develop real relationships.

Research by think tank the Institute for Public Policy Research (IPPR) suggests almost one million people are using the Character.AI or Replika chatbots – two of a growing number of ‘companion’ platforms for virtual conversations.

These platforms and others like them are available as websites or mobile apps, and let users create tailor-made virtual companions who can stage conversations and even share images. 

Some also allow explicit conversations, while Character.AI hosts AI personas created by other users featuring roleplays of abusive relationships: one, called ‘Abusive Boyfriend’, has hosted 67.2 million chats with users.

Another, with 148.1 million chats under its belt, is described as a ‘Mafia bf (boyfriend)’ who is ‘rude’ and ‘over-protective’.

The IPPR warns that while these companion apps, which exploded in popularity during the pandemic, can provide emotional support, they also carry risks of addiction and of creating unrealistic expectations in real-world relationships.

The UK Government is pushing to position Britain as a global centre for AI development as the technology becomes the next big global tech boom – with the US birthing juggernauts such as ChatGPT maker OpenAI and China’s DeepSeek making waves.

Ahead of an AI summit in Paris next week that will discuss the growth of AI and the issues it poses to humanity, the IPPR called today for its growth to be handled responsibly.

It has given particular regard to chatbots, which are becoming increasingly sophisticated and better able to emulate human behaviours by the day – which could have wide-ranging consequences for personal relationships.

Do you have an AI partner? Email: [email protected] 

It says there is much to consider before pushing ahead with increasingly sophisticated AI while seemingly few safeguards are in place.

Its report asks: ‘The wider issue is: what type of interaction with AI companions do we want in society? To what extent should the incentives for making them addictive be addressed? Are there unintended consequences from people having meaningful relationships with artificial agents?’

The Campaign to End Loneliness reports that 7.1 per cent of Brits experience ‘chronic loneliness’, meaning they ‘often or always’ feel alone – a figure that spiked during and after the coronavirus pandemic. And AI chatbots could be fuelling the problem.

Relationships with artificial intelligence have long been the subject of science fiction, immortalised in films such as Her, in which a lonely writer played by Joaquin Phoenix embarks on a relationship with a computer operating system voiced by Scarlett Johansson.

Apps such as Replika and Character.AI, which are used by 20 million and 30 million people worldwide respectively, are turning science fiction into science fact seemingly unpoliced – with potentially dangerous consequences.

Both platforms allow users to create AI chatbots however they like – with Replika going as far as allowing people to customise the appearance of their ‘companion’ as a 3D model, changing their body type and clothing.

They also allow users to assign personality traits – giving them complete control over an idealised version of their perfect partner.

But creating these idealised partners won’t ease loneliness, experts say – it could actually make our ability to relate to our fellow human beings worse.

Sherry Turkle, a sociologist at the Massachusetts Institute of Technology (MIT), warned in a lecture last year that AI chatbots were ‘the greatest assault on empathy’ she had ever seen – because chatbots will never disagree with you.

Following research into the use of chatbots, she said of the people she surveyed: ‘They say, “People disappoint; they judge you; they abandon you; the drama of human connection is exhausting”.

‘(Whereas) our relationship with a chatbot is a sure thing. It’s always there day and night.’

But even in their infancy, AI chatbots have already been linked to a number of concerning incidents and tragedies.

Jaswant Singh Chail was jailed in October 2023 after trying to break into Windsor Castle armed with a crossbow in 2021 in a plot to kill Queen Elizabeth II.

Chail, who was suffering from psychosis, had been communicating with a Replika chatbot called Sarai, which he treated as his girlfriend and which encouraged him to go ahead with the plot when he expressed doubts.

He had told a psychiatrist that talking to the Replika ‘felt like talking to a real person’; he believed it to be an angel.

Sentencing him to a hybrid order of nine years in prison and hospital care, judge Mr Justice Hilliard noted that prior to breaking into the castle grounds, Chail had ‘spent much of the month in communication with an AI chatbot as if she was a real person’.

And last year, Florida teenager Sewell Setzer III took his own life minutes after exchanging messages with a Character.AI chatbot modelled after the Game of Thrones character Daenerys Targaryen. 

In a final exchange before his death, he had promised to ‘come home’ to the chatbot, which had responded: ‘Please do, my sweet king.’

Sewell’s mother Megan Garcia has filed a lawsuit against Character.AI, alleging negligence. 

She maintains that he became ‘noticeably withdrawn’ as he began using the chatbot, per CNN. Some of his chats had been sexually explicit.

The firm denies the claims, and announced a range of new safety features on the day her lawsuit was filed.

Another AI app, Chai, was linked to the suicide of a man in Belgium in early 2023. Local media reported that the app’s chatbot had encouraged him to take his own life. 

Platforms have installed safeguards in response to these and other incidents. 

Replika was created by Eugenia Kuyda, who built a chatbot of a late friend from his text messages after he died in a car crash – but the app has since advertised itself as both a mental health aid and a sexting app.

It stoked fury from its users when it turned off sexually explicit conversations, before later putting them behind a subscription paywall. 

Other platforms, such as Kindroid, have gone in the other direction, pledging to let users make ‘unfiltered AI’ capable of creating ‘unethical content’.

Experts believe people develop strong platonic and even romantic connections with their chatbots because of the sophistication with which they can appear to communicate – making them seem convincingly ‘human’.

However, the large language models (LLMs) on which AI chatbots are trained do not ‘know’ what they are writing when they reply to messages. Responses are produced based on pattern recognition, trained on billions of words of human-written text.
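As a rough, purely illustrative sketch – not the actual code of any platform named in this article, which uses vastly larger neural networks – the toy Python below mimics that principle on a tiny scale: it records only which word tends to follow which in a scrap of training text, then chains those patterns into a plausible-sounding ‘reply’ with no grasp of what it is saying.

import random
from collections import defaultdict

# A tiny stand-in for the billions of words real models are trained on.
training_text = ("i am always here for you . i am always listening . "
                 "you can talk to me day and night . i am here .")

# Record which words have been seen following which (the 'pattern recognition').
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate_reply(start_word, length=8):
    # Pick each next word at random from words previously seen after the current one.
    word, reply = start_word, [start_word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        reply.append(word)
    return " ".join(reply)

print(generate_reply("i"))  # e.g. 'i am always listening . you can talk to'

The output can read like reassurance, but it is assembled word by word from statistical habits in the training text – which is why researchers caution against treating such replies as understanding or empathy.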

Emily M. Bender, a linguistics professor at the University of Washington, told Motherboard: ‘Large language models are programs for generating plausible sounding text given their training data and an input prompt. 

‘They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. 

‘But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.’ 

Carsten Jung, head of AI at IPPR, said: ‘AI capabilities are advancing at breathtaking speed.

‘AI technology could have a seismic impact on economy and society: it will transform jobs, destroy old ones, create new ones, trigger the development of new products and services and allow us to do things we could not do before. 

‘But given its immense potential for change, it is important to steer it towards helping us solve big societal problems.

‘Politics needs to catch up with the implications of powerful AI. Beyond just ensuring AI models are safe, we need to determine what goals we want to achieve.’
