The mother of murdered toddler James Bulger has told of her agony after discovering TikTok ghouls have created AI videos of her son talking about his own murder.
James, two, was abducted, tortured and beaten to death by 10-year-olds Jon Venables and Robert Thompson in 1993 in a crime which shocked the nation.
The pair became the youngest ever to be convicted of murder after James’ body was found on a railway track two days later.
But mother Denise Fergus has told how sick individuals continue to take advantage of her grief three decades later by creating ‘disgusting’ AI videos.
The clips, shared on TikTok as well as other platforms including YouTube and Instagram, feature AI-generated digital clones of James speaking about the murder in the first person.
Other sick films show AI-generated depictions of James’ kidnapping, and even police interviews in which the killers deny any wrongdoing.
Some even depicted AI images of the toddler’s body left abandoned on the railway tracks.
Ms Fergus told the BBC the videos were ‘absolutely disgusting’ and that the creators behind them ‘don’t understand how much they’re hurting people’.
She added: ‘It plays on your mind. It’s something that you can’t get away from. When you see that image, it stays with you.’
Ms Fergus said TikTok has failed to respond to her requests to have the videos taken down, while the government warned that such videos are considered illegal under the Online Safety Act.
Similar content found on Instagram and YouTube has since been removed, the companies said. A TikTok spokesperson said: ‘We do not allow harmful AI-generated content on our platform and we proactively find 96% of content that breaks these rules before it is reported to us.’
A YouTube spokesperson said the platform’s guidelines ‘prohibit content that realistically simulates deceased individuals describing their death’.
The spokesperson said a channel called Hidden Stories had been terminated for ‘severe violations of this policy’.
Ms Fergus said she believes current legislation does not go far enough to protect people from harmful content online.
She added that she has a meeting with Justice Secretary Shabana Mahmood later today, in which she intends to raise the issue.
Ms Fergus said: ‘We go on social media and the person that’s no longer with us is there, talking to us. How sick is that?
‘It’s just corrupt. It’s just weird and it shouldn’t be done.’
Kym Morris, the chairwoman of the James Bulger Memorial Trust, told the BBC that the government could amend the Online Safety Act to include specific protections against harmful AI-generated content.
‘There must be clear definitions, accountability measures, and legal consequences for those who exploit this technology to cause harm – especially when it involves real victims and their families,’ Ms Morris said.
‘This is not about censorship – it’s about protecting dignity, truth, and the emotional wellbeing of those directly affected by horrific crimes.’
A government spokesperson told the BBC: ‘Using technology for these disturbing purposes is vile.
‘This government is taking robust action through delivery of the Online Safety Act, under which videos like this are considered illegal content where an offence is committed and should be swiftly removed by platforms.
‘Like parents across the country, we expect to see these laws help create a safer online world.
‘But we won’t hesitate to go further to protect our children; these laws are the foundation, not the limit, when it comes to children’s safety online.’
Technology Secretary Peter Kyle earlier this year said he believes the government will have to ‘legislate again’ on online safety.
New guidance set out by Ofcom this morning means social media and other internet platforms will be legally required to block children’s access to harmful content from July or face massive fines.
The regulator has published the final version of its Children’s Codes under the Online Safety Act, setting out what sites must do to follow the law and protect children online.
Under the Codes, any site which hosts pornography, or content which encourages self-harm, suicide or eating disorders must have robust age verification tools in place in order to protect children from accessing that content.
In addition, platforms will be required to configure their algorithms to filter out harmful content from children’s feeds and recommendations, ensuring they are not sent down a rabbit hole of harmful content.
The Codes also require sites to have easier reporting and complaints systems in place to help users flag harmful content more quickly, and sites themselves will be expected to remove harmful content swiftly.
Ofcom chief executive Dame Melanie Dawes said: ‘These changes are a reset for children online.
‘They will mean safer social media feeds with less harmful and dangerous content, protections from being contacted by strangers and effective age checks on adult content. Ofcom has been tasked with bringing about a safer generation of children online, and if companies fail to act they will face enforcement.’