The number of deepfake and AI-generated child abuse images online is doubling every six months, a National Crime Agency director has warned.
The next five years will see ‘significant increases’ in the creation of indecent images using artificial intelligence, Alex Murray said.
The images are generated by software which learns from photos and videos of real child abuse, Mr Murray, who is also the AI lead for the National Police Chiefs’ Council, added.
‘What we are seeing at the moment is on a six-month basis the amount of deepfake images of child abuse is doubling on the internet,’ he said.
‘So this means that we as police must move very fast in this space.’
Artificially generated indecent images are illegal in the same way as those involving real children.
And it is a misconception that software-generated abuse imagery does not affect real victims, Mr Murray said.
‘We have got to remember that models are trained on data, so for a model to be able to create child abuse imagery, it needs to be trained on real child abuse,’ he said.
‘So you will have videos of children being abused, the model will learn from that and produce imagery so there are real victims.
‘And then of course there’s the sociological question around whether ingesting that child abuse material could lead to contact offending.’
Mr Murray said it was not possible to know exactly how many such images were being created and published online, but it was many ‘thousands and thousands and thousands.’
As the technology improves it will become harder to distinguish AI-generated abuse images from the real thing and their creation will become more common, he said.
‘People using this sort of software at the moment are still quite niche but in fact it becomes very easy to use, so ease of entry, realism and availability are the three vectors which will probably increase,’ Mr Murray said.
Last month, a budding filmmaker who used AI to make indecent images of children was jailed for 18 years as footage emerged of him admitting to having a ‘warped’ mind.
Hugh Nelson said he was providing a ‘valuable service’ by taking ‘commissions’ from relatives and family friends of youngsters – charging them £80 to turn a real image of a child into a 3D ‘character’ being physically and sexually abused.
The 27-year-old was shown on camera confessing to detectives that his actions had been ‘absolutely grotesque’ after being confronted with evidence about his ‘sick’ deepfake factory.
Operating from his family home near Bolton, Nelson used AI technology to manipulate innocent photographs of real children as young as four into scenes of nudity, rape and torture – making around £5,000 but giving out others for free.
In August, Neil Darlington, who used AI to make indecent images of children in order to try to blackmail them, had his sentence trebled to three years.
The 52-year-old was sent fully clothed pictures of the girls. He then used AI to manipulate the images to make them sexually explicit before threatening to send them to the children’s family and friends.
However, Darlington was unaware that the young girls he was trying to blackmail were in fact undercover police officers.
Darlington was jailed for a year in June after pleading guilty to 10 offences at Stoke Crown Court including making indecent images of children and blackmailing two girls he believed were aged 11 and 14, who he had met in a chat room.
The Solicitor General referred Darlington’s sentence to the Court of Appeal, arguing that it was ‘unduly lenient’, and on Wednesday three judges increased the jail term to three years and placed him on the sex offenders register indefinitely.