Regulators must ‘act quickly’ to introduce safeguards protecting the electoral process from the threat posed by artificial intelligence, experts have urged.
The warning came as a study found that bogus photographs and video footage generated by AI could influence the coming general election in a string of sinister ways.
It concluded that so-called ‘deepfake’ imagery could be used to assassinate the character of politicians, spread hate, erode trust in democracy and create false endorsements.
Research by The Alan Turing Institute’s Centre for Emerging Technology and Security (Cetas) urged Ofcom and the Electoral Commission to address the use of AI to mislead the public, warning it was eroding trust in the integrity of elections.
It said the Electoral Commission and Ofcom should issue guidelines for political parties, and seek voluntary agreements with them, setting out how AI should be used in campaigning, and should require AI-generated election material to be clearly marked as such.
The research team warned that there was currently ‘no clear guidance’ on preventing AI from being used to create misleading content around elections.
Some social media platforms have already begun labelling AI-generated material in response to concerns about deepfakes and misinformation, and in the wake of a number of incidents of AI being used to create or alter images, audio or video of senior politicians.
In its study, Cetas said it had created a timeline of how AI could be used in the run-up to an election, suggesting it could be deployed to undermine the reputations of candidates, falsely claim that they have withdrawn, or shape voter attitudes on particular issues through disinformation.
The study also said misinformation around how, when or where to vote could be used to undermine the electoral process.
Sam Stockwell, research associate at the Alan Turing Institute and the study’s lead author, said: ‘With a general election just weeks away, political parties are already in the midst of a busy campaigning period.
‘Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information.
‘That’s why it’s so important for regulators to act quickly before it’s too late.’
Dr Alexander Babuta, director of Cetas, said: ‘Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.’
Separately, the Commons science committee chairman warned that AI watchdogs in the UK are ‘under-resourced’ in comparison to developers of the technology.
The Science, Innovation and Technology Committee said in a report into the governance of AI that £10 million announced by the Government in February to help Ofcom and other regulators respond to the growth of the technology was ‘clearly insufficient’.
It added that the next government should announce further financial support ‘commensurate to the scale of the task’, as well as ‘consider the benefits of a one-off or recurring industry levy’ to help regulators.
Outgoing committee chairman Greg Clark said he was ‘worried’ that UK regulators were ‘under-resourced compared to the finance that major developers can command’.