UK’s deepfake detection plan unlikely to work, says expert • The Register
The UK government claims it will develop a “world-first” framework to evaluate deepfake detection technologies as AI-generated content proliferates.
The Home Office is working with Microsoft, other tech corporations, and academics to assess methods for identifying harmful forgeries. It estimates eight million deepfakes were shared in 2025, up from half a million in 2023.
Nik Adams, Deputy Commissioner for City of London Police, called the framework “a strong and timely addition to the UK’s response to the rapidly evolving threat posed by AI and deepfake technologies.”
“By rigorously testing deepfake technologies against real-world threats and setting clear expectations for industry, this framework will significantly bolster law enforcement’s ability to stay ahead of offenders, protect victims and strengthen public confidence as these technologies continue to evolve.”
However, Dr Ilia Kolochenko, CEO at ImmuniWeb, a Swiss cybersecurity biz, said the plan “will quite unlikely make any systemic improvements in the near future.”
Kolochenko pointed to numerous open source tools and groups of experts that already exist to track and expose AI-generated content.
“Even if an AI fake is detected, the biggest question is what to do next,” he told The Register. “Reputable media and websites will likely take it down rapidly even without scientific proof that it is an AI fake.”
Clandestine or anonymous media are unlikely to be as cooperative.
“We need a systemic and global amendment of legislation – not just legally unenforceable code of conduct or best practices – to stop the surging harm of AI-created content,” Kolochenko added. “In sum, while this commendable action is a solid start, we are still very far from a final solution.”
The Register asked the Home Office for a time frame for the framework and the technology being used, but did not receive a response. Microsoft directed us to the Home Office’s statement. ®