UK Technology Companies and Child Safety Officials to Test AI's Capability to Create Exploitation Images
Technology companies and child protection agencies will be granted authority to evaluate whether artificial intelligence systems can generate child abuse material under new UK legislation.
Significant Rise in AI-Generated Illegal Material
The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material have risen sharply in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the changes, the government will allow designated AI developers and child protection organizations to inspect AI models – the underlying technology behind chatbots and image-generation tools – and verify that they have sufficient safeguards to prevent them from producing images of child exploitation.
"Ultimately about preventing exploitation before it happens," stated the minister for AI and online safety, noting: "Specialists, under strict conditions, can now detect the danger in AI models promptly."
Addressing Regulatory Obstacles
The amendments address the fact that creating and possessing CSAM is illegal, which means AI developers and others cannot generate such content as part of an evaluation regime. As a result, authorities previously had to wait until AI-generated CSAM was published online before they could act.
The legislation is designed to avert that problem by making it possible to stop the production of such material at its source.
Legal Framework
The government is introducing the changes as amendments to the crime and policing bill, which also bans possessing, producing or sharing AI models developed to create child sexual abuse material.
Practical Consequences
Recently, the minister toured the London base of a children's helpline and listened to a mock-up call to advisors featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with a sexually explicit AI-generated image of himself.
"When I learn about young people facing blackmail online, it is a cause of extreme anger in me and rightful concern amongst parents," he said.
Alarming Data
A leading online safety foundation said that reports of AI-generated exploitation material – each of which can refer to a web page containing numerous images – had risen significantly so far this year.
- Cases of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086
- Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to ensure AI products are safe before they are released," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have enabled so victims can be victimised all over again with just a few clicks, providing offenders the ability to create possibly endless amounts of advanced, photorealistic exploitative content," she added. "Content which further exploits victims' trauma, and makes children, particularly female children, more vulnerable both online and offline."
Support Session Information
The children's helpline also published details of support sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Employing AI to rate body size, physique and appearance
- Chatbots dissuading young people from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-faked images
Between April and September this year, Childline conducted 367 counselling interactions in which AI, chatbots and associated topics were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using AI chatbots for support and AI therapy applications.