UK Tech Firms and Child Protection Agencies to Examine AI's Capability to Create Exploitation Images
Tech firms and child safety organizations will receive permission to assess whether artificial intelligence systems can produce child exploitation images under recently introduced British laws.
Significant Increase in AI-Generated Harmful Content
The announcement came as a safety watchdog released findings showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the authorities will permit approved AI companies and child safety organizations to inspect AI models – the foundational technology for chatbots and visual AI tools – and ensure they have sufficient protective measures to prevent them from producing images of child sexual abuse.
"Fundamentally about stopping abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI models promptly."
Addressing Legal Obstacles
The amendments have been introduced because it is against the law to create and possess child sexual abuse material (CSAM), meaning that AI developers and other parties cannot create such images as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM had been uploaded online before they could deal with it.
The new law is designed to avert that issue by making it possible to halt the creation of such material at its source.
Legislative Framework
The government is introducing the changes as amendments to the criminal justice legislation, which will also implement a prohibition on possessing, creating or distributing AI systems developed to generate exploitative content.
Practical Consequences
This week, the official toured the London headquarters of Childline and listened to a mock-up call to counsellors involving a report of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit deepfake of themselves created using AI.
"When I hear about young people experiencing blackmail online, it is a source of intense anger in me and rightful concern amongst families," he stated.
Concerning Data
A leading internet monitoring foundation reported that cases of AI-generated exploitation material – counted as web pages, each of which may contain numerous images – had more than doubled so far this year.
Instances of category A material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.
- Female children were overwhelmingly victimized, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The law change could "represent a crucial step to ensure AI products are secure before they are released," stated the head of the online safety foundation.
"Artificial intelligence systems have enabled so survivors can be targeted repeatedly with just a few clicks, giving criminals the capability to make possibly limitless amounts of advanced, lifelike child sexual abuse material," she continued. "Material which additionally commodifies survivors' suffering, and makes young people, especially female children, more vulnerable on and off line."
Counselling Session Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related harms raised in the conversations include:
- Using AI to rate body size and appearance
- Chatbots discouraging young people from speaking to trusted adults about abuse
- Being bullied online with AI-generated content
- Digital extortion using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated topics were discussed, four times as many as in the equivalent period last year.
Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and of AI therapy applications.