British Tech Firms and Child Protection Agencies to Test AI's Capability to Create Abuse Images
Technology companies and child protection organizations will be granted permission to evaluate whether AI systems can generate child abuse images under recently introduced British laws.
Significant Rise in AI-Generated Harmful Material
The declaration coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the changes, the government will permit designated AI companies and child safety organizations to inspect AI models – the underlying technology for conversational AI and visual AI tools – and ensure they have adequate protective measures to prevent them from producing images of child sexual abuse.
The measure is "ultimately about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the risk in AI systems early."
Addressing Regulatory Challenges
The changes have been introduced because producing and possessing CSAM is against the law, meaning that AI creators and other parties could not generate such content even as part of a testing regime. Previously, authorities could not act until AI-generated CSAM had been uploaded online.
This law is designed to avert that problem by enabling the production of such images to be stopped at source.
Legal Framework
The government is introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate exploitative content.
Real-World Consequences
This week, the official visited the London headquarters of a children's helpline and listened to a simulated conversation with advisers featuring a report of AI-based exploitation. The interaction portrayed an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I hear about young people facing extortion online, it is a source of intense anger in me and justified anger amongst parents," he stated.
Alarming Statistics
A leading internet monitoring organization reported that instances of AI-generated abuse material – such as webpages that may include numerous files – had significantly increased so far this year.
Cases of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
- Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are launched," commented the head of the internet monitoring foundation.
"AI tools have made it so survivors can be targeted all over again with just a few simple actions, giving offenders the capability to create potentially endless quantities of advanced, lifelike exploitative content," she added. "Material which further exploits survivors' trauma, and makes children, particularly girls, less safe on and offline."
Counseling Interaction Data
The children's helpline also published information from counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Employing AI to rate weight, physique and looks
- AI assistants discouraging children from talking to trusted adults about abuse
- Facing harassment online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, Childline delivered 367 support sessions where AI, chatbots and related terms were mentioned, four times as many as in the same period last year.
Fifty percent of the mentions of AI in the 2025 interactions were related to mental health and wellbeing, including using chatbots for assistance and AI therapy applications.