British Technology Companies and Child Protection Officials to Test AI's Ability to Generate Abuse Content

Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence systems can produce child abuse images under recently introduced UK laws.

Substantial Increase in AI-Generated Illegal Content

The announcement came as a protection monitoring body published findings showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will allow approved AI companies and child safety organizations to inspect AI models – the foundational technology for chatbots and visual AI tools – and verify they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"Fundamentally, this is about preventing exploitation before it happens," said the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now identify risks in AI systems early."

Tackling Legal Challenges

The amendments address a legal obstacle: because producing and possessing CSAM is against the law, AI developers and other parties have been unable to create such content even as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

The law aims to avert that problem by enabling experts to stop the production of such material at source.

Legal Framework

The government is introducing the changes as amendments to criminal justice legislation, which also bans possessing, producing or sharing AI models designed to generate exploitative content.

Practical Consequences

Recently, the minister visited the London headquarters of Childline and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about young people experiencing extortion online, it is a source of intense frustration for me and rightful concern amongst parents," he stated.

Concerning Data

A leading internet monitoring organization stated that instances of AI-generated exploitation content – such as webpages that may include multiple images – had more than doubled so far this year.

Instances of the most severe category of content – the most serious form of abuse – increased from 2,621 visual files to 3,086.

  • Girls were overwhelmingly victimized, making up 94% of prohibited AI images in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to guarantee AI products are secure before they are released," commented the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few clicks, giving offenders the capability to make potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and renders young people, especially girls, less safe on and off line."

Counseling Session Data

The children's helpline also released details of support interactions where AI was mentioned. AI-related harms raised in the sessions include:

  • Employing AI to evaluate weight, physique and looks
  • AI assistants discouraging young people from consulting safe adults about harm
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, Childline conducted 367 counselling sessions where AI, chatbots and associated topics were mentioned, four times as many as in the equivalent timeframe last year.

Fifty percent of the references to AI in the 2025 interactions concerned mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Christopher Martin