Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence systems can produce child abuse images under recently introduced UK laws.
The announcement came as data from a child protection monitoring body showed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will allow approved AI companies and child safety organizations to inspect AI models – the foundational technology for chatbots and visual AI tools – and verify they have sufficient safeguards to prevent them from producing depictions of child exploitation.
"Fundamentally about preventing exploitation before it happens," declared the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now identify the risk in AI systems early."
The amendments are needed because it is illegal to produce and possess CSAM, which has meant that AI developers and other parties could not create such content even as part of a testing regime. Until now, officials have had to wait until AI-generated CSAM was uploaded online before addressing it.
The law aims to avert that problem by making it possible to stop the production of such material at source.
The government is introducing the changes as amendments to criminal justice legislation, which also bans possessing, producing or sharing AI models designed to generate exploitative content.
Recently, the minister visited the London headquarters of Childline and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about young people experiencing extortion online, it is a source of intense frustration in me and rightful concern amongst parents," he stated.
A leading internet monitoring organization said that reports of AI-generated exploitation content – where a single report can cover a webpage containing multiple images – had more than doubled so far this year.
Instances of the most severe category of material – depicting the most serious forms of abuse – increased from 2,621 image or video files to 3,086.
The law change could "constitute a vital step to guarantee AI products are secure before they are released," commented the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few clicks, giving offenders the capability to make potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and renders young people, especially girls, less safe on and off line."
The children's helpline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.