An alarming new study from the United Kingdom shines a light on the troubling intersection of artificial intelligence and child exploitation. Released by the Internet Watch Foundation, the report found that 2025 set a record for online child sexual abuse material, documenting a staggering 26,362 percent increase in photo-realistic AI-generated videos depicting the sexual abuse of children.
The increase is stark: just 13 such videos were identified in 2024, compared with 3,440 in 2025. The escalation is attributed to the growing availability of AI tools that let users create customized images and videos with ease. The report raises serious concerns about the implications of this technology: “This material can now be made at scale by criminals with minimal technical knowledge,” the Internet Watch Foundation noted. The impact is severe. Children whose likenesses are used in these videos face potential harm, and the normalization of sexual violence against minors continues unchecked.
The report also offered a breakdown of the nature of the material. Of the videos analyzed, 65 percent fell into Category A, which encompasses horrific acts such as penetration and sexual torture, while 30 percent were classified as Category B. The sheer volume and severity of these materials should raise alarm bells about the effectiveness of current safeguards.
Kerry Smith, Chief Executive of the Internet Watch Foundation, emphasized the urgent need for action, stating, “Governments around the world must ensure AI companies embed safety by design principles from the very beginning.” It is a call to accountability for the tech industry—a demand that companies take responsibility for the content their tools can generate. The use of these technologies should not come at the cost of safety for the most vulnerable in society.
In 2025, the Internet Watch Foundation responded to more than 300,000 reports of child sexual abuse material. This figure illustrates not only the increasing frequency of these offenses but also the vital role of organizations dedicated to combating this crisis. Such material is illegal in the United States as well as the United Kingdom, underscoring the urgent need for cross-border cooperation to address what is a global issue.
Concerns also extend to the capabilities of AI platforms produced by American companies. For example, Grok, the AI tool integrated into Elon Musk’s social media platform X, has the potential to create content that sexualizes minors. In response to these issues, X stated that it is updating Grok to prevent the generation of images of real people in revealing clothing and is implementing geographic restrictions where such content is prohibited.
The findings of this report serve as a grim reminder of the ongoing battle against child exploitation in the digital age. As AI technology continues to evolve, vigilance and responsibility in its development and use will be crucial in safeguarding children’s welfare.
"*" indicates required fields
