OpenAI has released a policy blueprint aimed at curbing the use of artificial intelligence in generating child sexual abuse material. The framework outlines a range of legal, operational, and technical measures intended to strengthen protections against AI-enabled exploitation. The company states that the proposal was developed with input from organizations working in child protection and online safety. It represents one of the more detailed industry-level responses to growing concerns about generative AI being misused to harm children.
Among the organizations consulted during the framework’s development were the National Center for Missing & Exploited Children (NCMEC) and the Attorney General Alliance and its AI task force. Michelle DeLaune, President and CEO of NCMEC, acknowledged that generative AI is accelerating online child sexual exploitation by lowering barriers and enabling new forms of harm. She also expressed encouragement that companies like OpenAI are working to design tools more responsibly, with safeguards incorporated from the outset.
The blueprint identifies several specific areas requiring action. These include updating existing laws to account for AI-generated or digitally altered child sexual abuse material, improving how online platforms report abuse signals and coordinate with law enforcement, and embedding protective safeguards directly into AI systems to prevent misuse. OpenAI emphasizes that no single measure is sufficient on its own and that a combined approach is necessary to address the scale of the problem.
The release comes amid broader concern from child safety advocates that generative AI systems capable of producing realistic imagery could be exploited to create synthetic or manipulated depictions of minors. In February, UNICEF urged governments worldwide to pass legislation criminalizing AI-generated child abuse material. Regulatory bodies in multiple jurisdictions have also begun taking action: in January, the European Commission launched a formal investigation into whether X, formerly known as Twitter, violated EU digital rules by allowing Grok, the AI model integrated into the platform, to generate illegal content. Regulators in the United Kingdom and Australia have opened similar investigations.
OpenAI acknowledges that legislation alone will not be enough to stop the spread of AI-generated abuse material, and calls for stronger industry-wide standards as AI capabilities continue to advance. The company says its framework is designed to interrupt exploitation attempts earlier, improve the quality of information passed to law enforcement, and reinforce accountability across the broader technology ecosystem. The goal, according to OpenAI, is to prevent harm before it occurs and ensure faster protection for children when risks are identified.
Originally reported by Decrypt.
