OpenAI Reports Surge in Child Exploitation Content

Summary
– OpenAI’s child exploitation reports to the NCMEC CyberTipline increased 80-fold in the first half of 2025 compared to the same period in 2024.
– The company attributes this spike to investments in review capacity, product growth, and new features allowing image uploads.
– Increased reports can reflect changes in a platform’s moderation systems or reporting criteria, not just a rise in harmful activity.
– The broader CyberTipline has seen a massive increase in reports involving generative AI, rising over 1,300% from 2023 to 2024.
– OpenAI reports all instances of child sexual abuse material, including uploads and requests, across its products like ChatGPT and API access.

OpenAI has reported a significant increase in child exploitation material on its platforms, submitting 80 times more incident reports to the National Center for Missing & Exploited Children (NCMEC) in early 2025 than in the same period the previous year. The NCMEC’s CyberTipline serves as the central reporting system for child sexual abuse material, and federal law mandates that technology companies file reports when they encounter such content. A rise in these statistics does not automatically signal a surge in criminal activity: it can also stem from improved detection systems, shifts in internal reporting criteria, or simply greater platform usage.
The raw numbers show a dramatic jump. For the first half of 2025, OpenAI submitted 75,027 reports related to 74,559 distinct pieces of content. In contrast, during the first half of 2024, the company filed only 947 reports concerning 3,252 content items. A company spokesperson linked the increase to substantial investments made in late 2024 to enhance review capabilities, which were necessary to manage rapid user growth. They also pointed to the introduction of new product features that accept image uploads and the overall rising popularity of their services as contributing factors.
Interpreting these figures requires nuance. A single piece of harmful content can generate multiple reports, and one report might reference several different items. To provide clarity, some platforms, including OpenAI, disclose both the total number of reports and the count of content pieces involved. The company states it reports all instances of CSAM to NCMEC, including both uploads and user requests. This reporting covers its main ChatGPT application, which allows file uploads and can generate text and images, as well as access to its models through developer APIs. The recent data does not include any material from the video-generation tool Sora, which launched after the reporting period.
This trend at OpenAI mirrors a broader industry pattern observed by the NCMEC. The center’s analysis indicates that reports involving generative AI content increased over 1,300% from 2023 to 2024 across all companies using the CyberTipline. While other major AI developers such as Google also publish their NCMEC report statistics, they typically do not break down what portion is specifically linked to AI-generated material. The center’s full-year data for 2025 is not yet available, so the full scale of the current year’s shift remains to be quantified.
(Source: Wired)