A recent investigation by AI Forensics, a European non-profit research institute, has uncovered a massive, organized network of abuse operating on Telegram. The report reveals an “ecosystem of abuse at scale,” in which roughly 52,000 users across Italy and Spain are actively distributing nonconsensual sexual material and child sexual abuse material.
By analyzing 2.8 million messages across 16 groups over a six-week period, researchers identified a sophisticated, cross-platform pipeline used to exploit victims and monetize illegal content.
The Anatomy of the Abuse Network
The study highlights a disturbing pattern in how this content is sourced, distributed, and sold. The cycle typically follows a multi-platform trajectory:
- Sourcing: Perpetrators often harvest original material from private exchanges on Instagram and WhatsApp.
- Distribution: Telegram serves as the central hub for these large-scale groups. Content is often highly targeted; women featured in videos are frequently named, tagged, and even geolocated via shared profile links.
- Recruitment: Platforms like Reddit act as “gateways,” providing links that direct users toward the private, paid Telegram channels.
- Redistribution: Once leaked, the content often migrates back to mainstream platforms like TikTok and Instagram.
A Monetized Threat
The abuse is not merely social; it is a commercial enterprise. Perpetrators monetize their archives through:
- One-time access fees of up to €50 for full archives.
- Monthly subscriptions costing approximately €5.
- “Nudifying” bots, which use AI to strip clothing from images, allowing users to generate new nonconsensual content with unprecedented speed and ease.
The Scale of the Problem
The reach of these networks is vast, affecting approximately 52,000 people (27,000 in Italy and 25,000 in Spain). The cross-border reach of these networks suggests that digital sexual violence is not a localized issue but a structural European crisis that ignores national boundaries.
The findings arrive at a critical time for digital regulation. Recent data from the European Institute for Gender Equality shows that one in three women in the EU has experienced sexual violence since age 15, a figure that includes the rising tide of cyberviolence.
The Failure of Moderation
A central finding of the report is the inadequacy of Telegram’s current moderation efforts. While the platform reportedly shuts down abusive groups, researchers observed that these groups are often re-established under identical names within hours.
This “whack-a-mole” dynamic raises serious questions about the platform’s ability to police its own ecosystem. AI Forensics argues that Telegram’s Premium subscription model may inadvertently assist perpetrators by providing a streamlined way to monetize illegal content.
Proposed Regulatory Solutions
To combat this, the report suggests several urgent interventions:
1. VLOP Designation: The European Commission should designate Telegram as a Very Large Online Platform (VLOP) under the Digital Services Act (DSA). This would force the platform to undergo stricter risk assessments and provide transparency regarding its algorithms.
2. Stricter AI Regulation: There is a call to expand the EU AI Act to include specific provisions that make the removal of nonconsensual and child sexual abuse material (CSAM) more efficient.
3. Improved Reporting: Enhanced reporting mechanisms are needed to ensure that when groups are flagged, they cannot simply reappear immediately.
“This is a structural problem that is European in scope and demands a European response.” — AI Forensics
Conclusion
The investigation exposes a highly organized, profitable, and borderless network of digital abuse that exploits both AI technology and platform loopholes. Addressing this requires moving beyond simple content deletion toward systemic regulatory oversight of how large messaging platforms operate within the EU.