Fiona Nanna, ForeMedia News

5 minute read. Updated 7:02 AM GMT, Tuesday, 13 August 2024

In a groundbreaking study published by Anglia Ruskin University, researchers have revealed a disturbing trend in the misuse of artificial intelligence (AI) on the dark web. The study, conducted by Dr. Deanna Davy and Professor Sam Lundrigan, uncovers a clear and growing demand among online offenders for AI-generated images of child sexual abuse. This alarming research highlights the pressing need for enhanced cybersecurity measures and a deeper understanding of how the technology is being exploited.

Over the past year, Dr. Davy and Professor Lundrigan analyzed discussions and activities within various dark web forums. Their research indicates that members of these clandestine communities have increasingly been focusing on the creation of AI-generated child sexual abuse material. This shift is driven by offenders who are actively seeking out guides, videos, and advice on how to produce such illicit content using advanced AI tools.

The study reveals that forum members are not only sharing knowledge on how to create these disturbing images but are also using existing non-AI content as a reference to refine their methods. Some offenders even refer to those producing AI imagery as "artists," underscoring a troubling normalization of the activity. There is also a growing expectation among these users that technological advances will make such material easier to create, further escalating the issue.

Dr. Davy emphasized the severity of the problem, describing AI-produced child sexual abuse material as a "rapidly growing problem." She highlighted the need for a comprehensive understanding of how these images are created and distributed, and of the impact they have on offender behavior. Dr. Davy also challenged the misconception that AI-generated images are "victimless," pointing out that many offenders source real images of children to manipulate into new content. The study further found a concerning escalation in offenders' demands for more explicit material, moving from "softcore" to "hardcore" imagery.

The study sheds light on the evolving nature of online offending and the critical need for increased vigilance and intervention strategies. As AI technology continues to advance, understanding its misuse and developing effective countermeasures become increasingly vital to protecting vulnerable populations.