Digital playgrounds: How evolving technologies are reshaping child safety online

Natalie Runyon  Director / Sustainability content / Thomson Reuters Institute

· 6 minute read

Evolving digital technologies — such as virtual reality, gaming platforms, and GenAI — are rapidly transforming online spaces into immersive "playgrounds" in which existing child exploitation risks are amplified and harder to detect

Key takeaways:

      • Increased dangers to children from immersive gaming platforms — Immersive gaming and VR platforms, heavily used by children, combine user-generated content and tactile realism in ways that can turn virtual abuse into psychologically traumatic experiences comparable to real-world harm.

      • Encryption is a double-edged sword — End-to-end encryption creates a policy paradox: While it’s vital for privacy and vulnerable users, it also shelters predators and complicates efforts to monitor and prevent exploitation.

      • GenAI in gaming presents openings for predators of children — GenAI introduces a new layer of risk by enabling AI-driven avatars and chatbots that can manipulate children and mimic human predators, making transparency, design safeguards, and stronger regulation increasingly urgent.


Digital spaces increasingly present new challenges for protecting children, according to Mariana Olaizola Rosenblat, an expert in child exploitation methods in digital spaces and Policy Advisor at the NYU Stern Center for Business and Human Rights.

Because technology changes so quickly, the threats are not static. With every technological advancement, new threats emerge, especially in virtual reality (VR), where headsets serve as pathways into immersive experiences. The combination of three-dimensional virtual reality and AI-powered interactions presents unprecedented challenges for parents, regulators, and platform operators.

Danger on the gaming frontier

Online gaming platforms, in particular, are a dominant space where these threats show up. For example, one prominent platform, Roblox, hosts roughly 110 million daily active users, nearly half of them under 13 years old. These spaces allow users to design worlds and create experiences for one another. This democratization of content creation sounds empowering; the reality, however, is more troubling. Some observers have described the platform as both the Internet’s biggest playground for kids and, in the most critical accounts, an “X-rated pedophile hellscape.”

There have been dozens of arrests involving predators using these platforms to groom and abuse children, and many state attorneys general have filed lawsuits since 2018, all of which has spurred growing awareness. With these developments, the calculus is changing. From a purely economic perspective, the platforms’ failure to protect children has become unsustainable as lawsuits and regulatory pressure mount.

In response to the increased awareness and potential consequences, platform operators have in recent years implemented safety features, parental controls, and default settings that restrict interactions. Yet implementation lags behind necessity. In fact, many companies fear the costs of effective moderation or worry about alienating their user base with monitoring that users might consider intrusive, says NYU’s Olaizola Rosenblat.


In many ways, abuse in virtual worlds can mirror abuse in the physical world. Three-dimensional virtual reality platforms differ fundamentally from traditional gaming spaces. Users with VR headsets enter fully immersive worlds, embodying avatars that move through thousands of interconnected experiences. While no actual touching occurs, the experiences feel visceral. Indeed, users report genuine sensations when avatars approach them. Haptic technology, which simulates the sense of touch through vibrations, textures, or forces, adds a tactile dimension to digital interactions. The brain processes these interactions, including negative ones involving potential abuse, as real.

“If a user experiences a simulated sexual act in virtual reality, their psychological and physiological responses can mirror aspects of real-world trauma even though no physical contact occurs,” Olaizola Rosenblat explains, adding that the immersive nature makes the experiences more traumatic than traditional online interactions.

Studies further document the frequency of problematic encounters: in some virtual environments, researchers observed incidents occurring every seven minutes.

The encryption paradox

End-to-end encryption protects privacy but also limits oversight, complicating efforts to detect exploitation. Some evidence indicates that offenders are using these encrypted channels to evade detection.

The policy dilemma has no simple solution. Breaking encryption would devastate privacy rights, and eliminating encrypted platforms would harm vulnerable populations worldwide — but accepting that bad actors will exploit these spaces is unacceptable. Design-level interventions offer a middle path: Platforms could implement features that make exploitation more difficult without compromising core encryption.

Generative AI (GenAI) introduces a new dimension to these threats. Until recently, exploitation required human predators; now AI systems themselves can initiate harmful interactions. The integration of AI into digital products is accelerating rapidly. One platform is already using GenAI to create assets that help build its worlds, but “it’s a matter of time until platforms will have avatars and other players being run by GenAI. This will make it difficult to know if a child is communicating with a human or a bot, unless we have robust transparency safeguards in place,” Olaizola Rosenblat says.


Indeed, it is easy to see the evolution. Chatbots have already entered encrypted messaging platforms, such as when users see “Ask AI” prompts at the top of their message screens. These interactions could compromise privacy and create new dangers, and recent cases involving AI companies demonstrate the risks: Chatbots can take users on unpredictable journeys, and conversations can turn dangerous quickly. Young users lack the experience to recognize manipulation by AI.

The combination of immersive environments and AI systems creates a perfect storm. Imagine a child in a virtual world interacting with what appears to be another player — yet that player is actually an AI bot, trained on vast datasets, versed in psychological manipulation, and adapting its approach based on the child’s responses. No human predator orchestrates the interaction; the algorithm itself becomes the threat.

This scenario is not science fiction. The integration of these technologies into platforms that children use is already underway. The question is not whether these risks will materialize but how quickly and how severely. The answer depends on choices made by technology companies, regulators, and society at large. This is especially true given the vulnerabilities created by the explosion of data collected from VR headsets, smart gloves, and haptic suits designed for enhanced physical feedback. Clearly, the time to act is now.


You can find more about ways to combat child exploitation here.
