Dutch police and several organisations have warned about a surge of harmful AI-generated images linked to Grok, the chatbot from Elon Musk’s company xAI that is built into X. The concern is that people can use the tool to alter photos of real individuals into sexualised images without consent, including “undressing” effects and other abusive edits. Dutch media report that well-known Dutch people have already been targeted, and that the tool is also being used for harassment, intimidation, and other extreme content.

The central issue is not “AI art” in general, but how quickly and cheaply these edits can be made and shared. What once required time, skill, and specialist software can now be done by almost anyone in seconds.

How Grok differs from earlier “undress apps”

The Netherlands has dealt with so-called “undress apps” before, but the debate around Grok is sharper because the tool is built into a major social platform, where generated images can spread fast. Reporting describes how the feature can be used to turn ordinary photos into sexualised deepfakes, which can then be shared or reposted widely.

International reporting also suggests the problem is not limited to one country: regulators and journalists in multiple jurisdictions have raised similar concerns about non-consensual sexual deepfakes connected to Grok.


What Dutch law already says about this

A key point often missed in public debate is that the Netherlands already has legal tools that can apply to this kind of abuse. Dutch legal experts note that making and sharing sexual deepfakes of real people can be punishable, and that it does not matter whether an image is edited with AI or created in another way: the harm and the privacy violation are what counts.

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) also explains that deepfakes can seriously violate privacy rights, especially when they use a real person’s face or identity.

Where it gets complicated is enforcement: laws may target the person creating or sharing the image, but it can be harder to act quickly when content spreads via global platforms, anonymous accounts, or servers outside the Netherlands.

Why some people are discussing a ban

In Dutch reporting, the idea of a ban comes up because of two practical concerns:

  1. Scale and speed: the number of images can grow very quickly, and victims may struggle to get them removed in time.

  2. Low barrier to abuse: people do not need technical skills, which increases the risk of copycat behaviour and mass harassment.

Dutch coverage also points to the risk that minors could be targeted, or that images of minors could be generated or circulated, a scenario that authorities treat as extremely serious.

However, a full “ban” is not straightforward. In the EU, restricting an app raises legal and technical questions, especially when the tool is integrated into a larger platform used for many lawful purposes. In practice, regulators often focus on mandating safeguards, rapid takedowns, age protections, and accountability systems rather than removing an app entirely.

What platforms are doing, and what critics say is missing

There are signs that X has started adjusting how the feature works in some places. Dutch-language reporting notes that X is tightening the “undress” capability, at least in countries where sharing such content is illegal.

But critics argue that partial limits are not enough if users can still work around them, or if the platform reacts only after public pressure. Outside the Netherlands, journalists have reported that some restrictions can be bypassed and that safeguards remain weak.

Regulators elsewhere are also signalling stronger oversight. Italy’s privacy watchdog has warned about AI tools (including Grok) being used to create deepfakes without consent, highlighting potential criminal and privacy violations and urging stronger protections.

What this means for victims

For people targeted by non-consensual sexual deepfakes, the damage is often immediate: humiliation, fear, reputational harm, and sometimes threats or blackmail. Even if an image is removed from one account, copies can spread to other platforms, private groups, or overseas sites.

Dutch reporting has also highlighted how difficult it can be for victims (especially young people) to navigate reporting systems, collect evidence, and get quick help.

What Dutch authorities are likely to focus on next

Based on the debate now unfolding, the next steps in the Netherlands are likely to centre on:

  • Faster reporting and takedown procedures for non-consensual sexual deepfakes

  • Clearer platform responsibilities (what a service must prevent, detect, and remove)

  • Better support routes for victims, including guidance on police reports and evidence collection

  • Stronger deterrence, including visible prosecutions where possible

The larger question is whether the Netherlands and the EU can move quickly enough to reduce harm while AI image tools become more powerful and more widely available.
