The EU has now singled out X – and this time it isn't about politics, disinformation, or a vague argument about freedom of speech.
It's about pornography: specifically, the sexually explicit images that can be created using Grok, the artificial intelligence tied to Elon Musk's platform, and whether the tool was used to create "digital undressing" content.
This is the kind of stuff that makes your stomach churn as you read it, because the harm isn't abstract. It's targeted, personal, and in some cases possibly illegal.
And then there's the mood. This isn't the EU being melodramatic. This is the EU saying: 'Enough.'
Regulators are worried about how fast this kind of content spreads online, but also about the fact that once a fake image appears on the internet, it doesn't disappear.
The damage is done even if the platform removes it, even if the account is deleted.
Now here's the kicker. People act surprised when artificial intelligence gets used for the worst purposes. But let's be honest: are we really surprised?
You hand a powerful image-generation tool to millions of people, and the internet does what it always does: grabs the shiny new toy and finds a way to hurt someone with it.
That's why this investigation isn't just about "the EU being angry at a chatbot." It's a Digital Services Act case, and the DSA essentially orders large platforms to behave like responsible adults.
The core question is whether X took a reasonable approach to risk assessment and put sufficient safety guardrails in place. Not after the damage. Before.
X has apparently taken some steps in response, such as restricting certain features and tightening controls (for example, placing some image-generation features behind a paywall).
It's… something, I guess. But if you're the person whose image was altered and spread, it probably doesn't feel like a win. It's like locking the front door only after your house has been robbed.
Here's another inconvenient fact: today's platforms don't just "host content." They amplify it. They recommend it. They push it into people's feeds.
So the EU isn't just interested in the explicit images Grok produced – it's interested in whether X's systems made that content travel faster and farther than it ever should have.
The scary thing is that this will soon become the new normal.
AI-generated images aren't going anywhere. In fact, the technology is only getting better, faster, cheaper, and more realistic.
Which means the ugly uses will increase too. Today it's Grok. Tomorrow, another model, another platform, another set of victims.
And it's not just celebrities anymore; the targets are classmates, girlfriends, exes, and random women on the internet who posted one selfie in 2011 and now rue the day they ever existed online.
That's why the EU investigation matters. Not because it's fun to watch Big Tech sweat (although, OK, that part is satisfying), but because it's one of the first high-profile tests of whether governments can actually force platforms to treat AI-driven harms as a real emergency rather than a side issue.
And if X fails this test? Expect regulators to get more aggressive across the board, and the next platform in their sights may not get as much of a chance.