UK regulator keeps X under pressure

Britain has no intention of letting this one go. Even as other investigations quietly slide into bureaucratic limbo, one thing remains unchanged: the British media watchdog said on Thursday that it will continue to investigate X over the spread of fake AI-generated images, despite the platform's insistence that it is cracking down on harmful content.

At the center of the dispute are deepfake images – often of a sexual nature, and fabricated by definition – that spread across X. The regulator's fear is by no means hypothetical.

Reputations can be ruined in minutes by photos like these, and once they appear, scrubbing them from public view is an almost impossible task.

Officials say they need to know whether X's systems actually prevent this material from appearing or merely react after the damage is done.

And that's a good question, right? We've heard the promises before. The broader worry – that AI is becoming a self-propelled image-generating machine – has prompted similar probes elsewhere, with Germany and Japan both starting to examine Musk's Grok chatbot over the same kind of image threats.

What's fascinating – and maybe even a bit ironic – is that X's owner, Elon Musk, has long seen the platform as a defender of free speech.

But regulators don't treat free speech as an abstraction; they have to weigh it against harm.

When artificial intelligence generates fake pornography of real people – overwhelmingly women – it is no longer a philosophical debate but a public-safety issue.

Meanwhile, other countries are already acting on this logic.

For example, Malaysia recently cut off access to Grok entirely after explicit AI-generated images emerged, sending a shudder through the tech community.

The UK investigation also comes at a time when regulators everywhere are taking a more hands-on role in governing artificial intelligence.

Europe is pushing further still, introducing sweeping regulations aimed at holding platforms accountable for how they deploy and manage AI systems.

The way forward looks fairly straightforward, with the EU's groundbreaking AI legislation increasingly presented as a template for the rest of the world.

Here's my hot take, for what it's worth: this investigation isn't really about X in isolation. The real question is whether tech companies can keep demanding trust while shipping tools that can be misused at scale.

The UK regulator appears to be saying politely but firmly: “Show us it works – or we'll keep looking.”

And honestly, I think it's overdue. Deepfakes are no longer just a future threat. They're here, they're a mess, and regulators are finally starting to act like it.
