One evening I was scrolling through my feed when I came across a short clip of a friend speaking fluent Japanese at an airport.
The only problem? My friend doesn't know a word of Japanese.
Then I realized it wasn't him at all – it was artificial intelligence. More specifically, it looked suspiciously like something made with Sora, the new video app that has caused a storm.
According to recent reports, Sora is already becoming a dream tool for fraudsters. The app can generate incredibly realistic videos and, more worryingly, its output can be stripped of the watermark that usually marks content as AI-generated.
Experts warn this opens the door to deepfakes, disinformation and impersonation at levels we've never seen before.
And honestly, seeing how quickly these tools are developing, it's hard not to feel a little concerned.
What's amazing is how Sora's “cameo” feature allows people to upload their faces to appear in AI videos.
Sounds fun – until you realize that technically, someone could use your image in a fake news clip or compromising scene before you even know about it.
Reports have shown that users have already seen themselves doing or saying things they have never done, leaving them confused, angry, and in some cases publicly embarrassed.
While OpenAI insists it's working on new safeguards, such as letting users control how their digital doppelgängers appear, those so-called "guardrails" already appear to be slipping.
Some users have already spotted violent and racist imagery created with the app, which suggests the filters aren't catching everything they should.
Critics say this isn't about one company – it's about a broader issue of how quickly we're normalizing synthetic media.
Still, there are signs of progress. OpenAI is reportedly testing more stringent settings, giving people more control over how their AI is used.
In some cases, users can even block their likeness from appearing in political or vulgar content, and Sora has reportedly added new identity checks. This is certainly a step forward, but whether it will be enough to deter misuse remains to be seen.
The bigger question is what happens when the line between reality and fiction completely blurs.
As one tech columnist put it, Sora makes it almost impossible to tell what's real anymore: it's not just a creative revolution, it's a crisis of credibility.
Imagine a future where every video can be questioned, every piece of testimony can be dismissed as "AI," and every scam looks legitimate enough to fool your own mother.
In my opinion, we are in the midst of a collapse of trust in digital media. The answer is not to ban these tools; it's to outsmart them.
We need stronger detection technology, genuinely effective transparency laws, and a bit of old-fashioned skepticism every time we hit play.
Whether it's Sora or the next brilliant AI application that comes after it, we'll need sharper eyes – and thicker skin – to tell what's real in a world that's learning to fake everything.