A terrifying call. A frantic 911 call. Police racing to stop what they thought was a kidnapping – only to find out it was all a hoax.
Such was the case recently in Lawrence, Kansas, where a woman checked her voicemail and was confronted by a voice eerily similar to her mother's, claiming she was in trouble.
The voice was generated by artificial intelligence – entirely fake. And suddenly this wasn't the plot of a crime novel – this was real life.
The voice on the other end “sounded exactly like her mother,” police say, with matching tone, intonation and even a heightened emotional state.
The whole thing suggests that fraudsters are harvesting publicly available audio (perhaps from social media posts or voicemail greetings), feeding it into an AI voice-cloning tool, and letting the panic do the rest.
So the woman called 911; police traced the number and stopped a vehicle, but it turned out there was no kidnapping – just a fabricated threat designed to deceive the senses.
This isn't the first time something like this has happened. With just a snippet of audio, today's AI can reproduce the measured tones of Walter Cronkite or, say, Barack Obama – whether or not the former president ever said anything like what you're hearing. Deepfakes are being used to manipulate people's actions in new and compelling ways.
A recent report by a security company found that in about 70 percent of cases, people have difficulty distinguishing a cloned voice from a real voice.
And it's not just one-off pranks and petty scams. Scammers use these tools to impersonate public officials, trick victims into transferring huge sums of money, or pose as friends and family members in emotionally charged situations.
The result: a new breed of fraud that is harder to spot and easier to pull off than anything we've seen before.
The tragedy is how easily trust becomes a weapon. When your ear – and your emotional response – buys into what it hears, even the most basic suspicions can evaporate. Victims often don't realize the call was a fake until it's far too late.
So what can you do if you get a call that seems “too real”? Experts suggest small but crucial safeguards: agreeing on a “family safe word” in advance, hanging up and calling your loved one back on a known number rather than the one that called you, or asking questions only the real person would know.
Yes, it's old-fashioned phone hygiene, but in the age of artificial intelligence that can recreate tone of voice, laughter, and even distress – it might just be the ticket to safety.
The Lawrence case in particular is a wake-up call. As artificial intelligence learns to imitate our voices, scams are becoming much, much worse.
It's no longer just about fake emails and clicking phishing links – now it's about listening to your mother's voice on the phone and wanting to believe with all your being that nothing terrible has happened.
It's chilling. And that means we all need to stay a few steps ahead – maintaining skepticism, verification, and a healthy dose of disbelief.