Late Dr. Michael Mosley's likeness used in AI deepfake health scams

Trust can evaporate in an instant when technology turns deceptive. In the latest twist from the wild world of AI, fraudsters are using deepfake videos of the late Dr. Michael Mosley – once a trusted face in health broadcasting – to hawk supplements such as ashwagandha and beetroot gummies.

These clips circulate on social media, showing Mosley passionately pitching false claims to viewers about menopause, inflammation and other health conditions – none of which he ever endorsed.

When familiar faces sell fiction

Scrolling Instagram or TikTok, you might stumble on one of these videos and think: “Wait – is that Mosley?” And you'd be right… in a sense. These AI creations are built from clips of his well-known podcasts and appearances, engineered to imitate his tone, expressions and inflections.

It's incredibly convincing until you stop to think: hold on – he died last year.
A researcher at the Turing Institute has warned that progress is happening so quickly that it will soon be almost impossible to identify fake content by sight alone.

The fallout: health misinformation takes hold

Here's where things get sticky. These deepfakes are not harmless illusions. They push unverified claims – like beetroot gummies treating arterial disease or moringa balancing hormones – that are dangerously detached from reality.

Dietitians have warned that such sensational content seriously undermines public understanding of nutrition. Supplements are not a shortcut, and exaggerations like these breed confusion, not wellness. The UK medicines regulator, the MHRA, is reviewing these claims, while public health experts continue to urge people to rely on trusted sources – think the NHS and your family doctor – not brazen AI promotions.

Platforms in the hot seat

Social media platforms are in the crosshairs. Despite policies against deceptive content, experts say tech giants such as Meta are struggling to keep up with the sheer volume and virality of these deepfakes.

Under the UK's Online Safety Act, platforms are now legally obliged to tackle illegal content, including fraud and impersonation. Ofcom oversees enforcement, but so far harmful content often reappears as quickly as it is taken down.

Echoes of real fakery: a disturbing trend

This is no isolated hiccup – it's part of a growing pattern. In a recent report, CBS News uncovered dozens of deepfake videos impersonating real doctors and dispensing medical advice around the world, reaching millions of viewers.

In one example, a doctor discovered a deepfake of himself endorsing a product he had never supported – and the likeness was chilling. Viewers were deceived, with comments piling up in praise of the doctor – all built on a fabrication.

My opinion: when technology misleads

What troubles me most isn't just that this technology can imitate reality – it's that people believe it. We've built our trust around experts and voices that sound calm and competent. When that trust is weaponised, it erodes the very foundation of science communication.

The real fight isn't just about detecting AI – it's about rebuilding trust. Platforms need more robust checks, clear labels and perhaps – just perhaps – a moment of scrutiny from users before hitting “Share”.
