Companies claiming that their artificial intelligence detection tools can separate human writing from machine-generated text with near-perfect accuracy are now in the FTC's crosshairs.
The commission recently finalized an order against Workado, LLC (which promoted its detector through a website formerly known as Content at Scale, now rebranded as Brandwell), alleging that the company marketed its tool as having been trained on a broad range of content when it had in fact been trained primarily on academic writing.
The FTC flagged the boldly advertised “98% accuracy” rate as unsupported. Under the order, Workado must withdraw these claims, notify previous users, keep compliance records, and retain substantiation for future advertising claims.
The message: If you say your AI can detect AI like the human eye can detect a counterfeit bill, you better have receipts.
The timing is telling. With AI-generated text flooding classrooms, newsrooms, and corporate messaging apps, the pressure on detection tools is enormous, and so is the temptation to oversell them.
Separate analysis by Reddit users found that “accuracy” is a crude metric for detection tools, especially when the base rate (how often a given piece of text is actually AI-generated) varies widely.
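To see why base rates matter, here is a minimal sketch. The 98%/98% figures below are illustrative assumptions, not Workado's measured performance: even a detector with 98% sensitivity and 98% specificity produces mostly false alarms when AI-generated text is rare in the population being scanned.

```python
def precision(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Fraction of items flagged as AI-generated that truly are AI-generated."""
    true_pos = sensitivity * base_rate              # AI text correctly flagged
    false_pos = (1 - specificity) * (1 - base_rate) # human text wrongly flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical detector: 98% sensitivity, 98% specificity.
for rate in (0.50, 0.10, 0.01):
    print(f"base rate {rate:4.0%} -> precision {precision(0.98, 0.98, rate):.1%}")
# At a 1% base rate, roughly two out of three flagged texts are false positives.
```

The arithmetic is just Bayes' rule: the same headline “accuracy” yields 98% precision when half the texts are AI-generated, but only about 33% when one in a hundred is.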
From another point of view: this is part of a broader regulatory trend. For example, the FTC's previous enforcement crackdown targeting other “AI promises” including tools that provided automated legal services or “AI-powered” storefronts but were unable to substantiate these claims. In other words: the AI label is not a free pass.
What I personally find a bit disturbing is the knock-on effect on the institutions that use these detectors: schools, publishers, and even governments.
If detectors overestimate their accuracy or are used without transparency, false positives become a serious problem.
At least one study of university AI-detection systems found that students were being mislabeled, sometimes without any human review. So this action by the FTC may be a welcome wake-up call.
Here are some things to keep an eye on:
1) Will other companies receive similar orders?
2) Will buyers of detection tools demand more transparency, e.g., detailed training data, error rates, and test splits?
3) Will institutions re-evaluate how they use these tools, perhaps making them one layer of the process rather than the final word?
My take: AI content detectors have a role to play, especially as AI-generated content becomes ever more ubiquitous, but they are far from a silver bullet. Treating them as one is a recipe for trouble.
If you rely on one, ask the hard questions: What was it trained on? What is the false positive rate? Who audited it?
Because if the FTC says you can't just write “98% accurate” on a box without proof, then you should probably demand that proof too.