Grand News Asia

Dubious AI detectors drive ‘pay-to-humanise’ scam by misidentifying text

By: Morm Sokun | 2 hours ago | Opinion

#opinion

Anuj Chopra, Ede Zaborszky, Magdalini Gkogkou and Liesa Pauwels / AFP


Feed an Iranian news despatch or a literary classic into some text detectors, and they return the same verdict: artificial intelligence-generated.

Then comes the pitch: pay to “humanise” the writing, a pattern experts say bears the hallmarks of a scam.

As AI falsehoods explode across social media, often outpacing the capacity of professional fact-checkers, bogus detectors risk adding another layer of deception to an already fractured information ecosystem.

While even reliable AI detectors can produce false results, researchers say a crop of fraudulent tools has emerged online, easily weaponised to discredit authentic content and tarnish reputations.

AFP’s fact-checkers identified three such text detectors that claim to estimate what percentage is AI-generated.

The tools—prompted in four languages—not only misidentified authentic text as AI-generated but also attempted to monetise those errors.

One detector, JustDone AI, processed a human-written report about the United States-Iran war and wrongly concluded it contained “88 per cent AI content”. It then offered to scrub any trace of AI for a fee.

“Your AI text is humanising,” the site claimed, leading to a page where “100 per cent unique text” was locked behind a paywall charging up to $9.99.

Two other tools—TextGuard and Refinely—produced similar false positives and sought to monetise them.

AFP presented its findings to all three detectors.

“Our system operates using modern AI models, and the results it provides are considered accurate within our technology,” said TextGuard’s support team.

“At the same time, we cannot guarantee or compare results with other systems.”

JustDone also reiterated that “no AI detector can guarantee 100 per cent accuracy”.

It acknowledged the free version of its AI detector “may provide less precise results” due to “high demand and the use of a lighter model designed for quick access”.

Echoing AFP’s findings, one user on a review platform complained that “even with 100 per cent human-written material, JustDone still flags it as AI”.

AFP fed the tools multiple human-written samples in Dutch, Greek, Hungarian, and English. All were wrongly flagged as having high AI content, including passages from an acclaimed 1916 Hungarian classic.

The tools returned AI flags regardless of input—even for nonsensical text.

JustDone and Refinely appeared to operate even without an internet connection, suggesting their results may be scripted rather than genuine technical analysis.
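AFP's probes amount to a simple black-box test: feed a detector varied inputs, including pure gibberish, and see whether its verdict changes. A minimal sketch of that methodology, assuming a detector is wrapped as a Python callable returning a "per cent AI" score (the `detector` callable, sample texts, and thresholds here are hypothetical illustrations, not any tool's real API):

```python
import random
import string

def probe_detector(detector, trials=5):
    """Black-box probe: send varied inputs, including random letter soup,
    and collect the 'per cent AI' score the detector claims for each.
    A tool that flags everything as heavily AI-generated, regardless of
    input, behaves like a script rather than a real classifier."""
    samples = [
        "It was the best of times, it was the worst of times.",       # human-written classic
        "Quarterly rainfall in the region rose by twelve per cent.",  # plain factual prose
    ]
    # Nonsensical inputs: random letters should not read as fluent AI text.
    for _ in range(trials):
        soup = "".join(random.choices(string.ascii_lowercase + " ", k=200))
        samples.append(soup)
    scores = [detector(text) for text in samples]
    # If every score is high and nearly identical, the output looks scripted.
    looks_scripted = min(scores) > 80 and (max(scores) - min(scores)) < 5
    return scores, looks_scripted
```

Run against a stub that always answers 88, as JustDone did for the Iran report, this probe would flag the detector as scripted; a genuine classifier should score fluent prose and letter soup very differently.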

“These are not AI detectors but scams to sell a ‘humanising’ tool that will often return what we call ‘tortured phrases’”—unrelated jargon or nonsensical alternatives—said Debora Weber-Wulff, a Germany-based academic who has researched detection tools.

The tools tested by AFP sought to lure students and academics as clients, with two of them claiming their users came from top institutions such as Cornell University.

Cornell University said it “does not have any established relations with AI detector companies”.

“Generative AI does provide an increased risk that students may use it to submit work that is not their own,” said the university.

“Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem. It can be very difficult to accurately detect AI-generated content.”

Fact-checkers, including those at AFP, often rely on AI visual-detection tools developed by experts, which typically look for hidden watermarks and other digital clues.

However, they too can sometimes err, so fact-checkers supplement their findings with additional evidence, such as open-source data.

The stakes are high as false readings from unreliable detectors threaten to erode trust in AI verification broadly and feed a disinformation tactic researchers have dubbed the “liar’s dividend”: dismissing authentic content as AI fabrications.

“We often report on misinformers and other hoaxsters using AI to fabricate false images and videos,” said Waqar Rizvi from misinformation tracker NewsGuard.

“Now, (we are) monitoring the opposite, but no less insidious phenomenon: claims that a visual was created by AI when in fact, it’s authentic.”

-Khmer Times-