According to a recent survey from intelligent.com, a third of college undergraduates surveyed admitted to using ChatGPT on their academic work in the last year, and a Reuters/Ipsos poll found that about 28% of workers have used the AI tool as well. That can be disastrous in a professional setting: AI is often wildly inaccurate and should never be used as a substitute for research (particularly if you're an attorney), so it makes sense that professors and supervisors are watching carefully, trying to suss out whether people are "cheating" at school or work.
There are dozens of apps and services that purport to detect AI writing, but their accuracy varies widely. According to some research, the most reliable AI-checking program for writing, Turnitin, gets it right about 80% of the time. That's not the worst record, but it still leaves a lot of false positives.
The Washington Post recently laid out some defenses for people who fall into that unlucky 20%. The article is geared toward students, but the broad advice applies in the professional world too.
What to do if you’re falsely accused of using AI
Keep calm
This is good advice in most confrontational situations, especially one with a power imbalance like a student/professor relationship or an employee/employer relationship. As Christian Moriarty, professor of ethics and law at St. Petersburg College in Florida, put it, “Escalating makes everybody go on the defensive.”
Instead of getting angry, explain your use (or non-use) of artificial intelligence tools as honestly as possible. Focus on the facts and try not to dwell on the injustice of the situation; getting defensive is understandable but rarely effective.
Turnitin is the most widely used tool in academia for detecting AI in writing, but it also doubles as a plagiarism checker. It's very good at spotting text swiped from somewhere else, and not nearly as good at identifying AI. A teacher or professor who has trusted it for years may therefore give its AI verdict more weight than it deserves. Calmly explaining this, with some backup material, might help.
Arm yourself with facts
Even when working as intended, AI detectors are fallible. They only flag patterns that are likely to have been produced by AI, which is not proof of anything. Detectors are particularly prone to false positives when analyzing writing on technical subjects, because there are only so many ways to express certain concepts. Writing from non-native English speakers also tends to set off their alarm bells, as does overly simplistic writing.
Present evidence of your innocence
If you have older drafts of your work saved, they can help show that you developed it without AI assistance. So can outlines and other research material, emails with collaborators, or a colleague willing to vouch for you.
Writing something original and running it through an AI detector could help too, but it's a risky move: the detector may not flag your new writing as AI, and it's hard to predict whether any given piece of writing will set off alarm bells. That unpredictability is the whole problem.
Find out what your employer’s AI policy is
Using AI to write a term paper or a report you sign your name to isn't acceptable at most schools or workplaces, but that's not the only way AI can be used. If your company has specific guidelines about how employees may use artificial intelligence, and which tools are "acceptable," ideally you'd have been trained on them. If you haven't been, ask.
Know your rights
In most states, at-will employment means you could be fired over an AI accusation even if you're innocent, but anti-discrimination laws might protect you. Questions raised about AI-assisted hiring suggest that artificial intelligence can be prone to unlawful discrimination, and an AI-detection system could have similar bias baked in. If you feel you're being discriminated against at work as a member of a protected class, consult an employment lawyer.