(PatriotWise.com) – Artificial intelligence made headlines in recent weeks after OpenAI released ChatGPT, a chatbot that generates responses to users' questions. Now the research company has released another tool, one that aims to determine whether a text was written by AI or by a human, according to Search Engine Journal.
The tool is not 100% accurate, and the company says it would be impossible to distinguish AI-written text from human-written text with perfect reliability. However, it reportedly believes the new tool can help reduce false claims that AI-generated content was written by humans.
In the announcement for the tool, OpenAI stated that the “classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives).” The company adds that the tool becomes more reliable as the length of the text increases.
But the company cautions against using the classifier as a “primary decision-making tool,” saying it should be used alongside other methods. Short texts are a particular weakness, which is why the classifier requires at least 1,000 characters to run a test.
The classifier has other limitations as well: it can mislabel text as either AI- or human-generated; AI-generated text may evade detection if it has been lightly edited; and because the tool was trained primarily on English content written by adults, it can misclassify text written by children as well as text in other languages.
Still, the tool is reportedly simple and easy to use. It returns one of five verdicts on whether a text is AI-generated: “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely.”
Search Engine Journal tested the classifier by submitting an AI-generated essay. The tool concluded that the content was “possibly” generated by AI. After a few minor edits in Grammarly, the outlet reported, the tool changed its verdict to “unclear.”
Copyright 2023, PatriotWise.com