OpenAI's new classifier assesses whether a piece of text was written by ChatGPT.
OpenAI, the company behind ChatGPT, has today released a new tool intended to determine whether a piece of text was generated by AI. Because it is not completely reliable, however, AI writing tools can still evade it.
In a recent blog post, OpenAI characterised the programme as a classifier that can distinguish between text written by a person and text created by an AI text-generation tool.
In addition to text generated by ChatGPT, the tool can recognise text produced by several other well-known text-generating AI systems.
According to OpenAI, the creators of ChatGPT, the classifier will be useful for detecting academic dishonesty in assignments and exams, as well as in situations where a chatbot is presented as a person. However, the tool's creators acknowledged that it cannot yet reliably discriminate between text produced by AI systems and text written by human authors, noting instead that the classifier is still being actively trained and improved.
According to the details released, the classifier correctly identified only 26% of AI-generated text, leaving the rest undetected, while it incorrectly labelled approximately 9% of human-written text as AI-generated.
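Concretely, those two figures correspond to a true-positive rate (the share of AI text the classifier catches) and a false-positive rate (the share of human text it wrongly flags). Here is a minimal sketch of how such rates are computed, using hypothetical evaluation data sized to match the reported percentages; this is not OpenAI's actual evaluation code or dataset.

```python
def detection_rates(samples):
    """samples: list of (true_label, predicted_label) pairs,
    where each label is either "ai" or "human"."""
    ai_total = sum(1 for t, _ in samples if t == "ai")
    ai_caught = sum(1 for t, p in samples if t == "ai" and p == "ai")
    human_total = sum(1 for t, _ in samples if t == "human")
    human_flagged = sum(1 for t, p in samples if t == "human" and p == "ai")
    # True-positive rate: fraction of AI-written text correctly flagged.
    tpr = ai_caught / ai_total
    # False-positive rate: fraction of human-written text wrongly flagged.
    fpr = human_flagged / human_total
    return tpr, fpr

# Toy evaluation set: 26 of 100 AI samples flagged, 9 of 100 human samples flagged,
# mirroring the percentages reported for the classifier.
samples = [("ai", "ai")] * 26 + [("ai", "human")] * 74
samples += [("human", "ai")] * 9 + [("human", "human")] * 91
tpr, fpr = detection_rates(samples)
print(tpr, fpr)  # 0.26 0.09
```

The asymmetry matters in practice: a 9% false-positive rate means roughly one in eleven genuinely human-written assignments could be wrongly flagged, which is why OpenAI cautions against relying on the tool alone.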
This shows that the tool is still error-prone and should not be relied upon entirely, but it offers some relief to educators worried about students submitting AI-generated assignments and homework.
Regarding the tool's capabilities, OpenAI stated that it should be used in conjunction with other methods of determining the author of a piece of text, rather than as the sole basis for decisions.