
Is Artificial Intelligence Smart Enough to Spot Content Generated by AI Software?

Identifying spun and plagiarized content has been one of the most daunting challenges in academic writing and online content in general. There are multiple paid and free tools that spin and rewrite content by replacing words in specific patterns. Recently, researchers at Harvard University and the MIT-IBM Watson AI Lab developed an AI tool that can identify whether a submitted text was written by a person or produced by an AI-based spinning tool. Such a tool always runs on an algorithm, which means that when something has been written by a program, specific code is behind it, and that makes the spinning very predictable. Although words are replaced to make the content appear completely unique, this type of content usually shows predictable patterns and incomprehensible meaning.

The GLTR (Giant Language Model Test Room) system was developed to spot whether a text was generated by a language-model algorithm. Since multiple AI-based tools have been designed to produce fake news and spun content, this software will help identify such material. With the help of the GLTR system, the accuracy of detecting fake text can be increased to up to 72%. Researchers know that AI-based software swaps words, so the sentence structure and grammar remain fine, but the result contains contextually mismatched phrases. When this type of content is scanned by the GLTR system, it shows a predictable pattern that is colored only yellow and green. However, if the content is properly written by a human, the predictability is different, because everyone's vocabulary and way of expression are different.


  • Also read: Using Algorithms to Guard Against Deepfakes

In this case, if the content is genuine, it is colored purple, red, yellow, and green, which shows that the content is unique and meaningful. With the help of this system, users will be able to detect not only copied online web content but also plagiarized academic work. Another important feature, which could be groundbreaking in social media marketing and branding, is that GLTR will also help identify fake and spun tweets that spread misinformation.
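To make the per-word coloring concrete, here is a minimal sketch of the underlying idea: score each token of a text with a pretrained language model and bucket it by how highly the model ranked that token among all possible next words. The choice of GPT-2 via the Hugging Face transformers library and the top-10/100/1000 thresholds are illustrative assumptions, not details taken from the article.

```python
# Sketch: per-token predictability buckets, loosely inspired by GLTR-style coloring.
# Assumes `torch` and `transformers` are installed; GPT-2 and the rank thresholds
# (10 / 100 / 1000) are illustrative choices, not the tool's exact configuration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    """Return (token, rank) pairs: how highly the model ranked each actual token."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    # The prediction at position i is for the token at position i + 1.
    for i in range(ids.shape[1] - 1):
        next_id = ids[0, i + 1].item()
        # Rank = number of vocabulary entries the model scored higher, plus one.
        rank = int((logits[0, i] > logits[0, i, next_id]).sum().item()) + 1
        ranks.append((tokenizer.decode([next_id]), rank))
    return ranks

def bucket(rank):
    # Highly predictable tokens land in "green"; surprising ones in "purple".
    if rank <= 10:
        return "green"
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"

for tok, r in token_ranks("The quick brown fox jumps over the lazy dog."):
    print(f"{tok!r:>15}  rank={r:5d} -> {bucket(r)}")
```

Text that comes out almost entirely green and yellow is suspiciously predictable, whereas human writing tends to produce a mix that includes red and purple tokens.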



Read next: Why Machine Learning is Going to Explode and How You Can Prepare for it

