Google is teaching search algorithms to better spot offensive, factually incorrect results

Google is giving the thousands of contractors who normally evaluate Search results an additional task: helping the company downrank blatantly upsetting, offensive, and false content. Search Engine Land has a thorough explainer on the updated guidelines for Google’s quality raters, the people who rate the usefulness and accuracy of search results so the company can keep improving its search algorithms, which ultimately determine what ranks where.
Raters now have access to a new “Upsetting-Offensive” flag, which Google says should be used in the following instances:

– Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.

– Content with racial slurs or extremely offensive terminology.

– Graphic violence, including animal cruelty or child abuse.

– Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).

– Other types of content which users in your locale would find extremely upsetting or offensive.

Just being upsetting isn’t enough for raters to flag search results. Google points to an example involving a “Holocaust history” search: one result is a Holocaust denial site, which the company says deserves the flag. The other, a website from The History Channel, might be upsetting because of its subject matter, but it is a “factually accurate source of historical information” and doesn’t promote the kind of hateful content described above.

Search Engine Land notes that simply being hit with the Upsetting-Offensive flag won’t immediately demote or downrank search results. Instead, those flags are used as data points for Google’s employees as they continue to iterate on search algorithms. Eventually the algorithm will learn to flag upsetting and factually inaccurate content on its own, which would impact search rankings in cases where Google believes users are after “general learning.” But Google isn’t censoring or hiding anything; if someone is specifically searching for, say, a white nationalist website by name, Google will still deliver it at the top of results. “We will see how some of this works out. I’ll be honest. We’re learning as we go,” Paul Haahr, a senior executive on the search team, told Search Engine Land.
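
The article describes this mechanism only at a high level. As a rough illustration of the idea it reports (rater flags acting as human-supplied labels that demote a result only when the query reflects “general learning,” not when someone searches for a site by name), here is a minimal, purely hypothetical Python sketch. The SearchResult class, the rerank function, the demotion factor, and the intent labels are all invented for illustration and are not Google’s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    url: str
    base_score: float  # relevance score from the ordinary ranking signals
    rater_flags: list = field(default_factory=list)  # hypothetical rater labels, e.g. "Upsetting-Offensive"

def rerank(results, query_intent, demotion=0.5):
    """Demote results that a majority of raters flagged, but only for
    informational ("general_learning") queries; navigational queries are untouched."""
    def adjusted(r):
        flagged = (r.rater_flags.count("Upsetting-Offensive") > len(r.rater_flags) / 2
                   if r.rater_flags else False)
        if query_intent == "general_learning" and flagged:
            return r.base_score * (1 - demotion)
        return r.base_score
    return sorted(results, key=adjusted, reverse=True)

# Example: a flagged denial site drops below an accurate source for a
# "Holocaust history" query, but stays on top when sought out by name.
results = [
    SearchResult("denial-site.example", 0.9, ["Upsetting-Offensive", "Upsetting-Offensive"]),
    SearchResult("history.example", 0.8, []),
]
print([r.url for r in rerank(results, "general_learning")])  # history.example ranks first
print([r.url for r in rerank(results, "navigational")])      # denial-site.example stays first
```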

Google has already switched some raters over to the new guidelines and used the resulting data to improve search rankings. But the company’s Google Home speaker is still spouting off idiotic, untrue answers to certain questions, and featured snippets at the top of web results still occasionally surface bad information as well.
