
It’s time for Facebook’s News Feed to explain itself

One reason Facebook struggles to earn our trust is that, at the individual level, no one at the company can tell us why we’re seeing what we’re seeing in the News Feed. The company can talk about the content of the feed in general terms — mostly posts from friends and family, ranked by how close Facebook believes you to be with them — but were an engineer to browse your feed alongside you, they couldn’t explain why the posts appeared in the exact order they did.


A few years ago I was interviewing Chris Cox, who leads product across the company, and asked something about my feed I had always wanted to know. Sometimes I would open Facebook after being away for an hour or so and the News Feed would show me one or two posts I had already seen. Was that an effort to get me to add a comment? Did Facebook think I’d be more likely to share something after I saw it a second time? No, Cox said. That was just a bug.

The conversation stuck with me for two reasons. One, we talk about Facebook primarily in the context of its power, and the bug was a good reminder that the News Feed is just a flawed piece of software like any other. Two, it was one of the only times I could remember hearing something definitive about the content of my own News Feed.

I thought of that conversation again this week while reading the venture capitalist Fred Wilson’s post about “explainability.” Wilson starts seeing a bunch of items about Kendrick Lamar in the feed of content that appears underneath the Google search bar, and wonders why.

That leads him to an AI startup named Bonsai, which attempts to build systems that can ultimately explain their decisions to users. Bonsai writes:

Explainability is about trust. It’s important to know why our self-driving car decided to slam on the brakes, or maybe in the future why the IRS auto-audit bots decide it’s your turn. Good or bad decision, it’s important to have visibility into how they were made, so that we can bring the human expectation more in line with how the algorithm actually behaves.

Wilson thinks about how this might ultimately manifest itself in a consumer product:

What I want on my phone, on my computer, in Alexa, and everywhere that machine learning touches me, is a “why” button I can push (or speak) to know why I got that recommendation. I want to know what source data was used to make the recommendation, and I’d also like to know what algorithms were used to produce confidence in it.


It’s time to start a conversation about explainability at Facebook. Why did that highly partisan article appear in your News Feed? Why do you see every post about breakfast from a random acquaintance, but not the new baby of your college roommate? Why are you seeing that ad in your feed just minutes after you had a conversation about it in real life with a friend?

Answering the “why” question would be an enormous technical challenge for Facebook. But solving it could go a long way in establishing trust with users. As the company continues to beat the drum about its work in artificial intelligence, explainability should be an important part of the conversation.
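To make that concrete, here is a minimal, purely illustrative sketch in Python of how a ranked feed could keep an explanation for every item it surfaces. The signals, weights, and FeedExplanation structure are assumptions invented for this example; they are not Facebook’s actual ranking inputs or any real API.

```python
from dataclasses import dataclass, field

@dataclass
class FeedExplanation:
    """Hypothetical per-post explanation: which signals contributed, and by how much."""
    post_id: str
    score: float
    contributions: dict = field(default_factory=dict)  # signal name -> weighted contribution

def rank_with_explanations(posts, weights):
    """Score each post as a weighted sum of simple signals and keep the breakdown,
    so every position in the feed can answer "why am I seeing this here?"."""
    explained = []
    for post in posts:
        contributions = {
            signal: weights[signal] * post.get(signal, 0.0)
            for signal in weights
        }
        explained.append(FeedExplanation(
            post_id=post["id"],
            score=sum(contributions.values()),
            contributions=contributions,
        ))
    return sorted(explained, key=lambda e: e.score, reverse=True)

# Illustrative signals only; a real feed ranker uses far more (and different) inputs.
weights = {"friend_affinity": 0.5, "recency": 0.3, "predicted_engagement": 0.2}
posts = [
    {"id": "roommate_baby", "friend_affinity": 0.9, "recency": 0.4, "predicted_engagement": 0.2},
    {"id": "acquaintance_breakfast", "friend_affinity": 0.2, "recency": 0.9, "predicted_engagement": 0.8},
]
for item in rank_with_explanations(posts, weights):
    print(item.post_id, round(item.score, 2), item.contributions)
```

Even a toy breakdown like this makes the “why” question answerable in principle. The hard part is producing something comparable from ranking systems built on far larger and more opaque models.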

