
As AI Meets the Reputation Economy, We’re All Being Silently Judged


In the Black Mirror episode Nosedive, the protagonist, Lacie, lives in a saccharine world of pleasantries in which every personal or business interaction is scored. Everything depends on the social score, and everyone is desperate to move up in the rankings. But the omnipresent rating game has one big catch: ranking up is incredibly hard, while ranking down is rapid and easy, like a free-fall.

Welcome to the reputation economy, where the individual social graph (the social data set about each person) determines one’s value in society, access to services, and employability. In this economy, reputation becomes currency.

The reputation economy is built on the simplistic but effective star-rating system. Anyone who’s ever rated their Uber driver or Airbnb host has actively participated. But what happens when algorithms, rather than humans, determine an individual’s reputation score, drawing on multiple data sources and mathematical formulas and promising more accuracy and flexibility via machine learning?
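To make that shift concrete, here is a minimal sketch of how such a score might be computed. The data sources, weights, and formula below are hypothetical illustrations, not any platform’s actual method; a real system would learn its weights from data rather than hard-coding them.

```python
# Hypothetical illustration: a reputation score as a weighted blend of
# signals from several platforms. The sources and weights are invented.

SIGNAL_WEIGHTS = {
    "ride_share_rating": 0.30,  # e.g., average star rating, scaled to 0-1
    "host_review_score": 0.25,
    "social_sentiment": 0.25,   # sentiment inferred from public posts
    "network_score": 0.20,      # average score of one's connections
}

def reputation_score(signals: dict) -> float:
    """Combine per-source signals (each in [0, 1]) into one score in [0, 1].

    Missing sources simply drop out, so people with a thin digital
    footprint are judged on less evidence, one of the gaps the article
    goes on to describe.
    """
    total = weight_sum = 0.0
    for name, weight in SIGNAL_WEIGHTS.items():
        if name in signals:
            total += weight * signals[name]
            weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

# A person with only two visible signals still gets scored on what exists.
print(reputation_score({"ride_share_rating": 0.96, "social_sentiment": 0.70}))
```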

Seventy percent of U.S. companies currently use social media to screen employees. And many AI-enabled startups are competing in the HR assessment market, using AI to crawl potential candidates’ social media accounts and filter out bad fits.


In 2012, Facebook applied for a patent that would use an algorithm to assess the credit ratings of one’s friends as a factor in one’s eligibility for a mortgage. And China aims to implement a national social score for every citizen by 2020, based on criminal records, social media activity, purchasing habits, and even the scores of one’s friends.

When AI starts determining an individual’s social worth, the stakes are high. As Kim Darah writes in The New Economy: “Far from being neutral and all-knowing decision tools, complex algorithms are shaped by humans, who are, for all intents and purposes, imperfect.” We must ask ourselves: How good is the data? How good is the math? How ready is society to be judged by AI? And what could possibly go wrong?

Bad data

Algorithms learn by extracting patterns from large historical data sets, and then applying those patterns to predict the future. When the data set is flawed, the prediction will be wrong.
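A toy example, with entirely invented data, makes the mechanism plain: a model fit to corrupted records faithfully reproduces the corruption.

```python
# Toy illustration with invented data: a "model" that learns benefit
# amounts from historical records. If a data-entry error corrupted the
# records for one group, the learned rule inherits the error.

historical_records = [
    # (group, recorded_benefit): group "B" rows were corrupted upstream,
    # recorded at roughly 70% of their true value (~1000).
    ("A", 1000), ("A", 1050), ("A", 980),
    ("B", 700), ("B", 690), ("B", 720),
]

def learned_benefit(group: str) -> float:
    """Predict a benefit as the historical average for the group.

    This stands in for any model trained on the same records:
    garbage in, garbage out.
    """
    amounts = [amount for g, amount in historical_records if g == group]
    return sum(amounts) / len(amounts)

print(learned_benefit("A"))  # ~1010: looks reasonable
print(learned_benefit("B"))  # ~703: the corruption, now a "prediction"
```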

In 2012, the state of Idaho cut Medicaid benefits for 4,000 people with developmental and intellectual disabilities by a whopping 20-30%. After the American Civil Liberties Union (ACLU) sued for insight into the algorithm used to determine the cuts, it found that two-thirds of the historical data had been corrupt, leaving a predictive algorithm built on the remaining, still mildly flawed, third of the data. Bad data led, among other things, to bad results.

The data a reputation economy would draw on is equally flawed. 79% of online Americans use Facebook, but only 32% are on Instagram and 24% are on Twitter. This variance in penetration means that triangulating data from multiple networks is possible for only a subset of users; the data set is incomplete. Furthermore, fragmentation across communication channels makes it impossible to weigh connections by their true level of affiliation.

But the bigger issue is that one’s digital presence is seldom reflective of one’s true self. People post things they think will make them look good, and depending on affiliation and life stage, that can mean exaggeration in any direction. This skew makes social media data a questionable basis for judgment in many cases.

Bad math

Algorithms don’t have a conscience; they repeat what they learn. When algorithms repeat and perpetuate bias or opinion, we need to talk about mathwashing: presenting subjective or biased decisions as neutral simply because they are expressed in math.

Unintended mathwashing occurs when an algorithm is left unchecked and, learning from historical data, amplifies social bias. The U.S. justice system uses an algorithm called COMPAS to estimate a defendant’s likelihood of reoffending. ProPublica showed that COMPAS predicts higher recidivism rates for black defendants than they actually have, while predicting lower rates for white defendants than they actually have.
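The distinction ProPublica drew is between overall accuracy and the distribution of errors: a score can look similarly accurate across groups while making very different kinds of mistakes for each. This sketch, with invented numbers, shows the basic check, comparing false-positive rates across groups.

```python
# Sketch of the fairness check ProPublica's analysis popularized:
# compare false-positive rates across groups. All records are invented.

# Each record: (group, predicted_high_risk, actually_reoffended)
predictions = [
    ("black", True, False), ("black", True, True), ("black", True, False),
    ("black", False, False),
    ("white", False, True), ("white", False, False), ("white", True, True),
    ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of people in the group who did NOT reoffend but were
    nonetheless flagged as high risk."""
    non_reoffenders = [flagged for g, flagged, reoffended in predictions
                       if g == group and not reoffended]
    if not non_reoffenders:
        return 0.0
    return sum(non_reoffenders) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# black 0.67, white 0.0: same score, very different error burden.
```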

Deliberate mathwashing occurs when an algorithm is tweaked in order to course-correct or skew the bias. Facebook allegedly engaged in it in 2016, when it was accused of routinely suppressing conservative news in its trending-topics feed.

Unconscious bias is deeply ingrained in America’s social fabric. Continuing to let algorithms perpetuate social bias would be irresponsible, and basing life-changing decisions on that information could slow progress toward true equality.

Unintended consequences on society

Social pressure is a powerful and subtle form of control. And when this social pressure is amplified by an obscure algorithm presumably watching every digital move, freedom of speech can be jeopardized. People may simply be afraid to speak out, for fear of the effect it might have on their ability to obtain employment, goods, or services. Such “social cooling” describes a culture of self-censorship, where people voluntarily adjust their behavior to conform to a social norm, out of fear that their digitally monitored behavior could affect their reputation.

Successful Uber drivers practice social cooling by adapting to a common expectation of service. As one Uber driver described it in an interview with The Verge: “The servant anticipates needs, does them effortlessly, speaks when spoken to, and you don’t even notice they’re there.”

Airbnb exhibits social cooling in its host/guest review system, where generic words of highest praise mirror the hosts’ and guests’ reluctance to judge or be judged.

Due to the abstract and obscure nature of machine learning, people feel they never know when they are being judged (how is the ecosystem connected?), by whom (who has access to the data?), or how (what’s the algorithm?). This leads to risk aversion, which could suppress the expression of nonconformist ideas and kill creativity. Taken to the extreme, this creates a society where people are afraid to speak their minds.

Where do we go from here?

As we continue to awaken to our new digital reality, the reputation economy will become a fact of life for all of us. And opting out is not a viable option when 57% of employers say that if they can’t find a candidate online, they move on to the next one. Our reputations will indeed become currency.

Lawmakers and civil rights groups alike are grappling with the question of how to regulate the use of algorithms, and how to maintain quality control over the formulas. Efforts like the EU’s General Data Protection Regulation (GDPR) aim to put the user back in control of their own personal data.

Meanwhile, individuals will need to become vigilant about their personal data online. For many teens, online reputation management is a daily reality they’re well versed in. Their profiles are often private, regularly groomed, and highly curated. Their need for uncensored self-expression and the opportunity to make mistakes is, for now, outsourced to ephemeral platforms like Snapchat. As AI continues to infiltrate the reputation economy, all of us will need discipline in how we interact and how we judge online.

If we expect to gain access to employment, goods, and services in the future, social platforms can no longer be a playground for the ego. Our online reputations will precede us all.


