
Why Technology Is Not Neutral And Why You Should Care

 

I usually don't like to write opinion pieces, as in most cases they rest on personal, entirely subjective judgments. However, as I deal more and more with technology, I get the same answer from most people when I share my concerns about tech giants like Google and Facebook: "those are just tools, they are neutral, they are not an end." Although those considerations might make sense at first, they are wrong!

There is no such thing as neutral technology; there is no such thing as a tech tool that is just a tool. The applications we use on a daily basis aren't neutral at all. They have built-in biases and incentives that nudge us toward certain actions. In this piece, I want to focus on two applications that are used by billions of people each day: Google and Facebook.

I'm selecting those two - this is not cherry-picking - based on the applications these tech giants have been able to build or acquire over the years. Those applications are part of the daily habits (or vices) of billions of people. Apps like WhatsApp, Instagram, Facebook, Google, Google Docs, Google Maps and so on are so ingrained in our daily routines that we barely remember they belong to the same companies which, although they offer us free tools, also follow a clear commercial logic!

The curse of engineers turned advertisers

Matt is an engineer; he just got hired by Google. He wakes up one morning extremely excited to go to work, until he sits at his desk and is given his first task: "figure out a way to make more money for our advertising network," they tell him.

Although there is no Matt (he is a fictional character), imagine the curse of an engineer who graduated with top grades, just got hired by Google, and is excited and ready to "make the world a better place," yet ends up with a bunch of tasks mostly related to making more money for Google's advertising networks!

This isn't just a thought that crossed my mind; it was a legitimate worry Google's founders had back in the 1990s, when they were trying to figure out how to make money with their search engine. At the time, after looking for existing companies to acquire their technology and failing to find the right deal, Brin and Page went on to figure out how to make a profit with their own tool.

In the article "The Future of Google: The Curse of Engineers Become Advertisers" I looked at a bit of the history of the most prominent search engine on earth. The most interesting part is that when Google started out, Brin and Page didn't want it to be associated with advertising. There was a specific reason for that, which they spelled out in a paper entitled "The Anatomy of a Large-Scale Hypertextual Web Search Engine": "...historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers."

Things swiftly changed when Google launched the AdWords network, which allowed businesses to bid on keywords. That's how Google became a commercial search engine. Most people think of Google as the place where they can find the "best" information. But what does "best" mean?
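To make this commercial logic more concrete, below is a minimal sketch of how a keyword auction of this kind can work, in the spirit of the generalized second-price mechanism commonly described for search advertising. It is an illustration only - the advertiser names, bids, and quality scores are invented, and this is not Google's actual mechanism or code.

```python
# Minimal sketch of a keyword ad auction (generalized second-price flavor).
# Illustrative only: advertisers, bids, and quality scores are made up,
# and this is not Google's actual implementation.

def rank_ads(bids):
    """Rank advertisers by bid * quality score; each winner pays roughly
    the minimum needed to keep its position above the next advertiser."""
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    results = []
    for i, (name, bid, quality) in enumerate(ranked):
        if i + 1 < len(ranked):
            _, next_bid, next_quality = ranked[i + 1]
            # Price per click: just enough to outrank the next advertiser.
            price = round(next_bid * next_quality / quality + 0.01, 2)
        else:
            price = 0.01  # nominal reserve price for the last-ranked ad
        results.append((name, price))
    return results

if __name__ == "__main__":
    keyword_bids = [
        ("shoes-r-us", 2.50, 0.6),      # high bid, mediocre ad quality
        ("runfast", 1.80, 0.9),         # lower bid, better quality
        ("discount-kicks", 1.00, 0.7),  # low bid, average quality
    ]
    for advertiser, price in rank_ads(keyword_bids):
        print(f"{advertiser} pays about ${price:.2f} per click")
```

The exact pricing rule matters less than the incentive it creates: the paid results a user sees are shaped by who is willing to pay (and by ad quality), not purely by what is most relevant to the query.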

The rise of deceptive AI tools

Recently, Google published for the first time a manifesto laying out its most important principles for developing AI tools. The manifesto consists of seven principles:

1. Be socially beneficial 

2. Avoid creating or reinforcing unfair bias

3. Be built and tested for safety

4. Be accountable to people

5. Incorporate privacy design principles

6. Uphold high standards of scientific excellence

7. Be made available for uses that accord with these principles 

Although those principles make sense, ethics is not an easy matter, and of course the role of the engineer is not just to find practical applications but also to weigh them against ethical and social concerns. Just about a month earlier, Google had launched a machine learning tool able to simulate a human conversation with incredible accuracy. If you watch the demo video, you won't be able to notice any difference between Google Duplex and the human on the other side of the phone line.

As pointed out recently on TechCrunch:

“Google’s experiments do appear to have been designed to deceive,” agreed Dr Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, discussing the Duplex demo. “Because their main hypothesis was ‘can you distinguish this from a real person?’. In this case it’s unclear why their hypothesis was about deception and not the user experience… You don’t necessarily need to deceive someone to give them a better user experience by sounding naturally. And if they had instead tested the hypothesis ‘is this technology better than preceding versions or just as good as a human caller’ they would not have had to deceive people in the experiment.”

In short, although Google might be successful in tackling engineering issues, it might be less so in tackling ethical ones.

Data? Facebook takes it all

In 2014, Facebook bought WhatsApp, a messaging app used by millions of people, for $19 billion. From the start, WhatsApp was not conceived as an application to be used for advertising purposes. WhatsApp's founders, who had worked a combined twenty years at Yahoo, didn't want their app to have anything to do with advertising.

As pointed out in a 2012 blog post entitled "we don't sell ads," which is worth reading:

Brian and I spent a combined 20 years at Yahoo!, working hard to keep the site working. And yes, working hard to sell ads, because that's what Yahoo! did. It gathered data and it served pages and it sold ads.

We watched Yahoo! get eclipsed in size and reach by Google... a more efficient and more profitable ad seller. They knew what you were searching for, so they could gather your data more efficiently and sell better ads.

These days companies know literally everything about you, your friends, your interests, and they use it all to sell ads.

When we sat down to start our own thing together three years ago we wanted to make something that wasn't just another ad clearinghouse. We wanted to spend our time building a service people wanted to use because it worked and saved them money and made their lives better in a small way. We knew that we could charge people directly if we could do all those things. We knew we could do what most people aim to do every day: avoid ads.

No one wakes up excited to see more advertising, no one goes to sleep thinking about the ads they'll see tomorrow. We know people go to sleep excited about who they chatted with that day (and disappointed about who they didn't). We want WhatsApp to be the product that keeps you awake... and that you reach for in the morning. No one jumps up from a nap and runs to see an advertisement.

Advertising isn't just the disruption of aesthetics, the insults to your intelligence and the interruption of your train of thought. At every company that sells ads, a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it's all being logged and collated and sliced and packaged and shipped out... And at the end of the day the result of it all is a slightly different advertising banner in your browser or on your mobile screen.

Remember, when advertising is involved you the user are the product.

At WhatsApp, our engineers spend all their time fixing bugs, adding new features and ironing out all the little intricacies in our task of bringing rich, affordable, reliable messaging to every phone in the world. That's our product and that's our passion. Your data isn't even in the picture. We are simply not interested in any of it.

When people ask us why we charge for WhatsApp, we say "Have you considered the alternative?"

In 2016, almost two years after the Facebook acquisition, WhatsApp's terms of service were changed to include more "integrations" between WhatsApp and Facebook products. Put shortly, that meant Facebook started to leverage WhatsApp data to become more valuable to businesses, and thus earn more money on its advertising platform.

"There is nothing new to the rise of those advertising giants," said average Joe

Another common argument you might hear is that advertising isn't new, and that just as mass-media technologies like TV and radio took over the world in the previous century, so search engines and social media are taking over our world now.

That isn't the case, for several reasons; I'll focus on a couple of them here. First, TV and radio were technologies, not tech giants. Today, we're witnessing the rise of tech giants, like Google and Facebook, that alone can control the attention of billions of people on a global scale. Second, back in the day advertisers had colossal power thanks to mass-media technologies, but their message was highly undifferentiated. In short, they had to communicate the same message to millions of people. That made the message broad and widespread, creating the so-called "pop cultures." Today, instead, social media like Facebook allow advertisers to send very specific messages to a narrow audience by using personal data to segment people at the individual level. Rather than creating pop cultures, this generates "filter bubbles" that might reinforce our biases.

Beware of the commercial logic

One heuristic I think each of us can use to be more aware of the logic behind those tech giants is to understand how they make money. Just as in real life, when we deal with someone who is trying to sell us something, we become more aware that the person might have hidden interests, I believe the best way to deal with these modern technology tools is similar to the way you'd deal with a salesman on the street. That's also why, on this blog, I often tackle business models. I believe they can give you great insight into how those companies "think" and how they want you to behave to maximize their profits.

Shouldn't technology create more sustainable business models?

By the end of the 1800s, newspapers had become advertising outlets, sold for a meager fee while monetizing mostly through business ads. That was the beginning of an industry - the news industry - carrying intrinsic biases and distortions that persist today. As technology advances, humanity hopes it will build a better world. In part, this belief rests on the assumption that the people behind those technologies are engineers, and thus not subject to common biases. In reality, an engineer might not be equipped to understand ethical matters, or even think they are important. And engineers might be the worst people to advance our society on this front, for a simple reason: ethics is often not a matter of optimization. Quite the opposite - when you introduce a utilitarian or optimization metric into an ethical dilemma, you might make it worse.

As Google and Facebook showed, the same engineers who invented super-smart tools also adopted old business models, mainly borrowed from the media industry, which embraced them in modern times (by the mid-1800s). Thus, the question that keeps staring back at me is: "shouldn't technology also innovate its monetization strategies to create more sustainable business models?"

Once again, I am probably, just like the rest of the world, putting too much hope into what technology can and should do. Therefore, the best way forward is to always be a skeptic and ask any tech giant promising us the moon, "how do you make money?"



