
AI And "Pseudo-AI": Knowing Which Works Better Within Their Own Limitations

When a trend emerges, all eyes turn to it, curious about what it may hold. And if that trend proves interesting and beneficial, it doesn't take long before everyone wants to take part.

The same goes for Artificial Intelligence (AI): the intelligence demonstrated by machines.

While Google, Apple, Amazon, Facebook, IBM and some other big tech companies are integrating AI into their businesses, as well as advancing the technology through research, others who really want to jump on the bandwagon are forced to stay put, find a cheaper way to implement it, or just lie.

Building a service around AI is hard. So hard, in fact, that many companies, big or small, have found it cheaper to make humans behave like computers than to make computers behave like humans.

And much of the time when it comes to AI, there is a person behind the curtain rather than "pure" algorithms.

The reason is that building an AI requires a ton of data. And sometimes, companies want to know whether there is sufficient demand for a service before making the investment.

So not only has the AI trend started a race among capable companies toward what's called "AI supremacy", but those that find AI difficult to use, yet really want to, have quietly used humans to do bots' work.

This so-called "pseudo-AI" is hardly a secret, even though some companies have kept their reliance on humans hidden from both users and investors.

Some tech companies sitting on a lot of resources have offered on-demand services that allow those with no machine learning expertise to build custom AI models. These services can cover everything from importing data to tagging it and training the model until the AI is complete.

Using humans to do an AI's job allows companies with limited resources and time to skip the technical and business development challenges. Obviously, this strategy won't scale, but it lets them build something and skip the hard part early on.
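The "human behind the curtain" pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real company's system: a service nominally answered by a model, where anything the model is unsure about is quietly routed to a queue of human workers. All class and function names here are invented for the example.

```python
# A hypothetical sketch of the "pseudo-AI" pattern: the service answers
# confidently when its stand-in "model" knows the request, and silently
# escalates everything else to a human queue.
from dataclasses import dataclass, field

@dataclass
class PseudoAIService:
    confidence_threshold: float = 0.8
    human_queue: list = field(default_factory=list)  # tasks awaiting a person

    def model_predict(self, request: str) -> tuple:
        # Stand-in for a real model: it only "knows" requests it has seen
        # before, and returns low confidence for everything else.
        known = {"hello": ("Hi there!", 0.95)}
        return known.get(request, ("", 0.1))

    def handle(self, request: str) -> str:
        answer, confidence = self.model_predict(request)
        if confidence >= self.confidence_threshold:
            return answer                     # the AI really did the work
        self.human_queue.append(request)      # a human will answer later
        return "We'll get back to you shortly."

service = PseudoAIService()
print(service.handle("hello"))            # handled by the "model"
print(service.handle("book me a table"))  # silently escalated to a human
print(len(service.human_queue))           # one task now waits on a person
```

From the user's side both replies look like "the AI" answered; only the operator sees the queue growing, which is exactly why this approach cannot scale.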

What's more, humans are still more reliable than robots at certain kinds of problem-solving and decision-making.

Take fake news, for example: it went viral partly because it was ranked as popular by the algorithms behind search engines and social media networks. Yet given the enormous volume of user-generated content those platforms handle, relying on humans instead of algorithms is far less practical at that scale.

If companies the size of Google and Facebook were to rely entirely on humans to do all the work, the job would overwhelm their employees.

But even so, there is no denying that humans can still do some of that work better than computers.

For this particular reason, humans should work side-by-side with AI.

Facebook experienced this when fake news surfaced on its ill-fated Trending section. The company first relied on humans, then switched to algorithms instead. When the AI failed, Facebook updated it, but it failed again, and the company eventually removed the feature.

How Human Do We Want Computers To Be?

The ultimate goal of creating AI is to make computers capable of doing humans' work without human intervention, and to do that work faster, more reliably, more cheaply and more efficiently.

Then there is the matter of privacy. Here, humans and bots are treated very differently.

For example, email companies that use AI to scan users' personal email messages to improve "smart replies" or inject contextual ads based on an email's content may be seen by users as innovative and capable. Those users may even be happy with the AI doing its work to improve their experience.

Research has even shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one's vulnerability.

But if those companies publicly admitted that humans were the ones doing all the work, the result would be catastrophic. Not only would they lose users' trust; they might lose investors as well.

The case is more acceptable if companies say that they use humans to train the AI and improve the system, using training data that doesn't involve users' personal information. They can then unleash the AI, with users' permission, only when it's ready.

But since AI has become a trend, everyone wants to take part in it. Less capable companies, or even those that could deploy AI at scale but are too scared to unleash its full potential, may fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence.

The next case is when computers start becoming more human, and end up scaring us.

One example is Google Duplex. When the company first revealed it to the public, the robot assistant made eerily lifelike phone calls, complete with "ums" and "ers", to book appointments and make reservations. While at one point we wanted computers to be more like humans, seeing computers become more human also scares us.

After an initial backlash, Google said its AI would identify itself to the humans it spoke to.

Elon Musk has publicly said that the greatest risk we face as a civilization is AI. To mitigate it, he recommended that humans merge with computers.

Musk, along with Bill Gates, Stephen Hawking and many other luminaries, has warned that AI could be a major threat to human existence.

Though never intentionally, AI has indeed killed humans.

Examples include Elaine Herzberg in Tempe, Arizona, who died in an accident involving a self-driving car, and Joshua Brown from Ohio, who put his Tesla Model S into Autopilot mode and crashed into an articulated lorry.

As Google's Sundar Pichai once said, AI is more profound than electricity or fire. Both can kill people, and so can AI.

Humans need to learn how to harness these technologies for the benefit of humanity while overcoming their downsides.

Conclusion

AI is just like any other technology we've discovered, invented and developed throughout our existence.

When humans harnessed fire, we could cook food, making it more delicious and healthier. We could fend off wild animals and warm our homes. We could even travel at night as if it were daylight, something that was otherwise difficult or impossible at the time.

But we also grew to fear fire. If it's not controlled, fire is deadly. And not only back then: humans are still afraid of fire today.

The same goes for AI.

The technology can indeed help us do things faster and more efficiently. A capable AI can do humans' work many times more effectively, and more cheaply. Some of us have even put such faith in AI that it has birthed a religion based on artificial intelligence.

But still, AIs can pose a lot of problems if we don't know how to control them.

Companies should always be transparent about how their services operate. Because in the age of the internet and connectivity, everyone is a stranger. And there is no telling whether the "humans" we know on the web are actually bots.

Trust has become something even more expensive; it's either them, or us.

AI
Trends
Business


This post first appeared on Eyerys | Eyes For Solution, please read the original post: here
