
Living with AI – Part 1: Sooner Than We Think

From Fritz Lang’s 1927 film Metropolis to Ridley Scott’s 2017 Alien: Covenant, science fiction cinema has imagined intelligent machines as part of mankind’s future. For nearly a century we have been speculating about the form and nature of artificial intelligence (AI), revising our visions of what it might look like once technology reaches that critical tipping point where machines appear to think, act, and feel like humans, and then speculating again as our knowledge and understanding push forward. All the while we cast our eyes to a distant horizon, a “someday” in an unspecified future when we may need to address the real-world issues of living in an age of intelligent machines. But now some in the scientific community are concerned that we may see the rise of AI not just in our lifetime, but perhaps as soon as the next decade.

It could be that the technological leaps in information technology over the past 50 years have lulled us into a false sense of security. So far, we have been able to absorb many life-altering technologies without even minor disruptions to our daily lives: email, the World Wide Web, cellular communications, the pervasive spread of wifi networks, the dozens of devices in our homes and cars designed to move information from place to place without intruding on our time. Smartphones and digital assistants like Google Home and Amazon Alexa now sit in our living rooms to answer any question that might flit across our minds. But what about that tipping point, the day we ask our AI device for something and it says “No”? That day may be sooner than any of us want to think about.

Even as I write this, corporations like Google, Tesla, Apple, Amazon, Microsoft, and others are racing to be the first to bring an Artificial General Intelligence (AGI) to market and, consequently, into our lives. No doubt countless smaller companies and academic institutions are also working on the problem. The economic advantages of developing and licensing the world’s first AGI could be staggering. But films like “The Matrix” and “Terminator” have warned us that a super-intelligent AGI might not have the best interests of flesh-and-blood humans foremost in its programming. And there may be no guarantee we can control it once we let the genie out of the bottle.

Beyond task-oriented AI

What exactly is Artificial General Intelligence, and what distinguishes it from the common notion of AI represented by systems like IBM’s Watson, which competed successfully on the TV quiz show Jeopardy? To date, systems like Watson and Google’s AlphaGo, which defeated world champion Lee Sedol at the complex strategy game Go, rely on extremely fast evaluation of potential answers or moves within boundaries set by their human programmers. In essence, these systems solve logic and selection problems at incredible speeds, processing more information faster than any human could. We could call this Artificial Narrow Intelligence (ANI), since the systems are restricted to a narrow set of tasks and have been programmed to solve them using specific criteria. Make no mistake, ANI systems can look plenty smart and can even come up with creative ways to use their programming to solve tasks, but they are limited in one critical area.

Artificial General Intelligence (AGI) would allow systems to break free of these human-imposed limitations. Imagine a system that, upon discovering it did not have enough hardware to solve a task, could manufacture and integrate new hardware into itself and extend its capabilities. Consider, too, that such a system might discover it did not have enough information to complete a task and could then add whatever hardware (for extending input options) or software (for data connections or improved analysis and storage) it needed to fill the gaps in its knowledge. Finally, consider that the system could evaluate the tasks it has been assigned and determine whether there is a better or more appropriate set of tasks it should pursue to achieve its goals. If this sounds a lot like how humans operate, you would be exactly right. The easiest way to conceptualize AGI is to think of an AI that would be indistinguishable from a human mind, with one important difference: its capabilities could exceed our own.

It has long been believed that the barrier to true AGI was the hardware. How long will it take to develop computing technology that can simulate the human brain’s roughly one hundred billion neurons and the thousands of connections each neuron makes? The rate of advancement in computing hardware has been nothing short of breathtaking, and with the advent of research into the relatively new science of quantum computing, there is every possibility that our computing technology will advance even more rapidly. As it stands, IBM researchers have already assembled systems that can simulate brain-like activity, although at greatly reduced speeds compared to human thought. But perhaps the most important lesson learned is that we may not need to emulate the human brain in order to create AGI. Recent research has demonstrated that technologies such as neural networks can achieve stunning levels of AI without modeling the human brain at all.

Harbingers

Research into AGI has been ongoing for decades. Perhaps the greatest danger in this research is that separate groups have been pursuing the goal of AGI in a competitive, rather than cooperative, environment. There is incredible financial incentive to be the first to bring an AGI online. But do we know who will be first, and can we be sure they have thought through all of the potential problems an AGI might bring? There are those, like Elon Musk and Sam Harris, who fear a future of AGI overlords that might, at best, make humans slaves to their goals or, at worst, decide we are unnecessary and wipe us all out. There are also optimists, like inventor and futurist Ray Kurzweil and Google’s Larry Page, who believe that with the help of super-intelligent AGI, humans will be able to build a utopian society free of disease and suffering, where every human can reach their potential for happiness.

But how will that future be decided if we have pockets of researchers all competing to be the first to bring an AGI online? Enter Max Tegmark and the Future of Life Institute. Founded in 2014, the Future of Life Institute brings together a veritable who’s who of the world’s leading AI researchers in a cooperative venture to share concerns and solutions around safety and human prosperity in an age of AGI. The Institute has set itself the task of being a coordination point for information and a facilitator of cooperative research and development, not just of AI systems but of technologies that can secure and monitor both future and existing systems in order to detect and mitigate threats from malicious ones.

The future could be scary. And that future might not be very far away at all. As Sam Harris has speculated, it could happen in our lifetime. But thanks to the efforts of a handful of visionaries, the wider scientific and technological communities are thinking in much broader terms about the impact AGI could have on human society and how we will survive as a species after its arrival. The Institute has produced an Open Letter on AI that currently has over 8,000 signatories, including Bill Gates, Elon Musk, Stephen Hawking, dozens of university professors and researchers from around the world, and representatives from tech companies such as Google, Microsoft, Facebook, Amazon, and IBM.

Coming soon to a planet near you?

AGI is not here yet, but it is coming, perhaps faster than we ever thought possible. Fortunately, a lot of great minds are starting to consider what it will mean for our world as it gets closer to reality. Philosophers, sociologists, ethicists, technologists, and experts from other disciplines are beginning to consider what it might be like to have a super-intelligent general AI among us. The questions that will need to be answered are far too large and numerous to be tackled in a single article or even a series of articles. But there are a few questions and ideas that we should all be thinking about as we go into the future. I’ll be covering some of those in upcoming articles in this series.

There are a lot of reasons to be hopeful about Artificial General Intelligence and humanity’s future. I strongly recommend the book Life 3.0 by Max Tegmark for a great overview of what the near future might hold. We’ll take a look at some of the issues raised in that book in the next installment of this series.

In the meantime, ask Google Home or Amazon Alexa: “What is AI?” Don’t be surprised that the answer is coming FROM an AI!

Photo Credit

Featured Image – A Health Blog, 2012


