
Why Did AI Research Drift From Strong To Weak AI?

The field of artificial intelligence was founded in the 1950s on a platform of ambition and optimism. Early pioneers were confident they would soon create machines displaying “Strong,” or human-like, AI. Rapid developments in computational power during that era contributed to an overall buoyant atmosphere among researchers.

Nearly 70 years later, Strong AI remains out of reach, while the market overflows with “Weak” or “Narrow” AI programs that learn through rote iteration or by extracting patterns from massive curated datasets, rather than from sparse experience the way humans do.

What happened to derail the ambitions of those early researchers? And how are cutting-edge researchers today looking to kickstart a resurgence toward true thinking machines?

The History of AI

While the notion of AI has drifted about in the minds of writers, scientists and philosophers for generations, the formal beginning of the field can be traced to several key developments in the mid-20th century. The Church-Turing thesis emerged in this period and is considered by some to be the central thesis of the AI movement. As invoked in AI, the thesis holds that if a problem cannot be solved by a Turing machine, then it also cannot be solved by human thought. The corollary is that if a human can behave in an intelligent way, so too can a machine. Some have even proposed “attempts to provide practical demonstrations of the Church-Turing thesis” as an alternative definition of AI research. Around the same time, in 1943, McCulloch and Pitts presented the first model of an artificial neuron. Their work, which examined how simple cells working in tandem can produce the immense complexity of the human brain, is the basis for the artificial neural networks that dominate many aspects of AI research today.
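To make the McCulloch-Pitts idea concrete, here is a minimal sketch of such a threshold neuron in Python; the weights and threshold are illustrative choices for this example, not values taken from the 1943 paper:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if the weighted sum of binary inputs
    reaches the threshold, otherwise stay silent (0)."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Wired as a logical AND gate: both inputs must be on for the unit to fire.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], weights=[1, 1], threshold=2))
```

Even a single unit behaves like a logic gate; networks of such units were McCulloch and Pitts’ argument that neuron-like elements can implement logical propositions.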

The key event precipitating the field of AI, and indeed the origin of the moniker “artificial intelligence” itself, was the 1956 Dartmouth Summer Research Project on Artificial Intelligence. It brought together the best and brightest in AI and related fields, including major players such as Newell, Simon, McCarthy, Minsky and Samuel, for a summer-long brainstorming session. Outcomes included the development of symbolic information processing, which offered a new paradigm in brain modelling. The attendees also extensively discussed “deep” versus “shallow” methods of problem solving: some researchers were interested in working on many “light” problems to seek their commonality, while others advocated a deeper focus on a single challenging problem. This division was accompanied by another split, between deductive logic-based methods and inductive probabilistic methods.

Coming out of the Dartmouth conference, the late 50s and 60s were a time of considerable optimism in the AI field. Computers began to accomplish tasks that seemed miraculous at the time – solving algebra problems, playing checkers and controlling robots. In 1965, Herbert Simon declared that “machines will be capable, within twenty years, of doing any work a man can do.” Funding flowed into AI research with few strings attached, and researchers were encouraged to pursue any and all avenues.

By the early 1970s, the honeymoon period was over. AI researchers had failed to grasp the enormity of the task they faced, particularly given the limited computing power available at the time. Funders were disappointed with the lack of results, and the cash dried up. This period eventually became known as the AI Winter.

To facilitate the rebirth of the field, scientists in the 1990s refocused. Rather than trying to achieve general human-level intelligence, they concentrated on isolated problems that yielded tangible results. The huge increase in computing power between the 1970s and 1990s was largely responsible for the sudden successes in this arena.

In 1997, IBM’s Deep Blue became the first computer to beat reigning chess world champion Garry Kasparov in a match. The media attention given to this feat ushered artificial intelligence back into the limelight, but it also diminished the original ideals of the field. Deep Blue had nothing like human-level general intelligence: it was narrowly programmed to search for the best response to each position in a chess game. A subsequent IBM system, Watson, defeated champions of the game show Jeopardy! in 2011, though this too was more a result of painstaking engineering and raw computing power than human-like reasoning ability.
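Deep Blue’s actual engine combined purpose-built hardware with a hand-tuned evaluation function, but the flavor of that narrow, exhaustive style of play can be illustrated with a toy minimax search; the game callbacks below are hypothetical placeholders, not chess-specific code:

```python
def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Toy game-tree search: score every line of play down to a fixed depth
    and pick the move with the best guaranteed outcome. The callbacks
    get_moves, apply_move and evaluate are supplied by the caller for
    whatever game is being played."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None

    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, get_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Nothing in such a routine understands chess, let alone the world; it simply enumerates possibilities far faster than any human could.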

IBM’s breakthroughs brought forth the new paradigm of ‘intelligent agents’, and this seems to be where the field has settled over the past decade. Rather than pursuing true general intelligence, more and more companies and researchers are settling for Weak AI programs like Siri, Alexa, Cortana and chatbots. While each new revelation receives media attention and further hypes the AI brand, it also marks a drift away from the original goals of the field. Indeed, artificial intelligence is both more and less successful than it has ever been.

Modern Approaches to AGI

Not all researchers are content to settle for the Weak AI compromise, and dedicated purists continue to pursue true AGI (artificial general intelligence). Industry pioneer Ben Goertzel argues that there are four main approaches to consider, each with pros and cons but none providing a total solution.

The symbolist approach centers on the physical symbol system hypothesis, originally formulated by Newell and Simon and summarized by Nils Nilsson as the claim that “minds exist mainly to manipulate symbols that represent aspects of the world or themselves”. This approach remains popular because symbolic thought is considered by many to be the core of human intelligence, enabling the broad generalizations that characterize our intellect. However, symbolic processing alone, divorced from perception and motivation, has not yielded human-like intelligence.
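As a rough illustration of what “manipulating symbols” means in practice, the sketch below implements a tiny forward-chaining rule engine; the facts and rules are invented for this example rather than drawn from any particular symbolist system:

```python
# Knowledge is stored as plain symbols; inference is pure symbol manipulation.
facts = {"socrates_is_a_man"}
rules = [
    # (premises, conclusion): if every premise is a known fact, assert the conclusion.
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_is_not_a_god"),
]

changed = True
while changed:  # forward-chain until no rule adds a new fact
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the derived conclusions now sit alongside the original fact
```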

An alternative route is the emergentist or “sub-symbolic” approach, in which human-like behavior is expected to emerge from very low-level structures and programs. This approach is modeled on the human brain, wherein a massive number of simple units (neurons) work in tandem to create remarkable complexity. Both computational neuroscience and artificial life developed under the umbrella of the emergentist approach. Ben Goertzel argues that this approach is misguided, as developing the underlying structures without the accompanying information-processing architecture cannot yield a human-like artificial brain: our knowledge of the brain is too limited, and forcing algorithms that evolved in a living body onto a silicon substrate introduces unnecessary challenges.
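A minimal illustration of the emergentist intuition: no single threshold unit can compute XOR, yet a small network of identical simple units can, with the capability appearing only at the level of the whole. The weights below are hand-picked for this toy example:

```python
import numpy as np

def layer(x, weights, bias):
    """One layer of simple threshold units: weighted sum followed by a hard step."""
    return (x @ weights + bias > 0).astype(float)

# Hand-picked weights: the two hidden units detect "a OR b" and "a AND b";
# the output unit fires when OR holds but AND does not -- i.e. XOR.
w_hidden = np.array([[1.0, 1.0], [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])
w_out = np.array([[1.0], [-1.0]])
b_out = np.array([-0.5])

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    hidden = layer(np.array([a, b], dtype=float), w_hidden, b_hidden)
    output = layer(hidden, w_out, b_out)
    print(a, b, "->", int(output[0]))  # prints the XOR truth table
```

In a real emergentist system the weights are learned rather than hand-picked, but the point stands: the interesting behavior lives in the pattern of connections, not in any individual unit.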

A hybrid approach to developing AGI has been presented in recent years as a means to minimize the downsides of both the symbolic and emergentist approaches. The philosophy behind the hybrid approach is that the whole is often more than the sum of its parts: the right components in combination should produce more intelligent emergent behavior than either the symbolist or emergentist approach alone. This theory finds favor because it is consistent with the workings of the human brain itself. However, there is also a risk that merging two non-functional theories will only yield a more brittle non-functional result.

Finally, there is the universalist approach, which stands apart from the previous three. In this approach, “perfect” AGI algorithms are designed that would yield a functional AGI given unlimited computing power; the algorithms are then modified to fit within the computing constraints of the present era. The primary case study of this approach is Marcus Hutter’s AIXI system.
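In outline, and following Hutter’s own formulation rather than adding anything new, AIXI chooses each action by an expectimax over all computable environments, weighting each environment (a program q on a universal Turing machine U) by its simplicity:

$$ a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

Here the a, o and r terms are actions, observations and rewards up to a horizon m, and ℓ(q) is the length of program q. The inner sum over all programs is incomputable, which is exactly why practical work in this vein focuses on scaled-down approximations.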

Tracing the history of AI development, it is clear how the cycle of funding yielding results, which in turn attract more funding, has dictated the field’s progression from Strong to Weak AI. Breakthroughs in Weak AI can come rapidly, and each receives considerable public attention coupled with further investment of resources.

At this point, the line between Strong and Weak AI has become so blurred in the eyes of the public that few even know it exists. True AGI offers the potential to revolutionize computer science, robotics, and even philosophy, but its development requires a group, or perhaps even generations, of dedicated researchers.
 
