
When Will AI Become Less Artificial and More Intelligent?

From Wikipedia:

“Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.”

The idea is to make machines (software and hardware) function in a manner similar to humans, automating the human capacity to learn, make decisions and accumulate experience.

AI algorithms fall into many classes. Wikipedia reports these:

  • Search and optimization
  • Logic
  • Probabilistic methods for uncertain reasoning
  • Classifiers and statistical learning methods
  • Neural networks
  • Deep feed-forward neural networks
  • Deep recurrent neural networks
  • Control theory
  • Languages
  • Evaluating progress

Most of the above techniques, if not all of them, are quite old, and anyone who has some grey hair saw this stuff 20-30 years ago. We all agree that these techniques are quite successful and that AI is still in its infancy. But that is not the point of this blog.

The question we wish to ask is this: what, at this point, can we do to make AI less artificial and more intelligent, almost indistinguishable from humans? What can we do to take AI to another level, to provoke a quantum leap?

Let us first examine what the word “intelligence” means by looking at its etymology. It comes from the Latin “intelligentia” (understanding, knowledge, power of discerning), from the assimilated form of inter (“between”) + legere (“to choose, pick out”). Let us focus on the “choose, pick out” part in conjunction with the “inter”, i.e. “between”, part. Humans make choices, select options, weigh scenarios and discriminate between different solutions hundreds, if not thousands, of times a day. Sometimes this is done using tools; in other cases it is based on gut feeling, intuition or experience. It is this capacity to select an option or a strategy that makes humans unique. Life, with all its complexities and nuances, offers almost infinite ways and means of setting goals and then reaching them. No two people will do things in exactly the same manner.

Now, we don’t want to get too philosophical here. The idea is simply to state that a generic problem-solving process goes more or less like this (a minimal code sketch follows the list):

  1. Problem statement (definition)
  2. Verification that the problem actually has a solution
  3. Selection of solution method (there may be many)
  4. Solution
  5. Verification of result
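As a rough illustration, the five steps can be written as a generic solve loop. Everything here is a hypothetical placeholder (has_solution, candidate_methods, check and so on); it is a sketch of the process, not any particular library:

```python
# A minimal sketch of the generic problem-solving process above.
# All names (problem.has_solution, candidate_methods, check, ...) are
# hypothetical placeholders, not part of any real API.

def solve(problem):
    # Step 1: the problem statement is assumed to be encoded in `problem`.
    # Step 2: verify that the problem actually has a solution.
    if not problem.has_solution():
        raise ValueError("the problem has no solution")
    # Step 3: select a solution method (there may be many candidates).
    method = min(problem.candidate_methods(), key=lambda m: m.expected_cost)
    # Step 4: solve.
    result = method.apply(problem)
    # Step 5: verify the result.
    if not problem.check(result):
        raise RuntimeError("result failed verification")
    return result
```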

This is, of course, a very coarse description. A generic example is illustrated below: suppose that one has to cross a network (or some domain) from left to right.

Suppose that we identify three possibilities, represented by the three paths shown in the figure. Suppose, just for the sake of discussion, that each of these paths entails very similar energy expenditure, time, cost, risks, etc. Which path would you choose? All things being equal (or not necessarily all), an experienced and wise individual (or a good engineer!) would probably select the least complex alternative. Humans instinctively imagine multiple scenarios and assess their complexity, trying to stay away from the ones that will potentially make life complex in the next few minutes, days or years. This logic applies to running a family, a corporation or a battle scenario. High complexity leads to fragility.
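As a toy illustration of this decision rule, suppose the three paths have been scored somehow. The names and numbers below are invented for this example, not taken from the figure:

```python
# Hedged illustration: three candidate paths with roughly equal cost,
# time and risk. The tie is broken on complexity: lower is better.
# All scores are invented for this example.
candidates = {
    "path_1": {"cost": 100, "complexity": 3.1},
    "path_2": {"cost": 101, "complexity": 5.4},
    "path_3": {"cost": 99,  "complexity": 4.2},
}

least_complex = min(candidates, key=lambda p: candidates[p]["complexity"])
print(least_complex)  # -> path_1
```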

The key, therefore, is to be able to measure complexity. Since 2005, thanks to Ontonix, this has been possible. There is now an Italian standard, UNI 11613, which shows corporations how to perform a “Business Complexity Assessment”.

The bottom line is that today we have a consolidated technology known as QCM, or Quantitative Complexity Management. QCM provides measures of complexity, not sensations.

QCM is the foundation of Artificial Intuition and has been under development since 2005.

The way QCM works is simple. Suppose you need to design a turbine and you come up with two candidate designs, which may be represented by the two complexity maps shown below. The maps illustrate which parameters of the turbine are correlated with which other parameters. More correlations – interdependencies – mean the system is more intricate, more difficult to understand and to fix. The first solution has a complexity of 3.03 cbits, with 12 correlations between its 10 parameters.

The more complex solution, shown below, has a complexity of 5.2 cbits, with 19 correlations between the same ten parameters.

Provided both solutions are acceptable, the less complex one is clearly the better choice. This is intuitive.
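The cbit metric behind these numbers is Ontonix’s own and is not reproduced here. Purely as an illustrative proxy, in the spirit of the maps above, one could count the strong pairwise correlations among a design’s parameters:

```python
import numpy as np

# Illustrative proxy only: Ontonix's cbit measure is not public, so we
# simply count strong pairwise correlations among design parameters.
# More interdependencies -> a more intricate, harder-to-fix design.

def strong_correlations(samples: np.ndarray, threshold: float = 0.7) -> int:
    """samples: (n_observations, n_parameters) matrix of design data."""
    corr = np.corrcoef(samples, rowvar=False)   # parameter-vs-parameter
    i, j = np.triu_indices(corr.shape[0], k=1)  # each pair counted once
    return int(np.sum(np.abs(corr[i, j]) >= threshold))

# Two hypothetical 10-parameter designs, 200 simulated observations each:
rng = np.random.default_rng(0)
design_a = rng.normal(size=(200, 10))            # independent parameters
design_b = design_a @ rng.normal(size=(10, 10))  # mixing adds correlations
print(strong_correlations(design_a), strong_correlations(design_b))
```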

We believe that AI-based systems should incorporate a QCM layer that measures the complexity of the solutions provided by the computational kernel – clearly, the kernel would need to provide multiple solutions – so as to select the least complex ones that satisfy objectives and constraints. If this is done in real time, it will be difficult to distinguish man from machine.
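A sketch of such a layer might look like this, where kernel, complexity and is_feasible are hypothetical stand-ins for a multi-solution solver, a complexity measure (such as the proxy above) and a constraint check:

```python
# Hedged sketch of the proposed AI + QCM layering: ask the kernel for
# several candidate solutions, discard the infeasible ones, and return
# the least complex of the rest. All arguments are hypothetical.

def qcm_select(kernel, problem, complexity, is_feasible, n_candidates=10):
    candidates = kernel(problem, n_candidates)       # multiple solutions
    feasible = [s for s in candidates if is_feasible(s)]
    if not feasible:
        raise RuntimeError("kernel produced no feasible solutions")
    return min(feasible, key=complexity)             # least complex wins
```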

One important difference between man and machine is that humans are capable of original and creative ideas. It is very difficult to hard-wire something like that into an algorithm. But QCM can help here too. The two simple complexity maps illustrated above are in reality topological sums of a number of other maps, called modes or attractors. These modes may be selected and assembled in a myriad of ways, some of which may be counter-intuitive or simply original. In high-dimensional spaces, the number of such modes can be very large indeed.
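To give a flavour of the combinatorics, here is a toy sketch under an assumed reading of “topological sum”: each mode is taken to be a set of parameter interdependencies (edges), and an assembly is their union. This interpretation is ours, made purely for illustration:

```python
from itertools import combinations
from math import comb

# Illustration under an assumed reading of "topological sum": each mode
# is a set of parameter interdependencies (edges) and an assembly is
# their union. Even a toy example shows the combinatorial growth.
modes = [
    {("a", "b"), ("b", "c")},
    {("c", "d")},
    {("a", "d"), ("d", "e")},
]

assemblies = [set().union(*chosen)
              for r in range(1, len(modes) + 1)
              for chosen in combinations(modes, r)]
print(len(assemblies))                          # 2**3 - 1 = 7
print(sum(comb(50, r) for r in range(1, 51)))   # 50 modes: ~1.13e15
```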

In essence, we propose to move AI to AI+QCM.

Imagine the benefits that could be obtained if AI, which will penetrate industry and pervade our lives and our homes, were to reduce complexity just as humans instinctively do. Imagine, for example, driving strategies for autonomous vehicles that reduce traffic complexity. Just imagine how less complexity can mean more efficiency, fewer delays, less waste and less risk. Bottom line:

To become less Artificial, AI should integrate QCM, i.e. Artificial Intuition

Our world is quickly getting more complex. Every year we measure the complexity of the world as a system, based on over 250,000 parameters published by the World Bank. We can say that today the world is approximately 500% more complex than it was in the early 1970s. Moreover, we have created technologies that are rapidly increasing complexity everywhere; think of the Internet of Things. How far do we think we can take things without actually managing complexity? Can we simply keep growing more complex with impunity? Certainly not. There exists so-called critical complexity, which is a sort of Pandora’s box, except that it tells you how far you can go. You need to stay away from critical complexity if you want to avoid a systemic collapse. AI, in conjunction with QCM, can play a crucial role, not just in delivering sexier, more human-like solutions to a bunch of problems, but also in helping our global society stay on a path of resilient sustainability.

www.ontonix.com


