
Explainable Artificial Intelligence: How Does it Work?

It has been said that questions of “why” are best left to philosophy, while questions of “how” belong to science.

The field of Artificial Intelligence (AI) is often where these worlds intersect and at times, where they collide. “I don’t know why it works, but it works” doesn’t actually work when we talk about AI. Instead, we need Explainable Artificial Intelligence (XAI).

For example, why did the software used by the cardiac surgery department select Mr. Rossi out of hundreds of people on the waiting list for a heart transplant? Why did the autopilot steer the vehicle off the road, injuring the driver but avoiding the mother pushing a stroller across the street? Why did the monitoring system at the airport choose to search Mr. Verdi instead of other passengers?

In a previous post we talked about the need to understand the logic behind an AI system, but we still need to go deeper to understand the benefits of XAI compared with an opaque, unknowable system.

AI presents many dilemmas for us, and the answers are neither immediate nor unambiguous. However, I believe we can say that both “how” and “why” should be addressed. “To trust AI, I have to better understand how it works” is one of the comments we hear most often.

The wave of excitement around AI, no doubt fueled by the marketing investments of major IT players, is now subsiding. The disappointment caused by the resounding failure of software that promised to learn perfectly and automatically (and magically), and therefore to reason at a human level, is bringing expectations about the real uses and benefits of AI back in line with reality.

Today AI is generally a little bit “artificial” and a little bit “intelligent.” In many situations, it is not very different from traditional software: a computer program capable of processing input, based on precise pre-coded instructions (the so-called source code), that returns an output. The difference compared to 20 years ago is the greater computing power (today we have supercomputers) and a much larger number of inputs (the infamous big data).

So, to understand how AI works, you need to know how the software works and whether it operates with any prejudices; that is, is a given output actually the result of the input, or is it predetermined regardless of the input?
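To make this concrete, consider a deliberately tiny sketch (the function, features, and weights are invented for illustration, not taken from any real system). Traditional software in this sense is just pre-coded instructions turning an input into an output, and the most basic sanity check is that the output actually depends on the input:

```python
# A toy rule-based scorer: precise pre-coded instructions that
# turn an input into an output, like traditional software.
def triage_score(age: int, urgency: int) -> float:
    """Hypothetical scoring rule; the weights are illustrative only."""
    return 0.7 * urgency - 0.01 * age

# Probe: if the score never changes as the inputs change, the output
# is predetermined and the system is not really reasoning on its input.
samples = [(40, 1), (40, 9), (75, 9)]
scores = [triage_score(a, u) for a, u in samples]
assert len(set(scores)) > 1, "output does not depend on input"
for (a, u), s in zip(samples, scores):
    print(f"age={a}, urgency={u} -> score={s:.2f}")
```

If every input produced the same score, the “decision” would be predetermined, which is exactly the kind of prejudice we want to rule out.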

The first aspect can be tackled by an XAI system, where the X stands for “explainable.” In practice, the mechanisms and criteria by which the software reasons, and why it produces certain results, must be clear and evident.
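What might “clear and evident” look like in code? Here is a minimal glass-box sketch (the features, weights, and patient record are hypothetical): every score comes with a per-criterion breakdown, so anyone can see why the software produced that result:

```python
# A glass-box scorer: the criteria are explicit weights, and every
# decision comes with a per-feature breakdown of its score.
WEIGHTS = {"urgency": 0.6, "wait_months": 0.3, "compatibility": 0.1}

def explain_score(patient: dict) -> tuple[float, dict]:
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, parts = explain_score(
    {"urgency": 8, "wait_months": 14, "compatibility": 9}
)
print(f"total score: {score:.1f}")
for feature, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {value:+.1f}")
```

Real XAI tooling is far richer than a weighted sum, of course; the point is that each output arrives with its reasons attached.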

The second aspect requires an ethical AI that is free from prejudices, whether introduced by those who programmed the software, by the dataset used for learning, or through a cyber attack.
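Of the three sources of prejudice just named, the training dataset is the easiest to probe directly. A deliberately simplified check (the records below are invented) compares outcome rates across a sensitive attribute before any learning takes place:

```python
from collections import defaultdict

# Hypothetical training records: (group, selected_for_transplant).
records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 0)]

# Positive-outcome rate per group: a large gap in the training data
# is a pattern a model can learn and reproduce as prejudice.
totals, positives = defaultdict(int), defaultdict(int)
for group, label in records:
    totals[group] += 1
    positives[group] += label

for group in totals:
    rate = positives[group] / totals[group]
    print(f"group {group}: selection rate {rate:.0%}")
```

A large gap does not prove discrimination by itself, but it is precisely the kind of pattern a model will happily learn and reproduce.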

An AI system whose functioning is impossible to understand is a black box and the opposite of an XAI and an ethical system. To me, a black box is a far worse solution than a glass box (or a white or clear box, for that matter).

That’s why the “I don’t know why it works, but it works” approach can’t work when we talk about Artificial Intelligence.

English translation by Analisi Difesa
