
MIT and Microsoft create model that detects blind spots in the AI of autonomous vehicles

Researchers at MIT (Massachusetts Institute of Technology) and Microsoft have developed a novel model that identifies cases in which autonomous systems have "learned" from training examples that do not match what actually happens in the real world. In other words, the model identifies the "blind spots" of Artificial Intelligence in autonomous systems.

This model, according to MIT News, could be used in the near future to improve the safety of Artificial Intelligence systems, such as autonomous vehicles and robots.

For example, the Artificial Intelligence systems that control autonomous vehicles are trained in extensive virtual simulations so that, when a situation arises on the road, they are ready to make the right decision to stop, turn, or perform any other maneuver. Sometimes, however, errors occur and the vehicles do not react as they should.

The MIT researchers, together with their Microsoft collaborators on the project, presented a paper at the Autonomous Agents and Multiagent Systems conference and will soon present another at the Association for the Advancement of Artificial Intelligence conference, in which they describe a model that discovers these "blind spots" by combining machine learning with human intervention.

To the traditional training process for these kinds of artificial intelligence systems, the researchers added monitoring by a human who observes the system's actions as it operates in the real world and provides feedback whenever the system is about to make an error or has already made one.

Once training is finished, the researchers combine the training data with the human feedback and, through machine learning, produce a model that identifies these situations so the errors can be avoided.
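To make that idea concrete, here is a minimal sketch of the general approach under some simplifying assumptions: simulation states are treated as acceptable, human feedback labels real-world states as fine or erroneous, and a generic off-the-shelf classifier stands in for the researchers' actual method. The feature names, data values, and choice of classifier below are purely illustrative, not the paper's implementation.

```python
# Minimal sketch of the blind-spot idea described above (hypothetical data
# and feature names; a generic classifier stands in for the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# States the agent saw in simulation, as feature vectors
# (e.g., distance to obstacle, closing speed) -- purely illustrative.
sim_states = np.array([
    [10.0, 2.0],
    [ 8.0, 1.5],
    [ 6.0, 3.0],
    [ 4.0, 1.0],
])

# Human feedback gathered while the agent acted in the real world:
# 1 = the human flagged an error (actual or imminent), 0 = behavior was fine.
real_states = np.array([
    [ 9.0, 2.0],
    [ 3.0, 4.0],   # human intervened: agent failed to brake in time
    [ 5.0, 3.5],   # human flagged a near-miss
    [ 7.0, 1.0],
])
human_labels = np.array([0, 1, 1, 0])

# Combine the simulation states (assumed acceptable, label 0) with the
# human-labeled real-world states and learn a "blind spot" predictor.
X = np.vstack([sim_states, real_states])
y = np.concatenate([np.zeros(len(sim_states), dtype=int), human_labels])

blind_spot_model = RandomForestClassifier(n_estimators=50, random_state=0)
blind_spot_model.fit(X, y)

# At run time, the agent can query the model and fall back to a safe
# behavior (or defer to a human) when a state looks like a known blind spot.
candidate_state = np.array([[4.5, 3.8]])
risk = blind_spot_model.predict_proba(candidate_state)[0, 1]
print(f"estimated blind-spot probability: {risk:.2f}")
```

The point of the sketch is simply that human feedback supplies the labels the simulation could not, so the learned predictor can warn the system when it is entering a situation its training never prepared it for.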

One of the paper's authors, Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, noted the following:

The model helps autonomous systems better know what they don't know. Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.

