
Researchers Created 'PLATO', And Tried To Improve The AI By Making It 'Think' Like A Baby

If someone hides a pen behind their back, everyone would expect the pen to still be there. Even though they can no longer see it, everyone would agree that the pen still exists.

In a world full of biases and differing opinions, it's safe to say that everyone agrees on this. Hiding a pen behind one's back and expecting it to still be there is simple common sense about the physical world, universally understood by all humans.

That includes children.

Even toddlers who cannot yet speak share this common understanding.

Researchers are still puzzled by how humans achieve this, and hope that understanding it could help them develop more advanced computers.

In one approach, research by Luis Piloto and his colleagues at DeepMind set out to create a deep-learning AI system that can learn some of humanity's common-sense laws of physics.

And to do that, they created an AI to think like a child.

In their research paper:

The researchers nicknamed it 'PLATO', short for Physics Learning through Auto-encoding and Tracking Objects.

"‘Intuitive physics’ enables our pragmatic engagement with the physical world and forms a key component of ‘common sense’ aspects of thought. Current artificial intelligence systems pale in their understanding of intuitive physics, in comparison to even very young children. Here we address this gap between humans and machines by drawing on the field of developmental psychology."

When researchers start an AI project, the model they create typically starts with a blank slate.

What makes it 'smart' is the data used for training, and the methods used to train it. Given enough time, the AI can construct knowledge from patterns in that data.

While researchers agree that the human brain works somewhat similarly, they also agree that babies think rather differently.

Instead of building knowledge from scratch, children start with some principled expectations about objects, and develop their knowledge from there.

Just as when a pen is hidden from view, a child expects the pen to still be there.

Sooner rather than later, children refine that knowledge with experience.

And here, the research by Piloto and his colleagues suggests that a deep-learning AI system modeled on what babies do outperforms a typical system that begins with a blank slate and tries to learn from experience alone.

They did this by introducing an open-source machine-learning dataset designed to "evaluate conceptual understanding of intuitive physics, adopting the violation-of-expectation (VoE) paradigm from developmental psychology." Then, the team built a deep-learning system that "learns intuitive physics directly from visual data, inspired by studies of visual cognition in children."
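The violation-of-expectation (VoE) idea can be sketched in code. This is an illustrative outline, not the authors' actual evaluation code: the function name, the per-frame error values, and the scoring rule shown here are simplifying assumptions. The core idea is that a model watches matched pairs of probe videos, one physically possible and one impossible, and is credited with a physical concept if the impossible video produces more "surprise" (prediction error).

```python
import numpy as np

def voe_concept_understood(errors_possible, errors_impossible):
    """Violation-of-expectation (VoE) scoring, sketched for illustration.

    Each argument is a sequence of per-frame prediction errors the model
    produced while watching a probe video. If the physically impossible
    video (e.g. an object vanishing behind an occluder) is more
    surprising overall than its matched possible counterpart, the model
    is credited with the underlying physical concept.
    """
    surprise_possible = float(np.sum(errors_possible))
    surprise_impossible = float(np.sum(errors_impossible))
    # "Understood" means the impossible event surprised the model more.
    return surprise_impossible > surprise_possible

# Hypothetical per-frame errors (not real model output):
possible = [0.1, 0.2, 0.1, 0.15]   # object reappears, as physics demands
impossible = [0.1, 0.2, 0.9, 0.8]  # object vanishes mid-occlusion
print(voe_concept_understood(possible, impossible))  # → True
```

A model that treats both videos as equally surprising, or the impossible one as less surprising, would fail this probe.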

When the researchers compared both approaches, they saw substantial differences.

Videos from the ‘freeform’ data used to train the AI models.

In the research, the team compared a typical blank-slate model with one that had "principled expectations" built into it.

In the blank-slate version, the AI model was shown various visual animations of objects, such as a cube sliding down a ramp or a ball bouncing off a wall. As expected, the model was able to detect patterns across the animations.

The team then tested its ability to predict outcomes with new visual animations of objects.

Then, Piloto and his colleagues compared the results to a model that had the principled expectations built into it before it experienced any visual animations.

The researchers expected PLATO to think "logically" in a common-sense way, hoping that the model could understand that a cube cannot bounce, whereas a ball can.

Piloto and his team found that the deep-learning model that started with a blank slate did a good job, but the model based on object-centered coding, inspired by infant cognition, did significantly better.

The latter model was able to accurately predict how an object would move, and was more successful at applying the expectations to new animations.
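The contrast between the two approaches can be made concrete. The sketch below is purely illustrative and is not taken from the PLATO paper: the `ObjectState` fields and the linear-motion rule are assumptions. It shows the basic idea of an object-centred representation, where a scene is a set of tracked objects whose futures are predicted individually, rather than a flat grid of pixels predicted all at once.

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    """One tracked object, as an object-centred model might represent it.

    The fields are illustrative, not taken from the PLATO paper.
    """
    position: tuple  # (x, y) centre of the object in the frame
    velocity: tuple  # (dx, dy) displacement per frame
    shape: str       # e.g. "cube" or "ball"

def predict_next(obj: ObjectState) -> ObjectState:
    # Per-object prediction: extrapolate each object's own state,
    # rather than predicting every pixel of the next frame at once.
    x, y = obj.position
    dx, dy = obj.velocity
    return ObjectState(position=(x + dx, y + dy),
                       velocity=obj.velocity,
                       shape=obj.shape)

ball = ObjectState(position=(0.0, 5.0), velocity=(1.0, -1.0), shape="ball")
print(predict_next(ball).position)  # → (1.0, 4.0)
```

Carving a scene into objects like this is the kind of prior the infant-inspired model starts with; a blank-slate model has to discover even the notion of "an object" from raw pixels.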

The model managed this by learning from a smaller set of examples. In this case, it accomplished this "common-sense" understanding after seeing the equivalent of just 28 hours of video.

PLATO displays robust effects across the probes in our dataset.

"We demonstrate that our model can learn a diverse set of physical concepts, which depends critically on object-level representations, consistent with findings from developmental psychology. We consider the implications of these results both for AI and for research on human cognition," the paper said.

The research confirmed that learning through time and experience is important. But it also showed that things are more complicated than that.

Piloto and his colleagues show the role perceptual data can play when artificial systems acquire knowledge.

And of course, it also shows how studies on babies can contribute to building better AI systems that simulate the human mind.

In one way or another, AIs can learn in a multitude of ways.

But children's approach to the physical world, with its "principled expectations", shows that pre-programmed knowledge can help an AI better understand what to expect.

Published: 
20/07/2022
News
AI
Research


This post first appeared on Eyerys | Eyes For Solution, please read the original post: here
