What is it about human learning that allows us to perform so well with relatively little experience? MIT Technology Review: Today we get an answer of sorts thanks to the work of Rachit Dubey and colleagues at the University of California, Berkeley. They have studied the way humans interact with video games to find out what kind of prior knowledge we rely on to make sense of them. It turns out that humans draw on a wealth of background knowledge whenever we take on a new game, and this knowledge makes the games significantly easier to play. But faced with games that make no use of this knowledge, humans flounder, whereas machines plod along in exactly the same way.

Take a look at the computer game shown here. It is based on a classic called Montezuma's Revenge, originally released for the Atari 8-bit computer in 1984. There is no manual and no instructions; you aren't even told which "sprite" you control. And you get feedback only if you successfully finish the game. Would you be able to do so? How long would it take? You can try it at this website.

In all likelihood, the game will take you about a minute, and in the process you'll make roughly 3,000 keyboard actions. That's what Dubey and co found when they gave the game to 40 workers from Amazon's crowdsourcing site Mechanical Turk, who were offered $1 to finish it. "This is not overly surprising as one could easily guess that the game's goal is to move the robot sprite towards the princess by stepping on the brick-like objects and using ladders to reach the higher platforms while avoiding the angry pink and the fire objects," the researchers say.

By contrast, the game is hard for machines: many standard deep-learning algorithms couldn't solve it at all, because an algorithm has no way to evaluate its progress inside the game when the only feedback comes from finishing.
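The machine's difficulty here is the classic sparse-reward problem in reinforcement learning: with a signal only at the very end, nothing guides the intermediate moves. A minimal toy sketch (my own illustration, not the study's actual setup) makes the point with a blind random agent on a one-dimensional chain of states, rewarded only at the goal:

```python
import random

def steps_to_finish(n_states: int, seed: int = 0) -> int:
    """Random agent on a 1-D chain: start at state 0, goal at n_states - 1.
    The 'reward' arrives only at the goal, so nothing informs each move."""
    rng = random.Random(seed)
    pos, steps = 0, 0
    while pos != n_states - 1:
        # Blind step left or right; no intermediate feedback to learn from.
        pos = max(0, pos + rng.choice((-1, 1)))
        steps += 1
    return steps

# For a blind random walk, the expected number of steps grows roughly
# quadratically with chain length -- intermediate feedback (or human
# priors like "ladders go up") is what collapses this search.
for n in (5, 10, 20):
    print(n, steps_to_finish(n))
```

A human player never faces this blind search because priors (ladders, platforms, enemy-looking sprites) act as a dense, built-in progress signal, which is exactly the gap the researchers measured.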
Read more of this story at Slashdot.