MACHINES are getting to know each other better. An artificial intelligence, developed by Google-owned research firm DeepMind, can now pass an important psychological assessment that most children only develop the skills to pass at around age 4. The achievement in this key theory of mind test may lead to AIs that are more human-like.
Most humans regularly think about other people’s desires, beliefs or intentions. For a long time, this was thought to be uniquely human, but an increasing body of evidence suggests that some other animals, such as orangutans and ravens, may have theory of mind. However, the idea that machines could share these abilities is normally reserved for science fiction.
DeepMind thinks otherwise. The firm created its latest AI with the intention that it would develop a basic theory of mind. The AI is called Theory of Mind-net, or ToM-net for short. In a virtual world, ToM-net is able not just to predict what other AI agents will do, but also to understand that they may hold false beliefs about the world.
For humans, the idea that others can hold false beliefs seems very natural. However, humans don’t actually understand that other people can hold false beliefs until around age 4. “It’s a classic developmental stage for young children,” says Peter Stone at the University of Texas at Austin. One of the main reasons we know about this is a psychology experiment called the Sally-Anne test. In the test, Anne watches Sally leave an object somewhere, only for it to be moved without Sally seeing. Anne, who has seen everything, is then asked where Sally will first look for the object. To pass the test, Anne needs to be able to distinguish between where the object is and where Sally thinks it is. In other words, Anne needs to understand that Sally may hold a false belief about the object’s location.
To copy this set-up for AIs, ToM-net plays the role of Anne in a virtual world consisting of an 11-by-11 grid, some internal walls and four objects. Surprisingly, ToM-net basically passes this form of the Sally-Anne test and exhibits some basic theory of mind. This is a big step. Making computer programs that copy behaviors like theory of mind could improve our understanding of people and other animals, says Christopher Lucas at the University of Edinburgh, UK. But Alan Wagner at Georgia Tech Research Institute says the 11-by-11 grid set-up is too simplistic for the researchers to claim they have captured theory of mind.
Outside of the debate as to whether ToM-net truly exhibits theory of mind, there is a possibility that it might help make more human-like AIs. “The more our machines can learn to understand others, the better they can interpret requests, help find information, explain what they are doing, teach us new things and tailor their responses to individuals,” says Rabinowitz at DeepMind.
24. According to the underlined sentence in Para 3, DeepMind thinks ________.
A. machines are likely to develop theory of mind
B. only human beings can develop theory of mind
C. some animals may know about people’s intentions
D. machines may know better about the world than humans
25. By mentioning the Sally-Anne test, the author intends to prove ________.
A. Anne has theory of mind since she passes the test
B. it is natural for humans to hold false beliefs
C. age 4 is a classic developmental stage for children
D. ToM-net possesses some basic theory of mind
26. What can we infer from the last two paragraphs?
A. ToM-net failed the Sally-Anne test surprisingly.
B. Alan Wagner and Christopher Lucas share similar views.
C. Rabinowitz is optimistic about the future of ToM-net.
D. Humans will largely depend upon human-like AIs.

24-26 ADC 