At the end of the film Terminator 2, as the Terminator robot sinks into a vat of molten metal to inevitable destruction, it gives a thumbs-up to the humans John and Sarah Connor. The deadly robot reveals glimpses of humanity earlier in the film, but this gesture leaves the audience with the strongest sense yet that there must be some kind of soul in the machine.
This question of when we do and don’t perceive robots and computers as having their own minds is the subject of a new brain imaging study. Soren Krach and colleagues began by introducing 20 male participants to a human opponent, a computer opponent, a functional robotic opponent and an anthropomorphic robot opponent (see figure below, taken from the journal paper doi:10.1371/journal.pone.0002597.g002), all of which were ‘sat’ in a briefing room. The functional robot looks like a computer wired up to two robot arms designed to press the necessary keys to play the game. The anthropomorphic robot, by contrast, resembles a small child.
Afterwards, the participants had their brains scanned while they played the four opponents one at a time (a picture was flashed up before each game showing who their next opponent was). The participants watched as the cables from their opponents' computers in the briefing room were plugged into the monitor in the brain scanner, giving them the strong impression that they really were playing these opponents. In reality, the game decisions made by the human, computer and robots were fully randomised, to ensure that any effects on the participants' brain activity were not triggered by variations in playing style.
Regardless of who their opponent was, the participants exhibited activity in the regions of their brains associated with representing other people’s minds. Crucially, however, this activity was stronger the more human-like their opponent, being strongest for the anthropomorphic robot and human.
These brain differences were also reflected in the participants' reports on how they found the games. For example, they enjoyed playing the human and human-like opponents more than the computer and functional robot. They also rated the human-like robot as more competitive and less cooperative than the computer and functional robot.
“…[T]he more an…agent or entity exhibits human-like features, the more we build a model of its ‘mind’,” the researchers said. “This process occurs irrespective of its behavioural responses and independently of whether we interact with real human partners or ‘just’ machines.”
Perhaps another way to test our representation of robot minds would be to see at what point people begin exhibiting embarrassment in their presence. Undressing in front of your desk-top PC is unlikely to make you blush, but perhaps the presence of a human-like android would.
Sören Krach, Frank Hegel, Britta Wrede, Gerhard Sagerer, Ferdinand Binkofski, Tilo Kircher, Edwin Robertson (2008). Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE, 3(7). DOI: 10.1371/journal.pone.0002597