By Michele McDonald
By studying how people build interpersonal trust, George Mason University researchers are finding out how to create a similar bond between humans and machines. The work could change how we interact with the machines around us.
Frank Krueger, a cognitive psychologist and neuroscientist at George Mason, is working on a three-year, $700,000 grant from the U.S. Air Force Office of Scientific Research to look at human-to-machine trust.
“Our brains haven’t caught up with all the change around us and that includes how we interact with machines,” says Krueger, co-director of the Center for the Study of Neuroeconomics. “The brain is still living as if we are hunter-gatherers even though we’re surrounded by high-tech tools.”
Human rules apply because our brains are hardwired for trust, says Krueger, who also heads the Social Cognition and Interaction: Functional Imaging Lab at the Krasnow Institute for Advanced Study. Trust can mean we rely on machines too much or too little, depending on our individual makeup. And other nuances apply. If a machine makes an error, how do we forgive—or do we?
“If you think machines are perfect and then they make a mistake, you don’t trust them again,” he says.
But you may regain trust if some basic social etiquette is used and the machine simply says: “I’m sorry,” Krueger explains. Such niceties are also why some robots smile.
The closer the machine-human interaction is, the more important social balms may become. For example, in high-stress jobs such as flying drones for the military, a computer in the future may ask, “How are you doing today?” because the idea is to make the machine more human, Krueger says.
Krueger, who was studying psychology and physics in East Berlin when the wall came down in 1989, came to Mason after a postdoctoral fellowship at the National Institute of Neurological Disorders and Stroke to work with noted neuroeconomics professor Kevin McCabe and neuroergonomics professor Raja Parasuraman studying the neural underpinnings of human-human and human-automation trust.
Krueger and graduate student Kim Goodyear are currently examining machine and human reliability and what it means for these relationships.
“Does my trust increase or decrease with machines and how does it compare to my interactions with humans?” Krueger asks.
They’ve set up a neuroimaging experiment that uses a Transportation Security Administration scenario where bags are being X-rayed for possible weapons at an airport. In one scenario, study participants were told the technician was a highly rated 20-year veteran. In another scenario, a computer with the most advanced software available was making the call.
Researchers observed the study participants’ brain activity as the human expert and the computer offered advice to search the bag or clear it as safe. Participants initially trusted the machine more than the human expert. But as the human’s advice proved correct, trust in the human increased.
And human-like punishments are given to machines when they err, Krueger says. People want to punish wrongdoing, even when it comes to impersonal machines and especially as more decisions are automated. If a computer makes the wrong call, Krueger says, it could receive the ultimate punishment—the off button.