February 26, 2024

Looking into the brain, through the eyes, to uncover sex differences in human-robot collaboration

Written By: Tom Ziemer

As human workers and robots collaborate more and more frequently, particularly in the manufacturing sector, it’s important for safety and productivity that their partnership includes a key ingredient: trust.

And to design and optimize those “cobot” systems, researchers need to be able to measure trust in a way that’s more nuanced and accurate than surveys that rely on human perceptions. Ranjana Mehta, a professor of industrial and systems engineering at the University of Wisconsin-Madison, is exploring new methods for assessing trust in automated systems such as human-robot collaborations.

In a recent paper published in the journal ACM Transactions on Human-Robot Interaction, Mehta and her graduate students detail how they used brain imaging and eye-tracking to cross-examine human perceptions of trust, illuminate underlying trust-behavior relationships, and uncover differences across sexes in a manufacturing use case.

Ranjana Mehta

“It’s very important that humans and their autonomous teammates play well, that they are fluent in their interactions,” says Mehta, who investigates questions at the mind-motor-machine nexus in manufacturing contexts as well as emergency response, offshore energy work, human spaceflight and more. “And for that to happen, they need to communicate really well, and trust is a major driver of that communication.”

In the study, Mehta and her students in the NeuroErgonomics Lab asked 38 human participants to work with a collaborative robot to assemble a gear system, with the robot handing pieces to the humans in order. However, the researchers also programmed the robot to perform unreliably in one set of trials, to assess the impact on trust. In addition to surveying participants afterward, Mehta’s team recorded videos to track errors the humans made; used functional near-infrared spectroscopy to monitor brain activity; and had participants wear an eye-tracking device.

While both men and women self-reported their levels of trust the way you’d expect—trusting the robot in its reliable state, distrusting it when unreliable—it turned out their behaviors diverged. Women kept their eyes on the robot when it behaved unreliably and, in the video analysis, made fewer errors; men, meanwhile, took their eyes off the robot even when it performed unreliably and made more mistakes in assembling the pieces when they were delivered out of order.

“Men reverted to the default expectation that the robot would perform well, even when it didn’t. And that’s why they made more mistakes,” says Mehta. “Women, on the other hand, kept looking at the robot and were able to compensate for their robot teammate’s unreliability in their performance. They didn’t let robot unreliability affect their own performance.”

When reviewing the brain imaging data, the researchers found that men exhibited increased activation in the prefrontal cortex, which Mehta says corresponds with top-down processing (processing driven by assumptions and expectations). Combined with results from an earlier study by her lab, the brain imaging data revealed disruptions in several “trust networks” in the brain that regulate trusting and distrusting behaviors.

Differences across sexes could prove important for designing inclusive, effective human-automation systems that work for all. Mehta says she hopes her work will also encourage other researchers to probe deeper into human behaviors when studying interaction with artificial intelligence systems.

“We found these sex differences only when we looked very closely,” she says. “We are constantly being exposed to AI systems in all aspects of our lives. I think it’s really important to design trustworthy systems based on these very nuanced behaviors.”

The research was supported by the National Science Foundation (award number 1900704). Other UW-Madison authors on the paper include Yinsu Zhang and Aakash Yadav, PhD students in industrial and systems engineering. Other authors include Sarah Hopko, a former NeuroErgonomics Lab member, and Prabhakar Pagilla from Texas A&M University.

Top photo caption: A participant in the research study grabs a component delivered by a collaborative robot. Photo courtesy NeuroErgonomics Lab.

