A search-and-rescue team converges on a disaster site and deploys an autonomous drone or a four-legged robot to navigate dangerous terrain. While the advantages are obvious, the ripple effects of mixing humans and robots within an operation are much more opaque.
Not only does the introduction of a robot influence the performance of individual human workers across various tasks in subtle ways, but it can also reshape how the entire team thinks, coordinates and collaborates.
By collecting and analyzing neural data, Ranjana Mehta, the Grainger Institute for Engineering Professor of industrial and systems engineering at the University of Wisconsin-Madison, hopes to tease out the specifics of those alterations and their impacts.
“We have relied a lot on traditional measures of teamwork, and they have worked well when we are looking at interpersonal teamwork. As we introduce new agents into our different work domains, they impact us differently,” says Mehta, who studies human performance in tandem with emerging technologies such as robotics, virtual reality and wearables. “We need to modify and expand how we measure teamwork.”
In a paper in the journal Frontiers in Neuroergonomics, Mehta and PhD student Robert Spenceley investigated what happens inside teams when a robot replaces a human teammate during a simulated search-and-rescue mission.
To go beyond subjective feedback from participant surveys, Mehta and Spenceley analyzed brain activity using electroencephalography from 22 teams as they searched for victims in a virtual environment. PhD student Aakash Yadav designed the open-access virtual simulation in consultation with emergency responders. The teams consisted of a human mission commander, a human safety officer and a navigator who alternated between a stand-in human and a virtual robot. The navigator’s performance remained consistent; the only thing that changed was whether the mission commander and safety officer believed they were working with another human or a robot.
Mehta and Spenceley found that introducing a robotic navigator subtly altered how leaders processed teamwork. Spenceley, a National Science Foundation INTEGRATE trainee, says the leaders appeared to overcompensate, devoting more time and attention to collaboration that ultimately did not improve the team’s performance.
At the same time, teams working with robots showed greater neural synchrony across certain brain regions, suggesting that human teammates reorganized their coordination strategies to adapt to the robot’s presence.
“With our neural data, we uncover some kind of impacts to the team that we don’t see in our subjective data,” says Spenceley. “So, by using a different modality to analyze things, we can get a more specialized look at how the team is functioning.”
Mehta and Spenceley say their results point to the need for training, both at the individual and team levels, before incorporating robotic teammates, as well as earlier collaborations between those designing new technologies and end-users like emergency responders.
Moving forward, the Mehta lab plans to push the research beyond simulation, testing how teams respond when the robotic navigator’s performance is deliberately manipulated and, eventually, moving the experiments into real-world environments.
“We know embodiment matters,” says Mehta. “When you are physically interacting with a robot, not just software, it changes how people present themselves socially and how they collaborate. Understanding that shift is the next step for our research.”
This research was supported by the National Science Foundation (grant #2349138).