Ranjana Mehta, the Grainger Institute for Engineering Professor of Industrial and Systems Engineering, studies human behavior and the brain in working environments. Her research focuses on how people interact with emerging technologies such as robotics and augmented or virtual reality, and how those technologies can be designed for maximum benefit while accounting for the challenges and limitations that accompany new developments. She also studies fatigue, especially in high-stakes environments such as emergency first response and the energy sector. In this interview, Mehta discusses how we interact with new technologies and how fatigue affects our ability to work with them.
Your work touches on rapidly evolving technologies like robotics, artificial intelligence, and virtual and augmented reality. How do you see these technologies impacting workspaces as they continue to mature?
Across all these domains, my research focuses on supporting human performance. The goal of any engineer, when building these systems, is to harness human potential and do no harm.
Robots are a way to replace humans in dangerous or dirty work. They are becoming more intelligent, but they can’t replace human intelligence and cognitive flexibility; what they can do is augment human strength where it is needed. What we are seeing is more people designing wearable robotics, or exoskeletons, that support and augment human strength while preserving the wearer’s cognitive flexibility. That fundamentally changes how humans do their work, though, because we are now thinking about wearing something beyond what we usually would for workplace protection.
Another challenge is how to train workers to use these nascent technologies. Industries are gravitating toward more accessible, cost-effective, and realistic training regimens. I expect augmented and virtual reality to play a role in that, especially where more intricate, hands-on training is needed. For example, in nursing, patient simulators are available to help hospitals and universities provide a team-based learning environment where trainees can mimic the conditions they might encounter when caring for a patient. As technology advances, we may see this evolve into a system where everyone in the training uses virtual reality headsets, allowing them to receive valuable training experiences even if they are not in the same physical location.
A project in my lab, in collaboration with the UW-Madison Police Department, is developing AI-powered virtual reality training for police de-escalation. Current approaches rely on human actors, either in person or controlling virtual avatars in real time, making training resource-intensive and thus infrequent.
Our project explores whether generative AI can power virtual and augmented reality training. By combining large language model-driven virtual personas with a physics engine, trainees can engage in real-time, embodied interactions with responsive crisis scenarios. Because de-escalation is highly subjective, we are integrating automated assessments using eye tracking and physiological responses to build intelligent, adaptive training. While our approach promises scalable, immersive training, the use of AI tools also introduces new operational and ethical challenges, which we are actively working to understand and address.
A recent study suggests that overreliance on generative AI tools may be harmful to how we think. As these new and innovative technologies develop, how can we use them to our benefit?
Tools shape how we think. Just as calculators changed the way we handle complex arithmetic, generative AI like ChatGPT is changing how we process information and perform tasks. The key is understanding the intent behind its use. For example, a physician could use AI to draft patient notes more efficiently, ensuring clear and timely documentation.
Now, of course, we know these models are not perfect. They hallucinate or make mistakes, and thus we need careful design and oversight. But right now, we are living in a “push system,” where technologies are being created and we are “prompted” to use them. I believe we need more bi-directional communication between developers and end-users to share feedback so that AI can be designed to truly support people at work without introducing unintended harm.
There’s also been a lot of concern about tools like ChatGPT in educational settings. What’s your experience been like with those tools, as someone who’s both a researcher and an educator?
I recently taught a course on human-AI teaming, where we explored how embedding AI agents in teams can change interactions. For a makeup assignment, I actually instructed a student to use ChatGPT to critique a journal article. The student was then required to manually annotate and revise ChatGPT’s output and critique his teaming process with ChatGPT.
This assignment helped him reflect on the gap between ChatGPT’s output and his own thinking. He concluded that ChatGPT’s output was not close enough to his own thinking to rely on fully, but the process gave him insight into how to use such tools effectively.
That kind of experiential learning is really valuable. It helps students understand the strengths and limitations of generative AI. AI can support one’s work, but it cannot replace critical thinking. Like any tool, you need to know what it can and cannot do. As educators, it is our duty to ensure students recognize these boundaries and that we develop educational, experiential and assessment content accordingly.
You also research fatigue—how do some of the technological innovations we’ve talked about interact with fatigue in the workplace?
Fatigue is a critical issue, especially for first responders. During Hurricane Harvey in 2017, we found that by the second day, drone operators monitoring flooding were so fatigued that their performance was equivalent to being legally drunk. This level of exhaustion is dangerous, especially when they are required to provide critical and timely public safety information.
The challenge with fatigue is that by the time you feel it, it is already too late. Recovery requires rest, sleep or time off, so organizations need proactive strategies to ensure employees can stay alert.
AI and automation offer potential support, but fatigue can change how people interact with these tools. For example, fatigued responders may over-trust automated vehicle features and semi-autonomous robotic capabilities, or over-rely on decision-support tools, such as those used for wildfire prediction, beyond safe limits. We’ve seen that in our lab and field studies. When someone is fatigued, they look for ways to minimize the metabolic cost of thinking and decision-making, because that is what fatigue does to you: in fatigued states, we aim to function with the least energy possible. Designing AI solutions for responders requires accounting for fatigue and ensuring that such systems are reliable, adaptive and able to support critical decision-making even when human attention is compromised.
Featured image caption: Professor Ranjana Mehta talks with a student in her lab during an experiment. Mehta studies how humans interact with new technologies and how fatigue impacts work in high-stakes environments. Photo: Joel Hallberg.