Artificial intelligence is spreading across industries, handling tasks like credit card and insurance approvals, assisting with medical diagnoses and even surgeries, and driving autonomous vehicles. Before long, AI could make scheduling and pay decisions in a workplace and quickly spit out personalized treatment plans for patients.
However, AI carries serious risks along with its tantalizing potential to unlock new levels of efficiency and precision. Algorithms can perpetuate existing biases and inequalities. Errors in autonomous vehicles can have deadly consequences. And applications like facial recognition software could easily infringe upon privacy.
Navigating those safety and ethical concerns requires intentional consideration during the development process, which is where Yonatan Mintz hopes to influence the field.
“I’m an engineer, I’m a technologist, I’m a techno optimist at heart,” says Mintz, a University of Wisconsin-Madison assistant professor of industrial and systems engineering whose research includes safety, fairness and transparency in AI. “I want this stuff to work.”
For AI to achieve widespread societal acceptance, though, companies, organizations, academics and even governments will need to develop policies and mechanisms that prioritize safety and ethical use and allow for public feedback.
Historically, Mintz notes, companies have largely self-regulated, relying on internally created ethics boards to conduct reviews, though the European Union is advancing its proposed AI Act and a number of U.S. states and cities have in recent years introduced or passed bills establishing varying levels of oversight. Still, in an MIT Sloan Management Review and Boston Consulting Group study published in September 2022, just 52% of the more than 1,000 managers surveyed reported their organizations had “responsible AI” programs in place (and 79% of those programs were limited in scale or scope).
“It’s good to regulate yourself, but if you’re the only one answering to yourself, it’s like letting the mice regulate the cheese storage,” says Mintz, who in November 2021 coauthored a paper in the journal Artificial Intelligence laying out a framework for working through “hard choices” in AI development.
He says two keys are documenting the rationale behind those choices, which are often made between comparable competing options, and creating ways to respond to future feedback from the humans interacting with AI systems.
To make progress on the latter front, it’s important to understand how humans think about and solve problems differently than AI algorithms do. In that vein, Mintz has joined a research project exploring those differences, and the gaps they create, using a unique game environment that challenges players—human or AI—to clear a grid of pieces while learning the game’s hidden rules at the same time.
“So you could actually get close to apples-to-apples comparisons on performances between algorithms and humans on basically the same task,” he says.
In the interdisciplinary project, sparked by industrial and systems engineering colleagues Vicki Bier and Paul Kantor, Mintz and PhD student Eric Pulick are working on ways to compare human and AI players. How can humans and machines best communicate and collaborate on problems? It’s fundamental research that could inform efforts to make AI technology more receptive to stakeholder feedback. UW-Madison colleagues Xiaojin (Jerry) Zhu, a professor of computer sciences, and Gary Lupyan, a professor of psychology, are collaborators on the effort, which has earned funding from the National Science Foundation and the Wisconsin Alumni Research Foundation.
Through his work in the lab and the classroom, Mintz hopes to nudge the direction of future AI development toward more fully integrating considerations of safety, transparency and ethics.
“We’re a leader in analytics and data science here at UW-Madison,” he says. “The students who are in our classes today are going to end up in regulatory positions in government, as tech leads in Silicon Valley, and as professors developing future technology. These are the people that I want to get on board.”
Photo caption: Assistant Professor Yonatan Mintz, left, and PhD student Eric Pulick discuss ways to compare human and AI players in “The Game of Hidden Rules.” Photo by Tom Ziemer.