Xiaopeng Li is a leader in connected and autonomous vehicle research. Li, the Harvey D. Spangler Professor of civil and environmental engineering at the University of Wisconsin-Madison, studies how emerging technologies will influence the development of smart vehicles that communicate with other vehicles and equipment, or operate on their own. He leads the Connected and Autonomous Transportation Systems Laboratory and the Smart Highway Research Center, and has led or contributed to multiple center-level research grants on emerging transportation technologies.
In this interview, Li discusses connected and autonomous vehicles, how they function, and both challenges and bright spots for their continued development.
Q: What are connected and autonomous vehicles, and can a vehicle be connected without being autonomous?
Connected vehicles have the technology to communicate with other vehicles or road users to facilitate cooperative driving. The idea behind that is to improve safety, mobility, energy efficiency and performance—all these different factors—to make traveling better.
Autonomous vehicles get into the notion of vehicle control, all the way up to the concept of having the vehicles help people drive or completely replace human drivers. There are different levels of autonomy, from one to five. Levels 1 and 2 are what we call low-level autonomy, or advanced driver assistance systems, because humans are still responsible for the driving tasks and consequences while the vehicles assist. A lot of today’s vehicles already have these functions, like adaptive cruise control or automatic lane changing.
Levels 3 through 5 are when driving systems get more automated. At those levels, humans play smaller and smaller roles. Level 3 would be an instance where the vehicle can handle some driving tasks, but will need to transfer control back to the human driver at times.
All the way up at level 5 is like an elevator, where the humans are just passengers. The vehicle is responsible for the driving and also for responding to driving conditions and scenarios as they occur.
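To make the distinction concrete, here is a minimal sketch, in Python, of the levels Li describes and who holds responsibility at each. It is purely illustrative; the real definitions (for example, the SAE J3016 standard) are far more detailed.

    # Illustrative summary of the autonomy levels described above.
    # Real definitions (e.g., SAE J3016) are far more detailed.
    AUTONOMY_LEVELS = {
        1: "Driver assistance: vehicle helps with steering or speed",
        2: "Partial automation: vehicle assists; human still responsible",
        3: "Conditional automation: vehicle drives, may hand control back",
        4: "High automation: vehicle drives itself within a limited domain",
        5: "Full automation: humans are just passengers, like an elevator",
    }

    def human_is_responsible(level: int) -> bool:
        """Levels 1 and 2 leave the human responsible for driving."""
        return level <= 2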
Q: How do autonomous vehicles “see” when they’re driving?
There are two different approaches playing out for that technology. Some companies like Tesla have bet that as technology evolves, they can solve some self-driving challenges, such as driving in precipitation or identifying pedestrians, with cameras only. That’s based on improvements in artificial intelligence; Tesla is betting that AI will help make decisions in some of the more difficult scenarios. An advantage of that approach is cost: because cameras are cheap, Tesla can keep its vehicles affordable for consumers.
Other companies like Waymo are betting on LIDAR, which can cost an order of magnitude more than a video camera but is very reliable. LIDAR shoots laser beams into the environment and reads what gets bounced back. LIDAR systems are very accurate at distance detection and can give you 3D information about the surrounding environment. But because it’s so much more expensive, Waymo operates its business as a taxi service, so the cost is split among many riders instead of being built into vehicles sold directly to consumers.
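The distance detection Li mentions boils down to time-of-flight arithmetic: the sensor measures how long a laser pulse takes to bounce back and converts that to a distance. A minimal sketch follows; it is illustrative only, as real LIDAR processing is far more involved.

    # Time-of-flight distance: the pulse travels out and back,
    # so the one-way distance is half the round-trip path.
    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def lidar_distance_m(round_trip_time_s: float) -> float:
        return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

    # A return arriving 200 nanoseconds after firing implies a target
    # roughly 30 meters away.
    print(lidar_distance_m(200e-9))  # ~29.98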
Q: Autonomous vehicles are rapidly changing. How do you see them continuing to evolve over the next few years?
I’d say there are two technical drivers. One is software, through advances in AI technology. The other is hardware, with computing devices becoming less expensive and more powerful. As that trend continues, we’ll see these vehicles get smarter and drive more like human beings.
A few years ago, the technology could tackle, for example, 99% of driving tasks. The remaining 1% were the “extreme” challenges, like an animal walking across the road, or poor weather obscuring the road markings. These abnormal situations, called “corner cases,” really challenged the vehicle.
So if you let those vehicles drive, 99% of the time, they’re fine. But that 1% of the time—when they encounter those extreme challenges—they might injure or kill someone. So even if you’re good 99% of the time, it’s still not good enough.
So the challenge has been closing the gap on those corner cases, and many of the technology developments we’ve seen have focused on solving them. One way we can continue to address those challenges is by screening real-world data to identify corner cases, creating modeled adversarial scenarios, and training automated vehicles against them.
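As a loose illustration of that screening step, a filter over logged driving data might look something like this. The field names and thresholds are hypothetical, invented for this sketch rather than taken from any real dataset.

    # Hypothetical corner-case screen over driving-log records.
    # All field names and thresholds below are invented for illustration.
    def is_corner_case(record: dict) -> bool:
        return (
            record.get("visibility_m", 1000.0) < 50.0            # fog, heavy rain
            or record.get("animal_detected", False)              # animal on road
            or record.get("lane_marking_confidence", 1.0) < 0.3  # faded markings
        )

    logs = [
        {"visibility_m": 800.0, "lane_marking_confidence": 0.9},
        {"visibility_m": 30.0, "animal_detected": True},
    ]
    corner_cases = [r for r in logs if is_corner_case(r)]  # keeps the second record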
Q: To build on that, stories about semi-autonomous vehicle accidents make it into mainstream news. What can we learn from those instances—both to make these vehicles safer and to inspire public confidence in them?
We need data and objective measures to evaluate their performance. These vehicles are already real, mingling with human-operated vehicles on the road. But Tesla, Waymo and other private companies may not want to share their data, because that might give away some of their competitive advantages.
So, the dilemma is that autonomous vehicles are already impacting the public driving environment—but at the same time, we don’t have objective performance, safety, and mobility measures to understand their impact.
Public agencies or professional societies could play a role in developing quantitative metrics to evaluate the vehicles’ performance. To help move the needle, we’ve been working with professional societies like IEEE (the Institute of Electrical and Electronics Engineers). Many scholars who work in this area want open data initiatives to share testing data from all these vehicles. Companies won’t release comprehensive open data on how their vehicles perform, but a third party could evaluate such data to understand their behaviors.
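One example of the kind of objective measure a third party could compute from shared testing data is safety incidents per million miles driven. This is a hypothetical sketch with made-up numbers, not a metric drawn from any real company data.

    # Hypothetical third-party metric: incidents per million miles.
    def incidents_per_million_miles(incidents: int, miles: float) -> float:
        return incidents / miles * 1_000_000.0

    # Made-up example numbers, not real company data:
    print(incidents_per_million_miles(3, 500_000.0))  # 6.0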
There’s also an education component. Anecdotes can be very influential for the public. Every year, 30,000 to 40,000 people die in traffic accidents in the United States, but those accidents rarely make national news. If someone is hurt or killed by an autonomous vehicle, right now, that makes the news. And so we have to be able to convince the public that, while this technology might not be perfect, it has the potential to be much safer than a human driver.
In the future, maybe we will implement new methods to evaluate the safety of autonomous vehicles. Just like today’s drivers have to pass a test to get a driver’s license, autonomous vehicles might need to pass tests conducted by an independent or government agency before they can enter the market. A responsible, sensible system to evaluate and test and “license” autonomous vehicles may boost public confidence in them.
Q: When we think about autonomous vehicles, we usually think about standard passenger vehicles. What are other current or future application areas for them?
Some autonomous vehicles have been deployed in places like national parks and tourist destinations, where people can ride and enjoy the beauty of nature while experiencing cutting-edge technology. There are everyday uses like the little Starship delivery vehicles (we see these delivering food around our campus). Major corporations like Amazon are deploying drones for delivery services.
Autonomous vehicles are also used in other areas like construction sites, in warehouses, and in mining and agriculture—essentially, wherever there’s a need to move something. That’s just the tip of the iceberg. This technology can and will continue to evolve, and as it does, we’ll see more autonomous vehicles used in utility roles, as well as carrying passengers.
Xiaopeng Li, a professor of civil and environmental engineering, talks with one of his students in the Connected and Autonomous Transportation Systems Lab. Li is a leader in connected and autonomous vehicle research. Photo: Joel Hallberg