This seminar will be presented in person and streamed via Zoom. See room and Zoom details below.
Abstract: As we strive to integrate robots into our daily lives, it is essential to control how these systems interact with their surroundings. Classical techniques leverage models derived from first principles to produce precise control strategies. However, these techniques often break down in complex real-world scenarios where the gap between model and reality widens. Data-driven techniques from AI and machine learning, such as deep reinforcement learning, promise to overcome this challenge. In theory, these paradigms can learn high-performing controllers directly from real-world data, eschewing the need for a first-principles model. In current practice, however, they are too data-inefficient and unreliable for widespread deployment. In this talk I will discuss how to fuse these disparate paradigms in a principled way by embedding design techniques from feedback control into reinforcement learning setups. This approach enables reinforcement learning algorithms to leverage known structure in approximate dynamics models while retaining the flexibility to learn from unmodeled dynamics. First, I will discuss principled reward-shaping approaches that use feedback design techniques to produce stable controllers. Second, I will discuss how to co-design feedback controllers and policy gradient algorithms to make learning in the real world efficient and reliable. I will show analytically how these solutions lead to inherent robustness guarantees, while empirically reducing the amount of real-world data required by an order of magnitude. Finally, I will discuss new directions in safety analysis, human-robot interaction, and the co-design of novel hardware and learning-based control algorithms.
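To make the reward-shaping idea concrete, here is a minimal sketch (not the speaker's actual method) of potential-based shaping with a Lyapunov-like term: a quadratic function from an approximate model is added to the task reward so that decreases in the function are rewarded, nudging the learned policy toward stabilizing behavior. The matrix P and all function names below are illustrative assumptions.

```python
import numpy as np

# Hypothetical quadratic Lyapunov-like function V(x) = x^T P x for a known
# approximate model; in a real design, P might come from solving a Lyapunov
# or Riccati equation for the linearized dynamics.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def lyapunov(x):
    # V(x) = x^T P x, nonnegative for positive-definite P
    return float(x @ P @ x)

def shaped_reward(x, x_next, task_reward):
    # Potential-based shaping: the bonus V(x) - V(x_next) is positive when
    # the transition decreases V, i.e., moves the state toward equilibrium,
    # mirroring the decrease condition a feedback design would certify.
    return task_reward + (lyapunov(x) - lyapunov(x_next))
```

A transition that moves the state toward the origin receives a positive shaping bonus, while one that moves away is penalized; the telescoping form of the bonus leaves the optimal policy of the underlying task unchanged.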
Bio: Tyler Westenbroek is a postdoctoral scholar in the Oden Institute for Computational Engineering and Sciences at UT Austin, where he is hosted by Ufuk Topcu and David Fridovich-Keil. He completed his Ph.D. in Electrical Engineering and Computer Sciences at UC Berkeley in February 2023, under the supervision of Shankar Sastry. His work aims to develop scalable data-driven tools for controlling complex, high-dimensional robotic systems in the real world. It leverages techniques from machine learning and control theory, and has appeared in top conferences and journals across robotics, control, and machine learning.