Abstract: Incredible progress has been made in the last few years with deep learning models; however, their real-world performance remains unreliable. In this talk, I will showcase how we can design more reliable models by a) understanding their precise set of assumptions (known as the inductive bias) and b) improving their out-of-distribution performance. First, I will focus on comprehending the inductive bias as an essential step towards principled network design, where the choice of specific architectural components can result in widely different biases. For instance, capturing high-order interactions between inputs is crucial for learning in large-scale models, such as StyleGAN and Transformers, and for improving their extrapolation behavior. In addition, the perspective of high-order interactions enables the design of efficient variants of popular existing components, such as the self-attention mechanism. In the second part of the talk, I will focus on the design of models that are robust to adversarial perturbations and out-of-distribution samples. Lastly, I will demonstrate an application of synthesizing images with novel attribute combinations.
Bio: Grigorios Chrysos is a post-doctoral researcher at École Polytechnique Fédérale de Lausanne (EPFL), following the completion of his PhD at Imperial College London. His research interests focus on reliable machine learning, in particular on comprehending the inductive bias and out-of-distribution performance of deep networks. His recent work has been published in top-tier conferences (NeurIPS, ICLR, CVPR, ICML) and prestigious journals (T-PAMI, IJCV, T-IP). Grigorios has co-organized several workshops and tutorials (CVPR, ICCV, AAAI), and he has been recognized as an outstanding reviewer at journals and conferences (NeurIPS'22, ICML'21/22, ICLR'22).