BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//College of Engineering - University of Wisconsin-Madison - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://engineering.wisc.edu
X-WR-CALDESC:Events for College of Engineering - University of Wisconsin-Madison
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20250309T080000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20251102T070000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20260308T080000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20261101T070000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20270314T080000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20271107T070000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260414T113000
DTEND;TZID=America/Chicago:20260414T130000
DTSTAMP:20260505T205321Z
CREATED:20251205T192648Z
LAST-MODIFIED:20251205T192650Z
UID:10001386-1776166200-1776171600@engineering.wisc.edu
SUMMARY:ECE Undergraduate Research Symposium
DESCRIPTION:The Department of Electrical and Computer Engineering invites you to join us for the second annual ECE Undergraduate Research Symposium. The symposium will be an exciting showcase of undergraduate research beyond the classroom. Selected students and teams will present their innovative work during a poster session where all are encouraged to talk with the undergraduate researchers about their discoveries. \n\n\n\nThis event is open to the public—come support our students and explore their cutting-edge research! \n\n\n\nEvent schedule: \n\n11:30 – 1:00: Poster session with light refreshments – Discovery Building Town Center\n\nAwards to follow
URL:https://engineering.wisc.edu/event/ece-undergraduate-research-symposium-2/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2025/12/ECE-UG-Research-Symposium.avif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260410T120000
DTEND;TZID=America/Chicago:20260410T130000
DTSTAMP:20260505T205321Z
CREATED:20260402T131700Z
LAST-MODIFIED:20260402T131702Z
UID:10001513-1775822400-1775826000@engineering.wisc.edu
SUMMARY:ECE RISE-AI SEMINAR SERIES: Associate Professor Salman Asif
DESCRIPTION:Learning to See\, Adapt\, and Forget: From Computational Imaging to Trustworthy Multimodal AI\n\n\n\n\n\n\n\nAbstract: A central challenge in modern AI is that the world at test time does not match what was assumed at training time. Physical sensors operate under constraints\, modalities go missing\, data shift out of distribution\, and models retain information they were never meant to keep. Building systems that remain robust and reliable under incomplete\, shifted\, or misaligned information is the organizing question of my research program. \n\n\n\nIn this talk\, I will present our research spanning physically grounded inverse problems to large-scale trustworthy AI\, showing how robust behavior across different applications can be achieved through principled\, low-dimensional representations and adaptations. I will begin with computational imaging\, where we seek robust recovery of multidimensional data from indirect or incomplete measurements. I will discuss domain expansion and wavefront sensing\, showing how principled algorithmic innovations lead to robust models for challenging inverse problems. I will then discuss multimodal learning\, where we seek robustness against missing and imbalanced modalities at train or test time via parameter-efficient adaptation\, proxy token generation\, and model merging across modalities. Finally\, I will discuss targeted adversarial attacks and unlearning\, where we seek to exploit model vulnerabilities or remove targeted information (e.g.\, identities\, concepts\, unsafe content) without affecting unrelated capabilities. \n\n\n\nI will close with a discussion of ongoing work and open problems spanning robust multimodal AI at scale\, continual learning with efficient unlearning\, and AI-guided sensing for medical\, agricultural\, and scientific applications. \n\n\n\nSalman Asif\n\n\n\nBio: M. 
Salman Asif is an Associate Professor in the Department of Electrical and Computer Engineering at the University of California\, Riverside. Dr. Asif received his Ph.D. from the Georgia Institute of Technology\, Atlanta\, Georgia. He worked as a Senior Research Engineer at Samsung Research America\, Dallas (2012–2014) and as a Postdoctoral Researcher at Rice University (2014–2016). He has received an NSF CAREER Award (2021)\, Google Faculty Research Award (2019)\, Hershel M. Rich Outstanding Invention Award (2016)\, and UC Regents Faculty Fellowship (2017) and Faculty Development (2021) Awards. Dr. Asif currently serves as Senior Associate Editor for the IEEE Transactions on Computational Imaging and as Area Chair for several top-tier venues including CVPR\, NeurIPS\, ICLR\, and AAAI. His research interests lie at the intersection of machine learning\, signal processing\, and computational imaging\, with a focus on building robust and trustworthy AI systems that perform reliably under incomplete\, shifted\, or misaligned information. Current research directions include robust multimodal learning\, model editing and unlearning\, and domain adaptation and generative models for computational imaging and inverse problems. \n\n\n\nLocation details: Discovery Building – Room 2329\, 2nd floor of Discovery Building (access through glass doors behind information desk)
URL:https://engineering.wisc.edu/event/ece-rise-ai-seminar-series-associate-professor-salman-asif/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering,Seminar
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2025/02/Rising-Stars-Seminars-Plain.avif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260409T113000
DTEND;TZID=America/Chicago:20260409T123000
DTSTAMP:20260505T205321Z
CREATED:20260330T210830Z
LAST-MODIFIED:20260402T130501Z
UID:10001505-1775734200-1775737800@engineering.wisc.edu
SUMMARY:ECE RISE-AI SEMINAR SERIES: Dr. Omar Chehab
DESCRIPTION:Toward efficient inference in complex systems\n\n\n\n\n\n\n\nAbstract: I will present a line of work on efficient inference in complex systems\, spanning both the foundations of machine learning and applications to brain imaging data. The talk is organized around two complementary directions.  \n\n\n\nIn the first part\, I will study modern algorithms for sampling\, estimating normalizing constants\, and estimating likelihoods. These methods often rely on a probability path that connects a complex target distribution to a simple base distribution\, such as a Gaussian. I will highlight fundamental limitations of classical approaches\, and show how path-guided algorithms can substantially improve efficiency. I will also discuss principled strategies for designing these probability paths\, explaining when and why such methods succeed. \n\n\n\nIn the second part\, I will turn to machine learning algorithms that are applied in neuroscience\, presenting recent results on learning representations and discovering causal structure from brain imaging data. This line of work is a step toward using machine learning to obtain new scientific insights. \n\n\n\nI will conclude with open questions in the field and future directions at the intersection of generative modeling\, sampling\, and their scientific applications. \n\n\n\nOmar Chehab\n\n\n\nBio: Omar Chehab is a postdoctoral researcher in the Machine Learning Department at Carnegie Mellon University. He completed his graduate training in France\, earning a PhD in Mathematical Computer Science at Inria under the supervision of Aapo Hyvärinen and Alexandre Gramfort\, followed by a postdoctoral position in the Statistics Department of ENSAE/CREST with Anna Korba. \n\n\n\nHis research focuses on principled methods for efficient inference from complex probability distributions. 
This includes estimating likelihoods from data\, generating samples from unnormalized densities\, as well as learning representations and discovering causal structure from brain imaging data. His work draws on a range of modern methods\, including diffusion models\, annealed MCMC\, score matching\, multi-view independent component analysis\, and noise-contrastive estimation. More broadly\, he studies these algorithms through the lens of computational and statistical efficiency\, aiming to understand their fundamental limits and guide their design. \n\n\n\nHe regularly publishes at leading machine learning conferences such as NeurIPS\, ICML\, and ICLR\, where his work has been recognized with a spotlight and top reviewer awards. \n\n\n\nLocation details: Discovery Building – Room 2329\, 2nd floor of Discovery Building (access through glass doors behind information desk)
URL:https://engineering.wisc.edu/event/ece-rise-ai-seminar-series-omar-chehab/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering,Seminar
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2025/02/Rising-Stars-Seminars-Plain.avif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260313T120000
DTEND;TZID=America/Chicago:20260313T130000
DTSTAMP:20260505T205321Z
CREATED:20260227T161039Z
LAST-MODIFIED:20260227T161338Z
UID:10001477-1773403200-1773406800@engineering.wisc.edu
SUMMARY:ECE RISE-AI SEMINAR SERIES: Kunhe Yang
DESCRIPTION:Designing and Evaluating AI Algorithms in Strategic Environments\n\n\n\n\n\n\n\nKunhe Yang\n\n\n\nAbstract: As AI models are increasingly deployed in environments shaped by complex human behaviors\, there is a critical need for algorithmic principles that account for human values and strategic incentives. In this talk\, I will introduce my research on the theoretical foundations for designing and evaluating AI in human-centered strategic environments. I will focus on two key representative lines of my research: first\, I will discuss incentive-aware evaluation\, with the goal of designing metrics that remain robust even when they become targets of optimization. I will illustrate this in the context of online probability forecasting and introduce algorithmic principles for designing calibration measures that incentivize truthful predictions. Second\, I will discuss AI alignment with heterogeneous human preferences by introducing a framework called the distortion of AI alignment. Within this framework\, I will characterize the information-theoretic limits of learning from sparse heterogeneous feedback\, and compare the robustness of different alignment approaches including RLHF and NLHF. I conclude by discussing future directions and a broader vision for integrating these algorithmic principles into the design of trustworthy\, human-centric AI. \n\n\n\nBio: Kunhe Yang is a fifth-year PhD candidate in Electrical Engineering and Computer Sciences at the University of California\, Berkeley\, where she is advised by Professor Nika Haghtalab. Her research focuses on the theoretical foundations of AI in human-centered environments by drawing on tools from machine learning theory and algorithmic economics. Her work has been recognized by several awards\, including EECS Rising Star\, invited speaker at the Cornell Young Researchers workshop\, finalist for the Meta Research PhD Fellowship in the Economics and Computation track\, and a SIGMETRICS best paper award. 
\n\n\n\nLocation details: Discovery Building – Researchers’ Link\, 2nd floor of Discovery Building (access through glass doors behind information desk)
URL:https://engineering.wisc.edu/event/ece-rise-ai-seminar-series-kunhe-yang/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering,Seminar
ATTACH;FMTTYPE=image/jpeg:https://engineering.wisc.edu/wp-content/uploads/2026/02/2026-Faculty-Recruiting-Seminars-Plain-for-website.avif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260309T160000
DTEND;TZID=America/Chicago:20260309T170000
DTSTAMP:20260505T205321Z
CREATED:20260226T173837Z
LAST-MODIFIED:20260226T174052Z
UID:10001474-1773072000-1773075600@engineering.wisc.edu
SUMMARY:ECE RISE-AI SEMINAR SERIES: Dr. Jingfeng Wu
DESCRIPTION:Towards a Less Conservative Theory of Machine Learning: Unstable Optimization and Implicit Regularization\n\n\n\n\n\n\n\nAbstract: Deep learning’s empirical success challenges the “conservative” nature of classical optimization and statistical learning theories. Classical theory mandates small stepsizes for training stability and explicit regularization for complexity control. Yet\, deep learning leverages mechanisms that thrive beyond these traditional boundaries. In this talk\, I present a research program dedicated to building a less conservative theoretical foundation by demystifying two such mechanisms: \n\n\n\n1. Unstable Optimization: I show that large stepsizes\, despite causing local oscillations\, accelerate the global convergence of gradient descent (GD) in overparameterized logistic regression. \n\n\n\nDr. Jingfeng Wu\n\n\n\n2. Implicit Regularization: I show that the implicit regularization of early-stopped GD statistically dominates explicit $\\ell_2$-regularization across all linear regression problem instances. \n\n\n\nI further showcase how the theoretical principles lead to practice-relevant algorithmic designs (such as Seesaw for reducing serial steps in large language model pretraining). I conclude by outlining a path towards a rigorous understanding of modern learning paradigms. \n\n\n\nBio: Dr. Jingfeng Wu is a postdoctoral fellow at the Simons Institute for the Theory of Computing at UC Berkeley. His research focuses on deep learning theory\, optimization\, and statistical learning. He earned his Ph.D. in Computer Science from Johns Hopkins University. Prior to that\, he received a B.S. in Mathematics and an M.S. in Applied Mathematics\, both from Peking University. In 2023\, he was recognized as a Rising Star in Data Science by the University of Chicago and UC San Diego. \n\n\n\nLocation details: Discovery Building – Researchers’ Link\, 2nd floor of Discovery Building (access through glass doors behind information desk)
URL:https://engineering.wisc.edu/event/ece-rise-ai-seminar-series-dr-jingfeng-wu/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering,Seminar
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2026/02/2026-Faculty-Recruiting-Seminars-Plain-for-website.avif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260224T113000
DTEND;TZID=America/Chicago:20260224T123000
DTSTAMP:20260505T205321Z
CREATED:20260217T151252Z
LAST-MODIFIED:20260220T151455Z
UID:10001463-1771932600-1771936200@engineering.wisc.edu
SUMMARY:ECE RISE-AI Seminar Series: Eshaan Nichani\, Princeton University
DESCRIPTION:Foundations of language models: scaling and reasoning\n\n\n\n\n\n\n\nEshaan Nichani\n\n\n\nAbstract: Modern deep learning methods\, most prominently language models\, have achieved tremendous empirical success\, yet a theoretical understanding of how neural networks learn from data remains incomplete. While reasoning directly about these approaches is often intractable\, formalizing core empirical phenomena through minimal “sandbox” tasks offers a promising path toward principled theory. In this talk\, Nichani will demonstrate how proving end-to-end learning guarantees for such tasks yields a practical understanding of how the network architecture\, optimization algorithm\, and data distribution jointly give rise to key behaviors. First\, they will show how neural scaling laws arise from the dynamics of stochastic gradient descent in shallow neural networks. Next\, they will study how and under what conditions transformers trained via gradient descent can learn different reasoning behaviors\, including in-context learning and multi-step reasoning. Altogether\, this approach builds theories that provide concrete insight into the behavior of modern AI systems. \n\n\n\nBio: Eshaan Nichani is a final-year Ph.D. student in the Electrical and Computer Engineering (ECE) department at Princeton University\, jointly advised by Jason D. Lee and Yuxin Chen. His research focuses on the theory of deep learning\, ranging from characterizing the fundamental limits of shallow neural networks to understanding how LLM phenomena emerge during training. He is a recipient of the IBM PhD Fellowship and the NDSEG Fellowship\, and was selected as a 2025 Rising Star in Data Science. \n\n\n\nLocation details: Discovery Building – Researchers’ Link\, 2nd floor of Discovery Building (access through glass doors behind information desk)
URL:https://engineering.wisc.edu/event/ece-rise-ai-seminar-series-eshaan-nichani-princeton-university/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering,Seminar
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2026/02/2026-Faculty-Recruiting-Seminars-Plain-for-website.avif
END:VEVENT
END:VCALENDAR