BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//College of Engineering - University of Wisconsin-Madison - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:College of Engineering - University of Wisconsin-Madison
X-ORIGINAL-URL:https://engineering.wisc.edu
X-WR-CALDESC:Events for College of Engineering - University of Wisconsin-Madison
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20250309T080000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20251102T070000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20260308T080000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20261101T070000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20270314T080000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20271107T070000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260309T160000
DTEND;TZID=America/Chicago:20260309T170000
DTSTAMP:20260421T062525Z
CREATED:20260226T173837Z
LAST-MODIFIED:20260226T174052Z
UID:10001474-1773072000-1773075600@engineering.wisc.edu
SUMMARY:ECE RISE-AI SEMINAR SERIES: Dr. Jingfeng Wu
DESCRIPTION:Towards a Less Conservative Theory of Machine Learning: Unstable Optimization and Implicit Regularization\n\nAbstract: Deep learning’s empirical success challenges the “conservative” nature of classical optimization and statistical learning theories. Classical theory mandates small stepsizes for training stability and explicit regularization for complexity control. Yet\, deep learning leverages mechanisms that thrive beyond these traditional boundaries. In this talk\, I present a research program dedicated to building a less conservative theoretical foundation by demystifying two such mechanisms:\n\n1. Unstable Optimization: I show that large stepsizes\, despite causing local oscillations\, accelerate the global convergence of gradient descent (GD) in overparameterized logistic regression.\n\n2. Implicit Regularization: I show that the implicit regularization of early-stopped GD statistically dominates explicit $\\ell_2$-regularization across all linear regression problem instances.\n\nI further showcase how the theoretical principles lead to practice-relevant algorithmic designs (such as Seesaw for reducing serial steps in large language model pretraining). I conclude by outlining a path towards a rigorous understanding of modern learning paradigms.\n\nBio: Dr. Jingfeng Wu is a postdoctoral fellow at the Simons Institute for the Theory of Computing at UC Berkeley. His research focuses on deep learning theory\, optimization\, and statistical learning. He earned his Ph.D. in Computer Science from Johns Hopkins University. Prior to that\, he received a B.S. in Mathematics and an M.S. in Applied Mathematics\, both from Peking University. In 2023\, he was recognized as a Rising Star in Data Science by the University of Chicago and UC San Diego.\n\nLocation details: Discovery Building – Researchers’ Link\, 2nd floor of Discovery Building (access through glass doors behind information desk)
URL:https://engineering.wisc.edu/event/ece-rise-ai-seminar-series-dr-jingfeng-wu/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering,Seminar
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2026/02/2026-Faculty-Recruiting-Seminars-Plain-for-website.avif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260310T122000
DTEND;TZID=America/Chicago:20260310T125000
DTSTAMP:20260421T062525Z
CREATED:20260109T221548Z
LAST-MODIFIED:20260109T221551Z
UID:10001397-1773145200-1773147000@engineering.wisc.edu
SUMMARY:ECE Discovery Panel: Optimization and Control
DESCRIPTION:Engineering undergraduates! Join us in 1610 Engineering Hall as faculty members explore the technical area of Optimization and Control! All undergraduate students are welcome as Assistant Professor Jeremy Coulson\, Associate Professor Line Roald\, and Assistant Professor Manish Singh talk about application ideas\, advanced course electives\, and future job opportunities in this area. It’s a great place to ask your questions about classes and career paths in this exciting ECE field.\n\nCome for the insights\, stay for the Jimmy John’s sandwiches!
URL:https://engineering.wisc.edu/event/ece-discovery-panel-optimization-and-control/
LOCATION:1610 Engineering Hall\, 1415 Engineering Drive\, Madison\, Wisconsin\, 53706
CATEGORIES:Electrical & Computer Engineering,Information Session
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2026/01/ECE-Discovery-Panel-Series-.avif
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20260313T120000
DTEND;TZID=America/Chicago:20260313T130000
DTSTAMP:20260421T062525Z
CREATED:20260227T161039Z
LAST-MODIFIED:20260227T161338Z
UID:10001477-1773403200-1773406800@engineering.wisc.edu
SUMMARY:ECE RISE-AI SEMINAR SERIES: Kunhe Yang
DESCRIPTION:Designing and Evaluating AI Algorithms in Strategic Environments\n\nAbstract: As AI models are increasingly deployed in environments shaped by complex human behaviors\, there is a critical need for algorithmic principles that account for human values and strategic incentives. In this talk\, I will introduce my research on the theoretical foundations for designing and evaluating AI in human-centered strategic environments. I will focus on two key representative lines of my research: first\, I will discuss incentive-aware evaluation\, with the goal of designing metrics that remain robust even when they become targets of optimization. I will illustrate this in the context of online probability forecasting and introduce algorithmic principles for designing calibration measures that incentivize truthful predictions. Second\, I will discuss AI alignment with heterogeneous human preferences by introducing a framework called the distortion of AI alignment. Within this framework\, I will characterize the information-theoretic limits of learning from sparse heterogeneous feedback\, and compare the robustness of different alignment approaches including RLHF and NLHF. I conclude by discussing future directions and a broader vision for integrating these algorithmic principles into the design of trustworthy\, human-centric AI.\n\nBio: Kunhe Yang is a fifth-year PhD candidate in Electrical Engineering and Computer Sciences at the University of California\, Berkeley\, where she is advised by Professor Nika Haghtalab. Her research focuses on the theoretical foundations of AI in human-centered environments by drawing on tools from machine learning theory and algorithmic economics. Her work has been recognized by several awards\, including EECS Rising Star\, invited speaker at the Cornell Young Researchers workshop\, finalist for the Meta Research PhD Fellowship in the Economics and Computation track\, and a SIGMETRICS best paper award.\n\nLocation details: Discovery Building – Researchers’ Link\, 2nd floor of Discovery Building (access through glass doors behind information desk)
URL:https://engineering.wisc.edu/event/ece-rise-ai-seminar-series-kunhe-yang/
LOCATION:Discovery Building\, 330 N. Orchard St.\, Madison\, Wisconsin\, 53715
CATEGORIES:Electrical & Computer Engineering,Seminar
ATTACH;FMTTYPE=image/avif:https://engineering.wisc.edu/wp-content/uploads/2026/02/2026-Faculty-Recruiting-Seminars-Plain-for-website.avif
END:VEVENT
END:VCALENDAR