May 1st | Discover how amortized optimization accelerates solvers using past solutions and problem similarities. Learn its applications in RL, Bayesian optimization, and AI-driven computational biology.

Amortized optimization webinar

Date & Time: May 1, 2025, 15:00–15:45 CEST (UTC+2)

Registration: sign up to receive your link to the event

Speaker: Brandon Amos, Research Scientist, Meta Fundamental AI Research (FAIR), NYC

Abstract: Amortized optimization uses learning to augment and accelerate optimization solvers by exploiting the predictability of past solutions and the similarity between problem instances. It is widely deployed across fields such as reinforcement learning, variational inference, and meta-learning, and it improves upon classical solvers by many orders of magnitude. The first part of this talk will briefly overview the basic foundations of amortization, focusing on policy learning in RL and control as the main application. At a general level, classical optimization solvers are akin to Kahneman's system-2 style of slow thinking, solving each problem from scratch every time; the foundations here show how amortization can be seen as a system-1 style of fast thinking, learned and distilled from past experience. With these foundations in hand, we will turn to how amortization can improve solvers for acquisition functions in Bayesian optimization, e.g., as in the paper Learning to Learn without Gradient Descent by Gradient Descent. We will conclude with a discussion of what amortization could bring to the AI and computational biology space when repeatedly solved optimization problems arise, as in Wasserstein Flow Matching and Meta Flow Matching for modeling cellular dynamics with flows conditioned on patient and drug-treatment information.
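To make the pattern concrete ahead of the talk, below is a minimal sketch of amortized optimization (our illustration, not material from the talk): a small network is trained to map a problem's context directly to its solution, so a single forward pass replaces a from-scratch inner optimization. The quadratic objective, the matrix A, and the MLP architecture are all illustrative assumptions.

```python
# A minimal sketch of amortized optimization (illustrative, not from the talk).
# Problem family: for a context vector x, solve
#   y*(x) = argmin_y f(y; x),  with  f(y; x) = 0.5 * ||A y - x||^2.
# The classical solver runs gradient descent from scratch for every x; the
# amortized model is a network trained to map x directly to y*(x).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 8
A = torch.randn(d, d) / d**0.5 + torch.eye(d)  # fixed, well-conditioned problem data

def f(y, x):
    """Objective for problem instance(s) x, batched over rows."""
    return 0.5 * ((y @ A.T - x) ** 2).sum(dim=-1)

def solve_from_scratch(x, steps=500, lr=0.1):
    """System-2 baseline: gradient descent on f for a single instance."""
    y = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        f(y, x).sum().backward()
        opt.step()
    return y.detach()

# Amortization model: predicts the solution directly from the context x.
model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Objective-based training: minimize E_x[f(model(x); x)] over a distribution
# of problem instances -- no ground-truth solutions are needed.
for _ in range(2000):
    x = torch.randn(128, d)  # sample a batch of problem instances
    loss = f(model(x), x).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, one forward pass replaces an inner optimization loop.
x_test = torch.randn(1, d)
y_slow = solve_from_scratch(x_test)
y_fast = model(x_test).detach()
print(f"from-scratch objective: {f(y_slow, x_test).item():.4f}")
print(f"amortized objective:    {f(y_fast, x_test).item():.4f}")
```

Note the training loss here is the objective itself evaluated at the model's prediction, so no precomputed solutions are required; regressing onto solutions from a classical solver is the common alternative.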

About the speaker: Brandon Amos is a Research Scientist in Meta's Fundamental AI Research (FAIR) group in NYC. He holds a PhD in Computer Science from Carnegie Mellon University, where he was supported by a U.S. National Science Foundation Graduate Research Fellowship (NSF GRFP). Prior to joining Meta, he worked at Adobe Research, Google DeepMind, and Intel Labs. His research interests are in machine learning and optimization, with a recent focus on reinforcement learning, control, transport, flows, and diffusion.