Decision Awareness
in Reinforcement Learning

Workshop at the International Conference on Machine Learning (ICML) 2022

July 22, HALL G T2500 at the Baltimore Convention Center

@DARL_ICML · #DARL_workshop_ICML2022

Invited Speakers

Junhyuk Oh is a research scientist at DeepMind. He obtained a Ph.D. at the University of Michigan, co-advised by Honglak Lee and Satinder Singh. Junhyuk's research broadly focuses on deep reinforcement learning. Some of his work has been covered by MIT Technology Review, Daily Mail, and VentureBeat.
Brandon Amos is a research scientist at Facebook AI Research (FAIR) in NYC. He focuses on integrating structural information and domain knowledge into learning systems to represent non-trivial reasoning operations. A key theme of his work in this space involves the use of optimization as a differentiable building block in larger architectures that are learned end-to-end.
Mengdi Wang is an associate professor at Princeton University. Mengdi’s research group studies the statistical and algorithmic foundation of reinforcement learning and sequential decision-making, as well as their applications.
Christopher Grimm is a research scientist at DeepMind. He obtained a Ph.D. in Computer Science and Electrical Engineering at the University of Michigan, advised by Satinder Singh. Prior to that, he was an undergraduate research assistant in Michael L. Littman's lab at Brown University. Chris has contributed to decision-aware RL through fundamental published work on model learning.
Erin Talvitie is an associate professor of Computer Science at Harvey Mudd College. She graduated from Oberlin College in 2004 with majors in Computer Science and Mathematics and received her Ph.D. from the University of Michigan in 2010. She was a founding member of the Department of Computer Science at Franklin & Marshall College before moving to Harvey Mudd College in 2019. Her research focuses on reinforcement learning, specifically on understanding how autonomous agents can learn to act flexibly and competently in complicated, unknown environments. Her NSF CAREER grant, "Using Imperfect Predictions to Make Good Decisions," has funded recent work on model-based reinforcement learning in the case where the agent's model class is insufficient to capture the true environment dynamics.
Louis Kirsch is a fourth-year Ph.D. student at the Swiss AI Lab IDSIA, advised by Prof. Jürgen Schmidhuber. He received his MRes in Computational Statistics and Machine Learning from University College London (ranked first) and interned at DeepMind. His research focuses on meta-learning general-purpose learning algorithms. His work includes MetaGenRL, which meta-learns reinforcement learning algorithms as objective functions, and VSML, which discovers novel general learning algorithms that do not rely on backpropagation. Louis co-organized the BeTR-RL workshop at ICLR 2020 and the NERL workshop at ICLR 2021, and was an invited speaker at Meta Learn NeurIPS 2020. He has also won several GPU compute awards on the Swiss national supercomputer Piz Daint.