Mathematics Data Science Seminar: Ethan Brooks, In-Context Policy Iteration
Speaker: Ethan Brooks, Technical Staff at Reflection AI
Time: Wednesday, March 27, 2:30pm-3:30pm
Place: Virtual
Zoom link:
https://wayne-edu.zoom.us/j/96316494795?pwd=Ylc3M0R0R1BYaUZGSnB2dkI2UFRVQT09
Meeting ID: 963 1649 4795
Passcode: 271178
Title: In-Context Policy Iteration
Abstract:
In this talk, we present In-Context Policy Iteration, an algorithm for performing Reinforcement Learning (RL), in-context, using foundation models. While the application of foundation models to RL has received considerable attention, most approaches at the time of publication relied on either (1) the curation of expert demonstrations (either through manual design or task-specific pretraining) or (2) adaptation to the task of interest using gradient methods (either fine-tuning or training of adapter layers). Both of these techniques have drawbacks. Collecting demonstrations is labor-intensive, and algorithms that rely on them do not outperform the experts from which the demonstrations were derived. Gradient-based techniques, meanwhile, are inherently slow, sacrificing the "few-shot" quality that made in-context learning attractive to begin with. In this work, we present an algorithm, ICPI, that learns to perform RL tasks without expert demonstrations or gradients. Instead, we present a policy-iteration method in which the prompt content is the entire locus of learning. ICPI iteratively updates the contents of the prompt from which it derives its policy through trial-and-error interaction with an RL environment. To eliminate the role of in-weights learning (on which approaches like Decision Transformer rely heavily), we demonstrate our algorithm using Codex, a language model with no prior knowledge of the domains on which we evaluate it.
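For readers unfamiliar with the idea, the sketch below is a minimal, hypothetical illustration of the kind of loop the abstract describes: a buffer of past transitions serves as the prompt, a language-model query scores candidate actions, and the prompt contents are the only thing that changes from episode to episode. The toy environment, the stub scoring function lm_rollout, and all names and hyperparameters are illustrative assumptions, not the actual ICPI implementation or the Codex API.

# Hypothetical sketch of an in-context policy-iteration loop, based only on
# the abstract above. lm_rollout is a stand-in for a foundation-model query.
import random

class ToyEnv:
    """Tiny chain environment: move left/right on states 0..3; reward at 3."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = max(0, min(3, self.s + (1 if a == 1 else -1)))
        r = 1.0 if self.s == 3 else 0.0
        return self.s, r, self.s == 3

def lm_rollout(prompt, state, action):
    """Stub for a language-model-estimated return.
    A real system would format the transitions in `prompt` as text and ask a
    foundation model to predict the outcome of taking `action` in `state`.
    Here we crudely average the rewards of matching past transitions."""
    matches = [r for (s, a2, r) in prompt if s == state and a2 == action]
    return sum(matches) / len(matches) if matches else random.random() * 0.1

def icpi(env, episodes=20):
    prompt = []  # the prompt buffer is the entire locus of learning
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Act greedily with respect to the model-estimated returns.
            a = max((0, 1), key=lambda act: lm_rollout(prompt, s, act))
            s2, r, done = env.step(a)
            # Update the prompt with the newly observed transition.
            prompt.append((s, a, r))
            s = s2
    return prompt

if __name__ == "__main__":
    buffer = icpi(ToyEnv())
    print("collected", len(buffer), "transitions in the prompt buffer")

Note that no weights are updated anywhere in this sketch; all adaptation happens through the growing prompt buffer, which is the property the abstract emphasizes.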