TheoryCoder: Bilevel Planning with Synthesized World Models

Best AI papers explained - A podcast by Enoch H. Kang - Tuesdays

This research paper introduces TheoryCoder, a novel reinforcement learning agent that uses large language models (LLMs) to synthesize code-based world models within a bilevel planning framework: high-level symbolic abstractions guide search, while low-level Python transition models ground each abstract step. This design addresses limitations of prior theory-based reinforcement learning by supporting more expressive theories and more scalable planning.

TheoryCoder learns a domain by grounding abstract concepts through LLM-driven program synthesis, and its bilevel planner solves tasks efficiently in complex grid-world environments, including video games. The agent refines its world model through interaction: it compares predicted and actual outcomes, and uses the discrepancies to prompt the LLM for updates. Evaluations demonstrate TheoryCoder's improved sample efficiency and planning capabilities compared to LLM-only agents across a range of challenging domains.
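The discrepancy-driven refinement step described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: `predicted_step` stands in for a synthesized (and deliberately buggy) Python world model, `true_step` for the environment, and the mismatching triples returned by `find_discrepancies` are the kind of evidence the agent would hand to an LLM to revise its theory.

```python
# Hypothetical sketch of TheoryCoder-style model refinement by discrepancy.
# All function names are illustrative; the real agent prompts an LLM with
# mismatching (state, action, outcome) records to rewrite the world model.

MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def predicted_step(state, action):
    # Synthesized transition model on a grid. It is deliberately flawed:
    # it ignores walls, a gap the refinement loop is meant to expose.
    x, y = state
    dx, dy = MOVES[action]
    return (x + dx, y + dy)

def true_step(state, action, walls):
    # Ground-truth environment: moving into a wall leaves the state unchanged.
    cand = predicted_step(state, action)
    return state if cand in walls else cand

def find_discrepancies(trajectory, walls):
    # Compare the model's predictions against observed outcomes; each
    # mismatch is a concrete counterexample for the model-update prompt.
    mismatches = []
    for state, action in trajectory:
        pred = predicted_step(state, action)
        actual = true_step(state, action, walls)
        if pred != actual:
            mismatches.append((state, action, pred, actual))
    return mismatches

walls = {(1, 0)}
trajectory = [((0, 0), "right"), ((0, 1), "up")]
print(find_discrepancies(trajectory, walls))
# The "right" move hits a wall the model doesn't know about, so one
# (state, action, predicted, actual) record is reported.
```

In the full system, planning happens at the symbolic level and the synthesized Python model checks each abstract step; it is these prediction errors, gathered during interaction, that drive the LLM to patch the model.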