GEPA: Generative Feedback for AI System Optimization
Best AI papers explained - A podcast by Enoch H. Kang

This paper introduces GEPA (Genetic-Pareto), a novel prompt optimizer for large language models (LLMs) that significantly outperforms both reinforcement learning (RL) methods such as GRPO and prior prompt optimizers such as MIPROv2. GEPA achieves this by combining natural language reflection over system-level execution trajectories with a Pareto-based, multi-objective evolutionary search, allowing it to learn from far fewer rollouts (trial executions). The research demonstrates GEPA's superior sample efficiency and robust generalization across tasks including question answering and fact extraction, while also producing shorter, more computationally efficient prompts. The approach offers a practical way to optimize complex AI systems under data or budget constraints.
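To make the two ingredients named above concrete, here is a minimal sketch of how a reflection-plus-Pareto optimization loop might look. This is illustrative only, not the paper's actual implementation or API: `evaluate` and `reflect_and_mutate` are hypothetical stubs (in GEPA the reflection step would be an LLM call that reads rollout traces and proposes an edited prompt), and the Pareto set here is the simple idea of keeping every candidate that is best on at least one task.

```python
# Illustrative sketch of a Pareto-based, reflection-driven prompt optimizer.
# All function names and details are assumptions for exposition, not GEPA's API.
import random
from dataclasses import dataclass, field


@dataclass
class Candidate:
    prompt: str
    scores: dict = field(default_factory=dict)  # task_id -> score


def evaluate(prompt: str, task_id: int) -> tuple[float, str]:
    """Stub: run the system once on a task, return (score, execution trace)."""
    return random.random(), f"trace of prompt on task {task_id}"


def reflect_and_mutate(prompt: str, traces: list[str]) -> str:
    """Stub for the reflection step: in practice an LLM would read the traces
    in natural language and propose a targeted edit to the prompt."""
    return prompt + " [revised after reflecting on failure traces]"


def pareto_front(pool: list[Candidate], task_ids: list[int]) -> list[Candidate]:
    """Keep every candidate that is best on at least one task, so diverse
    strengths survive instead of a single 'best average' prompt."""
    keep = {id(max(pool, key=lambda c: c.scores.get(t, 0.0))) for t in task_ids}
    return [c for c in pool if id(c) in keep]


def optimize(seed_prompt: str, task_ids: list[int], budget: int) -> Candidate:
    pool = [Candidate(seed_prompt, {t: evaluate(seed_prompt, t)[0] for t in task_ids})]
    for _ in range(budget):
        # Pareto-based selection: sample a parent from the current frontier.
        parent = random.choice(pareto_front(pool, task_ids))
        # Collect a few rollout traces, then reflect on them to mutate the prompt.
        minibatch = random.sample(task_ids, k=min(2, len(task_ids)))
        traces = [evaluate(parent.prompt, t)[1] for t in minibatch]
        child = Candidate(reflect_and_mutate(parent.prompt, traces))
        child.scores = {t: evaluate(child.prompt, t)[0] for t in task_ids}
        # Keep the child only if it does not regress overall.
        if sum(child.scores.values()) >= sum(parent.scores.values()):
            pool.append(child)
    return max(pool, key=lambda c: sum(c.scores.values()))
```

Because each mutation is guided by explicit, language-level feedback rather than a scalar reward alone, a loop like this can plausibly make progress with far fewer rollouts, which is the sample-efficiency argument the episode highlights.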