LLMs as Judges: Survey of Evaluation Methods
Best AI papers explained - A podcast by Enoch H. Kang

This survey explores the increasing use of Large Language Models (LLMs) as evaluators, a paradigm termed "LLMs-as-judges," which has spread across many fields because of its effectiveness and adaptability. The paper examines the paradigm from several angles: functionality (why LLM judges are used), methodology (how to implement them, including single-LLM and multi-LLM systems and human-AI collaboration), applications across diverse domains (from general tasks like translation to specialized areas like law and medicine), and meta-evaluation, i.e., how to assess the judges themselves using dedicated benchmarks and metrics such as accuracy and correlation coefficients. It also addresses significant limitations, including various types of bias (positional, social, cognitive), vulnerability to adversarial attacks, and inherent weaknesses such as knowledge gaps, and concludes with future research directions toward more efficient, effective, and reliable LLM evaluators.
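
The meta-evaluation step mentioned above boils down to scoring the judge itself against human references. A minimal Python sketch of that idea, using hypothetical human ratings and judge scores (the numbers are purely illustrative; scipy is assumed only for the correlation coefficients):

```python
# Minimal sketch of meta-evaluating an LLM judge against human references.
# The ratings below are hypothetical, not from the survey.
from scipy.stats import pearsonr, spearmanr

# Hypothetical 1-5 quality ratings for ten model outputs.
human_scores = [5, 3, 4, 2, 5, 1, 4, 3, 2, 5]
judge_scores = [4, 3, 4, 2, 5, 2, 4, 3, 3, 5]

# Accuracy-style metric: fraction of items where the judge matches the human rating exactly.
agreement = sum(h == j for h, j in zip(human_scores, judge_scores)) / len(human_scores)

# Correlation coefficients: how closely the judge's scores track human judgment.
pearson_r, _ = pearsonr(human_scores, judge_scores)
spearman_rho, _ = spearmanr(human_scores, judge_scores)

print(f"exact agreement: {agreement:.2f}")
print(f"Pearson r: {pearson_r:.2f}, Spearman rho: {spearman_rho:.2f}")
```

In practice, surveys of this kind report such agreement and correlation numbers per benchmark to compare judge configurations, for example a single LLM judge versus a multi-LLM panel.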