Societal Frameworks and LLM Alignment

Best AI papers explained - A podcast by Enoch H. Kang - Tuesdays

This academic paper proposes that aligning Large Language Models (LLMs) with human values can be improved by adopting frameworks from societal alignment. The authors frame the interaction between an LLM developer or user (the principal) and the LLM (the agent) as a contract, where alignment challenges stem from the inherent incompleteness of this contract. They argue that lessons from social, economic, and contractual alignment in human societies can provide guidance for navigating this incomplete contracting environment. The paper also addresses the role of uncertainty in LLM alignment and suggests that the under-specified nature of LLM objectives presents an opportunity for more participatory alignment processes, moving beyond purely technical solutions.