Controllable Safety Alignment (CoSA): A New Approach to AI Safety Standards

Digital Innovation in the Era of Generative AI - A podcast by Andrea Viliotti

The episode analyzes the problem of aligning large language models (LLMs) with safety norms, highlighting the limitations of a uniform approach and introducing the Controllable Safety Alignment (CoSA) framework. CoSA offers an adaptive solution that allows users to configure safety policies during inference, without the need to retrain the model. CoSAlign, the underlying methodology of CoSA, relies on synthetic training data and an error scoring mechanism to ensure compliance with safety configurations. The CoSA-Score, used to assess the model's effectiveness, takes into account both the utility of the responses and their compliance with safety rules. The text emphasizes the advantages of CoSA in terms of customization, risk management, inclusiveness, and user engagement, and presents CoSA as a step forward for safer and more responsible use of large language models.