EA - Is fear productive when communicating AI x-risk? [Study results] by Johanna Roniger

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is fear productive when communicating AI x-risk? [Study results], published by Johanna Roniger on January 23, 2024 on The Effective Altruism Forum.

I want to share some results from my MSc dissertation on AI risk communication, conducted at the University of Oxford.

TLDR: In exploring the impact of communicating AI x-risk with different emotional appeals, my study of 1,200 Americans revealed underlying factors that influence public perception across several outcomes:

- For raising risk perceptions, fear and message credibility are key.
- To create support for AI regulation, beyond inducing fear, conveying the effectiveness of potential regulation seems to be even more important.
- In gathering support for a pause in AI development, fear is a major driver.
- To prompt engagement with the topic (reading up on the risks, talking about them), strong emotions, both hope- and fear-related, are drivers.

AI x-risk intro

Since the release of ChatGPT, many scientists, software engineers and even leaders of AI companies themselves have increasingly spoken up about the risks of emerging AI technologies. Some voices focus on immediate dangers such as the spread of fake news images and videos, copyright issues and AI surveillance. Others emphasize that beyond immediate harm, as AI develops further it could cause global-scale disasters and potentially even wipe out humanity.

How would that happen? There are roughly two routes. First, malicious actors such as authoritarian governments could use AI, e.g. for lethal autonomous weapons or to engineer new pandemics. Second, if AI becomes more intelligent, some fear it could get out of control and essentially eradicate humans by accident. This sounds crazy, but the people creating AI say the technology is inherently unpredictable and that such a disaster could well happen in the future.

AI x-risk communication

There are now many media articles and videos talking about the risks of AI. Some announce the end of the world, some say the risks are overplayed, and some argue for stronger safety measures. So far, there is almost no research on how effective these articles are at changing public opinion, or on the differences between various emotional appeals.

Study set up

The core of the study was a survey experiment with 1,200 Americans. Participants were randomly allocated to four groups: one control group and three experimental groups, each receiving one of three articles on AI risk. All three versions explain that AI seems to be advancing rapidly and that future systems may become so powerful that they could lead to catastrophic outcomes when used by bad actors (misuse) or when getting out of control (misalignment). The fear version focuses solely on the risks; the hope version takes a more optimistic view, highlighting promising risk mitigation efforts; and the mixed version combines the two, transitioning from fear to hope. After reading the article, I asked participants to indicate the emotions they felt while reading (as a manipulation check and to separate the emotional appeal from other differences between the articles) and to state their views on various AI risk topics. The full survey, including the articles and the questions, can be found in the dissertation on page 62 and following (link at the bottom of the page).
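To make the design concrete, here is a minimal, hypothetical Python sketch of this kind of set-up: balanced random allocation of 1,200 participants to the four conditions, followed by a simple comparison of mean risk-perception ratings per group. The condition labels, seed and simulated ratings are my own illustrative assumptions, not the materials or analysis code from the dissertation.

```python
# Hypothetical sketch of a four-condition survey experiment like the one
# described above. All names and numbers here are illustrative assumptions.
import random
from statistics import mean

CONDITIONS = ["control", "fear", "hope", "mixed"]


def assign_conditions(n_participants: int, seed: int = 0) -> list[str]:
    """Allocate participants to the four conditions in equal numbers, then shuffle."""
    assert n_participants % len(CONDITIONS) == 0
    allocation = CONDITIONS * (n_participants // len(CONDITIONS))
    random.Random(seed).shuffle(allocation)
    return allocation


def mean_rating_by_condition(conditions: list[str], ratings: list[int]) -> dict[str, float]:
    """Average the 1-7 risk-perception ratings within each condition."""
    grouped: dict[str, list[int]] = {c: [] for c in CONDITIONS}
    for condition, rating in zip(conditions, ratings):
        grouped[condition].append(rating)
    return {c: round(mean(r), 2) for c, r in grouped.items()}


if __name__ == "__main__":
    conditions = assign_conditions(1200)
    # Stand-in for real survey responses: uniform random ratings on the 1-7 scale.
    rng = random.Random(1)
    ratings = [rng.randint(1, 7) for _ in conditions]
    print(mean_rating_by_condition(conditions, ratings))
```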
Findings

Overview of results

1. Risk perception

To measure risk perception, I asked participants to rate the level of AI risk (both existential risk and large-scale risk) on a scale from 1 (extremely low) to 7 (extremely high), with a midpoint of 4 (neither low nor high). In addition, I asked participants to estimate the likelihood of AI risk (existential risk and large-scale risk, both within 5 years and within 10 years, modelled after the rethink prio...