EA - Abuse in LessWrong and rationalist communities in Bloomberg News by whistleblower67
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abuse in LessWrong and rationalist communities in Bloomberg News, published by whistleblower67 on March 7, 2023 on The Effective Altruism Forum.

This is a linkpost. Try non-paywalled link here.

Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad Behavior

Sonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction that reimagines the boy wizard as a rigid empiricist. This rational Potter tests his professors’ spells with the scientific method, scoffs at any inconsistencies he finds, and solves all of wizardkind’s problems before he turns 12. “I loved it,” says Joseph, who read HPMOR four times in her teens. She was a neurodivergent, ambitious Indian American who felt out of place in her suburban Massachusetts high school. The story, she says, “very much appeals to smart outsiders.”

A search for other writing by the fanfic’s author, Eliezer Yudkowsky, opened more doors for Joseph. Since the early 2000s, Yudkowsky has argued that hostile artificial intelligence could destroy humanity within decades. This driving belief has made him an intellectual godfather in a community of people who call themselves rationalists and aim to keep their thinking unbiased, even when the conclusions are scary. Joseph’s budding interest in rationalism also drew her toward effective altruism, a related moral philosophy that’s become infamous by its association with the disgraced crypto ex-billionaire Sam Bankman-Fried. At its core, effective altruism stresses the use of rational thinking to make a maximally efficient positive impact on the world. These distinct but overlapping groups developed in online forums, where posts about the dangers of AI became common. But they also clustered in the Bay Area, where they began sketching out a field of study called AI safety, an effort to make machines less likely to kill us all.

Joseph moved to the Bay Area to work in AI research shortly after getting her undergraduate degree in neuroscience in 2019. There, she realized the social scene that seemed so sprawling online was far more tight-knit in person. Many rationalists and effective altruists, who call themselves EAs, worked together, invested in one another’s companies, lived in communal houses and socialized mainly with each other, sometimes in a web of polyamorous relationships. Throughout the community, almost everyone celebrated being, in some way, unconventional. Joseph found it all freeing and exciting, like winding up at a real-life rationalist Hogwarts. Together, she and her peers were working on the problems she found the most fascinating, with the rather grand aim of preventing human extinction.

At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait” (rat as in rationalist). She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws, unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.

“In an ideal world, the community would have had some serious discussions about sexual assault policy and education: ‘What are our blind spots? How could this have happened? How can we design mechanisms to pr...
