EA - My updates after FTX by Benjamin Todd

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My updates after FTX, published by Benjamin Todd on March 31, 2023 on The Effective Altruism Forum.

Here are some thoughts on what I’ve learned from what happened with FTX, and (to a lesser degree) other events of the last six months.

I can’t give all my reasoning, and have focused on my bottom lines. Bear in mind that these updates are relative to my starting point (you could update in the opposite direction if you started in a different place). In the second half, I list some updates I haven’t made.

I’ve tried to make updates about things that could have actually reduced the chance of something this bad happening, or where a single data point can be significant, or where one update entails another.

For the implications, I’ve focused on those for my own work, rather than speculating about the EA community or adjacent communities as a whole (though I’ve done some of that). I wrote most of this doc in January.

I’m only speaking for myself, not 80,000 Hours, Effective Ventures Foundation (UK), or Effective Ventures Foundation USA Inc.

The updates are roughly in logical order (earlier updates entail the later ones), with some weighting by importance / confidence / size of update. I’m sorry it’s become so long – the key takeaways are in bold.

I still feel unsure about how best to frame some of these issues, and how important the different parts are. This is a snapshot of my thinking, and it’s likely to change over the next year.

Big picture, I do think significant updates and changes are warranted. Several people we thought deeply shared our values have been charged with conducting one of the biggest financial frauds in history (one of whom has pled guilty).

The first section makes for demoralising reading, so it’s maybe worth also saying that I still think the core ideas of EA make sense, and I plan to keep practising them in my own life. I hope people keep working on building effective altruism, and in particular, now is probably a moment of unusual malleability to improve its culture, so let’s make the most of it.

List of updates

1. I should take more seriously the idea that EA, despite having many worthwhile ideas, may attract some dangerous people, i.e. people who are able to do ambitious things but are reckless / self-deluded / deceptive / norm-breaking, and so can have a lot of negative impact. This has long been a theoretical worry, but it wasn’t clearly an empirical issue – it seemed like potential bad actors either hadn’t joined or had been spotted and constrained. Now it seems clearly true. I think we need to act as if the base rate is >1%. (Though if there’s a strong enough reaction to FTX, it’s possible the fraction will be lower going forward than it was before.)

2. Due to this, I should act on the assumption that EAs aren’t more trustworthy than average. Previously I acted as if they were. I now think the typical EA probably is more trustworthy than average – EA attracts some very kind and high-integrity people – but it also attracts plenty of people with normal amounts of pride, self-delusion and self-interest, and there’s a significant minority who seem more likely to defect or be deceptive than average. The existence of this minority, combined with the difficulty of telling who is who, means you need to assume by default that someone might be untrustworthy (even if “they’re an EA”).
This doesn’t mean distrusting everyone by default – I still think it’s best to default to being cooperative – but it’s vital to have checks for, and ways to exclude, dangerous actors, especially in influential positions (i.e. trust, but verify).

3. EAs are also less competent than I thought, and have worse judgement of character and competence than I thought. I’d taken financial success as an update in favour of competence; I no longer think this was warranted. I also wouldn’t have predicted that I’d be deceived so thoroughly, so I negatively updated...