EA - "Doing Good Best" isn't the EA ideal by Davidmanheim

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Doing Good Best" isn't the EA ideal, published by Davidmanheim on September 16, 2022 on The Effective Altruism Forum.

Holden recently claimed that EA is about maximizing, but that EA doesn't suffer very much because we're often not actually maximizing. I think both parts are incorrect. I don't think EA requires maximizing, and it certainly isn't about maximizing in the naïve sense in which it often occurs. It is also my view that Effective Altruism as a community has in many or most places gone too far toward this type of maximizing view, and that this is causing substantial damage. Holden thinks we've mostly avoided the issues, and while he's right that many possible extreme problems have been avoided, I think we have in fact done poorly because of a maximizing viewpoint.

Is EA about Maximizing?

I will appeal first to Will MacAskill's definition. Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding 'the good' in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.

Part (i) is obviously at least partially about maximizing, in Will's view. But it is also tentative and cautious rather than binary - so even if there is a single maximum, actually doing part (i) well means being very cautious about thinking we've identified that single peak. I also think it's easy to incorrectly read this as appealing to utilitarian notions rather than beneficentric ones. Utilitarianism is maximizing, but EA is about maximizing only with the resources dedicated to that goal. It does not need to be totalizing, and interpreting it as "just utilitarianism" is wrong.
Further, I think many community members are unaware of this, which I see as a critical distinction. But more importantly, part (ii), the actual practice of effective altruism, is not defined as maximizing. Very clearly, it is instead pragmatic. And pragmatism isn't compatible with much of what I see in practice when EAs take a maximizing viewpoint.

That is, even according to views where we should embrace fully utilitarian maximizing - again, views that are compatible with, but not actually embraced by, effective altruism as defined - optimizing before you know your goal works poorly. Before you know your goal exactly, moderate optimization pressure toward even incompletely specified, imperfectly understood goals usually improves things greatly. That is, you can absolutely do good better even without finishing part (i), and that is what effective altruism has been doing and should continue to do. But at some point, continued optimization pressure has rapidly declining returns. In fact, over-optimizing can make things worse, so when looking at EA practice, we should be very clear that it is not about maximizing, and should not be.

Does the Current Degree of Maximizing Work?

It is possible in theory for us to be benefitting from a degree of maximizing, but in practice I think the community has often gone too far. I want to point to some of the concrete downsides and explore how maximizing has been, and is currently, damaging to EA. To show this, I will start with exclusivity and elitism, go on to lack of growth, then to narrow vision, and then to the focus on current interventions. Given that, I will conclude that the "effective" part of EA is pragmatic, and fundamentally should not lead to maximizing, even if you were to embrace a (non-EA) totalizing view.

Maximizing and Elitism

The maximizing paradigm of Effective Altruism often pushes individuals toward narrow goals, ranging from earning to give, to AI safety, to academia, to US or international policy.
This is a way for individuals to maximize impact, but leads to elitism, because very few people are ideal for any giv...