Methinks effective altruism has been tarred with a wide brush

The arrest and conviction of crypto criminal Sam Bankman-Fried has given effective altruism a bad name. That’s unfortunate, as it is a movement worthy of support. Some might argue that the best use of charitable expenditures is the local or national ballet or their cat, and that at any rate it isn’t our business: charity is about how it makes the giver feel, and they have a right to do what they want with their money.

Look, most of philanthropy is devoted to enticing well-heeled donors to give more, never mind on what. So it’s not like actual alternatives to effective altruism are especially compelling.

Time to define our terms. Effective altruism (EA) is a charitable endeavor that uses evidence and reason to figure out how to benefit others as much as possible, and takes action on that basis. So basically it’s a merging of empiricism and utilitarianism. Again, there are lots of potential motives for charity: the Lord’s command, public relations, a warm fuzzy feeling, or the acclaim of your peers. But if a person wants to actually do good and is of a scientific mindset, then they will naturally be led to evidence-based practice, or EA.

Now to address some concerns.

EA isn’t new. Firstly, who cares? Secondly, EA’s sort of framing isn’t typical in the philanthropic community. Yes, Bill Gates deserves accolades for his evidence-based approach, and furthermore for grasping the greater humanitarian benefits of giving in the third world. While the Gates Foundation does give domestically, it’s a step away from the typical unfocused broad-portfolio approach of foundations like Ford or Rockefeller.

EA takes things a step further, putting charitable programs in head-to-head competition. Criteria include evidence of impact, cost-effectiveness, and whether the program operates in a crowded donor space relative to need. Over at Givewell, they currently have a total of four organizations on their top-charity list. That’s not a high number, and these aren’t multi-billion-dollar operations.

Cite? I have one. Givewell provides links so you can donate to their recommended charities directly. If you donate to their top charities fund (4 beneficiaries) they charge a very modest and very reasonable fee of zero percent. Oh sorry, they do charge a credit card fee. That’s it.

You can also donate to their unrestricted fund, which they are forthright about:

We generally use unrestricted funding for operating expenses (which includes staff salaries, travel expenses, website maintenance, and other routine operational costs). However, we have an “excess assets” policy, which provides that once we surpass a certain level of unrestricted assets, we earmark the excess for granting rather than continue to hold it ourselves. You can read more about our excess assets policy here.
To avoid overly relying on a single source of support, we cap the amount of funding any individual or entity may provide to our operations at 20 percent.

Compare this policy to the entirely emotional appeals of Save the Children, or the nonempirical but possibly worthy efforts of Oxfam et al., or, hell, any charitable junk-mail solicitation. I say EA is better. Givewell publishes a blog if anyone wants to tease out some of their objectionably utilitarian behavior or argument.

There are other EA organizations, of course: Givewell is just the one I’m relatively familiar with (not that I visit their website often). Feel free to provide other links.

Where’s the debate? “Those wishing to do the greatest amount of good with their charitable dollars should use evidence to decide which organizations will do the greatest amount of good with their charitable dollars. That implies an EA approach.”

I think that while its aims are laudable, there are no realistic means by which to carry out the EA project, because it essentially depends on the very systemic inequality that would need to be eliminated to meaningfully alleviate suffering across the world. You can only really ‘earn to give’ if you’re capable of harvesting a significant surplus in value; but that requires structures that allow such a disparity, and that’s just what creates most of the suffering around the world in the first place. EA seeks to exploit wealth gradients to alleviate grievances largely created by the existence of those gradients.

Thus, EA effectively creates an impetus towards perpetuating the techno-capitalist structure that has produced the need for cheap labor and resources, which leads to the exploitation and despoliation of both people as workforce and nature as a source of raw material. A better way would involve remodeling this structure towards a more equitable system, but in such a system, the sort of high earners capable of implementing Singer’s self-taxation would probably largely be absent. EA tells you to be a good cog in the consumerist machinery to try and fix the problems caused by that very machinery: it’s a bit of a ‘war for peace’ kind of situation.

Furthermore, the sum of locally optimal strategies rarely equals a good global strategy. It might be that the most bang you can get for your buck (the most QALYs per dollar) is buying mosquito nets for regions in need; but there is a bound on how many donors can successfully pursue that strategy before you’re drowning in mosquito nets.
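That saturation effect can be sketched with a toy model (all numbers here are made up for illustration, not taken from Givewell or anyone else): if marginal QALYs per dollar fall as an intervention absorbs funding, a globally sensible allocation eventually diversifies away from whatever looked best at the start.

```python
# Toy model with hypothetical numbers: each intervention is
# (initial QALYs per dollar, saturation scale in dollars), and its
# marginal effectiveness falls as it absorbs more funding.

def marginal_qaly(intervention, funded):
    base, capacity = intervention
    return base / (1 + funded / capacity)  # diminishing returns

def allocate(budget, interventions, step=1_000):
    """Greedily fund whichever intervention is currently most effective."""
    funded = [0] * len(interventions)
    for _ in range(budget // step):
        best = max(range(len(interventions)),
                   key=lambda i: marginal_qaly(interventions[i], funded[i]))
        funded[best] += step
    return funded

# Nets start out better (0.02 QALY/$) but saturate fast; a second,
# initially worse intervention (0.01 QALY/$) has far more room.
nets, other = (0.02, 50_000), (0.01, 500_000)
print(allocate(1_000_000, [nets, other]))  # both end up funded
```

Every individual donor picking the single “best” charity is the degenerate version of this; the toy model just makes explicit why “room for more funding” has to enter the calculation.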

Also, it misplaces the burden of restoration. It’s a bit like the carbon footprint, which seeks to saddle the individual with the responsibility of living a climate-neutral life, rather than the companies and governments who are overwhelmingly responsible for the environmental burden humanity has imposed on the planet. Here, too, the individual is tasked with remediating failures on a systemic level; which leaves little to no impetus for the systemic change that is really necessary to get out of this quagmire in the long term.

I’m also not convinced that the strategy of arguing people into giving is really well attuned to human reality. After all, Singer et al have been making similar arguments for close to 50 years now, albeit under various different brandings, and we haven’t yet all set up some tidy little donation portfolios. I think the issue here, and with many of the ‘rationalist’ and adjacent projects, is that rationality commands no action: when we act, we do so on the basis of moral emotion, of value judgement, of visceral need. That’s why we save the kid drowning in the pond before us, but don’t feel the same need to make some equivalent donations according to a moral calculus: the emotional investment isn’t there. EA essentially ignores this, and draws up rules on how we ought to be; but I don’t think any strategy that ignores the brute reality of what sort of being we humans are is likely to be successful.

It would be nice if we could use the system as it is set up to skim off the excess wealth and make the world a better place, acting from the point of view of enlightened rationality. But I don’t think this is going to work: it is the system itself that needs reform, and any hope for doing so needs to take the limited human vantage into account.

It’s not that EA is wrong or bad, but it is an incomplete approach. The impact of many worthwhile charitable activities is difficult or impossible to measure objectively with scientific precision, not to mention the cost of such evidence gathering.

Randomized controlled trials (RCTs) are generally acknowledged as the best way to measure impact, but in many cases such trials are wildly impractical. Let’s say my charity teaches art to elementary students in an impoverished community. Enormous good could ultimately come out of that: improved self-esteem, higher performance across a range of academic subjects, lower drop-out rates, and who knows, maybe even a wonderful artist or two. But those are impacts that take years to develop, and teasing out the exact cause of individual success stories is impossible: maybe it was the art classes, maybe it was more nutritious school lunches, maybe it was a kindly mentor, or maybe it was a combination of factors.

I had an interesting discussion with an engineer turned nonprofit manager about this not too long ago. His feeling is that the Silicon Valley nerd approach to analyzing all aspects of life has had an unreasonable amount of influence on spheres where it doesn’t really belong. I tend to agree. Not only are RCTs often out of the question, I’ve observed a number of foundations make half-baked demands for concrete evidence of funding results (“list three quantifiable impacts your activities will have and specify the percentage of improvement you will achieve”) in situations where it makes no sense and simply wastes the time of grant writers and project administrators, who are forced to jump through idiotic hoops.

In such cases, I get the distinct sense that a bunch of mid-level grant managers were sent off to a three-day workshop on M&E (or as the newer buzz phrase goes, AME - Assessment, Monitoring, & Evaluation) and came back determined to zealously implement what they learned, regardless of their actual understanding of its applicability.

Two excellent replies.

I’ll just add that one of the difficulties I have with effective altruism is that it’s a very slippery concept. We could be talking about:

  • effective altruism: the idea that if one is going to give to, say, an anti-poverty charity, one should use evidence to select the specific anti-poverty charity that will use the money to do the most good
  • Effective Altruism: a general approach to philanthropy which includes the above but also notions of earn to give, strictures about what does and does not count as evidence and what does and does not count as ‘good’.
  • Effective Altruism: the organisation or cluster of organisations dominated by Oxford philosophers and tech billionaires which control vast amounts of potential philanthropic funding and which appear to be following what they would call the logic of lower-case effective altruism in a move away from funding mosquito nets in poor parts of Africa and towards ‘long-termism’-motivated investment in AI research and also in medieval abbeys.

There are various critiques that can be made of all of these versions of (E/e)ffective (A/a)ltruism but one of the more challenging things about engaging with them is that there is often a motte and bailey effect in action here. Indeed, EA-as-institutions can be and has been sharply critiqued from the perspective of effective altruism as a giving heuristic. But at the same time, there is a clear evolution of ideas and practice from one to the other such that it’s difficult to separate - and how could it not be, given that the adherents are at pains to claim that pure logic leads from one to the next.

I appreciate that the OP is, as it were, retreating to the “motte” of “it’s about using evidence to decide which charities to fund”, but there are still serious criticisms to be made about the theory and practice. E.g.

Is the evidence to be used solely to decide between charities with particular beneficiaries and goals, or should evidence be used to decide what the beneficiaries and goals of charitable giving should be? If the latter, are we convinced that this is something that can be calculated in an objective and value-free way?

What evidence counts, and why? What is the time frame over which we should judge an intervention’s effectiveness? What about second- or third-order effects (e.g. the undermining of government institutions through reliance on aid, or the creation of targets for violent theft via cash incentives)? Is absence of evidence of effectiveness the same as evidence of absence of effectiveness? What if, for example, a charity can show that running an RCT in the war-torn country it is trying to help is a practical impossibility: should it be denied funding for lack of evidence or not?

There’s an excellent essay here on the difficulties with the simple EA approach, with specific examples about Givewell’s use of evidence:

In fact, even when GiveWell reports harmful side effects, it downplays and elides them. One of its current top charities sends money into dangerous regions of Northern Nigeria, to pay mothers to have their children vaccinated. In a subsection of GiveWell’s analysis of the charity, you’ll find reports of armed men attacking locations where the vaccination money is kept—including one report of a bandit who killed two people and kidnapped two children while looking for the charity’s money. You might think that GiveWell would immediately insist on independent investigations into how often those kinds of incidents happen. Yet even the deaths it already knows about appear nowhere in its calculations on the effects of the charity.

And more broadly, GiveWell still doesn’t factor in many well-known negative effects of aid. Studies find that when charities hire health workers away from their government jobs, this can increase infant mortality; that aid coming into a poor country can increase deadly attacks by armed insurgents; and much more. GiveWell might try to plead that these negative effects are hard to calculate. Yet when it calculates benefits, it is willing to put numbers on all sorts of hard-to-know things.

I think this is the essence of the critique of effective altruism: it starts with a useful criterion for giving, but falls into the trap that “only that which can be counted, counts”, and thus ignores any complexities or perspectives which reject the idea that morality can be worked out by an algorithm.

OK, I haven’t researched them thoroughly, but let’s say for the sake of argument that Givewell is a legit organization that’s really doing what they’re saying they’re doing. What does that have to do with anything? They’re not Effective Altruism. I can’t find them using that term anywhere, and they don’t seem to be connected with the Effective Altruism movement at all.

The fact that Effective Altruism is a scam doesn’t mean that organizations like Givewell are scams. Why would it?

This gets into the question above of whether “effective altruism” is an organisation, a movement, a philosophy, an approach, a set of institutions, etc. I think it’s inarguable, though, that the core idea behind Givewell (evidence of effectiveness should guide philanthropy) is also a core idea of effective altruism.

Beyond that, here is a Givewell blog being explicit about the connection:

and here is this month’s Effective Altruism forum “EA Organization Updates” which lists Givewell as an EA org:

I think the OP is clearly talking about the philosophy of effective altruism, and I don’t think there’s any doubt that that’s the philosophy that motivates Givewell. Also, among people who identify as trying to act on the philosophy of effective altruism (and I know a group of them who, like the OP, are frustrated with the bad rap SBF gave it), Givewell is a popular resource.

Also, this is very much “citation needed”. Oxfam and Save the Children absolutely put their work through evaluation; they publish this on their website, and you can find it very easily if you care to look.

If their mass-market charity appeals are built around emotion rather than utilitarian calculus, that might well be an evidence-based decision based on what gets people to give.

Agreed.

There are two elements to the bad rap SBF gave EA. The first is: he spent a lot of time talking about how EA shaped his career and worldview and was his main motivation for earning money, and then he turned out to be a lying grifter. This made people skeptical of other EA evangelists who are not themselves guilty of billion dollar fraud. That’s on him.

The second is: a lot of very serious people who were and are senior within the EA movement and institutions came out and publicly said words to the effect of: this guy is the real deal, a hero of rationality, you should heed his sage counsel, we - people who set great store by the use of evidence and believe, among other things, that the consequences of actions can be calculated well into the future - absolutely see this guy as doing good in the long-term. And then he turned out entirely predictably to be a lying grifter, calling into question the extent to which the EA movement is still in touch with ideals vs the extent to which it has essentially been taken over by self-serving techbros and naive idealists. And that’s on them.

I always understood it to be a philosophy more than some sort of rubric to be rigorously applied when considering where to give.

I mean, those St. Jude commercials are certainly heartstring-tugging, but it may well turn out that their cure rate (or however that’s evaluated) isn’t great, and some other unsexy charity may be doing work that helps more people more profoundly.

So as a donor, you have some obligation to actually evaluate things on a rational basis rather than just letting bald kids ringing bells, or sad dogs and Sarah McLachlan, drive your donating habits.

Once you get past that basic concept, it starts running afoul of measurement issues and issues with near vs. far and issues of short term vs. long term, and a lot of people are uncomfortable with some of the tradeoffs this might cause, because they often seem to require some short-term harshness for maximal long term benefit.

@spiceweasel is in this line of work and might have some useful insight for us all.

Yes, I think most people are more comfortable donating near than far. And that’s rational. You have a much better idea of the actual impact of your donation to the local soup kitchen than to a project halfway around the world.

And the time frames of some of the extreme “effective altruists” are so large that they can justify a lot of bad for a hypothetical future good. That was certainly an issue with some of what SBF advocated.

I get hung up on the way that EA seems to prioritize the future long-term good over what amounts to short-term callousness.

For example, I’m pretty sure that a plan where you could feed 3/4 of the people in a given area NOW, but not set them up with any future seed, would be considered sub-optimal in EA vs. some method that would let 1/3 starve but set up the remaining 2/3 with seed corn for the future.

In the first case 1/4 die; in the second, 1/3 do. The first feeds more people now; the second feeds fewer, but for longer. It’s the harsh acknowledgement that an extra 1/12 will be deliberately left to starve in the second example. I mean, I get the math behind it: you accept 1/12 more deaths so that 2/3 will be set up indefinitely with food, but it’s still harsh.
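For the record, the fractions work out like this (population normalized to 1, just spelling out the arithmetic in the example above):

```python
from fractions import Fraction

# Scenario 1: feed 3/4 of the population now, no seed for the future.
deaths_now = 1 - Fraction(3, 4)   # 1/4 starve

# Scenario 2: let 1/3 starve, set up the remaining 2/3 with seed corn.
deaths_seed = Fraction(1, 3)      # 1/3 starve

# Extra deaths accepted in scenario 2 for the long-term payoff.
extra = deaths_seed - deaths_now
print(extra)  # 1/12
```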

This is what I understood the real problem with EA was. Basically, an excuse to fund someone’s favorite sci-fi-like project rather than helping actual living people today.

I imagine most (non-scam) charities try to be as effective as possible, and work to reduce fraud and to make the most of their efforts.

I kind of don’t understand this issue. When I give to a charity, it’s because I support the work that particular charity is doing. I feel no compulsion to fix all the problems of the world, and I certainly don’t feel obligated to check how its returns compare to other charities’.

Real life example: I occasionally give to our local public library foundation, because I want to support our libraries. What difference does it make if the library foundation in a city 200 miles away is more efficient?

It’s letting the perfect become the enemy of the good, which a lot of things boil down to in the end.

It’s easy to chide somebody for not being maximally efficient with their charitable giving but if the alternative is no giving at all, how is that any better?

Well, I’d like those people 200 miles away to have a functional library, too. :woman_shrugging:

OK, I will have a go at answering this with different levels of EA thought, as I understand it:

  1. Choose the beneficiaries of your donations based on what matters to you, then give in the most effective way to support the beneficiaries you wish to support: you are altruising effectively! Your cause is “supporting your local public libraries”. There is literally no more effective way of doing this than to donate to the foundation. You could, for example, volunteer with them, but unless you are a trained librarian or have other skills they need, you and they are much better off with the current option, whereby you earn money with the skills you have and give it to them to spend on skilled staff and whatever the skilled staff decide is necessary.
  2. Choose the beneficiaries of your donations based on an objective assessment of need that is not tied to subjective concerns such as immediacy or locality: Ineffective altruising! There is no moral difference between a bad thing that happens in front of you and a bad thing ten thousand miles away. In this case, your choosing to support your local public libraries over public libraries 200 miles away is irrational. If you want to support public libraries you should find the libraries where your donation can do the most good and donate there.
  3. Choose the beneficiaries your donations will support with an objective assessment of methodology as well as need: Even less effective altruising! Just as the early days of EA were largely a debate about the merits of de-worming pills vs cash transfers vs mosquito nets for achieving the goal of improving lives in the developing world, so too might we need to scrutinise the purpose of your donations. [This is becoming speculative, but it is purely for illustration]. If your support of libraries is because you value e.g. literacy or promoting reading, it would be fair under EA to ask if donating to any library is the best way to achieve your actual goal. Maybe you would get better results supporting literacy programmes, or a scheme to make reading cool. We won’t know until we carry out a number of RCTs and digest the evidence.
  4. Choose the cause you support based on utilitarian principles: Woefully ineffective altruising! Your selection of “the work of libraries” as a cause can only be justified by weighing up the ultimate good it generates compared to other causes you could support, such as the environment, global poverty, animal welfare etc. This takes us beyond RCTs and into the realms of moral philosophy, as we work out what we actually mean by “doing good”.

I am, after a point, somewhat sceptical about EA, but I hope that’s a reasonably fair (as in, showing both the good and the bad) illustration of EA thinking and how it applies to your choice to donate.

My point is the above is the only realistic way to get people to donate to charity at all.

Taking St. Jude Children’s Research Hospital as an example, yes, they help cure children. But they already have billions of dollars in reserve. They need money in the same way that Harvard does; both already have lots of it. If your goal is to save children’s lives, perhaps there is a third-world hospital (in Mumbai or Lagos, perhaps) that can save more lives with the same money?

Incidentally, the first I heard of effective altruism was in this Washington Post article.