Methinks effective altruism has been tarred with a wide brush

Excellent replies in this thread: thanks.

The OP contends that EA is awesome and we should all genuflect. Support for that contention is mixed. Stanislaus is correct that you can define it in many ways and that approaches include the philosophical, sociological, and institutional. Here’s one perspective on the pro-side.

Say you set aside a share of your income every year for charity and you have at least vaguely utilitarian inclinations. That second assumption is key: there are lots of reasons to give to charity, and getting the most bang for the buck is only one consideration. We’ve debated utilitarianism on this board, and the general thinking is that it doesn’t survive strict philosophical scrutiny as a moral system (like all known moral systems). More narrowly, if you want to donate to the local ballet, knock yourself out. Nobody is stopping you.

Here comes the trolley
But if you have some utilitarian inclinations then utilitarian reasoning and utilitarian thought experiments will be helpful, though not necessarily decisive. And that leads you to EA. What criteria could you use?

The first screen would involve looking at administrative expenses: a number of institutions were set up during the 1980s and 1990s to do this. A very broad collection of IRS reports is available at Charity Navigator. A smaller set of high-profile organizations are profiled by Charity Watch, a venerable Chicago organization known as the pit bull of charity evaluators. Let’s look up St. Jude’s Children’s Research Hospital. They receive a C. Huh: for years they received an F because their assets were about 12 times their annual spending. Now the ratio is down to 3.7. This is progress, though I don’t know the details. Looking strictly at program costs, they receive a B. Their top 3 executives all earn more than $1 million annually. Charity Watch lists 14 cancer organizations that earn an A or A-, and 11 organizations rated B or higher whose top executives each receive less than $600,000 in salary. Hey, it’s your money.

Utilitarians don’t like suffering, the poor suffer, and poverty is worse in the third world. So greater good can be obtained by sending funds to a well-run development organization. The Ford and Rockefeller Foundations, Save the Children (ranked A- by Charity Watch), and the United States Fund for UNICEF (ranked A) all do program evaluation (or so I assume; I haven’t verified). (There are also scammy organizations that receive an F.)

But a utilitarian will want to go further and send funds where benefits are maximized, as opposed to merely crossing some plausible threshold. That will depend upon what the charity is doing, but also on how crowded that part of the donor space is. Every project has a returns curve: returns may increase while you’re setting up the project, but eventually diminishing returns set in.

Visit various 501(c)(3)s and each will tell you how great they are and how huge the need is. What they typically don’t do is explain why giving to them is superior to giving to anyone else. Charity Watch can help you to some extent with that, but honestly I’m not clear whether the difference between an A and an A- is especially important. So I appreciate the analysis of EA organizations who study and uncover causes like malaria nets, vitamin A (now) and iodine (earlier) supplements, cash incentives for child vaccines, and (controversially) handing out cash to the poor. Since I have utilitarian inclinations, I care about what will produce the biggest bang for my donor buck, though that isn’t my only consideration. It doesn’t have to be.

I also like the idea of encouraging innovation and evidence-based practice. Accion International was very good at that from the 1970s through the 1990s, though again EA has taken it to another level.

I also think that human extinction would be somewhat regrettable, so it would be better to devote a little attention to delineating those risks as opposed to just blowing them off like we did last century. IMO, it was an understudied problem to the extent that it basically wasn’t studied at all. Sure, you can do a thought experiment putting extinction and helping the poor on the same humanitarian scale, but honestly, once initial funding for studying human extinction is secured (as it now is), I’m skeptical about whether it’s the sort of slam dunk that it used to be. It’s the addition of another donor buck at the margin that is the relevant consideration.

Thanks for the considered post - I do agree with most of it but this is a particular bugbear of mine: admin costs are a terrible and hideously counter-effective way of judging charities’ effectiveness.

For example, we all agree that in any good charity there should be somebody administering the collection of outcomes data (e.g. distributing surveys, collating results, filing and storing them). This is not programme delivery. Does that mean that it is worthless? Does it not contribute to the charity’s effectiveness?

There’s a person at our charity part of whose role is to make sure that every time that meetings happen everyone has had the papers in advance, minutes are taken, action points are recorded and shared and that progress is updated against these before the next meeting. This makes sure our time is being used valuably and things get done. But alas! It’s “admin” so all the salary costs pertaining to this should be counted against us.

Don’t even get me started on the person at my last charity whose role was to make sure we had all the equipment we needed in place and in good condition, and that the staff were up to date with their training and licences. Just a load of form filling! It would be more effective to abandon our training programme, stop fussing over whether our kit is actually usable and just get out there with whatever we can scrabble together and let untrained people have a good old go! Or if that doesn’t suit, why don’t we take one of these trained people and have them do the inventory management and training admin, a job completely unrelated to their skill set or aptitudes? Surely that would be the best use of donors’ money?!

Of course charities can waste money and be badly run. But sadly, detecting that isn’t as simple as drawing an arbitrary line between “good, noble programme delivery” and “wicked, wasteful admin”. In particular the correlation between “badly run” and “spend on admin” is often the reverse of what the “look at admin expenses and pick the lowest” heuristic would suggest.

Administrative expenses were all donors had to go by before EA. OK, that’s an exaggeration - you could look at annual reports, news articles and book-length profiles - but comprehensive comparisons were lacking.[1]

That said, for established charities extremely high administrative expenses can be a sign of scamming, and typically draw D or F grades. Furthermore, if a charity is sending out a lot of junk mail, has aggressive television commercials, and has high administrative expenses, I might question whether I want my charitable donations to go to fundraising. (A utilitarian might say, “Maybe”, then make the best rough calculation they could with limited data. A squishy leaner like myself will probably turn their attention to other options. I have in the past.)

Practically speaking, I’m skeptical of organizations with a grade of C or lower. I like A organizations (caution - probably dubious). I’ve opted for B organizations over A organizations when the B organization had a reputation for program evaluation. As you noted, evaluation is probably an admin expense. (It should be obvious that I don’t run a foundation, sheesh.)

I trust EA has created headaches for some in the non-profit community, but for donors it has provided a helpful basis for comparing competing uses of the charitable dollar. That’s a productive conversation and a good thing: we need more of it, implemented wisely.

[1] Disclosure policies are another metric that makes for easy comparison.

Not sure if I’m reading this right, but if you’re meaning to say that utilitarianism leads to EA, I think that even most EAs would disagree with that. Sure, both have closely related commitments to the existence of some objective measure of ‘goodness’ that’s largely context-independent, comparable, effectively calculable and meaningfully quantifies our moral commitments, but in general, the two aren’t the same. As for instance MacAskill puts it:

So I think things are a bit more nuanced than ‘if you have utilitarian tendencies, you should get with the EA program’.

How does a utilitarian budget amongst competing projects? Assume we have a manageable list of 700 charities from Charity Watch rather than the hundreds of thousands listed at Charity Navigator. Calculate the marginal benefit each would receive from an additional amount of money and order them from top to bottom. Devote all resources to the top-ranking charity until diminishing returns drops it to number two on the list. Then send money to the new number-one project. Rinse and repeat. If you have limited means (in other words, you are neither Thurston Howell III nor Scrooge McDuck) you will probably only send funds to one charity. Straightforward.
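For the morbidly curious, here’s a minimal sketch of that greedy marginal-dollar procedure. The charities and their benefit curves below are invented for illustration; none of it comes from Charity Watch or anywhere else.

```python
import heapq

# Hypothetical diminishing-returns curve: the marginal benefit of the
# next dollar falls as cumulative funding grows. All numbers invented.
def marginal_benefit(base, funded, scale=1000.0):
    return base / (1.0 + funded / scale)

charities = {"malaria_nets": 5.0, "vitamin_a": 4.0, "cash_transfers": 3.0}

def allocate(budget_dollars):
    funded = {name: 0 for name in charities}
    # heapq is a min-heap, so store negated marginal benefits.
    heap = [(-marginal_benefit(b, 0), name) for name, b in charities.items()]
    heapq.heapify(heap)
    for _ in range(budget_dollars):
        _, name = heapq.heappop(heap)  # best marginal use of the next dollar
        funded[name] += 1
        heapq.heappush(
            heap, (-marginal_benefit(charities[name], funded[name]), name)
        )
    return funded

print(allocate(250))   # small budget: every dollar goes to one charity
print(allocate(5000))  # large budget: funding spreads as returns diminish
```

The large-budget case stops where marginal benefits are roughly equalized across the funded charities, which is the textbook optimality condition; the small-budget case never gets past charity number one, which is the point above.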

Also operationally and computationally impossible, not to mention emotionally challenging. Oddly enough, I’ve read non-EA articles advising the give-to-only-one-exceptional-charity strategy, so this isn’t entirely a straw-man argument. How might an effective altruist operationalize the spirit of this utilitarian advice? Open Philanthropy applies three criteria for what they call focus area selection:

Importance: “How many individuals does this issue affect, and how deeply?”

Neglectedness: “All else equal, is this cause receiving less attention from other actors, particularly other major philanthropists?” This is partly how the diminishing-returns concept is operationalized.

Tractability: “Are there clear ways in which a funder could contribute to progress in this area?” In my avoiding human extinction example, there was a lot of tractability when the question was identifying and roughly weighing the range of extinction risks. After that, diminishing returns set in quickly IMHO.
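As a toy illustration only (every number below is invented; this is not Open Philanthropy’s actual data or method), the three criteria are sometimes combined multiplicatively, so that scoring near zero on any one axis sinks a cause:

```python
# Toy importance / neglectedness / tractability scores, each on a 1-10
# scale. All figures invented for illustration.
causes = {
    "malaria_nets":    (8, 5, 9),
    "extinction_risk": (10, 8, 3),
    "local_ballet":    (2, 1, 9),
}

def score(importance, neglectedness, tractability):
    # Multiplicative: a cause must do tolerably on all three axes.
    return importance * neglectedness * tractability

for name, factors in sorted(causes.items(), key=lambda kv: -score(*kv[1])):
    print(f"{name:16s} {score(*factors):4d}")
```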

Over at GiveWell, they evaluate charities on the basis of cost-effectiveness and quality of evidence, though the previous 3 criteria often find a way into their analysis.

What separates EA from previous approaches is its willingness to put charitable expenditures in head-to-head competition: the above principles allow such comparisons to be operationalized. This is helpful, I say.

That’s not what I wrote, but it’s pretty close to what I meant: those with utilitarian tendencies should consider EA arguments, though they don’t have to accept them. In light of your remarks, I honestly don’t know whether even that goes a little too far: the transition between utilitarianism and EA isn’t as seamless as I thought. I’ve downloaded two articles on the topic and hope to look over your cite in the coming weeks. Don’t get your hopes up though: my philosophical skills are strictly at the sophomore level.

I suspect that the underlying issue is that I am labeling “Biggest-bang-for-the-buck-ism (BBFTB)” as utilitarianism. Which is even worse: the former has nowhere near the intellectual scaffolding of hedonic consequentialism. There does remain a useful distinction between BBFTB and the wide variety of remaining charitable motivations, many of which I consider valid.

Also, the utilitarian I depicted at the top of this post was obviously a caricature, one that was addressed in the 1800s IIRC, certainly by the 1970s - unsurprisingly you get to incorporate cost of calculation into your hedonics to avoid paralysis by analysis.

Just because you can see the ground under the streetlight better doesn’t mean that’s where you should be looking for your keys.

The good bit - and it is a good bit - about EA is that it does push charities to provide actual evidence and it does allow us to find the most effective ways of achieving specific charitable goals. That is huge and greatly to be welcomed.

As long as the charity is a) delivering results and b) fundraising at positive ROI, you should definitely want your funds to go into fundraising. Put it this way: if offered a choice between a) giving the charity a dollar and b) putting your dollar in a magic box that turns it into $1.50 and giving that to charity, you should always (on utilitarian grounds!) pick b). And that’s what (effective) fundraising is.

Yes - EA gives much better charity vs charity comparison than looking at costs or salaries and this is really good.

Great discussion.

This is an issue we tend to run into with funders of the nonprofit for which I work. We must report annually on specific targets in order to be considered for funding in the next round. Overall I’m fine with an evidence-based approach to virtually everything, charity included, but an issue we run into is that our clients have deeply entrenched, pervasive issues preventing them from achieving independence from their abuser. They may come to our shelter, leave and go back to their abuser, come back to shelter, and then maybe start some counseling here, and over the course of years begin to understand the dynamic they are caught in and what they need to do to get out of it. But you can’t show that kind of growth in one year. You can show it in maybe five years. But if a funder looks at a program and says, “Well, 40% of people are returning to their abuser” (not sure of the actual stat off the top of my head, but I know it’s high) does that mean the program is ineffective? It’s designed to address everything from complex trauma to achieving affordable housing. This is deep work.

I also worry about the assumption that saving lives is de facto the best possible outcome that can be imagined. What about improving someone’s quality of life? Are ten lives saved equal to 500 people’s poverty alleviated? How can we make that determination? Death is inevitable. Poverty isn’t.

I like the idea of effective altruism as I am a bit hard-nosed about using evidence to make decisions, but I recognize its limitations, and I also think it runs counter to the way most people donate, myself included. Most of my donations go to the agency I work for, in part because I am intimately acquainted with how it runs and what it does with its money. But also because I have a personal connection with the mission. But I don’t earmark my funds for the shelter, I earmark them for our Prevention Education program, because I think that’s where it’s going to have the biggest impact.

One thing that really turns me off about effective altruism is the idea that we should make as much money as humanly possible. This is a profoundly self-serving narrative in that, as mentioned upthread, you have to participate in the system that’s causing inequality and benefit from the harm of other people, as Sam Bankman-Fried happily did by participating in the crypto scam. He was able to justify a more and more lavish lifestyle by papering it over with this supposed ethical framework that allowed him to do so because he was in the process of making so much money for charity. I mean, come on. Is there anything more American?

I have somewhat mixed opinions on the way philanthropy takes place in the real world. One of the fundamental principles of fundraising is that you have to make the donor feel like they are getting something valuable in return, even if that valuable thing is a nice feeling. I have participated in our Annual Gala over the years, both in terms of helping with set-up and actually being a donor and just enjoying the evening. It’s a nice time, and because I work there it’s pretty exciting to see funds raised in real time through the auctions and such. But fundamentally I think it’s self-serving just as much as it is serving people in need. The purpose of being at one of these functions is to feel rich and magnanimous, and we do everything in our power to appeal to the identities of wealthy people. That does actually bring needed dollars through the door. But it’s definitely nowhere in the neighborhood of effective altruism. Effective altruism would be if, instead of spending $3,000 on a table for all your friends, a nice meal and pretty things to bid on, you just cut us a check for $3,000. (I say this as someone who plans to buy a table next year, but I see that act as explicitly supporting my team, and my own job.) So you see, to an extent, you can’t escape the link between personal identity and charity. Whether you puff yourself up by calling yourself an Effective Altruist or just by spending a little extra on a ticket to a Country Club event, your sense of self is intimately wrapped up in what and how you give. You wouldn’t do it unless you got something out of it.

Co-signed. I’m technically considered admin staff, with my key responsibilities being to apply for and administer grants. The grants I write and administer probably run about $3m per year; my salary, which is pretty good for a nonprofit of our size, is a pittance by comparison.

In addition to grants, marketing is hugely important to nonprofits for a number of reasons.

  1. Community visibility attracts potential clients as well as donors. Clients knowing where they can go if they need help with X is hugely important, especially in a community like ours where we are the only DV shelter in the county. People who have seen an organization’s billboard or flyer are more likely to seek services. Familiarity can be a lifeline for traumatized people.
  2. Community visibility attracts community partnerships who can assist us in delivering services to clients. This might include wrap-around services, referrals to housing programs, medical and dental care, hospital partnerships (in the case of our sexual assault forensic exam program), assistance with looking for jobs, etc. Then there’s the advocacy piece. It’s better, when you go to influence a specific piece of legislation, to meet with a politician who has already heard of you.

It is very, very, very important to be known.

Both marketing and development positions require a certain degree of education and a set of specific skills. That costs money. Is it worth it? Every penny.

Now, when we see CEOs paid salaries we think are too high, we have to consider what the purpose of a CEO is. The purpose of a CEO is, at least in part, to publicly face the organization and to know a lot of rich donors. The more rich you actually are, the more rich people you will know. So if Susan G. Komen pays its CEO $1,000,000 per year, and that CEO has contacts that bring in $30,000,000, and those dollars go to support breast cancer research or whatever, how wasteful is that salary really?
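To run those (hypothetical) numbers from the paragraph above explicitly:

```python
# Hypothetical figures from the post above, not real Komen data.
ceo_salary = 1_000_000
funds_raised = 30_000_000

cost_per_dollar = ceo_salary / funds_raised
print(f"fundraising cost: {cost_per_dollar:.1%} per dollar raised")  # 3.3%
```

A 3.3-cent cost per dollar raised would be cheap by almost any fundraising standard, which is the point.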

A lot of it I think just stems from ignorance about how nonprofits work, or at least an unwillingness to understand that it is functionally a business. It has to do a lot of business things. But it faces extra challenges in that we don’t always get to choose how to spend the funds we receive. Businesses are free to allocate funding however they please. Nonprofits are not. So my best advice to anyone wanting to donate effectively – always donate operating funds that the business can use as it needs. We have more than enough funds for our shelter program because that’s what funders want to support. Nobody wants to support overhead. But guess where we need the money the most?

(Thinking also of the money that FLOODED in to support construction of our Pet Shelter, whereas it took years to raise funds to install a simple playground for children at the shelter. What donors most want to fund is not always where the highest need is. So please, just ask what’s needed!)

If that’s what happens, I see two problems.

First, the funder was not given sufficient information. The statement that 40% of the people return to their abuser is pretty meaningless in a vacuum. It could be great news, if the rate in the general public is 80%, for example. If we are trying to estimate the effectiveness of your intervention, we must know both numbers.

Second, if the funder cut funding based on this incomplete info, then (assuming their goal was to evaluate you on effectiveness) they were wrong to make a decision based on incomplete information.
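To make the base-rate point concrete (both figures below are hypothetical, chosen only to match the example): if 80% of comparable victims with no intervention return to their abuser and 40% of a program’s clients do, the crude effect estimate looks like this:

```python
# Both rates are hypothetical, for illustration only.
baseline_return_rate = 0.80  # comparable population, no intervention
program_return_rate = 0.40   # clients of the program

absolute_reduction = baseline_return_rate - program_return_rate
relative_reduction = absolute_reduction / baseline_return_rate

print(f"absolute reduction: {absolute_reduction:.0%}")  # 40%
print(f"relative reduction: {relative_reduction:.0%}")  # 50%
```

Even that is only suggestive, since shelter clients are self-selected rather than randomized; but without the baseline, the 40% figure is uninterpretable either way.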

I don’t usually report on that metric, it was just an example of how tricky this gets. What we report on depends on the funder and purpose of the funds. Some funders, like the state government, give us their own report metrics (survey responses and data, raw numbers served, etc.) For others we have to make up our own. We are currently working on developing better metrics, or at least more consistent metrics from program to program, which is something I would also write into a report.

To be 100% honest with you, I am not convinced our current metrics are good at measuring effectiveness; a survey is but a snapshot in time. But it’s challenging to collect long-term data, because that would require clients to respond to your follow-up inquiries, which they usually don’t. And let’s say, worse comes to worst, they do end up moving back in with their abuser. Why did that happen? Was it the agency’s fault? Was it a programming flaw? Was it some other factor we aren’t measuring at all, like a substance abuse issue, or because they were part of an insular community which makes leaving all the more difficult?

The kind of hard-nosed research that would really create the best data possible would require hiring more staff to conduct evaluations and probably paying for a research consultant with a local university, all of which requires funding at a time when domestic violence agencies across the nation are seeing funding cut across the board. It’s not realistic for most nonprofits to do that kind of research. We have a three month counseling waiting list due to an insane increase in demand since the end of COVID, we have NO funding for Prevention Ed, we have seen a dramatic uptick in lethality and severity of violence, and my current funding priority is to figure out how not to cut the staff we have, and maybe by some miracle pulling a newly funded counseling position out of my hat.

That doesn’t mean we can’t always strive to do better. But we don’t really have a very handy metric for “changed the trajectory of someone’s life over a long period of time.”

This discussion gets to the heart of a real difficulty with EA, which is simply the difference between theory and practice.

Theory - as EA takes hold, more and more charities will organise themselves to produce gold-standard evidence of performance against a small number of widely recognised metrics, and funding decisions will become more rational, and thus more good will be done more effectively.

Practice - as funders and commissioners vaguely read some stuff about effective altruism and measurement theory, they will get it into their head that good philanthropy is about seeing a graph in which certain numbers go up. Each and every one will select a different number that they believe is the sole measure of success and start to demand of their charities that they produce reports in which this number goes up. In any given sector, any given charity working with multiple funders will now spend more and more of its time developing more and more numbers to meet the “evidence-based” needs of the funders. Many of the numbers will be basically the same and measuring basically the same thing, but they won’t be directly comparable and will require needless multiplication of effort to gather. Charities will try to push back so they don’t get caught in this bind but they are the junior partner and money talks. In some cases a funder (often a government body) will attain such importance within a sector that they can dictate the metric to be used, but only on rare occasions will this be actually meaningful, or arrived at with any consultation with charities still less actual beneficiaries. In every case, reporting cycles will mean that undue emphasis is given to short term and intermediate outcomes rather than genuine long-lasting impact.

No-one will offer to fund the work necessary to rigorously develop appropriate metrics and reporting practices.

Couldn’t have said it better myself.

What I do want to mention is that many smaller nonprofits are using evidence-based practices and models proven to work in larger studies. For example, our agency is based on the Family Justice Center model, which has been proven to reduce minimization, recanting and recidivism and to save victims’ lives. For our social action team, we can point to our high-risk response teams, which have reduced domestic violence homicides locally. So it’s not like we’re not paying attention to what works. And I think that can be a good practice for donors: find out what interventions are being used and whether or not those are evidence-based. You can do that without expecting smaller orgs to finance their own research teams.

There’s a lot of economics to unpack here, but it never occurred to me that NGOs in general and EA in particular might save the world. That’s the job of governments, who have sufficient resources. Business’ job is creating wealth. NGOs’ job is partly to apply governmental funding, but mostly to fill in gaps that governments are bad at, like experimental programs or SETI.

OK, EA does a terrific job with this: see the GiveWell website and the principles from Open Philanthropy I outlined earlier. If a donor space gets too crowded, they can shift funds to another project. Earlier, HMHW stated, “Furthermore, the sum of locally optimal strategies rarely equals a good global strategy,” which I think is a potential problem. Optimal charity would involve ordering projects from best to worst if your information were perfect. In reality, such an approach might favor the measurable over the non-measurable.

None of this is a problem now IMHO, because GiveWell hasn’t hit a point of diminishing returns. Above I traced the history of small-donor philanthropic analysis. I see EA as the latest step in advancing NGO quality, a positive development in making the world a little better. Measuring fundraising costs is also a good thing, as it weeds out scammers; EA takes it to the next level.

EA is already a success, insofar as it has secured additional funding for very worthy projects (but not too much funding). Though again, I agree it won’t save the world and I also agree that it won’t capture most charitable dollars. Moving the ball a little further forward doesn’t require that.

Disagree. It’s a decent thought experiment, though. A charity reports that $1 spent on fundraising produces $1.50 of donations. Is this a win for humanity?

  1. Firstly, this study was invariably produced by an NGO marketing department: results are dubious. At a minimum, it needs review by a statistician who understands causality and instrumental variable analysis (don’t ask; a rough sketch follows this list).

  2. And even then, there’s an unobserved-donor problem. Sure, on net a dollar spent on fundraising increases your donations from some donors. Other potential donors get turned off when they see the blatant, emotionally laden appeals of late-night STC advertisements, or the inane bow-wow of fundraising letters. That’s something that would require academic study and not merely review by a statistician.

2a. So super-emotional fundraising can have external effects on all charitable giving. Quite possibly negative ones. (Or positive! You need to study it!)

  3. To the extent that the public tithes or budgets for charity, a dollar spent on fundraising is just a dollar taken away from another charity. So in equilibrium, that results in an arms race of fundraising expenditure and emotional appeal.

  4. Worse, it shapes the type of projects that are funded. Save the Children’s child sponsorship program tugs at the heartstrings, but also involves a fair amount of photography and cajoling third-world kids to write a letter to their sponsor family. I see that STC is transitioning away from this model to some extent:

“In the past, our child sponsorship program matched one sponsor with one child. Now, to have an even greater impact, we are transitioning to a model in which multiple sponsors are matched with a child or a specific country. Sponsors will receive regular updates on the child or country — as well as other children and countries where we work. To reduce operational costs and use donations more efficiently, new sponsorships no longer provide an option to write to a child. This also prevents children from being disappointed if they don’t receive a letter or don’t have a sponsor. Now, your gifts go even further to help more children and families around the world, and we couldn’t be more grateful for your support.”

Whew. Maybe pressure to lower administrative costs had some beneficial effects.

  5. To the extent that the fundraising taps into new sources of funding, some of it might represent a shift from, say, lower-middle-class Americans to a) middle-class American fundraisers, b) middle-class American charity workers, c) third-world photographers, d) third-world bricklayers building rural schoolhouses (infrastructure being favored over paying teachers - it’s all about the emotions).

  6. What about $3,000-a-plate dinners to support a worthy local organization? Again, you have to consider the observed and unobserved. It might attract extroverts: good! It probably won’t alienate introverts who don’t attend: also good! Companies like it because it gives them a chance to showcase their support for local organizations: it’s a form of PR. Extroverted CEOs have a grand time. It drives introverted CEOs nuts (they sometimes complain), but hey, that’s their job. And I don’t think it affects provision of services to a large extent, since company philanthropy is about PR regardless. The gala framework doesn’t particularly favor one project over another. Big Corp’s funding decisions may be full of distortion, but that’s a given.
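Since I invoked instrumental variables in item 1, here’s the promised rough sketch of the issue (everything fabricated; no real charity data): a confounder that drives both fundraising spend and donations biases the naive regression upward, while an instrument that shifts spend but affects donations only through spend recovers the true ROI.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder, e.g. donor-base enthusiasm (hypothetical).
enthusiasm = rng.normal(size=n)
# Instrument: shifts fundraising spend but affects donations only
# through spend (hypothetical, e.g. a postage-rate change).
instrument = rng.normal(size=n)

spend = instrument + enthusiasm + rng.normal(size=n)
donations = 1.5 * spend + 2.0 * enthusiasm + rng.normal(size=n)

# Naive OLS of donations on spend: biased up by the confounder.
X = np.column_stack([np.ones(n), spend])
ols_slope = np.linalg.lstsq(X, donations, rcond=None)[0][1]

# Two-stage least squares: replace spend with its instrument-predicted
# part, which is uncorrelated with the confounder.
Z = np.column_stack([np.ones(n), instrument])
spend_hat = Z @ np.linalg.lstsq(Z, spend, rcond=None)[0]
X2 = np.column_stack([np.ones(n), spend_hat])
iv_slope = np.linalg.lstsq(X2, donations, rcond=None)[0][1]

print(f"true ROI 1.50 | naive OLS {ols_slope:.2f} | IV {iv_slope:.2f}")
```

The naive estimate lands near 2.2 here, not 1.5: a charity with an enthusiastic donor base would look like a fundraising genius.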

I should also say that Save the Children has good ties with the media: they often help reporters landing in war-torn areas. So I’m a little nervous about picking on them. They are good for this thought experiment though.

Why do you say this? What’s your evidence base for the statement that ROI figures are “invariably” produced by marketing departments?

“Don’t ask”? Why not? Would it be too much for my tiny little brain? Am I really coming across to you as a guy who couldn’t possibly hope to understand what you understand? As it happens, I am not going to ask you about instrumental variable analysis, because I don’t need to.

You do understand of course that there is a considerable academic literature on the positive and negative impacts of emotional appeals? Again, this is something you can discover through internet search engines.

I’m led to believe that epistemic humility is a key value of effective altruism. Sounds like a good idea in theory.

Honestly, I will come back to your other points but this wound me up. As is not unusual with EA, I’m getting strong flashbacks to this xkcd cartoon.

I welcome an example of a charity that provides ROI estimates of fundraising to its small donors: I will read the report with interest. I’m curious about who would create such estimates in-house other than their development or marketing department.

My post had a broader audience than you. I am heartened that you possess statistical education, but honestly I didn’t assume otherwise. Your posts have been high quality. (As an aside, I’m not convinced that instrumental variable analysis would be appropriate in this case: I’m saying that a statistical claim by an interested party needs to be reviewed by someone with a certain level of statistical training. I was just working with your thought experiment.)

I’m not familiar with that literature. My point was that any claim about donation elasticity to marketing expenditure should be reviewed by donors a) narrowly, based upon statistics and b) less narrowly, based upon social science.

Well, but the two goals may in fact be oppositely aligned: perhaps the world needs saving, but EA diverts resources from that. It’s a local/global optimum thing: EA might be good at finding a local optimum, within established power structures and resource flows. But that might be still a state in which we’re burning too many fossil fuels, exploiting resources too quickly, and failing to address the systemic injustices damning large swathes of the global populace to abject poverty. But everybody (that can afford it) will have that nice warm glow of having done the optimum amount of nudges to get the best possible outcome within the given constraints.

On the other hand, if people had taken over the wheel to radically course-correct, then maybe a kick to the system yields enough impetus to jump to a different valley in the optimization landscape, involving a wholesale revision of said power structures, and leading to a better outcome for all, maybe even averting global catastrophe.

Now, I’m not saying things are that way; I have no way of knowing either that we are on a course of doom (I hope not!), or that there is another optimum we could jump to. But what I’m saying is that if there is, then EA is unlikely to find it, so that’s a possibility that’s excluded by following EA tenets.

There’s also the issue that @CairoCarol mentioned, that a measure ceases to be a good measure once it becomes a target, because these measures are usually imperfectly correlated proxy quantities with the actual, but more difficult to assess, end goals. Take the case of academic research: good research is often highly cited, but that doesn’t entail that optimizing for citation counts maximizes academic research quality. In that sense, EA is effectively built on the fallacy of affirming the consequent: the hope is that by optimizing the easily-measured quantities one also ends up maximally increasing the overall outcome. But in practice, I fear this creates perverse incentives too easily.
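To see the proxy problem in a minimal simulation (all numbers synthetic): even before anyone games the metric, merely selecting on a noisy proxy overstates the quality you actually get.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

quality = rng.normal(size=n)          # true impact: hard to observe
proxy = quality + rng.normal(size=n)  # the metric: quality plus noise

# "Fund" the top 1% as ranked by the metric.
selected = proxy >= np.quantile(proxy, 0.99)

print(f"mean metric of selected:  {proxy[selected].mean():.2f}")   # ~3.8
print(f"mean quality of selected: {quality[selected].mean():.2f}") # ~1.9
```

The selected group’s true quality averages about half its metric score, because the ranking rewards noise along with substance; add agents optimizing the metric directly and the gap only widens.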

Then there’s also the issue that the optimal charities/NGOs/whatever may not be the ones most needed. Suppose there’s a badly-run charity that’s the only one in its sector: by a metric of favoring optimal charities, this one will receive less funding; but then, the issue it attends to will go unresolved. This then gets into the question @Stanislaus raised: what do you optimize for—the effectiveness of the charities at what they’re doing, or the set of causes charities ought to attend to?

Each of these involves, at bottom, a value judgment. And that’s another issue with EA: there’s a lot of supposed objectivity; but rational argument alone can’t settle questions of what outcomes you ought to favor. At best, it yields results of the form ‘if you value education, you ought to fund xy charity’, or ‘if you value individual lives, you ought to ruin your shoes by jumping into the lake and pulling the little boy out’. But it’s perfectly rational not to value these things. (Morally monstrous, but that’s again not a rational judgment.) No rational argument could compel you to act; it can only clarify how you should act, if you wish to achieve a certain end.

So in the end, EAs, like everyone, just act according to their values. Consequently, if your values align well with the values that EA serves to promote, you should follow EA tenets. But that’s an empty statement: if you think you ought to donate to charities optimal according to some metric, then that’s what you should do. Moreover, it doesn’t bear the burden EAs themselves place upon their arguments; there’s no convincing a non-EA unless they actually already share EA’s commitments.

I made my views on EA clear in the SBF thread

And this twitter thread has made it even more explicit with damning quotes from many of the early top figures in EA:

I challenge anyone to read the quotes and still have any illusions about what EA actually is vs what you wish it could be from the deliberate PR campaign.

Nirit Weiss-Blatt: “Effective Altruism was a Trojan horse”.

This seems excessively conspiratorial. If it was a Trojan horse, it was an elaborately constructed one: AFAICT Givewell doesn’t fund Existential Risk programs. Cite: this spreadsheet: Airtable - GiveWell Grants

Millions of dollars were directed towards the third world without a penny going to existential risk: this is quite a head fake. Furthermore, I disagree with Ms. Weiss-Blatt that funding the study of human extinction is a bad thing. In fact, in its earliest stages I’d say it was a great thing for reasons expressed upthread. But returns diminish and now tractability becomes more of an issue.

Let’s get a sense of scale. I see that Open Philanthropy does and has donated funds to existential risk, as well as other programs. Their spreadsheet, along with a list of AI risk programs, is here: https://www.openphilanthropy.org/grants/?q=&focus-area[]=potential-risks-advanced-ai&focus-area[]=biosecurity-pandemic-preparedness&focus-area[]=global-catastrophic-risks-capacity-building

Let’s see what awful things they have funded for the past 10 years or so:

Biosecurity & Pandemic Preparedness: $226 million, of which $66 million was spent in 2019 and earlier, before the COVID pandemic. Clearly a waste of funds, falling under the existential risk umbrella.

Global Catastrophic Risks (and ditto capacity building): $338 million

Potential Risks from AI: $370 million

Science Supporting Biosecurity and Pandemic Preparedness: $37 million

So the existential risk program over the past 10 years has totaled about $971 million, of which $370 million is AI-related; they’ve also funded innovation policy.

Here are the Trojan Horse categories in the spreadsheet:

Effective Altruism (Global Health and Wellbeing): $10 million, spent in 2022-2024. Though GiveWell spent $1.5 billion on global health from 2014 to the present, presumably as a Trojan Horse for the existential risk umbrella.

GiveWell Recommended Charities: This is a separate category: $826 million for the Trojans

Global Aid Policies: $19 million

Global Health and Development, Wellbeing, R&D, and Public Policy: $401 million

Human Health and Well Being: $147 million

So that’s $1.4 billion spent on Trojan Horses by OP to cover up a $370 million AI existential risk program. As noted, GiveWell spent $1.5 billion (possibly double-counting the $826 million, not sure). Quite clever really.
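For anyone checking the arithmetic, here’s a tally of the figures quoted above (rounded from my reading of the linked spreadsheet; the sums are mine):

```python
# Category totals as quoted above, in $ millions (rounded).
existential_risk = {
    "Biosecurity & Pandemic Preparedness": 226,
    "Global Catastrophic Risks incl. capacity building": 338,
    "Potential Risks from AI": 370,
    "Science Supporting Biosecurity": 37,
}
global_health = {
    "Effective Altruism (Global Health and Wellbeing)": 10,
    "GiveWell Recommended Charities": 826,
    "Global Aid Policies": 19,
    "Global Health and Development / Wellbeing / R&D / Policy": 401,
    "Human Health and Well Being": 147,
}
print(f"existential risk total: ${sum(existential_risk.values())}M")  # $971M
print(f"global health total:    ${sum(global_health.values())}M")     # $1403M
```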

History of Philanthropy: $342 thousand. DAMN YOU OPEN PHILANTHROPY, DAMN YOU TO HELL!!!

Who were the largest dastardly recipients of this AI funding? The top 2 were national security organizations: $55 million for Georgetown University — Center for Security and Emerging Technology in Jan 2019 and $39 million for the same in August 2021.

Number 3 was $30 million to OpenAI (General Support) in March 2017, the exact same group that is busy developing our future AI overlords. But that was a long time ago and anyway the board resigned.

Number 4 is $16 million to Funding for AI Alignment Projects Working With Deep Learning Systems in April 2022. I can’t parse that.

Number 5 is $13 million to MIT for AI Trends and Impacts Research (2022).

Number 6 is $12 million to UC Berkeley Center for Human-Compatible Artificial Intelligence in Jan 2021.

Redwood Research and Rand Corp received $11 million each. All remaining entries were less than $10 million.

I don’t see much of a bait and switch. Some people want to fund existential risk, others public health. They both seem like worthy programs, though opinions differ. I think human extinction would be bad.

ETA: Half Man Half Wit: As always, interesting remarks. I’ll try to answer moving forward, but Shalmanese’s post was very much on topic: is EA humanity’s worst monster?

I do think the local/global optimum distinction is relevant for EA. But I doubt whether EA is counterproductive. Furthermore, your argument seems to apply to all charity and not just EA in particular.

We’ve taken serious steps on climate in the past 10 years or so, and I’m optimistic that most of the earth’s land masses will remain inhabitable in 200 years, though I suspect the sea level will be 200 feet higher and humans will avoid traveling outside during the day in Bangkok, Manila, Phnom Penh, and Khartoum. Millions will die due to climate-related causes. IOW, yes, the world needs saving, but politics is the slow boring of hard boards, and high-marginal-benefit projects can do a lot of good. If anyone can design a constructive and effective kick to the system, then yes, I think it deserves support, but I’m not seeing it.

AFAICT, EA is about project selection. Some funders appear to really care about whether their ideas are working, but they aren’t all EA-affiliated. Matthew Yglesias linked to a prison reform post by John Arnold: I see that his foundation supports evidence-based interventions. I’d like to say it’s EA, but he’s not on the EA forum’s lengthy link list.

Locally optimal project selection is a great thing and will remain a great thing even if GiveWell’s budget increases tenfold IMHO. But if it increased a thousandfold, they would have to develop new methodologies, I think (e.g. those of John Arnold). But that’s OK because, as you have stated, EA doesn’t have particularly broad appeal anyway. So one criticism neutralizes another: it will remain a marginally beneficial program that saves millions of lives.

ETA: OP mentions Arnold Ventures (and Pew, etc) Criminal Justice Reform | Open Philanthropy

With respect to the charge that EA’s focus on global health and wellbeing is “bait” designed to pull people in before they are “switched” to xrisk causes, it might be more useful to look at the time series rather than totals over 10 years.

Going by the linked spreadsheet, health/wellbeing/aid related funding went from 82% of funding in 2015 to 55% in 2023, while global catastrophic risk inc. AI went from 6% in 2015 (first year it appears) to 30% in 2023.

So there is definitely a case to be made that the priorities of the EA movement have shifted significantly in the last decade. That’s just looking at funding and not e.g. at the fact that one of the philosophical founders of EA recently wrote a book called “What We Owe the Future” which is all about longtermism and why we’re morally obligated to fund AI techbros.

The move to longtermism within EA is real and visible, in both funding and in the things that EA advocates do and say.