Alexander Berger is the Co-CEO of Open Philanthropy, focused on its Global Health and Wellbeing grantmaking. He speaks to The Browser's Applied Divinity about how the world is getting better, why that means philanthropic opportunities are probably getting worse, and why having kids is a normal, awesome act of altruism.
On Donating Kidneys, Giving Money, And Being Normal
Applied Divinity: Several years ago, you donated a kidney to a complete stranger. At the time, The Economist wrote of you:
Mr Berger would be angry that I called him a saint. He thinks "deifying donors only serves to make not donating seem normal." He'd rather such donations be seen as "one of the many ways a reasonably altruistic person can help others."
I also don’t want to deify anyone, but objectively speaking, this isn’t normal behavior, right? How would you like to be seen?
Alexander Berger: I think kidney donation (and donating money, actually) is a good fit for me because these are relatively rare, big decisions that reward thought and diligence -- you can think about them and keep coming back to try to get the right answer; you don’t have to get it right in the moment, or the first time, or change a deeply ingrained habit (all things I struggle with more). I remember learning that the risk of death in surgery was something like 1 in 3,000, which in my mind at the time was an expected value gamble equivalent to sacrificing myself to save 3,000 other people. I thought, “I want to be the kind of person who would do that,” and then there was just a process I could follow, with a few concrete steps, to be that kind of person! I didn’t have to change my everyday habits or behavior, and I think that made it much easier for me.
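A rough sketch of that expected-value arithmetic, taking the 1-in-3,000 figure at face value and assuming, for simplicity, that one donation saves roughly one life:

```python
# Rough expected-value framing of kidney donation. The 1-in-3,000 mortality
# figure is the one quoted above; "one life saved per donation" is a
# simplifying assumption for illustration.
risk_of_death = 1 / 3000
lives_saved_per_donation = 1

# Accepting a 1-in-3,000 chance of death to save one life is, in expectation,
# the same trade as accepting certain death to save 3,000 lives:
equivalent_lives = lives_saved_per_donation / risk_of_death
print(round(equivalent_lives))  # ~3000
```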
But donating a kidney to a stranger should be a lot more common than it is. For a lot of people, donating a kidney would be consistent with how they want the world to work or who they want to be; there's just not enough of a norm around it, not enough social expectation or social support. It's a bigger concentrated cost than the things we normally ask people to take on purely altruistically, and I wish the government would compensate donors for it.
Applied Divinity: In addition to your philanthropic work, you’ve talked about the ethical good of having children, donating a kidney, and generally doing good in a variety of common-sense ways. Meanwhile, your co-CEO Holden is off writing about science-fiction cloning scenarios and galaxy-scale expansion. Is it fair to say that you’re the most normal person working at Open Philanthropy?
Alexander Berger: I’m definitely not! It’s hard to define “normal” but I would say I’m happily in the middle of the distribution of my colleagues.
Applied Divinity: On the topic of common-sense motivations and common-sense goods, I recall you became a father a few months back. That’s really wonderful, and obviously, we’re all excited for a future with more people like you.
I would love to hear how it’s affected you personally -- or, alternatively, whether your professional and philosophical views had any influence on your decision to have children in the first place.
Alexander Berger: I am excited about being a parent and I really enjoy getting to see my daughter at the end of the day. But I feel like that's all very normal. In terms of professional and philosophical implications, I think the causality ran the other way.
Before we got married, my now-wife knew she wanted to have kids, and if I didn't want kids, we would not have ended up together, so I had fairly strong motivated reasons to get to yes on wanting kids. I used to hold more person-affecting views in population ethics and was more skeptical about the goodness of creating people, and I really wanted to deny that the goodness of having a kid was analogous to the goodness of saving a life, which is roughly what total utilitarianism would be inclined to say. I had wanted to deny that because it seemed weird to think that all these parents walking around are moral heroes, akin to people who do costly or dangerous life-saving things. (Of course, on deeper reflection, praiseworthiness and moral goodness don’t have to track each other, but at the time the relationship seemed important to me.)
And then, as I was deciding whether I wanted to have kids, I reflected on my relationship with my own parents -- frankly, on how little narrowly selfish value they get from having invested a ridiculous amount of care and love in me, and on how the relationship is, at least partially, one of very deep altruism on their part. Not that my parents, or other parents, would necessarily agree! It seems like (a) people mostly become parents for selfish or at least self-regarding reasons; (b) parents typically report having kids being one of the most meaningful things in their life, and rarely regret it; (c) the evidence on life satisfaction, though mixed, seems to suggest that being a parent lowers it modestly. When I say parents get little “narrowly selfish” value, I’m not trying to deny the meaningful psychological benefits in (b) or the satisfaction of the desires in (a), but to point to the combination of (c) and the way in which (it seems to me!) parents invest a lot in their kids and reap limited purely private benefits. Another argument for those limited private benefits is the rarity of adoption -- it’s not as if raising a child is so independently fun and selfishly valuable that most people are leaping at the prospect. (Side note: I actually think that having and raising kids is a way bigger cost than donating a kidney, though it probably also typically has much bigger personal benefits!)
I always felt total utilitarianism was a very natural, simple view that you need reasons to oppose, and on reflecting more on the ethics of parenting, I lost my main reason for wanting to oppose it. Now, creating and raising a person and saving a life actually seem quite similar to me in terms of the benefits they enable (someone gets to live many more years of potentially healthy and happy life). I don’t want to defend changing my mind on this basis as super rational! But I think my original position was the less rational one, more motivated by an intuitive desire to have the moral goodness of parents’ actions map to my perception of their (lack of) altruistic motivation.
It’s far less cost-effective than donating to GiveWell’s top charities, so it’s not the most EA action in the world, but to me having and raising kids seems like an awesome, and very normal, altruistic thing to do.
Applied Divinity: In your interview with Rob, you talk about reading some Peter Singer, taking time off from school to live in India, and then shortly after donating to GiveWell and asking if you could work there unpaid.
A lot of people read Peter Singer and do not then take those steps, so reading Singer alone seems to have limited explanatory power.
It seems maybe more apt to describe you as someone with fairly normal ethical views, but extraordinary commitment to pursuing them. Is that fair?
What’s the difference, psychologically or just in terms of character, between you and someone who reads Singer, shrugs, and then moves on with their life?
Alexander Berger: I read Singer at the right point in my life. I had gone to college thinking I wanted to be a physicist and understand the universe, and after a couple of quarters of middling grades in the advanced freshman physics series, it was: nope, never mind, I'm not going to be an excellent physicist. I felt like I could kind of see myself doing anything, and I was looking for a set of principles to guide my life, and utilitarianism resonated a lot. Finding those ideas at a time of a lot of openness let me take them very seriously, and meant I felt more deeply tied to them than I think a lot of people do, and I think that has shaped a lot of the actions I've taken since.
What Ordinary People Can Do To Improve The World
Applied Divinity: There are, increasingly, some very big donors in Effective Altruism. Facebook stock is up about 10x in the last 10 years, putting Dustin Moskovitz’s net worth at $23 billion. More recently, there have been some explosive returns to crypto assets, putting Sam Bankman-Fried’s net worth at around $23 billion as well.
What’s the case for earning-to-give when there’s already so much money in the ecosystem?
Alexander Berger: I think one of the underrated things about earning to give is that you can build really valuable talent and career skills. Sometimes people think about earning to give as burning the time to get the money, and conclude that's a bad trade. But there are jobs with a lot of clarity and support about how to perform excellently and how to climb a ladder, and those can provide valuable career development for a lot of people. It depends a lot on your opportunity set, but people who have been earning to give, and doing it very well, can end up extremely talented and very capable of doing a lot of good in their next stage.
Another way I look at this is that it’s always a little weird to me when people identify so strongly with the EA ecosystem that they think the fact that Dustin or Sam have so much money obviates the desire or need to have their own philanthropic resources. I get it, and obviously I identify a lot with Open Phil’s work, but if you're trying to have your own impact in the world, and you have the skills such that somebody wants to pay you a lot of money to do interesting work, and then you can go and donate tens or hundreds of thousands of dollars a year to high-impact stuff, I think that's really cool. That's a huge amount of impact you can have in the world, and it's worth a lot.
There was a recent post on the EA Forum arguing that, if you’re earning to give, you should try to make tens of billions of dollars (similar to an argument you’ve made). I found it pretty compelling. If you're going to earn to give for altruistic reasons, you should be really risk-tolerant and really swing for the fences. As an aside: it seems like having tens of billions of dollars is not much better for your personal wellbeing than having tens of millions, but it's 1,000 times more money, which is huge from an altruistic perspective. So the very highest-risk, highest-return opportunities should rationally be disproportionately appealing to people who are altruistic rather than selfish, which I think is a mildly counterintuitive conclusion.
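One toy model of that asymmetry, with entirely made-up numbers: if personal wellbeing is roughly logarithmic in wealth but donated dollars are roughly linear in impact, the selfish and altruistic rankings of a long-shot gamble flip.

```python
import math

# Toy model (all numbers invented): a sure $30M vs. a 1% shot at $10B,
# starting from $100k of baseline wealth.
baseline = 1e5
sure_thing = 30e6
long_shot = 10e9
p = 0.01

# Selfish value: model wellbeing as logarithmic in wealth.
selfish_sure = math.log(baseline + sure_thing) - math.log(baseline)
selfish_gamble = p * (math.log(baseline + long_shot) - math.log(baseline))
print(selfish_sure, selfish_gamble)  # ~5.7 vs ~0.12: take the sure thing

# Altruistic value: model donated dollars as roughly linear in impact.
print(sure_thing, p * long_shot)  # $30M vs $100M expected: take the long shot
```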
Applied Divinity: As the world improves and we pick the low-hanging altruist fruit, should we expect there to be fewer giving opportunities in the future? Or will we get so much better at identifying opportunities that the moral value of money just increases over time?
Alexander Berger: My colleague Peter Favaloro has done some analysis on this that we’re hoping to publish at some point. The short version is that I think the world basically is getting better, at least from the perspective of global health, and giving opportunities should continue to get worse over time. Today, the marginal opportunities to save the life of a kid cost in the range of $5,000. When we were rolling out the first generation of vaccines covering kids in low-income countries with UNICEF in the 1990s, the cost to save a life was at least several times lower than that, maybe lower still. And in 1950, the opportunity to eradicate smallpox still lay ahead of a prospective donor.
And that's not just inflation. Globally, we're working our way out along the curve of giving opportunities. And I think that's awesome -- fewer kids are dying as a result. But it means that, in the near term, the moral value of money won’t simply increase over time with investment returns.
That said, you can imagine crazier futures where it becomes much cheaper to do good again: my colleague Holden has written about Malthusian dynamics and the history of economic growth and the implications of that for our possible digital future. The Global Health and Wellbeing team doesn’t account for those considerations in our thinking right now.
Over, say, the next 30 years, I don't expect we're going to get that much better at identifying opportunities. I think opportunities are getting worse because the world is improving faster than assets compound. That’s broadly consistent with our main donors’ commitment to spend down their wealth in their lifetimes.
Applied Divinity: Open Philanthropy has a reputation for an incredibly high hiring bar. There’s anecdotal evidence that the acceptance rate is very low. 80,000 Hours once called you the “world’s most intellectual foundation”. Several Open Phil staff members have reputations for being incredibly smart, hardworking, and conscientious people.
What should an ordinary non-genius do with their life? Asking for a friend of course…
Alexander Berger: Open Phil does have a really high bar, but we're almost always looking for more people. We are really excited for people to apply, and we actually have a bunch of open roles right now. But I’ll say that I do feel there's too much of a meme in effective altruism spaces that the only good jobs are “officially” EA jobs. I really don't agree with that. That's one of the things I really like about earning to give -- you can just do it, and live your life, and have a bunch of counterfactual impact. If you’re giving to GiveWell’s top charities, we think that’s more than five times better than just giving money to the world's poorest people, which our model says is a hundred times better than spending it on your own consumption. So if you can donate a lot, that’s awesome. You should be happy and proud to be able to do that. The EA community is pretty explicit about maximizing and optimizing and doing the absolute most possible good, which pushes people in quantitative directions where they’re making interpersonal comparisons that are probably not healthy. I really don’t like the implication that there are only a few ways to do good.
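Taking those two multipliers at face value, the arithmetic stacks up quickly:

```python
# Back-of-the-envelope multiplier stack, using the figures quoted above.
top_charity_vs_cash = 5        # GiveWell top charities vs. direct cash transfers
cash_vs_own_consumption = 100  # cash to the world's poorest vs. your own spending

# A dollar to a top charity vs. a dollar of your own consumption:
print(top_charity_vs_cash * cash_vs_own_consumption)  # 500x
```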
On Effective Giving And The Opportunities Still Out There
Applied Divinity: When I talk to people about GiveWell, there seem to be all these knee-jerk reactions: it’s too “top-down”, too paternalistic, foreign aid enables corruption, you can’t distill the value of human life to QALYs, and so on.
What’s your elevator pitch reply to someone who holds that broad perspective?
Alexander Berger: These actually seem like a few different criticisms, so I’ll try to respond to them one at a time.
First, in response to the argument that GiveWell specifically and effective altruism broadly are too “top-down paternalistic”: I take those concerns seriously, and I think donors who are most worried about them should consider GiveDirectly, which just transfers wealth to the poorest people in the world and lets them decide what to do with it. When there are so many people who, through no fault of their own, don't get enough food to eat and have really inadequate medical care and get wet when it rains, giving to them if you have more money than you need is a great, simple option, and I think that avoids the paternalism critique.
Now, the vast majority of what we fund isn’t GiveDirectly, and that’s because we think we can have substantially more positive impact than that (high) benchmark. Take antimalarial bednets. A lot of the benefit is an externality -- they're insecticide-treated, so they don’t just shield the user from mosquitoes; they actually kill mosquitoes, which protects other people, so you wouldn’t expect selfish maximizers to buy or use enough of them. And most of the beneficiaries are kids under five rather than adults, and it's not as if these kids have the money to buy a bednet for themselves or understand its benefits. So I think it’s worth grappling with the paternalism critique and thinking hard about the compelling opportunity GiveDirectly presents, but at the end of the day the critique seems to me overconfident about our inability to do better than cash transfers.
On corruption, again, I think bednets are useful to consider. As with any human enterprise, there are occasional issues with fraud and theft, but they’re small (and we know they’re small because we can do independent surveys to see if the intended beneficiaries get nets, and they overwhelmingly do).
You may have in mind a more macro critique: “well, if we let the kids die this time, maybe their governments would invest more in public provision of bednets” (or, even more strongly, “governments would invest in better public services, which would enable more growth”). I think putting it that baldly makes the stakes a little clearer. It’s plausible there is significant crowding out of public provision via these kinds of cheap health interventions (GiveWell’s case studies on bednets don’t find much, but more macro evidence from middle-income countries in the GAVI context suggests nearly full crowding out). But overall I find it really doubtful that, given how many lives we can save for so little money, we could create better outcomes by refusing to provide really basic global health goods.
And then lastly, in response to criticisms of QALYs and measurement: I think there are plenty of super reasonable critiques of QALYs, or of any other specific measure. But overall I’d say that numbers allow for optimization. If there is a number you want to make go up or down -- say, you want to decrease the number of child deaths -- then thinking about that number, what it is, where it comes from, and how you can improve it is going to help a lot versus going by the seat of your pants and doing what feels good. Our goal is to have as much impact as possible. We are interested in helping people more rather than less, and that is not a goal everybody shares. A lot of people want to give back to their community because they appreciate the services it provides, or give to their alma mater because they enjoyed their time in college. That is a very different project from coming to philanthropy because you have more than you need and want to have as much impact as you can to help other people. That latter project is what motivates us and what we're trying to do. And you just can't do that project without getting pretty philosophical and analytical, and trying to compare and rank things to see where money can go further.
Applied Divinity: In a recent book, Bykvist, Ord and MacAskill call for much greater investment in fundamental moral questions. They want to make progress on moral theories, and on some views, having moral theories we’re much more confident in feels almost like a prerequisite to thinking seriously about the value of the far future.
In contrast, your work on Global Health and Wellbeing seems mostly content to say: “okay, we care about QALYs; there’s a bit of uncertainty about the discount rate, a bit of uncertainty about increasing consumption versus improving health outcomes, but we did the sensitivity analysis and all reasonable ranges lead to broadly similar conclusions, so this isn’t a critical bottleneck.”
Is that a fair interpretation? Is it just that the differences in charity effectiveness are so high that moral weights get washed out?
Alexander Berger: I have less confidence than that suggests. I think the idea of fundamental moral progress is pretty interesting, and I feel really conflicted about whether this is over- or under-invested in by society. On the one hand, I do not feel like the sum total of Anglophone philosophy departments are producing so much fundamental moral progress that I think that’s a great use of human capital on the margin. On the other hand, ideas like long-termism, caring a lot about farm animal welfare, and some other original ideas that have motivated us a lot have come out of philosophy academia in some form. So there’s value in that work, and I think there could be value in more.
But I’m not prepared to wait. The ethos of the Global Health and Wellbeing team is a bias toward improving the world in concrete, actionable ways, as opposed to overthinking it or trying so hard to optimize that it becomes an obstacle to action. I'm really glad my longtermist colleagues are continuing to fund work on global prioritization research. That's a pretty small community right now and I'd be excited for it to grow. But I also don't feel it would be appropriate for people in general to have an attitude of “I'm waiting to see where philosophy will lead us.” We feel deep, profound uncertainty about a lot of things, but we have a commitment to not let that prevent us from acting.
I think we could very reasonably be critiqued from both sides -- by philosophers wanting us to be slower and more rigorous, and by Silicon Valley types asking how we can sit around thinking all day rather than moving faster to solve the world’s problems. That’s not to say they aren’t both right in some respects!
Applied Divinity: Imagine there’s a Freaky Friday scenario, you wake up tomorrow in Holden’s body, and you’re now responsible for the longtermist side of Open Philanthropy. What would you do differently? Presumably you wouldn’t just shut it down, but your differing worldview and uncertainty would influence grantmaking somehow right?
Alexander Berger: Things would not be that different, honestly. I would want to continue the work that Holden and Ajeya and Claire and Luke and Nick and the rest of that team have all already been working on -- a lot of work on AI safety, a lot of work on biosecurity and pandemic preparedness, and a lot of work on growing the longtermist community. I think those are all really worthwhile, valuable goals that I'm excited to see progress on and to defend. On the margin, relative to Holden, I would probably be a little more excited about things that look legible and actionable from multiple worldviews, and a little less about things that primarily look good if you're motivated by particular contrarian views, but those are marginal differences. If I were running that team, the overall perspective and work would look very similar to how they do now.
On Working At Open Philanthropy And What Comes Next
Applied Divinity: There’s a class of risks which are not existential in the traditional sense. They probably won’t kill literally every human, and they probably won’t cause suffering on a truly cosmic scale, but there seems to have been more interest from EA in them recently. I’m thinking of Luisa Rodriguez’s work on civilizational collapse, or Mike Cassidy and Lara Mani’s work on volcanic eruptions. Where do these fit, both abstractly in your moral framework and logistically at Open Phil?
Alexander Berger: Abstractly, I think these kinds of risks are worth paying attention to, but potential interventions typically end up being less exciting to us than those aimed at existential risks because, as you noted, things that might affect the full course of the future of human civilization have such enormous stakes that even small probabilities of influencing them multiply out to huge expected value. As a result, working to prevent low-probability catastrophes that don’t have the potential to affect the full course of the future isn’t necessarily worth the same trade-offs in terms of tractability and knowing whether you’re making progress.
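In stylized form (every number here is invented purely to show the shape of the expected-value argument):

```python
# Stylized stakes comparison; all numbers are invented for illustration.
p_influence = 1e-6         # tiny chance an intervention changes the outcome
stakes_existential = 1e15  # stand-in for the full future of civilization
stakes_recoverable = 1e9   # a catastrophe civilization recovers from

print(p_influence * stakes_existential)  # 1e9 in expectation
print(p_influence * stakes_recoverable)  # 1e3 in expectation: six orders smaller
```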
One way I think about this is: on one end of the spectrum, we have the GiveWell top charities doing really straightforward, evidence-backed, scalable interventions that help people today. And on the very far other end, we have work focused on things like risks from advanced AI that we think could affect the very, very long-run trajectory of human civilization. We often end up less optimistic about interventions in the middle of the spectrum, because you give up a lot of the measurability and concreteness that comes from working on concrete problems that exist today and where you can make visible progress, but you also give up the super-long-run stakes that come with existential risk. So we often are actually more compelled by the two ends of the spectrum than by things in between. (That said, it’s not as if we only pay attention to threats that would literally kill everyone -- a lot of our longtermist work on biosecurity, for instance, addresses threats that could have existential stakes because they kill so many people that there’s a chance civilization doesn’t recover. We’ve written about this here.)
All that said, I really enjoyed both Luisa’s series and the work on volcanic eruptions. My takeaway from the volcanic eruptions piece was that a lot of what you’d want to do are general preparedness interventions, like stockpiling food, the same kind of stuff that you’d want to do for things like nuclear winter. We’ve funded some work on that and I could imagine there’s more out there, but I doubt there’s hundreds of millions of dollars a year that would pencil out as being worth funding.
Applied Divinity: If I’m getting this story right, you wanted to work at GiveWell because you thought it was small and you could therefore have a large counterfactual impact. Holden stayed up late and convinced you there were some flawed assumptions at play, and that this wasn’t a good reason to join.
But what happened next? What was the reason instead?
Alexander Berger: At the time, I was thinking that if I didn't go to GiveWell, I would go to some bigger nonprofit consulting firm that was more established. And I figured that the marginal person that type of firm would otherwise hire was basically just as good as me, but if GiveWell didn't hire me, they said they wouldn’t hire anybody else very quickly. So compared to the next-best alternative for each organization, I was going to get more of the counterfactual credit for my work at GiveWell. (I thought at the time that that would be a bigger counterfactual portion of a smaller impact pie because GiveWell was much smaller and less well-established than the nonprofit consulting firm I was thinking about going to, but of course GiveWell grew quickly so that thinking doesn’t look great in retrospect.)
Holden's point was that I shouldn’t just think about the difference between me and the next person; I should also consider the full chain: if I took the job instead of person X, person X would take some other job instead of person Y, who would in turn take another job instead of person Z, and so on. If you make some strong assumptions -- which very much might not match reality! -- about how impact is distributed and how people sort across jobs, you can get that the sum of the full series of those steps looks more like the impact of the original role than like the difference between the first and second candidates for the first role. In other words, just thinking about how much good I might do in the role, myself, could be a better approximation than thinking about the difference between my impact and the next-best person's.
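A minimal sketch of that chain logic, under the strong sorting assumptions flagged above (the impact numbers are hypothetical):

```python
# Replacement-chain sketch. v[k] is the hypothetical impact of candidate k in
# the role they land in when I take the first job. Strong assumption: each
# displaced candidate performs about as well in the role above them as in
# their own fallback role, so v[k+1] stands in for both.
v = [100, 97, 93, 88, 80, 0]

# Naive counterfactual: just me vs. the runner-up in my role.
naive = v[0] - v[1]  # 3

# Full chain: each displacement difference telescopes down the chain.
chain = sum(v[k] - v[k + 1] for k in range(len(v) - 1))  # = v[0] - v[-1]
print(naive, chain)  # 3 vs 100: the chain sum looks like the whole role's impact
```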
I think there are a lot of ways in which the world is more chaotic than that model accounts for. But the important takeaway for me was that you often want to think in terms of general equilibrium and not just about the first-stage consequence of your actions, and sometimes trying to be clever by one extra step can be worse than just using common sense, because if you think about it maximally hard, you may end up back where the person who doesn't think about it very hard at all ends up. I think in the case of finding a job, particularly a first job after school, there’s a lot to be said for optimizing for fit and learning a lot, which seems like pretty common-sense advice, as opposed to trying to have a super high counterfactual impact immediately. I think the sophisticated EA advice has converged towards common sense on this but I’m not sure I or others trying to apply it naively have always done better than common sense.
Applied Divinity: Thanks for taking the time to talk, this has been a pleasure. If someone is looking to get involved either with Effective Altruism broadly or Open Philanthropy directly, where should they look?
Alexander Berger: A pleasure on my end as well! To follow capital-E, capital-A Effective Altruism, I’d recommend checking out the 80,000 Hours podcast or the EA Forum, but I think for a lot of people a good place to start is clicking around GiveWell’s super in-depth charity reviews (e.g. the Against Malaria Foundation) and thinking about where to donate. For Open Phil, you can sign up for updates here, and we’re almost always hiring these days -- I'd love for folks to check out our open roles and apply if anything looks up their alley.