How can we do the most good? How do we measure the impact of actions taken to help people in the past, present, and future? These are unlikely questions to lie at the heart of a major, Enron-style financial collapse, but they are core to the fall of 30-year-old former multibillionaire Sam Bankman-Fried and his bankrupt FTX crypto empire.
Bankman-Fried was an adherent of the philosophy of effective altruism, and its more recent offshoot longtermism. Effective altruism, in practice, espouses the idea that the best way to do the most good for the future is to become as rich and powerful as possible right now. Longtermism takes it a step further, holding that the greatest good you can do is to allocate resources toward maximizing the potential happiness of the trillions of humans in the far future who will only be born if we minimize the risk of humanity's extinction. Both ideologies have found currency among the Silicon Valley elite in recent years—Elon Musk has called longtermism a "close match" to his personal philosophy.
Bankman-Fried made effective altruism his brand as "the most generous billionaire," as one since-deleted YouTube video with over 1.5 million views, which Bankman-Fried participated in, put it. His benevolent and unassuming image—he typically wears a T-shirt and shorts—was promoted by backers like Sequoia Capital, and helped him gain influence in Washington. He became a significant political donor and pledged to give away vast swaths of his $16 billion fortune, which is now thought to be in the millions after a sudden and dramatic collapse this month. FTX, FTX.US, and intertwined trading firm Alameda Research all filed for bankruptcy once severe financial mismanagement and risk-taking, overlapping romantic relationships between top personnel, and an apparent lack of recognizable corporate structure came to light. Anybody who still had money on FTX, or who invested in its nearly worthless token FTT, suffered.
Bankman-Fried made longtermism part of FTX's DNA, not just through his own views but its official philanthropy. Bankman-Fried explicitly launched FTX in 2019 to maximize the money he could give to longtermist causes, and then launched the philanthropic FTX Future Fund in 2022 to "support ambitious projects to improve humanity's long-term prospects."
In the aftermath of the company’s collapse and bankruptcy, every member of the Future Fund team resigned, including 35-year-old Scottish philosopher William MacAskill—co-founder of effective altruism, the face of the longtermist movement, and a longtime friend and mentor to Bankman-Fried who joined the company to help spend FTX's sizeable war chest on longtermist causes.
MacAskill wrote in a Twitter thread after the resignation that, "if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception."
These seismic shifts invite closer examination of effective altruism and longtermism, and the obvious question: How should we think about an ideology for the tech elite that promised to save the world but has instead devastated people's lives?
Australian philosopher Peter Singer, an early articulator of animal rights and liberation, is regarded as the intellectual father of effective altruism thanks to his 1972 essay “Famine, Affluence, and Morality.” In the essay, Singer argued people were morally obligated to maximize their impact through charity focused on causes that offered the greatest quality of life improvements, such as those aimed at reducing poverty and mortality in the Global South.
Others, including MacAskill, took it further by insisting that we must maximize our positive impact on people yet to be born thousands, millions, and even billions of years in the future.
This "longtermism" isn't solely MacAskill's brainchild, but he has quickly become the public face of it. The movement's intellectual roots lie at Oxford University—where MacAskill is a professor—among a fringe of philosophers who are concerned with risks that could bring about humanity’s extinction (so-called existential risks) such as superintelligent AI. MacAskill and company’s longtermism is distinct from effective altruism in a few key ways worth considering here, not only as its influence grows but as we learn more about Sam Bankman-Fried and FTX. More on that later.
One major node of the longtermist movement has been the Future of Humanity Institute (FHI) at Oxford, founded by Nick Bostrom—author of a seminal alarmist text about "superintelligence," or artificial intelligences that outpace our thinking ability—in 2005. FHI is connected to Toby Ord, a philosopher and FHI senior research fellow who wrote The Precipice: Existential Risk and the Future of Humanity and coined “longtermism” along with MacAskill. It's also connected to the longtermist Global Priorities Institute (GPI) at Oxford, where MacAskill holds a position, as he does at FHI.
MacAskill's influence in this world has been significant. Besides his involvement with FHI, he runs the Forethought Foundation, which "addresses the question of how to use our scarce resources to improve the world by as much as possible," and co-founded the Centre for Effective Altruism with Ord. All of these organizations share a building with FHI.
According to a New Yorker profile on MacAskill, he pitched Bankman-Fried on effective altruism after the pair met at a talk at MIT. Bankman-Fried later worked at the Centre after quitting his job in traditional stock trading and before founding Alameda Research and FTX, becoming something of a disciple of MacAskill and longtermism during a key period in his development. The connection between longtermism and FTX is direct, as Bankman-Fried explained on an episode of the longtermist 80,000 Hours podcast.
"If your goal is to have impact on the world—and in particular if your goal is to maximize the amount of impact that you have on the world—that has pretty strong implications for what you end up doing," he said. "Among other things, if you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good."
Bankman-Fried said that he estimated $100 billion might be a good amount to spend on various causes, and shooting for that amount affects strategy. "One piece of that is that Alameda was a successful trading firm. Why bother with FTX? And the answer is, there was a big opportunity there that I wanted to go after and see what we could do there," he said. "It’s not like Alameda was doing well and so what’s the point, because it’s already doing well? No. There’s well, and then there’s better than well—there’s no reason to stop at just doing well."
Longtermism, MacAskill writes in his recent manifesto What We Owe the Future, is "the idea that positively influencing the long-term future is a key moral priority of our time" as "future people" yet to be born but sure to exist "count for no less, morally, than the present generation." But longtermism, for all of its apparent focus on people, is not simply a vision that prioritizes the well-being of future individuals above more immediate concerns like profits or political coalitions. For example, FHI research assistant and former FTX Future Fund member Nick Beckstead argued at length in his 2013 dissertation for conclusions that seem to go against the origins of effective altruism.
“Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries,” Beckstead wrote. “It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal."
In this telling, the lives of Westerners and the lives of people yet to be born (likely descended from the Westerners who are prioritized today) are the real concerns our species should have. Building on that, extinction of humanity would be tragic on two counts: it would wipe out the population or cause immense suffering, and, even worse, it would ruin the potential of humanity to experience pleasure until the heat death of the universe.
Doctoral student and former longtermist Emile P. Torres wrote of the philosophy: "This is the central dogma of longtermism: nothing matters more, ethically speaking, than fulfilling our potential as a species of 'Earth-originating intelligent life.'"
Some of the movement's famous adherents—Bankman-Fried included—are famous for aggressively chasing wealth and power, often with chaotic results at best. For this and other reasons, people have argued the movement is a particularly dangerous one that is more a vehicle for elite power than improving the world.
Despite effective altruism coming onto the scene within the last decade, and finding its strongest promoters in the cutting-edge worlds of self-landing rockets and internet currencies, it has older intellectual origins.
Utilitarianism—which effective altruism springs from—was developed by Jeremy Bentham and John Stuart Mill more than two centuries ago. Its promoters have argued since then that we have a moral imperative to maximize humanity's sum well-being, with happiness counted as a positive, suffering as a negative, and uncertainty requiring that we average or hedge our estimates.
At its core is a supposedly radical argument: we can't prioritize our own interests, or those of people we're familiar with, over complete strangers. Instead, we must prioritize the greater good. It can, however, lend itself to horrific conclusions: when faced with the choice between saving millions of people today or shaving percentage points off the probability of some existential risk that would preclude trillions of humans over the coming trillions of years, we're required to do the latter. Worse still, if shaving those percentage points can be achieved by leaving those millions of people to die, then it's permitted, so long as the knock-on effects are limited.
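The arithmetic behind that conclusion is easy to sketch. The numbers below are invented purely for illustration—no longtermist text endorses these particular figures—but they show how any sufficiently large future population lets a tiny probability shift swamp a certain, present-day benefit:

```python
# Toy expected-value comparison with invented numbers, illustrating how
# naive utilitarian arithmetic lets a speculative far-future bet swamp
# a certain present-day benefit.

lives_saved_today = 5_000_000        # hypothetical: a certain benefit now

future_population = 10**12           # hypothetical: trillions of future people
risk_reduction = 0.0001              # hypothetical: 0.01% cut in extinction risk

ev_present = lives_saved_today
ev_future = future_population * risk_reduction  # 100,000,000 expected lives

# The tiny probability shift wins by a factor of 20, so this framework
# directs resources away from the five million lives savable for certain.
assert ev_future > ev_present
print(f"present: {ev_present:,}, future: {ev_future:,.0f}")
```

Scale the future population up or the risk reduction down and the result is the same: under this framework, the speculative bet always wins once the imagined population is large enough.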
In a sense, longtermism’s logic is a "moral revolution" as MacAskill has claimed. If we are supposed to see the lives of present and future people as morally equivalent, and then go further than utilitarianism and assert that trillions more people will exist over trillions more years, then the conclusion is that we must direct our time, energy, and resources towards realizing their birth and minimizing their suffering by reducing the existential risks as much as we can today.
In his manifesto, MacAskill argues that we must pay attention to the "average value added" by looking at the long-term value coming from pursuing a certain outcome, its significance, and whether its possibility is contingent on our actions, among other factors. In many cases, it's relatively easy to imagine scenarios where future people's livelihoods are contingent on today's actions.
One example used by MacAskill is particularly evocative: the abolition of the West's imperial slave trade in the 1800s. Was slavery's abolition "a more or less inevitable development once the idea was there?" or was it "a cultural contingency that became nearly universal globally but which could quite easily have been different?" While this and other examples are interesting, MacAskill quickly jumps to concerning conclusions.
One notable example is population ethics, specifically MacAskill’s endorsement of what’s known as the "repugnant conclusion." Its original articulation by moral philosopher Derek Parfit in his book Reasons and Persons states: “For any perfectly equal population with very high positive welfare, there is a population with very low positive welfare which is better, other things being equal.” It may seem morally repugnant to double or even triple the world’s population while letting living standards worsen, but MacAskill suggests this is ideal because somewhere along the line we will see other “improvements” that ensure this larger population and its descendants have lives worth living.
Or in other words, it may indeed be tragic if people today live horrible lives because we allocate scarce resources to improving future generations’ well-being, but if future people are equivalent to present people, and if there are more people in the future than today, then it is a moral crime to not ensure those generations have the greatest possible lives.
An illustrative example of how this thinking can go off the rails is Earning to Give, which MacAskill introduced to effective altruism. It was an attempt to convince people to take up high-paying jobs (even if they were harmful) to maximize their ability to contribute to charitable causes. In fact, it was at a talk on Earning to Give that MacAskill first hooked Bankman-Fried on longtermism.
In a 2017 paper co-authored by MacAskill and Benjamin Todd, the pair reinforce this claim, at one point looking to bankers as an instructive example: it is fine to steer people into investment banking, where they are simply overpaid, and have them contribute that excess income to charity, but it is not okay to encourage people to become bankers who commit fraud to generate excess returns and contribute that plunder to charitable causes. The paper hedges, however, arguing that there are "exceptional circumstances" in which it makes sense to do harm in the name of a greater good.
If one considers entering a harmful industry in order to do good, "Our main advice is to be cautious and seek advice from people with an outside perspective," the pair wrote, before arguing when it could be justified. "Applying what came earlier, one situation might be outweighing—if the benefits are expected to be much larger (such as 100 times) than the harms."
If you enter a financial system during an asset bubble that, over the long-term, will burst and immiserate millions of people’s livelihoods, then what are you to do? Is it okay to develop or trade in financial instruments that generate excessive capital but have the potential to collapse parts of the system itself? If the circumstance is considered exceptional, or the benefits great enough, and as long as you don't cross too many vaguely-defined moral lines, then a longtermist might say yes.
It’s not only that longtermists seem eager to excuse working in harmful industries if it could conceivably net a positive impact, but it also seems like they have a hard time even registering the existential risk posed by joining systemically harmful industries. As Tyler Cowen wrote in a succinct blog post: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated).”
This is a moral theory that contributed to a crypto billionaire (on paper) creating such a disastrous operation that it lost $1 billion of client funds, letting a crypto hedge fund he founded lose $10 billion in customer funds that he lent it, owing as many as a million customers money, unleashing a contagion that will very likely bankrupt or devalue other crypto platforms and assets that people were convinced to put their money in, and costing institutions like Canada’s pension fund a cool $95 million.
In the end, longtermism has proven itself to be the least effective form of altruism and, ironically, an existential risk to the effective altruism movement itself.
It’s important to distinguish effective altruism from longtermism because the latter's embrace of repugnant imperatives and conclusions has profound consequences for the tens of billions in capital at the movement’s disposal, and is part of an ongoing attempt to capture it. At the same time, longtermism is being advanced by the co-founder of effective altruism himself, in an active attempt by him and other members of the movement to reorient its goals and ambitions.
As Eric Levitz writes in the Intelligencer, effective altruism as a whole wasn’t complicit in Sam Bankman-Fried’s spectacular collapse but some effective altruists were—specifically those who subscribed to longtermism. Some were complicit in what Bankman-Fried did at FTX itself, some in developing a philosophy that helped Bankman-Fried rationalize his actions, and some of them will undoubtedly be hurt by Bankman-Fried’s actions. But beyond the carnage he’s unleashed onto the crypto ecosystem, what other harms will follow?
One immediate consequence is that FTX’s Future Fund—which provided funds to longtermist causes—will no longer be able to disburse the $160 million committed to a number of researchers and organizations over the next few years. Bankman-Fried wasn't "liquid enough" to create an endowment, so he paid projects as they came along, often in tranches that are now incomplete. Forbes reported that some nonprofits and charities would have to overhaul their operations to make up for the funding gap. Millions more were committed to newsrooms such as ProPublica, Semafor, Vox, The Intercept, and The Law and Justice Journalism Project.
One has to wonder why so many people missed the warning signs. How did a movement that prides itself on quantifying risk, impact, and harm rationalize the ascent of and support for a crypto billionaire when the crypto industry itself is so fundamentally odious and awash in scammers and grifters using the language of community and empowerment? An ideology that sees entering harmful industries as acceptable, with vaguely defined moral limits, certainly played a role.
After news of FTX’s collapse broke, MacAskill took to Twitter to express betrayal and emphasize that effective altruism does not hold that it's morally permissible to act unethically in service of a greater good. Bankman-Fried, if he committed outright fraud versus merely entering a generally harmful industry, represented an aberration and deviation from effective altruism’s moral theory that shouldn’t discourage people from embracing it.
"For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations,” MacAskill wrote. “A clear-thinking EA should strongly oppose 'ends justify the means' reasoning. I hope to write more soon about this.”
MacAskill did not respond to Motherboard's request for comment.
Bankman-Fried and his inner circle seem to have agreed, and were emphatic in their insistence that effective altruists should “be aggressive” in the risks they take to do good. Bankman-Fried in particular has, with his actions, cast doubt on whether he actually cares about short-term ethical concerns (like financially devastating nearly a million people) within this moral framework.
Caroline Ellison, a member of Bankman-Fried's inner circle and the chief executive of Alameda Research, wrote on her since-deleted Tumblr: "Is it infinitely good to do double-or-nothing coin flips forever? Well, sort of, because your upside is unbounded and your downside is bounded at your entire net worth. But most people don't do this. They really don't want to lose all of their money. (Of course those people are lame and not EAs; this blog endorses double-or-nothing coin flips and high leverage.)"
These comments seem clear cut: these people believed their iteration of effective altruism—longtermism—morally compelled them to make risky bets because the potential upside (increased wealth to increase contributions to help realize humanity's long term potential) massively outweighed the downside (losing all their money, and everyone else's too).
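Ellison's reasoning can be illustrated with a toy simulation. The betting parameters below are hypothetical, not a model of Alameda's actual positions; the sketch just shows how an unbounded upside lets expected-value maximization endorse bets that almost always end in ruin:

```python
import random

random.seed(0)  # deterministic for illustration

def double_or_nothing(wealth=1.0, p_win=0.6, flips=10):
    """Take repeated double-or-nothing bets with a favorable edge.

    Each flip has positive expected value (0.6 * 2 = 1.2x the stake),
    so a pure expected-value maximizer keeps flipping -- but a single
    loss means ruin, since the downside is everything you have.
    """
    for _ in range(flips):
        if random.random() < p_win:
            wealth *= 2
        else:
            return 0.0  # busted
    return wealth

results = [double_or_nothing() for _ in range(10_000)]
busted = sum(w == 0.0 for w in results)
mean = sum(results) / len(results)

# Roughly 99.4% of bettors end at zero (only 0.6**10, about 0.6%,
# survive all ten flips), yet the mean outcome is about 1.2**10, or
# 6.2x the starting stake: a few enormous winners dominate the average.
print(f"busted: {busted}/10000, mean outcome: {mean:.1f}x")
```

On average the strategy looks brilliant; for almost every individual bettor it ends at zero. That gap between the mean outcome and the typical outcome is exactly what Ellison's "most people don't do this" waves away.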
What to make of this, then, when we consider Bankman-Fried’s recent comments to Kelsey Piper? Piper is a staff writer at Vox’s Future Perfect—a vertical funded in part by Bankman-Fried’s FTX Future Fund foundation—an effective altruist herself, and a personal friend of his.
Piper references an earlier conversation with Bankman-Fried where he said that you can't simply say you'll do unethical things for good causes because there are second-order harms that spring from unethical businesses which would harm people, but also harm your ability to do good down the line (the line of reasoning advanced by MacAskill when talking about earning to give through harmful careers). When asked via DMs if that response was simply PR spin, he admitted that it was.
"Man all the dumb shit I said. It's not true really," Bankman-Fried wrote. "Everyone goes around pretending that perception reflects reality. It doesn't. Some of this decades's greatest heroes will never be known, and some of its most beloved people are basically shams."
Piper pressed forward, asking "So the ethics stuff - mostly a front? People will like you if you win and hate you if you lose and that's how it all really works?" Bankman-Fried didn't mince words: "Yeah. I mean that's not *all* of it. But it's a lot. The worst quandrant is 'sketchy + lose'. The best is 'win + ???' 'Clean + lose' is bad but not terribel[sic]."
In another message, Piper wrote that Bankman-Fried was "really good at talking ethics" despite this cynical worldview. He replied, "Ya. Hehe. I had to be. It's what reputations are made of, to some extent. I feel bad for those who get fucked by it. By this dumb game we woke westerners play where we say all the right shiboleths and so everyone likes us."
It may be tempting to interpret this as Bankman-Fried admitting he didn’t believe in effective altruism all along, but it’s really a doubling down on longtermism and its moral imperatives, which are distinct from and far more dangerous than effective altruism.
Bankman-Fried’s moral philosophy was one that prioritized the far future and its massively larger population over the present because of the exponentially larger aggregate happiness it would potentially have. In that sort of worldview, what does it matter if you build a crypto empire that may expose millions more people to fraud and volatile speculation that could wipe out their life savings—you’re doing this to raise enough money to save humanity, after all.
To this end, Bankman-Fried urged other effective altruists to adopt his high risk tolerance, worked hard to make his position seem good and ethical, and when it all collapsed, he revealed that his public comments on how we must carefully consider ethics and second-order harms were all bullshit so that he would garner a greater, more positive reputation that could lead to even greater wealth and greater impact in the name of longtermism.
All of this may suggest that the longtermist ideas of MacAskill and Bankman-Fried are the core issue and a war against those specific arguments would be sufficient, but that is not the full picture. If longtermism is morally repugnant, it’s only because effective altruism is so morally vacuous. Both were spearheaded by MacAskill, both are utilitarian logics that pave beautiful roads to hell with envelope math and slippery arguments about harm and impact, and both have led us to this current moment.
On some level, maybe it makes sense to ensure that your actions have the greatest possible positive impact—that your money is donated effectively to causes that improve people’s lives to the greatest degree possible, or that scarce resources are mobilized to tackle the roots of problems as much as possible. But it's not clear why this top-down, from-first-principles approach is the right one. It's a fundamentally anti-democratic impulse that can lead to paralysis when uncertainty looms, massive blindspots for existential risks and moral hazards, or capture by opportunistic plutocrats and cynical adherents who simply want power, influence, and unblemished reputations.
The devastating collapse of FTX is the logical conclusion of this sort of thinking, where a lot of time is spent thinking and quantifying anxiety-inducing scenarios, where there are morally permissible cases to do great harm, where future generations of untold humans have equivalent moral worth to people today and should even be prioritized because of their greater aggregate value, and where major funders are capricious or cynical billionaires who are desperate for legitimacy narratives that justify the harm they cause today.
These sorts of moral frameworks are the real existential threats to humanity today. Not rogue superintelligent AIs, but human beings who help saboteurs of today’s world be recast as the saviors of a distant tomorrow.
MacAskill wasn’t the first philosopher to be useful to elites, just as Bankman-Fried wasn’t the first billionaire to cynically invoke ethics and morals when justifying unethical or immoral behavior. They certainly won’t be the last as long as the longtermist movement persists.