Around 2014, I started to notice that something was up in academic philosophy. Geeky researchers from fancy universities, having first made their names in abstract and technical domains such as metaphysics, were now recreating themselves as public-facing ethicists. Knowing some of the personalities as I did, I found this pivot amusing. If the ideal ethicist has delicate social awareness, a rich experience of life, lots of empathy, and well-developed epistemic humility, these people had none of those things.
What they did have was a strong career incentive to produce quirky arguments in favour of the progressive norms emerging at the time, an advanced capacity to handle abstraction and technicality, and huge intellectual confidence. In real life, these would be the last people any sane individual would trust with a moral dilemma. Luckily for the outside world, they tended to have little influence, mainly because nobody could understand what the hell they were talking about.
The same cannot be said for the philosopher geeks in charge of the hugely popular and influential Effective Altruism (EA) movement, which was given new vim last week with the publication of a new book by one of its leading lights, 35-year-old William MacAskill, accompanied by a slew of interviews and puff pieces. An Associate Professor at Oxford, MacAskill apparently still lives like a student, giving away at least a tenth of his income, living in a shared house, wild swimming in freezing lakes, and eating vegan microwave meals. (Student life isn’t what it used to be.)
But his influence is huge, as is that of EA. Beloved of robotic tech bros everywhere with spare millions and allegedly twinging consciences, EA and offshoot affiliate organisations such as GiveWell, 80,000 Hours, and Giving What We Can aim to apply strictly rational methods to moral action in order to maximise the positive value of outcomes for everyone. Unlike many metaphysicians-turned-ethicists, MacAskill sells this in a style that is comprehensible, even attractive, to civilians — and especially to those with a lot of dosh to give away. Quite frankly, this worries me a bit.
The background to EA is austerely consequentialist: ultimately, the only thing that counts morally is maximising subjective wellbeing and minimising suffering, at scale. Compared to better potential outcomes, you are as much on the hook for what you fail to do as for what you do, and there is no real excuse for prioritising your own life, loved ones, or personal commitments over those of complete strangers. MacAskill’s new book, What We Owe The Future: A Million Year View, extends this approach to the generations of humans as yet unborn. As he puts it: “Impartially considered, future people should count for no less, morally, than the present generation.” This project of saving humanity’s future is dubbed “longtermism”, and it is championed by the lavishly funded Future of Humanity Institute (FHI) at Oxford University, of which MacAskill is an affiliate.
Longtermism is an unashamedly nerdy endeavour, implicitly framed as a superhero quest that skinny, specky, brainy philosophers in Oxford are best-placed to pursue — albeit by logic-chopping not karate chopping. The probability, severity, and tractability of threats such as artificial intelligence, nuclear war, the bio-engineering of pathogens, and climate change are bloodlessly assessed by MacAskill. As is traditional for the genre, the book also contains quite a few quirky and surprising moral imperatives. For instance: assuming we can give them happy lives, we have a duty to have more children; and we should also explore the possibility of “space settlement” in order to house them all.