Longtermism: "An odd and peculiar ideology"

Émile P. Torres calls one of the most influential philosophies of our time an ideology: Longtermism is the central school of thought of tech figures like Elon Musk and Skype co-founder Jaan Tallinn. In an interview, Torres explains why it is so dangerous.

Émile P. Torres, in the background a picture of the Virgo Supercluster
Émile P. Torres studies questions about the end of humanity and comes to different conclusions than the longtermists do. – All rights reserved. Portrait, mind map: É. P. Torres; Virgo Supercluster: IMAGO / StockTrek Images; montage: netzpolitik.org

Elon Musk has made headlines in recent months for his role as head of Twitter, which has seen him fundamentally transform the platform. What is less well known is that he is spending money to fund organizations whose mission is to fight artificial intelligence, which is supposedly threatening humanity.

The Future of Life Institute also counts Musk among its advisors. The institute caused a media stir with an open letter warning of the destructive power of a possible superintelligence and calling for a pause in the development of new AI models.

Another organization that has enjoyed funding from Musk is the Future of Humanity Institute, an interdisciplinary research center at Oxford University. Its director is the influential Swedish philosopher Nick Bostrom, a leading proponent of longtermism, a variety of effective altruism (EA). Both schools of thought call for research to be concentrated where it can have the greatest long-term positive impact on humanity.

An inhumane ideology

What may sound plausible at first has, however, drawn massive criticism. Émile P. Torres, currently a PhD candidate at Leibniz University Hanover and active on Twitter as @xriskology, explains the ideology behind effective altruism and longtermism in this interview.

Once a writer for the Future of Life Institute themselves, Torres was a research assistant to Ray Kurzweil, futurist and Director of Engineering at Google. Kurzweil popularized the concept of the technological singularity, which is related to longtermism. Torres' book "Human Extinction: A History of the Science and Ethics of Annihilation" will soon be published by Routledge. In it, they take a critical look at longtermism, among other things, and explain the ideas behind it.

netzpolitik.org: Maybe you want to explain first what effective altruism is. Because as I understand, longtermism is something that grew out of this philosophical theory.

Émile P. Torres: Effective altruism is a sort of cultural and intellectual movement that really emerged in the late 2000s. The central aim of EA is to do the most good. The roots of EA can be traced back to Peter Singer's work, in particular his 1972 article "Famine, Affluence, and Morality". That article made the case that one should be an altruist: maybe a significant portion of our disposable income should be spent helping individuals who are very far away, on the other side of the world. The fact that there is some kind of physical distance between us and them shouldn't be morally relevant. So the main thrust of that argument is that one ought to be more altruistic.

Then there's a further question: If one buys Peter Singer's arguments that we should become altruists, how should one use one's finite resources to maximize the altruistic effect? For the most part, the world of philanthropy had been, until EA emerged, driven by a sort of emotional pull to various causes. So Michael J. Fox, for example, ends up setting up a foundation for Parkinson's research. Maybe somebody has some familial connection to a certain part of the world that gets hit with a natural disaster and they end up donating a significant amount of money.

What is different about EA, at least according to them, is their attempt to use reason and evidence to determine the very best ways to help the greatest number of people. Yet there are all sorts of pretty significant problems with this approach, which may sound compelling at first glance.

"Giving What We Can"

The first EA organization, "Giving What We Can", was officially founded in 2009. Its initial focus was on alleviating global poverty. They accepted Singer's global ethics and then tried to use a kind of rigorous scientific methodology to figure out the charities that would save the greatest number of lives. That's what they claimed was original to their approach.

netzpolitik.org: When did longtermism appear?

Torres: Longtermism emerged in the early 2010s. You had some people who bought into the EA philosophy and then discovered the work of Nick Bostrom and a few others. In particular, they read Bostrom's paper from 2003 called "Astronomical Waste". It made the case that our lineage, Homo sapiens and whatever descendants we might have, could survive in the universe for an extremely long period of time. We could live on this Spaceship Earth for another billion years or so. To put that in perspective, Homo sapiens has been around for about 300,000 years, a tiny fraction of a billion. Civilization has been around for 6,000 years or so. It's a really enormous amount of time in our future that just utterly dwarfs the amount of time civilization has existed so far.

Of course, we could avoid the destruction of our planet by the sun in a billion years if we colonize space. Then we could live for a much longer period of time, maybe something like 10^40 years, which is a one followed by 40 zeros. It's a really long period of time. Maybe we could exist for much longer than that, 10^100 years, when the heat death of the universe is supposed to happen. And not only could we exist for this enormous amount of time across the temporal dimension; the universe is also really huge.

A universe full of people

netzpolitik.org: This means that there would be a lot of people. About how many people are we talking here?

Torres: The first estimate I’m familiar with of how many future people there could be on Earth came from the cosmologist Carl Sagan. He was the host of the TV show Cosmos and very famous in the 1970s, 1980s.

He estimated in 1983 that if Homo sapiens existed for the lifespan of an average mammalian species, which he thought was ten million years, and if the global population remained steady (it must have been around four billion at that point), and if individuals lived for a hundred years, then there would be 500 trillion future people. That's just an enormous number of people in the future. By comparison, it's been estimated that about 117 billion members of our species have existed so far. This means the future population would be vastly larger than the past population.
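Sagan's back-of-the-envelope calculation can be sketched as follows. The inputs here are the approximations mentioned in the interview, not necessarily Sagan's exact figures; with a steady population of four billion the result lands at roughly 400 trillion, in the same ballpark as his published 500 trillion.

```python
# Rough version of Sagan's 1983 estimate of future people.
# All inputs are approximations taken from the interview.
species_lifetime_years = 10_000_000   # assumed lifetime of an average mammalian species
steady_population = 4_000_000_000     # roughly the world population at the time
human_lifespan_years = 100            # assumed lifespan per person

# Number of non-overlapping "generations" that fit into the species lifetime,
# each containing the full steady-state population.
generations = species_lifetime_years // human_lifespan_years
future_people = steady_population * generations

print(f"{future_people:.1e}")  # on the order of 10^14, i.e. hundreds of trillions
```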

Returning to Bostrom, if we colonize the Virgo Supercluster, then the population could be 10^23 biological humans. There could be even more people, though, if we create these huge computer simulations that are running virtual reality worlds, trillions of digital people supposedly living happy lives for some reason.

You can fit more digital people per volumetric unit of space than biological humans. According to Bostrom that’s the whole reason you would want to go out and simulate people and simulate worlds. Because you could just fit more, so the future population could be even larger, you know, 10^38 in the Virgo supercluster, 10^58 in the universe as a whole.

netzpolitik.org: Where do the digital people come from? How do they get born or how do they exist?

Torres: It's not really clear. Usually all that is said about these digital people is that they would have happy lives, and that's important ethically for the longtermist view. But there are no details about whether they would have life cycles like ours. I think a lot of longtermists don't appreciate just how weird this digital world would be. And that's assuming it's possible in the first place, which it might not be: maybe consciousness is something that can emerge only from biological tissue.

The phantasm of superintelligence

netzpolitik.org: You have spoken out against longtermism. What is wrong with it in your view?

Torres: First I would have to underline the extent to which this view is influential in the world. Elon Musk calls it a "close match for my philosophy". It's really pervasive in the tech industry. It's driving a lot of the research right now that is focused on creating artificial general intelligence, which is supposed to be this little stepping stone to artificial superintelligence.

Basically, it's like subhuman AI, then AGI, and then superintelligence immediately after that. A lot of the research by OpenAI and DeepMind and other very well funded companies with billions and billions of dollars behind them is driven by this kind of longtermist vision: that superintelligence is the crucial vehicle that will enable us to colonize space, to maximize value and so on. It's infiltrating the United Nations. It's poised to shape the 2024 Summit of the Future. It has billions of dollars behind its research projects. So this is not just an odd and peculiar kind of ideology, something to just chuckle at. No, it's really influencing our world in significant ways.

The mere ripples on the great sea of life

netzpolitik.org: What are the dangers of this ideology?

Torres: I feel like there are two big dangers that this ideology poses. One is that because it casts our eyes on the very far future and anticipates the possible creation of enormous numbers of future people, it can make current problems that are not "existential" look very trivial.

Bostrom said that, if you look across the 20th century, for example, you will see all sorts of catastrophes, including global catastrophes causing the deaths of tens of millions of people: World War Two, the AIDS pandemic, the 1918 Spanish flu. All of these are terrible in absolute terms, but they are really nothing more than "mere ripples on the great sea of life", in Bostrom's terms. Because in the grand scheme of things, the loss of several tens of millions of people, when compared to how many people could exist in the future, just isn't that much.

The triviality of climate change

Take climate change, for example. I find it impossible to read the longtermist literature and not come away with a fairly rosy picture. That is: it's not going to be an existential catastrophe. There's a small chance it might be, but it's much more likely that it's going to profoundly affect people in the Global South in particular, and it'll probably cause tens of millions of deaths, hundreds of millions of people being displaced, maybe billions of people having to move. It'll be a catastrophe. But when you zoom out and take the cosmic vantage point, that's just a little hiccup.

If we have this whole future ahead of us, and given that we have finite resources, maybe the best way to allocate those resources is not fighting climate change or dealing with climate justice issues. There are bigger fish to fry, such as machine superintelligence causing our annihilation. That would mean not only that 8 billion people die. That's bad, but much worse is the loss of all those future value containers. I think there's a huge concern that this inclines people to minimize or trivialize any current-day problem that isn't deemed to be existential in nature.

Utopian terror

netzpolitik.org: What is the second one?

Torres: The second issue is very much related to that. Because longtermism holds out this kind of utopian vision of the future, full of astronomical amounts of value, or pleasure, it could lead individuals who become passionate believers that this utopia could exist to potentially engage in extreme measures which might be violent in nature in order to ensure the realization of this techno-utopian future.

History is overflowing with examples of utopian movements that engaged in all sorts of horrifically violent acts of terrorism, genocide and so on for the sake of realizing utopia. The ingredients that enabled those past utopian movements to justify violent measures to themselves are right at the core of longtermism. All it takes is for the right individual who really believes in longtermism to find themselves in the right situation, specifically one where they see utopia on the horizon and then see you standing in between.

They'll say: "Sorry, I don't want to hurt anyone, but I have to. The stakes are so huge. I might need to kill five, ten, maybe a million, maybe 10 million people." There's a real, legitimate concern that people will take Bostrom's and the longtermists' worldview seriously and then find themselves in this kind of situation. I'm genuinely worried about that.

Altruism trimmed for effectiveness

netzpolitik.org: Can you say a little more about how the longtermist view developed from the philosophy of effective altruism?

Torres: After a group of effective altruists discovered Bostrom's work, they came to the conclusion: if we want to do the most good, and if that means positively affecting the greatest number of people possible, and if by far the greatest number of people exists in the far future, then maybe we should focus less on the individuals who are suffering today and more on positively influencing the lives of people living millions, billions, trillions of years from now.

This is how the longtermist ideology came about. It was built on this idea of maximizing one’s positive impact on the world and then realizing that the future could be much, much bigger than the present is. And so even if there’s a very small probability of positively affecting 1 percent of the future population in expected value terms, that could still be enormously greater than helping, with a very high degree of certainty, a billion people right now.

There are 1.3 billion people in multidimensional poverty today, for example. But if I can do something that affects 0.0001 percent of the 10^58 people in the far future, then that's a much greater number than 1.3 billion. Within this framework of doing the most good, it just makes sense that we should pivot towards thinking about the far future.
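The expected-value arithmetic Torres describes can be sketched like this. The probability of success is an illustrative assumption (the interview only says it can be "very small"); the point is that even a vanishingly small chance of affecting 10^58 future people swamps helping 1.3 billion people with near-certainty.

```python
# Expected-value comparison behind the longtermist pivot.
# Numbers are taken from the interview except where noted.
far_future_people = 10**58          # Bostrom's upper bound for the universe
fraction_affected = 0.0001 / 100    # 0.0001 percent, as in the interview
probability_of_success = 1e-10      # assumed: a vanishingly small chance

expected_far_future = far_future_people * fraction_affected * probability_of_success

people_in_poverty = 1_300_000_000   # 1.3 billion in multidimensional poverty
certain_benefit = people_in_poverty * 1.0  # helping them with near-certainty

# The far-future term still dominates by dozens of orders of magnitude,
# which is exactly the move Torres criticizes.
print(expected_far_future > certain_benefit)  # True
```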

The kind of utilitarianism that is most influential among longtermists says that what we should actually do is not just maximize value within a population, but maximize value within the universe as a whole. There are two ways to do that. One is what I just mentioned: increase the happiness of the people who exist.

Maximizing happiness

Another way is to create new people who are happy, because insofar as they have net positive amounts of happiness, that is going to increase the total amount of value in the universe. Suddenly you have an argument for why creating new happy people is a moral obligation, in addition to making existing people happy. That is why they think it's really important that we survive for as long as possible, colonize space, and ultimately create these huge computer simulations with trillions and trillions of digital people living in them.

netzpolitik.org: Why is it important to maximize value?

Torres: In the utilitarian view, it's about creating the greatest total quantity of value in the universe. Historically, I don't think it's a coincidence that this version of utilitarianism took shape around the time capitalism emerged. The parallels are pretty significant: for capitalists it's about maximizing profit; for utilitarians it's an ethical spin on that, just maximizing value. Value is your bottom line, maybe human happiness or something like that. But in both cases it's just this kind of mindless "more is better".

netzpolitik.org: It seems contradictory that more people should always be better. Wouldn't there be more problems, or more complicated problems, if there were a lot more people?

Torres: I would say philosophically, one of the criticisms you could make is that it seems to get the relationship between this thing that we consider to be value and people wrong. For utilitarians what ultimately matters is value. That’s the ultimate end.

What are the means? Well, since value has to be realized by something, and since that something is a person, you need to create more people. People are ultimately seen as a means to an end. Whereas utilitarians treat people as instrumentally valuable, as a means to the end of value, or pleasure, or something like that, we should see this relationship the other way around: happiness matters because it's good for people. Happiness is a means for the individual to improve their well-being.

Utilitarianism is basically a theory of persons as containers, just value containers. And this captures the idea that we're a means to an end. The more value containers you have in the universe, the more opportunity you have to fill them with value, and the more value you have in total. This goes back to the idea that one way to maximize value is to increase the amount of value that each container has; another way is to just create new containers. So it seems to get things wrong. It treats people as a means to an end, as mere containers, mere vehicles, mere vessels for value. Other philosophers, Kantians in particular, would see people as ends in and of themselves. That seems to be the better view as far as I'm concerned.


netzpolitik.org: Influential philosophers in the longtermist field include William MacAskill and Nick Bostrom. They seem to differentiate between humans in such a way that some are more valuable than others. And they seem to present themselves as those who have the solutions for everything.

Torres: I would say that a lot of the longtermists do exhibit a kind of extreme self-importance. There have been some critics of the movement within the EA community itself. Carla Zoe Cremer, for example, criticized some of the leaders for not doing enough to reject a kind of pervasive hero-worship within the community.

People like Bostrom seem to believe that they have a very morally significant role to play in directing the course of future civilizational development. They are very elitist and promote a very rigid hierarchy, with the people at the top wielding a huge amount of concentrated power. I certainly get the impression that that's the way they want it, because a lot of the people at the top believe that they have superior intellectual abilities and hence a unique ability to determine the best course of action.

Superintelligence: our way to Utopia

Eliezer Yudkowsky and Bostrom both have a more or less explicitly elitist strain in their way of thinking about this. According to them, the creation of superintelligence is so important that its development should be left up to the individuals best suited, intellectually and morally, to make decisions about how to develop it. Those individuals are Bostrom, Yudkowsky and others at the Future of Humanity Institute. I find the community problematic for how undemocratic and even anti-democratic it is.

netzpolitik.org: That is very concerning. Why is superintelligence important for longtermists?

Torres: There are two things to say. On the one hand it’s generally been thought by longtermists that the outcome of superintelligence will be binary: Either it will almost immediately cause our annihilation, total annihilation, everybody on the planet is dead, or it will be the vehicle that will enable us to actually create a real utopia here on Earth and in the heavens. It’s a vehicle that will take us from our current position to Utopia, as well as enable us to colonize the vast cosmos and consequently create astronomical amounts of value.

It's also thought that designing a superintelligence so that it does not destroy us, but instead enables us to create Utopia, is very difficult. This is an intellectual, philosophical and engineering problem that people like Bostrom and others at the Future of Humanity Institute, and maybe people at OpenAI, might believe they are uniquely positioned to solve.

Given that the stakes are so enormous (if not annihilation, then Utopia; if not Utopia, then annihilation), it really matters that we design superintelligence in a way that reflects what we actually want: to maximize value, ultimately colonize space, become radically enhanced posthumans, things of that sort.

„Impoverished vision of the future“

netzpolitik.org: It sounds like a bad sci-fi story.

Torres: It might be worse. I think longtermists have a really impoverished vision of the future. The metrics by which longtermists measure the goodness of outcomes are very technological, capitalistic, quantitative in nature. This notion of just maximizing value is a deeply impoverished way of thinking about our relationship to value. There are other possible responses to value, like cherishing, treasuring, protecting, prolonging, sustaining, and so on.

Furthermore, as some critics of technology have pointed out, a lot of the attributes possessed by technological artifacts that make those artifacts valuable to us are then taken and projected onto the human creators themselves: speed, computational processing capacity, functionality, reliability, and so on. You can ask: Is this a good car? Well, yes. Why? Because it's reliable.

Reverse adaptation

They take those properties and project them onto human beings; they export these metrics from the domain of technology to the domain of humanity and finally judge people based on them. In Langdon Winner's terms, this is the phenomenon of reverse adaptation. Rather than technology adapting to us, we adapt ourselves to technologies: the same metrics by which we judge those technologies to be good, we start to apply to ourselves.

Maybe it shouldn't be like that. Maybe there is a richer way of thinking about the human experience that isn't just rigid and quantitative. What you almost never see in the longtermist literature are discussions about meaning, about what makes a meaningful life. There's almost no serious philosophical reflection on that.

Maybe it's not just how much happiness I have; maybe it's the quality of that happiness that matters. Maybe it's the context in which that happiness arose. Maybe if I'm in a community and I'm very happy, but the people around me are suffering, that's not a good state of affairs. There's a lot of nuance that I think their futurological vision just misses.
