S-risks: An introduction

by Tobias Baumann. First published in 2017.

Summary

Throughout human history, new technologies offered unprecedented opportunities, but also posed serious risks. Combined with insufficient concern for the wellbeing of others, technological progress sometimes caused vast amounts of suffering. While there are reasons to think these harms will be temporary, this may not be true in all cases.

Future technology will likely endow humanity with unprecedented power, potentially allowing us to shape the entire universe. But if this newfound power is used irresponsibly, it may lead to suffering on an astronomical scale. Such risks of astronomical suffering are also called suffering risks or s-risks for short.

While such scenarios may seem far-fetched at first, we have good reasons to believe that their probability is not negligible. People who want to help others as effectively as possible should, therefore, consider reducing s-risks a top priority – especially if they perceive them to be important on moral grounds or neglected in practice.

There are three broad ways to effectively reduce s-risk:

  • Attempting to shape the development of pivotal new technologies by implementing precautionary measures against s-risks.
  • Aiming for broad improvements in societal values and institutions, which increases the probability that future generations will work to prevent (or at least not cause) s-risks.
  • Focusing on higher-level approaches such as raising awareness and researching how to best reduce s-risks.

Introduction

Most people who want to improve the world tend to focus on helping individuals alive today. People empathize more readily with the suffering of those living now than with the suffering of those who will exist a thousand or a million years in the future.

Yet we cannot justify this disregard of not-yet-existing individuals. From an impartial perspective, the fact that we live in a certain time does not grant this time any special ethical significance. Reducing suffering in the future is no less meaningful than reducing suffering now. In fact, many people argue that we should focus on shaping the far future because it will likely contain much larger populations.1

Imagine that future generations of humans look back on our time. Which developments would they judge as most important? What might they wish we had paid more attention to? To begin to answer these questions, we first take a look at the past.

Technology in human history

Throughout history, the emergence of new technologies often had a transformative impact. Agriculture was a key development in the rise of civilization, as it allowed for higher population densities and trade routes between different regions. More recently, the industrial revolution upended traditional ways of living and shaped the course of the 19th and 20th centuries.

We reap the fruits of technological progress every day. We live longer, we have managed to eradicate diseases, and we are, at least on average, richer than ever before. The other side of the story, however, is that the same technologies enabled the rise of industrial warfare, which brought with it chemical weapons, nuclear weapons, and total war.2

The risks from new technologies are exacerbated if we extend our moral circle to all sentient beings, including nonhuman animals. Industrialization has multiplied the number of animals that are raised and killed – often in deplorable conditions on factory farms – for human meat consumption.3

Crucially, factory farming is the result of economic incentives and technological feasibility, not of human malice or bad intentions. Most humans don’t approve of animal suffering per se – getting tasty food just happens to involve animal suffering.4 In other words, technological capacity plus indifference is already enough to cause unimaginable amounts of suffering. This should make us mindful of the possibility that future technologies might lead to a similar moral catastrophe.

New technologies and astronomical stakes

Barring extinction or civilizational collapse, technological progress will likely continue. This means that new technologies will endow humanity with unprecedented power. Like the technologies of the past, they will give rise to both tremendous opportunities and severe risks. If such advances allow us to colonize other planets, the stakes will become truly astronomical – there are more stars in the observable universe than grains of sand on Earth. This makes it all the more important that we use this newfound power responsibly.

As we have seen, technological capacity combined with moral indifference can lead to a moral catastrophe. A future development akin to factory farming might cause suffering on an astronomical scale, vastly exceeding anything we’ve done so far. Such events are called s-risks (an abbreviation of “suffering risks” or “risks of astronomical suffering”).

How s-risks could come about

See also: A typology of s-risks and Risk factors for s-risks

It is always hard to imagine what future developments might look like. Knights in the Middle Ages could not have conceived of the atomic bomb. Accordingly, the following examples are merely informed speculation.

Many s-risks involve the possibility that advanced artificial systems may develop sentience if they are sufficiently complex and programmed in a certain way.5 If such artificial beings come into existence, they also matter morally6, but it’s quite possible that people will not care (to a sufficient extent) about their wellbeing.

Artificial minds will likely be very alien to us, making it difficult to empathize with them. What’s more, humanity might fail to recognize artificial sentience, just as many philosophers and scientists failed to recognize animal sentience for thousands of years. We don’t yet have a reliable way to “detect” sentience, especially in systems that are very different from human brains.7

Just as large numbers of nonhuman animals were created because it was economically expedient, it is conceivable that large numbers of artificial minds will be created in the future. They will likely enjoy various advantages over biological minds, which will make them economically useful. This combination of large numbers of sentient minds and a foreseeable lack of moral consideration presents a severe s-risk. In fact, these conditions look strikingly similar to those of factory farming.

Several thinkers have also explored more concrete scenarios. Nick Bostrom coined the term mindcrime for the idea that the thought processes of a superintelligent AI might contain and potentially harm sentient simulations. Another possibility is suffering subroutines: computations may involve instrumentally useful algorithms sufficiently similar to the parts of our own brains that lead to pain.

These are all examples of incidental s-risks, where an efficient solution to a problem happens to involve a lot of suffering. A different class of s-risks – agential s-risks – arises when an agent actively wants to cause harm, either because of sadism or as part of a conflict. For example, warfare and terrorism with advanced technology could easily amount to an s-risk, or a malevolent dictator might cause harm on a large scale.

It’s important to remember that technology is neutral in and of itself, and can also be used to reduce suffering. An example is cultured meat, which has the potential to render conventional animal farming obsolete. More advanced technology may also facilitate interventions to reduce the suffering of animals in the wild or even abolish suffering altogether.8 Whether new technologies will be good or bad overall is up to us. However, with so much at stake, it makes sense to consider the possibility of bad outcomes so that we can avoid them.

Why should we take s-risks seriously?

S-risks are not extremely unlikely

One could get the impression that s-risks are just unfounded speculation. If the probability of s-risks were negligible, many people would find it counterintuitive to suggest that we should nevertheless focus on preventing them simply because of the (supposedly) astronomical stakes.9 This objection is misguided, however, because we have good reasons to believe that the probability of s-risks is not so negligible after all.

First, s-risks are disjunctive. They can materialize in any number of unrelated ways. Generally speaking, it’s hard to predict the future and the range of scenarios that we can imagine is limited. It is therefore plausible that unforeseen scenarios – known as black swans – make up a significant fraction of s-risks. So even if any particular dystopian scenario we can conceive of is highly unlikely, the probability of some s-risk may still be non-negligible.

Second, while s-risks may seem speculative at first, all the underlying assumptions are plausible:

  • Barring globally destabilizing events, we have no reason to expect that technological progress will soon come to a halt.10 Thus, space colonisation will likely become feasible at some point, introducing truly astronomical stakes.11
  • Advanced technology will make it easier to create unprecedented amounts of suffering, intentionally or not.12
  • It’s at least possible that those in power will not care (enough) about the suffering of less powerful beings. Weak human benevolence might go a long way, but does not completely rule out s-risks.

Third, historical precedents do exist. Factory farming, for instance, is structurally similar to (incidental) s-risks, albeit smaller in scale. In general, humanity has a mixed track record regarding responsible use of new technologies, so we can hardly be certain that future technological risks will be handled with appropriate care and consideration.

To clarify, all these arguments are consistent with believing that technology can also benefit us, or that it may improve human quality of life to unprecedented levels. Working on s-risks does not require a particularly pessimistic view of the future trajectory of humanity. To be concerned about s-risks, it is sufficient to believe that the probability of a bad outcome is not negligible, which is consistent with believing that a utopian future free of suffering is also quite possible. I focus on s-risks for normative reasons, that is, because I think reducing (severe) suffering is morally most urgent.13

S-risks outweigh present-day suffering in expectation

A good measure of the seriousness of a risk is its expected value, i.e. the product of its scope and its probability of occurrence. The scope of s-risks would be far larger14 than that of present-day sources of suffering like factory farming or wild animal suffering. Combined with a non-negligible probability of occurrence, this means that s-risks plausibly outweigh present-day suffering in expectation.
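
To make this concrete, here is a minimal worked version of the expected-value comparison. The numbers are purely illustrative placeholders, not estimates made anywhere in this text:

\[
\text{expected suffering} \;=\; \text{probability of occurrence} \times \text{scope}
\]
\[
\text{e.g.}\quad 0.01 \times 1000\,S \;=\; 10\,S
\]

where \(S\) stands for the total amount of present-day suffering. Under these placeholder values, even a 1% chance of an outcome a thousand times worse than today’s suffering would outweigh present-day suffering tenfold in expectation.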

S-risks are neglected

So far, few people actively work on reducing s-risks. That comes as no surprise, given that s-risks are based on abstract considerations about the far future that don’t tug at our heartstrings.15 Even people who care about long-run outcomes often focus on achieving utopian futures rather than on preventing the worst ones. While this is certainly a worthwhile endeavor, it means that relatively few resources are invested in s-risk reduction. On the other hand, this means we can still expect to find low-hanging fruit – that is, the marginal value of working on s-risk reduction may be particularly high.

How can we avert s-risks?

Main article: How can we reduce s-risks? 

Narrow interventions

One way to avert s-risks is to try to directly shape pivotal new technologies by implementing precautionary measures. An example of a potentially transformative technology is advanced AI, which experts say might be developed later this century. Since intelligence is key to shaping the world, the emergence of AI systems with superhuman levels of intelligence would unleash new power – for better or for worse.

We could therefore work on safety mechanisms for AI systems that are aimed at preventing s-risks, rather than shooting for a best-case outcome. A promising precautionary measure is to try to implement surrogate goals in AI systems. (But it’s not clear how to best prevent s-risks from AI, so further research would be valuable.)

To address agential s-risks, we could research how negative-sum dynamics like extortion or escalating conflicts can be prevented. Such research could focus on theoretical foundations of game theory and decision theory or on finding the best ways to change the empirical circumstances so that negative-sum dynamics can be avoided.
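
To illustrate what a negative-sum dynamic looks like, consider a toy extortion game. The payoffs below are purely hypothetical and serve only to show how a carried-out threat can leave both parties worse off than the status quo; they are not taken from any of the research mentioned above:

\[
\begin{array}{l|cc}
 & \text{Target gives in} & \text{Target refuses} \\ \hline
\text{No threat made} & (0,\; 0) & (0,\; 0) \\
\text{Threat made, carried out if refused} & (+5,\; -5) & (-2,\; -10)
\end{array}
\]

Payoffs are listed as (extorter, target). Giving in merely transfers value (a zero-sum outcome), whereas a refused and executed threat destroys value for both sides (a negative-sum outcome). Research on bargaining, game theory, and decision theory aims, among other things, to make such outcomes less likely.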

Broad interventions

Another class of interventions focuses on broadly improving societal values, norms, and institutions. This would increase the probability that future generations will use their power responsibly, even if we cannot accurately predict how the world will change. Promoting anti-speciesism, for example, will likely lead to less animal suffering in the future.

Research and movement building

Which of all these interventions is most effective? Given our uncertainty about this, the most effective way to reduce s-risks might be to research how best to reduce them. More specifically, we should try to find out which s-risks are most probable, most tractable, or of the greatest magnitude – information that would help us reduce s-risks more effectively.

Another simple, but potentially very effective intervention is to raise awareness of s-risks. If more people care about reducing s-risks, then it will be much easier to implement precautionary measures in practice.

Last, I’d like to emphasize that we should proceed with caution and seek to cooperate with other value systems when reducing s-risks. For instance, trying to stop technological progress would just lead to zero-sum fights with those who want to reap the benefits of advanced technology.16 Working towards a future that everyone would approve of instead is likely more productive.

Acknowledgements

I am indebted to Max Daniel, Ruairi Donnelly, David Althaus, Stefan Torges and Adrian Rorheim for valuable comments and suggestions.

  1. I don’t, however, think that the far future is many orders of magnitude more important. See here and here for details.[]
  2. This isn’t to say that industrial warfare is necessarily worse than earlier forms of warfare. In his book The Better Angels of Our Nature, Steven Pinker argues that the number of military casualties (per capita) has decreased over time.[]
  3. One might argue that technological progress has overall reduced animal suffering because the expansion of human civilization reduced natural habitats and thereby reduced wild animal suffering. But this effect seems purely incidental, so it doesn’t say much about the ramifications of future technology.[]
  4. However, bad intentions and malevolent actors also pose serious risks.[]
  5. This is by no means a certainty, but most experts agree that digital sentience is at least possible in theory. Also, note that this raises philosophical issues which are beyond the scope of this text. See here for more details.[]
  6. Disregarding digital beings would be “discrimination based on the substrate”. Similar to antispeciesism, the idea of antisubstratism is that we should reject this as a form of arbitrary discrimination based on an ethically irrelevant characteristic (the substrate).[]
  7. Open Phil’s report on consciousness and moral patienthood provides an in-depth analysis of the vexing question of which beings are worthy of moral consideration. It also discusses related philosophical questions, such as what “consciousness” or “sentience” even mean, or how we can “detect” them.[]
  8. See also Brian Tomasik’s piece on Why I Don’t Focus on the Hedonistic Imperative.[]
  9. The idea that extremely large stakes can outweigh a tiny probability is discussed under the term “Pascal’s wager”, and most people consider Pascalian reasoning dubious. But people also often mistakenly reject an argument as “Pascalian” even though it’s not actually based on tiny probabilities – the so-called Pascal’s wager fallacy fallacy.[]
  10. Extinction or civilizational collapse is possible, but neither seems highly likely. The Global Priorities Project gives a figure of 19% in their existential risk report, though that estimate seems too high to me.[]
  11. Whether or not there will be a lot of interest in space colonisation is another question.[]
  12. Of course, advanced technology will make everything easier, so this does not prove anything unless combined with motivation (which I address elsewhere). See Giant cheesecake fallacy.[]
  13. This touches on the thorny issue of population ethics, which is beyond the scope of this text. Whether one views existential risks or s-risks as more urgent depends on tricky moral judgments regarding population ethics.[]
  14. It’s not necessarily many orders of magnitude larger, though. See here and here for details on this point.[]
  15. David Althaus and Lukas Gloor argue that many people shy away from imagining the possibility of bad outcomes for psychological reasons.[]
  16. At least we shouldn’t unilaterally try to stop it. It’s conceivable that slower technological progress could be part of a political compromise.[]