A typology of s-risks

In a worst-case scenario, the future might contain astronomical amounts of suffering (so-called risks of astronomical future suffering, or s-risks for short). I’ve argued before that the reduction of s-risks is a plausible moral priority from the perspective of many value systems, particularly suffering-focused views.

In this post, I’d like to develop a typology of the different kinds of s-risks that might arise. I will group the space of possible scenarios into three categories: incidental s-risks, agential s-risks, and natural s-risks.[1]

(Disclaimer: Whenever I mention examples of possible s-risks, I do not claim that these scenarios are very likely, nor that the resulting s-risks are necessarily very severe. Also, depending on your normative views, an s-risk may take place in a world that is still net good overall.)

Incidental s-risks

Incidental s-risks arise when the most efficient way to achieve a certain goal creates a lot of suffering in the process. The agent or agents that cause the s-risk are either indifferent to that suffering, or they would prefer a suffering-free alternative in theory, but aren’t willing to bear the necessary costs in practice.

We can further divide incidental s-risks into subcategories based on the underlying motivation:

  • Economic productivity: Suffering might be instrumental in achieving high economic output. Animal suffering in factory farms is a case in point: it just so happens that the most economically efficient way to satisfy the demand for cheap meat involves a lot of suffering. This is not currently an s-risk because it’s not astronomical in scope, but it’s possible that future technology will enable similar structures on a much larger scale. For instance, the fact that evolution uses pain suggests that learning might be more efficient if negative reward signals are also used, and we might consider sufficiently advanced and complex reinforcement learners to be capable of suffering.
  • Information gain: Experiments on humans or other sentient creatures might be useful for scientific purposes (as in animal testing), while causing harm to those experimented on. Again, future technology may enable such practices on a much larger scale, for instance if it becomes possible to run vast numbers of simulations of artificial minds (or ems) capable of suffering.
  • Entertainment: Many humans enjoy forms of violent entertainment. There are countless historical examples (gladiator fights, public executions and torture, hunting, and much more). While nowadays such entertainment is often fictional (e.g. in video games or movies), some real-world instances still exist (e.g. torture and execution videos, illegal animal fights, hunting). It is conceivable that complex simulations will be used for entertainment purposes in the future, which could cause serious suffering if these simulations contain artificially sentient beings.

Agential s-risks

Agential s-risks involve agents that actively and intentionally want to cause harm.

Here are a few possible reasons why agents might do this:

  • Sadism: A minority of future agents might, like some humans, derive pleasure from inflicting pain on others. Such preferences will likely be rare, and such tendencies can plausibly be kept in check. However, it is conceivable that advanced technological capabilities will multiply the potential for harm caused by sadistic acts.
  • Emotional hatred: Agents might harbor strong feelings of hatred towards certain outgroups. For instance, humans often despise those who belong to the wrong religion, ethnic group, or political ideology. In extreme cases, this spirals into a desire to harm the other side as much as possible (e.g. in wars or through terrorism). Excessive criminal justice could be another example.
  • Strategic threats: In an escalating conflict, an agent might issue strategic threats to force the other party to give in, which could cause a lot of harm if those threats are carried out.

Natural s-risks

Natural s-risks are sources of (astronomical) suffering that occur naturally, i.e. without any (incidental or deliberate) involvement of powerful agents.[2]

The following are examples of natural s-risks:

  • Wild animal suffering might take place not just on Earth, but on many planets throughout the cosmos. That said, our best models of the universe suggest that Darwinian life is probably rare (cf. Fermi paradox, Rare Earth hypothesis), so this scenario isn’t particularly likely. (Humans spreading wild animal suffering would count as an incidental s-risk.)
  • It is conceivable, albeit highly speculative, that even low-level physical processes may contain sentience. More generally, it’s possible that further knowledge or moral reflection would lead us to think that “low-level” sentience is widespread in the universe[3], in which case a natural s-risk becomes more plausible.

Other dimensions

This typology distinguishes by motivation, but we can also distinguish s-risks along other dimensions:

  • Known and unknown s-risks: A “known” s-risk is a scenario that we can already conceive of at this point; this doesn’t mean that the scenario is certain (or even likely) to be a serious s-risk. But we can only imagine a limited range of scenarios, and s-risks may emerge that we haven’t thought of or perhaps can’t even comprehend. It is possible that unanticipated mechanisms will lead to large amounts of incidental suffering (unknown incidental s-risks), that future agents will have unanticipated reasons to deliberately cause harm (unknown agential s-risks), or that new insights will reveal that, despite appearances to the contrary, our universe already contains astronomical amounts of suffering (unknown natural s-risks).
  • s-risks by action and s-risks by omission: “By action” means that the s-risk is caused by our civilisation; “by omission” means that our civilisation fails to seize an opportunity to prevent an s-risk. In most cases, natural s-risks are s-risks by omission, while incidental and agential s-risks are s-risks by action.[4]
  • The kind of sentient beings affected: s-risks could involve human suffering, animal suffering, or potentially even the suffering of artificial minds.

Further research

Based on this typology, I recommend that further work consider the following questions:

  • Which types of s-risk are most important in terms of the amount of expected suffering?
  • Which types of s-risk are particularly tractable or neglected?
  • What are the most significant risk factors that make different types of s-risks more likely, or more serious in scale if they occur?
  • What are the most effective interventions for each respective type of s-risk? Are there interventions that address many different types of s-risk at the same time?

Footnotes

  1. Note that the same distinction can be applied to other risks, including x-risks.
  2. Arguably, it’s somewhat inaccurate to call this a “risk” if the suffering is already ongoing; there is no strong reason to expect that a natural source of astronomical suffering does not yet exist but will arise in the future. However, for simplicity, I will still call it an s-risk.
  3. Brian Tomasik argues that consciousness is neither a thing which exists “out there” nor an ontologically fundamental property of matter; it’s a definitional category into which we classify minds. In this view, the question of “Is this mind really conscious?” is analogous to “Is a rock that people use to eat on really a table?”.
  4. Incidental and agential s-risks can also be s-risks by omission, if they are caused by alien civilisations and our civilisation could have stopped them. HT Lukas Finnveden for this point.
