Background
Pascal’s wager is a famous argument for why one should believe in God. If God exists, then eternal life in heaven or hell is at stake, but if God doesn’t exist, one’s belief does not matter much – so one should wager on the former. (The validity of this argument has been discussed at length.)
More generally, whenever we consider two hypotheses H1 and H2 about the world, the stakes may be higher in one of the two cases – say, if H1 is true. This is a reason to act as if H1 is true, even if it is not the more likely hypothesis. For instance, the precautionary principle emphasises caution towards potentially harmful innovations (e.g. a new medicine) as long as we face substantial uncertainty.
In this post, I will consider wagers that are relevant to effective altruism – that is, hypotheses that would allow us to have a particularly large impact. I’m most interested in reducing future suffering, but many of these wagers also apply to other goals.
There has been some discussion of how seriously we should take such wagers, especially in cases where they rely on extremely implausible hypotheses. One possible solution is to impose a leverage penalty on one’s prior probability for hypotheses according to which we can have an extraordinary impact. The key rationale for this is that it is impossible for everyone to be in such a special position, so such claims should prima facie be penalised – and the penalty should be proportional to the claimed impact, thereby exactly “cancelling out” the wager.
However, the leverage penalty only concerns priors. It leaves room for wagers in cases where we have a robust posterior probability, obtained by observing a lot of evidence, which renders the prior less relevant.
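As a minimal sketch of this “cancelling out” (with made-up numbers that are purely illustrative, not taken from any specific proposal), consider how a leverage penalty and a subsequent evidential update interact in an expected value calculation:

```python
# Toy illustration of how a leverage penalty interacts with a wager in an
# expected value calculation. All numbers are made up for illustration.

def expected_impact(probability, impact):
    """Expected impact of acting on a hypothesis held with the given probability."""
    return probability * impact

base_prior = 0.01

# Without a penalty, a hypothesis claiming a million times the impact dominates
# the calculation even at the same low prior.
print(expected_impact(base_prior, 1))          # mundane hypothesis: 0.01
print(expected_impact(base_prior, 1_000_000))  # extraordinary hypothesis: 10000.0

# A leverage penalty scales the prior down in proportion to the claimed impact,
# so the two expected values coincide and the wager "cancels out".
def penalised_prior(prior, impact):
    return prior / impact

print(expected_impact(penalised_prior(base_prior, 1), 1))                  # 0.01
print(expected_impact(penalised_prior(base_prior, 1_000_000), 1_000_000))  # 0.01

# Observing a lot of evidence can still overcome the penalty: if the evidence is,
# say, 10,000 times more likely under the extraordinary hypothesis, its posterior
# (and hence the expected value of acting on it) rises accordingly.
def posterior(prior, likelihood_ratio):
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

p = penalised_prior(base_prior, 1_000_000)               # 1e-08
print(expected_impact(posterior(p, 10_000), 1_000_000))  # roughly 100
```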
Three types of wagers
In the following, I will consider three types of hypotheses that would (significantly) increase how much impact we can have:
- Wagers that the world is large: it contains many sentient beings.
- Wagers that the world is bad: it contains a lot of suffering.
- Wagers that the world is tractable: we can, in our current position, do a lot to reduce future suffering.
“Large world” wagers
We can have much more impact if the world is big, in the sense of containing a large number of (potential) future sentient beings:
- Arguably, we should act as if our universe is large in terms of sheer size, density of stars, planets and galaxies, and the available amount of energy. However, we already have fairly precise estimates of at least some of these (e.g. we know how large the reachable universe is).
- This is only relevant if large-scale space colonisation will happen.
- We could wager that it will be possible to colonise space at a significant fraction of the speed of light, as that means that more of the reachable universe is colonised. (If humanity expands at a fraction f of the speed of light, then everything within a sphere of radius f times the Hubble length is reachable, which corresponds to a fraction f^3 of the Hubble volume; see the short calculation after this list.)
- Regarding possible resolutions of the Fermi paradox, we could wager that the universe is empty of other civilisations, since humanity can then in expectation colonise much of it. However, this needs to be balanced against the anthropic update and its implications.
- An extension of this is to wager that we live in a large multiverse, though this is only relevant if we can (causally or acausally) influence other parts of the multiverse.
- This is only relevant if large-scale space colonisation will happen.
- There is a wager that consciousness is common even in minds that are not usually assumed to be conscious, such as invertebrates or small digital entities.
- However, on a relativistic view of consciousness, the question of which beings we care about is at least partially normative, and then using a moral parliament may be a better way to deal with moral uncertainty than expected value calculations.
- Finally, we could wager that humanity will reach stable technological maturity, as that would enable the creation of far more sentient beings compared to a less advanced civilisation.
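Here is a quick check of the f^3 scaling mentioned above – a toy calculation of my own, in which the Hubble length figure is only approximate:

```python
# Back-of-the-envelope check of the claim that expanding at a fraction f of the
# speed of light lets us reach a fraction f^3 of the Hubble volume: a sphere of
# radius f * R has f^3 times the volume of a sphere of radius R.

HUBBLE_LENGTH_GLY = 14.4  # Hubble length, roughly, in billions of light years

def reachable_fraction(f):
    """Fraction of the Hubble volume inside a sphere of radius f * Hubble length."""
    return f ** 3

for f in (0.1, 0.5, 0.9, 0.99):
    radius_gly = f * HUBBLE_LENGTH_GLY
    print(f"f = {f:4}: reachable radius ~ {radius_gly:5.1f} Gly, "
          f"fraction of Hubble volume ~ {reachable_fraction(f):.3f}")
```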
“Bad world” wagers
We can have more impact if the future contains a lot of suffering in expectation – that is, if s-risks are likely. Perhaps the future will be good, but if that is the case, then there is not much to do from a suffering-focused perspective. So it makes sense to focus on bad future scenarios:
- We could wager that governance standards will deteriorate. In particular, we could assume that:
- The future will entail serious conflict between powerful actors, resulting in threats and other agential s-risks.
- The prospects for peaceful bargaining and compromise between different values will be slim, making it less likely that the concerns of future suffering reducers will be taken into consideration. (For instance, a totalitarian regime is less likely to implement such compromise than a democracy.)
- We could wager that it will not be possible to find easy solutions to avoid threats (or other risks) – if easy solutions do exist, then there will not be much suffering in the future anyway.
- The relevant kinds of threats will not be illegal, or enforcement of such laws will be difficult or impossible.
- Rational actors do not automatically end up in a “no-blackmail equilibrium”.
- For surrogate goals, or indeed any anti-blackmail measure that would substantially reduce future disvalue, we could consider the following possible wagers:
- Surrogate goals will eliminate all disvalue where they work, so:
- we should focus our efforts on the kind of threat where surrogate goals do not work (e.g. involving human threatenees).
- we should not focus on threats at all and instead work to reduce incidental suffering, non-threat agential risks or suffering from unknown unknowns.
- we should just assume that surrogate goals don’t work (resulting in more suffering) – for some known (1, 2) or unknown reason.
- Even if surrogate goals work in principle, we could wager that the relevant agents will not implement them, or will implement them incorrectly. So we should focus on reducing that risk, e.g. by ensuring that advanced AI will reason correctly about surrogate goals, game theory, and decision theory.
- We could wager that surrogate goals involve serious tradeoffs with performance, bargaining power, or other things (resulting in them not being implemented, and hence more suffering).
- We could wager that values and culture will evolve in a way that results in future agents that are much more inclined to create suffering (whether it’s through sadism, trolling, or threats), even though that currently seems far-fetched.
- We could wager that some unknown sources of suffering are very large. (This applies to both incidental and agential suffering.)
Other value systems depart significantly from this reasoning: if the future is good, you could reduce x-risks or work to increase value further. But there are some analogous wagers:
- If you primarily care about ensuring that humanity has a long, flourishing future, and believe that survival will likely lead to such a future, then you might wager that extinction risk is high – i.e. that we live in a “time of perils”. (If extinction risk is close to zero, there is not that much to do from this perspective.)
- When working on AI alignment, you could wager that the problem is hard enough that it will not be solved by default (though also not so hard as to be entirely intractable).
“Tractable world” wagers
We can clearly have more impact if changing the long-term future is more tractable, that is, if we can take actions now that significantly reduce future suffering:
- Our impact is diluted if the future contains large numbers of agents subject to ongoing value and influence drift; crucially, this dilution grows with future population sizes. By contrast, we can have more impact if contemporary humans are in a special position to affect a lock-in of long-term outcomes. In particular:
- We could wager that civilisation will settle into a steady state relatively soon.
- We could wager that at least some of the things we can influence, such as values or institutions, are highly persistent.
- We could wager that there are strong (and predictable) path dependencies from now to the resulting steady state.
- We could wager that transformative AI will emerge in the near future, offering an exceptional lever to shape long-term outcomes.
- It’s easier to influence the future if it will be similar to the present in many ways.
- We could wager that the decisions of many other agents are correlated with ours, and a decision theory according to which we can affect all of these decisions is true. (Cf. The evidentialist’s wager.)
- It is easier to have an influence if future society will enable compromise and positive-sum trade (as opposed to e.g. conflicts where one side takes all).
- We have better chances of raising moral concerns in democracies with a functional civil society than in an authoritarian or totalitarian state (unless the leadership happens to be supportive).
- These wagers directly contradict the corresponding “bad world” wagers.
- We could wager that there are, or will be at some point in the future, clear opportunities to reduce suffering.
- It is currently unclear how to best reduce suffering, but it will (so the wager goes) become much clearer what to do later on.
- As a movement, we are able to (or will be able to) find exceptionally good levers.
- Relevant interventions are neglected – and, crucially, will stay neglected. Future people will, by default, not work on the right interventions to reduce s-risks.
- Effective anti-threat measures, e.g. surrogate goals, are or will become possible. (This is, again, at odds with the corresponding “bad world” wager.)
What to make of all this?
As this list shows, we could consider many different wagers. Some of them are compatible with each other, while others point in exactly opposite directions.
How seriously we take wagers is related to the question of whether extreme outcomes dominate expected value calculations. We tend to give more weight to wagers if we think that most future suffering is due to (very) unlikely worst-case scenarios.
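As a toy illustration of this point (the probabilities and suffering amounts below are made up purely for illustration), an unlikely worst-case scenario can account for most of the expected suffering:

```python
# Toy numbers showing how an unlikely worst-case scenario can dominate the
# expected amount of future suffering.

scenarios = [
    # (description, probability, suffering in arbitrary units)
    ("moderately bad, fairly likely", 0.2,   1),
    ("very bad, unlikely",            0.01,  50),
    ("worst case, very unlikely",     0.001, 10_000),
]

total = sum(p * s for _, p, s in scenarios)
for name, p, s in scenarios:
    print(f"{name:>30}: {p * s / total:5.1%} of expected suffering")

# The worst case accounts for most of the expected suffering (about 93% here),
# despite being 200 times less probable than the moderately bad scenario.
```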
All told, I think we should not take wagers at face value, but moderate updates seem reasonable and not at odds with common sense. It is important, though, to clearly separate wagers from actual probabilities, as the risk of bias (e.g. through the increased salience of certain ideas) is significant. To the extent we accept a wager, we should act as if the hypothesis is true, but we shouldn’t assume that it is.