Innovations in science and technology often bring unprecedented benefits but can also pose serious risks, as the example of nuclear weapons shows. Similarly, new technologies could, if used irresponsibly, lead to vast amounts of suffering in the future – so-called suffering risks, or s-risks.
Worst-case AI safety
The emergence of powerful artificial intelligence (AI) systems may be among the most consequential technological developments of the future. Worst-case AI safety is the subfield of AI safety that focuses on reducing the s-risks posed by advanced AI technology.
Texts on cause prioritization
- Should altruists focus on artificial intelligence?
- Strategic implications of AI scenarios
- Arguments for and against moral advocacy
- Using surrogate goals to deflect threats
- Factors of extortion scenarios
- Heuristics to assess the feasibility of threats