Summary of Eric Drexler’s work on reframing AI safety
This post contains a bullet point summary of Reframing Superintelligence: Comprehensive AI Services as General Intelligence. (I wrote this in[…]
Many effective altruists believe that efforts to shape artificial general intelligence (AGI) – in particular, solving the alignment problem –[…]
Summary: I believe that advanced AI systems will likely be aligned with the goals of their human operators, at least[…]
Some rationalists and effective altruists have argued (1, 2, 3) that there is a non-negligible chance that artificial intelligence will[…]
A few weeks ago, I finished my paper on Adaptive Mechanism Design: Learning to Promote Cooperation. In this post, I’d[…]
Suppose that there is – as some rationalists and effective altruists claim (1, 2, 3) – a non-negligible chance (say, 10%)[…]
Introduction: The burgeoning field of AI safety has so far focused almost exclusively on alignment with human values. Various technical[…]
To steer the development of powerful AI in beneficial directions, we need an accurate understanding of how the transition to[…]
Efforts to shape advanced artificial intelligence (AI) may be among the most promising altruistic endeavours. If the transition to advanced[…]
Imagine a data set of images labeled “suffering” or “no suffering”. For instance, suppose the “suffering” category contains documentation of[…]