A framework for thinking about AI timescales

To steer the development of powerful AI in beneficial directions, we need an accurate understanding of how the transition to a world with powerful AI systems will unfold. A key question is how long such a transition (or “takeoff”) will take. This has been discussed at length, for instance in the AI foom debate.

In this post, I will attempt to clarify more precisely what we mean when we talk of “soft” or “hard” takeoffs.

(Disclaimer: Probably most of the following ideas have already been mentioned somewhere in some form, so my claimed contribution is just to collect them in one place.)

Defining useful reference points in time

The obvious question is: what reference points do we use to define the beginning and the end of the transition to powerful AI? Ideally, the reference points should be applicable to a wide range of plausible AI scenarios rather than making tacit assumptions about what powerful AI will look like.

A commonly used reference point is the attainment of “human-level” general intelligence (also called AGI, artificial general intelligence), which is defined as the ability to successfully perform any intellectual task that a human is capable of. The reference point for the end of the transition is the attainment of superintelligence – being vastly superior to humans at any intellectual task – and the “decisive strategic advantage” (DSA) that ensues.1 The question, then, is how long it takes to get from human-level intelligence to superintelligence.

I find this definition problematic. The framing suggests that there will be a point in time when machine intelligence can meaningfully be called “human-level”. But I expect artificial intelligence to differ radically from human intelligence in many ways. In particular, the distribution of strengths and weaknesses over different domains or different types of reasoning is and will likely remain different2 – just as machines are currently superhuman at chess and Go, but tend to lack “common sense”. AI systems may also diverge from biological minds in terms of speed, communication bandwidth, reliability, the possibility of creating arbitrary numbers of copies, and entanglement with existing systems.

Unless we have reason to expect a much higher degree of convergence between human and artificial intelligence in the future, this implies that by the time AI systems are at least on par with humans at every intellectual task, they will vastly surpass humans in most domains (having just fixed their worst weakness). So, in this view, “human-level AI” marks the end of the transition to powerful AI rather than its beginning.

As an alternative, I suggest that we consider the fraction of global economic activity that can be attributed to (autonomous) AI systems.3 Now, we can use reference points of the form “AI systems contribute X% of the global economy”. (We could also look at the fraction of resources that’s controlled by AI, but I think this is sufficiently similar to collapse both into a single dimension. There’s always a tradeoff between precision and simplicity in how we think about AI scenarios.)

A useful definition for the duration of the transition to powerful AI could be how long it takes to go from, say, 10% of the economy to 90%. If we wish to measure the acceleration more directly, we could also ask how much less time it takes to get from 50% to 90% than from 10% to 50%. (The point in time where AI systems contribute 50% to the economy could even be used as a definition of “human-level” AI or AGI, though this usage is nonstandard.)
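To make this concrete, here is a minimal sketch (in Python) of how one could compute these quantities from a time series of the AI share of the economy. The trajectory below is purely illustrative – a made-up logistic curve, not a forecast.

```python
import numpy as np

# Hypothetical, purely illustrative trajectory of the AI share of the global
# economy (fraction of economic activity attributable to autonomous AI systems),
# sampled once per year. The logistic curve is a stand-in, not a forecast.
years = np.arange(2030, 2071)
ai_share = 1 / (1 + np.exp(-0.25 * (years - 2050)))

def first_year_reaching(threshold):
    """Return the first year in which the AI share reaches the given threshold."""
    return years[np.argmax(ai_share >= threshold)]

t10 = first_year_reaching(0.10)
t50 = first_year_reaching(0.50)
t90 = first_year_reaching(0.90)

# Duration of the transition: physical time from a 10% share to a 90% share.
print("10% -> 90% takes", t90 - t10, "years of physical time")

# Acceleration: how much faster the second half (50% -> 90%) is than the
# first half (10% -> 50%).
print("10% -> 50%:", t50 - t10, "years; 50% -> 90%:", t90 - t50, "years")
```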

I think this definition broadly captures what we intuitively mean when we talk about “powerful”, “advanced”, or “transformative” AI: we mean AI that is capable enough to displace4 humans at a large range of economically vital tasks.5

A key advantage of this definition is that it makes sense in many possible AI scenarios – regardless of whether we imagine powerful AI as a localized, discrete entity or as a distributed process with gradually increasing impacts.

Different kinds of time

By default, we tend to think about “how long it takes” in terms of physical time, that is, what a wall clock measures. But perhaps physical time is not the most relevant or useful metric in this context. It is conceivable that the world generally moves, say, 20 times faster when the transition to powerful AI happens – e.g. because of whole brain emulation or strong forms of biological enhancement. In this case, the transition takes much less physical time, but the quantity of interest is some notion of “how much stuff is happening during the transition”, not the number of revolutions on grandma’s wall clock.

A natural alternative is economic time6, which adjusts for the overall rate of economic progress and innovation. Currently, the global economy doubles every ~20 years, so a year of economic time corresponds to growth of ~3.5%. The question, then, is how many economic doublings will happen during the transition to powerful AI. Saying that it will take 40 years of economic time would mean that the global economy quadruples during the transition (which might take much less than 40 years of physical time).7
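As an illustration, here is a minimal sketch (in Python, with assumed numbers) of converting between physical and economic time under this definition, taking ~3.5% annual growth as the baseline.

```python
import math

BASELINE_GROWTH = 0.035  # ~3.5% per year, i.e. one economic doubling every ~20 years

def economic_time(gdp_start, gdp_end):
    """Economic time elapsed between two GDP levels, measured in baseline years."""
    return math.log(gdp_end / gdp_start) / math.log(1 + BASELINE_GROWTH)

# If the global economy quadruples during the transition, that is two doublings,
# i.e. roughly 40 years of economic time -- however much physical time it took.
print(economic_time(1.0, 4.0))        # ~40

# Conversely, 5 years of physical time at 30% annual growth already amount to
# roughly 38 years of economic time.
print(economic_time(1.0, 1.30 ** 5))  # ~38
```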

Another alternative – which I will call political time – is to adjust for the rate of social change. So, saying that the transition takes 10 years of political time would mean that there will be ten times as much change in the relative power of individuals, institutions, and societal values compared to what happens in an average year these days.8

(It is possible that the economic growth rate is a good approximation of the rate of political change, rendering these notions of time equivalent. But this isn’t obvious to me – in particular, the rate of political change might lag if there’s a disruptive acceleration in economic growth. Conversely, economic collapse may cause a lot of political change.)

Suggested terminology

Given this, we can now talk about slow vs. fast takeoffs in physical, economic, or political time. The takeoff may be slow along some axes, but fast along others. Specifically, I find it plausible that a takeoff would be quite fast in physical time, but much slower in terms of economic or political time.

We can use a similar terminology for AI timelines by asking how much physical, economic, or political time will pass…

  • … until powerful AI is developed, in the sense of AI systems contributing at least 50% of the total economy?
  • … until the evolution of the relative power of different agents or value systems reaches a steady state – which can be considered the end of history –  assuming that this happens at all.9 Such a steady state could be a) a singleton with certain values, b) extinction, or c) a multipolar outcome with a lasting distribution of values and power.

These two questions are equivalent for those who assume that the transition to powerful AI will likely result in a steady state (a singleton with a decisive strategic advantage). But this is an implicit assumption, and in my opinion it is also quite possible that there would be centuries (or more) – especially in terms of economic or political time – between the advent of powerful AI and the formation of a steady state.

Acknowledgments

I am indebted to Brian Tomasik, Lukas Gloor, Max Daniel and Magnus Vinding for valuable comments on the first draft of this text.

Footnotes

  1. However, superintelligence is not necessary to achieve a DSA.
  2. In addition to calling the meaning of “human-level” into question, this can also be an argument against views that expect discontinuous progress at a certain point in time (“intelligence explosion”). Paul Christiano points out that we should expect continuous progress if AI engineers face tradeoffs along many different axes of AI systems (rather than improving along a single dimension that unlocks vast, discontinuous gains).
  3. I’m bracketing a few complications here for simplicity. I think we shouldn’t count “ordinary” automation or software, i.e. what matters is that AI systems make autonomous decisions in some relevant sense. It may also be hard to disentangle the economic contributions of individual actors in complex, intertwined systems; but I think humans can still intuitively do this reasonably well, so the concept seems useful.
  4. It is also possible that AI makes up a large share of the total economy by mostly performing new tasks, that is, tasks that humans cannot perform (profitably).
  5. It is conceivable that AI is capable of doing a wide range of tasks but is not deployed for some reason. In such a scenario, we could instead consider what fraction of economic activity AI could take over if it were deployed.
  6. Inspired by this comment on LessWrong.
  7. Clearly, the concept breaks down if economic growth is zero or negative. So this assumes that there will be positive economic growth (bracketing temporary blips).
  8. This actually depends on how you model political change: if it’s a random walk, then the total change over a timespan only grows with the square root of time; if there are persistent trends, it might grow linearly. (A small simulation sketch of this difference follows after these footnotes.)
  9. It is also conceivable that there’s a cycle, but that seems less plausible and I’ll ignore this possibility for simplicity.
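As mentioned in footnote 8, here is a minimal simulation sketch (in Python, with illustrative parameters only) of how the model of political change affects how total change scales with time: under a pure random walk the typical change grows with the square root of the timespan, while a persistent trend grows roughly linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
steps = rng.choice([-1.0, 1.0], size=(10_000, 400))  # many independent random walks

# Pure random walk: typical cumulative change after t steps grows like sqrt(t),
# so quadrupling the timespan only doubles the typical change.
positions = np.cumsum(steps, axis=1)
for t in (100, 400):
    print("random walk, t =", t, "mean |change| ~", np.mean(np.abs(positions[:, t - 1])))

# Persistent trend (drift): total change grows roughly linearly in t instead.
drift_positions = np.cumsum(steps + 0.5, axis=1)
for t in (100, 400):
    print("with trend,  t =", t, "mean |change| ~", np.mean(np.abs(drift_positions[:, t - 1])))
```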

One comment

  1. I think using fractions of the economy is a good idea for making things more precise, but it’s not so clear whether this is really the most relevant measure. In particular, saying that the AI takeoff “has happened” if 90% of the economy is replaced by AIs suggests that more than 10% of human activities in the economy require something that could be called human-level general intelligence. But that need not be the case. In fact, I find it plausible that 90% of human activity in today’s economy can be replaced without attaining anything resembling human-level general intelligence. E.g., given progress on autonomous cars, it seems that transportation (maybe a few percent of GDP) can mostly be replaced with AI systems without general AI. Same for many jobs in factories, grocery clerks, etc. In other words, AI can be transformative without presenting an existential risk. I suspect that some people would respond by saying that self-driving cars don’t really count as autonomous AI systems. https://en.wikipedia.org/wiki/AI_effect

    >It is possible that the economic growth rate is a good approximation of the rate of political change, rendering these notions of time equivalent.

    Prima facie, this seems implausible to me. It seems that the Western world has been unusually stable (in terms of physical time) over the past 50 years, despite having very high growth rates. As you say, “economic collapse may cause a lot of political change”. In general, growth and political stability probably reinforce each other.
