What does the stock market tell us about AI timelines?

Suppose that there is – as some rationalists and effective altruists claim (1, 2, 3) – a non-negligible chance (say, 10%) that superintelligent artificial general intelligence (AGI) will be developed in the next 10 years. I’ll refer to that hypothesis as H.

If H is true, you really want to buy stocks in AI companies, especially leading ones like Google (which owns DeepMind). A company that develops superintelligent AI would likely be able to make a ton of money off of it. Superintelligent AI could perform all economically relevant tasks better than humans, so on a conservative estimate the economic value created would be a large fraction of world GDP ($107.5 trillion in purchasing-power-parity terms, or $78.28 trillion nominal); accounting for future growth, it would be far larger still.

If only 5% of that economic value went to the company developing the AI, that would easily be enough to make it orders of magnitude more valuable than any company existing today.
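
As a rough sanity check on that claim, here is a minimal back-of-the-envelope sketch in Python. The GDP figure and the 5% capture share come from above; the earnings multiple and the market cap of today's largest company are illustrative assumptions, not data:

```python
# Back-of-the-envelope check of the claim above. Only the nominal world
# GDP figure and the 5% capture share come from the post; the earnings
# multiple and the "largest company today" market cap are assumptions.

world_gdp_nominal = 78.28e12  # nominal world GDP in USD (from the post)
capture_share = 0.05          # share of value captured by the AI developer
earnings_multiple = 20        # assumed price-to-earnings-style multiple
largest_cap_today = 0.8e12    # rough market cap of today's largest company (assumption)

annual_profit = capture_share * world_gdp_nominal  # ~$3.9 trillion per year
implied_value = annual_profit * earnings_multiple  # ~$78 trillion

print(f"Implied valuation: ${implied_value / 1e12:.0f}T, "
      f"~{implied_value / largest_cap_today:.0f}x today's largest company")
```

Even with these deliberately modest inputs, the implied valuation is around two orders of magnitude above any current market cap.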

But actual stock prices do not seem to reflect this. The stock price of Alphabet (the parent company of Google) has roughly doubled since the beginning of 2014 (when Google acquired DeepMind), but so has the NASDAQ-100 index of US tech companies, so nothing special seems to be going on. Around AlphaGo’s victory over Lee Sedol at Go – which some took to be evidence for H – Alphabet shares rose by roughly 4%, while the NASDAQ rose by 2%. Conditional on H, or on AlphaGo’s victory being evidence for H, we’d expect a far bigger reaction.
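
To be explicit about the comparison being made here: the relevant quantity is the market-adjusted (excess) return over the event window. A minimal sketch, assuming a beta of 1 against the index and using the rough figures above:

```python
# Simple market-adjusted ("abnormal") return over an event window.
# This assumes a beta of 1 against the index; the return figures are
# the rough numbers quoted above.

def abnormal_return(stock_return: float, benchmark_return: float) -> float:
    """Excess return of a stock over its benchmark for the same window."""
    return stock_return - benchmark_return

alphabet_return = 0.04  # ~4% rise in Alphabet around AlphaGo's victory
nasdaq_return = 0.02    # ~2% rise in the NASDAQ over the same window

print(f"Abnormal return: {abnormal_return(alphabet_return, nasdaq_return):+.1%}")
# -> Abnormal return: +2.0%
```

A 2% excess return is noise by the standards of the valuation arithmetic above, which is the point.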

To sum up, the argument is:

  • If superintelligent AI were developed soon, it would have major impacts on stock market prices.
  • Observed prices seem to show no outperformance of AI companies, or at least no outperformance of the implied magnitude.
  • Therefore, the market’s implied opinion is a confident rejection of H.

So, what are we to make of that?

A proponent of H could attack the argument on two (mutually exclusive) grounds:

  1. If superintelligent AI is developed, the profits may not be reaped by any specific AI company. For instance, maybe a government will take over at some point, or regulation will prevent the company that develops AI from capturing a significant share of the profits. So we shouldn’t expect outperformance even if H is true.
  2. AI companies’ stock prices do actually outperform as strongly as could be expected given H. That is, the stock market believes H and reacts accordingly.

I find both objections implausible. Objection 1 reduces the expected value of developing a superintelligent AI system somewhat, but not by orders of magnitude. This is because it’s hard to be confident either way: there are plausible scenarios in which the company that develops superintelligence gets super-rich, and plausible scenarios in which it doesn’t. In expectation, I’d still say that H would multiply the value of AI companies.
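
To make “somewhat, but not by orders of magnitude” concrete, here is a toy expected-value calculation; everything except P(H) = 10% is an illustrative assumption:

```python
# Toy expected-value calculation for Objection 1. Only P(H) = 10% is
# from the post; the capture probability and value multiples are
# illustrative assumptions.

p_agi = 0.10              # P(H): superintelligent AGI within 10 years
p_capture = 0.5           # assumed chance the developer keeps the profits
value_if_capture = 100.0  # assumed value multiple if profits are captured
value_otherwise = 1.0     # baseline multiple if profits are expropriated/diffused

ev_multiple = (p_agi * (p_capture * value_if_capture
                        + (1 - p_capture) * value_otherwise)
               + (1 - p_agi) * 1.0)
print(f"Expected value multiple: ~{ev_multiple:.1f}x")  # ~6x even at 50% capture odds
```

Even granting only even odds that the developer keeps the profits, H still implies a roughly sixfold expected multiple on the company’s value – easily large enough to show up in prices.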

Objection 2 is right to point out that the price of a single stock may not, strictly speaking, be considered conclusive evidence. However, the reason I didn’t bother to gather more data is that it seems fairly obvious that the market does not price in H. The impact of H, if it were market consensus, would be so enormous that the topic would be discussed in every stock market news broadcast.

If the argument is valid – that is, if market opinion strongly contradicts H – then we could respond to that in two ways:

  A. We argue that the market is inefficient when it comes to evaluating the future of AI.
  B. We accept that this observation is (strong) evidence against H.

Proponents of H often refer to civilisational inadequacy to explain why the general public fails to realize the (postulated) possibility of machine superintelligence in the near future. But financial markets are usually considered an example of a system that does not exhibit inadequacy, owing to the existence of strong, selfish incentives to get things right.

Of course, markets can still be inefficient at times. It’s often costly to short-sell stocks, which means that they can be overvalued without any possibility for informed market participants to profit. However, it’s fairly easy to make a bet on H: just buy AI companies. So I don’t see strong reasons to expect market inefficiency with respect to this particular question.

So I put my money on B. I don’t think that short timelines are plausible, and I don’t think AlphaGo is strong evidence of discontinuous progress. I think some kind of transition to advanced AI will happen eventually, but it will likely be gradual and distributed, taking decades or centuries (in subjective time).

5 comments

  1. The truest things you say are that the financial markets are not pricing in the possibility of superintelligence, and that they have not treated AlphaGo as a massive breakthrough.

    But superintelligence is not a market-friendly hypothesis anyway. People talk about it as meaning human extinction or human liberation or human enslavement. How could the markets process that, except perhaps by freezing up in panic again?

  2. Yeah, as Mitchell says, I think there’s an argument C that runs: AGI dev is overwhelmingly likely to lead to a fast takeoff, in which case everyone’s probably at similar utility regardless of Alphabet shares owned. Given the level of disagreement among experts re: slow vs fast takeoff, this also seems implausible.

    Also, hi again Tobias! 🙂 (We met briefly in Berkeley early this year. I’m now back to doing research with FRI, thinking about doing academic AI safety/strategy research in the longer run. What are you up to these days?)

    1. Hi Ashwin!

      > I think there’s an argument C that runs: AGI dev is overwhelmingly likely to lead to a fast takeoff, in which case everyone’s probably at similar utility regardless of Alphabet shares owned.

      I agree that this is a possible line of argument. But even if Alphabet shares are meaningless in a fast takeoff, I’d argue that the consequences of such a fast takeoff would be so tremendous that we’d see some kind of strong reaction if the market actually believed this hypothesis.

      > I’m now back to doing research with FRI, thinking about doing academic AI safety/strategy research in the longer run. What are you up to these days?

      Sounds great – happy to chat more about these topics via e-mail or Slack! I currently split my time between writing and working on my PhD.

  3. I think that in order to evaluate whether stock markets are inadequate at a particular thing, you first have to ask what procedure people generally use to form their predictions about that thing. If people are using old methodologies on new technologies, we can’t expect their predictions to bear out.

    Historically, new technologies are very difficult to predict, which leads to surprising volatility in the marketplace. To use one example, Bitcoin had nearly no value until just a few years ago. Personally, I think the argument that AI will become very influential in the future is *more* persuasive than a hypothetical argument in 2011 that Bitcoin would replace common currency.

    Most people who buy stocks aren’t aware of the arguments for hard takeoff, or AGI, or superintelligence. So why should we give them epistemic deference?

    You could argue that we don’t really know whether the average person on Wall Street has enough information to judge this. It could be that EAs are in a bubble, and aren’t aware of strong evidence that AI is unlikely in the short term. I find this highly unlikely. People who trade stocks usually aren’t interested in things that seem like weird tail possibilities that have never happened in history. That’s just not the type of thing people usually look into when they’re buying Google.

    Of course, perhaps I’m wrong and the people who trade stocks *really do* have good information to counter my argument. Normally I don’t try to beat the market, but in this case I would find it very surprising if traders had privileged arguments or information that I haven’t considered.

    As for why I don’t bet on Google: there are roughly two possibilities. Either AI will be diffuse, as you mentioned, or AI will experience something more akin to a hard takeoff. In the first case, the overall economic effects of AI will probably be distributed pretty evenly; Google might well gain faster than the average stock, but probably not by a huge factor. In the second case, it doesn’t seem like there’d be much time to cash in on the payoff anyway. I’d consider taking the bet, but since it seems unlikely that I’ll get a large payoff, even with a 10% probability of AI in 10 years, the reinforcement signal is weak.

  4. I think that treating the market as a good predictor is generally not the worst idea, but this is something unprecedented. How good are markets at predicting a black swan event? Historically not that great, IMHO. Big corrections always come once the event is relatively obvious. That’s because something unprecedented requires imagination. A lot of people in the stock market invest other people’s money, so they have to rely on facts (e.g. historical data) and forecast relatively conservatively. They don’t aim to maximise the upside opportunity, but try to capture upside while minimising downside risk.
    A recent example: it took a long time before it became obvious that electric vehicles would displace internal combustion engine vehicles. Only now are companies like Tesla, Nio and Xpeng getting higher valuations.
