Discussion about this post

Hellbender:

Interesting article, but I don’t think it succeeds in refuting the doomerist position.

You start by noting some known limits on computation and (correctly) observe that AIs are bound by these limits. However, as far as we know, humans are also just special cases of Turing machines - we are equally bound by all of these limits! So sure, a superintelligence can no more produce a fully general solution to the halting problem than a human can. But that doesn’t mean the theoretical ceiling on a system’s intelligence isn’t significantly higher than a human’s. Even an AI “only” as smart as an unusually smart human, but able to copy itself at will and think an order of magnitude faster, would be extremely capable.
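To make the computability point concrete, here is a minimal Python sketch of the classic diagonalization argument; the names `halts` and `paradox` are hypothetical, purely for illustration. Any purported halting decider can be fed a program built to contradict it, and that holds whether the decider was written by a human or by a superintelligence.

```python
# Toy illustration of the halting-problem bound (hypothetical names,
# not from the article): no total function `halts` can exist.

def halts(program, program_input) -> bool:
    """Hypothetical oracle: returns True iff program(program_input) halts."""
    raise NotImplementedError  # no such total function can exist

def paradox(program):
    # Feed the program to itself and do the opposite of the prediction.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop -> halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox) says it
# doesn't, so `halts` cannot exist - a limit binding humans and AIs alike.
```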

The idea of the singularity is that at a certain point an AGI would be better at machine learning than the human teams that built it, and would therefore be able to build a superior version of itself, which would in turn be better at machine learning, and so on. This theory does not predict singularities in single-purpose AIs such as chess engines. A reinforcement-learning agent built to play chess knows nothing about reinforcement learning, so however good it gets at chess, it would never be expected to self-improve by designing its own successor.
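To make the contrast explicit, here is a toy sketch (entirely my own illustration, with an assumed growth function, not a claim about any real system) of the feedback loop the singularity argument relies on: capability at ML research feeds back into building the next generation, whereas a chess engine's skill at chess has no such loop back into engine design.

```python
# Toy model of recursive self-improvement as an iterated map.
# `capability` stands for skill at ML research; `gain` is an ASSUMED
# multiplier a system of that capability achieves on its successor.

def gain(capability: float) -> float:
    """Assumption for illustration: better researchers make bigger gains."""
    return 1.0 + 0.1 * capability

def self_improve(capability: float, generations: int) -> float:
    for _ in range(generations):
        capability *= gain(capability)  # each generation designs the next
    return capability

# A chess engine has no analogue of this loop: getting better at chess
# does not feed back into its ability to design a better chess engine.
print(self_improve(capability=1.0, generations=10))
```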
