Interesting article, but I don’t think it succeeds in refuting the doomerist position.
You start by noting some known limits on computation and (correctly) observe that AIs are bound by these limits. However, as far as we know, humans are also just special cases of Turing machines, so we are equally bound by all of these limits! So sure, a superintelligence can no more produce a fully general solution to the halting problem than a human can. But that doesn’t mean the theoretical bound on the intelligence of a system isn’t significantly higher than a human’s. Even an AI “only” as smart as an unusually smart human, but able to copy itself at will and think an order of magnitude faster, would be extremely capable.
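To make the halting-problem point concrete, here is a minimal sketch of the standard diagonalization argument; the function names are my own and the code is purely illustrative:

```python
# Assume, hypothetically, a total function halts(program, data) that always
# correctly reports whether program(data) halts.

def halts(program, data):
    """Hypothetical halting oracle (placeholder: no such function can exist)."""
    raise NotImplementedError("stand-in for an oracle that cannot be implemented")

def diagonal(program):
    """Do the opposite of whatever the oracle predicts about program run on itself."""
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    return            # halt immediately if the oracle says "loops"

# Asking whether diagonal(diagonal) halts contradicts the oracle either way,
# so no program -- human-written, AI-written, or superintelligent -- can
# implement halts in full generality.
```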
The idea of the singularity is that at a certain point, an AGI would be better at machine learning than the human teams that built it, and thus better placed than they are to build a superior version of itself, which would in turn be better at machine learning, and so on. This theory does not predict singularities in single-purpose AIs such as chess engines. A reinforcement-learning algorithm built to play chess does not know anything about reinforcement learning, so at no point, as it improves at chess, would it be expected to self-improve by designing its own successor.
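To illustrate the distinction, here is a toy sketch (entirely my own, with hypothetical names) of why a chess engine's training loop, however long it runs, never touches its own learning algorithm:

```python
def train_chess_policy(policy_params, num_games=1000):
    """Stand-in for reinforcement learning from self-play: the loop adjusts
    the policy's parameters, while the update rule itself is fixed code that
    nothing in the loop can inspect or rewrite."""
    for _ in range(num_games):
        policy_params = [p + 0.001 for p in policy_params]  # placeholder update
    return policy_params

def recursive_self_improvement(learner_source):
    """What the singularity argument actually posits: a system good enough at
    ML research to redesign its own learner into a better one. No system
    built to date, chess engines included, has this capability."""
    raise NotImplementedError("hypothetical capability, not an existing one")

# train_chess_policy gets better at chess; it never invokes anything like
# recursive_self_improvement, so its lack of bootstrapping tells us nothing
# about whether an AGI could bootstrap.
```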
We actually don't know whether humans are Turing Machines. I posted about it here:
https://spearoflugh.substack.com/p/mathematical-necessity-nature-and
In some sense it is the absolute bed of Procrustes: what you can scientifically describe is limited by Turing Machines. Indeed, you need to be able to compute in order to make forecasts and test them. But there is no way to ensure that the theory is *complete* with respect to reality. It just has to be sound.
We don't even know if the world follows a set of fixed rules like chess. All we have are approximations.
I also discussed another issue related to doomerism: intelligence alone is far from enough; you also have to be able to act on the world. This is what I call the mystery of incarnation:
https://spearoflugh.substack.com/p/the-mystery-of-ai-incarnation
So before moving on to these new points, do you at least accept:
- All theoretical CS limits on AI apply to human intelligence, unless humans are not Turing Machines
- The theory of the singularity, which predicts recursive self-improvement of AGI, does not predict recursive self-improvement of chess-playing systems, and thus the lack of recursive self-improvement in chess-playing systems is not evidence against this theory
I do agree. I suspect (but I do not know) that humans are not Turing Machines.
Chess is just an experiment that has the advantage of being real. Self-improvement has its place here too: an AGI model could self-improve on any subject, so it is reasonable to think that it could appear in chess too. If a very good artificial intelligence tries to learn chess, and this intelligence has self-improvement capacities, then it should manifest itself in chess too, and we see nothing. But yes, absence of proof is not proof of absence. It is a clue, nothing more.
"If a very good artificial intelligence tries to learn chess and this intelligence has self improvement capacities then it should manifest itself in chess too"
If by "self-improvement capacities" you mean "the AI is at least as good at AI research as an AI scientist and can improve its own architecture" then yes. But we have never built an AI with this property. Chess engines are not AGI.
My point is not "absence of proof is not proof of absence." My point is that the singularity theory does *not* predict recursive self-improvement in the type of models that humans have built to date, so the lack of recursive self-improvement in such models should not make one more skeptical of the singularity theory.