Strong AI and the Dunning-Kruger effect

I’ve decided to take a leap here and discuss a little bit of human psychology, specifically what is now known as the Dunning-Kruger effect. I rarely understand psychological terminology, possibly because, more often than not, I compare it to either computational complexity or a non-optimal algorithm rather than acknowledging that the human condition may never be reached by machines.

However, while on one hand this tells me that strong-AI might in fact never work out for us (thinking about it, I come to realize that it probably wouldn’t be such a good thing), on the other hand it shows how much weak-AI may come to solve many real-world problems. Following my gut, I’ll go ahead and say that many or even most of these real-world problems will belong to fields other than computer science; for the purposes of this article, that other field is psychology.

As Justin Kruger and David Dunning put it in the Journal of Personality and Social Psychology (volume 77), the effect under consideration is “a cognitive bias, in which people reach erroneous conclusions and make unfortunate choices but their incompetence robs them of the metacognitive ability to realize it”.

I have to admit that, off the top of my head, I don’t really know what the above definition means. However, putting it into computer science terms, I find this to be a good argument against strong-AI. Even if a machine appears to be artificially intelligent, in the strong sense that is, it seems impossible for it to develop a cognitive bias.

The reason I say this is that a computer, even if it functions on an intelligent basis, will always need a set of instructions to work with. Whether this set of instructions is created by another machine, or by the computer itself, is of little or no importance; the conclusion is that it will still function based on a set of instructions. Therefore, it will be incapable of developing a cognitive bias against another computer or a human.

Also, even if it does develop a cognitive bias, every action the computer undertakes is stored in its database, so when queried (either by itself or by a human), it will, in a sense, “realize” that this cognitive bias has taken place. Thus, an artificially intelligent computer would not lack the metacognitive ability to realize that a bias has occurred.

Since computers lack cognitive functions (in the human sense) altogether, in an artificially intelligent environment a computer would be querying itself to simulate several human cognitive functions.
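To make the self-querying idea a bit more concrete, here is a minimal, purely hypothetical sketch in Python; all of the names and thresholds are my own illustrative assumptions rather than anything from an existing system. The agent stores every decision together with the confidence it claimed, and a later self-audit query compares claimed confidence with actual outcomes. A persistent gap between the two is precisely the kind of overconfidence that, unlike the human subjects in Kruger and Dunning’s study, the machine could notice simply by querying its own records.

```python
# Hypothetical sketch: an agent that logs every decision and later
# "audits" itself by querying that log for systematic overconfidence.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class DecisionRecord:
    description: str   # what the agent decided
    confidence: float  # how sure it claimed to be (0.0 to 1.0)
    correct: bool      # what actually happened


@dataclass
class SelfAuditingAgent:
    log: List[DecisionRecord] = field(default_factory=list)

    def decide(self, description: str, confidence: float, correct: bool) -> None:
        # Every action is stored, as the argument above assumes it would be.
        self.log.append(DecisionRecord(description, confidence, correct))

    def audit(self, threshold: float = 0.15) -> str:
        # Query the log: compare average stated confidence with actual accuracy.
        if not self.log:
            return "no decisions recorded"
        mean_confidence = sum(r.confidence for r in self.log) / len(self.log)
        accuracy = sum(r.correct for r in self.log) / len(self.log)
        gap = mean_confidence - accuracy
        if gap > threshold:
            return f"overconfidence bias detected (gap = {gap:.2f})"
        return f"no significant bias (gap = {gap:.2f})"


if __name__ == "__main__":
    agent = SelfAuditingAgent()
    agent.decide("classify image A", confidence=0.95, correct=False)
    agent.decide("classify image B", confidence=0.90, correct=True)
    agent.decide("classify image C", confidence=0.92, correct=False)
    print(agent.audit())  # prints "overconfidence bias detected (gap = 0.59)"
```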

In my mind, what comes out of this is that even if we seemingly develop an artificially intelligent machine, it will merely be a simulation of how humans think. It might even pass the Turing test, but it will never be self-aware, and that captures the difference between the human mind and electronic circuit boards. Even in the unlikely case of biological computers, the biological matter acts simply as a messenger between the different neural networks. As of today, however, reading the signals produced by cell activity is not yet possible due to technological limitations.

George C. Violaris - 27/04/2010