Okay, so there are problems in knowledge engineering, and in AGI in particular (to recall: AGI is a machine or program that can demonstrate intelligence at the complexity level of humans). More generally, AI fails, sometimes spectacularly, in every domain of sufficiently complex structure. A well-known example is the board game Go, for which nobody has yet managed to design a computer opponent that can beat players above novice level.
Yet humans manage many of these tasks seemingly without any problems. One might be tempted to think that the human brain is the ideal “thinking machine”. In reality, it has a staggering number of bugs which produce incorrect actions or results in a variety of situations.
Simple statistical questions are easy for humans. It is not difficult for most people to estimate that the chances of rolling a 6 are significantly higher than those of rolling a 6 three times in a row. You don’t even need formal knowledge of probability theory to know that.
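To put exact numbers on that intuition, here’s a tiny sketch (assuming a fair six-sided die, so independent rolls multiply):

```python
from fractions import Fraction

# Probability of rolling a 6 once, vs. three times in a row
p_once = Fraction(1, 6)
p_three = p_once ** 3  # independent rolls multiply

print(p_once)   # 1/6
print(p_three)  # 1/216
```

One chance in six versus one in 216 — the intuition is right, and off by a factor of 36.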
It gets more complicated when you introduce conditional probabilities. Observe.
A once popular game show works this way: you are asked to choose one of three doors, behind one of which there is treasure beyond measure (sorry for that… I promise I won’t do it again in this article). After you have chosen, the show master opens one of the two other doors (making sure that it isn’t the one with the riches behind it). Now you are allowed to change your choice to the other door that is still closed, if you want to. What’s better, sticking with the old choice or changing your mind?
You will probably be tempted to say that it doesn’t make a difference, and you would be wrong. In fact, if you stick with your first choice, your chance of getting rich (during the show, to be precise) is one in three. If you change your mind, it’s two in three. The reason: your first pick is right one time in three, and switching wins exactly when that first pick was wrong. In case you don’t believe me, get your hands on a recent introductory book on probability theory. I’m fairly sure they all have this example.
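If you don’t believe the books either, you can simulate the show yourself. A minimal sketch (trial count chosen arbitrarily; the host always opens a non-prize door other than your pick):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the game show: 3 doors, one prize, host opens a losing door."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # Host opens a door that is neither your choice nor the prize
        opened = random.choice([d for d in range(3) if d != choice and d != prize])
        if switch:
            # Switch to the one remaining closed door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

Run it a few times: sticking hovers around 0.33, switching around 0.67.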
Here’s a great trick to earn money with probability: at any party with more than 23 guests (under normal circumstances), bet that at least two guests share a birthday (ignoring the year, of course). You will win more often than not (50.7% of the time with exactly 23 guests; at a party with 30, it’s about 70%).
It gets even worse when Bayes’ theorem (which is about dependencies between various conditional probabilities) gets involved. A popular example goes like this. Suppose there is a medical test designed to diagnose a cancer which occurs in 0.5% of the population. The test detects the cancer correctly in 99% of all cases, but it also incorrectly gives a positive result for 2% of healthy people. Now suppose someone gets tested and the test comes out positive. What’s the probability that the person has cancer?
Okay, let’s think about this. 99% is a pretty high number, right? And 2% is a pretty low one? So the test should be fairly accurate, i.e. a positive result should mean our subject almost certainly has cancer. Or so you might think… the correct answer is about 20%. Because the cancer is so rare, the few false positives from the large healthy population outnumber the true positives from the small sick one. Not so good after all, huh?
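The arithmetic behind that 20% is just Bayes’ theorem applied to the numbers from the example above:

```python
# Bayes' theorem with the numbers from the text
p_cancer = 0.005             # prevalence: 0.5% of the population
p_pos_given_cancer = 0.99    # true positive rate
p_pos_given_healthy = 0.02   # false positive rate

# Total probability of a positive test
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # 0.199
```

Out of every 100,000 people, roughly 495 sick people test positive but so do about 1,990 healthy ones, which is why the posterior lands near 20%.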
If you want to see it for yourself, there’s a more in-depth page with an intuitive explanation of Bayesian Reasoning that has the same example with slightly different numbers and a detailed solution.
Suppose I give you data and ask you whether it supports a given hypothesis. Chances are that you’ll tell me either that the data supports the hypothesis or that it supports the opposite. It’s probably a lot less likely that you’ll tell me the data says nothing, i.e. that it’s random. That’s because humans are bad at recognizing randomness. For example, of the three plots of “random” data in “Warning Signs in Experimental Design and Interpretation” (section D7), which one is truly random? I’ll give you a hint: it’s not the right one.
Although the rightmost graph looks random (because there are no clusters of dots), it actually isn’t. Randomness doesn’t mean the absence of clusters; in truly random data, clusters occur naturally. The truly random graph is the one on the left.
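You can see clusters emerge for yourself. A small sketch (the 10×10 grid and the 100 points are arbitrary choices): drop points uniformly at random into a grid and count how uneven the result is.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the run is reproducible

# Drop 100 points uniformly at random into a 10x10 grid of cells
cells = Counter((random.randrange(10), random.randrange(10))
                for _ in range(100))

empty = 100 - len(cells)                          # cells with no point at all
crowded = sum(1 for c in cells.values() if c >= 3)  # cells with 3+ points
print(empty, crowded)
```

Even though the average is exactly one point per cell, you’ll typically find dozens of empty cells and several crowded ones — gaps and clusters are what uniform randomness actually looks like.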
It gets even worse if you don’t have an overview of the data in question. I regularly take the bus downtown, and I keep catching myself thinking “these buses are always early when I’m at the bus stop right on time”. This is, of course, false (the buses are demonstrably not always early). What gives? I only notice the cases in which my hypothesis is confirmed, and overlook the others because they are not noteworthy enough.
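A toy model of that selective noticing (the 50% early rate and sample size are made-up numbers, purely for illustration): if only the annoying cases get remembered, the remembered sample is 100% annoying.

```python
import random

random.seed(0)
# Hypothetical model: each bus is early with probability 0.5
arrivals = [random.random() < 0.5 for _ in range(1000)]  # True = early

true_rate = sum(arrivals) / len(arrivals)
# The rider only remembers the noteworthy (early) buses; the rest are forgotten
remembered = [a for a in arrivals if a]
remembered_rate = sum(remembered) / len(remembered)

print(true_rate)        # close to 0.5
print(remembered_rate)  # 1.0 -- "these buses are ALWAYS early"
```

The data says "half the time"; the memory says "always". That gap is the bias.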
This kind of faulty thinking is known as confirmation bias. There’s a disconfirmation bias too, which means doubting hypotheses by default, even ones that are true most of the time, because you’re too focused on the times when they fail.
It gets even worse. Let’s get to central issues in living. Happiness is an interesting problem there. People at Harvard have found that “when we try to predict what will make us happy, we’re often wrong”. I’m ready to bet that this is true for many other central areas of life. Another particularly sad example of the same phenomenon is those people who keep sending you “funny” e-mails. They actually think you find those mails funny!
Here’s another goof in the brain, the one that motivates meditation: getting worked up over something that upsets you doesn’t help at all, even if it’s the obvious thing to do. Meditation makes it possible for the adept practitioner to let emotions like anger pass, so that they are left with the pure facts and can decide what to do with them. Emotions, by the way, are the most defining difference between real intelligence and artificial intelligence — at least that’s what I believe. They motivate us and keep us running, but they are also hindrances in many situations, more so for some people than for others.
Another popular myth is that the conscious mind (the part of your mind you think with) does anything more than thinking. Utterly wrong; everything you actually do is controlled by parts of your mind you are not aware of. It can be measured, for example, that your brain has decided to move your arm before you know that you are going to move it (though I’d really like to know how that experiment is set up… “Now! Uh, wait, that was a dud… now! Or now?”). This extends to many other areas. Telling yourself you are not afraid of the big black spider right next to you doesn’t usually make a difference. Neither does telling yourself to remember the answer to a test question that you are sure you knew before the test began. The list goes on and on.
Some people have argued that consciousness developed in humans less than two thousand years ago (I’m afraid I forgot the actual numbers). In other words, humans functioned without it before. Hard to imagine, though, isn’t it?
Let’s face it: we’re actually incompetent at living (and many other things), at least by default. Error correction is possible, but it takes serious practice and conscious intervention.
Now why did people want to clone human intelligence again?