Part 9: Intuition
A few days ago, I got a chance to listen to a talk by Daniel Kahneman. He talked about his Nobel prize-winning work on the curiosities of intuition. He started with some background on what cognitive psychologists refer to as System One (intuition) and System Two (reasoning). With a few thought experiments, it is easy to see that there is a clear distinction between the two. System One responses are the ones our brain generates automatically, whether we want them or not. System Two responses require "mental effort" (whatever that is). For example, when shown a picture of a human face, we immediately recognise whether the facial expression is one of anger or happiness. It takes no effort to do so. In fact, we cannot avoid making the observation. Let's attribute this mental response to System One - intuition. On the other hand, given a quadratic equation, we need to exert considerable effort to solve it. In other words, we need to engage System Two - reasoning. Note that computers have far less trouble solving quadratic equations than they do reading facial expressions.
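To make that last point concrete, here is a quadratic solver - a few lines of code for a machine, but a System Two workout for us. This is my own illustration, not something from the talk:

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of a*x**2 + b*x + c = 0.
    Effortless for a computer; effortful System Two work for a human."""
    d = cmath.sqrt(b * b - 4 * a * c)  # discriminant, complex-safe
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -3, 2))  # ((2+0j), (1+0j)) -> roots 2 and 1
```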
Kahneman gave the following rule of thumb. System One operations are the ones we can perform while making a left turn into traffic. System Two operations are serial - we can perform only one of them at a time, and they require attention.
With these two definitions in mind, it is natural to realise that most of our day-to-day brain activity is handled by System One. Very few decisions require the use of System Two. System Two acts primarily as a correction to System One.
The rest of the talk was devoted to a series of experiments that demonstrated the marvels and flaws of intuition.
Long-term predictions
We use System One to make predictions all the time. When walking, the brain subconsciously predicts the effect of every step and makes corrections to keep the body in balance. These are very short-term predictions with immediate feedback. If I'm carrying something heavy in my right hand, my balance is shifted, and I find it hard to walk straight for the first few seconds, but System One quickly adjusts and learns how to keep me from falling. We still do not have robots that can walk reliably, although Honda is doing fairly well on that front.
What if we take a number of political experts and ask them to make predictions? If a war starts in a certain region, the experts are very good at explaining the sequence of events that led to the war. They are also very good at naming the key players and predicting events that are about to happen in the next few days or weeks. At this short range, experts are more accurate than average people or computers.
What about long-term predictions? When asked to predict what is going to happen in the next 5 or 10 years, experts are surprisingly bad. According to some researchers, experts are as bad as average readers of the New York Times, and they are worse than computer programs. It is possible to write a computer program that, given the same information, makes better predictions than the experts. Why is that? First of all, given the same information twice, experts often give two different predictions! In fact, a computer program that tries to predict what the expert is going to say will make better predictions than the expert! (I would appreciate a link to the original paper if anyone can find it.)
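Here is a rough sketch of how such a program could work. Everything in it is hypothetical (made-up cues, made-up weights); the point is only that a simple linear model fitted to the expert's own past judgments applies the expert's policy consistently, while the expert does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each row holds the cues the expert saw. The "truth"
# is a fixed linear function of the cues, and the expert is that same
# function plus day-to-day inconsistency (noise).
cues = rng.normal(size=(200, 3))
true_weights = np.array([0.5, -0.2, 0.8])          # made-up cue weights
truth = cues @ true_weights
expert = truth + rng.normal(scale=0.5, size=200)   # the inconsistent expert

# Model the expert: fit a linear model to the expert's own judgments.
weights, *_ = np.linalg.lstsq(cues, expert, rcond=None)
model = cues @ weights

print("expert error:", np.mean((expert - truth) ** 2))  # ~0.25
print("model error: ", np.mean((model - truth) ** 2))   # much smaller
```

The fit averages out the expert's noise, which is why the model of the expert can beat the expert.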
Another reason why experts (and humans in general) are bad at long-term predictions is the simple fact that we do not get immediate feedback when we are wrong, so we do not learn from our mistakes. What is worse is that we actively refuse to learn from failed predictions. When we make a long-term prediction that turns out wrong, two magical things happen. Firstly, we quickly convince ourselves that we were almost right, and the outcome we predicted almost happened. Secondly, we comfort ourselves by thinking that this was an anomaly, and next time things will go as we predicted. We may even find a reason why we were wrong and assure ourselves that that reason was an exception. Just look at the people on TV who make stock market predictions. They are extremely confident, they are wrong time after time, and they never learn from or admit their mistakes.
Job interviews
When looking at the hiring process in several companies, it turns out that the face-to-face interview carries the most weight, compared to the resume, education, work experience and other factors. Perhaps that is not surprising, but researchers found that when the people responsible for hiring are not the ones conducting the interview, the candidates who get hired are less likely to be fired later. A possible reason is that we form an opinion of a person immediately upon seeing their face, and that opinion is very strong.
Difficult questions
Take a random student and ask him two questions:
- How many dates have you been on in the past two months?
- How happy are you?
What is the relationship between the answers to the two questions? It turns out that the correlation depends entirely on the order in which the questions are asked. Someone who answers "zero" to question 1 is very likely to report being unhappy in answer to question 2. However, the same person might report being quite happy if question 2 were asked first. The reason is that question 2 is very difficult. It involves too many factors to consider at once, so when we hear question 1 followed by question 2, we substitute our answer to the easy question in place of the answer to the hard one.
Insurance and the fear factor
Consider the following two insurance quotes.
- $50,000 in case of death from terrorism.
- $50,000 in case of death for any reason.
Which one do you think will sell better? Answer: clearly, option 1. Of course, if people are given the two options side by side, anyone with any sense of set theory will understand that option 2 subsumes option 1 and is, therefore, better. However, when shown only one of the two options, people are more likely to buy the first one because of the fear associated with the word "terrorism".
The baseball problem
Here is a math problem. A baseball bat and a ball together cost $1.10. The bat is $1 more expensive than the ball. What is the price of the ball?
Hopefully, you have figured out that the answer is 5 cents, but you had the wrong answer first, didn't you? That's System One doing math. System One is very bad at math. If you did not check your answer, then you never engaged System Two, and you never realised that the answer was wrong. There is mental effort involved in calling on System Two, and many people avoid using it.
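Writing it out makes System Two's job easy: if the ball costs b, the bat costs b + 1.00, so b + (b + 1.00) = 1.10 and b = 0.05. A quick check, working in cents to keep the arithmetic exact:

```python
total, difference = 110, 100      # bat + ball = 110 cents, bat - ball = 100 cents
ball = (total - difference) // 2  # 5 cents - not the intuitive 10
bat = ball + difference           # 105 cents
assert ball + bat == total
print(ball, bat)                  # 5 105
# The System One answer, 10 cents, would make the bat 110 and the total 120.
```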
Sums and averages
How many lines are in this paragraph? What is the average length of a line? I bet you can answer these questions easily and with very good precision. What is the total length of the lines? That is hard to approximate. Why? It's just the product of the number of lines and the length of an average line. Yet it takes a lot of effort to approximate the total length. Strange. Sure, it's unfair to ask about the average length of a line when they are so nicely arranged one under the other, but the same trick works with 5-10 line segments drawn at random on a piece of paper.
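The computation System One refuses to do is a single multiplication. With made-up numbers:

```python
# Eyeball estimates for a paragraph of text (hypothetical values).
line_count = 7            # easy to estimate at a glance
average_length = 60       # characters; also easy - we "see" the average
total_length = line_count * average_length
print(total_length)       # 420 - trivial arithmetic, yet hard to intuit
```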
Integrals of pain
This next experiment demonstrates how evil cognitive psychologists are. We take two groups of people and subject them to some pain. Group 1 gets a constant amount of pain for one minute. Group 2 gets the same minute of pain, followed by a minute of gradually decreasing pain. Then we ask them about the total amount of pain they have experienced. The strange thing is that group 1 will complain a lot more. Next, we switch the two groups and perform experiment 1 on group 2 and experiment 2 on group 1. When we ask everyone which experiment they would rather repeat, if they had to, almost everyone chooses experiment 2. Clearly, the second experiment involves more total pain, but once again, the brain is very bad at computing sums (integrals, in this case).
The same experiment was done on patients undergoing a colonoscopy. Once every few seconds, an evil psychologist would ask the patient, "How much pain are you feeling right now?" The pain was plotted as a graph, and the total pain is the area under that plot. However, when asked at the end of the procedure, the patients' assessment of total pain was determined primarily by the peak pain and the pain at the end, not by the area under the curve.
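A toy sketch of the two scoring rules, using hypothetical pain profiles sampled once per second on a 0-10 scale (taking "peak-end" as the average of peak and final pain, which is how Kahneman's rule is usually stated):

```python
minute_of_pain = [8] * 60
tapering = list(range(8, 0, -1))             # pain gradually easing off

exp1 = minute_of_pain                        # constant pain, abrupt end
exp2 = minute_of_pain + tapering             # same pain plus a gentler tail

def total_pain(profile):
    return sum(profile)                      # the integral the brain ignores

def peak_end(profile):
    return (max(profile) + profile[-1]) / 2  # what memory seems to record

for name, p in [("experiment 1", exp1), ("experiment 2", exp2)]:
    print(name, total_pain(p), peak_end(p))
# Experiment 2 has MORE total pain (516 vs 480) but a LOWER peak-end
# score (4.5 vs 8.0) - matching the preference for repeating it.
```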
My speculations
The most enlightening slide from the talk for me was the one showing 3 words side by side: perception, intuition, reasoning. Perception is what comes into our brain from the senses - the images we see. Sounds like Hawkins' bottommost layer of the cortical hierarchy. Intuition is our involuntary, subconscious reaction to those images. This is System One. Reasoning is the process of applying mental effort to make decisions. This is System Two, and it has a lot to do with logic, math and formal concepts. Could it be that each of the 3 systems occupies separate layers of the Hawkins hierarchy? Maybe each one takes up, say, two layers?
There is also a clear connection between System Two processes and attention. We don't need to pay attention to determine if a face is angry or happy, but we need our full attention when solving a quadratic equation. What exactly is attention or mental effort? More on this later...