Why Everyone (else) is Overconfident
There is an abundance of overconfidence in the world. And let’s face it, we all think we’re a little bit better than everyone else. David Brooks put it best in a recent TED talk: “95 percent of professors report that they are above average teachers, 96 percent of college students say that they have above average social skills… [and] 19% of Americans say that they are in the top 10% of earners.” The list goes on – most people think they are above average drivers, have above average intelligence, humor, etc. You get the idea.
We all feel this way, but when push comes to shove, are we willing to put our money where our mouths are? A study conducted a few years back by Elanor Williams and Tom Gilovich tried to answer this very question. Here’s what they did:
Participants were given a bogus personality test and then asked to predict how they would score on it relative to other Cornell students. They were asked to make predictions about their scores on four traits known to yield above-average effects. After making their ratings, participants were told that their actual scores on the test would be compared to those of a randomly selected participant. With this in mind, they were to make a series of gambles, one for each of the four traits. For each gamble, they could bet on whether they would score higher on the test than the other participant, or they could bet on a random drawing with a probability of success equal to the percentile ranking they had assigned themselves on the trait in question.
The four traits were heavy hitters: intelligence, creativity, maturity, and positivity. And the findings were what you would expect; participants genuinely believed that they possessed above-average skills and traits, so much so that they were willing to bet money on it. As the authors explain:
Participants in our study were indifferent between betting on the percentile rankings they assigned themselves and a matched-chance random drawing, indicating that they believed the two numerically equivalent probabilities – the probability that they would score higher than a random person on a personality test and the probability that they would win a random lottery – were truly equal. Their indifference between the two bets indicates that they believe they were neither overestimating nor underestimating their standing among their peers on the traits in question.
Outside the lab, unlike in Williams and Gilovich's study, the downsides of overconfidence can amount to millions of dollars. Here are two examples.
The first is a study conducted throughout the 2000s by a group of professors at Duke University. They asked chief financial officers of large corporations to predict the returns of the Standard & Poor's index over the course of the following year. Their findings weren't encouraging; the correlation between the estimates and reality was slightly less than zero. In other words, the CFOs hadn't a clue where returns were headed.
The interesting part of the study was that the CFOs didn't seem to realize how pitiful their forecasts were. Here's Daniel Kahneman on the second part of the study:
In addition to their best guess about S&P returns, the participants provided two other estimates: a value that they were 90 percent sure would be too high, and one that they were 90 percent sure would be too low. The range between the two values is called an "80 percent confidence interval" and outcomes that fall outside the interval are labeled "surprises." An individual who sets confidence intervals on multiple occasions expects about 20 percent of the outcomes to be surprises. As frequently happens in such exercises, there were far too many surprises; their incidence was 67 percent, more than three times higher than expected. This shows that CFOs were grossly overconfident about their ability to forecast the market.
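Kahneman's arithmetic is easy to check with a quick simulation: if forecasters' 80 percent confidence intervals are too narrow, the surprise rate climbs well above the expected 20 percent. The sketch below is purely illustrative (the normal distribution and the "overconfidence factor" are my assumptions, not the Duke study's data), but it happens to show that intervals about one-third the proper width produce a surprise rate of roughly 67 percent:

```python
import random

random.seed(0)

N = 10_000

# True outcomes: standard normal "market returns" (illustrative only).
outcomes = [random.gauss(0, 1) for _ in range(N)]

def surprise_rate(half_width):
    """Fraction of outcomes falling outside a symmetric interval [-w, +w]."""
    misses = sum(1 for x in outcomes if abs(x) > half_width)
    return misses / N

# A well-calibrated 80% interval for a standard normal is roughly +/- 1.28.
calibrated = surprise_rate(1.28)

# An overconfident forecaster states an interval far too narrow
# (the factor of 3 is an assumption chosen for illustration).
overconfident = surprise_rate(1.28 / 3)

print(f"calibrated surprise rate:    {calibrated:.0%}")    # close to 20%
print(f"overconfident surprise rate: {overconfident:.0%}")  # far above 20%
```

The point of the sketch is the asymmetry Kahneman describes: calibration is judged not by any single forecast but by whether, across many forecasts, about one in five outcomes lands outside the stated interval.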
The second is a classic. It comes from Philip Tetlock, a psychologist at the University of Pennsylvania who spent 18 years gathering data to measure how good the "experts" are at predicting future events. The experts included academics, journalists, and intelligence analysts. He asked them to rate the probability of something – an economic, political, or military event – increasing, decreasing, or remaining the same, and measured their results. (For instance, one question was whether "central-government debt will either hold between 35% and 40% of GDP or fall below or rise above that range.") One New Yorker article sums up Tetlock's results well:
By the end of the study, in 2003, the experts had made 82,361 forecasts… he measured his experts on two dimensions: how good they were at guessing probabilities… and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world… are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
Is there a way around our overconfidence? I'm pessimistic. As Harvard psychologist Dan Gilbert says, "The brain and the eye may have a contractual relationship in which the brain has agreed to believe what the eye sees, but in return the eye has agreed to look for what the brain wants." Unfortunately, this contract seems binding.