
Posts tagged ‘Dan Ariely’

Creative Cheating: The Link Between Creativity and Dishonesty

Originally posted on my blog at

By the time French police arrested him in 1969, Frank Abagnale had posed as a lawyer, a doctor, a U.S. Bureau of Prisons agent, a teaching assistant at Brigham Young University and a pilot, and had “passed $2.5 million worth of meticulously forged checks across 26 countries.” By any standard, Abagnale – who committed most of his crimes as a teenager – was a criminal. In fact, when he was finally captured, 12 countries wanted him on charges of fraud.

At that point everybody had questions for Abagnale. How did he get away with all of it for so long? Why didn’t anyone notice his youthful appearance? And how did he manage to escape authorities, even after they captured him?

Abagnale’s unmatched intelligence, charm and willingness to take a chance created a strange cognitive brew that gave rise to his unusual accomplishments – if that’s the appropriate word. But he was also highly creative. He didn’t just steal and forge; he stole and forged in entirely novel ways. In this light – a somewhat pessimistic light – he was a creative genius.

Was Abagnale a cheater because of his crafty creativity? Consider a study published last year and recently brought to a popular audience by Dan Ariely’s latest book, The (Honest) Truth About Dishonesty. Ariely conducted five experiments for the study. In one, after measuring how creative each participant was, Ariely and his research partner Francesca Gino administered a multiple-choice test with cash rewards that depended on performance – the better people did, the more money they made.

Here’s where things got tricky. Ariely and Gino gave the participants bubble sheets (think the SAT) with instructions to transfer their answers onto them. However, because of a “copyright error,” the bubble sheets already had the correct answers marked in. The error, of course, was a ruse: Ariely and Gino introduced this small wrinkle to give the participants a chance to cheat without getting caught. The question is: would they?

The researchers found two things. The first is that many people cheated, but only a little bit. This finding is consistent with Ariely’s thesis, which describes how most “honest” people are willing to cheat by “fudging” their results in order to give themselves small gains. Ariely demonstrates this with numerous studies and anecdotes throughout his book. By the end he concludes that cheating is a widespread phenomenon, not one limited to a few bad apples.

The second finding confirmed Ariely and Gino’s hunch: creative people cheated more. “Those who cheated more on each of the… tasks had on average higher creativity scores compared to noncheaters, but their intelligence scores were not very different.” Why? Ariely thinks it has to do with storytelling. That is, creative types told themselves more convincing and justifying stories:

 [T]he difference between creative and less creative individuals comes into play mostly when there is ambiguity in the situation at hand and, with it, more room for justification… Put simply, the link between creativity and dishonesty seems related to the ability to tell ourselves stories about how we are doing the right thing, even when we are not. The more creative we are, the more we are able to come up with good stories that help us justify our selfish interests.

What’s also interesting is the relationship between pathological lying and gray and white matter. Gray matter is a term that describes the neurons that power our thinking. White matter, in contrast, is the wiring that connects different brain regions. A study led by Yaling Yang found that – and this is the interesting part – pathological liars had 14 percent less gray matter in their prefrontal cortices, a part of the brain that helps us distinguish right from wrong. One interpretation of this finding is that pathological liars have a difficult time with moral dilemmas because of their lack of gray matter.

However, Yang and her team also found that the pathological liars had 22 to 26 percent more white matter in their prefrontal areas compared to a control group. In Ariely’s words, this means that “pathological liars are likely able to make more connections between different memories and ideas, and this increased connectivity and access to the world of associations stored in their gray matter might be the secret ingredient that makes them natural liars.”

Ariely speculates on the implications:

If we extrapolate these findings to the general population, we might say that higher brain connectivity could make it easier for any of us to lie and at the same time think of ourselves as honorable creatures. After all, more connected brains have more avenues to explore when it comes to interpreting and explaining dubious events – and perhaps this is a crucial element in the rationalization of our dishonest acts.

This doesn’t mean that the more creative you are, the more of a cheater you are – correlation doesn’t equal causation. But Ariely does rightly point out that cheating requires a creative mindset. Such was the case with Abagnale. He didn’t cheat because of his creativity, but his novel brand of thievery wouldn’t have been possible without his wildly creative mind. After all, “facts are for people who lack imagination to create their own truth.” Abagnale would agree.

The Future Of Religion

Religious people – that is, people who say that religion is important in their lives – have, on average, higher subjective well-being. They find a greater sense of purpose or meaning, are connected to stronger social circles, and live longer and healthier lives. Why, then, are so many dropping out of organized religion?

Last year a team of researchers led by Ed Diener tried to answer this question. They found that economically developed nations are much less likely to be religious. On the other hand, religion is widespread in countries with more difficult circumstances. “Thus,” the authors conclude, “it appears that the benefits of religion for social relationships and subjective well-being depend on the characteristics of the society.” People of developed nations are dropping out of organized religion, then, because they are finding meaning and wellness elsewhere.

The real paradox is America, where Nietzsche’s anti-theistic proclamation went unheard. Eighty-three percent of Americans identify with a religious denomination, most say that religion is “very important” in their lives, and according to Sam Harris 44 percent “of the American population is convinced that Jesus will return to judge the living and the dead sometime in the next fifty years.” In fact, a recent study even showed that atheists are largely seen as untrustworthy compared to Christians and Muslims.

Why does the United States, one of the most economically developed countries in the world, deviate from the correlation between religion and wealth? One answer is that trends always contain outliers. As Nigel Barber explains in an article: “The connection between affluence and the decline of religious belief is as well-established as any such finding in the social sciences…. [and] no researcher ever expects every case to fit exactly on the line… If they did, something would be seriously wrong.”

Whatever the reasons, a recent article by David Campbell and Robert Putnam suggests that Americans are catching up to their non-believing European counterparts. According to Campbell and Putnam, the number of “nones” – those who report no religious affiliation – has dramatically increased in the last two decades. “Historically,” Campbell and Putnam explain, “this category made up a constant 5-7 percent of the American population… in the early 1990s, however, just as the God gap widened in politics, the percentage of nones began to shoot up. By the mid-1990s, nones made up 12 percent of the population. By 2011, they were 19 percent. In demographic terms, this shift was huge.”

A study by Daniel Mochon, Michael Norton and Dan Ariely fits well with this observation. They discovered that, “while fervent believers benefit from their involvement, those with weaker beliefs are actually less happy than those who do not ascribe to any religion – atheists and agnostics.” It’s possible the “nones” Campbell and Putnam speak of are motivated to abandon their belief by a desire to be happier and less conflicted in their lives. This might be too speculative, but there are plenty of stories, especially in the wake of the New Atheist movement, of people who describe their change of faith as a dramatic improvement in their emotional lives. In a recent interview with Sam Harris, for example, Tim Prowse, a United Methodist pastor for almost 20 years, described leaving his faith as a great relief. “The lie was over, I was free,” he said. “…I’m healthier now than I’ve been in years and tomorrow looks bright.”

What does this say about the future of atheism? Christopher Hitchens and others suggest that a standoff between believers and non-believers may be inevitable. “It’s going to be a choice between civilization and religion,” he says. However, grandiose predictions about the future of the human race are almost always off the mark, and it’s likely that the decline in religion will remain slow and steady. It’s important to keep in mind that this decline is a recent phenomenon. It wasn’t until the 17th century, the so-called Age of Reason, that writers, thinkers and some politicians began to insist that societies are better off when they give their citizens the political right to communicate their ideas. This was a key intellectual development and, in the context of the history of civilization, a very recent one.

To be sure, radical ideologies will always exist; religion, Marx suggested, is the opiate of the people. But the trend towards empiricism, logic and reason is undeniable and unavoidable. Titles including God Is Not Great and The God Delusion are bestsellers for a reason. And if Prowse’s testimony as well as Campbell and Putnam’s data are indicative, there is a clear shift in the zeitgeist.

The Irrationality Of Irrationality

Reason has fallen on hard times. After decades of research psychologists have spoken: we humans are led by our emotions, we rarely (if ever) decide optimally and we would be better off if we just went with our guts. Our moral deliberations and intuitions are mere post-hoc rationalizations; classical economic models are a joke; Hume was right, we are the slaves of our passions. We should give up and just let the emotional horse do all the work.

Maybe. But sometimes it seems like the other way around. For every book that explores the power of the unconscious, another explains how predictably irrational we are when we think without thinking; our intuitions deceive us and we are fooled by randomness, but sometimes it is better to trust our instincts. Indeed, if a Martian briefly compared the subtitles of the most popular psychology books of the last decade, he would quickly become confused. Reading the introductions wouldn’t help him either; keeping track of the number of straw men would be difficult for our celestial friend. So, he might ask, over the course of history have humans always thought of intelligence as deliberate, or as automatic?

When it comes to thinking things through or going with your gut there is a straightforward answer: It depends on the situation and the person. I would also add a few caveats. Expert intuition cannot be trusted in the absence of stable regularities in the environment, as Kahneman argues in his latest book, and it seems like everyone is equally irrational when it comes to economic decisions. Metacognition, in addition, is a good idea but seems impossible to consistently execute.

However, unlike our Martian friend, who tries hard to understand what our books say about our brains, the reason-intuition debate is largely irrelevant for us Earthlings. Yes, many have a sincere interest in understanding the brain better. But while the lay reader might improve his decision-making a tad and be able to explain the difference between the prefrontal cortex and the amygdala, the real reason millions have read these books is that they are very good.

The Gladwells, Haidts and Kahnemans of the world know how to captivate and entertain the reader because, like any great author, they prey on our propensity to be seduced by narratives. By using agents or systems to explain certain cognitive capacities, the brain becomes much easier to understand. However, framing the latest psychology or neuroscience findings as a story with characters tends to encourage a naïve understanding of the so-called most complex entity in the known universe. The authors know this, of course. Kahneman repeatedly makes it clear that “system 1” and “system 2” are literary devices, not real parts of the brain. But I can’t help but wonder, as Tyler Cowen did, if deploying these devices makes the books themselves part of our cognitive biases.

The brain is also easily persuaded by small amounts of information. If one could sum up judgment and decision-making research, it would go something like this: we require only a tiny piece of information to confidently form a conclusion and take on a new worldview. Kahneman’s acronym WYSIATI – what you see is all there is – captures this well. This is precisely what happens the moment readers finish the latest book on intuition or irrationality; they remember the sound bite and understand brains only through it. Whereas the hypothetical Martian remains confused, the rest of us humans happily walk out of our local Barnes and Noble, or even worse, finish watching the latest TED talk, with the deluded feeling that now we “got it.”

Many times, to be sure, this process is a great thing. Reading and watching highbrow lectures is hugely beneficial intellectually speaking. But let’s not forget that exposure to X is not knowledge of X. The brain is messy; let’s embrace that view, not a subtitle.

Drawing Out Our Better Angels: The Important Role of Moral Reminders

Everyone has their pet theories about human morality; are we inherently selfless or selfish? Indeed, the “nature of man” has been a popular subject throughout the millennia. Thomas Hobbes claimed that life in the state of nature is brutish and short while Rousseau held that nothing is more peaceful than man in his natural state. So which is it? Are we intrinsically evil with our moments of good actually being selfishness in disguise? Or do societies and cultures cover up our primitive motivations to do unto others as we would have them do unto us?

Kathleen Vohs, a psychologist out of the University of Minnesota, tells one side of the story. In a series of experiments Vohs demonstrated that evoking the concept of money influences people to be more self-reliant and individualistic. In the first experiment Vohs gave participants 30 sets of jumbled words and tasked them with the challenge of unscrambling them to construct a four-word phrase. 15 of the phrases were neutral (e.g., “cold it desk outside is” became “it is cold outside.”) while the other 15 evoked the concept of money (e.g., “high a salary desk paying” became “a high-paying salary.”). To top it off, Vohs placed a stack of Monopoly money in the visual periphery of the participants in the money-condition while they completed the descrambling task. Next, the participants completed a difficult but solvable task by arranging 12 disks into a square with five disks per side. Here was the key: “As the experimenter exited the room, he offered that he was available to help if the participant wanted assistance.” What Vohs measured was how long participants would hold out before they asked for help. She found that participants who were primed with money persevered nearly twice as long before asking for help.

Vohs followed up this experiment with others. Her findings were consistent with the first: when people are reminded of money they tend to focus more on themselves. For instance, “when an experimenter clumsily dropped a bunch of pencils on the floor, the participants with money (unconsciously) on their mind picked up fewer pencils.” Similarly, when “participants were told that they would shortly have a get-acquainted conversation with another person and were asked to set up two chairs while the experimenter left to retrieve that person… participants primed by money chose to stay much farther apart than their nonprimed peers (118 vs. 80 centimeters).”

What do these studies suggest? In Vohs’ words, “money brings about a state of self-sufficiency. Relative to people not reminded of money, people reminded of money reliably performed independent but socially insensitive actions.” Perhaps, then, Hobbes was right: man is brutish.

But let’s not get too pessimistic about our inner demons. In a series of experiments Dan Ariely demonstrated that people primed with the concepts of fairness and equality were less likely to cheat than a control group. Eagle-eyed readers of Predictably Irrational may recall the experiments. In one, Ariely et al. asked participants to complete a simple math test of 20 problems, each requiring participants to find two numbers that added up to 10. They had five minutes to solve as many of the problems as they could, after which they were entered into a lottery where they had the chance of winning ten dollars for every correct answer. Ariely and his colleagues created two groups: one was instructed to hand their answers directly to the experimenter; the other turned in a duplicate answer sheet, which they created themselves, and disposed of the original. The latter group, as planned, was given a chance to cheat.

And the results? Ariely found that the group given the chance to cheat did in fact cheat, but only by a little bit. This wasn’t the important part. The key to the experiment is what preceded the math problems. Before participants tackled them, Ariely and his team asked them to do one of two things for a “memory test”: write down the names of 10 books they read in high school, or write down as many of the Ten Commandments as they could remember. When cheating wasn’t possible, participants solved 3.1 problems correctly on average. But when cheating was possible – and this is where things got interesting – the group that recalled 10 books from high school solved about 4.1 problems on average while the group that recalled the Ten Commandments solved about 3 problems on average. In other words, of the participants who were given the chance to cheat, the “books in high school” group cheated while the Ten Commandments group didn’t.

Ariely ran a nearly identical experiment, except he swapped the Ten Commandments for the MIT honor code (his participants were MIT nerds). As you might have guessed, the scores of those who had to read and sign the honor code were nearly identical to those of the control group, suggesting that they didn’t cheat, while the scores of those who didn’t have to read and sign the honor code showed strong signs of cheating. This means, in Ariely’s words, that “people cheat when they have a chance to do so… [but] once they begin thinking about honesty – whether by recalling the Ten Commandments or by signing a simple statement – they stop cheating completely. In other words, when we are removed from any benchmarks of ethical thoughts, we tend to stray into dishonesty. But if we are reminded of morality at the moment we are tempted, then we are much more likely to be honest.” Advantage Rousseau.

Vohs’ and Ariely’s work suggests that the question of whether humans are inherently good or bad is largely irrelevant. The more accurate picture is the cartoon image: our moral senses are dictated by an angel over one shoulder and a devil over the other. Therefore, the more fruitful question is: what are the external contexts and circumstances that favor one over the other? This is not to suggest a blank-slate view of human morality – far from it – but it is to say that societies where messages of honesty and fairness dominate are better off. Ariely’s conclusion is bad news for societies where it is almost impossible to go a day without seeing a photo, video or advertisement where avarice rules. And this is the larger and more important point. When given a chance to cheat most people do; not a lot, but enough to improve a test score by a few points (one can easily see how this compounds). But when the same people are reminded of honesty and fairness, it is their moral codes that take the driver’s seat. So let’s hope that those in control of the airwaves can draw out our better angels by broadcasting messages of honesty and fairness.


The Price of Framing & Anchoring

I love Wikipedia, which is why I am more than willing to donate some money. But I was a little taken aback when I saw that its initial asking price is $20. I was thinking a few bucks at most, certainly not $20… that’s four Bud Lights in NYC! What’s interesting is that after seeing the initial price of $20, giving five, six, or seven dollars, as opposed to one or two, didn’t seem that bad. But then my knowledge of cognitive biases reminded me that Jimmy Wales was playing me.

Wikipedia’s donation box illustrates a cognitive bias known as anchoring, which “describes the common human tendency to rely too heavily… on one trait or piece of information when making decisions” (I took this quote, appropriately, from Wikipedia). A good bargainer uses anchoring to set the initial price high and give the buyer the illusion that he or she is getting a good deal. Likewise, Wales set the smallest suggested donation at $20 to make, say, ten dollars seem like not that much. I mentioned anchoring about a month ago; now I want to turn to its evil twin, the framing effect, which also distracts us with irrelevant information.

To get a sense of the power of framing, consider Dan Ariely’s example, which appears in the first chapter of Predictably Irrational. Below are three subscription plans offered by The Economist. Which would you choose?

  1. subscription – US $59.00. One-year subscription to Includes online access to all articles from The Economist since 1997.
  2. Print subscription – US $125.00. One-year subscription to the print edition of The Economist.
  3. Print & web subscription – US $125.00. One-year subscription to the print edition of The Economist and online access to all articles from The Economist since 1997.

If you read closely, something strange should have jumped out at you. Who would, as Ariely says, “want to buy the print option alone… when both the Internet and the print subscriptions were offered for the same price?” At first it seems as if someone at The Economist made a mistake; after all, how could a print-only subscription cost the same as a print subscription plus online access to every article since 1997? But after thinking for a second, you may realize that the people at The Economist are not all that stupid; they may in fact know a thing or two about human behavior.

To see just how influential the “framing” of The Economist’s subscription plans is, Ariely conducted the following experiment. First, he presented his MIT Sloan School of Management students with the options as seen on and had them choose a subscription. Here were the results.

  • Internet-only subscription for $59 – 16 students
  • Print-only subscription for $125 – 0 students
  • Print-and-Internet subscription for $125 – 84 students

It makes sense – who would choose option two given option three? But the question is: how much did option two influence the students’ decision making? Ariely conducted a second experiment to find the answer. He gave the subscription plans, this time without the second option, to a second group of students and had them pick one. Here were the results.

  • Internet-only subscription for $59 – 68 students
  • Print-and-Web subscription for $125 – 32 students

As you can see, by simply removing the second option the students’ preferences shifted dramatically: 68 students chose option one while only 32 chose option two. How significant is this? Well, let’s say that instead of running this experiment with 100 graduate students, you ran it with 10,000 customers in the real world, and that all 10,000 chose to sign up for a subscription. In scenario one, where three options are presented, 8,400 people would have chosen option three, 1,600 would have chosen option one, none would have chosen option two, and The Economist would have made $1,144,400 in revenue. Compare this to the second scenario: 6,800 choose option one, 3,200 choose option two, and The Economist makes $801,200 in revenue. By simply placing a decoy option, The Economist would have made $343,200 more.
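The revenue arithmetic is easy to double-check with a few lines of code. A minimal sketch, scaling the classroom percentages up to 10,000 customers (the `revenue` helper and the variable names are mine, not Ariely’s):

```python
# Scale Ariely's 100-student results up to 10,000 paying customers.

def revenue(buyers_by_price):
    """Total revenue from a mapping of price -> number of buyers."""
    return sum(price * buyers for price, buyers in buyers_by_price.items())

# Scenario one: decoy present. 16% choose web-only ($59),
# 84% choose print-and-web ($125), nobody takes the decoy.
with_decoy = revenue({59: 1600, 125: 8400})

# Scenario two: decoy removed. 68% choose web-only, 32% print-and-web.
without_decoy = revenue({59: 6800, 125: 3200})

print(with_decoy)                  # 1144400
print(without_decoy)               # 801200
print(with_decoy - without_decoy)  # 343200
```

The decoy never sells a single copy; its only job is to shift buyers toward the expensive bundle.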

So what’s the lesson? When you go out this weekend to restaurants or bars, remember that all those gimmicks are just waiting to feast on your cognitive biases. Maintain rationality!

Misguided Incentives in Schools

A few years ago Uri Gneezy and Aldo Rustichini ran an insightful study to test the effectiveness of incentives. They approached an Israeli daycare center that was suffering from a not too unusual problem: late parents. To counter the tardiness, Gneezy and Rustichini imposed a fine – every time parents were more than ten minutes late, they had to pay up. Before I finished reading about this study my intuition told me that the fine would be an effective deterrent. I was wrong. Not only did the fine not work, it actually caused an increase in late pickups compared to control groups. The graph below says it all.

So what happened? Before they imposed the fine, the mothers were bound by a social contract, where social norms about being late influenced parents to show up on time and avoid the guilt that comes with being late. However, when Gneezy and Rustichini imposed the fine, parents became bound by a market contract; they suddenly started paying for their tardiness with money instead of guilt. Once this happened, the parents could decide for themselves if they wanted to be late or not.

The worst part, at least for the day care center (they actually ran this experiment on several centers), was that when they removed the fine the parents continued to act according to the market contract. And, as Dan Ariely explains, “social relationships are not easy to reestablish. Once the bloom is off the rose — once a social norm is trumped by a market norm — it will rarely return.”

Misguided incentives with unintended negative consequences are nothing new (not to this blog or academia). For example, in a study done back in the 1970s, researchers put in place a reward program at an elementary school to increase students’ interest in math; for every three hours the students did math they earned credits they could use to get prizes. In one respect this worked – kids spent more time on math. But after the teachers removed the prizes (they told the students they had to be fair to the rest of the students in the school), the students’ interest in math “plummeted to a level below where it had been during the pre-reward baseline period. In other words, it didn’t just go back to where it had been before the reward program was instituted, as an economist might have predicted – the kids were now less interested in the games than they were when the program started.” Clearly, in this respect, it did not work. The fundamental problem with incentives like this one is that they undermine the very interest, motivation and passion that they try to garner.

This is why I am concerned when I hear about schools incentivizing their students with money. Here are three cases, which I pulled from a USA Today article:

  • In suburban Atlanta, a pair of schools last week kicked off a program that will pay 8th- and 11th-grade students $8 an hour for a 15-week “Learn & Earn” after-school study program (the federal minimum wage is currently $5.85).
  • Baltimore schools chief Andres Alonso last week promised to spend more than $935,000 to give high school students as much as $110 each to improve their scores on state graduation exams.
  • In New York City, about 9,000 fourth- and seventh-graders in 60 schools are eligible to win as much as $500 for improving their scores on the city’s English and math tests, given throughout the school year.

It’s a bit too early to tell if such programs are working (if anyone has data or a news story on the subject please let me know). On the other side of the coin, there is plenty of talk about “merit pay,” programs that reward teachers for high performance. In one form or another, they’ve been implemented in Denver, Chicago, Nashville and, most notably, in Washington D.C. by former D.C. chancellor Michelle Rhee. Have they worked? According to the Freakonomics Blog:

In the last year… research showing that merit pay, in a variety of shapes and sizes, fails to raise student performance. In the worst of cases, such as the scandal in Atlanta, it’s contributed to flat-out cheating on the part of teachers and administrators.

Personally, I believe that education reform needs to be a top-down effort; working from the bottom up with incentive plans like these can only do so much. But I’m really not that concerned about education reform. What keeps me up is how humans misunderstand incentives. We think that bonuses are a good idea in business, we think that cash rewards are good in schools, and we ignore the fact that many people don’t need an incentive because they truly enjoy what they do. Sometimes incentives in businesses and schools work, but sometimes they don’t. What’s important is that we keep theory and reality in line. To do this we must run experiments, collect data and empirically demonstrate what the best course is – be good scientists, in other words. Psychologists have been doing this for years; now it is time for those outside of academia to do the same.

The Evil of Irrelevant Information: Anchoring & The Conjunction Fallacy

I want to think that people are rational consumers, but it’s hard to ignore the overwhelming evidence that says they’re not. You don’t even have to read the academic literature to realize this – just go to the grocery store! As you walk down the aisle and see a delicious bag of chips with “50 percent less calories,” ask yourself this: would you have bought it if it said “with 50 percent as many calories”? Or how about the medication over in the pharmaceutical section that works “99 percent of the time” – would you buy it if it were “ineffective 1 percent of the time”? In both cases, probably not.

We succumb to these silly things because our brains are easily fooled by numerical manipulations. As psychologist Barry Schwartz explains, “when we see outdoor gas grills on the market for $8,000, it seems quite reasonable to buy one for $1,200. When a wristwatch that is no more accurate than the one you can buy for $50 sells for $20,000, it seems reasonable to buy one for $2,000.” Whether you like it or not, your decisions are easily swayed.

Let’s look at some more examples.

Imagine you’re at an auction bidding on a bottle of Côtes du Rhône, a bottle of Hermitage Jaboulet La Chapelle, a cordless keyboard and mouse, a design book and a one-pound box of Belgian chocolates. Before the auction starts the auctioneer asks you to jot down the last two digits of your social security number, indicate whether you would be willing to pay this amount for each of the products, and then write down the maximum amount you would be willing to bid for each one. When Dan Ariely, Drazen Prelec and George Loewenstein ran this auction with a group of MIT undergrads, they found that the social security numbers greatly influenced the students’ bids. In Ariely’s words:

The top 20 percent (in terms of their social security numbers), for instance, bid an average of $56 for the cordless keyboard; the bottom 20 percent bid an average of $16. In the end, we could see that students with social security numbers ending in the upper 20 percent placed bids that were 216 to 346 percent higher than those of the students with social security numbers ending in the lowest 20 percent.

Ariely’s experiment illustrates a cognitive bias known as anchoring: our inability to ignore irrelevant information and assess things at face value. The classic anchoring experiment comes from Daniel Kahneman and Amos Tversky. Two groups were asked whether the percentage of African countries in the United Nations was higher or lower than a given value: 10 percent for one group and 65 percent for the other. They found that “the median estimates of the percentage of African countries in the United Nations were 25 and 45 for groups that received 10 and 65, respectively, as starting points.” Put differently, those who received 10 percent estimated the percentage of African countries in the UN to be 25, whereas those who received 65 percent estimated it to be 45. As one author puts it, “the brain isn’t good at disregarding facts, even when it knows those facts are useless.”

Along the same lines is the “conjunction fallacy,” which highlights our propensity to misunderstand probability. Here is a simple example. Which description of my friend Brent is more likely: 1) he is the CEO of Bank of America, or 2) he is the CEO of Bank of America and his annual salary is at least $1,000? Though your intuition may strongly favor option two, option one is more likely because it carries fewer conditions. In other words, though it is very likely that he makes more than $1,000 a year as the CEO of Bank of America, the probability of option one is still higher. As UCLA psychologist Dean Buonomano says, “the probability of any event A and any other event B occurring together has to be less likely than (or equal to) the probability of event A by itself.”
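
Buonomano's rule is easy to verify numerically. Here is a minimal Monte Carlo sketch; the probabilities are invented illustration values, not data from any study:

```python
import random

random.seed(0)  # make the simulation repeatable

p_ceo = 0.01               # assumed chance a random person is a CEO
p_salary_given_ceo = 0.99  # assumed chance a CEO earns at least $1,000

trials = 100_000
count_a = 0        # event A: person is a CEO
count_a_and_b = 0  # events A and B: a CEO *and* salary >= $1,000

for _ in range(trials):
    if random.random() < p_ceo:
        count_a += 1
        if random.random() < p_salary_given_ceo:
            count_a_and_b += 1

p_a = count_a / trials
p_a_and_b = count_a_and_b / trials
# The conjunction can never be more probable than either event alone.
assert p_a_and_b <= p_a
print(p_a, p_a_and_b)
```

However high you make the conditional probability, the conjunction's frequency can only tie event A, never beat it.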

The point I am driving at is that we are easily manipulated by irrelevant information. Why? There is a fairly simple explanation.

For most of human history our species survived in a simple world without TV, the internet, fast food, birth control pills, or economic meltdowns. There was just one thing – survival. This is what our psychologies evolved for. Unfortunately, there is a significant mismatch between the world our psychologies were built for and the world as it is today. Food illustrates this disconnect. In hunter-gatherer societies, where food was scarce, it was smart to load up on as many fatty and salty foods as possible. Now it would be stupid, or at least bad for your health, to visit your local McDonald’s every day; fast food relentlessly takes advantage of our primitive appetites.

Here’s the kicker: the same is true for anchoring and the conjunction fallacy. In hunter-gatherer societies humans didn’t have to decide between differently priced gas grills or bags of chips, or figure out probabilities. They just had to understand how to get food, build shelter and exist long enough to pass on their genes. Because of this, our poor judgment is “not a reflection of the fact that [our brains were] poorly designed, but… that [they were] designed for a time and place very different from the world we now inhabit,” as Buonomano says.

Unfortunately, this means that unless natural selection speeds up, we won’t be getting better any time soon.


How Misguided Incentives Negatively Affect Productivity and Well-Being

Americans are not well: reported levels of subjective happiness haven’t budged in years, divorce rates are hovering around 50%, and tons of money doesn’t seem to do the trick. So what’s going on? Social scientists, economists, and politicians give us their reasons, but most are speculative and lack legitimate evidence. Thankfully, psychologists are weighing in with some data-backed answers. It’s sad news though. As psychologists publish more and more happiness studies, it is becoming clear that the biggest problem is our intuition. When it comes to well-being, our gut feelings are usually way off the mark.

This error manifests itself in a number of ways, one of which regards our intuitions of incentives. We think that monetary incentives make us more productive and better off when many times the opposite is true. Let me explain.

In the 1960s, psychologist Sam Glucksberg devised a simple experiment around Karl Duncker’s classic candle problem. Participants were given a box of tacks, a box of matches, and a candle and told to light the candle and fix it to the wall so the wax wouldn’t hit the floor. Here was the catch: the faster participants accomplished the task, the more money they made – $5 for those who solved it faster than 75% of people, and $20 for those who solved it the fastest (keep in mind these are 1960s prices). With the incentives in place, most melted one side of the candle, stuck it to the wall and watched their idea helplessly melt away. A few minutes later, participants got creative and realized the solution: they placed the candle in the box and tacked the box to the wall – thinking outside of the box helps after all.

Here’s where things got interesting. Glucksberg ran the same experiment again but removed the monetary incentive. While conventional wisdom tells us that monetary incentives improve performance, Glucksberg found just the opposite: those who weren’t incentivized accomplished the task three and a half minutes faster. Why?

According to Daniel Pink, who writes about Glucksberg in his latest book Drive, “rewards, by their very nature, narrow our focus…. as [the candle] experiment shows, the reward… blinkered the wide view that might have allowed [participants] to see new uses for old objects.” This means that monetary incentives can stifle creativity; you could think of them as creative brain drains.

Complementary results were found by Dan Ariely and his colleagues. In a series of clever experiments, Ariely divided participants into three groups (low bonus, medium bonus, and high bonus) and had them try their hands at six mini games: packing quarters, Simon, recall the last three numbers, labyrinth, dart ball, and roll-up. The rules were straightforward: the better they performed on the mini games, the more they were paid, but if they didn’t perform to a certain standard they were paid nothing. After running the experiment, Ariely found that “those who could earn the small bonus (equivalent to one day of pay) and the medium-level bonus (equivalent of two weeks’ worth of work) did not differ much from each other… [however] those who stood to earn the most demonstrated the lowest level of performance.” When the stakes are high, our ability to perform vanishes (if you don’t think this is true, go talk to Greg Norman, Jean Van de Velde, or Alex Rodriguez during October).

Pink and Ariely are suggesting two things: monetary incentives cause 1) a decrease in creativity and 2) a decrease in performance. So how is this related to well-being?

Along with strong social and romantic relationships and several other factors, the ability to engage in optimal experiences, what psychologist Mihaly Csikszentmihalyi calls flow, greatly contributes to well-being. What’s flow? As Csikszentmihalyi explains, it is “the climber [feeling] at one with the mountain, the clouds, the rays of the sun… the surgeon [feeling] at one with the movements of the operating team, sharing the beauty and the power of a harmonious transpersonal system,” or the chess master being “fully enthralled” in a match. Put simply, flow is being in the zone. Those who achieve it almost always describe themselves as having a strong sense of purpose and meaning, and a genuine and intrinsic love for what they do – work is not work for them, in other words.

The problem is that a heavily monetarily incentivized economy discourages flow. As Pink describes, “too many organizations… still operate from assumptions about human potential and individual performance that are outdated, unexamined, and rooted more in folklore than in science. They continue to pursue practices such as short-term incentive plans and pay-for-performance schemes even in the face of mounting evidence [which shows that such practices] usually don’t work and often do harm.” This is what the research by Glucksberg and Ariely illustrates: when money is on the line, our performance and creativity decrease, we are pushed further from flow-like activities, and our well-being suffers as a result.

Luckily, some companies recognize this and have taken action. Google’s “Innovation Time Off” encourages employees to spend 20% of their work time on personal projects, and once a quarter the Australian software company Atlassian, co-founded by Mike Cannon-Brookes, sets aside an entire day for its engineers to work on any software problem they want to. These policies have paid off: when employees are allowed to pursue personal goals, as opposed to goals placed on them, they “get in the zone,” produce more, and report being better off (you can thank “Innovation Time Off” for your Gmail account, by the way). Unfortunately, the vast majority of businesses haven’t adopted similar models, and many unhappy employees could likely attest to this.

To be sure, I am not denying the value of monetary incentives, but it is vital to understand that they do not always work. I am also not saying that flow is a complete recipe for well-being, though it is an important ingredient. The takeaway is that monetary incentives can prevent people from engaging in flow, which thereby decreases productivity, creativity, and, most importantly, well-being. It is a cliché and obvious to suggest that people are better off when they are intrinsically motivated, but it seems we continue to ignore this simple fact.


Don’t Blink! What Are Behavioral Studies Really Saying?

Several years ago, Malcolm Gladwell’s book Blink described a number of interesting accounts of how we make decisions based not on reason but on what he called “rapid cognition.” For example, Gladwell explained how small adjustments in how a product is presented can greatly change its sales:

“Christian Brothers wanted to know why, after years of being the dominant brand in [brandy sales], it was losing market share to E&J. Their brandy wasn’t more expensive. It wasn’t harder to find in the store. And they weren’t being out-advertised. “The problem [was] not the product and it [was] not the branding. It [was] the package.” Christian Brothers looked like a bottle of wine: it had a long, slender spout and a simple off-white label. E&J, by contrast, had a far more ornate bottle: more squat, like a decanter, with smoked glass, foil wrapping around the spout, and a dark, richly textured label. To prove their point, Rhea and his colleagues did [a] test. They served two hundred people Christian Brothers Brandy out of an E&J bottle, and E&J Brandy out of a Christian Brothers bottle. Which brandy won? Christian Brothers, hands-down.”

Gladwell’s brandy story is an example of sensation transference, a fancy phrase describing people’s tendency to unconsciously assess a product through emotion or sensation instead of reason. Since Blink was published, behavioral economic studies like these have become more popular with the public and academia. Many of them illustrate how the cognitive mechanisms that constitute our decisions are outside our conscious control.

One of my favorites is a well-known study by Eric Johnson and Daniel Goldstein. In it, they found that the tendency for people to sign up for an organ donation program in several European countries was largely a function of how a question was presented. The countries with nearly 100 percent sign-up rates used forms that read, “Check the box below if you don’t want to participate in the organ donor program.” The other countries, with sign-up rates no higher than 28 percent, used forms that read, “Check the box below if you want to participate in the organ donor program.” As the graph illustrates, a simple change in wording can have a huge influence.

Then there is the fly-in-the-urinal example, another favorite of mine. At Schiphol Airport in Amsterdam, authorities etched tiny images of flies in the urinals in an attempt to, shall we say, improve aim. It worked: spillage was reduced by 80%.

As I said, behavioral studies like these are becoming commonplace as more and more psychologists publish books. But as cool as these studies are, their honeymoon phase is ending. Now, psychologists are trying to figure out what to do with them. So far, they have taken a couple of different routes. The studies are used to…

  • Improve policy and business decisions: Ariely’s The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home and Thaler and Sunstein’s Nudge: Improving Decisions About Health, Wealth, and Happiness.
  • Improve well-being: Martin Seligman’s Authentic Happiness and Flourish, and Jonathan Haidt’s The Happiness Hypothesis.
  • Improve decision-making at the individual level: Carol Tavris and Elliot Aronson’s Mistakes Were Made, Jonah Lehrer’s How We Decide, and Kathryn Schulz’s Being Wrong.
However, if you read these books closely, you’ll find many overlapping points, like these two:

We travel to France, meet a couple from our hometown, and instantly become touring buddies because compared with all those French people who hate us when we don’t try to speak their language and hate us more when we do, the hometown couple seems exceptionally warm and interesting… But when we have them over for dinner a month after returning home, we are surprised to find that our new friends are rather boring and remote compared with our regular friends.


In Barcelona… I met Jon, an American tourist who, like me, did not speak any Spanish. We felt an immediate camaraderie… Jon and I ended up having a wonderful dinner and a deeply personal discussion… we exchanged e-mail addresses… [and] about six months later, Jon and I met again for lunch in New York. This time, it was hard for me to figure out why I’d felt such a connection with him, and no doubt he felt the same. We had a perfectly amicable and interesting lunch, but it lacked the intensity of our first meeting.

Pretty similar, right? The first is from Daniel Gilbert’s Stumbling on Happiness, which discusses “how well the human brain can imagine its own future,” and the second is from Dan Ariely’s Predictably Irrational, which outlines detriments to everyday reasoning. Their similarities aren’t a huge surprise, though. Both books exist in roughly the same area of study – the psychology of decision-making and behavioral economics. However, they raise an important question: how are we treating behavioral psychology data?

On the one hand, you could say that psychologists – at least those who write popular books – are being a bit trigger-happy when discussing the implications of behavioral studies. It is a fair point, and one that comes up often. On the other hand, you could claim that it is a good thing these studies are being so widely applied – they are getting the public and academia (especially economists) excited about psychology.

Keeping both points in mind, I encourage psychologists and lay readers to keep their skeptic caps on at all times. In the meantime, psychologists will have to figure out what all their data is really saying…


My $10,000 Blog

I’ve always thought that my blog is good. You’d have to pay me a lot to shut it down. Just how much? Probably a few thousand dollars at least. Of course, it probably isn’t worth more than a few cents, but I’m only human, and it’s natural for me to overvalue items or services that I own. This tendency is referred to as the endowment effect, and the Wendy’s commercial sums it up nicely. The man on the left is more willing to give up a dollar than his equally valued Double Stacker because he owns the Double Stacker.

The endowment effect is well established in psychology and behavioral economics. It first appeared in the literature when Richard Thaler published “Toward a Positive Theory of Consumer Choice” in 1980, and its effects have been reproduced in a number of experiments.

In one, researchers divided Cornell undergrads into two groups: one was given coffee cups and the other was given nothing. Then they asked the former group to estimate how much they would sell the cups for and the latter group how much they would buy the cups for (the cups were being sold for $6 at the Cornell bookstore). The findings clearly illustrate the endowment effect: those with the cups were “unwilling to sell for less than $5.25,” while those without the cups were “unwilling to pay more than $2.25-$2.75.”
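
Taking the Cornell numbers at face value, the size of the valuation gap is easy to compute. The $2.50 buying figure below is my midpoint of the quoted $2.25-$2.75 range, not a number from the study:

```python
# Willingness-to-accept vs. willingness-to-pay, from the figures quoted above.
median_wta = 5.25  # owners' minimum selling price ("unwilling to sell for less")
median_wtp = 2.50  # non-owners' maximum offer: midpoint of the quoted $2.25-$2.75
gap = median_wta / median_wtp
print(f"Owners valued the cups roughly {gap:.1f}x higher than non-owners")
```

In other words, merely owning the cup roughly doubled its perceived value.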

Another experiment, by Dan Ariely, Michael Norton, and Daniel Mochon, illustrates our tendency to overvalue items we are emotionally attached to – another version of the endowment effect. They set up a booth at the Harvard University student center and offered students a chance to create origami frogs. Ariely, Norton, and Mochon wanted to see if the students who created origami frogs valued them more highly than students who did not. To do this, they asked half of the students to construct origami frogs and estimate their value, and the other half to estimate the frogs’ value without constructing them. The findings confirmed previous examinations of the endowment effect: the students who made the origami frogs valued them about 18 cents higher than the students who did not. (Ariely calls this “the IKEA effect,” after assembling an IKEA toy chest and noticing how much more he valued it than his family members did.)

Here is the interesting part of the endowment effect: for the most part, it has only been studied in contrast to neoclassical economic theory, which holds that consumers are rational actors. And because the endowment effect breaks an axiom of neoclassical theory – that the price of a good is objective – it has been labeled an irrational behavior.

But I think this is only half the story.

I was reading Richard Dawkins’ book The Greatest Show on Earth the other day when I came across an interesting passage. Dawkins was explaining that the reason we think babies are so cute is that early humans who evolved to find their babies cute were more likely to nurture them, care for them, play with them, and raise them to be healthy than those who did not. In other words, it is evolutionarily advantageous to think our babies are cute – one way (of many) our genes make sure they get passed on to the next generation.

I bring this up to say that we are endowed to find our children adorable in the same way that we are endowed to overvalue our possessions. That is, the endowment effect is a survival technique handed to us through natural selection – it exists because it is evolutionarily advantageous to overvalue possessions that are important for survival. Think about it. If you were a prehistoric person, and you possessed a tool that helped you hunt, make fires, and build shelters, wouldn’t it be wise for you to overvalue this possession? I am not the first person to make this argument, and I don’t think it is that controversial. But things do get contentious when you try to qualify the endowment effect as rational or not.

Whereas psychologists and economists see the endowment effect as irrational, evolutionary biology gives us a strong case for it being entirely rational; Ariely and his colleagues suggest that it is irrational relative to economic theory, but Dawkins suggests the opposite on evolutionary terms. I am afraid this debate might be one of words. Clearly, the qualifications for rational behavior are not absolute; they are relative. As such, it is impossible to objectively say what is or isn’t rational. Perhaps this is an innocuous claim, but as anyone involved in psychology or behavioral economics will tell you, the qualifications of rational behavior are not so easily agreed upon.

At any rate, we should stop thinking about the endowment effect only in the context of reasoning, decision-making, or economics. As Dawkins’ example illustrates, it can also be understood in terms of evolutionary biology.

