
Posts tagged ‘Timothy Wilson’

Do We Know What We Like?


People are notoriously bad at explaining their own preferences. In one study, researchers asked several women to choose their favorite pair of nylon stockings from a group of twelve. After they made their selections, the scientists asked them to explain their choices. The women mentioned things like texture, feel, and color. All of the stockings, however, were identical. The women manufactured reasons for their choices, believing that they had conscious access to their preferences.

In other words: “That voice in your head spewing out eloquent reasons to do this or do that doesn’t actually know what’s going on, and it’s not particularly adept at getting you nearer to reality. Instead, it only cares about finding reasons that sound good, even if the reasons are actually irrelevant or false. (Put another way, we’re not being rational – we’re rationalizing.)”

Our ignorance of our wants and desires is well established in psychology. Several years ago Timothy Wilson conducted one of the first studies to illustrate this. He asked female college students to pick their favorite posters from five options: a van Gogh, a Monet, and three humorous cat posters. He divided them into two groups: the first (non-thinkers) was instructed to rate each poster on a scale from 1 to 9; the second (analyzers) answered questionnaires asking them to explain why they liked or disliked each of them. Finally, Wilson gave each subject her favorite poster to take home.

Wilson discovered that the preferences of the two groups were quite different. About 95 percent of the non-thinkers went with van Gogh or Monet. The analyzers, on the other hand, went with a humorous cat poster about 50 percent of the time. The surprising results of the experiment showed themselves a few weeks later. In a series of follow-up interviews, Wilson found that the non-thinkers were much more satisfied with their posters. What explains this? One author says that “the women who listened to their emotions ended up making much better decisions than the women who relied on their reasoning powers. The more people thought about which posters they wanted, the more misleading their thoughts became. Self-analysis resulted in less self-awareness.”

Wilson found similar results in an experiment involving jams. And other researchers, including Ap Dijksterhuis of Radboud University in the Netherlands, have also demonstrated that we know if we like something, but we don’t know why – and the more time we spend deliberating, the worse off we are. Freud, then, was right: we’re not even the masters of our own house.

Our tendency to make up reasons for our preferences is of particular importance for advertisers, who sometimes rely on focus groups. But if we don’t know what we like, then how are ad agencies supposed to know what we like? The Mary Tyler Moore Show and Seinfeld, for example, are famous for testing terribly even though they went on to become two of the most popular shows in the history of TV. By the same token, many shows that tested well flopped. As Philip Graves, author of Consumer.ology, reminds us: “As long as we protect the illusion that we ourselves are primarily conscious agents, we pander to the belief that we can ask people what they think and trust what we hear in response. After all, we like to tell ourselves we know why we do what we do, so everyone else must be capable of doing the same, mustn’t they?”

Stories of the failures of market research are not uncommon. Here’s one from Gladwell.com:

At the beginning of the ’80s, I was a product manager at General Electric, which at the time had a leading market share in the personal audio industry (radios, clock radios, cassette recorders, etc.). Sony had just introduced the Walkman, and we were trying to figure out how to react. Given the management structure of the day, we needed to prove the business case. Of course, we did focus groups!

Well, the groups we did were totally negative. This was after the Walkman had been on the scene for months, maybe a year. The groups we did felt that personal music would never take off. Would drivers have accidents? Would bicycle riders get hit by drivers?

If we listened to “typical” consumers, the whole concept was DOA.

This type of reaction is probably the reason that there is the feeling of a “technological determination” on the part of the electronics community. It leads to the feeling that you should NEVER listen to the consumer, and just go about introducing whatever CAN be produced.

At the time, we had a joke about Japanese (Sony/Panasonic/JVC) market research: “Just introduce something. If it sells, make more of it.” It’s one way of doing business. On the other hand, when I was hired by a Japanese company in the mid-’80s, I was asked how GE could get by with introducing such a limited number of models. Simple, I said: “We tested them before we introduced them.”

History tells which method has worked better.

One person who understood this was Steve Jobs. He never cared for market research or focus groups because, as he once said, “people don’t know what they want until you show it to them.” Instead, Jobs was a pseudo-Platonist about his products. He believed that there was an ideal music player, phone, tablet, and computer, and he trusted customers to naturally recognize perfection when they saw it. When asked what market research went into the iPad, his New York Times obituary reports, Mr. Jobs replied: “None. It’s not the consumers’ job to know what they want.”

I’m not the only one with an ancient Greek take on Jobs. Technology-theory contrarian Evgeny Morozov compared Jobs to Plato a few years back. He said:

The notion of essence as invoked by Jobs and Ive [the top Apple designer] is more interesting and significant—more intellectually ambitious—because it is linked to the ideal of purity. No matter how trivial the object, there is nothing trivial about the pursuit of perfection. On closer analysis, the testimonies of both Jobs and Ive suggest that they did see essences existing independently of the designer—a position that is hard for a modern secular mind to accept, because it is, if not religious, then, as I say, startlingly Platonic.

Does this mean all marketers should think platonically? Not necessarily; Jobs, to be sure, was an outlier. But it does remind us that many times we don’t know what we like.

Is Character More Important Than GPA?

KIPP – the Knowledge Is Power Program – is a chain of college-preparatory charter schools known for long hours, lofty academic demands, and high graduation rates. However, a recent Times article sheds light on a problem that KIPP is facing.

Almost every member of the cohort did make it through high school, and more than 80 percent of them enrolled in college. But then the mountain grew steeper, and every few weeks, it seemed, Levin (co-founder of KIPP) got word of another student who decided to drop out. According to a report that KIPP issued last spring, only 33 percent of students who graduated from a KIPP middle school 10 or more years ago have graduated from a four-year college.

The article goes on to describe how educators and positive psychologists such as Martin Seligman and Chris Peterson are realizing that character, along with GPA, is an important contributor to academic success.

For the headmaster of an intensely competitive school, Randolph, who is 49, is surprisingly skeptical about many of the basic elements of a contemporary high-stakes American education. He did away with Advanced Placement classes in the high school soon after he arrived at Riverdale (Randolph is the headmaster); he encourages his teachers to limit the homework they assign; and he says that the standardized tests that Riverdale and other private schools require for admission to kindergarten and to middle school are “a patently unfair system” because they evaluate students almost entirely by I.Q. “This push on tests,” he told me, “is missing out on some serious parts of what it means to be a successful human.”

The most critical missing piece, Randolph explained as we sat in his office last fall, is character — those essential traits of mind and habit that were drilled into him at boarding school in England and that also have deep roots in American history. “Whether it’s the pioneer in the Conestoga wagon or someone coming here in the 1920s from southern Italy, there was this idea in America that if you worked hard and you showed real grit, that you could be successful,” he said. “Strangely, we’ve now forgotten that. People who have an easy time of things, who get 800s on their SAT’s, I worry that those people get feedback that everything they’re doing is great. And I think as a result, we are actually setting them up for long-term failure. When that person suddenly has to face up to a difficult moment, then I think they’re screwed, to be honest. I don’t think they’ve grown the capacities to be able to handle that.”

The KIPP statistics and Randolph’s story bring me to an important study highlighted in Timothy Wilson’s latest book, Redirect. Wilson and his colleague targeted college freshmen who were worrying about their grades and not doing well academically. They brought the students into their lab and told them they were surveying first-year students’ attitudes toward college life (this was a cover story, of course). Over the next thirty minutes, the freshmen read survey results that highlighted many students who overcame early academic problems. They also watched videotaped interviews of upper-class students describing similar stories. One student, for example, “reported a steady increase [in GPA]… getting a 2.0, a 2.6, and then a 3.2.”

What happened over the next four years is noteworthy. In Wilson’s words, “compared to a randomly assigned control group of students who didn’t get any information about grade improvement, those who got the story prompt achieved better grades in the following year and were less likely to drop out of college.” How did a small indirect intervention have such a large impact?

Wilson explains that students who do poorly at the beginning of their collegiate academic careers typically follow one of two paths: a pessimism cycle or an optimism cycle. As the graphic below illustrates, those in the pessimism cycle are quick to doubt their intellectual abilities and easily deem themselves failures. Those in the optimism cycle, on the other hand, usually blame their poor work ethic and strive to improve. In short, Wilson’s study encouraged students to filter into the optimism cycle and out of the pessimism cycle.

And then there is Carol Dweck’s study, which I mentioned a few posts ago:

A few years back Dweck and her team studied how praise affects student performance. Her team ran experiments on 400 fifth-graders. First, they pulled the students out of their classrooms for a nonverbal IQ test; it was fairly easy and most kids did fine. The important part came after the test. Once the kids received their scores they were told one of two things: “You must be smart,” or, “You must have worked really hard.” Next, the students were asked to choose between two tests, one that was more difficult than the first and one that was as easy as the first. They found that those praised for being smart tended to opt for the easy test while those praised for their hard work almost always opted for the more difficult test.

The connection to Wilson’s study is obvious – hard work pays off, and blaming or relying on intelligence can actually be harmful.

Back to KIPP and Randolph. The takeaway from the Times article is that character is an important part of education, one that is often overlooked because test scores and GPA neither capture nor incentivize it. Wilson and Dweck are suggesting that with just a bit of intervention, students learn how important a hard-working attitude is. That is character. Hopefully educators have their ears on this conversation. If schools are smart, they will incorporate Wilson and Dweck’s findings into their curricula to improve character along with test scores.


A Brief History of Popular Psychology: An Essay

It is unclear when the popular psychology movement started – perhaps with Malcolm Gladwell’s The Tipping Point or Steven Levitt and Stephen Dubner’s Freakonomics – or how exactly it is defined, but it can be broadly described as the public’s growing interest in understanding people and events from a sociological, economic, psychological, or neurological point of view.

Over the last decade the New York Times bestseller list has seen a number of these books: Ariely’s Predictably Irrational (2008) and The Upside of Irrationality (2010), Gilbert’s Stumbling on Happiness (2006), Haidt’s The Happiness Hypothesis (2006), Lehrer’s How We Decide (2009), and Thaler and Sunstein’s Nudge (2008). What unites them is their attempt to “explore the hidden side of everything” by synthesizing numerous academic studies in a relatable way, drawing upon interesting real-world examples, and providing appealing suggestions for how one can better understand the world, and one’s own decisions and behaviors within it.

The popular psychology movement is the result of a massive paradigm shift, what many call the cognitive revolution, that took place in the second half of the 20th century. Although its starting point is unclear, George A. Miller’s 1956 “The Magical Number Seven, Plus or Minus Two” and Noam Chomsky’s 1959 review of B. F. Skinner’s Verbal Behavior were, among others, important publications that forced psychology to become increasingly cognitive. Whereas behaviorists – who represented the previous paradigm – considered only the external, those involved in the cognitive revolution sought to explain behavior by studying the internal; behavior was therefore understood as dictated by the brain, not the environment.

The cognitive revolution naturally gave rise to the cognitive sciences – neuroscience, linguistics, artificial intelligence, and anthropology – all of which began to study how human brains process information. A big part of the revolution revolved around the work of psychologists Daniel Kahneman and Amos Tversky, who developed the heuristics and biases program in the early 1970s and changed the way human judgment was understood. The program had two goals. First, it demonstrated that the mind relies on a series of mental shortcuts, or heuristics, that “provide subjectively compelling and often quite serviceable solutions to… judgmental problems.” And second, it suggested that underlying these heuristics were biases that “[departed from] normative rational theory.”

Kahneman and Tversky’s work was vital because it questioned the notion that judgment was an extensive exercise based on algorithmic processes. Instead, it suggested that people’s decisions and behaviors are actually influenced by “simple and efficient… [and] highly sophisticated… computations that the mind had evolved to make.”

Their work was complemented by Richard Nisbett and Lee Ross’s 1980 book Human Inference: Strategies and Shortcomings of Social Judgment, which outlined how people’s “attempts to understand, predict, and control events in their social sphere are seriously compromised by specific inferential shortcomings.” From this, a list of cognitive biases began to accumulate: attentional bias, confirmation bias, the endowment effect, status quo bias, the gambler’s fallacy, the primacy effect, and more.

The heuristics and biases program was just one part of the cognitive revolution, however. The other equally important aspects came a bit later, when psychologists began to empirically study how unconscious processing influences behavior and conscious thought. These studies stemmed from the 1977 paper “Telling More Than We Can Know: Verbal Reports on Mental Processes,” by Richard Nisbett and Timothy Wilson. Nisbett and Wilson argued that “there may be little or no direct introspective access to higher order cognitive processes,” thereby introducing the idea that most cognition takes place automatically at the unconscious level.

Wilson continued his research in the ’80s and ’90s, eventually developing the concept of the “adaptive unconscious,” a term he uses to describe our ability to “size up our environments, disambiguate them, interpret them, and initiate behavior quickly and non-consciously.” He argued that the adaptive unconscious is an evolutionary adaptation used to navigate the world with limited attention. This is why we are able to drive a car, type on a computer, or walk without having to think about it.

Complementing Wilson was Yale psychologist John Bargh, who significantly contributed to the study of how certain stimuli influence people’s implicit memory and behavior. In numerous experiments, Bargh demonstrated that people’s decisions and behaviors are greatly influenced by how they are “primed.” In one case, Bargh showed that people primed with rude words such as “aggressively,” “bold,” and “intrude” were on average about 4 minutes quicker to interrupt an experimenter than participants who were primed with polite words such as “polite,” “yield,” and “sensitively.”

Also in the ’80s and ’90s, neuroscientists began to understand the role of emotion in our decisions. In his 1994 book Descartes’ Error, Antonio Damasio explicates the “somatic marker hypothesis” to suggest that, contrary to traditional Western thought, a “reduction in emotion may constitute an equally important source of irrational behavior.” NYU professor Joseph LeDoux was also instrumental in studying emotions. Like Wilson, Nisbett, and Bargh, LeDoux argued that an understanding of conscious emotional states requires an understanding of “underlying emotional mechanisms.”

Along with emotion and the unconscious, intuition has been heavily researched in the past few decades, both as a way of thinking and as a talent. As a way of thinking, intuition more or less corresponds to Wilson’s adaptive unconscious; it is an evolved ability that helps people effortlessly and unconsciously disambiguate the world – the ability, for example, to easily distinguish males from females, one’s own language from another, or danger from safety.

Intuition as a talent was found to be responsible for a number of remarkable human capabilities, most notably those of experts. As Malcolm Gladwell says in his 2005 bestseller Blink, intuitive judgments “don’t logically and systemically compare all available options.” Instead, they rest on gut feelings and first impressions that cannot be explained rationally. And most of the time, he continues, acting on these initial feelings is just as valuable as acting on more “thought out” feelings.

By the 1990s, when the “revolution in the theory of rationality… [was] in full development,” the line between rational and irrational behavior became blurred as more and more studies made it difficult to determine what constituted rational behavior. On one hand, some (mainly economists) maintained rationality as the norm even though they knew that people deviated from it. On the other hand, individuals like Herbert Simon and Gerd Gigerenzer argued that the standards for rational behavior should be grounded in ecological and evolutionary considerations. In either case, though, rational choice theory framed the debate. Because of this, the 1990s saw books such as Stuart Sutherland’s Irrationality (1994), Massimo Piattelli-Palmarini’s Inevitable Illusions: How Mistakes of Reason Rule Our Minds (1996), and Thomas Gilovich’s How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life (1991). Each perpetuated the idea that behavior and decision-making were to be judged against a certain standard or norm (in this case, rational choice theory), as the titles imply.

However, when all of the facets of the cognitive revolution – cognitive biases and heuristics, the unconscious, emotion, and intuition – are considered, the idea that we act rationally begins to look extremely weak; this observation has heavily influenced the popular psychology movement. Pick up any popular psychology book and you will find Kahneman, Tversky, Nisbett, Wilson, Bargh, Damasio, LeDoux, and others heavily cited in arguments that run contrary to rational actor theory.

What’s interesting, and my last post touched on this, is that each popular psychology author has something different to say: Dan Ariely pushes behavioral economics to argue that we are all predictably irrational; Damasio argues that reason requires emotion; Gladwell, David Myers, and Wilson suggest that most thought is unconscious and our intuitive abilities are just as valuable as our rational ones; Daniel Gilbert and Jonathan Haidt illustrate how our cognitive limitations affect our well-being; Barry Schwartz shows how too much choice can actually hurt us; and Jonah Lehrer draws upon neuroscience to show the relationship between emotion and reason in our decision-making.

As a result of all these assertions, the human condition has become seriously complicated!

If there is something to conclude from what I have outlined, it is this. Implicit in any evaluation of behavior is the assumption that human beings have a nature or norm, and that their behavior is deviating from this nature or norm. However, the popular psychology movement shows that our brains are not big enough to understand human behavior, and our tendency to summarize it so simplistically is a reflection of this. We aren’t rational, irrational, or intuitive; we are, in the words of Ke$ha, who we are.

Priming Revisited

In 1966, psychologists Eagle, Wolitzky, and Klein wanted to know more about implicit memory – memory of experiences that unconsciously influences the performance of a task – so they had participants watch three one-second clips of a tree trunk and then draw a nature scene. Here was the catch: one group of participants was shown a normal trunk (the one on the left), and the other group was shown a trunk that was subtly outlined like a duck (the one on the right). To the researchers’ surprise, those who were shown the trunk that resembled a duck were more likely to depict a duck in their nature scene than those who were shown the trunk that didn’t resemble a duck, even though the participants who depicted a duck never reported seeing one.

This experiment was one of the first to demonstrate the power of priming, a psychology term that Wikipedia defines as the “implicit memory effect in which exposure to a stimulus influences a response to a later stimulus.” Since the 1960s, many more priming studies have been done, the bulk of them coming from Yale psychologist John Bargh.

In one experiment (some of you may know this from Gladwell’s Blink), Bargh, Chen, and Burrows asked participants to make four-word sentences from 30 sets of five-word combinations. The experimenters primed participants by embedding “rude” and “polite” words into the five-word combinations; the words included: aggressively, bold, rude, bother, disturb, intrude, annoyingly, respect, honor, considerate, appreciate, patiently, polite, yield, and sensitively. When the participants finished, they were instructed to deliver the test to an experimenter who was in another room and ask for further instruction. Whenever a participant arrived in the experimenter’s office, however, Bargh made sure that the experimenter was busy talking to someone else – usually a confederate who was having “trouble” understanding some directions. Bargh measured the amount of time participants waited before they interrupted the conversation, and found that participants who were primed with the “rude” words interrupted the conversation on average about 4 minutes earlier than participants who were primed with the “polite” words.

In the same paper, Bargh demonstrated that people primed with old-age words like “Florida,” “grey,” and “wrinkle” walked more slowly than those who were not. He also found that African-Americans who were primed with negative stereotypes of their race acted more hostile and irritable than Caucasians. Bargh concluded that “the automatic activation of one’s stereotypes of social groups, by the mere presence of group features (e.g., African-American faces), can cause one to behave in line with that stereotype without realizing it.”

Studies like these have been replicated by Bargh and others. Psychologists have found that sports drinks prime people to perform physical activities better; food advertising influences people to eat more; the presence of a backpack makes people more cooperative than the presence of a briefcase; and the temperature of a cup can influence how people perceive interpersonal relationships (it turns out that if you are holding a hot coffee, you will find strangers much friendlier than if you are holding a cold soda). Along with other findings in psychology and neuroscience, priming has clearly shown that no matter how Socratic you get, you will never be able to access the mental mechanisms that constitute your brain (the go-to metaphor here is an iceberg, with the tip representing the conscious mind and the rest representing the unconscious).

However, in a 2006 article in the European Journal of Social Psychology, Bargh admitted that priming studies had reached their “childhood’s end,” and that he “[needed] to move on to research questions such as how these multiple effects of single primes occur,” and “how these multiple simultaneous priming influences in the environment get distilled into nonconscious social action that has to happen serially, in real time.”

The former point is the generation problem, and it highlights our misguided tendency to interpret priming studies as having to do with single concepts. For example, if experimenters are priming a participant with the idea of generosity, “the effect of the prime… just depends on which dependent variable the experimenter happens to be interested in.” This means that because generosity could manifest itself in a number of ways, it is a mistake to believe that it influences behavior in just one way. Overcoming the generation problem, then, requires an understanding of all the different ways a single prime could influence behavior.

The latter point is the reduction problem, and it asks how brains reduce and distill stimulus-rich environments “in a world in which you can only do one thing at a time.” In other words, Bargh wants to know how and why our cognition discriminates one prime from another. For example, if we are walking down a city street, our senses are exposed to a wide variety of stimuli: smells, sights, sounds, etc. The question is, which one wins?

I’m not sure how these problems could be overcome any time soon because they seem to require an understanding of the brain that is years away. Until then, psychologists must remain humble. But, as Bargh says, “By constraining and informing our models of nonconscious processes in social psychology with theoretical and empirical developments in these related fields of inquiry, we can help assure that research in our own little neck of the woods will continue to matter in the long run, and to the larger picture.”

