
Posts tagged ‘Psychology’

Jonah Lehrer and the New Science of Creativity

The following is a repost of my latest article originally posted on ScientificAmerican.com. It is a review of Jonah Lehrer’s latest book, Imagine: How Creativity Works, which was released March 19th.

Bob Dylan was stuck. At the tail end of a grueling tour that took him across the United States and through England, he told his manager that he was quitting music. He was physically drained – insomnia and drugs had taken their toll – and unsatisfied with his career. He was sick of performing “Blowin’ in the Wind” and answering the same questions from reporters. After finishing a series of shows at a sold-out Royal Albert Hall in London, he escaped to a cabin in Woodstock, New York to rethink his creative direction.

What came next would change rock ‘n’ roll forever. As soon as Dylan settled into his new home he grabbed a pencil and started writing whatever came to his mind. Most of it was a mindless stream of consciousness. “I found myself writing this song, this story, this long piece of vomit, twenty pages long,” he once told interviewers. The song he was writing started like any children’s book – “Once Upon a Time” – but what emerged was a tour de force that left people like Bruce Springsteen and John Lennon in awe. A few months later, “Like A Rolling Stone” was released to critical acclaim.

Creativity in the 21st Century

Every creative journey begins with a problem. For Dylan it was the predictability and shallowness of his previous songs. He wasn’t challenging his listeners enough; they were too comfortable. What Dylan really wanted to do was replace the expected with the unexpected. He wanted to push boundaries and avoid appealing to the norm; he wanted to reinvent himself.

For most of human history, the creative process has been associated with higher powers; it was about channeling the muses or harnessing one’s inner Apollonian and Dionysian; it was otherworldly. Science has barely touched creativity. In fact, in the second half of the 20th century less than 1 percent of psychology papers investigated aspects of the creative process. This changed in the last decade. Creativity is now one of the most popular topics in cognitive science.

The latest installment is Jonah Lehrer’s Imagine: How Creativity Works – released today. With grandiose style à la Proust Was a Neuroscientist and How We Decide, Lehrer tells stories of scientific invention and tales of artistic breakthroughs – including Dylan’s – while weaving in findings from psychology and neuroscience. What emerges from his chronicles is a clearer picture of what happens in the brain when we are exercising – either successfully or unsuccessfully – our creative juices. The question is: what are the secrets to creativity?

How To Think

There’s nothing fun about creativity. Breakthroughs are usually the tail end of frustration, sweat and repeated failure. Consider the story of Swiffer. Back in the 1980s Procter and Gamble hired the design firm Continuum to study how people cleaned their floors. The team “visited people’s homes and watched dozens of them engage in the tedious ritual of floor cleaning. [They] took detailed notes on the vacuuming of carpets and the sweeping of kitchens. When the notes weren’t enough, they set up video cameras in living rooms.” The leader of the team, Harry West, described the footage as the most boring stuff imaginable. After months of poring through the tapes he and his team knew as much about how people cleaned their floors as anybody else – very little.

But they stuck with it. And eventually landed on a key insight: people spent more time cleaning their mops than they did cleaning the floor. That’s when they realized that a paper towel could be used as a disposable cleaning surface. Swiffer launched in the spring of 1999 and by the end of the year it had generated more than $500 million in sales.

Discovery and invention require relentless work and focus. But when we’re searching for an insight, stepping back from a problem and relaxing is also vital; the unconscious mind needs time to mull it over before the insight happens – what Steven Berlin Johnson calls the “incubation period.” This is the story of Arthur Fry, which Lehrer charmingly brings to life.

In 1974 Fry attended a seminar given by his 3M colleague Spencer Silver about a new adhesive. It was a weak paste, not even strong enough to hold two pieces of paper together. Fry tried to think of an application but eventually gave up.

Later in the year he found himself singing in his church’s choir. He was frustrated with the makeshift bookmarkers he fashioned to mark the pages in his hymnal; they either fell out or got caught in the seams. What he really needed was glue strong enough so his bookmarkers would stick to the page but weak enough so they wouldn’t rip the paper when he removed them. That’s when he had his moment of insight: why not use Silver’s adhesive for the bookmark? He called it the Post-it Note.

Fry’s story fits well with tales of insight throughout history. Henri Poincaré is famous for thinking up non-Euclidean geometry while boarding a bus, and then there’s Newton’s apple-induced revelation about the law of gravity. Lehrer delves into the relevant research to make sense of these stories at the neurological level. Fascinating studies from Mark Jung-Beeman, John Kounios and Joy Bhattacharya give us good reason to take Lehrer’s advice: “Rather than relentlessly focusing, take a warm shower, or play some Ping-Pong, or walk on the beach.”

When it comes to the creative process, then, it’s important to balance repose with Red Bull. As Lehrer explains: “the insight process… is a delicate mental balancing act. At first, the brain lavishes the scarce resource of attention on a single problem. But, once the brain is sufficiently focused, the cortex needs to relax in order to seek out the more remote association in the right hemisphere, which will provide the insight.”

Other People

The flip side of the creative process is other people. The world’s great ideas are as much about our peers as they are about the individual who makes it into the textbook. To explore how the people around us influence our ideas Lehrer explains the research of Brian Uzzi who, a few years ago, set out to answer this question: what determines the success of a Broadway musical?

With his colleague Jarrett Spiro, Uzzi thoroughly examined a data set that included 2,092 people who worked on 474 musicals from 1945 to 1989. They considered metrics such as reviews and financial success and controlled for talent and any economic or geographic advantages – big New York City musicals would likely flub the data. They found that productions failed for two reasons. The first was too much like-mindedness: “When the artists were so close that they all thought in similar ways… theatrical innovation [was crushed].” On the other hand, when “the artists didn’t know one another, they struggled to work together and exchange ideas.” Successful productions, in contrast, found an even distribution between novelty and familiarity among their members. This is why West Side Story was such a hit: it balanced new blood with industry veterans.

This is what the website InnoCentive.com teaches us. InnoCentive is a website where, as Matt Ridley would suggest, ideas go to have sex. The framework is simple: “seekers” go to the website to post their problems for “solvers.” The problems aren’t trivial but the rewards are lucrative. For example, the Sandler-Kenner Foundation is currently offering a $10,000 reward for anybody who can create “diagnostic tools for identification of adenocarcinoma and neuroendocrine pancreatic cancer at early stages of development.” Another company is offering $8,000 to anyone who can prevent ice formation inside packages of frozen foods.

What’s remarkable about InnoCentive is that it works. Karim Lakhani, a professor at Harvard Business School, conducted a study that found that about 40 percent of the difficult problems posted on InnoCentive were solved within 6 months. A handful of the problems were even solved within days. “Think, for a moment,” Lehrer says, “about how strange this is: a disparate network of strangers managed to solve challenges that Fortune 500 companies like Eli Lilly, Kraft Foods, SAP, Dow Chemical, and General Electric—companies with research budgets in the billions of dollars—had been unable to solve.”

The secret was outside thinking:

The problem solvers on InnoCentive were most effective when working at the margins of their fields. In other words, chemists didn’t solve chemistry problems, they solved molecular biology problems, just as molecular biologists solved chemistry problems. While these people were close enough to understand the challenges, they weren’t so close that their knowledge held them back and caused them to run into the same stumbling blocks as the corporate scientists.

This is the lesson from West Side Story: great ideas flourish under the right balance of minds. John Donne was right: no man is an island.

Conclusion

There are so many wonderful nuggets to take away from Imagine, and Lehrer does an excellent job of gathering stories from history to bring the relevant psychological research to life. The few stories and studies I’ve mentioned here are just the tip of the iceberg.

When I asked him what the takeaway of his book is (if there could be just one) he said:

The larger lesson is that creativity is a catchall term for a bundle of distinct processes. If you really want to solve the hardest problems you will need all these little hacks in order to solve these problems. This is why what the cognitive sciences are saying about creativity is so important.

He’s right. We think about creativity as being a distinct thing and as people being either creative or not, but the empirical research Lehrer discusses tells a different story. Creativity engages multiple cognitive processes that anybody can access.

This is why Dylan’s story is so important: It’s the story of a musician growing discontented with his creative direction, having a moment of insight, and working tirelessly to bring that insight and new sounds to life, ultimately changing the norm. Dylan’s genius isn’t about a specific skill that nobody else possessed. It’s about his ability to wade through the creative process by using the right parts of the brain at the right times.

Not everybody can be Dylan, but Imagine reminds us that as mysterious and magical as creativity seems, “for the first time in human history, it’s possible to learn how the imagination actually works.”

Do We Know What We Like?

 

People are notoriously bad at explaining their own preferences. In one study researchers asked several women to choose their favorite pair of nylon stockings from a group of twelve. After they made their selections the scientists asked them to explain their choices. The women mentioned things like texture, feel, and color. All of the stockings, however, were identical. The women manufactured reasons for their choices, believing that they had conscious access to their preferences.

In other words: “That voice in your head spewing out eloquent reasons to do this or do that doesn’t actually know what’s going on, and it’s not particularly adept at getting you nearer to reality. Instead, it only cares about finding reasons that sound good, even if the reasons are actually irrelevant or false. (Put another way, we’re not being rational – we’re rationalizing.)”

Our ignorance of our wants and desires is well-established in psychology. Several years ago Timothy Wilson conducted one of the first studies to illustrate this. He asked female college students to pick their favorite posters from five options: a van Gogh, a Monet and three humorous cat posters. He divided them into two groups: The first (non-thinkers) was instructed to rate each poster on a scale from 1 to 9. The second (analyzers) answered questionnaires asking them to explain why they liked or disliked each of them. Finally, Wilson gave each subject her favorite poster to take home.

Wilson discovered that the preferences of the two groups were quite different. About 95 percent of the non-thinkers went with van Gogh or Monet. On the other hand, the analyzers went with the humorous cat poster about 50 percent of the time. The surprising results of the experiment showed themselves a few weeks later. In a series of follow-up interviews, Wilson found that the non-thinkers were much more satisfied with their posters. What explains this? One author says that, “the women who listened to their emotions ended up making much better decisions than the women who relied on their reasoning powers. The more people thought about which posters they wanted, the more misleading their thoughts became. Self-analysis resulted in less self-awareness.”

Wilson found similar results with an experiment involving jams. And other researchers, including Ap Dijksterhuis of Radboud University in the Netherlands, have also demonstrated that we know if we like something, but we don’t know why, and the more time we spend deliberating the worse off we are. Freud, then, was right: we’re not even the masters of our own house.

Our tendency to make up reasons for our preferences is of particular importance for advertisers, who sometimes rely on focus groups. But if we don’t know what we like, then how are ad agencies supposed to know what we like? The TV shows The Mary Tyler Moore Show and Seinfeld, for example, are famous for testing terribly even though they went on to be two of the most popular shows in the history of TV. By the same token, many shows that tested well flopped. As Philip Graves, author of Consumer.ology, reminds us: “As long as we protect the illusion that we ourselves are primarily conscious agents, we pander to the belief that we can ask people what they think and trust what we hear in response. After all, we like to tell ourselves we know why we do what we do, so everyone else must be capable of doing the same, mustn’t they?”

Stories of the failures of market research are not uncommon. Here’s one from Gladwell.com:

At the beginning of the ’80s, I was a product manager at General Electric, which at the time had a leading market share in the personal audio industry (radios, clock radios, cassette recorders, etc.). Sony had just introduced the Walkman, and we were trying to figure out how to react. Given the management structure of the day, we needed to prove the business case. Of course, we did focus groups!

Well, the groups we did were totally negative. This was after the Walkman had been on the scene for months, maybe a year. The groups we did felt that personal music would never take off. Would drivers have accidents? Would bicycle riders get hit by drivers?

If we listened to “typical” consumers, the whole concept was DOA.

This type of reaction is probably the reason that there is the feeling of a “technological determination” on the part of the electronics community. It leads to the feeling that you should NEVER listen to the consumer, and just go about introducing whatever CAN be produced.

At the time, we had a joke about Japanese (Sony/Panasonic/JVC) market research. “Just introduce something. If it sells, make more of it.” It’s one way of doing business. On the other hand, when I was hired by a Japanese company in the mid-80’s, I was asked how GE could get by with introducing such a limited number of models. Simple, I said, “We tested them before we introduced them.”

History tells which method has worked better.

One person who understood this was Steve Jobs. He never cared for market research or focus groups because, as he once said, “people don’t know what they want until you show it to them.” Instead, Jobs was a pseudo-Platonist about his products. He believed that there was an ideal music player, phone, tablet and computer and trusted the customers to naturally recognize perfection when they saw it. When asked what market research went into the iPad, his New York Times obituary reports, Mr. Jobs replied: “None. It’s not the consumers’ job to know what they want.”

I’m not the only one with an ancient Greek take on Jobs. Technology-theory contrarian Evgeny Morozov compared Jobs to Plato a few years back. He said:

The notion of essence as invoked by Jobs and Ive [the top Apple designer] is more interesting and significant—more intellectually ambitious—because it is linked to the ideal of purity. No matter how trivial the object, there is nothing trivial about the pursuit of perfection. On closer analysis, the testimonies of both Jobs and Ive suggest that they did see essences existing independently of the designer—a position that is hard for a modern secular mind to accept, because it is, if not religious, then, as I say, startlingly Platonic.

Does this mean all marketers should think platonically? Not necessarily; Jobs, to be sure, was an outlier. But it does remind us that many times we don’t know what we like.

Rewire Your Brain For Love

Marsha Lucas, PhD, is a neuropsychologist based in Washington DC. She recently released Rewire Your Brain For Love: Creating Vibrant Relationships Using the Science of Mindfulness, a book that explores what neuroscience can teach us about creating and fostering healthy relationships.

Marsha’s book is charming and personal. She speaks with the reader, not to the reader. Along the way she outlines basic neuroanatomy, explains what prevents us from forming strong relationships and describes the benefits of meditation. Throughout the book Marsha reminds readers that mindfulness can change the brain in areas and ways that promote healthier relationships with yourself and others.

Learn more about the book and Marsha by reading our interview below!

     

Give us a little bit of your personal background. Where are you from and where did you grow up? 

I’m a native of New York, where I lived until heading off to college. I’m the daughter of a clinical psychologist and a stay-at-home mom who later became a silversmith.

How did you fall into your current profession as a psychologist?

I’ve always been fascinated by the brain, what’s “us” and what it is that makes us who we are. My dad and I would have these sometimes pretty odd conversations, about things like the Purkinje phenomenon (when, at dusk, your eyes are shifting from mostly color perception to mostly light detection, reds and greens seem to “pop” and appear much more intense), or hypothesizing what the dog might be thinking while he sniffs, um — well, you get the idea.

Why did you decide to write the book? 

I’d been so fortunate to have three important parts of my life converge – my passion for neuroscience, my love of doing psychotherapy, and the tremendous difference that mindfulness practice can make in well-being. Putting them together in everyday language — with some humor and examples mixed in — made such a difference with my patients that eventually patients, colleagues, and friends were all encouraging me to write the book.

Take us through the writing process a little bit. What were the biggest challenges? What was your original idea and how did it change (if it did)? 

The biggest challenge in writing the book wasn’t the writing of the book itself — that was a challenge, but it was exciting and enlivening. I learned that to get a non-fiction book published, you first write a proposal — a business plan and sales pitch, really, plus a couple of sample chapters. Business plans, sales pitches — not exactly the stuff that most psychologists are inclined to do, so it was a steep learning curve. I had one author joke with me, “If you want to create a more mindful life, don’t write a book about creating a more mindful life!”

Ok, now for the good stuff. Meditation is a common theme in the book. So, how can we use it to improve our relationships? 

It seems a little counter-intuitive, doesn’t it? After all, most people don’t say, “I’m really so hungry for a relationship, or to improve the one I have – Hey, I know! I’ll go sign up to learn meditation!” But here’s the connection: Our brains are wired — or not — for healthy relationships very early in life, through our first experiences with those who cared for us. If, like so many people’s, your brain is not, you can improve that wiring through the simple practice of mindfulness. Research (from Harvard, UCLA, and so on) has shown that changes happen in the brain as a result of mindfulness practice, in areas and with pathways that support and promote better emotional resilience, healthier empathy, quicker recovery after an argument, and more. I talk about the changes in terms of seven “high-voltage” relationship benefits.

What can neuroscience teach about reducing and controlling stress? 

Humans have this tremendous capacity for thinking, and thinking about thinking. It’s a good thing, but it also costs us. Robert Sapolsky wrote Why Zebras Don’t Get Ulcers — zebras have their stress and tension when they sense that there’s a lion about to try to make one of them her dinner, but when the threat is over, they go back to a non-stressed baseline and get back to munching the grass. We, on the other hand, dwell on what happened in the past, project into what might happen in the future — and using our busy minds, keep ourselves stressed beyond what serves our well-being. When we acquire the skill of greater regulation of our body’s response to stress or fear, as one example, we’re developing “better” neural pathways, and less stressed-out ways of living our lives.

Throughout the book you advise the reader to be more “mindful.” Tell us what you mean by this. 

Jon Kabat-Zinn, probably the most influential person in popularizing mindfulness in the US, describes mindfulness as paying attention in a particular way, on purpose, in the present moment, and without judgment. Sounds simple, doesn’t it? You’re bringing your attention to something — often your breath — and of course, with our busy minds, your attention wanders off. You notice it, and gently, non-judgmentally bring it back to the original focus. It’s not about “stopping” your mind, but noticing what it’s up to, and gently bringing it back to the present moment.

My favorite part of the book is chapter 9. You describe your experiences with a woman named Justine who is having problems finding a good partner. What does Justine teach us? 

It’s fascinating to me that so many people find Justine’s story so compelling. Justine was basically living her life in a way that was turning a blind eye, every minute of every day, to how she had fallen into a sort of “autopilot” — living her life pretty mindlessly so she could just keep on going. She was a power broker in Washington DC, very successful in her career doing things that were good for her clients but (as she readily admitted) not really the right or decent thing for the rest of us on the planet. She came in to see me because she was having some serious problems finding a “solid, decent guy.” Understandably, she was resistant to really taking a look at the life and the crummy relationships she’d created — but was ultimately able to develop the capacity to mindfully shift her perspective, her talents, and her life to something more meaningful, and with more integrity. Finding a more meaningful relationship, and a guy with integrity, flowed so amazingly from that.

For the big question. What does your book try to accomplish? Or, what would you like the reader to walk away with? 

We spend much of our time on “autopilot”. It’s like a prison, really. The practice of mindfulness gives you the chance to change your brain, to create better neural pathways, allowing you to break out of autopilot — and to create more vibrant, juicy relationships.

Thanks Marsha!


The Irrationality Of Irrationality

Reason has fallen on hard times. After decades of research psychologists have spoken: we humans are led by our emotions, we rarely (if ever) decide optimally and we would be better off if we just went with our guts. Our moral deliberations and intuitions are mere post-hoc rationalizations; classical economic models are a joke; Hume was right, we are the slaves of our passions. We should give up and just let the emotional horse do all the work.

Maybe. But sometimes it seems like the other way around. For every book that explores the power of the unconscious another book explains how predictably irrational we are when we think without thinking; our intuitions deceive us and we are fooled by randomness but sometimes it is better to trust our instincts. Indeed, if a Martian briefly compared subtitles of the most popular psychology books in the last decade he would quickly become confused. Reading the introductions wouldn’t help him either; keeping track of the number of straw men would be difficult for our celestial friend. So, he might ask, over the course of history have humans always thought that intelligence was deliberate or automatic?

When it comes to thinking things through or going with your gut there is a straightforward answer: It depends on the situation and the person. I would also add a few caveats. Expert intuition cannot be trusted in the absence of stable regularities in the environment, as Kahneman argues in his latest book, and it seems like everyone is equally irrational when it comes to economic decisions. Metacognition, in addition, is a good idea but seems impossible to consistently execute.

However, unlike our Martian friend who tries hard to understand what our books say about our brains, the reason-intuition debate is largely irrelevant for us Earthlings. Yes, many have a sincere interest in understanding the brain better. But while the lay reader might improve his decision-making a tad and be able to explain the difference between the prefrontal cortex and the amygdala, the real reason millions have read these books is that they are very good.

The Gladwells, Haidts and Kahnemans of the world know how to captivate and entertain the reader because, like any great author, they prey on our propensity to be seduced by narratives. By using agents or systems to explain certain cognitive capacities, they make the brain much easier to understand. However, positioning the latest psychology or neuroscience findings in terms of a story with characters tends to encourage a naïve understanding of the so-called most complex entity in the known universe. The authors know this, of course. Kahneman repeatedly makes it clear that “system 1” and “system 2” are literary devices, not real parts of the brain. But I can’t help but wonder, as Tyler Cowen did, if deploying these devices makes the books themselves part of our cognitive biases.

The brain is also easily persuaded by small amounts of information. If one could sum up judgment and decision-making research it would go something like this: we only require a tiny piece of information to confidently form a conclusion and take on a new worldview. Kahneman’s acronym WYSIATI – what you see is all there is – captures this well. This is precisely what happens the moment readers finish the latest book on intuition or irrationality; they just remember the sound bite and understand the brain only through it. Whereas the hypothetical Martian remains confused, the rest of us humans happily walk out of our local Barnes and Noble, or, even worse, finish watching the latest TED talk with the deluded feeling that now we’ve “got it.”

Many times, to be sure, this process is a great thing. Reading and watching highbrow lectures is hugely beneficial intellectually speaking. But let’s not forget that exposure to X is not knowledge of X. The brain is messy; let’s embrace that view, not a subtitle.

What Motivates A Suicide Bomber?

Suicide terrorism is a peculiar business. As a means of killing civilians it is hugely efficient. Steven Pinker explains that, “it combines the ultimate in surgical weapon delivery – the precision manipulators and locomotors called hands and feet, controlled by the human eyes and brain – with the ultimate in stealth – a person who looks just like millions of other people.” The most sophisticated drone doesn’t come close.

Relative to the past few decades it is on the rise. During the 1980s the world saw an average of about five suicide attacks per year. Between 2000 and 2005 that number skyrocketed to 180. The targets have been diverse. Israel, Iraq and Afghanistan get all the media attention, but Somalia and Sri Lanka have experienced their share of self-destruction over the past five years.

What’s peculiar about suicide terrorism is that it is especially difficult to understand from a psychological point of view. Most people find it impossible to empathize with someone who walks into a crowded Jerusalem market wearing an overcoat filled with nails, ball bearings and rat poison with the intention of detonating the bomb strapped to his (99 percent of suicide terrorists are male) waist. How do we make sense of this?

Secular westerners tend to understand suicide terrorists as unfortunate products of undeveloped, undereducated and economically devastated environments. This isn’t true. All the 9/11 hijackers were college educated and suffered “no discernible experience of political oppression.” As Sam Harris explains:

Economic advantages and education, in and of themselves, are insufficient remedies for the cause of religious violence. There is no doubt that many well-educated, middle-class fundamentalists are ready to kill and die for God…. Religious fundamentalism in the developing world is not, principally, a movement of the poor and uneducated.

What is a sufficient explanation? In the case of Islam, why are so many of its followers eager to turn themselves into bombs? Harris believes that it is “because the Koran makes this activity seem like a career opportunity… Subtract the Muslim belief in martyrdom and jihad, and the actions of suicide bombers become completely unintelligible.” However you interpret the Koran, Harris’ position is that faith motivates Muslim suicide terrorists and that beliefs are the key to understanding the psychology of suicide terrorism. When nineteen Muslim terrorists woke up on the morning of September 11th they believed that 72 virgins awaited them in Heaven; they believed they would be remembered as heroes; they believed that self-destruction in the name of their God was glorious. It does not take a stretch of the imagination to correctly guess what they were saying (I should say, praying) moments before their doom.

Epistemology isn’t the whole story. Action requires belief but belief is not created in a vacuum. Understanding the motives of suicide bombers demands knowledge of the community they grew up in. You need context.

This is precisely what anthropologist Scott Atran attempted to dissect. After interviewing failed and prospective suicide terrorists he published several articles outlining the psychological profile of suicide terrorists and concluded that a call to martyrdom is appealing because it offers an opportunity to join a cohesive and supportive community of like-minded persons. Here’s Atran’s testimony to a U.S. Senate subcommittee:

When you look at whom [suicide terrorists] idolize, how they organize, what bonds them and what drives them; then you see that what inspires the most lethal terrorists in the world today is not so much the Koran or religious teachings as a thrilling cause and call to action that promises glory and esteem in the eyes of friends, and through friends, eternal respect and remembrance in the wider world that they will never live to enjoy.

The work of anthropologist Richard Sosis suggests that Atran is correct. Sosis studied the history of communes in the United States in the nineteenth century. He found that twenty years after their founding 6 percent of the secular communes still existed compared to 39 percent of the religious communes. He also discovered that the more costly the sacrifices a religious commune demanded, the better it functioned. By requiring members to abstain from things like alcohol and conform to dress codes, the religious communes quickly and effectively bound their members together. This is why if the West wants to minimize suicide terrorism, Atran recommends, it should “[learn] how to minimize the receptivity of mostly ordinary people to recruiting organizations.”

Thankfully, the number of suicide bombers has declined in the last few years. In Iraq, vehicle and suicide attacks dropped from 21 a day in 2007 to about 8 a day in 2010. Along with a surge of American soldiers, the decline can be attributed to an attitude shift within the Islamic community. In Pinker’s latest book he explains that, “in the North-West Frontier Province in Pakistan, support for Al Qaeda plummeted from 70 percent to 4 percent in just five months in late 2007… In a 2007 ABC/BBC poll in Afghanistan, support for jihadist militants nosedived to one percent.” If Atran is correct in suggesting that suicide terrorism is fueled by an appeal to community and an opportunity to gain esteem then this is good news.

Individual beliefs and the communities they arise from help us understand the psyche of suicide bombers. But even a sufficient explanation would leave me wondering. Our DNA has one goal: replication. That natural selection has given us the means to stop this process might be one of Nature’s great ironies.


Why Intellectual Diversity Is Important

Below is my latest column at The Creativity Post in its entirety. I argue that good ideas benefit from intellectual diversity. Incidentally, I came across this wonderful NYTimes article on the same subject at Farnam Street blog this morning. It discusses Scott Page’s The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools and Societies.

A few years ago Brian Uzzi of Northwestern University and Jarrett Spiro of Stanford University set out (pdf) to answer this question: What determines the success of a Broadway musical? Uzzi and Spiro began by poring through a data set that included 2,092 people who worked on 474 musicals from 1945 to 1989. To determine how good each production was they considered metrics such as reviews and financial success. They also controlled for things like talent and economic and geographic conditions to ensure that the big New York City musicals didn’t flub the data.

What they found was that successful productions relied on two components: “The ratio of new blood versus industry veterans, and the degree to which incumbents involved their former collaborators and served as brokers for new combinations of production teams.” In other words, productions that worked found a balance between strong social ties and weak ones, rookies and veterans, familiarity and novelty. They weren’t flooded with a group of likeminded people but neither was everyone a stranger to each other. Uzzi and Spiro hypothesized that the reason intellectual diversity was important is because “small world networks that help to create success or failure in Broadway musicals… face liabilities in the realms of innovation and collaboration that impede their creating new, successful musical hits… too much small-worldliness can undermine the very benefits it creates at more moderate levels, due to a decrease in artists’ ability to innovate and break convention.”

What’s alarming about their conclusions is that a plethora of psychological data suggests that most of us balk when we are given the chance to connect with people who might not share similar intellects. Consider a study (pdf) done back in 2007 by Paul Ingram and Michael Morris at Columbia University. The psychologists gathered a group of executives and had them attend a cocktail mixer where the psychologists encouraged the executives to exchange ideas, network and meet new people. Like good behavioral scientists, Ingram and Morris weaseled microphones onto all the nametags to record what was said. Prior to the “mixing” the executives stated that they wanted to “meet as many different people as possible” or “expand their social network,” but Ingram and Morris found just the opposite. “Do people mix at mixers?” they asked in the concluding remarks of their study. “The answer is no… our results show that guests at a mixer tend to spend the time talking to the few other guests whom they already know well.” Or, as Jonah Lehrer somewhat sarcastically puts it in a recent post, “investment bankers chatted with other investment bankers, and marketers talked with other marketers, and accountants interacted with other accountants.”

Ingram and Morris’ study should be taken as a warning: If we want to broaden our intellectual horizons it’s important to remember our natural tendency to drift towards and eventually connect with only likeminded people. Stories of innovation and discovery throughout history illustrate how important this point is. My favorite, which doesn’t get told enough, is the discovery of Cosmic Microwave Background Radiation (CMB), a key piece of evidence that changed our understanding of the origin of the universe forever.

The story begins in Holmdel, New Jersey, at Bell Labs, where Arno Penzias and Robert Wilson were experimenting with a horn antenna originally built to detect radio waves that bounced off of Echo balloon satellites. After spending some time with the antenna they ran into a problem. It was a mysterious hissing noise – like static on the radio – that persisted all over the sky, day and night. The duo went to great lengths to eliminate the hiss – they even washed bird droppings off of the dish – but it was all to no avail. Meanwhile, at Princeton University just 60 miles down the road, Robert Dicke, Jim Peebles and David Wilkinson were trying to find evidence for the Big Bang in the form of microwave radiation. They predicted that if the Big Bang did in fact take place it must have scattered an enormous blast of radiation throughout the universe, much like how a rock thrown into a lake creates ripples that spread outward. With the right instrumentation, they believed, this radiation could be detected all over the sky, day and night.

It was only a matter of time before serendipity set in and a mutual friend at MIT, professor of physics Bernard F. Burke, told Penzias about what the researchers at Princeton were looking for. After that, the two teams exchanged ideas and realized the implications of their work. It turned out that the hiss that Penzias and Wilson were trying so hard to get rid of was precisely the radiation that the Princeton team was looking for. A few calculations and a published paper later, Penzias and Wilson had landed the 1978 Nobel Prize in Physics; the rest of us are still reaping the benefits of a more complete understanding of the universe.

The story of CMB reminds us that when it comes to solving difficult problems a fresh set of eyes, even one that comes from a different field, is vital. The CMB story shows itself in one form or another many times throughout history. The world’s great ideas are as much about other people as they are about the individual who makes it into the textbook. As Matt Ridley explains in a TED lecture in a slightly different context, “what’s relevant to a society is how well people are communicating their ideas and how well they are cooperating not how clever the individuals are… it’s the interchange of ideas, the meeting and mating of ideas between that [causes]… innovation.”

There is a wonderful website called InnoCentive.com that facilitates what Ridley calls the meeting and mating of ideas. The framework of InnoCentive is quite simple: “seekers” go to the website to post their problems for “solvers.” Problems range from the “Recovery of Bacillus Spore from Swabs,” to “Blueprints for a Medical Transportation Device for Combat Rescue,” and multi-billion dollar companies like General Electric and Procter and Gamble often post them with cash prizes up to $1 million.

The amazing part is that it’s working. A study (pdf) by researchers at Harvard Business School found that about 33 percent of problems posted on InnoCentive were solved on time. Why does InnoCentive work? The same reason that successful Broadway plays do and CMB was discovered: intellectual diversity. If an organic chemistry problem only attracted organic chemists it tended to be troublesome. However, if a biologist got involved with that same problem then the chances were greater that the problem was solved. The implications of this should make you think: solvers were at their best when they were at the margins of their fields of expertise.

Maybe it sounds obvious to suggest that a proper mixture of minds is important for accomplishing tasks, but remember the lesson from Ingram and Morris’s cocktail-mixer study: it’s really hard not to surround yourself with people like you. Don’t hang out with too many opposites though; we don’t want another Spider-Man: Turn Off the Dark.

“Who’s There?” Is The Self A Convenient Fiction?

For a long time people thought that the self was unified and eternal. It’s easy to see why. We feel like we have an essence; we grow old, gain and lose friends, and change preferences but we are the same person from day one.

The idea of the unified self has had a rough few centuries, however. During the Enlightenment Hume and Locke challenged the Platonic idea of human nature being derived from an essence; in the 19th century Freud declared that the ego “was not even the master of his own house;” and after decades of empirical research neuroscience has yet to find anything that scientists would call unified. As clinical neuropsychologist Paul Broks says, “We have this deep intuition that there is a core… But neuroscience shows that there is no center in that brain where things do all come together.”

One of the most dramatic demonstrations of the illusion of the unified self comes from Michael Gazzaniga, who showed that each hemisphere of the brain exercises free will independently when surgeons cut the corpus callosum. Gazzaniga discovered this with a simple experiment. When he flashed the word “WALK” to the right hemisphere of split-brain patients they walked out of the room. But when he asked them why they walked out, they all responded with a trivial remark such as, “To go to the bathroom” or “To get a Coke.” Here’s where things got weird. When he flashed a chicken to patients’ left hemisphere (in the right visual field) and a wintry scene to their right hemisphere (in the left visual field), and asked them to select a picture that goes with what they saw, he found that their left hand correctly pointed to a snow shovel and their right hand correctly pointed to a chicken. However, when the patients were asked to explain why they pointed at the pictures they responded with something like, “That’s easy. The shovel is for cleaning up the chicken.”

Nietzsche was right: “We are necessarily strangers to ourselves…we are not ‘men of knowledge’ with respect to ourselves.”

But you don’t have to have a severed corpus callosum or a deep understanding of Genealogy of Morals (which I don’t) to appreciate how modular our selves are. Our everyday inner-monologues are telling enough. We weigh the pros and cons between fatty meats and nutritious vegetables even though we know which is healthier. When we have the chance to procrastinate we usually take it and rationalize it as a good decision. We cheat, lie, are lazy and eat Big Macs knowing full well how harmful these things are. When it comes to what we think about, what we like and what we do Walt Whitman captured our natural hypocrisies and inconsistencies with this famous and keenly insightful remark: “Do I contradict myself? Very well then I contradict myself, (I am large, I contain multitudes.)”

That the unified self is largely an illusion is not necessarily a bad thing. The philosopher and cognitive scientist Dan Dennett suggests that it is a convenient fiction. I think he’s right. With it we are able to maintain stories and narratives that help us make sense of the world and our place in it. This is a popular conviction nowadays. As prominent evolutionary psychologist Steven Pinker explains in one of his bestsellers, “each of us feels that there is a single “I” in control. But that is an illusion that the brain works hard to produce.” In fact, without the illusion of selfhood we all might suffer the same fate as Phineas Gage who was, as anyone who has taken an introductory psychology course might remember, “no longer Gage” after a tragic railroad accident turned his ventromedial prefrontal cortex into a jumbled stew of disconnected neurons.

However, according to the British philosopher Julian Baggini in a recent TED lecture the illusion of the self might not be an illusion. The question Baggini asks is whether a person should think of himself as a thing that has a bunch of different experiences or as a collection of experiences. This is an important distinction. Baggini explains that, “the fact that we are a very complex collection of things does not mean we are not real.” He invites the audience to consider the metaphor of a waterfall. In many ways a waterfall is like the illusion of the self: it is not permanent, it is always changing and it is different at every single instance. But this doesn’t mean that a waterfall is an illusion or that it is not real. What it means is that we have to understand it as a history, as having certain things that are the same and as a process.

Baggini is trying to save the self from neuroscience, which is admirable considering that neuroscience continues to show how convoluted our brains are. I am not sure if he is successful – argument by metaphor can only go so far, empirical data wins at the end of the day – but I like the idea that personal and neurological change and inconsistency doesn’t imply an illusion of identity. In this age of cognitive science it’s easy to subscribe to Whitman’s doctrine – that we are constituted by multitudes; it takes a brave intellect, on the other hand, to hang on to what Freud called our “naïve self-love.”

Shakespeare opened Hamlet with the huge and beautifully complex query, “Who’s there?” Four hundred years later Baggini has an answer, but many of us are still scratching our heads.


How To Generate A Good Idea

When it comes to getting work done Sartre was right, hell is other people. So was Picasso, who said that, “without great solitude, no serious work is possible.” And then there’s Steve Wozniak, who in his memoir explained that, “most inventors and engineers I’ve met are like me … they live in their heads. They’re almost like artists… And artists work best alone …. I’m going to give you some advice that might be hard to take. That advice is: Work alone… Not on a committee. Not on a team.”

Generating ideas is different. Auguste Rodin’s “The Thinker” portrays a meditating figure waging a powerful intellectual struggle trying to force an insight. But the reality of great ideas is that they require other people. This is why the English coffeehouse was central to the Enlightenment. As one author explains, “[they] fertilized countless Enlightenment-era innovations; everything from the science of electricity, to the insurance industry, to democracy itself.” He’s right. They were a place where ideas went to have sex, to paraphrase Matt Ridley. (Replacing a depressant – alcohol – with a stimulant – caffeine – didn’t hurt either.)

The modern day coffeehouse can be found in the office buildings of the most innovative companies. At Pixar, for example, Steve Jobs insisted that the architect position the bathrooms at the center of the building so that the animator could easily strike up a conversation with the designer who could bounce ideas off of the COO. Likewise, as Steven Berlin Johnson explains, “[businesses] are giving up traditional conference rooms and replacing them with project based spaces… you walk into the room and on the white board is a drawing from six months ago… and there are prototypes they built a year and a half ago. Instead of going into… a conference room and erasing the white board at the end… [These spaces have] a history of the conversation that is triggered by the physical lay out of the space.”

Johnson’s point is that brainstorming is horribly counterproductive. Research from the late 1940s and early 1950s clearly demonstrates this to be true. A recent New York Times article laments that, “people in groups tend to sit back and let others do the work; they instinctively mimic others’ opinions and lose sight of their own; and, often succumb to peer pressure.” The problem with brainstorming is its tendency to treat people and their ideas too kindly. Criticism and error are essential in the formation of good ideas after all; brainstorming simply doesn’t facilitate this.

There is a great study conducted by Charlan Nemeth out of UC Berkeley that “[tested] the potential value of permitting criticism and dissent”. Nemeth (along with Bernard Personnaz, Maris Personnaz and Jack A. Goncalo) created three groups of people – minimal, brainstorming and debate – and had them discuss a topic. She found that, “groups encouraged to debate—even criticize (Debate condition) did not retard idea generation, as many would have predicted. In fact, such permission to criticize led to significantly more (rather than less) ideas than did the Minimal condition, both in the group and in total production of ideas.” The exchange of ideas amongst people is good, then, but an overly agreeable brainstorming session is certainly not.

When it comes to getting work done Picasso and Woz were right, isolation is best. The aforementioned New York Times article goes on to explain the empirical evidence:

A fascinating study… compared the work of more than 600 computer programmers at 92 companies. They found that people from the same companies performed at roughly the same level — but that there was an enormous performance gap between organizations. What distinguished programmers at the top-performing companies wasn’t greater experience or better pay. It was how much privacy, personal workspace and freedom from interruption they enjoyed. Sixty-two percent of the best performers said their workspace was sufficiently private compared with only 19 percent of the worst performers. Seventy-six percent of the worst programmers but only 38 percent of the best said that they were often interrupted needlessly.

The important distinction to be made is that when it comes to generating good ideas, other people are key because they are needed for criticism, debate and exchange; this is the story of the English coffeehouse and the architecture of the Pixar building. When it comes to getting work done, well, Sartre hit the nail on the head: hell is other people.

The Brain as a Kluge: How We Experience and Remember

It’s impossible to think about the past clearly. When it comes to evaluating your life, your brain is easily tricked into thinking one thing or another: How satisfied are you with your life? How happy are you? How well-off are you? Well, it depends.

In one study psychologists asked college students two questions: “How happy are you with your life in general?” and “How many dates did you have last month?” When asked in this order the researchers found almost no correlation. However, changing the order of the questions influenced the students to focus on the quality of their lives in terms of the quality of their romantic lives. The researchers found that people who had been on a lot of dates rated themselves as being much happier than those who had not been on a lot of dates. As brain scientist Gary Marcus explains, “this may not surprise you, but it ought to, because it highlights just how malleable our beliefs really are. Even our own internal sense of self can be influenced by what we happen to focus on at a given moment.”

Along similar lines, Norbert Schwarz and his colleagues demonstrated that good moods influence how people evaluate their lives. They asked subjects to complete a questionnaire on life satisfaction. Beforehand, however, Schwarz asked them to photocopy a sheet of paper. (This was the key part of the study.) For half of the subjects Schwarz placed a dime on the photocopier. He and his colleagues found that, as one author says, “the minor lucky incident caused a marked improvement in subjects’ reported satisfaction with their life as a whole.”

Why are our life evaluations so easily swayed? Consider a study done by Daniel Kahneman and Donald Redelmeier. They tracked colonoscopy patients to see if there was a difference between how much pain they experienced and how much pain they thought they experienced. As Kahneman explains, “the experience of each patient varied considerably during the procedure, which lasted 8 minutes for patient A and 24 for patient B… [and] the general agreement [is] that patient B had the worse time.” Indeed, the pain recorded during the procedure shows that patient B suffered more than patient A. However, patient A remembered the colonoscopy as being much worse than patient B did. In other words, patient B suffered more but remembered the procedure more favorably.

The inconsistency is explained by the peak-end rule, which describes our tendency to evaluate experiences by their most intense moment and by how they end, and duration neglect, which describes our tendency to be insensitive to the duration of an experience. Patient A remembered the procedure as terrible, even though he suffered less, because it ended terribly, whereas patient B remembered it as better, even though he suffered more, because his second half was much less intense.
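To make the arithmetic behind these two biases concrete, here is a minimal sketch of my own (with made-up pain ratings, not Kahneman and Redelmeier’s actual data), assuming the common simplification that a remembered rating is roughly the average of the peak moment and the final moment, while what the experiencing self endures is simply the sum of every moment:

```python
# Illustrative sketch of the peak-end rule and duration neglect,
# using hypothetical 0-10 pain ratings (one number per minute).
# Patient A: a short procedure that ends at a painful moment.
# Patient B: a longer procedure with more total pain but a mild ending.

patient_a = [4, 6, 8, 7, 8]                  # made-up ratings
patient_b = [4, 6, 8, 7, 8, 6, 4, 2, 1, 1]   # made-up ratings

def total_pain(ratings):
    """What the experiencing self endures: the sum of every moment."""
    return sum(ratings)

def remembered_pain(ratings):
    """Rough peak-end model of what the remembering self reports:
    the average of the worst moment and the last moment, ignoring duration."""
    return (max(ratings) + ratings[-1]) / 2

for name, ratings in [("A", patient_a), ("B", patient_b)]:
    print(f"Patient {name}: total = {total_pain(ratings)}, "
          f"remembered = {remembered_pain(ratings)}")

# Patient A: total = 33, remembered = 8.0
# Patient B: total = 47, remembered = 4.5
# B accumulates more total pain yet remembers the procedure as less bad,
# because the peak-end average ignores how long the pain lasted.
```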

(The peak-end rule and duration neglect help explain why people tend to remember failed relationships or marriages only on bad terms – like a good movie with a bad ending, it’s just so hard to judge a relationship without thinking about what happened at the end. Perhaps the most dramatic example of these two cognitive biases is childbirth – extremely painful for most of the time but a great ending seems to dispel this from memory.)

Kahneman and Redelmeier’s ultimate point is that when it comes to understanding happiness psychologists must distinguish between the “remembering self” and the “experiencing self.” The remembering self is the one that “keeps score”; it answers questions like, “How satisfied are you with your life?” or “How is your health?” It is a storyteller and its primary job is to tell the story of your life. The experiencing self answers questions like, “How was the concert last night?” or “How was your birthday party?” It reports your current mood and how you are in the present. This distinction brings me back to my original question: Why are our life evaluations so easily swayed?

When it comes to assessing our life, the remembering self and the experiencing self are not on the same page. The remembering self, for example, will report a satisfied life, but it will also be influenced by your experiencing self, which is in turn influenced by everyday occurrences like, say, finding a dime on a photocopier or answering your friend when he asks about your romantic life. Daniel Kahneman puts it this way: “Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.” In other words, our brain is a kluge – an ill-assorted collection of parts assembled to fulfill a particular purpose – and thinking about the past with an objective lens is not a top priority.


Seeing Without Knowing: How the Conscious Mind Messes With Memory

Our memories aren’t very reliable. The sobering truth is that we forget most of what we experience, our memories are usually distorted after they are formed and we have the tendency to accept misinformation about the past and faithfully adopt it as our own. In other words, we don’t store memories like computers. Now, a new study out of Psychological Science by Deborah Hannula, Carol L. Baym and Neal J. Cohen suggests that there is a more accurate way of reading our memories.

For the study Hannula and her team gathered a few dozen students and had them examine several dozen male faces (study trials). Then, the students were shown displays of three faces (test trials); half of the displays contained a face from the study trials, the other half contained three faces visually similar to the studied faces (but not the faces themselves). Participants were told to press a button when they recognized a face. They were also told to verbally indicate whether they had seen a face from the study trials, since some displays did not contain any studied faces. Using eye-tracking technology, Hannula looked for two things: where the students’ eyes focused first and what proportion of time they spent looking there.

As expected, the participants did a good job of recognizing faces they had studied; they usually focused their eyes first on the studied faces and for the longest time. Hannula explains that, “before they chose a face and pressed a button, there was disproportionate viewing of the [studied faces] as compared to either type of selected face.” Particularly interesting was the observation that “early disproportionate viewing of the [studied faces] may precede and help give rise to awareness that a particular face has been studied… these cognitive processes permit us to make a decision, but may also lead us down the wrong path. In this case, leading us to endorse a face as studied despite having never seen it before.”

Their findings suggest that our unconscious mind does an excellent job of recognizing previously seen faces yet it is our conscious mind that sometimes gets in the way – we sometimes see without knowing, in other words, and it is only when we review our memories that they get distorted. Therefore, the researchers conclude, the effects of prior exposure can show up in eye movements independently of our explicit judgments.

In a different but related paper, Micah Edelson, Tali Sharot, Raymond Dolan and Yadin Dudai also demonstrated how much our conscious mind distorts memory. Jonah Lehrer explains:

 A few dozen people watched an eyewitness style documentary about a police arrest in groups of five. Three days later, the subjects returned to the lab and completed a memory test about the documentary. Four days after that, they were brought back once again and asked a variety of questions about the short movie while inside a brain scanner.

This time, though, the subjects were given a “lifeline”: they were shown the answers given by other people in their film-viewing group. Unbeknownst to the subjects, the lifeline was actually composed of false answers to the very questions that the subjects had previously answered correctly and confidently. Remarkably, this false feedback altered the responses of the participants, leading nearly 70 percent to conform to the group and give an incorrect answer. They had revised their stories in light of the social pressure.

The question, of course, is whether their memory of the film had actually undergone a change…. [So] the researchers invited the subjects back to the lab one last time to take the memory test, telling them that the answers they had previously been given were not those of their fellow film watchers, but randomly generated by a computer. Some of the responses reverted back to the original, but more than 40 percent remained erroneous, implying that the subjects were relying on false memories implanted by the earlier session. They had come to believe their own bullshit.

Our conscious mind also does an excellent job of maintaining false memories as real. For example, the day after the Challenger disaster Ulric Neisser asked Emory University undergrads to write a description of how they heard of the disaster – the time of day, what they were doing and how they felt about it. Then he asked the same students the same set of questions two and a half years later and compared the two descriptions. He found that, “twenty-five percent of the students’ subsequent accounts were strikingly different from their original journal entries. More than half the people had lesser degrees of error, and less than ten percent had all the details correct.” To make matters worse, “when confronted with their original reports, rather than suddenly realizing that they had misremembered, they often persisted in believing their current memory.”

You think you remember your fourth birthday correctly? Think again.
