
Posts tagged ‘Reason’

Why The Future of Neuroscience Will Be Emotionless

In Phaedrus, Plato likens the mind to a charioteer who commands two horses, one that is irrational and crazed and another that is noble and of good stock. The job of the charioteer is to control the horses to proceed towards Enlightenment and the truth.

Plato’s allegory sparked an idea that persisted throughout the next several millennia of western thought: emotion gets in the way of reason. This makes sense to us. When people act out of line, we call them irrational; no one was ever accused of being too reasonable. Around the 17th and 18th centuries, however, thinkers began to challenge this idea. David Hume turned the tables on Plato: reason, Hume said, is the slave of the passions. Psychological research of the last few decades not only confirms this view; some of it suggests that emotion is the better decision-maker.

We know a lot more about how the brain works than the ancient Greeks did, but a decade into the 21st century researchers are still debating which of Plato’s horses is in control, and which one we should listen to.

A couple of recent studies are shedding new light on this age-old discourse. The first comes from Michael Pham and his team at Columbia Business School. The researchers asked participants to make predictions about eight different outcomes, ranging from the American Idol finalists to the winner of the 2008 Democratic primary to the winner of the BCS championship game. They also forecast the Dow Jones average.

Pham created two groups. He told the first group to go with their guts and the second to think it through. The results were telling. In the American Idol results, for example, the first group correctly predicted the winner 41 percent of the time whereas the second group was only correct 24 percent of the time. The high-trust-in-feeling subjects even predicted the stock market better.

Pham and his team conclude the following:

Results from eight studies show that individuals who had higher trust in their feelings were better able to predict the outcome of a wide variety of future events than individuals who had lower trust in their feelings…. The fact that this phenomenon was observed in eight different studies and with a variety of prediction contexts suggests that this emotional oracle effect is a reliable and generalizable phenomenon. In addition, the fact that the phenomenon was observed both when people were experimentally induced to trust or not trust their feelings and when their chronic tendency to trust or not trust their feelings was simply measured suggests that the findings are not due to any peculiarity of the main manipulation.

Does this mean we should always trust our intuition? It depends. A recent study by Maarten Bos and his team identified an important nuance when it comes to trusting our feelings. They asked one hundred and fifty-six students to abstain from eating or drinking (water excepted) for three hours before the study. When they arrived, Bos divided his participants into two groups: one that consumed a sugary can of 7-Up and another that drank a sugar-free drink.

After waiting a few minutes to let the sugar reach the brain, the students assessed four cars and four jobs, each with 12 key aspects that made them more or less appealing (Bos designed the study so an optimal choice was clear, giving him a measure of how well they decided). Next, half of the subjects in each group spent four minutes thinking about the jobs and cars (the conscious thought condition), while the other half watched a wildlife film (to prevent them from consciously thinking about the jobs and cars).

Here’s the BPS Research Digest on the results:

For the participants with low sugar, their ratings were more astute if they were in the unconscious thought condition, distracted by the nature film. By contrast, the participants who’d had the benefit of the sugar hit showed more astute ratings if they were in the conscious thought condition and had had the chance to think deliberately for four minutes. ‘We found that when we have enough energy, conscious deliberation enables us to make good decisions,’ the researchers said. ‘The unconscious on the other hand seems to operate fine with low energy.’

So go with your gut if your energy is low. Otherwise, listen to your rational horse.

Here’s where things get difficult. By now the debate over the roles reason and emotion play in decision-making is well documented. Psychologists have written thousands of papers on the subject, and it shows in the popular literature as well. From Antonio Damasio’s Descartes’ Error to Daniel Kahneman’s Thinking, Fast and Slow, the lay audience knows about both the power of thinking without thinking and our predictable irrationality.

But what exactly is being debated? What do psychologists mean when they talk about emotion and reason? Joseph LeDoux, author of popular neuroscience books including The Emotional Brain and Synaptic Self, recently published a paper in the journal Neuron that flips the whole debate on its head. “There is little consensus about what emotion is and how it differs from other aspects of mind and behavior, in spite of discussion and debate that dates back to the earliest days in modern biology and psychology.” Yes, what we call emotion roughly correlates with certain parts of the brain; it is usually associated with activity in the amygdala and related systems. But we might be playing a language game, and neuroscientists are reaching a point where understanding the brain requires more sophisticated language.

As LeDoux sees it, “If we don’t have an agreed-upon definition of emotion that allows us to say what emotion is… how can we study emotion in animals or humans, and how can we make comparisons between species?” The short answer, according to the NYU professor, is “we fake it.”

With this in mind LeDoux introduces a new term to replace emotion: survival circuits. Here’s how he explains it:

The survival circuit concept provides a conceptualization of an important set of phenomena that are often studied under the rubric of emotion—those phenomena that reflect circuits and functions that are conserved across mammals. Included are circuits responsible for defense, energy/nutrition management, fluid balance, thermoregulation, and procreation, among others. With this approach, key phenomena relevant to the topic of emotion can be accounted for without assuming that the phenomena in question are fundamentally the same or even similar to the phenomena people refer to when they use emotion words to characterize subjective emotional feelings (like feeling afraid, angry, or sad). This approach shifts the focus away from questions about whether emotions that humans consciously experience (feel) are also present in other mammals, and toward questions about the extent to which circuits and corresponding functions that are relevant to the field of emotion and that are present in other mammals are also present in humans. And by reassembling ideas about emotion, motivation, reinforcement, and arousal in the context of survival circuits, hypotheses emerge about how organisms negotiate behavioral interactions with the environment in process of dealing with challenges and opportunities in daily life.

Needless to say, LeDoux’s paper changes things. Because emotion is an unworkable term for science, neuroscientists and psychologists will have to understand the brain on new terms. And when it comes to the reason-emotion debate – which of Plato’s horses we should trust – they will have to rethink certain assumptions and claims. The difficult part is that we humans, by our very nature, cannot help but resort to folk psychology to explain the brain. We deploy terms like soul, intellect, reason, intuition, and emotion, but these words describe very little. Can we understand the brain even though our words may never suffice? The future of cognitive science might depend on it.


The Irrationality Of Irrationality

Reason has fallen on hard times. After decades of research psychologists have spoken: we humans are led by our emotions, we rarely (if ever) decide optimally and we would be better off if we just went with our guts. Our moral deliberations and intuitions are mere post-hoc rationalizations; classical economic models are a joke; Hume was right, we are the slaves of our passions. We should give up and just let the emotional horse do all the work.

Maybe. But sometimes it seems like the other way around. For every book that explores the power of the unconscious, another book explains how predictably irrational we are when we think without thinking; our intuitions deceive us, we are fooled by randomness, and yet sometimes it is better to trust our instincts. Indeed, if a Martian briefly compared the subtitles of the most popular psychology books of the last decade, he would quickly be confused. Reading the introductions wouldn’t help him either; keeping track of the number of straw men would be difficult for our celestial friend. So, he might ask, over the course of history have humans always thought that intelligence was deliberate, or automatic?

When it comes to thinking things through or going with your gut, there is a straightforward answer: it depends on the situation and the person. I would also add a few caveats. Expert intuition cannot be trusted in the absence of stable regularities in the environment, as Kahneman argues in his latest book, and everyone seems equally irrational when it comes to economic decisions. Metacognition, in addition, is a good idea but seems impossible to execute consistently.

However, unlike our Martian friend, who tries hard to understand what our books say about our brains, we Earthlings find the reason-intuition debate largely irrelevant. Yes, many have a sincere interest in understanding the brain better. But while the lay reader might improve his decision-making a tad and be able to explain the difference between the prefrontal cortex and the amygdala, the real reason millions have read these books is that they are very good.

The Gladwells, Haidts and Kahnemans of the world know how to captivate and entertain the reader because, like any great author, they prey on our propensity to be seduced by narratives. By using agents or systems to explain certain cognitive capacities, the brain becomes much easier to understand. However, positioning the latest psychology or neuroscience findings as a story with characters tends to encourage a naïve understanding of the so-called most complex entity in the known universe. The authors know this, of course. Kahneman repeatedly makes it clear that “system 1” and “system 2” are literary devices, not real parts of the brain. But I can’t help but wonder, as Tyler Cowen did, whether deploying these devices makes the books themselves part of our cognitive biases.

The brain is also easily persuaded by small amounts of information. If one could sum up judgment and decision-making research, it would go something like this: we require only a tiny piece of information to confidently form a conclusion and take on a new worldview. Kahneman’s acronym WYSIATI – what you see is all there is – captures this well. This is precisely what happens the moment readers finish the latest book on intuition or irrationality: they remember only the sound bite and understand brains through it alone. Whereas the hypothetical Martian remains confused, the rest of us humans happily walk out of our local Barnes and Noble or, even worse, finish watching the latest TED talk with the deluded feeling that now we “got it.”

Many times, to be sure, this process is a great thing. Reading and watching highbrow lectures is hugely beneficial intellectually speaking. But let’s not forget that exposure to X is not knowledge of X. The brain is messy; let’s embrace that view, not a subtitle.

What is Reason Good For? The Rationality-Intuition Debate

Reason is under attack. Lobbing the bombshells is its twin brother, who thinks unconsciously, quickly, and with less effort; I speak, of course, of intuition. It’s unclear when the rationality-intuition debate began, but its empirical roots were no doubt seeded when the cognitive revolution began and grew when Kahneman and Tversky started demonstrating the flaws of rational actor theory. Their heuristics-and-biases program, as it came to be known, wasn’t about bashing economic theory, though; it was meant to illustrate not just innocuous irrationalities but systematic errors in judgment. What emerged, now beautifully portrayed in Daniel Kahneman’s new book, is a dualistic picture of human cognition in which our mental processes are dictated by two types of thinking: system 1 thinking, which is automatic, quick and intuitive, and system 2 thinking, which is deliberate, slow and rational. We think, as the title reads, fast and slow.

It was only in the last decade that the literature on system 1 and system 2 thinking made its way to the lay audience. Gladwell’s Blink, which nicely illustrated the power of thinking without thinking – system 1 – made a splash. Ariely’s Predictably Irrational, on the other hand, spurred public debate about the flaws of going with your gut. In the wake of this literature, reason suffers from a credibility crisis. Am I rational or irrational? Should I go with my gut or think things through? Questions like these abound, and people too often forget that context and circumstance are what really matter. (If you’re making a multimillion-dollar business deal, think it through. If you’re driving down the highway, stick with your intuition!) Lately, though, I’ve seen too much reason-bashing, and I want to defend this precious cognitive capacity after reading the following comments, which were left in response to my last post by someone kind enough to engage my blog. His three points:

  • Consciousness-language/self-talk is trivial and epiphenomenal. It means very little and predicts less.
  • It is post-hoc pretty much anything interesting in brains processes > behavior
  • All other animals and living things get along just fine w/out it.

With the exception of his third point, which is worth a debate elsewhere, he (or she, but for the sake of writing I am sticking with one pronoun) captures what many psychologists believe – that our vocalized beliefs are nothing more than post-hoc justifications of gut reactions. Jonathan Haidt, for example, uses the metaphor of a rider atop an elephant, where the rider ignorantly holds himself to be in control of his uncontrollable beast. There is more than a grain of truth to Haidt’s model, and plenty of empirical data backs it up. My favorite is a study in which women were asked to choose their favorite pair of nylon stockings from a group of twelve. After they made their selections, researchers asked them to explain their choices. Among the explanations, texture, feel, and color were the most popular. However, all of the stockings were in fact identical. The women were being sincere – they truly believed that what they were saying made sense – but they had simply made up reasons for their choices, believing that they consciously knew their preferences.

There is a problem with this wholesale rejection of reason. It is difficult to explain why humanity has made so much moral progress if we believe that our deliberations are entirely uncontrollable. For example, how is it, a critic of Haidt’s model might ask, that institutions like slavery, which were for most of human history intuitively acceptable, are now intuitively unacceptable? In other words, if we really are solely controlled by the elephant, why aren’t we stuck in a Hobbesian state of nature where life is nasty, brutish and short?

One answer is that through reason we were able to look at the world objectively and realize that slavery – and many other injustices and immoralities – made society worse. As Paul Bloom explains in a recent Nature piece: “Emotional responses alone cannot explain one of the most interesting aspects of human nature: that morals evolve. The extent of the average person’s sympathies has grown substantially and continues to do so. Contemporary readers of Nature, for example, have different beliefs about the rights of women, racial minorities and homosexuals compared with readers in the late 1800s, and different intuitions about the morality of practices such as slavery, child labour and the abuse of animals for public entertainment. Rational deliberation and debate have played a large part in this development.” Bloom’s point is expanded thoroughly in Pinker’s latest book, The Better Angels of Our Nature, where Pinker argues that reason led people to commit fewer acts of violence. In his words: “At various times in history superstitious killings, such as human sacrifice, witch hunts, blood libels, inquisitions, and ethnic scapegoating, fell away as the factual assumptions on which they rested crumbled under the scrutiny of a more intellectually sophisticated populace. Carefully reasoned briefs against slavery, despotism, torture, religious persecution, cruelty to animals, harshness to children, violence to women, frivolous wars, and the persecution of homosexuals were not just hot air but entered into the decisions of the people and institutions who attended to the arguments and implemented reforms.” In regard to my commenter’s first point – that conscious talk is trivial and epiphenomenal – there should be little question that reason played, and still plays, an important role in shaping society for the better, and that it is therefore neither trivial nor epiphenomenal.

His second point – that reason is all post-hoc justification – is also problematic. Although conscious deliberate thought depends on unconscious cognition, it does not follow that all reasons are post-hoc justifications. For example, solving math problems requires unconscious neurological cognition, but nobody would say that 1+1=2 is a post-hoc justification. The same is true of scientific truths; are Newton’s laws likewise post-hoc justifications? No. There are truths to be known about the world, and they can be discovered with reason. As Sam Harris explains, “the fact that we are unaware of most of what goes on in our brains does not render the distinction between having good reasons for what one believes and having bad ones any less clear or consequential.” Reason, in other words, separates correct beliefs from incorrect ones, distinguishing truths from falsehoods. It depends on unconscious thought, as neuroscience now shows, but it does not follow that everything our rationality discovers is a post-hoc justification.

So, let’s not forget that reason – one of our species’ most important assets – is a vital cognitive capacity that shouldn’t be left by the wayside. Psychologists have done insightful work demonstrating the role of the cognitive unconscious, but that is no reason to disregard the power of human rationality.

Why We Reason

Last year Hugo Mercier and Dan Sperber published a paper in Behavioral and Brain Sciences that was recently featured in the New York Times and Newsweek. It has since spurred a lot of discussion in the cognitive science blogosphere among psychologists and science writers alike. What’s all the hype about?

For thousands of years human rationality was seen as a means to truth, knowledge, and better decision making. However, Mercier and Sperber are saying something different: reason is meant to persuade others and win arguments.

Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade…. reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found (2010).

Though Mercier and Sperber’s theory is novel, it is not entirely original. In the western tradition, similar theories of human rationality date back to at least ancient Greece with the Sophists.

Akin to modern-day lawyers, the Sophists believed that reason was a tool for convincing others of certain opinions, regardless of whether those opinions were true. They were paid to teach young Athenians rhetoric so they could have, as Plato says in Gorgias, “the ability to use the spoken word to persuade the jurors in the courts, the members of the Council, the citizens attending the Assembly – in short, to win over any and every form of public meeting.”

So why is Mercier and Sperber’s paper seen as groundbreaking if its central idea is thousands of years old? Unlike the ancient Greeks, Mercier and Sperber have a heap of psychological data to support their claims. At the heart of this data is what psychologists call confirmation bias. As the name indicates, confirmation bias is the tendency for people to favor information that conforms to their ideologies, regardless of whether it is true. It explains why Democrats would prefer to listen to Bill Clinton over Ronald Reagan, why proponents of gun control are not NRA members, and why individuals who are pro-choice only listen to or read sources that are also pro-choice. In addition, confirmation bias greatly distorts our self-perceptions; it explains why “95% of professors report that they are above average teachers, 96% of college students say that they have above average social skills… [and why] 19% of Americans say that they are in the top 10% of earners.”

If we are to think of rationality as having to do with knowledge or truth, as Socrates, Plato, and Descartes did, confirmation bias is a huge problem. If rationality really were about discovering objective truths, confirmation bias should have been weeded out by natural selection; imagine how smart we would be if we actually listened to opposing opinions and considered how they might be better than ours. Put differently, if the goal of reasoning were really to improve our decisions and beliefs and find the truth, there would be no reason for confirmation bias to exist.

Under the Sophist paradigm, however, confirmation bias makes much more sense, as do similar cognitive hiccups such as hindsight bias, anchoring, representativeness, the narrative fallacy, and many more. These biases, which began to appear in the psychological literature of the 1960s, provide “evidence [that] shows that reasoning often leads to epistemic distortions and poor decisions.” And it is from this point that Mercier and Sperber have built their ideas. Instead of thinking of faulty cognition as irrational, as many have, we can now see these biases as tools that are actually helpful. In a word, with as many opinions as there are people, our argument-oriented reasoning does a fine job of making us seem credible and legitimate. In this light, thank God for confirmation bias!

Of course, there is a downside to confirmation bias and to our rationality being oriented toward winning arguments. It causes us to get so married to some ideas – WMDs in Iraq and Doomsday events, for example – that we end up hurting ourselves in the long run. But at the end of the day, I think it is a good thing that our reasoning is so self-confirming. Without confirmation bias, we wouldn’t have a sense of right or wrong, which seems to be a necessary component for good things like laws against murder.

Finally, if you were looking to Mercier and Sperber’s thesis to improve your reasoning, you would be missing the point. Inherent in their argument is the idea that our rationality will forever be self-confirming and unconcerned with truth and knowledge. And for better or worse, this is something we all have to deal with.

Brains, Comedy, and Steve Martin

In my last post I discussed the neuroscience of music. I concluded that renowned musicians share one thing in common: they understand the importance of patterns, expectations, and prediction in music. I encourage you to read it if you have not already.

This post takes the ideas of the last – patterns, expectations, and prediction – and applies them to comedy. Comedy is made possible by creating and fulfilling expectations while considering the importance of delivery, context, and timing. Consider this joke, taken from a recent article on Discovermagazine.com.

A couple of New Jersey hunters are out in the woods when one of them falls to the ground. He doesn’t seem to be breathing; his eyes are rolled back in his head. The other guy whips out his cell phone and calls the emergency service. He gasps to the operator: “My friend is dead! What can I do?” The operator says: “Take it easy. I can help. First, let’s make sure he’s dead.” There is silence, then a shot is heard. The guy’s voice comes back on the line. He says, “OK, now what?”

Why is this funny? It starts by establishing a familiar pattern; in this case, the standard beginning-middle-punch line structure that many jokes follow. Then it creates an expectation; implicit in the statement “First, let’s make sure he’s dead” is the expectation that the hunter will do something reasonable to check whether his friend is dead. Finally, the comedy is delivered when the answer deviates from the expectation – we expected x, but we got y; that is, we never thought the surviving hunter would shoot his friend just to make sure he was dead. Most importantly, the entire joke still maintains the beginning-middle-punch line pattern.

The best jokes have the most unexpected punch lines but maintain the pattern. Neuroscientist Vilayanur S. Ramachandran explains this in his 1998 book Phantoms in the Brain. 

Despite all their surface diversity, most jokes and funny incidents have the following logical structure: Typically you lead the listener along a garden path of expectation, slowly building up tension. At the very end, you introduce an unexpected twist that entails a complete reinterpretation of all the preceding data, and moreover, it’s critical that the new interpretation, though wholly unexpected, makes as much “sense” of the entire set of facts as did the originally “expected” interpretation (Ramachandran, p. 204).

From Richard Pryor to Chris Rock, comedians rely on what Ramachandran describes. It is their ability to create and relieve tension, and to deliver the unexpected while maintaining the pattern, that makes them so funny.

Steve Martin is one of my favorite comedians and someone who understands this well. If you are familiar with Martin’s standup you will know his unique style. Like Pryor and Rock, Martin did not change the medium per se; he simply altered the expectations that defined it. For example, here is an opening bit from one of Martin’s routines: “I’d like to open up with sort of a funny comedy bit. This has really been a big one for me… I’m sure most of you will recognize the title when I mention it; it’s the Nose on Microphone routine.” Martin would then lean in, place his nose on the microphone for a few seconds, step back, take a few bows, and move on to his next joke. The “laugh came not then, but only after they realized I had already moved on to the next bit.”

Martin’s anticlimactic style ended up defining his standup. But it did not come to him in the blink of an eye; rather, it was the product of years of trial and error. He describes this in his autobiography Born Standing Up:

With conventional joke telling, there’s a moment when the comedian delivers the punch line, and the audience knows it’s the punch line, and their response ranges from polite to uproarious… These notions stayed with me for months, until they formed an idea that revolutionized my comic direction: what if there were no punch lines… What if I created tension and never released it… Theoretically, it would have to come out sometime. But if I kept denying them the formality of a punch line, the audience would eventually pick their own place to laugh, essentially out of desperation. This type of laugh seemed stronger to me, as they would be laughing at something they chose, rather than being told exactly when to laugh… My goal was to make the audience laugh but leave them unable to describe what it was that had made them laugh (Martin, p. 111-113).

Note how similar Martin’s remarks are to Ramachandran’s. Both are talking about the relationship between patterns and expectations, and both understand that something funny denies the initial expectation and challenges the observer to understand the new pattern. Like a Pryor or Rock joke, Martin’s microphone bit takes the observer down a familiar path to start but leaves her at an unfamiliar destination. Yet she is not entirely lost, for she still exists in the context of the joke. In other words, she knows that she is supposed to laugh – and she does – but she doesn’t know why.

Below is a video that illustrates an extreme example of this. Here we see Kurt Braunohler and Kristen Schaal perform a bit at the 2008 Melbourne comedy festival. After a brief opening dialogue, Braunohler and Schaal begin an impressive staging that seems to defy comedic logic. However, underneath all the repetition, Braunohler and Schaal remain committed to the same principles that Martin and all successful jokes are committed to.

This is funny for the same reason the New Jersey joke is funny – it introduces a pattern, creates an expectation, and breaks the expectation while keeping to the pattern. But the genius of Braunohler and Schaal is that they break the expectation by not breaking it. In other words, you don’t expect them to keep doing the “Kristin Schaal is a horse” dance, but they do, and that’s why it’s funny. Like Martin’s bit, the punch line is that there isn’t a punch line. Again, the audience is left in hysterics even though they couldn’t reasonably have said what is so funny. And this is one of the secrets of comedy – breaking an expectation in such an unexpected way that the audience can only respond by laughing.
