
Posts tagged ‘david hume’

Why The Future of Neuroscience Will Be Emotionless

In Phaedrus, Plato likens the mind to a charioteer who commands two horses, one irrational and crazed, the other noble and of good stock. The job of the charioteer is to control the horses and proceed toward enlightenment and truth.

Plato’s allegory sparked an idea that persisted through the next several millennia of western thought: emotion gets in the way of reason. This makes sense to us. When people act out of line, we call them irrational. No one was ever accused of being too reasonable. Around the 17th and 18th centuries, however, thinkers began to challenge this idea. David Hume turned the tables on Plato: reason, Hume said, was the slave of the passions. Psychological research of the last few decades not only confirms this view; some of it suggests that emotion is the better decision-maker.

We know a lot more about how the brain works than the ancient Greeks did, but a decade into the 21st century researchers are still debating which of Plato’s horses is in control, and which one we should listen to.

A couple of recent studies shed new light on this age-old discourse. The first comes from Michael Pham and his team at Columbia Business School. The researchers asked participants to make predictions about eight different outcomes, ranging from the American Idol finalists to the winner of the 2008 Democratic primary to the winner of the BCS championship game. They also forecast the Dow Jones average.

Pham created two groups. He told the first group to go with their guts and the second to think it through. The results were telling. In the American Idol predictions, for example, the first group correctly picked the winner 41 percent of the time, whereas the second group was right only 24 percent of the time. The high-trust-in-feeling subjects even predicted the stock market better.

Pham and his team conclude the following:

Results from eight studies show that individuals who had higher trust in their feelings were better able to predict the outcome of a wide variety of future events than individuals who had lower trust in their feelings…. The fact that this phenomenon was observed in eight different studies and with a variety of prediction contexts suggests that this emotional oracle effect is a reliable and generalizable phenomenon. In addition, the fact that the phenomenon was observed both when people were experimentally induced to trust or not trust their feelings and when their chronic tendency to trust or not trust their feelings was simply measured suggests that the findings are not due to any peculiarity of the main manipulation.

Does this mean we should always trust our intuition? It depends. A recent study by Maarten Bos and his team identified an important nuance when it comes to trusting our feelings. They asked one hundred and fifty-six students to abstain from eating or drinking (except water) for three hours before the study. When they arrived, Bos divided the participants into two groups: one drank a sugary can of 7-Up, the other a sugar-free drink.

After waiting a few minutes to let the sugar reach the brain, the students assessed four cars and four jobs, each with 12 key aspects that made them more or less appealing (Bos designed the study so that one choice was clearly optimal, which gave him a measure of how well participants decided). Next, half of the subjects in each group spent four minutes thinking about the jobs and cars (the conscious thought condition), while the other half watched a wildlife film (to prevent them from consciously thinking about the jobs and cars).

Here’s the BPS Research Digest on the results:

For the participants with low sugar, their ratings were more astute if they were in the unconscious thought condition, distracted by the second nature film. By contrast, the participants who’d had the benefit of the sugar hit showed more astute ratings if they were in the conscious thought condition and had had the chance to think deliberately for four minutes. ‘We found that when we have enough energy, conscious deliberation enables us to make good decisions,’ the researchers said. ‘The unconscious on the other hand seems to operate fine with low energy.’

So go with your gut if your energy is low. Otherwise, listen to your rational horse.

Here’s where things get difficult. By now the debate over the roles reason and emotion play in decision-making is well documented. Psychologists have written thousands of papers on the subject, and it shows up in the popular literature as well. From Antonio Damasio’s Descartes’ Error to Daniel Kahneman’s Thinking, Fast and Slow, the lay audience knows about both the power of thinking without thinking and our predictable irrationality.

But what exactly is being debated? What do psychologists mean when they talk about emotion and reason? Joseph LeDoux, author of popular neuroscience books including The Emotional Brain and Synaptic Self, recently published a paper in the journal Neuron that flips the whole debate on its head. “There is little consensus about what emotion is and how it differs from other aspects of mind and behavior, in spite of discussion and debate that dates back to the earliest days in modern biology and psychology.” Yes, what we call emotion roughly correlates with certain parts of the brain; it is usually associated with activity in the amygdala and other systems. But we might be playing a language game, and neuroscientists are reaching a point where understanding the brain requires more sophisticated language.

As LeDoux sees it, “If we don’t have an agreed-upon definition of emotion that allows us to say what emotion is… how can we study emotion in animals or humans, and how can we make comparisons between species?” The short answer, according to the NYU professor, is “we fake it.”

With this in mind, LeDoux introduces a new term to replace emotion: survival circuits. Here’s how he explains it:

The survival circuit concept provides a conceptualization of an important set of phenomena that are often studied under the rubric of emotion—those phenomena that reflect circuits and functions that are conserved across mammals. Included are circuits responsible for defense, energy/nutrition management, fluid balance, thermoregulation, and procreation, among others. With this approach, key phenomena relevant to the topic of emotion can be accounted for without assuming that the phenomena in question are fundamentally the same or even similar to the phenomena people refer to when they use emotion words to characterize subjective emotional feelings (like feeling afraid, angry, or sad). This approach shifts the focus away from questions about whether emotions that humans consciously experience (feel) are also present in other mammals, and toward questions about the extent to which circuits and corresponding functions that are relevant to the field of emotion and that are present in other mammals are also present in humans. And by reassembling ideas about emotion, motivation, reinforcement, and arousal in the context of survival circuits, hypotheses emerge about how organisms negotiate behavioral interactions with the environment in process of dealing with challenges and opportunities in daily life.

Needless to say, LeDoux’s paper changes things. Because emotion is an unworkable term for science, neuroscientists and psychologists will have to understand the brain on new terms. And when it comes to the reason-emotion debate – which of Plato’s horses we should trust – they will have to rethink certain assumptions and claims. The difficult part is that we humans, by our very nature, cannot help but resort to folk psychology to explain the brain. We deploy terms like soul, intellect, reason, intuition and emotion, but these words describe very little. Can we understand the brain even though our words may never suffice? The future of cognitive science might depend on it.


“Who’s There?” Is The Self A Convenient Fiction?

For a long time people thought that the self was unified and eternal. It’s easy to see why. We feel as if we have an essence; we grow old, gain and lose friends, and change preferences, yet we seem to remain the same person from day one.

The idea of the unified self has had a rough few centuries, however. During the Enlightenment, Hume and Locke challenged the Platonic idea that human nature derives from an essence; in the 19th century Freud declared that the ego “was not even the master of his own house”; and after decades of empirical research, neuroscience has yet to find anything that scientists would call unified. As clinical neuropsychologist Paul Broks says, “We have this deep intuition that there is a core… But neuroscience shows that there is no center in that brain where things do all come together.”

One of the most dramatic demonstrations of the illusion of the unified self comes from Michael Gazzaniga, who showed that each hemisphere of the brain exercises free will independently when surgeons cut the corpus callosum. Gazzaniga discovered this with a simple experiment. When he flashed the word “WALK” to the right hemisphere of split-brain patients, they walked out of the room. But when he asked them why they walked out, they responded with trivial remarks such as “To go to the bathroom” or “To get a Coke.” Here’s where things got weird. When he flashed a chicken to patients’ left hemisphere (in the right visual field) and a wintry scene to their right hemisphere (in the left visual field), and asked them to select pictures that went with what they saw, he found that their left hand correctly pointed to a snow shovel and their right hand correctly pointed to a chicken. However, when the patients were asked to explain why they pointed at the pictures, they responded with something like, “That’s easy. The shovel is for cleaning up the chicken.”

Nietzsche was right: “We are necessarily strangers to ourselves…we are not ‘men of knowledge’ with respect to ourselves.”

But you don’t need a severed corpus callosum or a deep understanding of the Genealogy of Morals (which I don’t have) to appreciate how modular our selves are. Our everyday inner monologues are telling enough. We weigh the pros and cons of fatty meats against nutritious vegetables even though we know which is healthier. When we have the chance to procrastinate we usually take it and rationalize it as a good decision. We cheat, lie, laze about and eat Big Macs knowing full well how harmful these things are. When it comes to what we think about, what we like and what we do, Walt Whitman captured our natural hypocrisies and inconsistencies with this famous and keenly insightful remark: “Do I contradict myself? Very well then I contradict myself, (I am large, I contain multitudes.)”

That the unified self is largely an illusion is not necessarily a bad thing. The philosopher and cognitive scientist Dan Dennett suggests that it is a convenient fiction. I think he’s right. With it we are able to maintain stories and narratives that help us make sense of the world and our place in it. This is a popular conviction nowadays. As the prominent evolutionary psychologist Steven Pinker explains in one of his bestsellers, “each of us feels that there is a single “I” in control. But that is an illusion that the brain works hard to produce.” In fact, without the illusion of selfhood we all might suffer the same fate as Phineas Gage, who was, as anyone who has taken an introductory psychology course might remember, “no longer Gage” after a tragic railroad accident turned his ventromedial prefrontal cortex into a jumbled stew of disconnected neurons.

However, according to the British philosopher Julian Baggini in a recent TED lecture, the self might not be an illusion after all. The question Baggini asks is whether a person should think of himself as a thing that has a bunch of different experiences or as a collection of experiences. This is an important distinction. Baggini explains that “the fact that we are a very complex collection of things does not mean we are not real.” He invites the audience to consider the metaphor of a waterfall. In many ways a waterfall is like the supposedly illusory self: it is not permanent, it is always changing and it is different at every instant. But this doesn’t mean that a waterfall is an illusion or that it is not real. What it means is that we have to understand it as a history, as having certain things that stay the same, and as a process.

Baggini is trying to save the self from neuroscience, which is admirable considering that neuroscience continues to show how convoluted our brains are. I am not sure if he is successful – argument by metaphor can only go so far, and empirical data wins at the end of the day – but I like the idea that personal and neurological change and inconsistency don’t imply an illusion of identity. In this age of cognitive science it’s easy to subscribe to Whitman’s doctrine – that we are constituted by multitudes; it takes a brave intellect, on the other hand, to hang on to what Freud called our “naïve self-love.”

Shakespeare opened Hamlet with the huge and beautifully complex query, “Who’s there?” Four hundred years later Baggini has an answer, but many of us are still scratching our heads.


Our Modular Selves: Science and the Philosophy of Self

One of the most enduring themes in western thought is the idea of The Self. Who am “I,” and what does it mean “to be”? Philosophers have asked these questions for centuries. They are thought-provoking questions indeed, but most discussions of The Self make the mistake of assuming that it is something. The reality is that human beings are influenced by many modules that are constantly in conflict. As the prominent evolutionary psychologist Robert Kurzban explains, “the very constitution of the human mind makes us massively inconsistent.” We think that there is an “I” behind all of our cognition – a ghost in the machine – but this is largely a delusion.

Take our moral intuitions. Sometimes we are morally sound. In one experiment, researchers found that complete strangers stamped and mailed deliberately dropped envelopes one fifth of the time. Psychologist Jenifer Kunz found that when people receive a Christmas card from a family they do not know, they usually send one back in return. And in the famous Ultimatum experiment, in which people are given $20 and the choice either to take all of it, split it $18/$2, or split it $10/$10, most split it evenly. Moreover, moral psychologists are demonstrating that babies as young as five to six months old have a “moral sense” towards people and objects outside their kin.

On the other hand, consider Douglas Kenrick’s infamous study, which asked participants how often they thought about killing other people. Partnering with Virgil Sheets, Kenrick polled 760 Arizona State University students and found that “the majority of those smiling, well-adjusted, all-American students were willing to admit to having had homicidal fantasies. In fact 76 percent of the men reported such fantasies… [and] 62 percent of the so-called gentler sex had also contemplated murder at least once.” Furthermore, as Kenrick explains, “when David Buss and Josh Duntley later surveyed a sample of students at the University of Texas, they found similarly high percentages of men (79 percent) and women (58 percent) admitting to homicidal fantasies.”

These findings aren’t surprising. Devil-angel, heaven-hell, and good cop-bad cop dichotomies have illustrated our inner conflicts for centuries. But isn’t it strange that these divisions occur inside something we consider singular and unified? Of course, we are far from being singular or unified. As Jonathan Haidt says, “we assume that there is one person in each body, but in some ways we are each more like a committee whose members have been thrown together working at cross purposes.” This should also seem obvious. Just think about weighing the short term against the long term: Should I eat pizza now or go for a run? Should I keep drinking or go home to avoid the hangover? Should I continue working this job even though I don’t like it? Should I stay in this relationship even though it isn’t what it once was? To borrow an example from Kurzban, think of a few verbs to complete the sentence, I really like to ______ but afterwards I wish I hadn’t, and compare them to verbs that could complete the sentence, I don’t like to _______ but afterwards I’m glad I did. The first set of verbs sharply contrasts with the second, the former illustrating the “impatient modules” and the latter the “patient modules.”

Why the inconsistencies? Put simply, we speak to ourselves, not ourself, and these inner dialogues depend on context and current states. For example, would you pay 30 dollars for a slice of pizza? Probably not. But what if you were starving on the African savannah and you happened to have 30 dollars? Or, to put it in more relatable terms, what if you were on one of those long-haul flights across the Pacific and, after hours without food, the person next to you pulled out a delicious slice of warm pizza and started eating it? Suddenly, 30 dollars doesn’t seem so bad. Context and circumstance matter, duh. But what’s important is that the more specific our understanding of ourselves gets, the less general we can be in our explanations.

For example, let’s say you asked me whether I liked coffee or beer. It depends. If it’s 9am I would say coffee, and if it’s 9pm I would say beer. However, if I were cramming for a final exam, a 9pm coffee would sound very appealing. But let’s say I had just gotten a bout of food poisoning and anything I put in my stomach was coming right back up. Now both sound horrible. OK, let’s say it’s Friday at 9pm, I don’t have to study for a test, I don’t have food poisoning, but I am eating dinner with my girlfriend’s family and they look down on alcohol. I would avoid beer like the plague. So I like beer as long as it’s 9pm, I don’t have a final exam to study for, I don’t have food poisoning and I’m not eating dinner with my girlfriend’s anti-alcohol family. I admit I’m exaggerating, but there is an important point in drawing out these hypotheticals: “the more we specify the context… the less we can generalize… [but] the less we specify… the more likely we are to miss something about how [we] decide.” This is one dilemma of economic theory in a nutshell: we say that humans are rational in order to understand how we decide as consumers, even though we know this isn’t entirely true. Still, we don’t sacrifice the entire theory just because it doesn’t apply universally.

Back to The Self.

The Scottish philosopher David Hume once again got it right long before the science, anticipating Kurzban’s and Haidt’s points exactly. Hume held that The Self is more like a commonwealth, which maintains its identity not through some sort of essence or soul, as Plato would have said, but through different yet related elements. As Hume explained, “we are never intimately conscious of anything but a particular perception; man is a bundle or collection of different perceptions which succeed one another with an inconceivable rapidity and are in perpetual flux and movement.” This is not to deny the importance of the self. As Daniel Dennett argues, the idea of the self, though delusional, acts as a convenient fiction. We tell ourselves stories to make sense of the world and our place in it; many times this is a good thing even though it is scientifically wishy-washy. But let’s at least keep in mind that The Self isn’t actually a thing the next time we think about our identities from the armchair.


Is It Possible To Predict Black Swans?

In 1943, IBM chairman Thomas Watson declared that “there is a world market for maybe five computers.” In 1962, the Decca recording company said of the Beatles, “we don’t like their sound, and guitar music is on the way out.” And in 1974, Margaret Thatcher proclaimed that “it will be years — not in my time — before a woman will become Prime Minister.”

Why are we so bad at predicting the future?

Whenever we speculate about future events we are bound by two things: our rationality in the present and our memory of the past. The problem is that (1) cognitive biases and heuristics distort our rationality, and (2) our memory is highly unreliable, subject to its own set of prejudices. (There are mountains of psychological studies exploring these two points; the specifics aren’t important here.)

Philosophers knew about this all along, even though they didn’t have the empirical data. They called our inability to have knowledge of the future the problem of induction. For example, if I asked you how you know the sun will rise tomorrow, you would likely tell me that you know from experience; you’ve seen it rise every single day of your life, so you induce from the past that it will do so again. The worry is that relying on past events does not guarantee knowledge of future events (e.g., just because the sun has risen every day doesn’t mean it will tomorrow). The philosopher David Hume summarizes:

The supposition that the future resembles the past, is not founded on arguments of any kind, but is derived entirely from habit.

Here’s another, more dramatic and realistic example. Pretend you’re a turkey. Up until Thanksgiving you would have no reason to believe that you and all your friends would be axed. Then, one day, out of nowhere, it’s turkey genocide.

Past experience couldn’t have predicted what was about to happen, and this turned out to be hugely problematic for you, the turkey. This problem, which some call Black Swan Theory, is everywhere: the internet, World War One, Harry Potter, the rise of global religions. What binds these phenomena together is their unpredictability and their high impact on society – nobody saw them coming and they ended up changing the world.

There are three elements to a black swan event:

  • They are very difficult to predict.
  • They have a high impact on society.
  • After the event, people tend to rationalize it, giving the illusion that it was expected.

The fact of the matter is that we don’t know why black swans happen, and we don’t know what the next one will be or when it will arrive. That is the nature of black swans: before they occur they are extremely unpredictable, but after they occur we explain them as if we saw them coming. As Nassim Taleb says, they are prospectively unpredictable but retrospectively predictable.

The frustrating aspect of Black Swan Theory is that it provokes people to ask when the next black swan will come. This completely misses the point. Asking when the next black swan will happen, or whether it is possible to avoid black swans, tells me that you don’t understand Black Swan Theory. By definition, black swans can’t be predicted or avoided. Yet people continue to believe that both are possible.

Why?

I think it goes back to the narrative fallacy, which is what the third bullet point highlighted. As Taleb explains, the narrative fallacy describes our “limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them.” I single out “limited ability” to suggest that the narrative fallacy is unavoidable. We simply cannot assess the past without creating a narrative that fits with the present and projects the future. For better or for worse, this means we will continue to believe that past experience guarantees knowledge of the future. It’s a strange paradox: we will always believe that we have a grip on the future even when we are repeatedly surprised by what it brings us.

