Posts tagged ‘Jonathan Haidt’

What Believers and Atheists Can Learn From Each Other (co-written with Rabbi Geoff Mitelman)

Here’s a forthcoming article for the Huffington Post religion blog that I’ve written with Rabbi Geoff Mitelman, a friend and fellow cognitive science enthusiast. We discuss atheism and the psychology of belief. Check out his blog, Sinai and Synapses.

Rabbi Geoffrey Mitelman: It’s inherently challenging for believers and atheists to have productive conversations. Discussing topics such as belief and nonbelief, the potential irrationality of religion, or the limits of scientific knowledge is difficult since each side often ends up more firmly entrenched in their own worldview.

But one bright person interested in broadening the conversation is Sam McNerney, a science writer who focuses on cognitive science and an atheist interested in religion from a psychological point of view.

I found Sam through his writing on ScientificAmerican.com, and started reading his blog Why We Reason and his posts on BigThink.com. We discovered that even though we approached religion from different perspectives, we had great respect for each other.

So as two people with different religious outlooks we wondered: what can we learn from each other?

Sam McNerney: There are many things we can learn. Let’s take one: the role of authority.

A recent New York Times article points out that secular liberal atheists tend to conflate authority, loyalty and sanctity with racism, sexism and homophobia. It’s not difficult to see why. Societies suffer when authority figures, motivated by sacred values and religious beliefs, forbid their citizens from challenging the status quo. But some degree of respect for authority, and for the principles it upholds, is necessary if societies are to maintain order and justice and function properly. The primatologist Frans de Waal explains it this way: “Without agreement on rank and a certain respect for authority there can be no great sensitivity to social rules, as anyone who has tried to teach simple house rules to a cat will agree.” (Haidt, 106)

Ironically, atheists’ steadfast allegiance to rationality, secular thinking and the importance of open-mindedness blinds them to important religious values including respect for authority. As a result, atheists tend to confuse authority with exploitation and evil and undervalue the vital role authority plays in a healthy society.

Geoff: You accurately bring up one aspect of why organized religion can be so complicated: it is intertwined with power. And I’m glad you note that authority and power are not inherently bad when it comes to religion. In fact, as you also say, a certain degree of authority is necessary.

To me, the real problem arises when religion adds another element into the mix: certainty. It’s a toxic combination to have religious authorities with the power to influence others claiming to “know” with 100% certainty that they’re right and everyone else is wrong.

One thing I learned from several atheists is the importance of skepticism and doubt. Indeed, while certainty leads to arrogance, uncertainty leads to humility. We open up the conversation and value diverse experiences when we approach the world with a perspective of “I’m not sure” or “I could be wrong.”

Recently, astrophysicist Adam Frank wrote a beautiful piece on NPR’s blog 13.7 about how valuable uncertainty can be:

Dig around in most of the world’s great religious traditions and you find people finding their sense of grace by embracing uncertainty rather than trying to bury it in codified dogmas…

Though I am an atheist, some of the wisest people I have met are those whose spiritual lives (some explicitly religious, some not) have forced them to continually confront uncertainty. This daily act has made them patient and forgiving, generous and inclusive. Likewise, the atheists I have met who most embody the ideals of free inquiry seem to best understand the limitations of every perspective, including their own. They encounter the ever shifting ground of their lives with humor, good will and compassion.

Certainty can be seductive, but it hurts our ability to engage with others in constructive ways. Thus when religious people talk about God, belief or faith, we have to approach the conversation with a little humility and recognize that we don’t have a monopoly on the truth. In the words of Rabbi Brad Hirschfield, we need to realize that another person doesn’t have to be wrong for us to be right.

This doesn’t mean believers and atheists will agree on the role of religion in society, the validity of a particular belief system, or even the very existence of God. In fact, believers and atheists will almost certainly continue to vehemently disagree about these questions. But we have to remember that not all disagreements are bad. Some arguments are quite beneficial because they help us gain a deeper understanding of reality, encourage clearer thinking, and broaden people’s perspectives.

The Rabbis even draw a distinction between two different kinds of arguments. Arguments they call “for the sake of Heaven” will always be valuable, while arguments that are only for self-aggrandizement will never be productive (Avot 5:20). So I’m not interested in arguments that devolve into mocking, ridicule, name-calling or one-upmanship. But I’d gladly participate in any discussion if we are arguing about how we make ourselves and this world better, and would actively strive to involve whoever wants to be part of that endeavor, regardless of what they may or may not believe.

Sam: You are right to point out that atheists and believers alike, under the illusion of certainty, smother potentially productive dialogue with disrespectful rhetoric. What’s alarming is that atheism in the United States is now more than non-belief. It’s an intense and widely shared sentiment that belief in God is not only false but ridiculous. Pointing out how irrational religion can be is entertaining for too many.

There’s no doubt that religious beliefs can have negative behavioral consequences, so atheists are right to criticize many of religion’s epistemological claims. But I’ve learned from believers, and from my background in cognitive psychology, that faith-based beliefs are not necessarily irrational.

Consider a clever study recently conducted by Kevin Rounding of Queen’s University in Ontario that demonstrates how religion helps increase self-control. In two experiments participants (many of whom identified as atheists) were primed with a religious mindset – they unscrambled short sentences containing words such as “God,” “divine” and “Bible.” Compared to a control group, they were able to drink more sour juice and were more willing to accept $6 in a week instead of $5 immediately. Similar lines of research show that religious people are less likely to develop unhealthy habits like drinking, taking drugs, smoking and engaging in risky sex.

Studies also suggest that religious and spiritual people, especially those living in the developing world, are happier and live longer, on average, than non-believers. Religious people also tend to feel more connected to something beyond themselves, a sentiment that contributes significantly to well-being.

It’s unclear whether these findings are correlational or causal – it’s likely that many of the benefits of believing in God arise not from the beliefs per se but from the strong social ties that religious communities do such a good job of fostering. Whatever the case, this research should make atheists pause before they dismiss all religious beliefs as irrational or ridiculous.

Geoff: It’s interesting — that actually leads to another area where atheists have pushed believers in important ways, namely, to focus less on the beliefs themselves, and more on how those beliefs manifest themselves in actions. And to paraphrase Steven Pinker, the actions that religious people need to focus on are less about “saving souls,” and more about “improving lives.”

For much of human history the goal of religion was to get people to believe a certain ideology or join a certain community. “Being religious” was a value in and of itself, and was often simply a given, but today, we live in a world where people are free to choose what they believe in. So now, the goal of religion should be to help people find more fulfillment in their own lives and to help people make a positive impact on others’ lives.

It’s important to note that people certainly do not need religion to act morally or find fulfillment. But as Jonathan Haidt writes in his new book The Righteous Mind, religion can certainly make it easier.

Haidt argues that our mind is like a rider atop an elephant: our moral deliberations (the rider) are post-hoc rationalizations of our moral intuitions (the elephant). The key to his metaphor is that intuitions come first (and are much more powerful) and strategic reasoning comes afterward.

We need our rider because it allows us to think critically. But our elephant is also important because it motivates us to connect with others who share a moral vision. Ultimately, if we are striving to build communities and strengthen our morals, we cannot rely exclusively on either the rider or the elephant; we need both. As Haidt explains:

If you live in a religious community, you are enmeshed in a set of norms, institutions and relationships that work primarily on the elephant to influence your behavior. But if you are an atheist living in a looser community with a less binding moral matrix, you might have to rely somewhat more on an internal moral compass, read by the rider. That might sound appealing to rationalists, but it is also a recipe for…a society that no longer has a shared moral order. [And w]e evolved to live, trade and trust within shared moral matrices. (Haidt, 269)

Since religion is a human construct, with its “norms, institutions and relationships,” it can be used in a variety of different ways. It can obviously be used to shut down critical thinking and oppress others. But as you mention, religion has positive effects on well-being, and religious beliefs correlate with a sense of fulfillment. Perhaps the job of religion, then, should be giving us a common language, rituals, and communities that reinforce and strengthen our ability to become better human beings and find joy and meaning in our lives.

Ultimately, we don’t have to agree with someone in order to learn from them. As Ben Zoma, a 2nd century Jewish sage, reminds us: “Who is wise? The person who learns from all people.” (Avot 4:1) When we are willing to open ourselves up to others, we open ourselves up to new ideas and different perspectives.

Indeed, I have come to believe that our purpose as human beings – whether we identify as a believer, an atheist, or anything in between – is to better ourselves and our world. And any source of knowledge that leads us to that goal is worth pursuing.

Political Empathy & Moral Matrices

It’s difficult to make objective predictions about our future self. No matter how hard we try, we’re always influenced by the present. In one study, for example, researchers phoned people around the country and asked them how satisfied they were with their lives. They found that “when people who lived in cities that happened to be having nice weather that day imagined their lives, they reported that their lives were relatively happy; but when people who lived in cities that happened to be having bad weather that day imagined their lives, they reported that their lives were relatively unhappy.”

Similarly, a few years ago researchers went to a local gym and asked people who had just finished working out if food or water would be more important if they were lost in the woods. Like good social scientists, they asked the same question to people who were just about to work out. They found that 92 percent of the folks who just finished working out said that water would be more important; only 61 percent of people who were about to work out made the same prediction.

Physical states are difficult to transcend, and they often cause us to project our feelings onto everyone else. If I’m cold, you must be too. If I like the food, you should too. We are excellent self-projectors (or maybe that’s just me). Sometimes there are more consequential downsides to this uniquely human ability. And this brings me to a new study led by Ed O’Brien out of the University of Michigan recently published in Psychological Science. (via Maia Szalavitz at Time.com)

The researchers braved the cold for the first experiment. They approached subjects at a bus stop in January (sometimes the temperature was as low as -14 degrees F) and asked them to read a short story about a hiker who was taking a break from campaigning when he got lost in the woods without adequate food, water and clothing. For half of the subjects the lost hiker was a left leaning and pro-gay rights Democrat; the other half read about a right-wing Republican. Next, the researchers asked the subjects their political views and which feeling was most unpleasant for the stranded hiker – being thirsty, hungry or cold. (For female participants, the hiker was described as female; for men, the hiker was male.) While these chilly interviews were being conducted O’Brien and his team ran the same study in a cozy library. Did the two groups show different answers?

The first thing O’Brien found was consistent with the gym study: 94 percent of the people waiting for the bus said the cold was the most unpleasant feeling for the hiker, compared to only 57 percent of the library dwellers. Here’s where things got interesting: “If participants disagreed with the hiker’s politics… their own personal physical state had no bearing on their response: people chose the cold in equal numbers, regardless of where they were interviewed.” In other words, we don’t show as much empathy towards people who don’t share our political beliefs.

Their findings are disheartening given the current political climate in the United States. If we cannot empathize with someone who doesn’t share our political views, how are we supposed to engage in rational discourse with them? In order to work out our differences, it seems like we need to first recognize that we are the same deep down.

The larger problem is that compassion, empathy and moral sentiments towards other people binds and blinds. As one author says, “we all get sucked into tribal moral communities, circling around something sacred and then sharing post-hoc arguments about why we are so right and they are so wrong. We think the other side is blind to truth, reason, science, and common sense, but in fact everyone goes blind when talking about their sacred objects.”

How do we break out of our political matrices? Here’s one idea: let’s take the red pill and realize that we can’t all be right, while remembering that we all have something to contribute. This is something the Asian religions nailed on the head. Yin and Yang aren’t enemies; like night and day, they are necessary for the functioning of the world. Vishnu the preserver (who stands for conservative principles) and Shiva the destroyer (who stands for liberal principles), two of the high gods of Hinduism, cooperate to preserve the universe. It’s a cliché worth repeating: let’s work together to get along.

Religion, Evolution & What The New Atheists Overlook

The lancet fluke (Dicrocoelium dendriticum) is a clever little parasite. To reproduce, it finds its way into the stomach of a sheep or cow by commandeering an ant’s brain. Once this happens, the ant exhibits strange behavior: it climbs up the nearest blade of grass until it falls, then climbs it again, and again. If the fluke is lucky, a grazing farm animal eats the grass along with the ant – a sure win for the fluke, but a sad and unfortunate loss for the six-legged insect.

Does anything like this happen with human beings? Daniel Dennett thinks so. In the beginning of his book Breaking the Spell, Dennett uses the fluke to suggest that religions survive because they influence their hosts (e.g., people) to do bad things for themselves (e.g., suicide bombing) but good things for the parasite (e.g., Islam). Implicit in Dennett’s example is that religions are like viruses, and people and societies are better off without them.

Dennett’s position is akin to the rest of the New Atheists: religion is a nasty and irrational byproduct of natural selection. This means that religious beliefs were not directly selected for by evolution any more than our noses evolved to help us keep our glasses from sliding off our faces. In the words of Pascal Boyer, “religious concepts and activities hijack our cognitive resources.” The question is: what cognitive resources influenced religion?

Most cognitive scientists agree that the Hypersensitive Agency Detection Device (abbreviated HADD) played an important role. In brief, the HADD explains why we see faces in the clouds, but never clouds in faces. Neuroscientist Dean Buonomano puts it this way: “We are inherently comfortable assigning a mind to other entities. Whether the other entity is your brother, a cat, or a malfunctioning computer, we are not averse to engaging it in conversation.” This ability endows other people, animals and inanimate objects with will and intention. The HADD produces a lot of false positives (e.g., seeing the Virgin Mary in a piece of toast), and God might be one of them.

Another feature of the human mind that religion might have co-opted is a natural propensity toward a dualistic theory of mind. Dualism is our tendency to believe that people are made up of physical matter (e.g., lungs, DNA, and atoms) as well as an underlying and internal essence. Even the strictest materialist cannot escape this sentiment; we all feel that there is a “me” resting somewhere in our cortices. A belief in disembodied spirits could have given rise to beliefs in supernatural entities that exist independent of matter. Yale psychologist Paul Bloom is a proponent of this view and supports his conclusions with experimental evidence highlighted in his book Descartes’ Baby.

Although the by-product hypothesis, as it is known, is incomplete, it all points to the same logic: “a bit of mental machinery evolved because it conferred a real benefit, but the machinery sometimes misfires, producing accidental cognitive effects that make people prone to believing in gods.”

This is an important piece of the puzzle for the New Atheists. If religion is the offshoot of a diverse set of cognitive modules that evolved for a variety of problems, then religious beliefs are nothing more than a series of neural misfires that are “correctable” with secular Enlightenment thinking.

Not everyone agrees. The evolutionary biologists David Sloan Wilson and Edward O. Wilson propose that religiosity is a biological adaptation that created communities by instilling a “one for all, all for one” mentality in its members. This is important because it allowed group members to function as a superorganism, which in turn gave them an advantage on the African savannah. “An unshakable sense of unity among… warriors,” Buonomano says, “along with certainty that the spirits are on their side, and assured eternity, were as likely then, as they are now, to improve the chances of victory in battle.” The binding power of religion would have also helped communities form objective moral codes – do unto others as you would have others do unto you – and protected against free riders.

Jonathan Haidt is making a name for himself by advocating this point. In addition to the group selection hypothesis, Haidt points to our species’ ability to experience moments of self-transcendence. The world’s religions, he believes, are successful because they found a way to facilitate such experiences. Here’s how he explained it in a recent TED talk:

If the human capacity for self-transcendence is an evolutionary adaptation, then the implications are profound. It suggests that religiosity may be a deep part of human nature. I don’t mean that we evolved to join gigantic organized religions — that kind of religion came along too recently. I mean that we evolved to see sacredness all around us and to join with others into teams that circle around sacred objects, people and ideas. This is why politics is so tribal. Politics is partly profane, it’s partly about self-interest. But politics is also about sacredness. It’s about joining with others to pursue moral ideals. It’s about the eternal struggle between good and evil, and we all believe we’re on the side of the good.

What’s interesting about Haidt’s angle is that it sheds an unflattering light on the Enlightenment and secular ideals that western civilization was founded on. We exalt liberty, individualism and the right to pursue our self-interest. But are we ignoring our innate desire to be part of something greater? Are we denying our groupish mentalities? The modern world gives us fixes – think big football games or raves – but I think some atheists are deprived.

And this brings me back to the fluke and the New Atheists. If Haidt is right, and our religiosity was an evolutionary adaptation, then religious beliefs are a feature of, not a poison to, our cognition. The fluke, therefore, is not a parasite but an evolutionary blessing that facilitated the creation of communities and societies. This is not to deny all the bloodshed on behalf of religion. But if religion is an adaptation and not a byproduct, then “we cannot expect people to abandon [it] so easily.”

The Irrationality Of Irrationality

Reason has fallen on hard times. After decades of research psychologists have spoken: we humans are led by our emotions, we rarely (if ever) decide optimally and we would be better off if we just went with our guts. Our moral deliberations and intuitions are mere post-hoc rationalizations; classical economic models are a joke; Hume was right, we are the slaves of our passions. We should give up and just let the emotional horse do all the work.

Maybe. But sometimes it seems like the other way around. For every book that explores the power of the unconscious, another explains how predictably irrational we are when we think without thinking; our intuitions deceive us and we are fooled by randomness, but sometimes it is better to trust our instincts. Indeed, if a Martian briefly compared the subtitles of the most popular psychology books of the last decade he would quickly be confused. Reading the introductions wouldn’t help him either; keeping track of the number of straw men would be difficult for our celestial friend. So, he might ask, over the course of history have humans always thought that intelligence was deliberate or automatic?

When it comes to thinking things through or going with your gut there is a straightforward answer: It depends on the situation and the person. I would also add a few caveats. Expert intuition cannot be trusted in the absence of stable regularities in the environment, as Kahneman argues in his latest book, and it seems like everyone is equally irrational when it comes to economic decisions. Metacognition, in addition, is a good idea but seems impossible to consistently execute.

However, unlike our Martian friend who tries hard to understand what our books say about our brains, the reason-intuition debate is largely irrelevant for us Earthlings. Yes, many have a sincere interest in understanding the brain better. But while the lay reader might improve his decision-making a tad and be able to explain the difference between the prefrontal cortex and the amygdala, the real reason millions have read these books is that they are very good.

The Gladwells, Haidts and Kahnemans of the world know how to captivate and entertain the reader because, like any great author, they prey on our propensity to be seduced by narratives. Explaining cognitive capacities in terms of agents or systems makes the brain much easier to understand. However, framing the latest psychology or neuroscience findings as a story with characters tends to encourage a naïve understanding of the so-called most complex entity in the known universe. The authors know this, of course. Kahneman repeatedly makes it clear that “system 1” and “system 2” are literary devices, not real parts of the brain. But I can’t help but wonder, as Tyler Cowen did, if deploying these devices makes the books themselves part of our cognitive biases.

The brain is also easily persuaded by small amounts of information. If one could sum up judgment and decision-making research it would go something like this: we require only a tiny piece of information to confidently form a conclusion and take on a new worldview. Kahneman’s acronym WYSIATI – what you see is all there is – captures this well. This is precisely what happens the moment readers finish the latest book on intuition or irrationality; they remember the sound bite and understand brains only through it. Whereas the hypothetical Martian remains confused, the rest of us happily walk out of our local Barnes and Noble, or even worse, finish watching the latest TED talk with the deluded feeling that now we “get it.”

Many times, to be sure, this process is a great thing. Reading and watching highbrow lectures is hugely beneficial intellectually speaking. But let’s not forget that exposure to X is not knowledge of X. The brain is messy; let’s embrace that view, not a subtitle.

Passions, Reason & Moral Hypocrisy

Most of us think we are morally sound. If we see an injustice, we’ll step in; if we are given the opportunity to cheat, we won’t. Or so we say. Psychological research demonstrates that in certain situations we tend to twist our reasoning to position ourselves as morally superior to others even when we have acted otherwise.

In one experiment conducted by David DeSteno and Piercarlo Valdesolo, participants were told that they would be performing one of two tasks; the first was short and fun while the second was long and hard. To induce a small yet significant (and later very revealing) moral dilemma, DeSteno and Valdesolo let half the participants decide which task they would perform, knowing that the other task would be allocated to another participant. (They also had the option of letting a computer randomly choose how the tasks would be distributed.) After they finished assigning the task, participants were asked to rate how fair they were. Meanwhile, the participants who were at the receiving end of the task allocation were asked to rate how fair the allocating participants were. It doesn’t take a lot of foresight to see where this is going.

The first thing DeSteno and Valdesolo found was in line with their previous research: only about 8 percent of participants acted altruistically – what an objective set of eyes would call “fair”. Not a great start, and it gets worse. The second thing they found was that, “moral hypocrisy emerged in the control conditions; the same fairness transgression was judged to be substantially more moral when enacted by the self than when enacted by another.” In other words, participants who were in charge of allocating the tasks usually believed that they decided fairly no matter what their decision was. In sharp contrast were the participants who had no say in the process. They believed that the delegating participants were not fair. The lesson here is that we are all “moral hypocrites;” we claim to be morally sound, and when we’re not, we rationalize to improve our moral stature to others and ourselves. Again, not a big surprise.

What DeSteno and Valdesolo were really after was a better understanding of the dual-process model of moral judgment, which understands our moral judgments as products of both our intuitive and deliberate capacities. When it comes to assessing moral situations we have a gut reaction immediately followed by a more deliberate line of reasoning. For example, when someone asks us if killing an innocent person is wrong, we know right away that the answer is yes, but it usually takes a few moments to think of reasons why this is true. This is not to say that these two systems (system 1 and system 2, as they are referred to in the popular literature) are neurologically separate, but it does suggest that they are not necessarily on the same page at all times. Understanding their relationship is key to understanding how humans make moral judgments.

To tease out how these two systems handle moral judgments, DeSteno and Valdesolo incorporated a twist. They replicated the experiment, but the second time around half of the participants had to make fairness judgments under cognitive load. (They had to memorize a string of digits; the idea is that the “rational” brain is kept busy memorizing the digits, thereby freeing up the “intuitive” brain.) They found that under cognitive load, which made reasoning very difficult, the ratings were identical, showing no signs of “moral hypocrisy.”

DeSteno and Valdesolo conclude:

The present study provides strong evidence that moral hypocrisy is governed by a dual-process model of moral judgment wherein a prepotent negative reaction to the thought of a fairness transgression operates in tandem with higher order processes to mediate decision making. Hypocrisy readily emerged under normal processing conditions, but disappeared under conditions of cognitive constraint. Inhibiting control prevented a tamping down or override of the intuitive aversive response to the transgression. Of import, these findings rule out the possibility that hypocrisy derives from differences in automatic affective reactions towards one’s own and others’ transgressions. Rather, when contemplating one’s own transgression, motives of rationalization and justification temper the initial negative response and lead to more lenient judgments. Motivated reasoning processes are not engaged when judging others’ violations, rendering the prepotent negative response more causally powerful and leading to harsher judgments.

So Freud had it backwards. It is our intuition – not just our rationality – that seems to have a more objective reaction to moral situations. However, the effort to understand the relationship between the passions and reason is certainly not over. If anything it has just begun, in the context of empirical research at least. From the ancient Greek philosophers to the philosophers of the 21st century, moral debates have almost always taken place in the abstract. Now there is plenty of promising science to be excited about. Are our moral judgments simply post-hoc justifications, the rational tail of the emotional dog? Or can our conscious deliberations inform, perhaps control, our moral intuitions? We’ll see what the data say.

Jonathan Haidt and the Moral Matrix: Breaking Out of Our Righteous Minds

My latest at the Scientific American guest blog:

Meet Jonathan Haidt, a professor of social psychology at the University of Virginia who studies morality and emotion. If social psychology were a sport, Haidt would be a Phil Mickelson or Roger Federer – likable, fun to watch and one of the best. But what makes Haidt one of a kind in academia is his sincere attempt to study and understand human morality from a point of view other than his own.

Morality is difficult. As Haidt writes on his website, “It binds people together into teams that seek victory, not truth. It closes hearts and minds to opponents even as it makes cooperation and decency possible within groups.” And while many of us understand this at a superficial level, Haidt takes it to heart. He strives to understand our inherent self-righteousness and morality as a collection of diverse mental modules to try to ultimately make society better off.

I had the pleasure of visiting him at his office, which is currently in Tisch Hall at NYU (Haidt is a visiting professor at Stern School of Business), to speak about his background and how he came to write his forthcoming book, The Righteous Mind: Why Good People Are Divided by Politics and Religion.

What Liberals Can Learn From Emile Durkheim

Last week I had the pleasure of visiting Jonathan Haidt, a social psychologist and author of the best-selling book The Happiness Hypothesis, in his office at New York University's Stern School of Business. Haidt has been a professor at the University of Virginia for nearly two decades, but he is in Manhattan for the next semester as a visiting professor. (It is also a homecoming; Haidt is from just outside the city.) I interviewed him about his forthcoming book The Righteous Mind: Why Good People Are Divided by Politics and Religion and wrote an article about the book and his intellectual background, which will be running on the Scientific American guest blog in a few weeks or so. While preparing for the interview and writing the article I learned a lot about the French sociologist Emile Durkheim, who along with Max Weber is considered one of the founders of sociology.

Durkheim is perhaps best known for studying the factors that contributed to suicide in late-nineteenth-century Europe. As Haidt explains in The Happiness Hypothesis, all of the data Durkheim collected can be summarized in one word: constraints. No matter how Durkheim shuffled the data, he found that suicide rates increased whenever people had fewer social constraints. Specifically, he found that Catholics and Jews (who had the strongest religious obligations) committed suicide at a much lower rate than Protestants (who had the weakest religious obligations), that single men and women committed suicide more often than married men and women, and that suicide rates were higher in times of peace than in times of war. He concluded that constraints and obligations are necessary for structure and meaning. In his words: “The more weakened the groups to which [a man] belongs, the less he depends on them, the more he consequently depends only on himself and recognizes no other rules of conduct than what are founded on his private interests.”

In September of 2008, Haidt wrote an essay entitled “What Makes People Vote Republican,” which was published on the website Edge.org. He argued that there are two radically different approaches to forming a society where unrelated people can live peacefully. One approach, which is predominantly liberal, was most famously outlined by John Stuart Mill in “On Liberty,” when he argued that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” Mill’s idea – the harm principle, as it is known – is a cornerstone of liberal ideology. It is best illustrated in institutions like the United Nations and embodied by Obama’s latest speech to the UN, in which he twice quoted the General Assembly’s Universal Declaration, which states that “all human beings are born free and equal in dignity and in rights.”

The other approach, and this one is mainly conservative, is Durkheim’s. Haidt explains that a Durkheimian society “would value self-control over self-expression, duty over rights, and loyalty to one’s groups over concerns for out-groups.” He also quotes Durkheim, who warned of the dangers of anomie (normlessness) in 1897: “Man cannot become attached to higher aims and submit to a rule if he sees nothing above him to which he belongs. To free himself from all social pressure is to abandon himself and demoralize him.” Haidt holds that there are six foundations to morality, and Durkheimian societies exemplify three: Ingroup, Authority, and Purity. (Liberal foundations include harm/care and fairness/reciprocity.)

One conclusion Haidt comes to is that Democrats tend to think Republicans are “duped” into being Republican – that they are dumb, were brought up by overly strict parents, or fear openness and change. The other conclusion is that morality is diverse: it is constituted by several moral foundations, and a healthy society requires that all of them are valued. Both sides of the political spectrum do an equally bad job of understanding this because they are trapped in “moral matrices.”

I’ve considered myself liberal most of my life, but studying Durkheim has made me understand conservatives better. I still don’t agree with their (and I am generalizing here) stance toward homosexuality, abortion, religion and other social issues, but I now sincerely appreciate that limits on personal autonomy, tradition and social hierarchies are vital aspects of morality. Liberals tend to view with a critical lens a Durkheimian society in which individuals “bind themselves to each other, [suppress] each other’s selfishness, and [punish] the deviants and free riders who eternally threaten to undermine cooperative groups.” I think this is a mistake. (And a mistake that is particularly costly to the leaderless Occupy movements.)

The key piece, Haidt explained to me in his office, is that morality binds and blinds. Whether our worldviews are Millian or Durkheimian, we are attracted to those who are like-minded and frustrated with those who are not, we look for what confirms our intuitions and ignore what does not, and it is genuinely difficult for us to understand why people do not see the world as we do. Such is human nature. But this is only the pessimistic side. On the other hand, we are, as Haidt also explained, 10 percent bee (as opposed to 90 percent chimp). That is, we have the extraordinary ability, unlike most species on Earth, to cooperate and to have common purposes and goals. Don’t forget this, whatever your political lens. If you are conservative, read some Mill; if you are liberal, check out Durkheim.


What is Reason Good For? The Rationality-Intuition Debate

Reason is under attack. Lobbing the bombshells is its twin brother, which thinks unconsciously, quickly, and with less effort; I speak, of course, of intuition. It’s unclear when the rationality-intuition debate began, but its empirical roots were no doubt seeded when the cognitive revolution began, and they grew when Kahneman and Tversky started demonstrating the flaws of rational actor theory. Their heuristics and biases program, as it came to be known, wasn’t about bashing economic theory, though; it was meant to illustrate not only innocuous irrationalities but systematic errors in judgment. What emerged, now beautifully portrayed in Daniel Kahneman’s new book, is a dualistic picture of human cognition in which our mental processes are dictated by two types of thinking: system 1 thinking, which is automatic, quick and intuitive, and system 2 thinking, which is deliberate, slow and rational. We think, as the title reads, fast and slow.

It was only in the last decade that the literature on system 1 and system 2 thinking made its way to lay audiences. Gladwell’s Blink, which nicely illustrated the power of thinking without thinking – system 1 – made a splash. Ariely’s Predictably Irrational, on the other hand, spurred public debate about the flaws of going with your gut. In the wake of this literature, reason suffers from a credibility crisis. Am I rational or irrational? Should I go with my gut or think things through? Questions like these abound, and people too often forget that context and circumstance are what really matter. (If you’re making a multimillion-dollar business deal, think it through. If you’re driving down the highway, stick with your intuition!) Lately, though, I’ve seen too much reason-bashing, and I want to defend this precious cognitive capacity after reading the following comments, left in response to my last post by someone kind enough to engage my blog. His three points:

  • Consciousness-language/self-talk is trivial and epiphenomenal. It means very little and predicts less.
  • It is post-hoc pretty much anything interesting in brains processes > behavior
  • All other animals and living things get along just fine w/out it.

With the exception of his third point, which is worth a debate elsewhere, he (or she, but for the sake of writing I will stick with one pronoun) captures what many psychologists believe – that our vocalized beliefs are nothing more than post-hoc justifications of gut reactions. Jonathan Haidt, for example, uses the metaphor of a rider atop an elephant, where the rider ignorantly holds himself to be in control of his uncontrollable beast. There is more than a grain of truth to Haidt’s model, and plenty of empirical data backs it up. My favorite is a study in which several women were asked to choose their favorite pair of nylon stockings from a group of twelve. After they made their selections, researchers asked them to explain their choices. Among the explanations, texture, feel, and color were the most popular. However, the stockings were in fact all identical. The women were being sincere – they truly believed that what they were saying made sense – but they had simply made up reasons for their choices, believing that they consciously knew their preferences.

There is a problem with this wholesale rejection of reason: it is difficult to explain why humanity has made so much moral progress if our conscious deliberations exert no control at all. For example, how is it, a critic of Haidt’s model might ask, that institutions like slavery, which for most of human history were intuitively acceptable, are now intuitively unacceptable? In other words, if we really are solely controlled by the elephant, why aren’t we stuck in a Hobbesian state of nature where life is nasty, brutish and short?

One answer is that through reason we were able to look at the world objectively and realize that slavery – and many other injustices and immoralities – made society worse. As Paul Bloom explains in a recent Nature piece: “Emotional responses alone cannot explain one of the most interesting aspects of human nature: that morals evolve. The extent of the average person’s sympathies has grown substantially and continues to do so. Contemporary readers of Nature, for example, have different beliefs about the rights of women, racial minorities and homosexuals compared with readers in the late 1800s, and different intuitions about the morality of practices such as slavery, child labour and the abuse of animals for public entertainment. Rational deliberation and debate have played a large part in this development.” Bloom’s point is thoroughly expanded in Pinker’s latest book, The Better Angels of Our Nature, where Pinker argues that reason led people to commit fewer acts of violence. In his words: “At various times in history superstitious killings, such as human sacrifice, witch hunts, blood libels, inquisitions, and ethnic scapegoating, fell away as the factual assumptions on which they rested crumbled under the scrutiny of a more intellectually sophisticated populace. Carefully reasoned briefs against slavery, despotism, torture, religious persecution, cruelty to animals, harshness to children, violence to women, frivolous wars, and the persecution of homosexuals were not just hot air but entered into the decisions of the people and institutions who attended to the arguments and implemented reforms.” In regard to my commenter’s first point – that conscious self-talk is trivial and epiphenomenal – there should be little question that reason played, and continues to play, an important role in shaping society for the better, and that it is therefore neither trivial nor epiphenomenal.

His second point – that reason is all post-hoc justification – is also problematic. Although conscious, deliberate thought depends on unconscious cognition, it does not follow that all reasons are post-hoc justifications. Solving a math problem requires unconscious neural processing, but nobody would say that 1+1=2 is a post-hoc justification. The same is true of scientific truths; are Newton’s laws likewise post-hoc justifications? No. There are truths to be known about the world, and they can be discovered with reason. As Sam Harris explains, “the fact that we are unaware of most of what goes on in our brains does not render the distinction between having good reasons for what one believes and having bad ones any less clear or consequential.” Reason, in other words, separates correct beliefs from incorrect ones, truths from falsehoods. It depends on unconscious processes, as neuroscience shows, but it does not follow that everything our rationality discovers is a post-hoc justification.

So let’s not forget that reason – one of our species’ most important assets – is a vital cognitive capacity that shouldn’t be left by the wayside. Psychologists have done insightful work demonstrating the role of the cognitive unconscious, but that is no reason to disregard the power of human rationality.

Our Modular Selves: Science and the Philosophy of Self

One of the most enduring themes in Western thought is the idea of The Self. Who am “I,” and what does it mean “to be”? Philosophers have asked these questions for centuries. Thought-provoking questions indeed, but most discussions of The Self make the mistake of assuming that it is a single thing. In reality, human beings are influenced by many mental modules that are constantly in conflict. As the prominent evolutionary psychologist Robert Kurzban explains, “the very constitution of the human mind makes us massively inconsistent.” We think there is an “I” behind all of our cognition – a ghost in the machine – but this is largely a delusion.

Take our moral intuitions. Sometimes we are morally sound. In one experiment, researchers found that deliberately dropped envelopes were stamped and mailed by complete strangers one fifth of the time. Psychologist Jenifer Kunz found that when people receive a Christmas card from a family they do not know, they usually send one back. And in the famous Ultimatum experiment, in which people are given $20 and the choice to take all of it, split it $18/$2, or split it $10/$10, most split it evenly. Moreover, moral psychologists are demonstrating that babies as young as five to six months old have a “moral sense” toward people and objects outside their kin.

On the other hand, consider Douglas Kenrick’s infamous study, which asked participants how often they thought about killing other people. Partnering with Virgil Sheets, Kenrick polled 760 Arizona State University students and found that “the majority of those smiling, well-adjusted, all-American students were willing to admit to having had homicidal fantasies. In fact 76 percent of the men reported such fantasies… [and] 62 percent of the so-called gentler sex had also contemplated murder at least once.” Furthermore, as Kenrick explains, “when David Buss and Josh Duntley later surveyed a sample of students at the University of Texas, they found similarly high percentages of men (79 percent) and women (58 percent) admitting to homicidal fantasies.”

These findings aren’t surprising. Devil-angel, heaven-hell, and good cop-bad cop dichotomies have illustrated our inner conflicts for centuries. But isn’t it strange that these divisions occur inside something we consider singular and unified? Of course, we are far from singular or unified. As Jonathan Haidt says, “we assume that there is one person in each body, but in some ways we are each more like a committee whose members have been thrown together working at cross purposes.” This should also seem obvious. Just think about weighing the short term against the long term: Should I eat pizza now or go for a run? Should I keep drinking or go home to avoid the hangover? Should I continue working this job even though I don’t like it? Should I stay in this relationship even though it isn’t what it once was? To borrow an example from Kurzban, think of a few verbs to complete the sentence, I really like to ______ but afterwards I wish I hadn’t, and compare them to verbs that could complete the sentence, I don’t like to _______ but afterwards I’m glad I did. The first set of verbs sharply contrasts with the second, the former illustrating the “impatient modules” and the latter the “patient modules.”

Why the inconsistencies? Put simply, we speak to ourselves, not ourself, and these inner dialogues depend on context and current states. For example, would you pay 30 dollars for a slice of pizza? Probably not. But what if you were starving on the African savannah and happened to have 30 dollars? Or, to put it in more relatable terms, what if you were on one of those long-haul flights across the Pacific when, after you hadn’t eaten for hours, the person next to you pulled out a delicious slice of warm pizza and started eating it? Suddenly, 30 dollars doesn’t seem so bad. Context and circumstance matter, duh. But what’s important is that the more specific our understanding of ourselves gets, the less general we can be in our explanations.

For example, let’s say you asked me whether I like coffee or beer. It depends. If it’s 9am I would say coffee, and if it’s 9pm I would say beer. However, if I were cramming for a final exam, a 9pm coffee would sound very appealing. But let’s say I just got a bout of food poisoning and anything I put in my stomach comes right back up. Now both sound horrible. OK, let’s say it’s Friday at 9pm, I don’t have a test to study for, I don’t have food poisoning, but I am eating dinner with my girlfriend’s family and they look down on alcohol. I would avoid beer like the plague. So I like beer as long as it’s 9pm, I don’t have a final exam to study for, I don’t have food poisoning, and I’m not eating dinner with my girlfriend’s anti-alcohol family. I admit I’m exaggerating, but there is an important point in drawing out these hypotheticals: “the more we specify the context… the less we can generalize… [but] the less we specify… the more likely we are to miss something about how [we] decide.” This is one dilemma of economic theory in a nutshell: we say that humans are rational in order to understand how we decide as consumers, even though we know this isn’t entirely true. However, we don’t sacrifice the entire theory just because it doesn’t apply universally.

Back to The Self.

The Scottish philosopher David Hume got it right long before the science, anticipating Kurzban’s and Haidt’s points exactly. Hume held that The Self is more like a commonwealth, which maintains an identity not through some sort of essence or soul, as Plato would have said, but through different yet related elements. As Hume explained, “we are never intimately conscious of anything but a particular perception; man is a bundle or collection of different perceptions which succeed one another with an inconceivable rapidity and are in perpetual flux and movement.” This is not to deny the importance of the self. As Daniel Dennett argues, the idea of the self, though delusional, acts as a convenient fiction. We tell ourselves stories to make sense of the world and our place in it; much of the time this is a good thing, even if it is scientifically wishy-washy. But let’s at least keep in mind that The Self isn’t actually a thing the next time we ponder our identities from the armchair.


Is There Anything Wrong With Incest? Emotion, Reason and Altruism in Moral Psychology

Meet Julie and Mark, two siblings who are vacationing together in France. One night after dinner and a few bottles of wine, they decide to have sex. Julie is on the pill and Mark uses a condom so there is virtually no chance that Julie will become pregnant. They enjoy it very much but decide to never tell anyone or do it again. In the end, having sex brought them together and they are closer than ever.

Did Julie and Mark do anything wrong?

If incest isn’t your thing, your gut reaction is probably yes – what Julie and Mark did was wrong. But the point of Julie and Mark’s story, which was created by University of Virginia professor of social psychology Jonathan Haidt, is to illustrate how easy it is to feel that something is wrong and how difficult it is to justify why it is wrong. This is what happens when Haidt tells the Julie and Mark story to his undergrads. Some say that incest causes birth defects, or that Julie and Mark will cause pain and awkwardness to friends and family, but birth control and secrecy ensured that none of these problems would occur. Students who press the issue eventually run out of reasons and fall back on the notion of it “just being wrong.” Haidt’s point is that “the emotional brain generates the verdict. It determines what is right and what is wrong… The rational brain, on the other hand, explains the verdict. It provides reasons, but those reasons all come after the fact.”

So the question is: when it comes to our moral sentiments and deliberations, which system is in charge, the rational one or the emotional one?

The reason-emotion debate runs throughout the field of moral psychology. On one hand, cognitive science clearly shows that emotion is essential to our rationality; on the other, psychologists argue over whether reason really is the “slave of the passions,” as David Hume suggested. Haidt tends to take the latter position (this is what the incest story illustrates), but psychologists such as Paul Bloom and Steven Pinker believe that reason can persuade our emotions; this, they argue, is why we have moral progress.

Neuroscience is weighing in too. It demonstrates that we use different parts of the brain when we think deliberately versus when we go with our guts. As one author explains, “subjects who choose [rationally] rely on the regions of the brain known as the dorsolateral prefrontal cortex and the posterior parietal cortex, which are known to be important for deliberative reasoning. On the other hand, people who decide [with their guts] rely more on regions of the limbic cortex, which are more closely tied to emotion.”

So which system sets the agenda, the intuitive one or the rational one? Should I go with my gut, as Gladwell advertises? Or would that lead me into predictably irrational mistakes, as Ariely warns? Should I listen to my unconscious, as Gerd Gigerenzer and Timothy Wilson suggest? Or, as the Invisible Gorilla folks advise, should I take note of how intuitions deceive us? And finally, will we ever know whether anything is objectively wrong with incest?

Moral psychology is young, as are the relevant neuroscience and evolutionary psychology studies, so I am hesitant to draw any conclusions here. So what about more general moral feelings? Are they nature, nurture, or somewhere in between? Thanks to several recent studies we now have some answers.

One experiment, which I briefly mentioned a couple of months ago, comes from Paul Bloom, Kiley Hamlin and Karen Wynn. Bloom summarizes in the following article:

In one of our first studies of moral evaluation, we decided… to use… a three-dimensional display in which real geometrical objects, manipulated like puppets, acted out the helping/hindering situations: a yellow square would help the circle up the hill; a red triangle would push it down. After showing the babies the scene, the experimenter placed the helper and the hinderer on a tray and brought them to the child. In this instance, we opted to record… which character they reached for, on the theory that what a baby reaches for is a reliable indicator of what a baby wants. In the end, we found that 6- and 10-month-old infants overwhelmingly preferred the helpful individual to the hindering individual.

Does this mean that we are born with a moral code? No, but it does suggest that we have a sense of compassion and favor those who are altruistic from very early on.

Another experiment comes from Marco Schmidt and Jessica Sommerville. Schmidt and Sommerville showed 15-month-old babies two videos: one in which an experimenter distributes an equal share of crackers to two recipients, and another in which the experimenter distributes an unequal share of crackers (she also did the same procedure with milk). Then they measured how the babies looked at the crackers and milk as they were distributed. According to the “violation of expectancy” paradigm, babies pay more attention to something when it surprises them. This is exactly what they found: babies spent more time looking when one recipient got more food than the other.

What does this suggest? According to the researchers, “the infants expected an equal and fair distribution of food, and they were surprised to see one person given more crackers or milk than the other.” This doesn’t mean that the babies felt something was morally wrong, but it does mean that they noticed something wasn’t equal or fair.

Schmidt and Sommerville followed up the experiment with another. In the second, they offered the babies two toys, a LEGO block and a LEGO doll, and labeled whichever toy each baby chose as its preferred toy. Then an experimenter asked the baby if he could have the preferred toy. They found that about one-third of the babies gave away their preferred toy, another third gave away the toy that wasn’t preferred, and the last third didn’t share at all. They also found that 92 percent of the babies who shared their preferred toy spent considerably more time looking when the food was unequally distributed, while 86 percent of the babies who shared their less-preferred toy were more surprised when there was an equal distribution of food. In other words, the altruistic sharers (those who gave the preferred toys away) noticed more when the crackers and milk weren’t distributed equally, while the selfish sharers (those who gave the less-preferred toys away) showed the opposite pattern.

Taken together, Bloom’s work and Schmidt and Sommerville’s supports the idea that our moral instincts form early on. But these two studies are just a tiny sample. It is still difficult to say with certainty whether we are born with a moral instinct, and just as difficult to say what such an instinct would entail.

Back to incest.

To be sure, evolutionary psychology easily explains why we morally reject incest – obviously, reproducing with our siblings would be counterproductive – but many other questions, such as why we act altruistically, why we show compassion toward strangers, and why we give to charity, remain fairly mysterious. Fortunately, moral psychology is making great progress. It is an exciting new field and I look forward to more findings like the ones outlined here. In addition, I hope that one day in the near future psychologists will come to a consensus regarding the emotion-reason debate.

