Book Summary: The Righteous Mind by Jonathan Haidt

Best quote: " . . . if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system." (p. 105).
Key Takeaways
- People adopt political, moral and religious beliefs that cohere with their preferences and desires.
- Preferences and desires are personality-dependent. Therefore, political and religious divisions can be explained, in part, by personality differences.
- Respectful dialogue is a precondition for rational reflection upon political and religious issues.
Summary
Jonathan Haidt is a social and moral psychologist at New York University Stern School of Business. In his other popular books, he’s written on positive psychology and happiness (The Happiness Hypothesis, 2006) as well as the role of social media and educational institutions in declining youth mental health (The Coddling of the American Mind, 2018). But The Righteous Mind (2012) is probably his most beloved work.
The Righteous Mind is an attempt to explain the role of values in political and religious debate. Further, Haidt argues that values have their origin in preferences, desires, and, as David Hume would say, passions. Finally, passions are personality-dependent, and so a psychology of politics and morality is made possible by the same tools that personality psychologists use to derive things like the Five-Factor Model of personality.
If values are personality-dependent and beliefs are value-laden, then personality differences could explain why different political and religious groups have such a hard time understanding one another. The majority of the book focuses on these themes, so they will be my focus for this review.
1. People adopt political, moral and religious beliefs that cohere with their preferences and desires.
Haidt starts by exploring past schools of moral philosophy and psychology. Generally, he claims, there are three schools of thought that attempt to explain the origin of morality: rationalism, empiricism, and nativism.
Rationalists about morality argue that moral laws are derived from self-evident ethical principles – e.g., right actions are those that minimize harm. Rationalists (e.g., Immanuel Kant, Jean Piaget, and Elliot Turiel) may disagree over what ethical principles are innate and/or self-evident. But they all agree that moral rules are derived from rational reflection upon innate guiding principles.
Empiricists about morality argue that moral beliefs are grounded in experience – morals are impressed upon us in childhood. Morals are simply generalizations of solutions to social problems.
Nativists about morality argue that we have built-in moral preferences or passions. These preferences are strictly non-rational – in the tradition of David Hume, preferences are not deemed rational or irrational. Rather, the means by which we act to satisfy our preferences can be rational or irrational. Different cultures have similar moral systems because we have similar innate preference structures.
Haidt is a nativist about morality. His social intuitionist model says that people have moral “intuitions” (innate preferences that are relevant to moral issues) which can only be scrutinized and adjusted under appropriate social conditions (we’ll get to those in takeaway #3). He argues in favour of his particular social intuitionist model later in the book. But his argument for nativism generally stems from his experimental research in “moral dumbfounding”.
In moral dumbfounding tasks, Haidt challenges participants’ moral beliefs with strange moral problems. For example, the inconsequential incest story: an interviewer tells a participant about a brother and sister who, whilst on vacation, mutually agree to engage in protected sex. They keep their incest a secret. That way, so the story goes, they can have the experience of incest without the consequences of pregnancy and/or social rejection. The interviewer then asks participants two questions: (1) did the siblings do something morally wrong? and (2) if the answer to (1) is “yes”, then why was it wrong?
The answer to (1) is almost always “yes”. The answers to (2) may surprise you. Participants attempt to come up with explanations for how the siblings acted wrongly, mostly by saying that they risked pregnancy, and pregnancy induced by incest is bound to be harmful to the child. But the interviewer pushes back, saying that the siblings had protected sex with birth control pills and a condom, so the potential for harm was eliminated. Even when participants agree that harm (or some similar rule) is the foundational moral principle, and that the siblings did no harm, they still say inconsequential incest is wrong. Their reason? It’s disgusting. Rationalist and empiricist theories predict that people would change their minds upon realizing that their principles or rules have been violated, but instead people stubbornly maintain their position with appeal to preference over principle. So, rationalism and empiricism about morality are rejected in favour of nativism.
It is from the results of moral dumbfounding tasks that Haidt derives his first principle of moral psychology: intuitions come first, strategic reasoning second. Intuitions (innate preferences, like disgust) dictate our moral beliefs. When we are challenged on our beliefs, we search for reasons to support them. But when push comes to shove, moral beliefs rarely change with reason and instead remain grounded in intuition.
2. Preferences and desires are personality-dependent. Therefore, political and religious divisions can be explained, in part, by personality differences.
Now that Haidt has shown that values rely on intuitions, he goes on to investigate the grounding of intuitions in personality. His investigation proceeds much like any empirical study in personality psychology: Haidt and colleagues developed the Moral Foundations Questionnaire (MFQ), which (in its current version) is a 30-item questionnaire that asks participants for their takes on all kinds of moral and political issues. People read moral/political statements and rate them along a 5-point Likert scale from “Strongly Disagree” to “Strongly Agree”. Using factor analysis, Haidt et al. identify sets of questionnaire items whose answers correlate with one another, forming recognizable patterns or “factors”. The factors are then given personality-style names and used to predict political identification.
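To make the method concrete, here is a minimal sketch in Python of this kind of factor analysis. It is not Haidt’s actual pipeline, and the responses are random placeholders rather than real MFQ data; only the 30-item format and the six-factor structure mirror the description above.

```python
# A minimal sketch of the kind of analysis described above, not Haidt's actual
# pipeline: the 500 "participants" and their 1-5 ratings are random
# placeholders; only the 30-item, six-factor structure mirrors the MFQ.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical responses: 500 participants x 30 items, each rated 1-5
# ("Strongly Disagree" to "Strongly Agree").
responses = rng.integers(1, 6, size=(500, 30)).astype(float)

# Extract six latent factors, mirroring the six moral foundations.
fa = FactorAnalysis(n_components=6, random_state=0)
scores = fa.fit_transform(responses)   # per-participant factor scores
loadings = fa.components_              # (6 factors x 30 items)

# Items that load strongly on the same factor are grouped, interpreted, and
# named (e.g. "Care/harm"); each participant's factor scores can then be used
# to predict political identification.
print(scores.shape, loadings.shape)
```

In practice, choosing how many factors to retain and interpreting the loadings is where the substantive psychology happens; the code only automates the search for correlated answer patterns.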
Haidt et al. derived six moral foundations (intuitions) from the MFQ:
- Care/harm: Responsiveness to suffering, pain, etc. Highly caring people are more sympathetic to others’ suffering. Stems from the evolutionary need to care for loved ones.
- Fairness/cheating: Responsiveness to acts and violations of reciprocal altruism. People high in fairness tend to demand consequences for people that get more than their deserved share. Stems from the evolutionary need to occasionally help others in exchange for help from them.
- Loyalty/betrayal: Tendencies to act favourably to an in-group. People high in loyalty tend to demand consequences for people that act unfavourably toward the in-group. Stems from the evolutionary need to bind groups for tribal activities.
- Authority/subversion: Tendencies to form and respect social hierarchies. People high in authority tend to favour hierarchy, whether it be an existing hierarchy or one they seek to create. Stems from the evolutionary need to make groups efficient by sorting for skill, competence, power, etc.
- Sanctity/degradation: Tendency to avoid contaminants, both real and symbolic. People high in sanctity tend to be more sensitive to feelings of disgust, and to associate with symbols of purity. Stems from evolutionary needs to avoid sickness, poisonous foods, etc.
- Liberty/oppression: Tendency to avoid feelings of constraint, particularly constraints on freedom of choice. People high in liberty/oppression tend to feel alienated by social dynamics that limit autonomy. Stems from the evolutionary need to keep tribal leaders, bullies, etc. in check.
OK, so those are the six moral foundations derived from the MFQ. Now, how can they be used to understand politics and religion?
Alongside the MFQ, Haidt et al. had participants rate their personal political orientation on a left-right scale. They found that conservatives have a six-factor morality: they aren’t particularly low in any of the foundations (though they’re slightly lower in care/harm). Liberals, however, have a three-factor morality, defined by being quite high in care/harm and liberty/oppression and low in fairness/cheating. Libertarians tend to be high almost exclusively in liberty/oppression.
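As a purely hypothetical illustration of the “predict political identification” step – synthetic data and invented effect sizes, not Haidt’s results – one could regress a left-right self-rating on the six foundation scores and inspect which foundations track conservatism:

```python
# Purely hypothetical: synthetic foundation scores and an invented left-right
# rating, used only to show the shape of the prediction step.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
foundations = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

# Fake z-scored foundation scores for 500 participants.
X = rng.normal(size=(500, 6))

# Assume, for illustration only, that loyalty/authority/sanctity push a 1-7
# left-right rating rightward and care pushes it leftward.
y = (4
     + 0.8 * (X[:, 2] + X[:, 3] + X[:, 4])   # loyalty, authority, sanctity
     - 0.8 * X[:, 0]                         # care
     + rng.normal(scale=0.5, size=500))

model = LinearRegression().fit(X, y)
for name, coef in zip(foundations, model.coef_):
    print(f"{name:>10}: {coef:+.2f}")  # recovered association with conservatism
```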
Intuitively, it makes sense how the moral foundations apply to politics: liberals are very caring and despise restrictions on free choice – that’s why they’re compassionate to minority groups who are most often oppressed and discriminated against. Libertarians basically care only about eliminating restrictions on their freedom, so they tend to be socially liberal and fiscally conservative – they don’t want anyone telling them how to live their lives or spend their money. Conservatives may be slightly less compassionate, but they’re loyal to in-groups, favour hierarchical social organizations, are highly disgust-sensitive and more likely to be religious, and don’t appreciate it when the state bosses them around. All the foundations thus apply intuitively to politics and religion, except for fairness/cheating.
Shouldn’t liberals be higher in fairness/cheating and conservatives lower? After all, aren’t liberals supposed to be against inequality? Well, this assumes a definition of “fairness” that roughly equates to “equality”, where equality is conceptualized primarily in terms of outcomes. Conservatives, however, do not conceptualize fairness as equality, but rather as proportionality – essentially, you get what you deserve. Instead, the liberal spirit of equality is captured in high liberty/oppression and care/harm: liberals hate inequality because it gives some people more power and resources than others. This sentiment fits well with liberty/oppression and care/harm, while leaving fairness/cheating to be defined in terms of proportionality, which better represents conservative notions of fairness.
So far, we’ve seen that people hold moral beliefs that aren’t too offensive to their intuitions, and that their intuitions cluster into identifiable personality traits. Personality can therefore be used to predict political and religious ideology. But if liberals, conservatives, libertarians, etc. are fundamentally different people, does that mean they can’t get along? If we all have different moralities and we stick to those moralities despite reason, is a rational approach to politics possible?
3. Respectful dialogue is a precondition for rational reflection upon political and religious issues.
It’s worth stating up front that I think this is the weakest part of the book. Not because I disagree with Haidt, but because he doesn’t draw nearly as much on hardcore scientific research to argue his case. But that’s not to say his case is weak in an absolute sense; rather, it’s weak relative to the rest of the book.
Psychology is based on the assumption that if we understand ourselves, we will live better lives. Haidt thinks we can use the new science of moral psychology not just to understand ourselves, but to know why we struggle to understand others. For example, Haidt et al. had liberals and conservatives fill out the MFQ from the perspective of their political opponents – i.e., if you’re a liberal, then fill out the MFQ as you think a conservative would; conservatives do the same for liberals. They found that conservatives actually understand liberals better than liberals understand conservatives. Haidt says this is because conservatives have a six-factor morality, so they’re not indifferent to liberal concerns of care/harm and liberty/oppression, whereas liberals are indifferent to hierarchy, proportional fairness, etc. However, conservatives are still slightly lower in care/harm. Liberals and conservatives could therefore optimize for mutual understanding by making a special effort to steel-man the other side along the moral foundations they themselves care least about.
It’s great to know how we’re misunderstanding one another, but that doesn’t mean we’ll be more likely to change our minds. This is especially true if morals are impenetrable to rational criticism. However, morals are not impenetrable to rational criticism – they’re just really resistant. We can change our minds about moral and political issues, if the norms of discourse are right.
Haidt draws on Muzafer Sherif’s famous “Robbers Cave study”: Sherif took a group of boys at a summer camp and separated them into two teams. Within each team, the boys engaged in team-building activities, fostering a sense of shared identity. Hierarchies of competence emerged, such that leaders were selected for summer-camp skills (like tent pitching). One group called themselves the “Rattlers”; the other group was the “Eagles”. Eventually, the Rattlers and the Eagles were introduced to one another and forced to compete in a variety of games. Now the group dynamics changed. Leaders were re-selected based on their expressed hatred for the other group, and the boys shifted their focus from regular camp activities to destroying the other group, even going so far as to vandalize enemy campsites. Later, however, the Rattlers and the Eagles were put together to cooperate in carrying out shared camp goals. Once again the group dynamics changed, this time bringing affection between the Rattlers and the Eagles. It seems that if we share a set of goals, we can learn to like one another. Hopefully, knowing how we are likely to misunderstand others will enhance this process of intergroup bonding.
Still, Haidt doesn’t really give us reason to believe that good group dynamics and shared goals will help us change our values. He lists many respects in which conservatives are right that liberals don’t appreciate and vice versa, as well as respects in which libertarians are right, and all of the things he lists are self-evidently aimed at achieving goals common to each political group. However, it is not necessarily true that agreeing with the other side on any particular issue will make us more likely to shift our values. Perhaps liberals, conservatives, and libertarians (and various religious groups, though Haidt doesn’t talk as much about them) are destined to always have fundamentally different worldviews, and therefore will always be prone to disagreement. We may be prone to fight with one another, but that doesn’t rule out progress. It just rules out peace. In other words, we’ll always have disagreements about values, but we can still cooperate in ways that further our own interests by fostering dialogue and group dynamics where our moral differences don’t drive disagreements into destruction.
As I said, I think this is the weakest argument in the book. It would’ve been the perfect place to include a study on cooperation and moral understanding using the MFQ. For instance, Haidt et al. could have had liberals and conservatives cooperate on some tasks, and then seen whether recent cooperation predicts better understanding of political opponents on MFQ ratings. Or maybe cooperation with others actually changes how people answer for themselves on the MFQ. Unfortunately, Haidt shows us neither that we can change our values through intergroup cooperation, nor that cooperation helps us understand others better on the MFQ. What can a study of adolescent boys possibly tell us about the value structures of adult liberals and conservatives? Not much. So while I don’t think Haidt’s case is totally unconvincing, it isn’t as strong as I anticipated given the strength of his earlier arguments.
Conclusion
The Righteous Mind is a truly remarkable book – one which permanently changed my views about politics. While I said that Haidt’s case for takeaway #3 isn’t as convincing as his other arguments, I have an anecdotal point: I feel that I have become more tolerant of others after having read this book. I also felt this after taking Haidt’s various moral and political personality tests, including the Moral Foundations Questionnaire, at YourMorals.org.
If you’re interested in reading The Righteous Mind or you’ve already read it, I’d encourage you to take some of the tests at YourMorals.org. If you haven’t yet read it, I think you’ll want to know more about the tests and their applications, and so you’ll be encouraged to read the full book. If you have read it, I think you’ll appreciate more the rigour with which Haidt et al. have built and applied their questionnaires. I experienced both feelings, as I took some of the questionnaires prior to reading The Righteous Mind and others afterwards.
Overall, I give The Righteous Mind a 9/10. Following Zach Highley and Derek Sivers, I only recommend you read books I rate as 8/10 or higher, so I definitely recommend you read this one! You’ll especially like it if you’re at all interested in political psychology, particularly as it relates to current affairs and worsening political polarization.
My present rubric for rating books is as follows:
Writing style: 10/10
Argumentative rigour: 7.5/10
Inclusion and understanding of competing positions: 9/10
Total: 9/10
Thanks for reading!
Nicholas Murray
