MORAL NARRATIVES
“More depends upon what things are called, than on what they are”
Nietzsche
Moral narratives are ubiquitous. Whether regarding public figures or those in our day-to-day lives, we frequently encounter stories recounting the (im)moral actions of others. Describing the actions of others (or oneself) necessitates that a speaker make linguistic choices, as multiple terms can often be used to describe the same act. To what extent do these choices guide people’s perceptions of an act? In a recent study, my coauthors and I examined this question, finding that people’s evaluations of actions are made more favorable by replacing a disagreeable term (e.g., torture) with a semantically related agreeable term (e.g., enhanced interrogation) in an act’s description. Furthermore, we show that the fewer details people have about the actions they are evaluating, the more susceptible they are to a speaker’s strategic choice of terms. Nevertheless, even when provided with a detailed description of each action, we find some evidence that actions described with an agreeable (as opposed to disagreeable) term are judged to be more acceptable.
Additionally, we demonstrate that people view both the agreeable and the disagreeable act descriptions presented as largely truthful and distinct from lies, and view speakers using such descriptions as more trustworthy and moral than liars. Thus, despite their influence, the strategic and self-serving use of more or less agreeable terms may carry limited reputational risk. Overall, our data suggest that a strategic speaker can, through the careful use of language, sway the opinions of others in a preferred direction while avoiding many of the reputational costs associated with less subtle forms of linguistic manipulation (e.g., lying).
Future Directions
Even when describing the same event, narratives may differ considerably across political divides. The extent to which exposure to ideologically biased narratives increases polarization and furthers these divides is an important question with real-world implications. People are often motivated to seek out news sources that reinforce their existing points of view. In doing so, they may also be selectively exposing themselves to self-serving linguistic framings of popular events that make their beliefs appear more justified than they would under a neutral framing. In future work I hope to better understand how ideologically biased narratives spread (e.g., on social media), polarize, and shape moral impressions and behavior. Ultimately, through this work I hope to reveal the features of moral narratives that allow contentious moral issues to be communicated in a way that facilitates trust and understanding, rather than division and hostility, among commonly opposed groups.
HOW DO WE JUDGE THE MORALITY OF OTHERS?
“The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown”
H. P. Lovecraft
Evaluating the character of others is essential to our social well-being: we must be able to distinguish individuals who can be trusted as reliable cooperation partners from those most likely to cause us harm. How do people judge the moral character of others? Recent theories of moral inference suggest that such inferences follow a relational logic. From this perspective, factors that reveal an individual's suitability as a cooperation partner should also guide judgments of their moral character.
To confidently enter into a stable relationship with an agent, one must be able to predict, with some reliability, how they will behave. As such, people who are perceived as unpredictable may also be viewed as poor cooperation partners. In recent work, my coauthors and I examined whether people show a moral preference for more predictable individuals, finding that those signaling unpredictability with their actions–either by acting immorally without an intelligible motive or by performing an immoral act in an unusual manner–are viewed as possessing an especially poor moral character. Notably, this moral preference was observed even when more predictable actors were described as performing an additional immoral act (e.g., robbing a bank; see figure below).
In related work, we had people judge fictional agents acting within sacrificial moral dilemmas. Here, people once again showed a moral preference for more predictable individuals. That is, regardless of the consequences of an agent's actions, and regardless of their violation of proscriptions against killing, people consistently showed a moral preference for the agent who they judged to be most predictable. For instance, utilitarian agents opting to sacrifice an individual for the greater good were perceived as less predictable and less moral (compared to deontological agents) in high-conflict moral dilemmas and more predictable and more moral in low-conflict moral dilemmas (see figure below). Importantly, the observed moral preference for more predictable agents was not explained by a misunderstanding of utilitarian motivations, appeals to homophily (i.e., preferring others who are like oneself), perceived action typicality, or a simple preference for inaction.