Daniel Effron
Key publications
A Synthesis of my Research, 2010–2022:
Effron, D. A., & Helgason, B. A. (2023). Moral inconsistency. Advances in Experimental Social Psychology, 67, 1-55. Abstract | Full text
Dishonesty and Misinformation
Helgason, B., & Effron, D. A. (2022). It might become true: How prefactual thinking licenses dishonesty. Journal of Personality and Social Psychology, 123, 909-940. Abstract | Full text
Effron, D. A., & Raj, M. (2020). Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share. Psychological Science, 31, 75-87. Abstract | Full text
Effron, D. A. (2018). It could have been true: How counterfactual thoughts reduce condemnation of falsehoods and increase political polarization. Personality and Social Psychology Bulletin, 44, 729-745. Abstract | Full text
Effron, D. A., Bryan, C. J., & Murnighan, J. K. (2015). Cheating at the end to avoid regret. Journal of Personality and Social Psychology, 109, 395-414. Abstract | Full text
Hypocrisy
O’Connor, K., Effron, D. A., & Lucas, B. J. (2020). Moral cleansing as hypocrisy: When private acts of charity make you feel better than you deserve. Journal of Personality and Social Psychology, 119, 540-559. Abstract | Full text
Effron, D. A., O'Connor, K., Leroy, H., & Lucas, B. J. (2018). From inconsistency to hypocrisy: When does "saying one thing but doing another" invite condemnation? Research in Organizational Behavior, 38, 61-75. Abstract | Full text
Effron, D. A., Lucas, B. J., & O’Connor, K. (2015). Hypocrisy by association: When organizational membership increases condemnation for wrongdoing. Organizational Behavior and Human Decision Processes, 130, 147-159. Abstract | Full text
Moral Licensing
Effron, D. A. (2014). Making mountains of morality from molehills of virtue: Threat causes people to overestimate their moral credentials. Personality and Social Psychology Bulletin, 40, 972-985. Abstract | Full text | Blog entry
Effron, D. A., Miller, D. T., & Monin, B. (2012). Inventing racist roads not taken: The licensing effect of immoral counterfactual behaviors. Journal of Personality and Social Psychology, 103, 916-932. Abstract | Full text
Effron, D. A., Cameron, J. S., & Monin, B. (2009). Endorsing Obama licenses favoring Whites. Journal of Experimental Social Psychology, 45, 590-593. Abstract | Full text
All publications
Effron, D. A., Epstude, K., & Roese, N. J. (in press). Motivated counterfactual thinking and moral inconsistency: How we use our imaginations to selectively condemn and condone. Current Directions in Psychological Science. Full text
Langdon, J. A., Helgason, B. A., Qiu, J., & Effron, D. A. (in press). “It’s not literally true, but you get the gist”: How nuanced understandings of truth encourage people to condone and spread misinformation. Current Opinion in Psychology. Full text
Pillai, R. M., Fazio, L. K., & Effron, D. A. (2023). Repeatedly encountered descriptions of wrongdoing seem more true but less unethical: Evidence in a naturalistic setting. Psychological Science, 34(8), 863-874. Abstract and full text
Haire, S., Lépine, A., Effron, D. A., & Treibich, C. (2023). Can self-affirmation encourage HIV-prevention? Evidence from female sex workers in Senegal. AIDS and Behavior, 27, 3183-3196. Abstract and full text
Effron, D. A., & Helgason, B. A. (2023). Moral inconsistency. Advances in Experimental Social Psychology, 67, 1-55. Abstract | Full text
Epstude, K., Effron, D. A., & Roese, N. J. (2022). Polarized imagination: Partisanship influences the direction and consequences of counterfactual thinking. Philosophical Transactions of the Royal Society B: Biological Sciences, 377, 20210342. Abstract | Full text
Effron, D. A., & Helgason, B. A. (2022). The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world. Current Opinion in Psychology, 47, 101375. Abstract | Full text
Helgason, B. A., & Effron, D. A. (2022). From critical to hypocritical: Counterfactual thinking increases partisan disagreement about media hypocrisy. Journal of Experimental Social Psychology, 101, 104308. Abstract | Full text
Helgason, B. A., & Effron, D. A. (2022). It might become true: How prefactual thinking licenses dishonesty. Journal of Personality and Social Psychology, 123(5), 909-940. Abstract | Full text
Effron, D. A. (2022). The moral repetition effect: Bad deeds seem less unethical when repeatedly encountered. Journal of Experimental Psychology: General, 151(10), 2562-2585. Abstract | Full text
Effron, D. A., Kakkar, H., & Cable, D. M. (2022). Consequences of perceiving organization members as a unified entity: Stronger attraction, but greater blame for member transgressions. Journal of Applied Psychology, 107(11), 1951-1972. Abstract | Full text
Giurge, L. M., Lin, E. H.-L., & Effron, D. A. (2021). Moral credentials and the 2020 Democratic presidential primary: No evidence that endorsing female candidates licenses people to favor men. Journal of Experimental Social Psychology, 95. Abstract | Full text
Effron, D. A., & Raj, M. (2021). Disclosing interpersonal conflicts of interest: Revealing whom we like, but not whom we dislike. Organizational Behavior and Human Decision Processes, 164, 68-85. Abstract | Full text
O’Connor, K., Effron, D. A., & Lucas, B. J. (2020). Moral cleansing as hypocrisy: When private acts of charity make you feel better than you deserve. Journal of Personality and Social Psychology, 119, 540-559. Abstract | Full text
Effron, D. A., & Raj, M. (2020). Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share. Psychological Science, 31, 75-87. Abstract | Full text
Georgeac, O., Rattan, A., & Effron, D. A. (2019). An exploratory investigation of Americans’ expression of gender bias before and after the 2016 presidential election. Social Psychological and Personality Science, 10, 632-642. Abstract | Full text
Effron, D. A., O'Connor, K., Leroy, H., & Lucas, B. J. (2018). From inconsistency to hypocrisy: When does "saying one thing but doing another" invite condemnation? Research in Organizational Behavior, 38, 61-75. Abstract | Full text
Effron, D. A., Kakkar, H., & Knowles, E. D. (2018). Group cohesion benefits individuals who express prejudice, but harms their group. Journal of Experimental Social Psychology, 79, 239-251. Abstract | Full text
Effron, D. A., Jackman, L., Markus, H., Uchida, Y., & Muluk, H. (2018). Hypocrisy and culture: Failing to practice what you preach receives harsher interpersonal reactions in independent (vs. interdependent) cultures. Journal of Experimental Social Psychology, 76, 371-384. Abstract | Full text
Polman, E., Effron, D. A., & Thomas, M. (2018). Other people's money: Money's perceived purchasing power is smaller for others than for the self. Journal of Consumer Research, 45, 109-125. Abstract
Effron, D. A. (2018). It could have been true: How counterfactual thoughts reduce condemnation of falsehoods and increase political polarization. Personality and Social Psychology Bulletin, 44, 729-745. Abstract | Full text
Effron, D. A. (2016). Beyond “being good frees us to be bad:” Moral self-licensing and the fabrication of moral credentials. In P. A. M. Van Lange & J. W. Van Prooijen, (Eds.), Cheating, corruption, and concealment: Roots of unethical behavior. Cambridge, UK: Cambridge University Press. Full text
Effron, D. A., & Miller, D. T. (2015). Do as I say, not as I've done: Suffering for a misdeed reduces the hypocrisy of advising against it. Organizational Behavior and Human Decision Processes, 131, 16-32. Abstract | Full text
Effron, D. A., Lucas, B. J., & O’Connor, K. (2015). Hypocrisy by association: When organizational membership increases condemnation for wrongdoing. Organizational Behavior and Human Decision Processes, 130, 147-159. Abstract | Full text
Effron, D. A., Bryan, C. J., & Murnighan, J. K. (2015). Cheating at the end to avoid regret. Journal of Personality and Social Psychology, 109, 395-414. Abstract | Full text
Effron, D. A., & Conway, P. (2015). When virtue leads to villainy: Advances in research on moral self-licensing. Current Opinion in Psychology, 6, 32-35. Abstract | Full text
Shu, L. L., & Effron, D. A. (2015). Ethical decision-making: Insights from contemporary behavioral research on the role of the self. In R. Scott & S. Kosslyn (Eds.), Emerging trends in the social and behavioral sciences (pp. 1-9). Wiley. Full text
Effron, D. A., & Knowles, E. D. (2015). Entitativity and intergroup bias: How belonging to a cohesive group allows people to express their prejudices. Journal of Personality and Social Psychology, 108, 234-253. Abstract | Full text
Effron, D. A. (2014). Making mountains of morality from molehills of virtue: Threat causes people to overestimate their moral credentials. Personality and Social Psychology Bulletin, 40, 972-985. Abstract | Full text | Blog entry
Effron, D. A., Monin, B., & Miller, D. T. (2013). The unhealthy road not taken: Licensing indulgence by exaggerating counterfactual sins. Journal of Experimental Social Psychology, 49, 573-578. Abstract | Full text
Effron, D. A., Miller, D. T., & Monin, B. (2012). Inventing racist roads not taken: The licensing effect of immoral counterfactual behaviors. Journal of Personality and Social Psychology, 103, 916-932. Abstract | Full text
Effron, D. A., & Miller, D. T. (2012). How the moralization of issues grants social legitimacy to act on one’s attitudes. Personality and Social Psychology Bulletin, 38, 690-701. Abstract | Full text
Merritt, A. C., Effron, D. A., Fein, S., Savitsky, K. K., Tuller, D. M., & Monin, B. (2012). The strategic pursuit of moral credentials. Journal of Experimental Social Psychology, 48, 774-777. Abstract | Full text
Effron, D. A. (2012). Hero or hypocrite? A psychological perspective on the risks and benefits of positive character evidence. The Jury Expert, 24(4). Full text
Cehajic-Clancy, S., Effron, D. A., Halperin, E., Liberman, V., & Ross, L. D. (2011). Affirmation, acknowledgment of ingroup responsibility, group-based guilt, and support for reparative measures. Journal of Personality and Social Psychology, 101, 256-270. Abstract | Full text
Effron, D. A., & Miller, D. T. (2011). Diffusion of entitlement: An inhibitory effect of scarcity on consumption. Journal of Experimental Social Psychology, 47, 378-383. Abstract | Full text
Effron, D. A., & Miller, D. T. (2011). Reducing exposure to trust-related risks to avoid self-blame. Personality and Social Psychology Bulletin, 37, 181-192. Abstract | Full text
Effron, D. A., & Monin, B. (2010). Letting people off the hook: When do good deeds excuse transgressions? Personality and Social Psychology Bulletin, 36, 1618-1634. Abstract | Full text
Merritt, A. C., Effron, D. A., & Monin, B. (2010). Moral self-licensing: When being good frees us to be bad. Social and Personality Psychology Compass, 4, 344-357. Abstract | Full text
Miller, D. T., & Effron, D. A. (2010). Psychological license: When it is needed and how it functions. In M. P. Zanna and J. Olson (Eds.), Advances in experimental social psychology (Vol. 43, pp. 117-158). San Diego, CA: Academic Press/Elsevier. Abstract | Full text
Effron, D. A., Cameron, J. S., & Monin, B. (2009). Endorsing Obama licenses favoring Whites. Journal of Experimental Social Psychology, 45, 590-593. Abstract | Full text
Miller, D. T., Effron, D. A., & Zak, S. V. (2009). From moral outrage to social protest: The role of psychological standing. In D. R. Bobocel, A. C. Kay, M. P. Zanna & J. M. Olson (Eds.), The psychology of justice and legitimacy: The Ontario symposium (Vol. 11, pp. 103-123). New York: Psychology Press. Abstract | Full text
Niedenthal, P. M., Mondillon, L., Effron, D. A., & Barsalou, L. W. (2009). Representing social concepts modally and amodally. In F. Strack & J. Förster (Eds.), Social cognition: The basis of human interaction. Frontiers of social psychology. (pp. 23-47). New York: Psychology Press. Full text
Effron, D. A., Niedenthal, P. M., Gil, S., & Droit-Volet, S. (2006). Embodied temporal perception of emotion. Emotion, 6, 1-9. Abstract | Full text
ABSTRACTS
Moral Inconsistency
We review a program of research examining three questions. First, why is the morality of people’s behavior inconsistent across time and situations? We point to people’s ability to convince themselves they have a license to sin, and we demonstrate various ways people use their behavioral history and others – individuals, groups, and society – to feel licensed. Second, why are people’s moral judgments of others’ behavior inconsistent? We highlight three factors: motivation, imagination, and repetition. Third, when do people tolerate others who fail to practice what they preach? We argue that people only condemn others’ inconsistency as hypocrisy if they think the others are enjoying an “undeserved moral benefit.” Altogether, this program of research suggests that people are surprisingly willing to enact and excuse inconsistency in their moral lives. We discuss how to reconcile this observation with the foundational social psychological principle that people hate inconsistency.
Back to key publications | Back to full publication list
Polarized Imagination: Partisanship Influences the Direction and Consequences of Counterfactual Thinking
Four studies examine how political partisanship qualifies previously-documented regularities in people’s counterfactual thinking (N = 1,186 Democrats and Republicans). First, whereas prior work finds that people generally prefer to think about how things could have been better instead of worse (i.e., entertain counterfactuals in an upward vs. downward direction), Studies 1a–2 find that partisans are more likely to generate and endorse counterfactuals in whichever direction best aligns with their political views. Second, previous research finds that the closer someone comes to causing a negative event, the more blame that person receives; Study 3 finds that this effect is more pronounced among partisans who oppose (vs. support) a leader who “almost” caused a negative event. Thus, partisan reasoning may influence which alternatives to reality people will find most plausible, will be most likely to imagine spontaneously, and will view as sufficient grounds for blame.
The Moral Psychology of Misinformation: Why We Excuse Dishonesty in a Post-Truth World
Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie’s “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.
From Critical to Hypocritical: Counterfactual Thinking Increases Partisan Disagreement About Media Hypocrisy
Partisans on both sides of the political aisle complain that the mainstream media is hypocritical, but they disagree about whom that hypocrisy benefits. In the present research, we examine how counterfactual thinking contributes to this partisan disagreement about media hypocrisy. In three studies (two pre-registered, N = 1,342) of people’s reactions to media criticism of politicians, we find that people judged the media’s criticism of politicians they support as more hypocritical when they imagined whether the media would have criticized a politician from a different party for the same behavior if given the chance. Because this effect only emerged when people judged the media’s criticism of politicians they supported, and not politicians they opposed, counterfactual thinking increased partisan division in perceptions of media hypocrisy. We discuss implications for how counterfactual thinking facilitates motivated moral reasoning, contributes to bias in social judgment, and amplifies political polarization.
It Might Become True: How Prefactual Thinking Licenses Dishonesty
In our “post-truth” era, misinformation spreads not only because people believe falsehoods, but also because people sometimes give dishonesty a moral pass. The present research examines how the moral judgments that people form about dishonesty depend not only on what they know to be true, but also on what they imagine might become true. In six studies (N = 3,607), people judged a falsehood as less unethical to tell in the present when we randomly assigned them to entertain prefactual thoughts about how it might become true in the future. This effect emerged with participants from 59 nations judging falsehoods about consumer products, professional skills, and controversial political issues – and the effect was particularly pronounced when participants were inclined to accept that the falsehood might become true. Moreover, thinking prefactually about how a falsehood might become true made people more inclined to share the falsehood on social media. We theorized that, even when people recognize a falsehood as factually incorrect, these prefactual thoughts reduce how unethical the falsehood seems by making the broader meaning that the statement communicates, its gist, seem truer. Mediational evidence was consistent with this theorizing. We argue that prefactual thinking offers people a degree of freedom they can use to excuse lies, and we discuss implications for theories of mental simulation and moral judgment.
Back to key publications | Back to full publication list
The Moral Repetition Effect: Bad Deeds Seem Less Unethical When Repeatedly Encountered
Reports of moral transgressions can “go viral” through gossip, continuous news coverage, and social media. When they do, the same person is likely to hear about the same transgression multiple times. The present research demonstrates that people will judge the same transgression less severely after repeatedly encountering an identical description of it. I present seven experiments (six of which were pre-registered; 73,265 observations from 3,301 online participants and urban residents holding 55 nationalities). Participants rated fake-news sharing, real and hypothetical business transgressions, violations of fundamental “moral foundations,” and various everyday wrongdoings as less unethical and less deserving of punishment if they had been shown descriptions of these behaviors previously. Results suggest that affect plays an important role in this moral repetition effect. Repeated exposure to a description of a transgression reduced the negative affect that the transgression elicited, and less-negative affect meant less-harsh moral judgments. Moreover, instructing participants to base their moral judgments on reason, rather than emotion, eliminated the moral repetition effect. An alternative explanation based on perceptions of social norms received only mixed support. The results extend understanding of when and how repetition influences judgment, and they reveal a new way in which moral judgments are biased by reliance on affect. The more people that hear about a transgression, the wider moral outrage will spread; but the more times an individual hears about it, the less outraged that person may be.
Consequences of Perceiving Organization Members as a Unified Entity: Stronger Attraction, but Greater Blame for Member Transgressions
Are Uber drivers just a collection of independent workers, or a meaningful part of Uber’s workforce? Do the owners of Holiday Inn franchises around the world seem more like a loosely knit group, or more like a cohesive whole? These questions examine perceptions of organization members’ entitativity, the extent to which individuals appear to comprise a single, unified entity. We propose that the public’s perception that an organization’s members are highly entitative can be a double-edged sword for the organization. On one hand, perceiving an organization’s members as highly entitative makes the public more attracted to the organization because people associate entitativity with competence. On the other hand, perceiving members as highly entitative leads the public to blame the organization and its leadership for an individual member’s wrongdoing, because the public infers that the organization and its leadership tacitly condoned the wrongdoing. Two experiments and a field survey, plus three supplemental studies, support these propositions. Moving beyond academic debates about whether theories should treat an organization as a unified entity, these results demonstrate the importance of understanding how much the public does perceive an organization as a unified entity. As the changing nature of work enables loosely-knit collections of individuals to hold membership in the same organization, entitativity perceptions may become increasingly consequential.
Moral Credentials and the 2020 Democratic Presidential Primary: No Evidence that Endorsing Female Candidates Licenses People to Favor Men
Endorsing Obama in 2008 licensed some Americans to favor Whites over Blacks––an example of moral self-licensing (Effron et al., 2009). Could endorsing a female presidential candidate in 2020-21 similarly license Americans to favor men at the expense of women? Two high-powered, pre-registered experiments found no evidence for this possibility. We manipulated whether Democrat participants had an opportunity to endorse a female Democratic candidate if she ran against a male candidate (i.e., Trump in Study 1, N = 2,143; an anti-Trump Republican or independent candidate in Study 2, N = 2,228). Then, participants read about a stereotypically masculine job and indicated whether they thought a man should fill it. Contrary to predictions, we found that endorsing a female Democrat did not increase participants’ tendency to favor men over women for the job. We discuss implications for the robustness and generalizability of moral self-licensing.
Disclosing Interpersonal Conflicts of Interest: Revealing Whom We Like, But Not Whom We Dislike
Imagine your boss asks you to evaluate the work performance of a coworker whom you happen to like or dislike for reasons unrelated to the performance. This situation poses an interpersonal conflict of interest because the fact that you like or dislike the coworker could undermine your professional obligation to offer an objective evaluation. We hypothesize that people are less likely to disclose conflicts of interest that involve disliking as opposed to liking, because they worry that disclosing dislike will make them look unsociable. Nine studies (four pre-registered) – examining hypothetical, actual, and lab-simulated workplace conflicts of interest – provide supportive evidence, and cast doubt on alternative explanations based on the motivations to maintain objectivity or to get away with bias. The findings demonstrate the importance of considering interpersonal dynamics in theorizing about advice-giving and conflicts of interest. We discuss implications for detecting conflicts of interest in organizations.
Moral Cleansing as Hypocrisy: When Private Acts of Charity Make You Feel Better than you Deserve
What counts as hypocrisy? Current theorizing emphasizes that people see hypocrisy when an individual sends them “false signals” about his or her morality (Jordan, Sommers, Bloom, & Rand, 2017); indeed, the canonical hypocrite acts more virtuously in public than in private. An alternative theory posits that people see hypocrisy when an individual enjoys “undeserved moral benefits,” such as feeling more virtuous than his or her behavior merits, even when the individual has not sent false signals to others (Effron, O’Connor, Leroy, & Lucas, 2018). This theory predicts that acting less virtuously in public than in private can seem hypocritical by indicating that individuals have used good deeds to feel less guilty about their public sins than they should. Seven experiments (N = 3,468 representing 64 nationalities) supported this prediction. Participants read about a worker in a “sin industry” who secretly performed good deeds. When the individual’s public work (e.g., selling tobacco) was inconsistent with, versus unrelated to, the good deeds (e.g., anonymous donations to an antismoking cause vs. an antiobesity cause), participants perceived him as more hypocritical, which in turn predicted less praise for his good deeds. Participants also inferred that the individual was using the inconsistent good deeds to cleanse his conscience for his public work, and such moral cleansing appeared hypocritical when it successfully alleviated his guilt. These results broaden and deepen understanding about how lay people conceptualize hypocrisy. Hypocrisy does not require appearing more virtuous than you are; it suffices to feel more virtuous than you deserve.
Back to key publications | Back to full publication list
Misinformation and Morality: Encountering Fake-News Headlines Makes Them Seem Less Unethical to Publish and Share
People may repeatedly encounter the same misinformation when it “goes viral.” Four experiments and a pilot study (two pre-registered; N = 2,587) suggest that repeatedly encountering misinformation makes it seem less unethical to spread––regardless of whether one believes it. Seeing a fake-news headline one or four times reduced how unethical participants thought it was to publish and share that headline when they saw it again – even when it was clearly labelled false and participants disbelieved it, and even after statistically accounting for judgments of how likeable and popular it was. In turn, perceiving it as less unethical predicted stronger inclinations to express approval of it online. People were also more likely to actually share repeated (vs. new) headlines in an experimental setting. We speculate that repeating blatant misinformation may reduce the moral condemnation it receives by making it feel intuitively true, and we discuss other potential mechanisms.
Back to key publications | Back to full publication list
An Exploratory Investigation of Americans’ Expression of Gender Bias Before and After the 2016 Presidential Election
Did the 2016 U.S. presidential election’s outcome affect Americans’ expression of gender bias? Drawing on theories linking leadership with intergroup attitudes, we proposed it would. A pre-registered exploratory survey of two independent samples of Americans pre- and post-election (Ns=1,098 and 1,192) showed no pre-post differences in modern sexism, concern with the gender pay gap, or perceptions of gender inequality and progress overall. However, supporters of Donald Trump (but not of Hillary Clinton) expressed greater modern sexism post- versus pre-election – which in turn predicted reporting lower disturbance with the gender pay gap, perceiving less discrimination against women but more against men, greater progress toward gender equality, and greater female representation at top levels in the U.S. Results were reliable when evaluated against four robustness standards, thereby offering suggestive evidence of how historic events may affect gender-bias expression. We discuss the theoretical implications for intergroup attitudes and their expression.
From Inconsistency to Hypocrisy: When Does "Saying One Thing But Doing Another" Invite Condemnation?
It is not always possible for leaders, teams, and organizations to practice what they preach. Misalignment between words and deeds can invite harsh interpersonal consequences, such as distrust and moral condemnation, which have negative knock-on effects throughout organizations. Yet the interpersonal consequences of such misalignment are not always severe, and are sometimes even positive. This paper presents a new model of when and why audiences respond negatively to those who “say one thing but do another.” We propose that audiences react negatively if they (a) perceive a high degree of misalignment (i.e., perceive low “behavioral integrity”), and (b) interpret such misalignment as a claim to an undeserved moral benefit (i.e., interpret it as hypocrisy). Our model integrates disparate research findings about factors that influence how audiences react to misalignment, and it clarifies conceptual confusion surrounding word-deed misalignment, behavioral integrity, and hypocrisy. We discuss how our model can inform unanswered questions, such as why people fail to practice what they preach despite the risk of negative consequences. Finally, we consider practical implications for leaders, proposing that anticipating and managing the consequences of misalignment will be more effective than trying to avoid it altogether.
Back to key publications | Back to full publication list
Group Cohesion Benefits Individuals Who Express Prejudice, But Harms Their Group
When someone expresses prejudice against an outgroup, how negatively do we judge the prejudiced individual and his or her ingroup? Previous lines of research suggest that the answer depends on the ingroup's entitativity—i.e., how cohesive it is—but they make different predictions about whether entitativity should increase or decrease outside observers' negative reactions to prejudice. We resolve this tension by demonstrating divergent consequences of entitativity for prejudiced individuals versus their groups. Mediational and experimental data from six studies (two pre-registered; N = 2455) support two hypotheses: Entitativity increases how responsible the group seems for its member's prejudice, which in turn decreases how unacceptable observers find the member's behavior and how much they condemn her (H1), but which also increases how much they condemn the group (H2). Thus, entitativity can grant individuals a license to express prejudice but can damage their group's reputation.
Hypocrisy and Culture: Failing to Practice What you Preach Receives Harsher Interpersonal Reactions in Independent (vs. Interdependent) Cultures
Failing to practice what you preach is often condemned as hypocrisy in the West. Three experiments and a field survey document less negative interpersonal reactions to misalignment between practicing and preaching in cultures encouraging individuals' interdependence (Asian and Latin American) than in those encouraging independence (North American and Western European). In Studies 1–3, target people received greater moral condemnation for a misdeed when it contradicted the values they preached than when it did not – but this effect was smaller among participants from Indonesia, India, and Japan than among participants from the USA. In Study 4, employees from 46 nations rated their managers. Overall, the more that employees perceived a manager's words and deeds as chronically misaligned, the less they trusted him or her – but the more employees' national culture emphasized interdependence, the weaker this effect became. We posit that these cultural differences in reactions to failures to practice what one preaches arise because people are more likely to view the preaching as other-oriented and generous (vs. selfish and hypocritical) in cultural contexts that encourage interdependence. Study 2 provided mediational evidence of this possibility. We discuss implications for managing intercultural conflict, and for theories about consistency, hypocrisy, and moral judgment.
Other People’s Money: Money’s Perceived Purchasing Power Is Smaller for Others Than for the Self
Nine studies find that people believe their money has greater purchasing power than the same quantity of others’ money. Using a variety of products from socks to clocks to chocolates, we found that participants thought the same amount of money could buy more when it belonged to themselves versus others—a pattern that extended to undesirable products. Participants also believed their money—in the form of donations, taxes, fines, and fees—would help charities and governments more than others’ money. We tested six mechanisms based on psychological distance, the endowment effect, wishful thinking, better-than-average biases, pain of payment, and beliefs about product preferences. Only a psychological distance mechanism received support. Specifically, we found that the perceived purchasing power of other people’s money decreased logarithmically as others’ psychological distance from the self increased, consistent with psychological distance’s subadditive property. Further supporting a psychological distance mechanism, we found that framing one’s own money as distant (vs. near) reduced the self-other difference in perceived purchasing power. Our results suggest that beliefs about the value of money depend on who owns it, and we discuss implications for marketing, management, psychology, and economics.
It Could Have Been True: How Counterfactual Thoughts Reduce Condemnation of Falsehoods and Increase Political Polarization
This research demonstrates how counterfactual thoughts can lead people to excuse others for telling falsehoods. When a falsehood aligned with participants’ political preferences, reflecting on how it could have been true led them to judge it as less unethical to tell, which in turn led them to judge a politician who told it as having a more moral character and deserving less punishment. When a falsehood did not align with political preferences, this effect was significantly smaller and less reliable, in part because people doubted the plausibility of the relevant counterfactual thoughts. These results emerged independently in three studies (two preregistered; total N = 2,783) and in meta- and Bayesian analyses, regardless of whether participants considered the same counterfactuals or generated their own. The results reveal how counterfactual thoughts can amplify partisan differences in judgments of alleged dishonesty. I discuss implications for theories of counterfactual thinking and motivated moral reasoning.
Back to key publications | Back to full publication list
Do as I Say, Not as I've Done: Suffering for a Misdeed Reduces the Hypocrisy of Advising Against it
Not everyone who has committed a misdeed and wants to warn others against committing it will feel entitled to do so. Six experiments, a replication, and a follow-up study examined how suffering for a misdeed grants people the legitimacy to advise against it. When advisors had suffered (vs. not suffered) for their misdeeds, observers thought advisors had more of a right to advise and perceived them as less hypocritical and self-righteous; advisees responded with less anger and derogation; and advisors themselves felt more comfortable offering strong advice. Advisors also strategically highlighted how they had suffered for their wrongdoing when they were motivated to establish their right to offer advice. Additional results illustrate how concerns about the legitimacy of advice-giving differ from concerns about persuasiveness. The findings shed light on what prevents good advice from being disseminated, and how to help people learn from others’ mistakes.
Hypocrisy by Association: When Organizational Membership Increases Condemnation for Wrongdoing
Hypocrisy occurs when people fail to practice what they preach. Four experiments document the hypocrisy-by-association effect, whereby failing to practice what an organization preaches can make an employee seem hypocritical and invite moral condemnation. Participants judged employees more harshly for the same transgression when it was inconsistent with ethical values the employees’ organization promoted, and ascriptions of hypocrisy mediated this effect (Studies 1–3). The results did not support the possibility that inconsistent transgressions simply seemed more harmful. In Study 4, participants were less likely to select a job candidate whose transgression did (vs. did not) contradict a value promoted by an organization where he had once interned. The results suggest that employees are seen as morally obligated to uphold the values that their organization promotes, even by people outside of the organization. We discuss how observers will judge someone against different ethical standards depending on where she or he works.
Cheating at the End to Avoid Regret
How do people behave when they face a finite series of opportunities to cheat with little or no risk of detection? In 4 experiments and a small meta-analysis, we analyzed over 25,000 cheating opportunities faced by over 2,500 people. The results suggested that the odds of cheating are almost three times higher at the end of a series than earlier. Participants could cheat in one of two ways: They could lie about the outcome of a private coin flip to get a payoff that they would otherwise not receive (Studies 1-3) or they could overbill for their work (Study 4). We manipulated the number of cheating opportunities they expected but held the actual number of opportunities constant. The data showed that the likelihood of cheating and the extent of dishonesty were both greater when people believed that they were facing a last choice. Mediation analyses suggested that anticipatory regret about passing up a chance to enrich oneself drove this cheat-at-the-end effect. We found no support for alternative explanations based on the possibility that multiple cheating opportunities depleted people’s self-control, eroded their moral standards, or made them feel that they had earned the right to cheat. The data also suggested that the cheat-at-the-end effect may be limited to relatively short series of cheating opportunities (i.e., n < 20). Our discussion addresses the psychological and behavioral dynamics of repeated ethical choices.
Back to key publications | Back to full publication list
When Virtue Leads to Villainy
Acting virtuously can subsequently free people to act less-than-virtuously. We review recent insights into this moral self-licensing effect: (a) It is reliable, though modestly-sized, and occurs in both real-world and laboratory contexts; (b) Planning to do good, reflecting on foregone bad deeds, or observing ingroup members’ good deeds is sufficient to license less virtuous behavior; (c) When people need a license, they can create one by strategically acting or planning to act more virtuously, exaggerating the sinfulness of foregone bad deeds, or reinterpreting past behavior as moral credentials; (d) Moral self-licensing effects seem most likely to occur when people interpret their virtuous behavior as demonstrating their lack of immorality but not signaling that morality is a core part of their self-concept.
Entitativity and Intergroup Bias: How Belonging to a Cohesive Group Allows People to Express their Prejudices
We propose that people treat prejudice as more legitimate when it seems rationalistic—that is, linked to a group’s pursuit of collective interests. Groups that appear to be coherent and unified wholes (entitative groups) are most likely to have such interests. We thus predicted that belonging to an entitative group licenses people to express prejudice against outgroups. Support for this idea came from three correlational studies and five experiments examining racial, national, and religious prejudice. The first four studies found that prejudice and discrimination seemed more socially acceptable to third parties when committed by members of highly entitative groups, because people could more easily explain entitative groups’ biases as a defense of collective interests. Moreover, ingroup entitativity only lent legitimacy to outgroup prejudice when an interests-based explanation was plausible—namely, when the outgroup could possibly threaten the ingroup’s interests. The last four studies found that people were more willing to express private prejudices when they perceived themselves as belonging to an entitative group. Participants’ perceptions of their own race’s entitativity were associated with a greater tendency to give explicit voice to their implicit prejudice against other races. Furthermore, experimentally raising participants’ perceptions of ingroup entitativity increased explicit expressions of outgroup prejudice, particularly among people most likely to privately harbor such prejudices (i.e., highly identified group members). Together, these findings demonstrate that entitativity can lend a veneer of legitimacy to prejudice and disinhibit its expression. We discuss implications for intergroup relations and shifting national demographics.
Back to key publications | Back to full publication list
Making Mountains of Morality from Molehills of Virtue: Threat Causes People to Overestimate their Moral Credentials
Seven studies demonstrate that threats to moral identity can increase how definitively people think they have previously proven their morality. When White participants were made to worry that their future behavior could seem racist, they overestimated how much a prior decision of theirs would convince an observer of their non-prejudiced character (Studies 1a–3). Ironically, such overestimation made participants appear more prejudiced to observers (Study 4). Studies 5–6 demonstrated a similar effect of threat in the domain of charitable giving – an effect driven by individuals for whom maintaining a moral identity is particularly important. Threatened participants only enhanced their beliefs that they had proven their morality when there was at least some supporting evidence, but these beliefs were insensitive to whether the evidence was weak or strong (Study 2). Discussion considers the role of motivated reasoning, and implications for ethical decision-making and moral licensing.
Back to key publications | Back to full publication list
The Unhealthy Road Not Taken: Licensing Indulgence by Exaggerating Counterfactual Sins
This research examined two hypotheses: 1) Reflecting on foregone indulgences licenses people to indulge, and 2) To justify future indulgence, people will exaggerate the sinfulness of actions not taken, thereby creating the illusion of having previously foregone indulgence. In Study 1 (a longitudinal study), dieters induced to reflect on unhealthy alternatives to their prior behavior (compared to dieters in a control condition) expressed weaker intentions to pursue their weight-loss goals – and one week later, they said that they had actually done less and intended to continue doing less to pursue such goals. In Study 2, weight-conscious participants who expected to eat cookies (compared to those merely shown cookies) inflated the unhealthiness of snack foods that they previously declined to eat, and exaggerated the extent to which dieting concerns explained why they had declined these snacks. Implications for moral behavior, self-control, and motivated construal processes are discussed.
Back to publication list
Inventing Racist Roads Not Taken: The Licensing Effect of Immoral Counterfactual Behaviors
Six experiments examined how people strategically use thoughts of foregone misdeeds to regulate their moral behavior. We tested two hypotheses: first, that people will feel licensed to act in morally dubious ways when they can point to immoral alternatives to their prior behavior, and second, that people made to feel insecure about their morality will exaggerate the extent to which such alternatives existed. Supporting the first hypothesis, when White participants could point to racist alternatives to their past actions, they felt they had obtained more evidence of their own virtue (Study 1), they expressed less racial sensitivity (Study 2), and they were more likely to express preferences about employment and allocating money that favored Whites at the expense of Blacks (Study 3). Supporting the second hypothesis, White participants whose security in their identity as a non-racist had been threatened remembered a prior task as having afforded more racist alternatives to their behavior than did those who were not threatened. This distortion of the past involved overestimating the number of Black individuals they had encountered on the prior task (Study 4), and exaggerating how stereotypically Black specific individuals had looked (Studies 5 and 6). We discuss implications for moral behavior, the motivated rewriting of one’s moral history, and how the life unlived can liberate people to lead the life they want.
Back to key publications | Back to full publication list
How the Moralization of Issues Grants Social Legitimacy to Act on One's Attitudes
Actions that do not have as their goal the advancement or protection of one’s material interests are often seen as illegitimate. Four studies suggested that moral values can legitimate action in the absence of material interest. The more participants linked sociopolitical issues to moral values, the more comfortable they felt advocating on behalf of those issues and the less confused they were by others’ advocacy (Studies 1 and 2). Crime victims were perceived as being more entitled to claim special privileges when the crime had violated their personal moral values (Studies 3 and 4). These effects were strongest when the legitimacy to act could not already be derived from one’s material interests, suggesting that moral values and material interest can represent interchangeable justifications for behavior. No support was found for the possibility that attitude strength explained these effects. The power of moralization to disinhibit action is discussed.
Back to publication list
The Strategic Pursuit of Moral Credentials
Moral credentials establish one's virtue and license one to act in morally disreputable ways with impunity (Monin & Miller, 2001). We propose that when people anticipate doing something morally dubious, they strategically attempt to earn moral credentials. Participants who expected to do something that could appear racist (decline to hire a Black job candidate in Studies 1 and 2, or take a test that might reveal implicit racial bias in Study 3) subsequently sought to establish non-racist credentials (by expressing greater racial sensitivity in Studies 1 and 2, or by exaggerating how favorably they perceived a Black job candidate in Study 3). Consistent with prior research, a follow-up study revealed that the opportunity to establish such credentials subsequently licensed participants to express more favorable attitudes towards a White versus a Black individual. We argue that strategically pursuing moral credentials allows individuals to manage attributions about their morally dubious behavior.
Back to publication list
Affirmation, Acknowledgement of In-Group Responsibility, Group-Based Guilt, and Support for Reparative Measures
Three studies, 2 conducted in Israel and 1 conducted in Bosnia and Herzegovina, demonstrated that affirming a positive aspect of the self can increase one’s willingness to acknowledge in-group responsibility for wrongdoing against others, express feelings of group-based guilt, and consequently provide greater support for reparation policies. By contrast, affirming one’s group, although similarly boosting feelings of pride, failed to increase willingness to acknowledge and redress in-group wrongdoing. Studies 2 and 3 demonstrated the mediating role of group-based guilt. That is, increased acknowledgment of in-group responsibility for out-group victimization produced increased feelings of guilt, which in turn increased support for reparation policies to the victimized group. Theoretical and applied implications are discussed.
Diffusion of Entitlement: An Inhibitory Effect of Scarcity on Consumption
Four studies demonstrated that increasing a desirable commodity’s scarcity (i.e., decreasing its supply or increasing demand for it) can inhibit people from claiming the commodity for themselves, thereby delaying its consumption. In Study 1, participants were slower to claim a commodity when its supply was limited versus unlimited. In Study 2, participants expressed more disapproval of someone who took the last commodity compared to the second-last commodity. Participants in Study 3 anticipated that increased demand for a commodity would make them less likely to claim it despite wanting it more. Study 4 showed that the more participants there were who could claim a commodity, the longer it went unclaimed. The inhibitory effect of scarcity was mediated by diminished entitlement to the commodity (Study 3), and increasing entitlement reduced the inhibition against taking scarce commodities (Studies 1 and 2). These findings are discussed in the context of individuals’ concern with equality.
Reducing Exposure to Trust-Related Risks to Avoid Self-Blame
Three studies demonstrated that anticipated self-blame elicits more conservative decisions about risks that require trust than about otherwise economically identical risks that do not. Participants were more reluctant to invest money in a company when it risked failure due to fraud versus low consumer demand (Study 1), and to risk points in an economic game when its outcome ostensibly depended on another participant versus chance (Studies 2 and 3). These effects were mediated by anticipated self-blame (Studies 1 and 2). Additionally, participants who actually experienced a loss felt more self-blame when the loss violated their trust, and became even more conservative in subsequent risk decisions relative to participants whose loss did not violate their trust (Study 3). No support emerged for alternative explanations based on either the perceived probability of incurring a loss, or on an aversion to losses that profit others. The motivational power of trust violations is discussed.
Letting People Off the Hook: When Do Good Deeds Excuse Transgressions?
Three studies examined when and why an actor’s prior good deeds make observers more willing to excuse – or license – his or her subsequent, morally dubious behavior. In a pilot study, actors’ good deeds made participants more forgiving of the actors’ subsequent transgressions. In Study 1, participants only licensed blatant transgressions that were in a different domain than actors’ good deeds; blatant transgressions in the same domain appeared hypocritical and suppressed licensing (e.g., fighting adolescent drug use excused sexual harassment, but fighting sexual harassment did not). Study 2 replicated these effects, and showed that good deeds made observers license ambiguous transgressions (e.g., behavior that might or might not represent sexual harassment) regardless of whether the good deeds and the transgression were in the same or in a different domain – but only same-domain good deeds did so by changing participants’ construal of the transgressions. Discussion integrates two models of why licensing occurs.
Back to key publications | Back to full publication list
Moral Self-Licensing: When Being Good Frees Us to Be Bad
Past good deeds can liberate individuals to engage in behaviors that are immoral, unethical, or otherwise problematic, behaviors that they would otherwise avoid for fear of feeling or appearing immoral. We review research on this moral self-licensing effect in the domains of political correctness, prosocial behavior, and consumer choice. We also discuss remaining theoretical tensions in the literature: Do good deeds reframe bad deeds (moral credentials) or merely balance them out (moral credits)? When does past behavior liberate and when does it constrain? Is self-licensing primarily for others’ benefit (self-presentational) or is it also a way for people to reassure themselves that they are moral people? Finally, we propose avenues for future research that could begin to address these unanswered questions.
Psychological License: When it is Needed and How it Functions
Differences among people in the actions they take or the opinions they express do not always reflect differences in underlying attitudes, preferences, or motivations. When people differ in the extent to which they are psychologically licensed (i.e., feel able to act without discrediting themselves), they will act differently despite having similar attitudes, preferences, and motivations. Wanting to do something is not sufficient to spur action; one must also feel licensed to do it. We show that feeling licensed can liberate people to express morally problematic attitudes that those who do not feel licensed are inhibited from expressing. We also show that feeling one lacks license can inhibit people from acting on their attitudes, preferences, and motivations.
Endorsing Obama Licenses Favoring Whites
Three studies tested whether the opportunity to endorse Barack Obama made individuals subsequently more likely to favor Whites over Blacks. In Study 1, participants were more willing to describe a job as better suited for Whites than for Blacks after expressing support for Obama. Study 2 replicated this effect and ruled out alternative explanations: participants favored Whites for the job after endorsing Obama, but not after endorsing a White Democrat, nor after seeing Obama’s photo without having an opportunity to endorse him. Study 3 demonstrated that racial attitudes moderated this effect: endorsing Obama increased the amount of money allocated to an organization serving Whites at the expense of an organization serving Blacks only for participants high in a measure of racial prejudice. These three studies suggest that expressing support for Obama grants people moral credentials (Monin & Miller, 2001), thus reducing their concern with appearing prejudiced.
Back to key publications | Back to full publication list
From Moral Outrage to Social Protest: The Role of Psychological Standing
The thesis of this chapter is that the decision to protest requires not only that people experience outrage but that they feel entitled to act upon their outrage. We call this feeling of entitlement psychological standing. One important determinant of a person’s standing to protest an injustice is the extent to which he or she is materially affected by it. The more one is materially affected by the source of outrage the more standing one has to protest it. When people lack a material stake in an issue, they can nonetheless feel that they have the standing to protest if they observe other non-vested individuals protesting or if they perceive themselves as having a moral stake in the issue. Having a personal characteristic or history that justifies to others why one feels such outrage can also provide one with standing. However, not just any connection to an issue will suffice. Having committed a particular transgression in the past, or simply being a member of a group that has committed (or continues to commit) that transgression deprives one of the standing to protest that particular transgression. Finally, having a material stake in an issue’s outcome is not always sufficient to license protest. Victims lack the standing to retaliate against a transgressor when others who have been more victimized by the transgression choose to turn the other cheek. The chapter concludes by showing that the concept of standing, in addition to permitting unique predictions, offers an alternative frame for viewing previous findings.
Embodied Temporal Perception of Emotion
The role of embodiment in the perception of the duration of emotional stimuli was investigated with a temporal bisection task. Previous research has shown that individuals overestimate the duration of emotional, compared with neutral, faces (S. Droit-Volet, S. Brunot, & P. M. Niedenthal, 2004). The authors tested a role for embodiment in this effect. Participants estimated the duration of angry, happy, and neutral faces by comparing them to 2 durations learned during a training phase. Experimental participants held a pen in their mouths so as to inhibit imitation of the faces, whereas control participants could imitate freely. Results revealed that participants overestimated the duration of emotional faces relative to the neutral faces only when imitation was possible. Implications for the role of embodiment in emotional perception are discussed.