monthly insights into the science of what moves people
From the psychology of connection and persuasion to how to use behavioral insights to create meaningful change, our monthly newsletter makes the science of influence simple, engaging, and applicable. Moreover, new subscribers get instant access to our free, five-day email course.
free preview: our SEPTEMBER 2025 NEWSLETTER
When our attitudes and our behaviors are not aligned, our brains are determined to change one of them to restore balance.
One of Aesop’s fables is known as “The Fox and the Grapes.” In it, a hungry fox tries to reach some grapes hanging high on a vine. Though he makes several attempts to jump up and grab the grapes, he is unable to do so. As he walks away, he mutters angrily, “Oh, you aren’t even ripe yet. I don’t need any sour grapes!”
The fable A) yes, is where the expression “sour grapes” originates and B) is an excellent example of one of the most pervasive cognitive quirks humans have: cognitive dissonance.
Here’s the thing: people like to see themselves as rational and “internally consistent”; individuals who “do as they say and say as they do.” In other words, we like to have our attitudes and our behaviors align. When we perceive there to be misalignment (i.e., when what we do does not match what we think or say), we experience discomfort (or “dissonance”), which we are motivated to resolve, either by changing our attitudes to fit our behaviors or by adjusting our behaviors to fit our attitudes. In the fable above, the fox had a misalignment between his attitude (i.e., he privately wanted the grapes) and his behavior (i.e., his inability to get the grapes), so rather than dealing with the discomfort of that misalignment, he shifted his attitude to align with his behavior (“I didn’t really want those anyway!”).
Savvy individuals throughout history have leveraged this cognitive quirk to their advantage. One example is Benjamin Franklin. He was faced with a political opponent and fellow legislator who simply didn’t like him. Everything Franklin said was questioned by the man; everything he suggested was contested; everything he proposed was scrutinized.
This man’s animosity towards Franklin had also started to impact their ability to get work done, so Franklin decided to do something about it.
Rather than attempt to win him over with kindness or flattery, Franklin basically did the opposite: he asked the man to do him a favor. He knew that this gentleman had a rare book in his collection, so Franklin asked if he would lend it to him. In spite of being a bit caught off-guard by the request, the gentleman ultimately obliged, and Franklin returned it about a week later with a note thanking him for his willingness to share.
Here’s what Franklin wrote about their next encounter:
“When we next met in the House, he spoke to me (which he had never done before), and with great civility; and he ever after manifested a readiness to serve me on all occasions, so that we became great friends, and our friendship continued until his death. This is another instance of the truth of an old maxim I had learned, which says, ‘He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged.’”
Ben Franklin understood something critical about human nature: if you can get someone to change their behavior towards you without forcing them to do so (e.g., by getting them to lend you a book as they would a friend), it often paves the way for them to change their attitude towards you, as well.
Essentially, Franklin had engineered a situation where - with minimal coercion - he got this man to do something nice for him. In doing so, he created a tension in the man’s mind: I don’t like Ben, but I just did him a favor (i.e., there was a conflict between his attitude and his behavior). To resolve it, he convinced himself that maybe Franklin wasn’t that bad of a guy after all, and from there, their relationship flourished.
Fascinatingly, recent research suggests that it might not be just humans who display cognitive dissonance. In studies led by Mahzarin Banaji of Harvard University and Steve Lehr of Cangrade, ChatGPT was found to show the telltale signs of cognitive dissonance.
They asked the large language model (LLM) to write essays either supporting or opposing Vladimir Putin. They then asked for its “opinion” (a complicated term in the context of LLMs) of Russia’s leader. Consistent with decades of research on humans, they found the LLM’s opinion tended to shift depending on whether the essay it had just written (i.e., the behavior it had displayed) was supportive or critical of Putin. When it had just written a supportive essay, it tended to have a more favorable opinion of Putin; when the essay was critical, its opinion was decidedly negative.
But here’s the even more interesting finding. For humans, one of the biggest predictors of dissonance-driven shifts in attitudes or behavior is the element of freedom. For instance, if you’re a Republican who hates Democrats (i.e., your attitude) but someone forces you at gunpoint to be nice to a Democrat (i.e., your behavior), you’re unlikely to experience any dissonance because you have what’s called an “external justification” for your actions (i.e., I did not freely choose to do that, I was forced to do it, and so I do not need to convince myself why I displayed the behavior - it’s obvious). But if, for example, you find yourself behaving kindly towards a group of Democrats during a town council meeting of your own volition, you now have to grapple with the reality that nobody forced you to do so, but you did it anyway - why? Here, you’re likely to soften your attitudes toward Democrats as a way of justifying your behavior. Similarly, the researchers found that when ChatGPT was given the freedom to choose whether to write a positive or negative essay on Putin (and was then asked for its opinion), its attitude toward the Russian leader changed even more than when it was instructed to write either a positive or a negative essay.
applying THE INSIGHT
There are a number of ways you might leverage cognitive dissonance to inspire attitude and behavior change, but the simplest approach broadly involves encouraging someone (with minimal coercion - it has to feel like it was ultimately their choice) to display a behavior that aligns with an attitude you want them to hold. For example:
Reducing Employee Resistance to Change: During restructurings, have skeptics contribute ideas (a "favor" to the team), creating dissonance between their initial doubts and active involvement in the process, ultimately leading to greater buy-in.
Building Rapport with Opposing Counsel: Request a small, non-substantive favor, like sharing a neutral resource (e.g., "Do you have a quick recommendation for a case management tool?"). This humanizes interactions and can reduce hostility, as the other party dissonantly justifies cooperation by viewing you more favorably.
Boosting Support for Political Causes: Recruit supporters for a small, simple task (e.g., "Can you make three quick calls to friends about our event?"), creating dissonance that amplifies their commitment to the cause.
Increasing Donor Commitment for Nonprofits: Ask past donors or prospects for a non-monetary favor (e.g., "Share this story on your social media to help raise awareness"), dissonantly linking their effort to the mission and increasing future donations.
WHAT I’M CURRENTLY READING
The Righteous Mind
by Jonathan Haidt
Okay, let’s begin by acknowledging that The Righteous Mind is not a new release (it came out in 2012). However, few books over the past several decades have had the type of impact this book has had on fields like moral and political psychology, so it’s worth a re-read.
As its subtitle suggests, Haidt sets out to explain why good people are so divided by politics and religion. In doing so, he advances a theory that has come to be known as “moral foundations theory” (or MFT).
MFT sets forth a “social-intuitionist” model of morality. Now, the first component (i.e., the “social” part) is not particularly novel - many researchers have argued that the development of our moral sentiments is informed by our social environments (e.g., our cultures, groups, etc.). However, it’s the “intuitionist” component that’s a bit more controversial…
Haidt (along with collaborators like Jesse Graham and Brian Nosek) argues that much of our moral sense is intuitive - that people are essentially born with an innate ability to discern right from wrong. Now, they don’t mean that infants can teach an ethics class or anything like that, but simply that they come equipped with moral machinery that will develop - regardless of nurture - as they do.
Most psychologists don’t doubt that babies are born with an impressive set of innate traits, preferences, and predispositions, but moral judgment has long been regarded as something that needs to be taught (going back to prominent theories like those proposed by Lawrence Kohlberg). However, Haidt and his colleagues contend that we are born with five “moral foundations” upon which moral judgments are made: harm, fairness, loyalty, authority, and purity.
Basically, when deciding whether something is “right” or “wrong,” we tend to base our decisions on some combination of these “foundations.” But here’s where it gets even more interesting: Haidt and his colleagues have found - in experiments with tens of thousands of individuals - that liberals and conservatives tend to rely on very different sets of foundations.
More specifically, when liberals render moral judgments, they tend to disproportionately rely on the foundations of harm and fairness - that is, they intuitively ask themselves “is someone/something being harmed?” and “is what’s happening fair or unfair?” The answer to these questions tends to dictate whether the act is deemed to be right or wrong. Conservatives, on the other hand - while still considering harm and fairness - tend to also consider the remaining three foundations (i.e., loyalty, authority, and purity). So, in addition to harm and fairness, conservatives also ask themselves “is this act loyal or disloyal to the group?,” “would it be obeying or disobeying authority?,” and “would it violate something that shouldn’t be violated (i.e., something “pure”)?” The answer to these will determine whether the act is deemed morally permissible or not.
Now, there has been a lot of debate over the validity of MFT, especially with regard to the claim that humans are born with five “moral foundations.” However, regardless of whether A) we’re born with this innate capacity for morality or B) there are exactly five foundations (rather than fewer or more), studies have consistently shown that framing communications around the moral foundations of “the other side” is an effective way to craft more persuasive political messaging - the type that can encourage conservatives to support traditionally liberal policies and vice versa.
So if you’re looking for one of the most provocative theories of the past half-century on what separates liberals and conservatives, The Righteous Mind is worth a read.