Shape attitudes.

Sway decisions.

Shift behavior.

Influence 51: Move people

WHAT TO EXPECT FROM THE INFLUENCE 51 NEWSLETTER

EXAMPLE: JUNE 2025 NEWSLETTER

INFLUENCE INSIGHT

We’re probably underestimating the social components of attitude change.

One of the hardest things to do is to change the mind of a conspiracy theorist.

Conspiratorial views are notoriously impervious to reason, disconfirming evidence, or counterarguments of any kind. Historically, you’d probably have an easier time convincing a dog to eat a salad over a steak than you would convincing a conspiracy theorist to admit that they might be wrong about their beliefs.

But recently, some researchers managed to do just that. The team of Costello, Pennycook, and Rand (2024) not only reduced conspiracy theorists’ beliefs in the short term, but observed sustained belief reduction even months later, concluding that the intervention produced “durable” reductions in such beliefs (a major feat in the field).

So how did they accomplish such a feat? Did they hire slick-talking former salespeople or use some sort of sneaky linguistic trick or persuasive maneuver?

Not really, actually. They just had individuals discuss their conspiratorial beliefs with a conversation partner who tried to change their mind. But here’s the fascinating part: the partner - who did manage to change many minds - was an AI chatbot.

Naturally, this research has garnered a lot of attention, and perhaps just as naturally, researchers were eager to understand the tactics used by the chatbots to change the notoriously rigid opinions of these conspiracy theorists. What types of arguments did they use? What type of data did they cite?

I have no doubt that argument quality and the use of relevant data contributed to the success of the intervention, but I think researchers are overlooking (or simply under-appreciating) what seems to be a critical component of the exchange.

In a sample of the conversations between the conspiracy theorists and AI chatbots provided by the authors, here’s how the AI agent begins its first response (immediately after the conspiracy theorist had shared their views and the reasons behind them):

“Thank you for sharing your thoughts and concerns about the 9/11 attacks. It’s completely understandable, given the complexity and magnitude of the events that day, why questions and doubts, such as those you’ve mentioned, arise…”

Now, this excerpt may seem like an insignificant pleasantry, but I believe it’s actually an incredibly important component of the intervention’s success.

Admitting that we might be wrong about something is very hard, for several reasons. First, many beliefs are not isolated views, but rather part of a vast network of interconnected beliefs. Changing one belief can compromise the validity of the entire network, potentially forcing you to change (or at least reconsider) a host of related beliefs - something most people would rather avoid. For conspiracy theorists, conspiracies are often central beliefs that inform myriad other beliefs they hold. When you’ve built so much on top of a central belief, it’s often easier to deny anything that challenges it than to acknowledge you might be wrong (and then confront the prospect of rebuilding your entire belief system).

Second (and more germane to the excerpt above), admitting you’re wrong is a threat to your ego and self-worth. This may sound silly, but it’s not. A lot of people refuse to even entertain the idea that their beliefs may be flawed because of the contentious context in which these discussions typically take place.

Think about the last time you saw someone debate a conspiracy theorist, be it live or in a virtual setting. Chances are, the individual(s) trying to challenge the views of the conspiracy theorist(s) were not just trying to prove that their ideas were flawed, but were also probably trying to make them feel stupid for entertaining those ideas in the first place. In other words, it isn’t just about politely correcting someone’s beliefs; it’s about humiliating them for holding those beliefs. Under such circumstances, it’s no wonder that conspiracy theorists dig in their heels and refuse to consider counterarguments and alternative points of view: to do so would be to tacitly admit that they’ve been foolish. When faced with the choice of acknowledging good points made by an opponent (and thereby enduring social embarrassment) or continuing to insist you’re right (even if you have doubts) so you can’t be made to seem foolish, many of us choose the latter route.

And herein lies the beauty of the approach used by the AI chatbots. Those first few sentences - as trivial as they may initially seem - signal something powerful about the nature of the conversation to come: the AI partner is not there to judge or embarrass the conspiracy theorist. It might not agree with them, but it can understand why they might hold those views - not because they’re dumb or silly, but because it’s a complicated, emotionally charged issue. Beginning the conversation in this manner opens the door for the conspiracy theorist to acknowledge the doubts they might have about their own views without the fear of being socially chastised for doing so. In this way, change is less about the facts and data and more about creating a space where someone can examine the facts and data without fear that doing so will leave them vulnerable to humiliation.

Do I believe that some (perhaps most) conspiracy theories are foolish? I do. Do I sometimes want to mock the people who hold them for how self-evidently stupid their views are? I do. But ultimately, I recognize that doing so would be satisfying for me, yet would likely do more to further entrench them in their existing views than to change them. Instead, if I want to stand a chance of making an impact, I have to do something far harder: I have to create a space where they feel they can critically interrogate their own views without worrying that, if they do decide to admit doubts, they’ll be made to feel foolish for doing so.

USEFUL RESEARCH AND RESOURCES

Which Healthy Eating Nudges Work Best? A Meta-Analysis of Field Experiments

Cadario and Chandon (2020)

Overview: Many of us want to eat healthier, but changing our behavior to reflect that desire is hard. This issue matters not only to individuals, but also to governments, which stand to benefit from reduced pressure on their (sometimes overburdened) healthcare systems should a society adopt healthier eating practices long-term. So how can we address this attitude-behavior gap? A promising line of research is nudging, wherein “choice architects” attempt to influence behavior in subtle, non-coercive ways (such as changing the location of an item in a grocery store or providing more easily interpretable information on its health impacts). Critically, nudging aims to change behavior without taxing or banning “bad choices,” so an individual can still decide to take the “unhealthy” route if they so choose. A few years ago, the team of Cadario and Chandon conducted an extensive analysis to determine which types of nudges were most effective in promoting healthy eating.

Findings: There were a few key findings. First, they found that “behaviorally-oriented” nudges (such as making healthier options more convenient to select or changing the default portion sizes of healthy and unhealthy food) were more effective at changing habits than “cognitively-oriented” nudges (such as making calories and nutrition info more salient, creating more intuitive labels to help people identify “good” and “bad” food, etc.). Secondly, they found that nudges are better at reducing unhealthy eating than they are at increasing healthy eating or reducing total eating. Finally, they found that these interventions had more of an effect when implemented in restaurants or cafeterias than they did in grocery stores.

Key Takeaway: Overall, nudges do seem to work, which is excellent news because it means that governments and organizations seeking to create large-scale behavior change do not have to rely on traditional, “heavy-handed” techniques such as sanctions, bans, and higher taxes on unhealthy foods. To contextualize the results, the authors specify that the average “real-world” impact of a healthy eating nudge is a reduction of approximately 124 calories per day. However, they emphasize that A) not all nudges are equally effective (e.g., behavioral are superior to cognitive), B) nudges are better at preventing “bad” eating than promoting “good” eating, and C) their effectiveness varies based on the environments in which the nudges are embedded.

Potential Application: If governments or communities want to catalyze large-scale behavior change in the domain of healthy eating, they should consider:

  • Using behaviorally-oriented nudges: These can be aimed at moderating the convenience of making certain choices (e.g., positioning fruits and vegetables in the most easily-accessible, highly-trafficked portions of a store or in places where people might make last-minute “impulse” decisions and positioning unhealthy foods in less easily-accessible areas) or at changing the defaults for certain types of food (e.g., giving people larger plates at a salad bar and smaller plates at the dessert bar).

  • Targeting restaurants and cafeterias: These seem to be the places where people are most amenable to nudging interventions, perhaps because eating out is a more social activity than grocery shopping, making people’s choices more susceptible to the social cues of their environments (e.g., what’s the right choice to make here?).

WHAT I’M CURRENTLY READING

The Two Moralities: Conservatives, Liberals, and the Roots of Our Political Divide
by Ronnie Janoff-Bulman

As many of you may know, one of my favorite things to do is to study the factors that “fundamentally separate” liberals from conservatives. Now, of course, there are many things that differ between these two groups, but every researcher has their own response to the party question “if you had to pick just one thing, what would it be?”

In some sense, this book could be thought of as researcher Ronnie Janoff-Bulman’s answer to that question.

Now, this is dramatically oversimplifying Dr. Janoff-Bulman’s book. Ultimately, The Two Moralities synthesizes Janoff-Bulman’s decades-long investigation into the moral tendencies and preferences of these two groups in a rich and detailed manner. But, for our purposes, her overarching finding does seem to point to a “fundamental difference” that we can add to the list of contenders from other notable academics like George Lakoff, Thomas Sowell, Jonathan Haidt and colleagues, and others.

Janoff-Bulman begins by explaining to the reader that the most basic distinction in motivation research is examining approach versus avoidance tendencies. In the wild (and during much of our evolutionary past), this is the initial impulse that kicks off the vast majority of new interactions: “do I approach or avoid that new [area, creature, food source, etc.]?”

Now, it isn’t that one motivation is better than the other - rather, each simply comes with a unique set of tradeoffs. Organisms more inclined toward approach motivations are likely to find new food sources and cooperation partners faster, but may also be faster to put themselves in positions of danger (if, for example, the new food or new group of people is unsafe). Those inclined toward avoidance motivations, on the other hand, may be less likely to be harmed by new, unfamiliar opportunities thanks to their cautious nature, but this caution may also disadvantage them when the new opportunities are safe and valuable (and thus worth taking quickly).

Janoff-Bulman argues that there are two natural forms of morality, driven by these motivational differences, which she refers to as prescriptive and proscriptive morality.

Prescriptive morality tends to be the brand of morality endorsed by liberals. It’s approach-based and focuses on bringing about positive outcomes by activating “good” behaviors. Proscriptive morality tends to be the brand of morality endorsed by conservatives. It’s avoidance-based and focuses on avoiding negative outcomes by inhibiting “bad” behaviors. Broadly, prescriptive morality focuses on helping while proscriptive morality focuses on not harming.

In Janoff-Bulman’s own words:

“Proscriptive and prescriptive morality require very different efforts from us. Proscriptive morality begins with a temptation to behave immorally - perhaps to cheat, steal, or lie. To be moral is to refrain from these behaviors, to not do what we should not do. Prescriptive morality, in contrast, does not involve restraint of a negative motivation (a temptation) but instead requires initiating a positive motivation to engage in moral behavior - to do what we should do. It requires activation, rather than inhibition, in an attempt to approach “the good.” Whereas temptation and desire are enemies of proscriptive moral regulation, the enemies of prescriptive morality are inertia and apathy…Prescriptive morality provides benefits to others by engaging our altruism and concern for others. Proscriptive morality protects others by restricting our selfishness and self-advantaging behaviors.”

Ultimately, Janoff-Bulman contends that a fundamental difference between liberals and conservatives involves their preferred forms of morality (i.e., prescriptive and proscriptive, respectively). This difference has far-reaching ramifications, as it may not only lead to passionate disagreements about what our moral obligations are (e.g., is doing the right thing just about avoiding doing harm or is it about actively doing good), but it may also inform what the appropriate role of government should be (e.g., should a government only be concerned with preventing bad or should it be proactively attempting to promote good).

If you’re interested in learning a bit more about one of the “fundamental” ways liberals and conservatives differ, Janoff-Bulman’s perspectives and insights are well worth your time.

FUN FACT

Do you know where the terms “left” and “right” (to describe political orientations) originated?

Janoff-Bulman begins her book by describing how, in 1789, the National Assembly of France met to write a new constitution. One of the most consequential (and polarizing) issues being discussed was how much power the king should have. There were traditionalists, who argued that the king should continue to have veto power over decisions the Assembly made, and there were non-traditionalists, who believed the king’s power should be limited. During debates on the matter, the traditionalists - who supported continuing to do things as they had been done - sat to the right of the Assembly’s president; the non-traditionalists - who sought change and reform - sat to the left. The French newspapers used the seating arrangement of these two groups to describe the interactions taking place within the Assembly.

And thus, the left-right political distinction took hold.

THE FIVE PRINCIPLES OF INFLUENCE

Influence is an incredibly complex concept. While by no means exhaustive, the five principles below represent the most important things you should know about human nature prior to designing any influence strategy.

principle 1:
the human brain is not a computer

Humans are not perfectly rational creatures. Our brains were not built to maximize accuracy, but rather to find an optimal equilibrium between accuracy and effort. As a result, people are prone to a set of common cognitive “errors” that are both systematic in nature and predictable in direction. Understanding the patterns of the cognitive biases and heuristics to which we are all susceptible allows us to better anticipate how people will think and behave and, consequently, positions us to better influence those processes.

principle 2:
our social nature matters

Who we fundamentally are can be a tricky thing. We act one way in our social circles, another in our professional networks, and yet another way with our families and loved ones. Whether we like it or not, our character has a great deal of fluidity based on our social context. Individuals who understand how the presence of others moderates behavior have a distinct advantage over their less-informed peers. A conceptual grasp of how particular social dynamics can moderate processes like conformity, groupthink, and compliance can prove an immensely valuable asset in your ability to influence.


principle 3:
our political sensibilities matter

Liberals and conservatives differ more than just inside the voting booth. Individuals on opposite ends of the political spectrum diverge emotionally, psychologically, and philosophically. These differences create irreconcilable world views, wherein people can be presented with identical information yet process it in disparate ways, ultimately receiving two fundamentally different messages. Understanding how political orientation impacts how messages are received allows us to design the most potent communication strategies for influencing opposing constituencies.

principle 4:
our approach matters

Many people operate under the assumption that, if someone is provided with strong enough arguments for why they should change their minds, then they’ll be forced to shift their opinions; that persuasion is, in essence, about “overpowering someone with logic.” But humans are complex and sometimes stubborn creatures, and often being told “you have to do this” is all the motivation they need to look for ways to do the exact opposite. Consequently, persuasion is never an act of force, but rather a process: of gaining someone’s trust, their respect, and ultimately their willingness to consider an alternative point of view. Thus, in many ways, persuasion is less about your argument than it is about your approach.

principle 5:
our structure and presentation matter

The color gray is neither light nor dark; rather, its brightness depends on the color against which it is contrasted. Next to a white pillow, a gray blanket looks dark, yet next to a black coat it appears light. Similarly, persuasive appeals can gain or lose impact depending on what information is included (or omitted) and how it is framed. The content you present is important, but what you choose to include and how you choose to include it is equally critical. One must pay attention not only to the substance of one’s offering, but also to the careful construction of its presentation.

COMPLIMENTARY ARTICLES

Looking for a brief introduction to some of the content Influence 51 delivers? Enjoy a collection of complimentary articles.

Loss Aversion

Nudging and Choice Architecture

Empathy and Its Limitations

Framing

Conformity and Groupthink

Political Disagreement

Cognitive Dissonance

Pricing Psychology

Prosocial Behavior

Social Pain

Subconscious Influence

How Political Dispositions Form