Persuasive Campaign (fifth question): Read the article and answer the questions.

Q1: Review the module lecture, which shows how the company Digital Living used a pilot as a communication effort to encourage employees to adopt a new annual performance review evaluation system. Then identify one (and only one) tactic used in the example that is related to consistency theories and cognitive dissonance. Finally, discuss whether you would use the same tactic and why.


Q2: Can you think of an instance when you agreed with a group despite your own reservations about the decision?  If you cannot recall an instance where this happened, provide an example (real or hypothetical) where you might see the power of group influence in action.  How do the factors of uncertainty, unanimity, and similarity play a role in your example?

U.S. Copyright Law (Title 17 of the U.S. Code) governs the reproduction and redistribution of copyrighted material. Downloading this document for the purpose of redistribution is prohibited.
The Social Animal, Eleventh Edition
Elliot Aronson, University of California, Santa Cruz
with Joshua Aronson, New York University
Worth Publishers
To Vera, of course
The Social Animal, Eleventh Edition
Acquisitions Editor: Erik Gilg
Marketing Manager: Jennifer Bilello
Art Director: Babs Reingold
Senior Designer: Kevin Kali
Senior Project Editor: Laura McGinn
Copy Editor: Maria Vlasak
Production Manager: Sarah Segal
Compositor: Northeastern Graphic, Inc.
Printing and Binding: R. R. Donnelley
Cover Drawing by Tom Durfee
Library of Congress Control Number: 2007925852
ISBN-13: 978-1-4292-3341-5
ISBN-10: 1-4292-3341-9
© 2012, 2008, 2004, 1999 by Worth Publishers
© 1995, 1992, 1988, 1984, 1980, 1976, 1972 by
W. H. Freeman and Company
Printed in the United States of America
First printing 2011
Worth Publishers
41 Madison Avenue
New York, NY 10010
www.worthpublishers.com
Saul Steinberg, Untitled drawing, ink on paper.
Originally published in The New Yorker, February 16, 1963.
© The Saul Steinberg Foundation / Artists Rights Society (ARS), New York
5 Self-Justification
Picture the following scene: A young man named Sam is being hypnotized. The hypnotist gives Sam a posthypnotic suggestion, telling
him that, when the clock strikes 4:00, he will (1) go to the closet, get
his raincoat and galoshes, and put them on; (2) grab an umbrella;
(3) walk eight blocks to the supermarket and purchase six bottles of
bourbon; and (4) return home. Sam is told that, as soon as he reenters his apartment, he will “snap out of it” and be himself again.
When the clock strikes 4:00, Sam immediately heads for the
closet, dons his raincoat and galoshes, grabs his umbrella, and
trudges out the door on his quest for bourbon. There are a few
strange things about this errand: (1) it is a clear, sunshiny day—there
isn’t a cloud in the sky; (2) there is a liquor store half a block away
that sells bourbon for the same price as the supermarket eight blocks
away; and (3) Sam doesn’t drink.
Sam arrives home, opens the door, reenters his apartment, snaps
out of his “trance,” and discovers himself standing there in his raincoat and galoshes, with his umbrella in one hand and a huge sack of
liquor bottles in the other. He looks momentarily confused. His
friend, the hypnotist, says,
“Hey, Sam, where have you been?”
“Oh, just down to the store.”
“What did you buy?”
“Um . . . um . . . it seems I bought this bourbon.”
“But you don’t drink, do you?”
“No, but . . . um . . . um . . . I’m going to do a lot of entertaining during the next several weeks, and some of my friends do.”
“How come you’re wearing all that rain gear on such a sunny day?”
“Well . . . actually, the weather is quite changeable this time of
year, and I didn’t want to take any chances.”
“But there isn’t a cloud in the sky.”
“Well, you never can tell.”
“By the way, where did you buy the liquor?”
“Oh, heh, heh. Well, um . . . down at the supermarket.”
“How come you went that far?”
“Well, um . . . um . . . it was such a nice day, I thought it might
be fun to take a long walk.”
People are motivated to justify their own actions, beliefs, and feelings.
When they do something, they will try, if at all possible, to convince
themselves (and others) that it was a logical, reasonable thing to do.
There was a good reason why Sam performed those silly actions—he
was hypnotized. But because Sam didn’t know he had been hypnotized, and because it was difficult for him to accept the fact that he
was capable of behaving in a nonsensical manner, he went to great
lengths to convince himself (and his friend) that there was a method
to his madness, that his actions were actually quite sensible.
The experiment by Stanley Schachter and Jerry Singer discussed
in Chapter 2 can also be understood in these terms. Recall that these
investigators injected people with epinephrine. Those who were
forewarned about the symptoms caused by this drug (palpitations of
the heart, sweaty palms, and hand tremors) had a sensible explanation for the symptoms when they appeared. “Oh, yeah, that’s just the
drug affecting me.” Those who were misled about the effects of the
drug, however, had no such handy, logical explanation for their
symptoms. But they couldn’t leave the symptoms unjustified; they
tried to account for them by convincing themselves that they were
either deliriously happy or angry, depending on the social stimuli in
the environment.
The concept of self-justification can be applied more broadly
still. Suppose you are in the midst of a great natural disaster, such as
an earthquake. All around you, buildings are toppling and people are
getting killed and injured. Needless to say, you are frightened. Is
there any need to seek justification for this fear? Certainly not. The
evidence is all around you; the injured people and the devastated
buildings are ample justification for your fear. But suppose, instead,
the earthquake occurred in a neighboring town. You can feel the
tremors, and you hear stories of the damage done to the other town.
You are terribly frightened, but you are not in the midst of the devastated area; neither you nor the people around you have been hurt,
and no buildings in your town have been damaged. Would you need
to justify this fear? Yes. Much like the people in the Schachter-Singer
experiment experiencing strong physical reactions to epinephrine but
not knowing why, and much like our hypnotized friend in the raincoat and galoshes, you would be inclined to justify your own actions
or feelings. In this situation, you see nothing to be afraid of in the
immediate vicinity, so you would be inclined to seek justification for
the fact that you are scared out of your wits.
This disaster situation is not a hypothetical example; it actually
occurred in India. In the aftermath of an earthquake, investigators
collected and analyzed the rumors being spread. What they discovered was rather startling: Jamuna Prasad,1 an Indian psychologist,
found that when the disaster occurred in a neighboring village such
that the residents in question could feel the tremors but were not in
imminent danger, there was an abundance of rumors forecasting impending doom. Specifically, the residents of this village believed, and
helped spread rumors to the effect that a flood was rushing toward
them; February 26 would be a day of deluge and destruction; there
would be another severe earthquake on the day of the lunar eclipse;
there would be a cyclone within a few days; and unforeseeable
calamities were on the horizon.
Why in the world would people invent, believe, and communicate such stories? Were these people masochists? Were they paranoid?
Certainly these rumors would not encourage the people to feel calm
and secure. One rather compelling explanation is that the people were
terribly frightened, and because there was not ample justification for
this fear, they invented their own justification. Thus, they were not
compelled to feel foolish. After all, if a cyclone is on the way, isn’t it
perfectly reasonable that I should be wild-eyed with fear? This explanation is bolstered by Durganand Sinha’s study of rumors.2 Sinha investigated the rumors being spread in an Indian village following a
disaster of similar magnitude. The major difference between the situation in Prasad’s study and the one in Sinha’s study was that the people being investigated by Sinha had actually suffered the destruction
and witnessed the damage. They were scared, but they had good reasons to be frightened; they had no need to seek additional justification for their fears. Thus, their rumors contained no prediction of
impending disaster and no serious exaggeration. Indeed, if anything,
the rumors were comforting. For example, one rumor predicted
(falsely) that the water supply would be restored in a very short time.
Leon Festinger organized this array of findings and used them
as the basis for a powerful theory of human motivation that he called
the theory of cognitive dissonance.3 It is a remarkably simple theory
but, as we shall see, the range of its application is enormous. Basically, cognitive dissonance is a state of tension that occurs whenever
an individual simultaneously holds two cognitions (ideas, attitudes,
beliefs, opinions) that are psychologically inconsistent. Stated differently, two cognitions are dissonant if, when considered alone, the opposite of one follows from the other. Because the occurrence of
cognitive dissonance is unpleasant, people are motivated to reduce it;
this is roughly analogous to the processes involved in the induction
and reduction of such drives as hunger or thirst—except that, here,
the driving force arises from cognitive discomfort rather than physiological needs. To hold two ideas that contradict each other is to flirt
with absurdity, and—as Albert Camus, the existentialist philosopher,
has observed—humans are creatures who spend their lives trying to
convince themselves that their existence is not absurd.
How do we convince ourselves that our lives are not absurd; that
is, how do we reduce cognitive dissonance? By changing one or both
cognitions in such a way as to render them more compatible (more
consonant) with each other or by adding more cognitions that help
bridge the gap between the original cognitions.*
*In the preceding chapter, we learned that beliefs and attitudes are not always
good predictors of a person’s behavior—that is to say, behavior is not always consistent with relevant beliefs and attitudes. Here we are making the point that most people feel that their beliefs and attitudes should be consistent with their behavior and,
therefore, are motivated to justify their behavior when it is inconsistent with a preexisting attitude.
Let me cite an example that is, alas, all too familiar to many people. Suppose a person smokes cigarettes and then reads a report of
the medical evidence linking cigarette smoking to lung cancer and
other diseases. The smoker experiences dissonance. The cognition “I
smoke cigarettes” is dissonant with the cognition “cigarette smoking
produces cancer.” Clearly, the most efficient way for this person to
reduce dissonance in such a situation is to give up smoking. The cognition “cigarette smoking produces cancer” is consonant with the
cognition “I do not smoke.”
But, for most people, it is not easy to give up smoking. Imagine
Sally, a young woman who tried to stop smoking but failed. What
will she do to reduce dissonance? In all probability, she will try to
work on the other cognition: “Cigarette smoking produces cancer.”
Sally might attempt to make light of evidence linking cigarette
smoking to cancer. For example, she might try to convince herself
that the experimental evidence is inconclusive. In addition, she
might seek out intelligent people who smoke and, by so doing, convince herself that if Debbie, Nicole, and Larry smoke, it can’t be all
that dangerous. Sally might switch to a filter-tipped brand and delude herself into believing that the filter traps the cancer-producing
materials. Finally, she might add cognitions that are consonant with
smoking in an attempt to make the behavior less absurd in spite of
its danger. Thus, Sally might enhance the value placed on smoking;
that is, she might come to believe smoking is an important and
highly enjoyable activity that is essential for relaxation: “I may lead a
shorter life, but it will be a more enjoyable one.” Similarly, she might
try to make a virtue out of smoking by developing a romantic, devil-may-care self-image, flouting danger by smoking cigarettes. All such
behavior reduces dissonance by reducing the absurdity of the notion
of going out of one’s way to contract cancer. Sally has justified her
behavior by cognitively minimizing the danger or by exaggerating
the importance of the action. In effect, she has succeeded either in
constructing a new attitude or in changing an existing attitude.
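For readers who find a schematic helpful, the two reduction routes Sally illustrates (revising a cognition versus adding consonant cognitions) can be sketched as a toy model. The short Python sketch below is purely illustrative and not part of Aronson's text; the Person class, the hand-flagged pair of inconsistent cognitions, and the sample justifications are assumptions made up for this example.

```python
# Toy model only: "dissonance" here is just a count of hand-flagged inconsistent
# pairs of cognitions; in the theory it is a felt state of tension, not a number.
from dataclasses import dataclass, field

@dataclass
class Person:
    cognitions: set = field(default_factory=set)
    dissonant_pairs: set = field(default_factory=set)  # pairs treated as inconsistent

    def dissonance(self) -> int:
        # Count inconsistent pairs the person currently holds both halves of.
        return sum(1 for a, b in self.dissonant_pairs
                   if a in self.cognitions and b in self.cognitions)

    def change_cognition(self, old: str, new: str) -> None:
        # Route 1: revise one of the clashing cognitions.
        self.cognitions.discard(old)
        self.cognitions.add(new)

    def add_consonant_cognition(self, justification: str) -> None:
        # Route 2: add a bridging cognition; in the theory this dilutes the felt
        # inconsistency even though the clashing pair is still held.
        self.cognitions.add(justification)

sally = Person(
    cognitions={"I smoke", "smoking causes cancer"},
    dissonant_pairs={("I smoke", "smoking causes cancer")},
)
print(sally.dissonance())  # 1: the two cognitions clash
sally.add_consonant_cognition("smoking keeps me relaxed")          # Route 2 (Sally's move)
sally.change_cognition("smoking causes cancer",
                       "the evidence on smoking is inconclusive")  # Route 1
print(sally.dissonance())  # 0: the flagged pair is no longer held together
```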
Indeed, shortly after the publicity surrounding the original Surgeon General’s report in 1964, a survey was conducted4 to assess people’s reactions to the new evidence that smoking helps cause cancer.
Nonsmokers overwhelmingly believed the health report, only 10 percent of those queried saying that the link between smoking and cancer had not been proven to exist; these respondents had no motivation
to disbelieve the report. The smokers faced a more difficult quandary.
Smoking is a difficult habit to break; only 9 percent of the smokers
had been able to quit. To justify continuing the activity, smokers
tended to debunk the report. They were more likely to deny the evidence: 40 percent of the heavy smokers said a link had not been
proven to exist. They were also more apt to employ rationalizations:
More than twice as many smokers as nonsmokers agreed that there
are many hazards in life and that both smokers and nonsmokers get
cancer.
Smokers who are painfully aware of the health hazards associated with smoking may reduce dissonance in yet another way—by
minimizing the extent of their habit. One study5 found that of 155
smokers who smoked between one and two packs of cigarettes a day,
60 percent considered themselves moderate smokers; the remaining
40 percent considered themselves heavy smokers. How can we explain these different self-perceptions? Not surprisingly, those who labeled themselves as moderates were more aware of the pathological
long-term effects of smoking than were those who labeled themselves as heavy smokers. That is, these particular smokers apparently
reduced dissonance by convincing themselves that smoking one or
two packs a day isn’t really all that much. Moderate and heavy are,
after all, subjective terms.
Imagine a teenage girl who has not yet begun to smoke. After
reading the Surgeon General’s report, is she apt to believe it? Like
most of the nonsmokers in the survey, she should. The evidence is
objectively sound, the source is expert and trustworthy, and there is
no reason not to believe the report. And this is the crux of the matter. Earlier in this book, I made the point that people strive to be
right, and that values and beliefs become internalized when they appear to be correct. It is this striving to be right that motivates people
to pay close attention to what other people are doing and to heed the
advice of expert, trustworthy communicators. This is extremely rational behavior. There are forces, however, that can work against this
rational behavior. The theory of cognitive dissonance does not picture people as rational beings; rather, it pictures them as rationalizing beings. According to the underlying assumptions of the theory,
we humans are motivated not so much to be right as to believe we
are right (and wise, and decent, and good).
Sometimes, our motivation to be right and our motivation to believe we are right work in the same direction. This is what is happening with the young woman who doesn’t smoke and therefore finds it
easy to accept the notion that smoking causes lung cancer. This
would also be true for a smoker who encounters the evidence linking cigarette smoking to lung cancer and then succeeds in giving up
cigarettes. Occasionally, however, the need to reduce dissonance (the
need to convince oneself that one is right or good) leads to behavior
that is maladaptive and therefore irrational. For example, many people have tried to quit smoking and failed. What do these people do?
It would be erroneous to assume that they simply swallow hard and
prepare to die. They don’t. Instead, they try to reduce their dissonance in a different way: namely, by convincing themselves that
smoking isn’t as bad as they thought. Thus, Rick Gibbons and his
colleagues6 recently found that heavy smokers who attended a smoking cessation clinic, quit smoking for a while and then relapsed into
heavy smoking again, subsequently succeeded in lowering their perception of the dangers of smoking.
Why might this change of heart occur? If a person makes a serious commitment to a course of action, such as quitting smoking,
and then fails to keep that commitment, his or her self-concept as
a strong, self-controlled individual is threatened. This, of course,
arouses dissonance. One way to reduce this dissonance and regain a
healthy sense of self—if not a healthy set of lungs—is to trivialize the
commitment by perceiving smoking as less dangerous. A more general study that tracked the progress of 135 students who made New
Year’s resolutions supports this observation.7 Individuals who broke
their resolutions—such as to quit smoking, lose weight, or exercise
more—initially felt bad about themselves for failing but, after a short
time, succeeded in downplaying the importance of the resolution.
Ironically, making light of a commitment they failed to keep serves
to restore their self-esteem but it also makes self-defeat a near certainty in the future. In the short run, they are able to feel better about
themselves; in the long run, however, they have drastically reduced
the chances that they’ll ever succeed in achieving their goals.
Is this the only way to reduce the dissonance associated with failing to achieve a goal? No. An alternative response—and perhaps a less
maladaptive one—would be to lower one’s expectations for success.
For example, a person who has been unable to give up smoking completely, but who has cut down on the number of cigarettes smoked
daily, could interpret this outcome as a partial success rather than as
a complete failure. This course of action would soften the blow to his
or her self-esteem for having failed while still holding out the possibility of achieving success in future efforts to quit smoking altogether.
Let’s stay with the topic of cigarette smoking for a moment and
consider an extreme example: Suppose you are one of the top executives of a major cigarette company—and therefore in a situation of
maximum commitment to the idea of cigarette smoking. Your job
consists of producing, advertising, and selling cigarettes to millions
of people. If it is true that cigarette smoking causes cancer, then, in
a sense, you are partially responsible for the illness and death of a
great many people. This would produce a painful degree of dissonance: Your cognition “I am a decent, kind human being” would be
dissonant with your cognition “I am contributing to the early death
of thousands of people.” To reduce this dissonance, you must try to
convince yourself that cigarette smoking is not harmful; this would
involve a refutation of the mountain of evidence suggesting a causal
link between cigarettes and cancer. Moreover, to convince yourself
further that you are a good, moral person, you might go so far as to
demonstrate how much you disbelieve the evidence by smoking a
great deal yourself. If your need is great enough, you might even succeed in convincing yourself that cigarettes are good for people. Thus,
to see yourself as wise, good, and right, you take action that is stupid
and detrimental to your health.
This analysis is so fantastic that it’s almost beyond belief—
almost. In 1994, Congress conducted hearings on the dangers of
smoking. At these hearings, the top executives of most of the major
tobacco companies admitted they were smokers and actually argued
that cigarettes are no more harmful or addictive than playing video
games or eating Twinkies! In a subsequent hearing in 1997, James J.
Morgan, president and chief executive officer of the leading U.S. cigarette maker, said that cigarettes are not pharmacologically addictive.
“Look, I like gummy bears and I eat gummy bears. And I don’t like
it when I don’t eat gummy bears,” Morgan said. “But I’m certainly
not addicted to them.”8 This kind of public denial is nothing new, of
course. Forty years ago, the following news item was released by the
Washington Post’s News Service.
Jack Landry pulls what must be his 30th Marlboro of the day
out of one of the two packs on his desk, lights a match to it and
tells how he doesn’t believe all those reports about smoking and
cancer and emphysema. He has just begun to market yet another cigarette for Philip Morris U.S.A. and is brimming over
with satisfaction over its prospects. But how does he square
with his conscience the spending of $10 million in these United
States over the next year to lure people into smoking his new
brand? “It’s not a matter of that,” says Landry, Philip Morris’s
vice president for marketing. “Nearly half the adults in this
country smoke. It’s a basic commodity for them. I’m serving a
need. . . . There are studies by pretty eminent medical and scientific authorities, one on a theory of stress, on how a heck of
a lot of people, if they didn’t have cigarette smoking to relieve
stress, would be one hell of a lot worse off. And there are plenty
of valid studies that indicate cigarette smoking and all those
diseases are not related.” His satisfaction, says Landry, comes
from being very good at his job in a very competitive business,
and he will point out that Philip Morris and its big-selling
Marlboro has just passed American Tobacco as the No. 2 cigarette seller in America (R.J. Reynolds is still No. 1). Why a new
cigarette now? Because it is there to be sold, says Landry. And
therein lies the inspiration of the marketing of a new American
cigarette, which Landry confidently predicts will have a 1 percent share of the American market within 12 months. That 1
percent will equal about five billion cigarettes and a healthy
profit for Philip Morris U.S.A.9
It is possible that James Morgan and Jack Landry are simply lying.
(Fancy that; executive officers of a company actually lying!) But it
may be a bit more complicated than that; my guess is that, over the
years, they may have succeeded in deceiving themselves.10 If I am
deeply committed to an attitude or an idea, I will have a strong tendency to doubt the veracity of any opposing point of view. To mention one chilling example of this process, consider the Hale-Bopp
suicides. In 1997, 39 members of Heaven’s Gate, an obscure religious
cult, were found dead at a luxury estate in Rancho Santa Fe, California—participants in a mass suicide. Several weeks earlier, a few
members of the cult had walked into a specialty store and purchased
an expensive high-powered telescope so that they might get a clearer
view of the Hale-Bopp comet and the spaceship they fervently believed was traveling behind it. Their belief was that, when the comet
got close to Earth, it was time to rid themselves of their “Earthly
containers” (their bodies) by killing themselves so that their essence
could be picked up by the spaceship. A few days after buying the telescope, they brought it back to the store and politely asked for a refund. When the manager asked why, they complained that the
telescope was defective: “We found the comet all right, but we can’t
find the spaceship that’s following it.” Needless to say, there was
no spaceship. But if you are so convinced of the existence of a spaceship that you’re ready to die for a ride on it, and yet your telescope
doesn’t reveal it, then, clearly, there must be something wrong with
your telescope!
Juicy anecdotes are suggestive. But they do not constitute scientific evidence and, therefore, are not convincing in themselves.
Again, taking the cigarette example, it is always possible that Mr.
Morgan and Mr. Landry know that cigarettes are harmful and are
simply being cynical. Likewise, it is possible that Landry always believed cigarettes were good for people even before he began to peddle them. Obviously, if either of these possibilities were true, his
excitement about the benefits of cigarette smoking could hardly be
attributed to dissonance. Much more convincing would be a demonstration of a clear case of attitudinal distortion in a unique event.
Such a demonstration was provided back in the 1950s by (of all
things) a football game in the Ivy League. An important game between Princeton and Dartmouth, the contest was billed as a grudge
match, and this soon became evident on the field: The game is remembered as the roughest and dirtiest in the history of either school.
Princeton’s star player was an All-American running back named
Dick Kazmaier; as the game progressed, it became increasingly clear
that the Dartmouth players were out to get him. Whenever he carried the ball, he was gang-tackled, piled on, and mauled. He was finally forced to leave the game with a broken nose. Meanwhile, the
Princeton team was not exactly inactive: Soon after Kazmaier’s injury, a Dartmouth player was carried off the field with a broken leg.
Several fistfights broke out on the field in the course of the game,
and many injuries were suffered on both sides.
Sometime after the game, a couple of psychologists—Albert
Hastorf of Dartmouth and Hadley Cantril of Princeton11—visited
both campuses and showed films of the game to a number of students on each campus. The students were instructed to be completely
objective and, while watching the film, to take notes of each infraction of the rules, how it started, and who was responsible. As you
might imagine, there was a huge difference in the way this game was
viewed by the students at each university. There was a strong tendency for the students to see their own fellow students as victims of
illegal infractions rather than as perpetrators of such acts of aggression. Moreover, this was no minor distortion: It was found that
Princeton students saw fully twice as many violations on the part of
the Dartmouth players as the Dartmouth students saw. Again, people are not passive receptacles for the deposition of information. The
manner in which they view and interpret information depends on
how deeply they are committed to a particular belief or course of action. Individuals will distort the objective world to reduce their dissonance. The manner in which they will distort and the intensity of
their distortion are highly predictable.
A few years later, Lenny Bruce, a perceptive comedian and social
commentator (who almost certainly never read about cognitive dissonance theory), had the following insight into the 1960 presidential
election campaign between Richard Nixon and John Kennedy.
I would be with a bunch of Kennedy fans watching the debate
and their comment would be, “He’s really slaughtering Nixon.”
Then we would all go to another apartment, and the Nixon fans
would say, “How do you like the shellacking he gave Kennedy?”
And then I realized that each group loved their candidate so
that a guy would have to be this blatant—he would have to look
into the camera and say: “I am a thief, a crook, do you hear me?
I am the worst choice you could ever make for the Presidency!”
And even then his following would say, “Now there’s an honest
man for you. It takes a big guy to admit that. There’s the kind
of guy we need for President.”12
People don’t like to see or hear things that conflict with their
deeply held beliefs or wishes. An ancient response to such bad news
was literally to kill the messenger. A modern-day figurative version
of killing the messenger is to blame the media for the presentation
of material that produces the pain of dissonance. For example, when
Ronald Reagan was running for president in 1980, Time published
an analysis of his campaign. Subsequent angry letters to the editor
vividly illustrated the widely divergent responses of his supporters,
on the one hand, and his detractors, on the other. Consider the following two letters:13
Lawrence Barrett’s pre-election piece on Candidate Ronald
Reagan [October 20] was a slick hatchet job, and you know it.
You ought to be ashamed of yourselves for printing it disguised
as an objective look at the man.
Your story on “The Real Ronald Reagan” did it. Why didn’t you
just editorially endorse him? Barrett glosses over Reagan’s fatal
flaws so handily that the “real” Ronald Reagan came across as
the answer to all our problems.
The diversity of perception reflected in these letters is not unique
to the 1980 campaign. It happened with Clinton supporters and detractors. It happened with G. W. Bush supporters and detractors.
And it happened with Obama supporters and detractors. Indeed, it
happens every 4 years. During the next presidential election, check
out the letters to the editor of your favorite news magazine following a piece on one of the leading candidates. You will find a similar
array of divergent perceptions.
Dissonance Reduction and Rational Behavior
I have referred to dissonance-reducing behavior as “irrational.” By
this I mean it is often maladaptive in that it can prevent people from
learning important facts or from finding real solutions to their problems. On the other hand, it does serve a purpose: Dissonance-reducing behavior is ego-defensive behavior; by reducing dissonance,
we maintain a positive image of ourselves—an image that depicts us
as good, or smart, or worthwhile. Again, although this ego-defensive
behavior can be considered useful, it can have disastrous consequences. In the laboratory, the irrationality of dissonance-reducing
behavior has been amply demonstrated by Edward Jones and Rika
Kohler.14 These investigators selected individuals who were deeply
committed to a position on the issue of racial segregation; some of
the participants were in favor of segregation, and others were opposed to it. These individuals were allowed to read a series of arguments on both sides of the issue. Some of these arguments were
extremely sensible and plausible, and others were so implausible that
they bordered on the ridiculous. Jones and Kohler were interested
in determining which of the arguments people would remember
best. If people were purely rational, we would expect them to remember the plausible arguments best and the implausible arguments
least; why in the world would people want to keep implausible arguments in their heads? Accordingly, the rational person would rehearse and remember all the arguments that made sense and would
slough off all the ridiculous arguments. What does the theory of cognitive dissonance predict? It is comforting to have all the wise people on your side and all the fools on the other side: A silly argument
in favor of one’s own position arouses dissonance because it raises
doubts about the wisdom of that position or the intelligence of
the people who agree with it. Likewise, a plausible argument on the
other side of the issue also arouses dissonance because it raises the
possibility that the other side is right. Because these arguments
arouse dissonance, one tries not to think about them; that is, one
might not learn them very well, or one might simply forget about
them. This is exactly what Jones and Kohler found. Their participants did not remember in a rational-functional manner. They
tended to remember the plausible arguments agreeing with their own
position and the implausible arguments agreeing with the opposing
position.
In a conceptually similar experiment, Charles Lord, Lee Ross,
and Mark Lepper15 showed that we do not process information in an
unbiased manner. Rather, we distort it in a way that fits our preconceived notions. These investigators selected several Stanford University students who opposed capital punishment and several who
favored it. They showed the students two research articles that discussed whether the death penalty tends to deter violent crimes. One
study confirmed and the other study disconfirmed the existing beliefs
of the students. If these students were perfectly rational, they might
conclude that the issue is a complex one, and accordingly, the two
groups of students might move closer to each other in their beliefs
about capital punishment. On the other hand, dissonance theory
predicts that they would distort the two articles, clasping the confirming article to their bosoms and hailing it as clearly supportive of their
belief while finding methodological or conceptual flaws in the disconfirming article and refusing to be influenced by it. This is precisely
what happened. Indeed, rather than coming closer in their beliefs
after being exposed to this two-sided presentation, the two groups of
students disagreed more sharply than they did beforehand. This
process probably accounts for the fact that, on issues like politics and
religion, people who are deeply committed will almost never come to
see things our way, no matter how powerful and balanced our arguments are.
Those of us who have worked extensively with the theory of
cognitive dissonance do not deny that humans are capable of rational behavior. The theory merely suggests that a good deal of our
behavior is not rational—although, from inside, it may seem very
sensible indeed. If you ask the hypnotized young man why he wore
a raincoat on a sunny day, he’ll come up with an answer he feels is
sensible; if you ask the vice president of Philip Morris why he
smokes, he’ll give you a reason that makes sense to him—he’ll tell
you how good it is for everyone’s health; if you ask Jones and
Kohler’s participants why they remembered one particular set of arguments rather than others, they’ll insist that the arguments they
remembered were a fair and balanced sample of those they read.
Similarly, the students in the experiment on capital punishment will
insist that the evidence against their position is flawed. It is important to note that the world is not divided into rational people on the
one side and dissonance reducers on the other. People are not all the
same, and some people are able to tolerate dissonance better than
others, but we are all capable of rational behavior and we are all capable of dissonance-reducing behavior, depending on the circumstances. Occasionally, the same person can manifest both behaviors
in rapid succession.
The rationality and irrationality of human behavior will be illustrated over and over again during the next several pages as we discuss some of the wide ramifications of our need for self-justification.
These ramifications run virtually the entire gamut of human behavior, but for the sake of conserving time and space, I will sample only
a few of these. Let us begin with the decision-making process, a
process that shows humans at their most rational and their most irrational in quick succession.
Dissonance as a Consequence of Making a Decision
Suppose you are about to make a decision—about the purchase of
a new car, for example. This involves a significant amount of
money, so it is, by definition, an important decision. After looking
around, you are torn between getting a sports utility vehicle and
purchasing a compact model. There are various advantages and disadvantages to each: The SUV would be convenient; you can haul
things in it, sleep in it during long trips, and it has plenty of power,
but it gets atrocious mileage and is not easy to park. The compact
model is less roomy, and you are concerned about its safety, but it
is less expensive to buy and operate, it is more fun to drive, and
you’ve heard it has a pretty good repair record. My guess is that, before you make the decision, you will seek as much information as
you can. Chances are you will go on-line and sample reviews of the
various models. You might even read Consumer Reports to find out what
this expert, unbiased source has to say. Perhaps you’ll confer with
friends who own an SUV or a compact car. You’ll probably visit the
dealers to test-drive the vehicles to see how each one feels. All of
this predecision behavior is perfectly rational. Let us assume you
make a decision—you buy the compact car. What happens next?
Your behavior will begin to change: No longer will you seek objective information about all makes of cars. Chances are you may
begin to spend more time talking with the owners of small cars.
You will begin to talk about the number of miles to the gallon as
though it were the most important thing in the world. My guess is
that you will not be prone to spend much time thinking about the
fact that you can’t sleep in your compact. Similarly, your mind will
skim lightly over the fact that driving your new car can be particularly hazardous in a collision and that the brakes are not very responsive, although your failure to attend to these shortcomings
could conceivably cost you your life.
How does this sort of thing come about? Following a decision—
especially a difficult one, or one that involves a significant amount
of time, effort, or money—people almost always experience dissonance. This is so because the chosen alternative is seldom entirely
positive and the rejected alternatives are seldom entirely negative. In
this example, your cognition that you bought a compact is dissonant
with your cognition about any deficiencies the car may have. Similarly, all the positive aspects of the other cars that you considered
buying but did not purchase are dissonant with your cognition
that you did not buy one of them. A good way to reduce such dissonance is to seek out exclusively positive information about the
car you chose and avoid negative information about it. One source
of safe information is advertisements; it is a safe bet that an ad
will not run down its own product. Accordingly, one might predict that a person who had recently purchased a new car will begin
to read advertisements selectively, reading more ads about his or
her car after the purchase than people who have not recently purchased the same model. Moreover, owners of new cars will tend
to steer clear of ads for other makes of cars. This is exactly what
Danuta Ehrlich and her colleagues16 found in a well-known survey of advertising readership. In short, Ehrlich’s data suggest that,
after making decisions, people try to gain reassurance that their
decisions were wise by seeking information that is certain to be
reassuring.
People do not always need help from Madison Avenue to gain
reassurance; they can do a pretty good job of reassuring themselves.
An experiment by Jack Brehm17 demonstrates how this can come
about. Posing as a marketing researcher, Brehm showed several
women eight different appliances (a toaster, an electric coffee
maker, a sandwich grill, and the like) and asked that they rate them
in terms of how attractive each appliance was. As a reward, each
woman was told she could have one of the appliances as a gift—
and she was given a choice between two of the products she had
rated as being equally attractive. After she chose one, it was
wrapped up and given to her. Several minutes later, she was asked
to rate the products again. It was found that after receiving the appliance of her choice, each woman rated the attractiveness of that
appliance somewhat higher and decreased the rating of the appliance she had a chance to own but rejected. Again, making a deci-
Self-Justification
193
sion produces dissonance: Cognitions about any negative aspects of
the preferred object are dissonant with having chosen it, and cognitions about the positive aspects of the unchosen object are dissonant with not having chosen it. To reduce dissonance, people
cognitively spread apart the alternatives. That is, after making their
decision, the women in Brehm’s study emphasized the positive attributes of the appliance they decided to own while deemphasizing
its negative attributes; for the appliance they decided not to own,
they emphasized its negative attributes and deemphasized its positive attributes.
The tendency to justify one’s choices is not limited to consumer
decisions. In fact, research has demonstrated that similar processes
can even affect our romantic relationships and our willingness to
consider becoming involved with alternative partners. In a study
conducted by Dennis Johnson and Caryl Rusbult,18 college students were asked to evaluate the probable success of a new computer dating service on campus. Participants were shown pictures
of individuals of the opposite sex, who they believed were applicants to the dating service. They were then asked to rate the attractiveness of these applicants, as well as how much they believed they
would enjoy a potential date with him or her—a possibility that
was presented in a realistic manner. The results of this study were
remarkably similar to Brehm’s findings about appliances: The more
heavily committed the students were to their current romantic
partners, the more negative were their ratings of the attractiveness
of alternative partners presented in the study. In a subsequent experiment, Jeffry Simpson and his colleagues19 also found that those
in committed relationships saw opposite-sex persons as less physically and sexually attractive than did those who weren’t in committed relationships. In addition, Simpson and his co-workers showed
that this effect holds only for “available others”; when presented
with individuals who were somewhat older or who were of the same
sex, people in committed relationships did not derogate their attractiveness. In short: no threat, no dissonance; no dissonance, no
derogation.
In sum, whether we are talking about appliances or romantic
partners, once a firm commitment has been made, people tend to
focus on the positive aspects of their choices and to downplay the attractive qualities of the unchosen alternatives.
The Consequences of Decisions: Some Historical Examples
Although some of the material discussed above is benign enough, it
is impossible to overstate the potential dangers posed by our susceptibility to these tendencies. When I mentioned that ignoring potential danger to reduce dissonance could conceivably lead to a person’s
death, I meant that literally. Suppose a madman has taken over your
country and has decided to eradicate all members of your religious
group. But you don’t know that for sure. What you do know is that
your country is being occupied, that the leader of the occupation
forces does not like your religious group, and that occasionally members of your faith are forced to move from their homes and are kept
in detention camps. What do you do? You could try to flee from your
country; you could try to pass as a member of a different religious
group; or you could sit tight and hope for the best. Each of these options is extremely dangerous: It is difficult to escape or to pass and
go undetected; and if you are caught trying to flee or disguising your
identity, the penalty is immediate execution. On the other hand, deciding to sit tight could be a disastrous decision if it turns out that
your religious group is being systematically annihilated. Let us suppose you decide not to take action. That is, you commit yourself to
sit tight—turning your back on opportunities to try either to escape
or to pass. Such an important decision naturally produces a great deal
of dissonance. To reduce dissonance, you convince yourself that you
made a wise decision—that is, you convince yourself that, although
people of your religious sect are made to move and are being treated
unfairly, they are not being killed unless they break the law. This position is not difficult to maintain because there is no unambiguous
evidence to the contrary.
Suppose that, months later, a respected man from your town tells
you that while hiding in the forest, he witnessed soldiers butchering
all the men, women, and children who had recently been deported
from the town. I would predict that you would try to dismiss this information as untrue—that you would attempt to convince yourself
that the reporter was lying or hallucinating. If you had listened to the
man who tried to warn you, you might have escaped. Instead, you
and your family are slaughtered.
Fantastic? Impossible? How could anyone not take the respected
man seriously? The events described above are an accurate account
of what happened in 1944 to the Jews in Sighet, Hungary.20
The processes of cognitive distortion and selective exposure to
information were important factors in the senseless escalation of the
war in Vietnam. In a thought-provoking analysis of the Pentagon
Papers, Ralph White shows how dissonance blinded our leaders to
information incompatible with the decisions they had already made.
As White put it, “There was a tendency, when actions were out of
line with ideas, for decision makers to align their ideas with their actions.” To take just one of many examples, the decision to continue
to escalate the bombing of North Vietnam was made at the price of
ignoring crucial evidence from the CIA and other sources that made
it clear that bombing would not break the will of the North Vietnamese people but, quite the contrary, would only strengthen their
resolve.
It is instructive, for instance, to compare [Secretary of Defense
Robert] McNamara’s highly factual evidence-oriented summary
of the case against bombing in 1966 (pages 555-63 of the Pentagon Papers) with the Joint Chiefs’ memorandum that disputed
his conclusion and called the bombing one of our two trump
cards, while it apparently ignored all of the facts that showed the
opposite. Yet it was the Joint Chiefs who prevailed.21
More recently, President George W. Bush wanted to believe that
Iraqi leader Saddam Hussein possessed weapons of mass destruction
(WMDs) that posed a threat to Americans. This led the President
and his advisors to interpret the information in CIA reports as definitive proof of Iraq’s WMDs, even though the reports were ambiguous and contradicted by other evidence. President Bush’s
interpretation provided the justification to launch a preemptive war.
He was convinced that once our troops entered Iraq they would find
these weapons.
After the invasion of Iraq, when asked “Where are the WMDs?”
administration officials said that Iraq is a big country in which the
WMDs are well hidden, but asserted that the weapons would be
found. As the months dragged on and still no WMDs were found,
the officials continued to assert that they would be uncovered. Why?
Because the administration officials were experiencing enormous
dissonance. They had to believe they would find the WMDs. Finally,
it was officially concluded that there were no such weapons, which
suggests that, at the time of our invasion, Iraq posed no immediate
threat to the United States.
Now what? American soldiers and Iraqi civilians were dying
every week, and hundreds of billions of dollars were being drained
from the U.S. treasury. How did President Bush and his staff reduce
dissonance? By adding new cognitions to justify the war. Suddenly,
we learned that the U.S. mission was to liberate the nation from a
cruel dictator and bestow upon the Iraqi people the blessings of democratic institutions. To a neutral observer, that justification was inadequate (after all, there are a great many brutal dictators in the world).
But, to President Bush and his advisors, who had been experiencing
dissonance, the justification seemed reasonable.
Several commentators have suggested that the Bush administration was dissembling; that is, that it was deliberately trying to deceive
the American people. We cannot be certain what was going on in the
President’s mind. What we do know, based on more than 50 years of
research on cognitive dissonance, is that although the President and
his advisors may not have been intentionally deceiving the American
people, it is likely that they succeeded in deceiving themselves. That
is, they may have succeeded in convincing themselves that invading
Iraq was worthwhile even in the absence of WMDs.22
How can a leader avoid falling into the self-justification trap?
Historical examples show us that the way out of this process is for a
leader to bring in skilled advisors from outside his or her inner circle because the advisors will not be caught up in the need to reduce
the dissonance created by the leader’s earlier decisions. As the historian Doris Kearns Goodwin23 points out, it was precisely for this reason that Abraham Lincoln chose a cabinet that included several
people who disagreed with his policies.
Let’s return to the Vietnam War for a moment. Why did the
Joint Chiefs make the ill-advised decision to increase the bombing—
to escalate a war that was unwinnable? They were staying the course;
justifying earlier actions with identical or even more extreme ones.
Escalation of this sort is self-perpetuating. Once a small commitment is made, it sets the stage for ever-increasing commitments. The
behavior needs to be justified, so attitudes are changed; this change
in attitudes influences future decisions and behavior. The flavor of
this kind of cognitive escalation is nicely captured in an analysis of
the Pentagon Papers by the editors of Time magazine.
Yet the bureaucracy, the Pentagon Papers indicate, always demanded new options; each option was to apply more force.
Each tightening of the screw created a position that must
be defended; once committed, the military pressure must be
maintained.24
The process underlying escalation has been explored, on a more
individual level, under controlled experimental conditions. Suppose
you would like to enlist someone’s aid in a massive undertaking, but
you know the job you have in mind for the person is so difficult, and
will require so much time and effort, that the person will surely decline. What should you do? One possibility is to get the person involved in a much smaller aspect of the job, one so easy that he or she
wouldn’t dream of turning it down. This action serves to commit the
individual to “the cause.” Once people are thus committed, the likelihood of their complying with the larger request increases. This phenomenon was demonstrated by Jonathan Freedman and Scott
Fraser.25 They attempted to induce several homeowners to put up a
huge sign in their front yards reading “Drive Carefully.” Because of
the ugliness and obtrusiveness of this sign, most residents refused to
put it up; only 17 percent complied. A different group of residents,
however, was first “softened up” by an experimenter who got them to
sign a petition favoring safe driving. Because signing a petition is an
easy thing to do, virtually all who were asked agreed to sign. A few
weeks later, a different experimenter went to each resident with the
obtrusive, ugly sign reading “Drive Carefully.” More than 55 percent
of these residents allowed the sign to be put up on their property.
Thus, when individuals commit themselves in a small way, the likelihood that they will commit themselves further in that direction is
increased. This process of using small favors to encourage people to
accede to larger requests has been dubbed the foot-in-the-door
technique. It is effective because having done the smaller favor sets
up pressure toward agreeing to do the larger favor; in effect, it provides justification in advance for complying with the large request.
Similar results were obtained by Patricia Pliner and her associates.26 These investigators found that 46 percent of their sample were
willing to make a small donation to the American Cancer Society
when they were approached directly. A similar group of people were
asked 1 day earlier to wear a lapel pin publicizing the fund-raising
drive. When approached the next day, approximately twice as many
of these people were willing to make a contribution.
Think back to Stanley Milgram’s classic experiments on obedience discussed in Chapter 2. Suppose that, at the very beginning of
the experiment, Milgram had instructed his participants to deliver a
shock of 450 volts. Do you think many people would have obeyed?
Probably not. My guess is that, in a sense, the mild shocks near the
beginning of the experiment served as a foot-in-the-door induction
to Milgram’s participants. Because the increases in shock level are
gradual, the participant is engaged in a series of self-justifications. If
you are the participant, once you have justified step one, that justification makes it easier to go to step two; once you justify step two, it
is easier to go to step three; and so on. By the time you get to 450
volts, well, heck, that’s not much different from 435 volts, is it? In
other words, once individuals start down that slippery slope of self-justification, it becomes increasingly difficult to draw a line in the
sand—because in effect, they end up asking themselves, “Why draw
the line here if I didn’t draw it 15 volts ago?”
The Importance of Irrevocability
One of the important characteristics of the examples presented
above is the relative irrevocability of the decision. This needs some
explaining: Occasionally, we make tentative decisions. For example,
if you had indicated you might buy an expensive house near San
Francisco, but the decision was not finalized, chances are you would
not expend any effort trying to convince yourself of the wisdom of
the decision. Once you had put your money down, however, and you
knew you couldn’t easily get it back, you would probably start minimizing the importance of the dampness in the basement, the cracks
in the foundation, or the fact that the house happened to be built
on the San Andreas fault. Similarly, once a European Jew had decided not to pass and had allowed himself to be identified as a Jew,
the decision was irrevocable; he could not easily pretend to be a
Gentile. By the same token, once Pentagon officials intensified the
bombing of North Vietnam, they could not undo it. And once a
homeowner had signed the petition, a commitment to safe driving
was established.
Some direct evidence for the importance of irrevocability comes
from a clever study of the cognitive gyrations of gamblers at a race
track. The race track is an ideal place to scrutinize irrevocability because once you’ve placed your bet, you can’t go back and tell the nice
man behind the window you’ve changed your mind. Robert Knox
and James Inkster27 simply intercepted people who were on their way
to place $2 bets. They had already decided on their horses and were
about to place their bets when the investigators asked them how certain they were that their horses would win. Because they were on
their way to the $2 window, their decisions were not irrevocable. The
investigators collared other bettors just as they were leaving the $2
window, after having placed their bets, and asked them how certain
they were that their horses would win. Typically, people who had just
placed their bets gave their horses a much better chance of winning
than did those who were about to place their bets. But, of course,
nothing had changed except the finality of the decision.
Moving from the racetrack to the Harvard campus, Daniel
Gilbert28 tested the irrevocability hypothesis in the context of a photography class. In this study, participants were recruited through an
advertisement for students interested in learning photography while
taking part in a psychology experiment. Students were informed that
they would shoot a roll of film and print two of the photographs.
They would rate the two photographs and then get to choose one to
keep. The other would be kept for administrative reasons. The students were randomly assigned to one of two conditions, one in which
they had the option to exchange photographs within a five-day period, and another in which their first choice was final and irrevocable. Gilbert found that prior to making the choice between the two
photographs, students liked the two photographs equally. Students
were contacted two, four, and nine days after they had made their
choice and questioned whether their feelings about the photographs
had changed.
The results of the experiment showed that the students who had
the option of exchanging photographs liked the one they finally
ended up with less than those who made the final choice on the first
day. In other words, once a decision is final, people can get busy making themselves feel good about the choice they have made. And thus, people frequently become more certain that
they have made a wise decision after there is nothing they can do
about it.
Although the irrevocability of a decision always increases dissonance and the motivation to reduce it, there are circumstances in
which irrevocability is unnecessary. Let me explain with an example.
Suppose you enter an automobile showroom intent on buying a new
car. You’ve already priced the car you want at several dealers; you
know you can purchase it for about $19,300. Lo and behold, the
salesman tells you he can sell you one for $18,942. Excited by the
bargain, you agree to the deal and write out a check for the down
payment. While the salesman takes your check to the sales manager
to consummate the deal, you rub your hands in glee as you imagine
yourself driving home in your shiny new car. But alas, 10 minutes
later, the salesman returns with a forlorn look on his face; it seems he
made a calculation error, and the sales manager caught it. The price
of the car is actually $19,384. You can get it cheaper elsewhere; moreover, the decision to buy is not irrevocable. And yet, far more people
in this situation will go ahead with the deal than if the original asking price had been $19,384—even though the reason for purchasing
the car from this dealer (the bargain price) no longer exists. Indeed,
Robert Cialdini, 29 a social psychologist who temporarily joined the
sales force of an automobile dealer, discovered that the strategy described above is a common and successful ploy called lowballing, or
throwing the customer a lowball.
What is going on in this situation? There are at least three important things to notice. First, while the customer’s decision to buy
is certainly reversible, there is a commitment emphasized by the act
of signing a check for a down payment. Second, this commitment
triggered the anticipation of a pleasant or interesting experience:
driving out with a new car. To have the anticipated event thwarted
(by not going ahead with the deal) would have produced dissonance
and disappointment. Third, although the final price is substantially
higher than the salesman said it would be, it is only slightly higher
than the price somewhere else. Under these circumstances, the customer in effect says, “Oh, what the hell. I’m already here; I’ve already
filled out the forms—why wait?” Clearly, such a ploy would not be
effective if the consequences were more serious, as in matters of
life and death.
The Decision to Behave Immorally How can an honest
person become corrupt? Conversely, how can we get a person to be
more honest? One way is through the dissonance that results from
making a difficult decision. Suppose you are a college student enrolled in a biology course. Your grade will hinge on the final exam
you are now taking. The key question on the exam involves some
material you know fairly well—but, because of anxiety, you draw a
blank. You are sitting there in a nervous sweat. You look up, and lo
and behold, you happen to be sitting behind a woman who is the
smartest person in the class (who also happens, fortunately, to be the
person with the most legible handwriting in the class). You glance
down and notice she is just completing her answer to the crucial
question. You know you could easily read her answer if you chose to.
What do you do? Your conscience tells you it’s wrong to cheat—and
yet, if you don’t cheat, you are certain to get a poor grade. You wrestle with your conscience. Regardless of whether you decide to cheat
or not to cheat, you are doomed to experience dissonance. If you
cheat, your cognition “I am a decent moral person” is dissonant with
your cognition “I have just committed an immoral act.” If you decide to resist temptation, your cognition “I want to get a good grade”
is dissonant with your cognition “I could have acted in a way that
would have ensured a good grade, but I chose not to.”
Suppose that, after a difficult struggle, you decide to cheat. How
do you reduce the dissonance? Before you read on, think about it for
a moment. One way to reduce dissonance is to minimize the negative aspects of the action you have chosen (and to maximize the positive aspects)—much the same way the women did after choosing an
appliance in Jack Brehm’s experiment. In this instance, an efficacious
path of dissonance reduction would entail a change in your attitude
about cheating. In short, you will adopt a more lenient attitude. Your
reasoning might go something like this: “Cheating isn’t so bad under
some circumstances. As long as nobody gets hurt, it’s really not very
immoral. Anybody would do it. Therefore, it’s a part of human nature—so how could it be bad? Since it is only human, those who get
caught cheating should not be severely punished but should be
treated with understanding.”
Suppose that, after a difficult struggle, you decide not to cheat.
How would you reduce dissonance? Once again, you could change
your attitude about the morality of the act—but in the opposite direction. That is, to justify giving up a good grade, you must convince yourself that cheating is a heinous sin, one of the lowest things a person
can do, and that cheaters should be found out and severely punished.
The interesting and important thing to remember here is that
two people acting in the two different ways described above could
have started out with almost identical attitudes. Their decisions
might have been a hairbreadth apart: One came within an ace of resisting but decided to cheat, while the other came within an ace of
cheating but decided to resist. Once they have made their decisions,
however, their attitudes toward cheating will diverge sharply as a
consequence of their decisions.
These speculations were put to the test by Judson Mills 30 in an
experiment with sixth-graders. Mills first measured their attitudes toward cheating. He then had them participate in a competitive exam
with prizes being offered to the winners. The situation was arranged
so that it was almost impossible to win without cheating; also, it was
easy for the children to cheat, thinking they would not be detected.
As one might expect, some of the students cheated and others did not.
The next day, the sixth-graders were again asked to indicate how they
felt about cheating. In general, those children who had cheated became more lenient toward cheating, and those who resisted the temptation to cheat adopted a harsher attitude toward cheating.
The data from Mills’s experiment are provocative indeed. One
thing they suggest is that the most zealous opponents of a given position are not those who have always been distant from that position.
For example, one might hazard a guess that the people who are most
angry at the apparent sexual freedom associated with the current generation of young people may not be those who have never been
tempted to engage in casual sexual activity themselves. Indeed, Mills’s
data suggest the possibility that the people who have the strongest
need to crack down hard on this sort of behavior are those who have
been sorely tempted, who came dangerously close to giving in to this
temptation, but who finally resisted. People who almost decided to live
in glass houses are frequently the ones most prone to throw stones.
By the same token, it would follow that those individuals who fear
that they may be sexually attracted to members of their own sex might
be among those most prone to develop antigay attitudes. In an interesting experiment, Henry Adams and his colleagues31 showed a group
of men a series of sexually explicit erotic videotapes consisting of heterosexual, male homosexual, and lesbian encounters while measuring
their sexual arousal (actual changes in their penile circumference). Although almost all of the men showed increases in sexual arousal while
watching the heterosexual and lesbian videos, it was the men with the
most negative attitudes toward male homosexuals who were the most
aroused by the videos depicting male homosexual lovemaking.
Early in this chapter, I mentioned that the desire for self-justification is an important reason why people who are strongly committed to an attitude on an issue tend to resist any direct attempts to
change that attitude. In effect, such people are invulnerable to the
propaganda or education in question. We can now see that the same
mechanism that enables a person to cling to an attitude can induce that
individual to change an attitude. It depends on which course of action
will serve most to reduce dissonance under the circumstances. A person who understands the theory can set up the proper conditions to
induce attitude change in other people by making them vulnerable to
certain kinds of beliefs. For example, if a modern Machiavelli were advising a contemporary ruler, he might suggest the following strategies
based on the theory and data on the consequences of decisions:
1. If you want people to form more positive attitudes toward an
object, get them to commit themselves to owning that object.
2. If you want people to soften their moral attitudes toward some
misdeed, tempt them so that they perform that deed; conversely, if you want people to harden their moral attitudes toward a misdeed, tempt them—but not enough to induce them
to commit the deed.
The Psychology of Inadequate Justification
Attitude change as a means of reducing dissonance is not, of course,
limited to postdecision situations. It can occur in countless other
contexts, including every time a person says something he or she
doesn’t believe or does something stupid or immoral. The effects can
be extremely powerful. Let us look at some of them.
In a complex society, we occasionally find ourselves saying or
doing things we don’t completely believe. Does this always lead to
attitude change? No. To illustrate, I will choose a simple example. Joe
enters the office and sees that his law partner, Joyce, has hung a perfectly atrocious painting on the wall of the office they share. He is
about to tell her how awful he thinks it is when she says proudly,
“How do you like the painting? I did it myself—you know, in the art
class I’m taking at night.”
“Very nice, Joyce,” Joe answers. Theoretically, Joe’s cognition “I
am a truthful person” is dissonant with the cognition “I said that
painting was nice, although it really is disastrous.” Whatever dissonance might be aroused by this inconsistency can easily and quickly
be reduced by Joe’s cognition that it is important not to hurt other
people: “I lied so as not to hurt Joyce; why should I tell her it’s an
ugly painting? It serves no useful purpose.” This is an effective way
of reducing dissonance because it completely justifies Joe’s action. In
effect, the justification is situation-determined. I will call this external justification.
But what happens if there is not ample justification in the situation itself? For example, imagine that Joe, who is politically conservative, finds himself at a cocktail party with many people he doesn’t
know very well. The conversation turns to politics. The people are
talking with horror about the fact that the United States seems to be
escalating its friendly overtures toward Cuba. Joe’s belief is a complicated one; he has mixed feelings about the topic, but generally he is
opposed to our forming an alliance with the Cuban dictatorship because he feels it is an evil regime and we should not compromise with
evil. Partly because Joe’s companions are sounding so pious and
partly as a lark, he gradually finds himself taking a much more
liberal-radical position than the one he really holds. As a matter of
fact, Joe even goes so far as to assert that Castro is an extraordinarily gifted leader and that the Cuban people are better off with communism than they’ve been in hundreds of years. Somebody counters
Joe’s argument by talking about the thousands of people that Castro
is alleged to have murdered or imprisoned to achieve a unified government. In the heat of the argument, Joe replies that those figures
are grossly exaggerated. Quite a performance for a man who does, in
fact, believe that Castro killed thousands of innocent people during
his rise to power.
When Joe awakens the next morning and thinks back on the
previous evening’s events, he gasps in horror. “Oh, my God, what
have I done?” he says. He is intensely uncomfortable. Put another
way, he is experiencing a great deal of dissonance. His cognition “I
misled a bunch of people; I told them a lot of things about Cuba that
I don’t really believe” is dissonant with his cognition “I am a reasonable, decent, and truthful person.” What does he do to reduce dissonance? He searches around for external justifications. First, it occurs
to Joe that he might have been drunk and therefore not responsible
for what he said. But he remembers he had only one or two beers—
no external justification there. Because Joe cannot find sufficient external justification for his behavior, it is necessary for him to attempt
to explain his behavior by using internal justification, changing his
attitude in the direction of his statements. That is, if Joe can succeed
in convincing himself that his statements were not so very far from
the truth, then he will have reduced dissonance; that is, his behavior
of the preceding night will no longer be absurd in his own view. I do
not mean to imply that Joe will suddenly become an avowed Communist revolutionary. What I do mean is that he might begin to feel
a little less harsh about the Cuban regime than he felt before he made
those statements. Most events and issues in our world are neither
completely black nor completely white; there are many gray areas.
Thus, Joe might begin to take a different look at some of the events
that have taken place in Cuba during the past 50 years. He might
start looking into Castro’s politics and decisions and become more
disposed toward seeing wisdom that he hadn’t seen before. He might
also begin to be more receptive to information that indicates the extent of the corruption, brutality, and ineptitude of the previous government. To repeat: If an individual states a belief that is difficult to
justify externally, that person will attempt to justify it internally by
making his or her attitudes more consistent with the statement.
I have mentioned a couple of forms of external justification. One
is the idea that it’s all right to tell a harmless lie to avoid hurting a
person’s feelings—as in the case of Joe Lawyer and his partner. Another is drunkenness as an excuse for one’s actions. Still another form
of external justification is reward. Put yourself in Joe’s shoes for a moment, and suppose that you and I both were at that cocktail party and
I am an eccentric millionaire. As the conversation turns to Cuba, I
pull you aside and say, “Hey, I would like you to come out strongly in
favor of Castro and Cuban communism.” What’s more, suppose I
hand you $5,000 for doing it. After counting the money, you gasp, put
the $5,000 in your pocket, return to the discussion, and defend Castro to the hilt. The next morning when you wake up, would you experience any dissonance? I don’t think so. Your cognition “I said some
things about Castro and Cuban communism that I don’t believe” is
dissonant with the cognition “I am a truthful and decent person.” But,
at the same time, you have adequate external justification for having
made that statement: “I said those favorable things about Cuban communism to earn $5,000—and it was worth it.” You don’t have to
soften your attitude toward Castro to justify that statement because
you know why you made those statements: You made them not because you think they are true but to get the $5,000. You’re left with
the knowledge you sold your soul for $5,000—and it was worth it.
This kind of situation has been called the "saying is believing"
paradigm. That is, dissonance theory predicts that we begin to believe our own lies—but only if there is not abundant external justification for making the statements that run counter to our original
attitudes. Let’s now elaborate on our earlier discussion of conformity. Recall in Chapter 2 we found that the greater the reward for
compliance, the greater the probability that a person will comply.
Now we can go one step further: When it comes to producing a lasting change in attitude, the greater the reward, the less likely any attitude change will occur. If all I want you to do is recite a speech
favoring Cuba, the Marx brothers, socialized medicine, or anything
else, the most efficient thing for me to do would be to give you the
largest possible reward. This would increase the probability of your
complying by making that speech. But suppose I have a more ambitious goal: Suppose I want to effect a lasting change in your attitudes
and beliefs. In that case, just the reverse is true. The smaller the external reward I give to induce you to recite the speech, the more likely
it is that you will be forced to seek additional justification for delivering it by convincing yourself that the things you said were actually
true. This would result in an actual change in attitude rather than
mere compliance. The importance of this technique cannot be overstated. If we change our attitudes because we have made a public
statement for minimal external justification, our attitude change will
be relatively permanent; we are not changing our attitudes because
of a reward (compliance) or because of the influence of an attractive
person (identification). We are changing our attitudes because we
have succeeded in convincing ourselves that our previous attitudes
were incorrect. This is a very powerful form of attitude change.
Thus far, we have been dealing with highly speculative material.
These speculations have been investigated scientifically in several experiments. Among these is a classic study by Leon Festinger and
J. Merrill Carlsmith.32 These investigators asked college students to
perform a very boring and repetitive series of tasks—packing spools
in a tray, dumping them out, and then refilling the tray over and over,
or turning rows and rows of screws a quarter turn and then going
back and turning them another quarter turn. The students engaged
in these activities for a full hour. The experimenter then induced
them to lie about the task; specifically, he employed them to tell a
young woman (who was waiting to participate in the experiment)
that the task she would be performing was interesting and enjoyable.
Some of the students were offered $20 for telling the lie; others were
offered only $1 for telling the lie. After the experiment was over, an
interviewer asked the liars how much they enjoyed the tasks they had
performed earlier in the experiment. The results were clear-cut:
Those students who had been paid $20 for lying—that is, for saying
the spool packing and screw turning had been enjoyable—rated the
activity as dull. This is not surprising—it was dull. But what about
the students who had been paid only $1 for lying? They rated the
task as enjoyable. In other words, people who received abundant external justification for lying told the lie but didn’t believe it, whereas
those who told the lie in the absence of a great deal of external justification moved in the direction of believing that what they said was
true.
Research support for the “saying is believing” phenomenon has
extended beyond relatively unimportant attitudes like the dullness of
a monotonous task. Attitude change has been shown on a variety of
important issues. For example, in one experiment, Arthur R.
Cohen33 induced Yale University students to engage in a particularly
difficult form of counter-attitudinal behavior. Cohen conducted his
experiment immediately after a student riot in which the New
Haven police had overreacted and behaved brutally toward the students. The students (who strongly believed the police had behaved
badly) were asked to write a strong and forceful essay in support of
the actions taken by the police. Before writing the essay, some students were paid $10; others, $5; still others, $1; and a fourth group,
50 cents. After writing his or her essay, each student was asked to indicate his or her own private attitudes about the police actions. The
results were perfectly linear: The smaller the reward, the greater the
attitude change. Thus, students who wrote in support of the New
Haven police for the meager sum of 50 cents developed a more favorable attitude than did those who wrote the essay for $1; the students who wrote the essay for $1 developed a more favorable attitude
toward the actions of the police than did those who wrote the essay
for $5; and those who wrote the essay for $10 remained the least
favorable.
Let’s look at race relations and racial prejudice—surely one of
our nation’s most enduring problems. Would it be possible to get
people to endorse a policy favoring a minority group—and then see
if their attitudes become more favorable toward that group? In an
important set of experiments, Mike Leippe and Donna Eisenstadt34
induced white college students to write an essay demonstrating
counter-attitudinal advocacy: publicly endorsing a controversial
proposal at their university—to double the amount of funds available
for academic scholarships for African American students. Because
the total amount of scholarship funds was limited, this meant cutting by half the amount of funds available for scholarships for white
students. As you might imagine, this was a highly dissonant situation. How might the students reduce dissonance? The best way
would be to convince themselves that they really believed deeply in
that policy—that, taking the big picture into consideration, it was
only fair to offer more financial aid to African Americans. Moreover,
it is reasonable to suggest that dissonance reduction might generalize beyond the specific policy—that is, the theory would predict that
their general attitude toward African Americans would become more
favorable and much more supportive. And that is exactly what
Leippe and Eisenstadt found.
What Constitutes External Justification? As I mentioned
a moment ago, external justification can and does come in a variety of
forms. People can be persuaded to say things or do things that contradict their beliefs or preferences if they are threatened with punishment or enticed by rewards other than monetary gain—such as praise
or the desire to please. Furthermore, most of us would consider doing
something that we otherwise wouldn’t do if a good friend asked us to
do it as a favor. To take a farfetched example, suppose a friend asked
you to eat an unusual food she or he had recently learned to prepare
in an “exotic foods” cooking class. And just to make things interesting, let’s say the food in question was a fried grasshopper. Now, imagine the reverse situation—that someone you didn’t like very much
asked you to sink your teeth into a fried grasshopper.
Okay, are you ready? Assuming you went ahead and ate the
grasshopper, under which circumstance do you think you would
enjoy the taste of it more—when asked to eat it by a good friend or
by someone you didn’t like? Common sense might suggest that the
grasshopper would taste better when recommended by a friend.
After all, a friend is someone you can trust and, hence, would be a
far more credible source of information than someone you didn’t like.
But think about it for a moment: Which condition involves less external justification? Common sense notwithstanding, the theory of
cognitive dissonance would predict that you would come to like eating grasshoppers more if you ate one at the request of someone you
didn’t like.
Here’s how it works: Your cognition that eating a grasshopper is
repulsive would be at odds with the fact that you just ate one. But if
it was your friend who made the request, you would have a great deal
of external justification for having eaten it—you did it as a favor for
a good friend. On the other hand, you would not have as much external justification for munching on a grasshopper if you did it at the request of someone you didn’t like. In this case, how could you justify
your contradictory behavior to yourself? Simple. The way to reduce
dissonance would be to change your attitude toward grasshoppers in
the direction of liking them better—"Gee, they're pretty tasty critters
after all.”
Although this may seem a rather bizarre example of dissonance-reducing behavior, it's not as farfetched as you might think. Philip
Zimbardo and his colleagues35 conducted an analogous experiment in
which army reservists were asked to try fried grasshoppers as part of a
study allegedly about “survival” foods. For half of the participants, the
request was made by a warm, friendly officer; for the other half, it was
made by a cold, unfriendly officer. The reservists’ attitudes toward
eating grasshoppers were measured before and after they ate them.
The results were exactly as predicted above: Reservists who ate
grasshoppers at the request of the unpleasant officer increased their
liking for them far more than those who ate grasshoppers at the request of the pleasant officer. Thus, when sufficient external justification was present—when reservists complied with the friendly officer's
request—they experienced little need to change their attitudes toward
grasshoppers. They already had a convincing explanation for why they
ate them—they did it to help a “nice guy.” But reservists who complied with the unfriendly officer’s request had little external justification for their action. As a result, they adopted a more positive attitude
toward eating grasshoppers to rationalize their discrepant behavior.
What Is Inadequate Justification? Throughout this section, I have made reference to situations in which there is inadequate
external justification and to those with an abundance of external justification. These terms require some additional clarification. In the
Festinger-Carlsmith experiment, all of the participants did, in fact,
agree to tell the lie—including all of those paid only $1. In a sense,
then, $1 was adequate—that is, adequate to induce the participants
to tell the lie; but as it turns out, it wasn’t sufficient to keep them
from feeling foolish. To reduce their feelings of foolishness, they had
to reduce the dissonance that resulted from telling a lie for so paltry
a sum. This entailed additional bolstering in the form of convincing
themselves that it wasn’t completely a lie and the task wasn’t quite as
dull as it seemed at first; as a matter of fact, when looked at in a certain way, it was actually quite interesting.
It would be fruitful to compare these results with Judson Mills’s
data on the effects of cheating among sixth-graders. Recall that, in
Mills’s experiment, the decision about whether to cheat was almost
certainly a difficult one for most of the children. This is why they experienced dissonance, regardless of whether they cheated or resisted
temptation. One could speculate about what would happen if the rewards to be gained by cheating were very large. For one thing, it
would be more tempting to cheat; therefore, more children would actually cheat. But, more important, if the gains for cheating were astronomical, those who cheated would undergo very little attitude
change. Much like the college students who lied in Festinger and
Carlsmith’s $20 condition, those children who cheated for a great reward would have less need to reduce dissonance, having been provided with an abundance of external justification for their behavior.
In fact, Mills included this refinement in his experiment, and his results are consistent with this reasoning: Those who cheated to obtain
a small reward tended to soften their attitude about cheating more
than those who cheated to obtain a large reward. Moreover, those
who refrained from cheating in spite of the temptation of a large reward—a choice that would create a great deal of dissonance—hardened their attitude about cheating to a greater extent than those who
refrained in the face of a small reward—just as one might expect.
Dissonance and the Self-Concept The analysis of the dissonance phenomenon presented in this section requires a departure
from Festinger’s original theory. In the experiment by Festinger and
Carlsmith, for example, the original statement of dissonance went like
this: The cognition “I believe the task is dull” is dissonant with the
cognition “I said the task was interesting.” Several years ago, I reformulated the theory in a way that focuses more attention on the way
people conceive of themselves.36 Basically, this reformulation suggests
that dissonance is most powerful in situations in which the self-concept is threatened. Thus, for me, the important aspect of dissonance in the situation described above is not that the cognition "I said
'X'" is dissonant with the cognition "I believe 'not X.'" Rather, the crucial fact is that I have misled people: The cognition "I have told people something I don't believe" is dissonant with my self-concept; that
is, it is dissonant with my cognition that “I am a person of integrity.”
This formulation is based on the assumption that most individuals like to think of themselves as decent people who wouldn’t ordinarily mislead someone. For example, consider Kathy, who believes
marijuana is dangerous and should definitely not be legalized. Suppose she is induced to make a speech advocating the use of marijuana. Let us assume she makes the speech to an audience consisting
of individuals whom she knows to be irrevocably opposed to the use
of marijuana (e.g., the members of a police vice squad, the Daughters of the American Revolution, or prohibitionists). In this case,
there is little likelihood that she will influence this audience because
of the firmness of their convictions. According to my view of dissonance theory, Kathy would not change her attitude because she has
not affected anyone’s behavior. Similarly, if Kathy were asked to
make the same statement to a group of individuals whom she knows
to be irrevocably committed to the use of marijuana, there would be
no possibility of influencing the audience. On the other hand, if
Kathy were induced to make the identical speech to a group of individuals who have no prior information about marijuana, we would
expect her to experience much more dissonance than in the other situations. Her cognition that she is a good and decent person is dissonant with her cognition that she has said something she doesn’t
believe that is likely to have serious belief or behavioral consequences
for her audience. To reduce dissonance, she needs to convince herself
that the position she advocated is correct. This would allow her to
believe that she is a person of integrity. Moreover, in this situation,
the smaller the incentive she receives for advocating the position, the
greater the attitude change. I tested and confirmed this hypothesis in
collaboration with Elizabeth Nel and Robert Helmreich.37 We found
an enormous change in attitude toward marijuana when participants
were offered a small reward for making a videotape recording of a
speech favoring the use of marijuana—but only when they were led
to believe that the tape would be shown to an audience that was uncommitted on the issue. On the other hand, when participants were
told that the tape would be played to people who were irrevocably
committed on the subject of marijuana (one way or the other), there
was relatively little attitude change on the part of the speaker. Thus,
lying produces greater attitude change when the liar is undercompensated for lying, especially when the lie is likely to evoke a change
in the audience’s belief or behavior.*
A great deal of subsequent research38 supports this reasoning and
allows us to state a general principle about dissonance and the self-concept: Dissonance effects are greatest when (1) people feel personally responsible for their actions, and (2) their actions have serious
consequences. That is, the greater the consequence and the greater
our responsibility for it, the greater the dissonance; the greater the
dissonance, the greater our own attitude change.
*It should be mentioned that, in this as well as in the other experiments discussed here, each participant was completely debriefed as soon as he or she had
finished participating in the experiment. Every attempt was made to avoid causing a
permanent change in the attitudes of the participants. It is always important to debrief participants after an experiment; it is especially important when the experiment
induces a change in an important attitude or has important behavioral consequences.
My notion that dissonance is aroused whenever the self-concept
is challenged has many interesting ramifications. Let us look at one
in some detail. Suppose you are at home and someone knocks at your
door, asking you to contribute to a worthy charity. If you didn’t want
to contribute, you probably wouldn’t find it too difficult to come
up with reasons for declining—you don’t have much money, your
contribution probably wouldn’t help much anyway, and so on. But
suppose that, after delivering a standard plea for a donation, the
fund-raiser adds that “even a penny will help.” Refusing to donate
after hearing this statement would undoubtedly stir up some dissonance by challenging your self-concept. After all, what kind of person is it who is too mean or stingy to come up with a penny? No
longer would your previous rationalizations apply. Such a scenario
was tested experimentally by Robert Cialdini and David Schroeder.39
Students acting as fund-raisers went door to door, sometimes just
asking for donations and sometimes adding that “even a penny will
help.” As conjectured, the residents who were approached with the
even-a-penny request gave contributions more often, donating almost twice as frequently as those getting just the standard plea. Furthermore, on the average, the even-a-penny contributors were likely
to give as much money as the others; that is, the statement legitimizing the small donation did not reduce the size of the contributions.
Why? Apparently, not only does the lack of external justification for
refusing to donate encourage people to give money, but after they
have decided whether to contribute, the desire to avoid appearing
stingy affects their decision of how much to give. Once people reach
into their pockets, emerging with a mere penny is self-demeaning; a
larger donation is consistent with their self-perception of being reasonably kind and generous.
Inadequate Rewards as Applied to Education A great
deal of research has shown that the insufficient-reward phenomenon
applies to all forms of behavior—not simply the making of counter-attitudinal statements. Remember, it has been shown that if people
actually perform a dull task for very little external justification, they
rate the task as more enjoyable than if they have a great deal of external justification for performing it.40 This does not mean people
would rather receive low pay than high pay for doing a job. People
prefer to receive high pay—and they often work harder for high pay.
But if they are offered low pay for doing a job and still agree to do
it, there is dissonance between the dullness of the task and the low
pay. To reduce the dissonance, they attribute good qualities to the job
and, hence, come to enjoy the mechanics of the job more if the salary
is low than if it is high. This phenomenon may have far-reaching
consequences. For example, let’s look at the elementary-school classroom. If you want Johnny to recite multiplication tables, then you
should reward him; gold stars, praise, high grades, presents, and the
like are good external justifications. Will Johnny recite the tables just
for the fun of it, long after the rewards are no longer forthcoming?
In other words, will the high rewards make him enjoy the task? I
doubt it. But if the external rewards are not too high, Johnny will add
his own justification for performing the math drill; he may even
make a game of it. In short, he is more likely to continue to memorize the multiplication tables long after school is out and the rewards
have been withdrawn.
For certain rote tasks, educators probably do not care whether
Johnny enjoys them or not, as long as he masters them. On the other
hand, if Johnny can learn to enjoy them, he will perform them outside of the educational situation. Consequently, with such increased
practice, he may come to gain greater mastery over the procedure and
he may retain it indefinitely. Thus, it may be a mistake to dole out extensive rewards as an educational device. If students are provided with
just barely enough incentive to perform the task, teachers may succeed in allowing them to maximize their enjoyment of the task. This
may serve to improve long-range retention and performance. I am not
suggesting that inadequate rewards are the only way people can be
taught to enjoy material that lacks inherent attractiveness. What I am
saying is that piling on excessive external justification inhibits one of
the processes that can help set the stage for increased enjoyment.
Several experiments by Edward Deci and his colleagues41 make
this point very nicely. Indeed, Deci carries this analysis one step further by demonstrating that offering rewards to people for performing a pleasant activity actually decreases the intrinsic attractiveness
of that activity. In one experiment, for example, college students
worked individually on an interesting puzzle for an hour. The next
day, the students in the experimental condition were paid $1 for each
piece of the puzzle they completed. The students in the control group
worked on the puzzle as before, without pay. During the third session, neither group was paid. The question is: How much liking did
each group have for the puzzle? Deci measured this during the third
session by noting whether each student worked on the puzzle during
a free break when they could do whatever they pleased. The unrewarded group spent more free time on the task than the rewarded
group—whose interest waned when no rewards were forthcoming.
Mark Lepper and his colleagues found the same kind of relationship
with preschool children.42 The researchers instructed half the kids to
work on a set of plastic jigsaw puzzles and promised them a more rewarding activity later. They instructed the remaining kids to play
with the puzzles without promising them anything in return. After
playing with the puzzles, all of the children were allowed to engage
in the “more rewarding” activity (but recall that only half of them
were led to believe this was a reward for having worked on the puzzles). A few weeks later, the researchers turned all the youngsters loose on the
puzzles. Those who had worked on the puzzles to earn the chance to
engage in the more rewarding activity spent less of their free time
playing with the puzzles. In short, by offering the children a reward
for playing, the experimenters succeeded in turning play into work.
What happens if, instead of offering prizes or payments, we reward people by praising them? Most parents and teachers believe
that praising a child’s good performance is always a useful thing to
do. Jennifer Henderlong and Mark Lepper43 recently reviewed a host
of studies in this area and found that it is not that simple. Praise can
be beneficial but only if it is done in moderation and in a way that
makes children feel competent. However, if a parent or a teacher lavishes praise on children in such a way that it creates the illusion that
the reason they performed the activity was to earn the praise, children will not learn to enjoy the activity itself. By the same token, if
the emphasis is placed on competition—that is, on doing better than
most of the other kids in the class—the children’s focus is on winning rather than on doing, and, consequently, they do not enjoy the
thing they are doing. These findings parallel the results of the experiments on reward discussed above; causing a person to focus on the
extrinsic reasons for performing well will reduce the attractiveness of
the task itself. Moreover, as Carol Dweck44 has shown, praise is most
effective if it is focused on the child’s effort rather than on the child’s
talent or ability. That is, if children are praised for their effort on a
difficult task, they learn an important lesson: “When the going gets
tough, I will work harder because hard work will result in a better
performance.” But if they are praised for being smart—then, if a situation arises where they are failing, they frequently draw the conclusion that “I am not as smart as people thought I was.” This can have
devastating consequences.
Insufficient Punishment In our everyday lives, we are continually faced with situations wherein those who are charged with the
duty of maintaining law and order threaten to punish us if we do not
comply with the demands of society. As adults, we know that if we
exceed the speed limit and get caught, we will end up paying a substantial fine. If it happens too often, we will lose our licenses. So we
learn to obey the speed limit when there are patrol cars in the vicinity. Youngsters in school know that if they cheat on an exam and get
caught, they could be humiliated by the teacher and severely punished. So they learn not to cheat while the teacher is in the room
watching them. But does harsh punishment teach them not to cheat?
I don’t think so. I think it teaches them to try to avoid getting caught.
In short, the use of threats of harsh punishment as a means of getting
someone to refrain from doing something he or she enjoys doing necessitates constant harassment and vigilance. It would be much more
efficient and would require much less noxious restraint if, somehow,
people could enjoy doing those things that contribute to their own
health and welfare—and to the health and welfare of others. If children enjoyed not beating up smaller kids or not cheating or not stealing from others, then society could relax its vigilance and curtail its
punitiveness. It is extremely difficult to persuade people (especially
young children) that it’s not enjoyable to beat up smaller people. But
it is conceivable that, under certain conditions, they will persuade
themselves that such behavior is not enjoyable.
Let’s take a closer look. Picture the scene: You are the parent of
a 5-year-old boy who enjoys beating up his 3-year-old sister. You’ve
tried to reason with him, but to no avail. So, to protect the welfare
of your daughter and to make a nicer person out of your son, you
begin to punish him for his aggressiveness. As a parent, you have at
your disposal a number of punishments that range from extremely
mild (a stern look) to extremely severe (a hard spanking, forcing th…
