Credit and Blame at Work

How Better Assessment Can Improve Individual, Team and Organizational Success

About The Book

Previously published as The Blame Game, this acclaimed guide by a leading workplace expert offers essential advice about how to succeed at work by avoiding the pitfalls of pervasive credit-grabbing and finger-pointing.

In Credit and Blame at Work, praised by bestselling management expert Robert Sutton as “a modern management classic; one of the most well-crafted business books I have ever read,” psychologist and workplace consultant Ben Dattner reveals that at the root of the worst problems at work is the skewed allocation of credit and blame. It’s human nature to resort to blaming others, as well as to take more credit for successes than we should. Many managers also foster a “blame or be blamed” culture that can turn a workplace into a smoldering battlefield and upend your career. Individuals are scapegoated, teams fall apart, projects get derailed, and people become disengaged as fear and resentment take hold. But Dattner shows that we can learn to understand the dynamics of this bad behavior and inoculate ourselves against it.

In lively prose, Dattner tells a host of true stories from individuals and teams he’s worked with, identifying the eleven personality types who are especially prone to credit and blame problems and introducing simple methods for dealing with each of them. The rich insights and powerful practical advice Dattner offers allow readers to master the vital skills necessary for rising above the temptations of the blame game, defusing the tensions, and achieving greater success.


Credit and Blame at Work

CHAPTER 1

The Nature of Credit and Blame
The Beatles were angry and upset when they thought that someone else was getting the credit they deserved for their psychedelic eighth album, Sgt. Pepper’s Lonely Hearts Club Band.1 They felt they had been underappreciated because Time magazine had described the record as “George Martin’s new album,” crediting their producer.2 Martin had famously played a key role in defining the band’s sound, and for this new album he had experimented with new techniques in sound mixing which contributed to its groundbreaking nature. Although the album went down in history as one of the greatest ever recorded, this story illustrates just how fraught credit and blame issues can be. Even the Beatles, despite all of their success and fame, had difficulty sharing credit.

Across all human cultures, and throughout all of human history, keeping close track of credit and blame has been considered vital. Many religions have concepts of a system of divine accounting, or a “book of life,” wherein God or the gods keep track of one’s good and bad deeds in order to determine reward or punishment, either in this world or the next. Even for people who do not believe in any such divine, cosmic, or karmic judgment, the principle that we should receive due credit for our good intentions or good deeds and not be unfairly blamed holds powerful sway.

So powerful, in fact, is our desire to receive due credit that we sometimes demand it at what might be considerable expense. A friend of mine, whom I’ll call Pria, took me out to lunch one day to ask for some career advice. Pria was preparing for an upcoming meeting with her boss, Larry, in order to talk about her future at the nonprofit think tank where she works. Pria had worked very hard over the last year and knew she was being underpaid in her role. She also resented that she had the same title as the two people who reported to her. As we discussed her game plan over lunch, Pria told me that she planned to begin the meeting by expressing her dissatisfaction with her title, and then to make a strong argument for getting a promotion. She would then talk about her compensation, which she knew from research about comparable organizations was about $20,000 lower than that of her counterparts.

During our discussion, it became clear to me that Pria cared more about the title than about the salary. When I asked her if this was the case, she readily agreed, saying, “I lose credibility with outsiders because I have the same title as the people who report to me.” But I was skeptical. In the more than twenty years I’ve known her, Pria’s charisma and intellect have impressed anyone she’s met, both personally and professionally. Her title wouldn’t have been more than a footnote to any conversation. I decided to press the issue, asking her if she could think of a single example of a situation in which she’d lost credibility. After thinking it through for a moment, she admitted she couldn’t. I then asked her whether she felt Larry valued her. This time she answered without delay, saying, “No, he doesn’t.” While Larry heavily relied on Pria to ghost-write articles and speeches for him, he never publicly acknowledged that she was providing most of the “thinking” that powered the think tank he had founded.

The real reason why Pria wanted the promotion had little or nothing to do with what she had called “optics” to outsiders. Instead, the promotion had everything to do with her feeling undervalued in her role by Larry, who had originally recruited her from a prestigious private-sector consulting firm. I asked Pria to make a hypothetical choice: Would she prefer a new, more prestigious title or a bump in pay that would bring her in line with her industry peers? In other words, would she “buy” a promotion if that possibility were offered to her? Thinking about it, she realized that the money would actually be much more beneficial for her than the new title. Pria had been “suboptimizing,” caring more about the perceived meaning of something—which, in this case, was the “credit” that a promotion would have symbolized to her—than about her substantive economic interest.

Pria’s case is a good example of how this bias toward receiving credit can play out in our work lives and in our relationships with our bosses. Working for a boss who gives us what we consider to be fair credit is something that people are willing to “pay for” in a variety of ways. In my experience, having a good relationship with one’s boss—defined in large part by the fairness with which one’s boss allocates credit and blame—is actually as important as a 25 percent difference in salary. Many people are willing to forgo higher compensation in order to have a boss who makes them feel respected and valued. I often challenge my clients, as I did with my friend Pria, to consider the potential trade-offs between symbols and substance. However, for many of us, it’s hard to determine what is symbolic and what is substantive, and we can blame our innate confusion on evolution.

We are all vulnerable to falling into traps regarding the allocation and distribution of credit and blame because, as crucial as fair judgment is to us, disagreements about who deserves credit or blame are the norm and agreement the exception. Why is this the case? A good number of our default actions and reactions can be explained by human evolution.

In recent years, researchers in evolutionary psychology have made notable progress in explaining ways in which the survival pressures that ruled our earliest ancestors’ lives have shaped our thinking and our behavior: for example, why we gorge ourselves on fatty foods and sweets; why we often misperceive risks, as in the recent subprime mortgage fiasco; and why racism lingers even when we imagine ourselves to be equal-opportunity and enlightened. Many of our ways of thinking and behaving have roots back in the African savannah, and our tendency to overzealously keep track of credit and blame also began there.

This tab-keeping seems, in fact, to be a deeply hardwired feature of the animal kingdom. Consider the research conducted on chimpanzees and capuchin monkeys by two biologists, Sarah Brosnan and Frans de Waal, at the Yerkes National Primate Research Center in Atlanta, who investigated how animals react to inequities.3 The scientists conducted a series of experiments in which the monkeys and chimpanzees could trade tokens—small rocks or pieces of pipe—for food. While some of the animals received bits of cucumber or celery in return for their tokens, others received tasty grapes—which might be the equivalent of a co-worker getting a promotion or a raise while you got, well, a piece of cucumber. When the animals, which had previously been quite content receiving celery, saw that others were receiving grapes, they literally turned their backs to an offer of a treat like cucumber. Some monkeys went so far as to toss the cucumber back at the researchers in fits of rage.

Beyond what these primate cousins suggest about how we relate to our compensation and our bosses, we can learn lessons about how we relate to our co-workers from an unexpected branch of our extended mammalian family tree—bats. Researchers have studied bat behavior and discovered that bats, like humans, track reciprocal social relationships. One study focused on the aptly named vampire bats, which inhabit regions of Central America and feed at night on the blood of large animals.4 If a vampire bat misses a meal, it can turn to its “co-workers” for help. Once bats return to their dens each night, those lucky enough to find food that evening will regurgitate blood into the mouths of their less successful colleagues. What’s really interesting is that vampire bats have an innate sense of fairness and reciprocity and keep track of who has been generous and who has been greedy. If a bat has been denied a favor by a fellow bat, it will refuse to share with that same stingy bat in the future. As we’ll discuss further in chapter 4, people also pay close attention to reciprocal social relationships, and keep careful track of how they are credited and blamed by co-workers.

John Stacey Adams, a workplace and behavioral psychologist, developed equity theory in the 1960s to explain why and when workers feel they have been treated unfairly, and to predict how they will react when they perceive inequity.5 Adams found that employees seek to maintain equity between the inputs that they bring to a job (such as time, effort, and personal sacrifice) and the outcomes they receive from their employer (like salary and recognition). According to Adams’s equity theory, our innate sense of what is fair also relies heavily on a comparison of our inputs and outcomes with the inputs and outcomes of those around us. If we’re working our tails off and fail to get the promotion we feel we deserve, and at the same time we see that a lazy co-worker lands a plum position, there’s a good chance we’ll do something equivalent to turning our backs on, or even throwing the cucumber in the face of, our boss. When we perceive inequity, stinginess, or a lack of reciprocity on the part of colleagues or the organizations we work for, we are much less likely to collaborate and much more likely to try to find ways to somehow right the wrongs that we feel have been done to us.

Because our ancestors faced far more numerous and immediate threats to survival than we do, our brains evolved to make quick, unconscious decisions. They also evolved to simplify matters in order to facilitate a speedy response in the face of too much information. Over time our species was wired to use mental shortcuts, such as relying on past experience to judge future outcomes, and the result has been that our perceptions are influenced by what psychologists call cognitive illusions. These are subconscious mental maps that often get in the way of accurate or rational assessments. As the psychologist Daniel Gilbert writes in his book Stumbling on Happiness: “We feel as though we are sitting comfortably inside our heads, looking out through the clear glass windshield of our eyes, watching the world as it truly is. We tend to forget that our brains are talented forgers, weaving a tapestry of memory and perception whose detail is so compelling its inauthenticity is rarely detected.”6

We are also a good deal less in command of our thoughts, and our actions, than we tend to believe we are. One illuminating study even showed that when we perceive we are making a conscious decision, our unconscious brain is way ahead of us. The researchers found that they could predict whether a subject was going to press a button a full seven seconds before the subject reported having made the decision.7 While experiments have yet to test whether we unconsciously make decisions about credit and blame before we are aware of having done so, the implications are clear—our subconscious is in the driver’s seat.

We still have our ancient brains, and they may be driven to perceive and act without reflection. This is why in the middle of a particularly complicated or emotional situation, such as an imminent layoff at the office, we and our co-workers may act like we stepped right out of the Stone Age.

One way in which our evolutionary programming leads us astray is that trustworthiness and fairness have become so important to us that we have come to prioritize fairness even when it conflicts with our “rational” self-interest. This is demonstrated powerfully by something called “the ultimatum game,” an experiment designed by German economists in 1982.8 Two players are given a sum of money to divide between them. One player is given the power to split the money and to decide how much he and the other player will get. The other player then has the chance to either accept or reject the offer. If the second player rejects the offer, neither player receives anything. Rationally, the second player should accept any offer he is given; at least he’ll receive something. But what researchers who have conducted this experiment countless times in cultures all around the world have found is that when the offers slide out of balance, moving from a fifty-fifty split to, say, an eighty-twenty one, the second player will typically reject the offer. Just like the capuchin monkeys who hurled their cucumbers in outrage when others got grapes, people would rather receive nothing than be subject to a raw deal. Our emotional primate hardwiring trumps our rational economic self-interest.

Another way in which our brains distort our perceptions and behavior is by leading us to give ourselves more credit for successes than we’re due and to downplay our responsibility for failures. This behavioral trait was termed “beneffectance” by psychologist Anthony Greenwald,9 derived from the combination of “beneficence,” doing good, and “effectance,” or competence. He defined it as “the tendency to take credit for success while denying responsibility for failure.” A number of studies have shown that people, particularly Westerners, overcredit themselves for everything from how much they have contributed to group efforts to their skill at public speaking to how much they will be missed at a social event they can’t attend. Studies have also shown that if you ask each member of a work group to estimate what percentage of the group’s work output he or she contributed, the total adds up, like the fictional shares sold in the Broadway musical The Producers, to much more than 100 percent.

Most of us also have wildly optimistic views of our futures, where we believe that due to our interpersonal skills and intellectual gifts, we are more likely than our peers to have a happy marriage or a successful career. Psychologists term this trait illusory superiority, a cognitive bias where people overestimate their positive qualities and underestimate their negative ones. It may be better known as the Lake Wobegon Effect, inspired by the fictional town where, according to Garrison Keillor, “all the children are above average.” Keillor’s point, of course, is that everyone tends to see him- or herself as above average. As the psychologist Steven Pinker writes in The Blank Slate:


People consistently overrate their own skill, honesty, generosity, and autonomy. They overestimate their contribution to a joint effort, chalk up their successes to skill and their failures to luck, and always feel that the other side has gotten the better deal in a compromise. People keep up these self-serving illusions even when they are wired to what they think is an accurate lie detector. This shows that they are not lying to the experimenter but lying to themselves. For decades every psychology student has learned about “cognitive dissonance reduction,” in which people change whatever opinion it takes to maintain a positive self-image.10


As Bertrand Russell noted: “Every man, wherever he goes, is encompassed in a cloud of comforting convictions, which move with him like flies on a summer day.”11

Ironically, it is often the least talented among us who are the most likely to overcredit their contributions and abilities. Consider for example a study of college students conducted in 1999 by Justin Kruger and David Dunning of Cornell University.12 The researchers tested the students for their aptitude in logic, grammar, and humor, and then asked the students to self-rank—to guess at how they would stack up against their peers. What they found was that students with the lowest scores most overrated themselves, believing they were more skilled than they really were, and assessing themselves far above their actual ranking.

Our natural tendency to delude ourselves and to be legends in our own minds doesn’t stop here. We also prefer to think that our positive traits strike much closer to the core of who we are than any negative aspects of our personality. So, when a friend or co-worker describes us as “a hard worker” or “a good listener,” we say to ourselves, “Yes, that’s me.” But when we read in our annual performance review that we “tend to lose focus” or that we may be “too social with our co-workers,” we tend to get angry and defensive—even accusatory. This can occur even when the evaluations we receive are entirely credible.

The distortions of “beneffectance” fall within a larger category of self-serving biases, subconscious processes by which we assess our capabilities and achievements in a distorted, overly positive manner. We basically trip over ourselves in pursuit of praise and credit but conveniently find a way to blame external causes—anyone and anything—for our failures. This tendency even applies to organizational leaders. Numerous studies around the world, dating back to the early 1980s, have shown how senior executives routinely make self-serving attributions to protect their self-image and their reputation—either by claiming credit for positive results or by deflecting blame for negative outcomes onto external and environmental factors.

In one of the most influential of these studies, researchers Gerald Salancik and James Meindl found that management teams frequently used self-serving attributions in letters to their shareholders.13 Management teams generally attributed the firm’s good fortune to their contributions and were three times more likely to fault the environment for setbacks than they were to take responsibility for them. In the same vein, if a manufacturing company has a bad year, the temptation is to blame anything from a depressed economy to an influx of cheap foreign imports rather than the management of the firm. On the flip side, when external factors beyond a management team’s control, such as a fluctuating currency or a rebound in consumer demand, have had a strong role in improving a company’s bottom line, the company and its CEO will likely find a way to show how the improved performance was instead a direct result of their smart decisions and flawless execution.

In another study, researchers found that entrepreneurs tend to take credit for the success of their business—citing internal factors such as their work ethic or management skills—while blaming external factors like taxes and the scarcity of available financing when their business struggles or fails.14 Outside observers believed that those same outcomes were actually the result of internal factors such as an entrepreneur’s lack of planning or strategic vision as opposed to a poor economy or onerous government regulations. The study involved asking two samples of entrepreneurs to name the factors they believed had contributed to or impeded the success of their businesses. One sample was composed of 189 pharmacy owners and the other of 231 owners of small businesses, which ranged from antique stores to travel agencies. The researchers also recruited a panel of sixteen business “experts”—a group composed of professors of entrepreneurship, graduates of an MBA program who majored in entrepreneurship, and business counselors in a college-based entrepreneurship center—and asked them the same two questions about the success or failure of small businesses. The results showed that both the entrepreneurs and the experts largely attributed success to internal factors.

When it came to identifying what causes businesses to stumble or fail, however, there was quite a divergence in explanations. Entrepreneurs believed that 84.1 percent of the reasons for their struggles were factors external to their business, with government regulation and the labor market ranking as the two most common answers. The experts, on the other hand, attributed 72.6 percent of a business’s struggles to the internal personality characteristics of the entrepreneur, which included work ethic, knowledge, and dedication.

Self-serving attributions can be a mixed blessing in terms of both self-esteem and social esteem, the regard one is held in by others. On one hand, taking undue credit for good outcomes and denying blame for bad outcomes is a tempting short-term strategy for protecting one’s self-esteem. The potential downside of this approach, though, is that unrealistic credit-takers do not develop a true assessment of their contributions and capabilities over the longer term. Similarly, there can be a short-term temptation in taking undue credit or even denying due blame as a means of building social esteem. In the short run, someone who conveys a high degree of certainty about his creditworthiness and a conviction that he is not to blame for anything of importance may convince others that there is truth to those self-serving attributions. Over the longer run, however, if someone develops a reputation as a credit hog or an unfair blamer, the social esteem in which others hold him may take a precipitous dive.

A relevant area of psychology known as attribution theory casts interesting light on how our brains are able to convince us of the veracity of our self-serving self-assessments. Simply put, attribution theory describes the patterns by which we explain the causes of our own versus others’ behavior. There are actually two broad kinds of attribution: personal attribution, which is when we explain the actions of ourselves and others based on ability, personality, mood, or effort; and situational attribution, which tries to explain behavior by invoking contextual or environmental factors. For example, if your boss stomps into your office and snaps at you for something particularly minor and petty, you might call him a jerk once he’s left the office. “How dare he talk to me like that,” you might think to yourself. “He’s really bad at keeping his emotions under control, I can’t believe such a hostile person ever got promoted.” That’s personal attribution in action because you’ve explained your boss’s behavior based on some particular personality trait you have ascribed to him. You could also, though, have taken into account factors other than your boss’s character that might explain his behavior. Perhaps you knew he was having trouble at home or that his own boss had just chewed him out earlier in the day. Then, you might have thought, “I can tell he’s upset about something else, but I wish he wasn’t taking it out on me,” taking his outburst less personally.

When it comes to ourselves, we also have a choice about the factors to which we attribute our behavior, and our successes or failures. It’s easy to succumb to the temptation to use inconsistent explanations for good and bad outcomes. For example, we seek to collect credit for our successes based on a personal attribution—“Of course, I deserved that promotion; in fact, it was overdue”—but use situational attribution to distance ourselves from failure—“I didn’t get that promotion because the boss has always had it in for me.” In one study that demonstrates this dynamic in action, psychologists Kathryn Saulnier and Daniel Perlman from the University of Manitoba asked both prison inmates and their counselors to explain why the prisoners had committed their crimes.15 What they found was that the prisoners blamed situational factors for their incarceration; the counselors, on the other hand, thought that the prisoners themselves were to blame.

Consider a situation where your company loses an important customer that you were responsible for working with. In this case, you are likely to blame the loss of the account on something situational, like budget cuts at the customer’s company, but your boss might attribute the loss to you personally, arguing that you did not follow up often enough. On the other hand, where we might see a co-worker as rigid, we see ourselves as resolute. And, when a colleague makes a mistake in, say, placing a purchase order, we are quick to blame and criticize her for it even though we personally have made the same exact mistake many times. When we did it, we might have said to ourselves, “It’s not my fault; the process is broken.”

Because of self-serving biases and double standards, we tend to not only favor ourselves but also to be biased against others. Our own decisions and behaviors make perfect sense to us, while those of others are often irksome, and we are quick to blame them. As David Foster Wallace, the late novelist and author of Infinite Jest, explained in a commencement speech to the Kenyon College class of 2005, “A huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded.”16 As Wallace suggested, we tend to see the world too narrowly through our own eyes, not taking into account other people’s perspectives and situations, and we jump to conclusions about why they are behaving as they do. In his speech, Wallace speculated that perhaps the driver of that SUV that just cut you off didn’t do it to try to kill you. Maybe he’s the father of a sick child rushing to the hospital, and you were blocking his vehicle, not the other way around. It comes naturally to simplify other people’s situations and motivations, and then to blame them for the motivations we ascribe to them. We seem to have programming that leads us to be quick to accuse others, for a variety of reasons. As deeply bred into us as self-preservation is, so too are impulses to scapegoat and blame others.

Consider the story of Koko (her real name, unlike most of the other individuals mentioned in this book), the gorilla living in captivity in California who reportedly has learned one thousand words of sign language as a way to communicate with researchers. One night, Koko broke a toy cat she was playing with.17 When asked by her scientist keeper the next morning what had happened, Koko quickly signaled that it was her nighttime attendant who was to blame. Koko, like early humans, automatically blamed someone else for her misstep because she feared retribution and possibly rejection from her “tribe”—in this case, the group made up of her keepers. Our ancestors would have known that rejection by their tribe would have meant certain death. Even our primate relatives have learned the not-so-subtle art of pointing the finger of responsibility elsewhere.

Scapegoating is an ancient practice, one found throughout history and across cultures. The term itself has its origins in the biblical Jewish ceremony of Yom Kippur, where the sins of the people are symbolically transferred to a goat that is then sent away into the wilderness.18 Over our history, humans have constantly embraced rituals that involve transferring blame for something gone wrong to everything from clay pots to snakes and a menagerie of other animals and, quite often, to other humans. Sir James Frazer, the Scottish anthropologist and author of the twelve-volume classic The Golden Bough, published in 1890, labels such rituals “the transference of evil.” As he writes (with the mind-set of the colonial period):


The notion that we can transfer our guilt and sufferings to some other being who will bear them for us is familiar to the savage mind…. Because it is possible to shift a load of wood, stones, or what not, from our own back to the back of another, the savage fancies that it is equally possible to shift the burden of his pains and sorrows to another, who will suffer them in his stead. Upon this idea he acts, and the result is an endless number of very unamiable devices for palming off upon someone else the trouble which a man shrinks from bearing himself.19


Frazer recounts dozens of odd and often bloody scapegoating rituals performed by ancient man. When Arabian tribes were suffering from a plague, they would lead a camel through the village so that the animal would somehow absorb the pestilence. They would then take the camel to a sacred place and strangle it. When cholera struck the ancient villages of Central India, every resident would retire to his or her house at sunset. The village priests would then parade the streets, plucking a straw from the roof of each house, which they would burn along with an offering of rice, ghee, and turmeric. The priests then daubed chickens with vermilion and drove them in the direction of the smoke, hopefully carrying the disease along with them. If the chickens failed, the priests tried again, using goats and then pigs.

When the Aymara Indians of Bolivia and Peru were suffering from a plague in 1857, they loaded a black llama with the clothes of the plague-stricken people, sprinkled the load with brandy, and turned the animal loose into the mountains, hoping that it would carry the sickness away with it. In the Middle Ages, animals were even put on trial for their misdeeds—from beetles who dared to chomp the wood inside a church to a locust that devoured a farmer’s field to a pig who decided to bite a drunkard who had fallen into a ditch. As Julian Barnes describes in Nothing to Be Frightened Of:


Sometimes the animal would be brought before the court, sometimes (as with insects) necessarily tried in absentia. There would be a full judicial hearing, with prosecution, defence, and a robed judge, who could hand down a range of punishments—probation, banishment, even excommunication. Sometimes there was even judicial execution: a pig might be hanged by the neck until it was dead by a gloved and hooded officer of the court.20

We have also, of course, often scapegoated people, both individuals and whole groups. Think of the strange events in the upstart religious colony of Salem Village, Massachusetts, in 1692. Tragically, over the period of just a few months, more than a hundred men and women were accused of practicing witchcraft and cavorting with the devil—all of it started by accusations leveled by two young girls. In the end, nineteen people were hanged on Gallows Hill (five more, including an infant, died while being held in prison). The sequence of events raged out of control like a wildfire, almost consuming the Puritan movement with it, all within a year.21

Salem has continued to draw the attention of historians over the past three hundred years. By painstakingly putting together key pieces of evidence, such as letters and diary entries penned at the time, they have shed much light on the factors that led to the frenzy. Their conclusions reveal a sobering tale about how individuals, groups, and even whole cultures can turn on others, including their own members. Those living in Salem were in a society where war was a constant threat; each new day could bring a surprise attack from the Native Americans, who greatly outnumbered them. The threat of deadly diseases was also rampant, and neighbors eyed each other warily for signs of infection. Meanwhile, the local economy had collapsed and anger had built up toward the wealthier members of the community. A rigid and inflexible moral code woven into everyday life was haphazardly enforced. When combined, all of these elements added up to a powder keg of fear, distrust, and resentment that only needed a simple spark to ignite it. Fear—particularly of forces that we can’t see or completely understand—is a great motivator of scapegoating, and evolutionary psychologists and other commentators tell us that witch-hunts, metaphorically speaking, are hardly a thing of the past.

When the news broke in March 2009 that several dozen executives at American International Group (AIG), the finance and insurance giant, had collected some $165 million in bonuses on the heels of the company's fantastic collapse,22 you could almost hear the gasp around the country—quickly followed by a torrent of outrage: "How dare they!" After all, AIG received the money as part of the federal government's "bailout" of many of the big Wall Street firms. There is clearly an argument to be made about whether those executives did or did not deserve the money (call them retention bonuses if you will, but most editorials, and public opinion as a whole, sided with "did not"). But more relevant for our purposes is to consider that while AIG was certainly one of the key players in the economic house of cards that imploded, the causes of the crisis were much more complex than any one factor or organization, and many parties contributed to it. But regardless of how complex and multifaceted the reasons for the collapse, the country suddenly had this one company in particular to point the finger at. As Kathleen Hall Jamieson, an expert in political communication, told a New York Times reporter: "Under these circumstances, you have victims and you need to find a villain."23

One of the great dangers of the tendency to scapegoat is that it diverts attention away from broader, deeper problems in societies or organizations, such as the structural, cultural, economic, demographic, or technological shortcomings of a system or an enterprise. Researcher Jo-Ellen Pozner has studied how businesses often deflect blame for corporate misconduct by making their employees into sacrificial scapegoats. She says that companies are driven to blame low-level employees for high-level dysfunction or for larger structural problems that might exist throughout the company. “It’s very convenient to have a couple of people who are easily identifiable, it’s easy to use them as a scapegoat to protect the reputation of [the companies],” she told a reporter for the San Francisco Business Times.24

Just as members of various groups have been historically persecuted or purged, corporations these days often demonstrate the same kind of primitive response to threats, whether in the form of poor earnings, internal dissent, or increasing competition. Entire departments may be outsourced or replaced in a manner that makes no economic sense. In fact, management experts have argued that layoffs often don't help the bottom line, and advocate alternative ways to cut head-count expenses, noting that many organizations end up having to hire back employees under different arrangements at greater cost.25 Nonetheless, when times are tough, companies continue to yield to the primitive instinct to purge departments, or assign disproportionate blame to particular individuals or groups. Too many leaders miss opportunities to identify and fix the real, but usually subtle and complex, causes of losses or failures. Scapegoating is a convenient way to increase cohesion in the short term, but the first victims of scapegoating are rarely the last. Scapegoating anyone in an organization ultimately yields harmful social dynamics and cultural risks that threaten everyone.

As a species, we are all too ready to go along with the tide of blame, and there are a number of powerful subconscious processes through which our brains determine who we will attach blame to. Unfortunately, it’s rare that organizations ever blame the right people for the right things, for the right reasons, at the right time, in the right way.

One of the disturbing ways in which individual people and whole groups are singled out for blame is through stereotyping, whether based on ethnicity, gender, or personality type. This is a mechanism that, once again, was bred into us over the course of evolution. In ancient times, stereotyping helped us speed up our decision making in life-or-death situations. Is that creature staring at you from across the river going to want you for its lunch? If the creature was a fellow human, failing to quickly identify him as a member of an enemy tribe would have been what we call in contemporary workplace-speak “career limiting.” In our distant past, friends were friends, foes were foes, and there weren’t fine gradations or distinctions in between. No matter how enlightened and equal-opportunity we imagine ourselves to be, unconscious stereotypes still influence how we evaluate people and situations.

Consider what the psychologists Anthony Greenwald, Debbie McGhee, and Jordan Schwartz learned when they created what is now known as the Implicit Association Test, or IAT, in the late 1990s.26 When you take the test online, you are asked to match up or "associate" words with pictures that are flashed on your computer screen. The test measures how quickly you make those connections. Although several varieties of the exercise are available—such as ones based on gender, sexuality, or age—the test that measures racial stereotypes is the best known. In this version, you are shown faces of white and black people and then asked to label them with either positive words like trust or negative ones like crime. You basically take the test twice: first, you are asked to associate "good" words with the pictures of white people or with ones of black people, and then it all repeats with "bad" words. What the results reveal is a common time delay in linking black people with positive words—which means that many people carry an unconscious bias that links black images with negative terms. The same delay appears when linking white people with negative words. For the majority of people, in other words, it is easier and faster to associate white with good and creditworthy, and black with bad and blameworthy. How you score on the test, however, will depend a lot on your own racial and social background, and on whether you yourself are a member of a minority.

Thankfully, there are legal, ethical, and cultural reasons why demographic group memberships are “protected classes” in the American workplace. And despite lingering sexism and racism, most organizations have incentives to provide equal treatment to, even if not equal opportunity for, all of their workers. However, as we’ll explore later, the people in accounting and the folks in HR can still interact as if they were Croats and Serbs, or Hutus and Tutsis. And there’s no law protecting the equal rights of HR or accounting. We all have to be wary of this kind of group warfare within companies, and take care not to make unjustified associations between people and attributes simply because they are members of a certain group, or because they happen to have, or play, a certain role.

In December 2009, CBS announced that it would be canceling As the World Turns, the long-running daytime soap opera. Eileen Fulton had played the character “Lisa,” one of TV’s first villainesses, a woman who was married nine times, divorced frequently, and even widowed on several occasions, since the show got its start in 1960. Originally cast as the sweet little girl-next-door, Fulton says that over time she steered Lisa to the dark side and embraced her role as a character everyone loved to hate. But what’s on point for us is that the fans of the show apparently had a hard time distinguishing between Fulton the actor and Lisa the character—an association that actually became so dangerous that Fulton had to hire bodyguards. As she told an interviewer on National Public Radio’s Morning Edition:


[The fans] love to hate me. But you know how it really began was I was standing on the street corner in front of Lord and Taylor’s and this elegant woman came up to me and she said, “Aren’t you Lisa?” And I thought, “Oh, my first autograph.” I said, “Yes, that’s the character I play.” And she said, “Well, I hate you,” and she hit me. And that was the beginning. I thought, well, this is kind of scary. But at least it was a compliment to the acting.27


Fulton’s “fans” in this case were falling prey to an attribution error which caused them to judge Fulton as being bad or evil based on the behavior of her fictional character. The same principle might hold true for someone like Alex Trebek, the host of the game show Jeopardy! According to the official Jeopardy! website, Trebek holds two philosophy degrees from the University of Ottawa. While Trebek is likely an intelligent person, some fans might forget that when he corrects a contestant for an incorrect response, he already has the answers.28 And Trebek didn’t even write the questions himself: he has an entire staff that does that. (Incidentally, Jeopardy! writer and research positions are coveted and turnover is among the lowest of any job, anywhere: one writer, Steve Dorfman, was credited for having written more than fifty thousand clues for the show.)29

Psychologists Lee Ross, Teresa Amabile, and Julia Steinmetz of Stanford University used a game show simulation in 1977 to study what Ross termed the fundamental attribution error.30 Subjects were randomly designated as either contestants or hosts, and the hosts were instructed to make up their own questions. When asked afterward to evaluate the other participants in the study, contestants rated the hosts as more intelligent, even though they knew that the roles had been assigned randomly and that the hosts had invented questions to which they already knew the answers.

The point is that it can be easy to fall into crediting or blaming people simply because of the role they are playing, whether that role is game show host or chief financial officer. Many CFOs I have coached over the years complain that they are viewed as cold and calculating simply because they are tasked with keeping close tabs on corporate spending. Most attorneys in the role of general counsel feel like they are seen as rigid, inflexible, and demanding because they are responsible for ensuring that the letter and the spirit of applicable laws and regulations are followed. Sometimes, managers and executives who are brought in as change agents end up encountering too much resistance from the organization, and they become concerned that they will be blamed for a status quo that they are unable to change. In some organizations, women feel like they are set up for failure by being given too many challenging goals to achieve and too few resources to achieve them, and then are scapegoated when they can’t do the impossible. Some refer to this dynamic as the “glass cliff” that women risk falling off if given the “opportunity” to break the glass ceiling.

Although the advice to “separate the person from the problem” is wise and helpful in theory, it is hard to do in practice. Because of our evolutionary hardwiring, we tend to associate individuals and groups with problems quickly and conclusively. Those who speak unpopular truths, who publicly challenge prevailing wisdom or their boss’s rationale, or who sound alarm bells about unacknowledged internal or external threats to an organization, may find themselves scapegoated and blamed for problems they did not cause, but simply tried to warn others about. Consider this illustrative example of one man who spoke too quickly and almost got himself fired by a CEO who didn’t appreciate his honesty. Bart had been the director of compensation and benefits at a global manufacturing organization for six years. He was appreciated by his colleagues for his no-nonsense style and his willingness to work hard to bring in the best talent with competitive pay packages. One day, in a presentation to the CEO and the compensation committee of the board of directors, an external director asked Bart how the compensation of the company’s senior management compared to that of comparable organizations. “We’re definitely at the high end of the range, probably substantially higher than the average, and our increases have outpaced those of any other company,” Bart answered. This was factually true, but Bart immediately realized that he had said too much and had made a political mistake. He had forgotten that the people whose compensation he was talking about, the senior management of the company, were in the room.

What happened next is an all-too-familiar story in the workplace. The CEO, an eminent elder-statesman type, had risen through the ranks over the course of twenty years. He had never particularly liked Bart, but now began to actively dislike him and to develop negative associations about him. During meetings over the following weeks, the CEO avoided eye contact and asked Bart all kinds of picayune questions and then impatiently cut him off as he tried to provide answers. It is likely that the CEO did not even consciously realize either that he had been angered by Bart’s statement in the meeting or that he was treating Bart any differently than he had in the past. During this time, Bart’s boss, Steve, had his annual performance review. The CEO gave him a “meets expectations” rating rather than the “exceeds expectations” he had received each of the four preceding years. When Steve asked about the basis for his rating, the CEO told him, “You haven’t effectively managed Bart and now you need to get rid of him.”

Bart was at high risk of being a scapegoat for several reasons, having to do both with his role and with his personality. People who are in charge of compensation and benefits in most organizations tend to be lightning rods for all kinds of negative associations and projections from others in the organization. Many people feel undercompensated and blame their boss or human resources for being cheap and unappreciative. Bart also willingly embraced the role of “messenger,” and although he didn’t enjoy taking anyone down a peg by letting them know that they overestimated their “market value,” he made no apologies for his estimates of what people were worth. Bart believed in “speaking truth to power.”

Unfortunately for Bart, the CEO himself was insecure about his compensation relative to his performance and worried about his future at the organization. He had unconsciously perceived Bart's answer to the board member as critical of him and disloyal. Bart was lucky to have Steve as a boss, because Steve had strong principles, prized fairness, and was good at managing the situation with the CEO. Steve argued that Bart's performance had been good, while acknowledging that he could improve in some areas, which he pledged to help Bart work on. Knowing that the turning point had been the compensation committee meeting, Steve also knew that it would serve no purpose to try to enlighten the CEO by telling him that he had observed a change in his perceptions of, and behavior toward, Bart. And Steve knew better than to challenge the CEO's reality by arguing that Bart was being unfairly scapegoated. Steve instead told the CEO that he would coach Bart on his "delivery skills" in order to make sure that his style was appropriate to the situation at hand, and he asked for six months to help Bart be more successful. The CEO reluctantly agreed.

Unfortunately, many of us are not lucky enough to have a fair and protective boss like Steve to shield us from unfair scapegoating. One of the ways in which we can protect ourselves from becoming scapegoats is to learn to be more attuned to how what we say, and when and where we say it, might make us a target. It’s helpful always to carefully consider the political risks of challenging the status quo, or an organization’s established “articles of faith.” We should also be on alert for scapegoating others in these ways ourselves. This behavior is all too easy to fall into and it can lead to serious but avoidable problems, both for others and ourselves.

Our best hope for counteracting the many automatic biases and impulses that have been bred into us, both by the long process of evolution and by our specific upbringing, is to become more cognizant of the powerful pull they have on us. Recognizing that our mental maps are preprogrammed increases the chances that we can learn to navigate the workplace in a new way. We need to constantly question our assumptions, and monitor our behavior, to make sure we’re not reacting or acting out in a way that is more appropriate to the savannah than to the office. We should also regularly check in with trusted colleagues and mentors or coaches to get their perspective on how we are doing, and what we might do differently or better.

It is critical to strive for balance in terms of how we react to credit and blame. On the one hand, successful people gain a healthy degree of self-esteem and sense of identity from their contributions and social standing in the workplace. Caring about getting personal credit, and helping others do so, while avoiding blame for oneself and one’s colleagues can be very motivational. At the same time, it is possible to care too much about getting credit and avoiding blame. Having too much of one’s identity wrapped up in work can paradoxically lead to worse results if it makes people too anxious or hard driving. In The Hero with a Thousand Faces, Joseph Campbell described the archetype of the reluctant hero, who heeds the call to adventure, but with some trepidation in doing so.31 As in many areas of work and life, the “middle path,” in this case between caring too much and not caring enough, is the best one to take.

The rest of this book will describe the reflexive ways in which people assign credit and blame, and respond to credit and blame, and how these default actions and reactions can create problems in the workplace for individuals, teams, and entire organizations. I'll then argue that becoming mindful of the habitual ways in which we all give and get credit and blame is the first step toward replacing potentially dysfunctional reactions with more constructive and adaptive responses. I will also suggest ways to hear, but not succumb to, the siren song of unduly crediting ourselves and unfairly blaming others, which, while seductive, can derail careers and destroy companies.

About The Author

Ben Dattner is the founder of Dattner Consulting, a workplace consulting firm whose client list includes Pfizer, Novartis, MasterCard, and Goodyear. He is the workplace consultant for NPR's Morning Edition, writes the Minds at Work blog for Psychology Today, and is an adjunct professor at New York University. He lives in New York City.

Product Details

  • Publisher: Free Press (February 7, 2012)
  • Length: 256 pages
  • ISBN13: 9781439169575
