
Outsmarting IQ

The Emerging Science of Learnable Intelligence


About The Book

Since the turn of the century, the idea that intellectual capacity is fixed has been generally accepted. But increasingly, psychologists, educators, and others have come to challenge this premise. Outsmarting IQ reveals how earlier discoveries about IQ, together with recent research, show that intelligence is not genetically fixed. Intelligence can be taught.

David Perkins, renowned for his research on thinking, learning, and education, identifies three distinct kinds of intelligence: the fixed neurological intelligence linked to IQ tests; the specialized knowledge and experience that individuals acquire over time; and reflective intelligence, the ability to become aware of one's mental habits and transcend limited patterns of thinking. Although all of these forms of intelligence function simultaneously, it is reflective intelligence, Perkins shows, that affords the best opportunity to amplify human intellect. This is the kind of intelligence that helps us to make wise personal decisions, solve challenging technical problems, find creative ideas, and learn complex topics in mathematics, the sciences, management, and other areas. It is the kind of intelligence most needed in an increasingly competitive and complicated world.

Using his own pathbreaking research at Harvard and a rich array of other sources, Perkins paints a compelling picture of the skills and attitudes underlying learnable intelligence. He identifies typical pitfalls of everyday thinking, such as failing to consider multiple perspectives and neglecting evidence. He reveals the underlying mechanisms of intelligent behavior. And he explores new frontiers in the development of intelligence in education, business, and other settings.

This book will be of interest to people who have a personal or professional stake in increasing their intellectual skills, to those who look toward better education and a more thoughtful society, and not least to those who follow today's heated debates about the nature of intelligence.

Excerpt

Chapter 1

Telescopes and Intelligence

When I was a child, my family lived in a large sprawling house in a small town in Maine, a former schoolhouse in fact, with a couple of the blackboards still in place. There was a flat roof over part of the house, where my mother hung the laundry to billow and dry on clear days. But the flat roof meant more to me during clear nights. I would sometimes go out with a pillow to rest my head, lie on the roof, and stargaze for an hour or so.

Later, I bought a cheap reflecting telescope. I picked out craters on the Moon, scrutinized the red sphere of Mars, located Saturn, marveling at the tiny image of the planet with its delicate rings. I was caught up by the magnitude of creation and our diminutive place in it, an experience innumerable human beings have had, albeit far fewer human beings than there are stars of which to stand in awe.

Although I was not very educated when I went stargazing on the laundry roof, I had the benefit of some knowledge. I knew that the earth circled around the sun, the moon around the earth. I knew that the other planets circled around the sun. I knew that the stars were way out there, our sun but one of countless hydrogen sparks wending their firefly ways through a vastness without compass. My father told me about such things, and I had read about some of them in books.

So, between 10 and 11 PM up there on the laundry roof, that's how the universe looked to me. Scanning the sweep of the Milky Way, our own galaxy, I often felt I was no more than a speck of rust on the fragile spokes of gravity that made that awesome wheel go round.

I was harvesting the heritage of human intelligence. For thousands of years, priests and scientists, magicians and navigators, astrologers and philosophers had been looking at the sky and wondering, making up stories, proposing theories, building conceptions -- a motley team of human intelligences at work, stirred by every concern from the most cosmic to the most pragmatic.

We think of the telescope as our instrument of inquiry for the heavens. Even more fundamental is the instrument behind the instrument, the resource of human intelligence. Every one of those stargazers drew upon it to make sense of what they were seeing.

An Apple Cart Waiting to Be Upset

Whatever I knew about astronomy at the time, there was another matter of which I knew nothing: how hard-won was that look of infinite reach in the sky, what a work of intelligence it was, what a revolution it took -- a conceptual revolution that changed the universe, by changing what people took the universe to be.

Children, grandfathers, and everyone in between have been looking up at the stars for a long time -- something like 2 million years, if one counts as human the tool-using hominids who once lived and hunted in the environs of the Olduvai Gorge of the Serengeti. It would be easy to assume that what we see today is not much different than it has been for millennia.

Easy to assume, but quite false. While the physical pattern of the stars has not changed much, what we see has. The look of the stars depends not only on the light that tickles our retinas but on our conceptions, on what we think things out there are really like. Five centuries before, a young ancestor of mine somewhere in the British Isles might have spent that same hour between 10 and 11 lying in a field to watch the heavenly pageant march around the earth. He would not have felt like a mote in an infinite universe. Believing something very different, he would see something quite different, the stars parading for his benefit.

The neat cosmology of Aristotle and the Catholic church had put the universe into a satisfying order and served it up in a form pleasing to most everyone concerned. The earth lay at the center of the universe. Around the earth circled the sun, the moon, the fixed stars, and the planets, all ordered on crystalline spheres at various distances from our planet.

Make no mistake: This was magnificently intelligent in its time. It solved the problems of its day. It certainly made my ancestor happy. He could be proud of his position on the reviewing stand, enjoying the parade of stars, because in the dogma of his day it was all there for him and his fellow human beings, lords of creation every one. But reviewing stand it actually was not. More of an apple cart, just waiting to be upset.

No one then knew that the planets were whole worlds in themselves, like the earth. They were just more lights in the sky like the stars, but with a difference. A half-dozen aberrant points of light wandered around against the background of the innumerable "fixed" stars over periods of months and years. They came to be called planets -- wanderers -- from the Greek verb planasthai, to wander.

The planets became a problem for an idealized earth-centered picture of the universe. As far back as the time of Christ, those who studied the stars had looked upward with better and better instrumentation. They saw the planets, night after night and season after season, tracing their paths against the backdrop of the fixed stars. They measured those paths with some precision. They tried to fit what they found to the theory that the planets were making circles around the earth, but the paths did not quite match. The circles did not quite work.

What to do? When a theory has a leak, patch it! The great patchmaker was the Egyptian astronomer Ptolemy, who lived from about 100 to 170 A.D. Ptolemy made his observations in Alexandria, Egypt, during the period 127-151 A.D. He wanted to preserve the notion that heavenly objects moved in circles, an idea precious to the philosophy of the times. So Ptolemy proposed that each planet not only made a great circle around the earth but also continuously made little circles within that big circle -- epicycles, they were called. These epicycles accounted for the mismatch between a circle-around-the-earth theory and the data on the positions of the planets in the sky. And best of all, they kept the planets moving around the earth and things moving in circles generally.

Ptolemy rescued the whole notion of an earth-centered universe with his epicycles, a brilliant albeit mistaken act of intelligence. Aristotle before him was no fool either, nor the others of those times who looked at, pondered, and wrote about the sky. Intelligence does not make you right, but it helps you find patterns, right or not. It's striking that we understand the optics of telescopes so much better than the true instrument of all inquiry, human intelligence.

Reconstructing the Universe

Ptolemy's patch was clever, but sooner or later the flaws were bound to show up. Copernicus, a Polish astronomer who lived from 1473 to 1543, devoted his life to the meticulous gathering of data about the motions of the stars and planets. He pored over his measurements, compared them with the epicycles theory, and could not make epicycles work, at least not in any reasonably straightforward version. Hoping to get rid of these errors, he reviewed the older literature of the subject and found that a minority opinion had long been neglected, the heliocentric concept that placed the sun at the center of the universe.

So Copernicus bit the cosmic bullet. He mustered his intelligence and his evidence. We had it all wrong, he argued. The planets do not circle around the earth. They make circles around the sun -- and so does the earth. This was his proposal in his great work, On the Revolutions of the Celestial Spheres, not published until 1543, the year of his death.

With Copernicus's proposal began modern astronomy and cosmology, sciences that have grown tremendously over the five centuries since, sciences that, every step of the way, have expanded our conception of the scope and complexity of the universe. These same sciences have just as steadily shrunk our conception of our own importance in the scheme of things.

This was the original Copernican revolution, but not the one and only. Historians of science have borrowed the name of Copernicus for other equally fundamental shifts of theory. A Copernican revolution has come to mean any revolution in our scientific conception of things that upsets the apple cart, reverses old conceptions, remakes the fabric of our beliefs down to their warp and weft. There have been many such collective acts of intelligence in the rising spiral of science, for instance the dual revolutions of quantum mechanics and of relativity theory that marked the first decades of the twentieth century.

Goddard's List

This book concerns a Copernican revolution in the making, one far from cosmological and about as close to home as you can get. It concerns what I called the instrument behind the instrument, that very instrument with which you are reading these words, that very instrument with which you walk, talk, find your way to work and home again. It concerns the nature of intelligence and especially the idea that intelligence is not fixed but learnable. It asks whether you or I or most anyone can learn to behave more intelligently.

Who would not want a sharper mental edge? Yet the classic view of human intelligence takes a grudging stance on that prospect. Just as Ptolemy's epicycles gave the original Copernican revolution something to revolt against, so this classic view of intelligence does for the new Copernican revolution -- it is a view to think with and think against, to test and critique, and perhaps even to come back to, if in the end the evidence insists. This severe picture of intelligence has rarely been as concisely and chillingly asserted as by H. H. Goddard in 1920.

Goddard was one of the three major pioneers of hereditarianism in America. He popularized the Binet scale and used the scores from these tests to measure intelligence as a single, inherited entity. As director of research at the Vineland Training School for Feeble-Minded Girls and Boys in New Jersey, he claimed to localize the cause of feeble-mindedness in a single gene. It was Goddard who coined the term moron for high-grade defectives -- all people with mental ages between eight and twelve -- from a Greek word meaning foolish. Here is what he said:

Stated in its boldest form, our thesis is that the chief determiner of human conduct is a unitary mental process which we call intelligence: that this process is conditioned by a nervous mechanism which is inborn: that the degree of efficiency to be attained by that nervous mechanism and the consequent grade of intellectual or mental level for each individual is determined by the kind of chromosomes that come together with the union of the germ cells: that it is but little affected by any later influences except such serious accidents as may destroy part of the mechanism.

We have good reason to resent Goddard's list. It says that we are pretty much stuck with the intelligence we are born with. According to Goddard, and such contemporaries of Goddard as psychologist Lewis M. Terman and psychometrician Charles Spearman, people can learn particular bodies of knowledge and skill, but there is not much they can do to get more of this special stuff, intelligence. Your personal allotment determines limits of insight you must live with all your life.

Of course, how angry Goddard's list makes us and how true it is are different matters. As a broad generalization, the universe has not been especially responsive to how human beings feel about things. For instance, we might have preferred that the earth stand still at the center of the universe, the mansion of humankind. However, it turned out to spin around a quite ordinary star spinning around the center of a quite ordinary galaxy adrift among billions of others. Preferences do not count for much in the face of facts. Whatever we prefer about intelligence, it simply might turn out another way.

For a long time, however, psychologists technically concerned with the nature of intelligence have found reason to challenge the claims in Goddard's list. Here are some of their reservations:

Unitary. Although Goddard wrote of intelligence as a unitary mental process, several old and new views of intelligence argue that intelligence is multiple -- there are different kinds of intelligence. Rather than measuring intelligence by a single yardstick, we might find that different people have different kinds of strengths.

Inborn. Although Goddard wrote of intelligence as inborn, research suggests that most people can learn to behave considerably more intelligently. Learnable intelligence might help us all to better meet a myriad of social and personal challenges.

Nervous. Although Goddard saw the nervous system as the seat of intelligence, current science argues that intelligent behavior is only partly attributable to an efficient nervous system. When people learn to conduct themselves more intelligently, they learn how to make the most of the nervous system with which they are endowed.

Process. Although Goddard treated intelligence as a process, contemporary investigations suggest that intelligent behavior has at least as much to do with knowledge and attitudes as with process -- what you know and understand about the way your mind works, what strategies you have at your fingertips, what attitudes you hold toward the potential of your mind and toward intellectual challenges.

Goddard's list is in trouble. As in the original Copernican revolution, fundamental conceptions are at stake. What was once viewed as central -- Goddard's unitary inborn nervous process -- is getting pushed toward the periphery as our inner cosmology of mind undergoes reconstruction. Conceptions of intelligence today are in radical flux, with new theories asserting their rights like African bees. It is to the Copernican revolution in our ideas about intelligence that this book is dedicated.

The Revolution We Need

As a cognitive psychologist, I have good reason to be interested in this Copernican revolution. However, as a parent, a citizen, a voter, a shopper, a boss, an employee, and a player of many other roles, why should I care? Why should any of us care?

Because how intelligence really works matters in a very concrete practical way -- arguably it matters much more than whether the earth circles around the sun or vice versa. We could use more intelligence. Other people do not always behave as intelligently as we would like, and neither do we ourselves, as we realize when we stand back and reflect on our behavior. If intelligence is learnable, we can hope to do something about it directly. If intelligence is not, we just have to live with the fact and work around it.

Of many sources pointing to the need for more intelligence in the form of better thinking, I was recently struck by a report from the Rand Corporation, Global Preparedness and Human Resources: College and Corporate Perspectives. The report examined what people from the corporate and academic sectors felt was needed to meet the escalating challenges of the times. Their answer: General cognitive skills were rated more highly than knowledge in an academic major, social skills, and personal traits. Good thinking counts most; so say some of those who have thought about it.

Another compelling appeal for better thinking and learning appears in the recent book Thinking for a Living by Ray Marshall and Marc Tucker. These authors turn to the educational shortfalls of U.S. students in comparison with students in several other nations. They examine the economic roots and economic consequences of the education gap. Their conclusions are cautionary: As the title of their book suggests, economic productivity and competitiveness in the world today depend on workers who are skillful thinkers and learners. This is just what U.S. education is not producing, with certain minorities particularly suffering. Marshall and Tucker cite these statistics in illustration:

Fewer than four in ten young adults can summarize in writing the main argument from a lengthy news column -- one in four whites, one in four blacks, and two in ten Hispanics. Only twenty-five out of 100 young adults can use a bus schedule to select the appropriate bus for a given departure or arrival -- three in 100 blacks and seven in 100 Hispanics. Only 10 percent of the total group can select the least costly product from a list of grocery items on the basis of unit-pricing information -- twelve in 100 whites, one in 100 blacks, and four in 100 Hispanics.

They sum up the reality this way:

These findings make it clear that only a tiny fraction of our workers can function effectively in an environment requiring strong communications skills and the application of sophisticated conceptual understanding to complex real-world problems.

Of course, one response to this concern might be that students need more back-to-basics education, more drill and practice, more reading, 'riting, and 'rithmetic in a classic regimen. However, in terms of routine competence -- the kind built by drill and practice -- U.S. students score reasonably well. It is tasks requiring a modicum of reasoning that floor many of them. Students need better thinking and learning skills. As I argued in my recent book Smart Schools, real learning is a consequence of thinking. People retain, understand, and make active use of knowledge through learning experiences that center on thinking both through and with what is learned. Good thinking and good learning are as closely tied as the hydrogen and oxygen in a molecule of water, and they make up the drink that students need.

Not-so-intelligent behavior is a stark reality of our own lives and the lives around us. While later chapters will turn to further evidence, let me offer three tales by way of illustration.

The populist senator. A while ago, a United States senator responded to concerns some had expressed that a Supreme Court nominee might be mediocre. His thought: "Even if he is mediocre, there are a lot of mediocre judges and people and lawyers. They are entitled to a little representation, aren't they, and a little chance? We can't have all Brandeises, Cardozos and Frankfurters and stuff like that there."

The whale movers. A while ago, a forty-five-foot eight-ton dead whale washed up on an Oregon beach. Local authorities faced the problem of getting rid of the carcass. So they decided to dynamite it, expecting it would pulverize. The pieces would be eaten by sea gulls or at least easily removed. Instead, the dynamite exploded the whale into huge chunks of blubber, one of which crushed a nearby car. Fortunately, the spectators only suffered a rain of small gobs of blubber on their heads.

The willing student. A while ago, a colleague I hadn't seen in some years passed through town, and over a cup of coffee we fell to discussing her efforts to awaken her college students to the art and craft of thinking. She told me of a student who for some time seemed not to catch on at all. But eventually he came to an insight. "Oh I see," the student said. "So in this class, you want us to reach our conclusions on the basis of reason and argument."

These stories catch us human beings with our intellectual pants down. Although mediocre people certainly deserve representation, they presumably do not want representation by a mediocre person. Although exploding a whale might work, it might go wildly wrong, so there is something to be said for a trial run on a small part like the tail. Although it's nice that the willing student seems ready to exercise reason and argument in my colleague's class, it would be even nicer if the student recognized that there was some point to reaching conclusions on the basis of reason and argument in many circumstances. To be sure, people sometimes legitimately arrive at conclusions in ways far removed from reason and argument -- observation, authority, tradition, faith, intuition. What is striking about the willing student is that he viewed reasoning and argument in the classroom as somewhat exotic, a reaction that perhaps raises concerns as much about what his other classes were like as about his own thinking.

While fun, these stories are also fundamentally cautionary. They remind us that people sometimes do not think very well. They urge us to ask how often and under what conditions people think not-so-intelligently.

All too often, it seems. For a case in point, the health plan proposed by the Clinton administration in 1993 precipitated a persistent and acrimonious debate about its merits. How carefully has that debate been conducted through the media? The Annenberg Public Policy Center at the University of Pennsylvania undertook an analysis of 125 print and 73 broadcast ads drawn from both sides of the question. The analysis disclosed that about 60 percent of the broadcast ads and 30 percent of the print ads were false, misleading, or unfair. The ads tended to question the integrity and goodwill of people on the other side of the issue. The anti-Clinton ads played upon five fears: increased taxation, rationing, bureaucracy and government control, diminished choice of doctors, and massive job loss. The pro-Clinton ads tended to exaggerate the number of people harmed by present medical insurance practices and the rate of growth of medical costs, as well as suggesting that Republican programs offered people no help.

A cynical view of this would take it as no more than cutthroat politics. It is certainly that, but it also tells us something about ourselves. Political and special interest groups offer such thin and biased arguments because many people are moved by them. People of generally good character commonly fail to investigate or even ponder the other side of issues about which they have come to feel strongly.

Because of narrow and biased thinking, major political figures occasionally even make decisions contrary to their own best interests. The noted historian Barbara Tuchman has written an entire book about this. In her The March of Folly, she traces out episode after episode in history where the key players made disastrous mistakes that, she argues, they could in principle have anticipated and avoided. One of these episodes was the prolonged United States involvement in Vietnam. In Chapter 6, I discuss examples from Tuchman's work further, along with a number of other cases of faulty thinking.

If less-than-intelligent thought and action stain the world of politics, other slices of society fare no better. In the world of business, the savings and loan scandal of the late 1980s marked a clear occasion where acquisitiveness ran too far ahead of prudence. One can also ponder the Edsel and the efforts of the Coca-Cola Company to replace the old Coke with a new and sweeter one. In the medical world, quackery proves to be a persistent problem as people driven by fear spend dearly on hopeless remedies. At the same time, grateful as we all must be for modern medicine, there are some valid medical malpractice suits that point up the occasionally shaky character of medical reasoning.

In the legal world, dubious evidence and argument demonstrably win the day from time to time. A case in point appears in documentary filmmaker Errol Morris's well-known The Thin Blue Line. The film focuses on the 1976 murder of a policeman by the driver of a car stopped for a minor traffic violation. In his 1988 film, made more than a decade after the event, Morris exposed the lapses of evidence and manipulation of logic involved in convicting Randall Adams, the supposed driver, and sentencing him to life imprisonment. The film led to reopening the case and setting aside Adams' conviction.

Were these avoidable errors or were people thinking absolutely as well as they could with the information they had at the time? While lack of information certainly can be a factor, shortfalls of imagination and critical insight usually seem to play a role in such circumstances. In a 1994 article in Educational Researcher, Keith Stanovich of the Ontario Institute for Studies in Education pointed up such concerns by proposing a label for them: dysrationalia. He defined the syndrome of dysrationalia as follows:

Dysrationalia is the inability to think and behave rationally despite adequate intelligence [in the sense of IQ]. It is a general term that refers to a heterogeneous group of disorders manifested by significant difficulties in belief formation, in the assessment of belief consistency, or in the determination of action to achieve one's goals... The key diagnostic criterion for dysrationalia is a level of rationality, as demonstrated in thinking and behavior, that is significantly below the level of the individual's intellectual capacity.

Stanovich argues that people often think much more poorly than they can. Is this really so? Why does it happen? And can anything be done about it?

The answer developed in the following chapters comes down to this. While lapses in thinking are hearteningly rare in some circumstances, they are all too common in others. People think quite well and behave quite intelligently much of the time -- so far so good. However, when faced with situations that are complex or novel or invite bias, people often think quite poorly, falling into one of several "intelligence traps." These traps figure both in the broad reach of public and political affairs and in the fine texture of everyday events. People get trapped into not-so-intelligent behavior in numerous corners of their lives -- an educational opportunity missed or dismissed, a casual purchase, a hasty marriage, a job too easily abandoned, a task too stubbornly pursued, a bad risk taken, a good risk avoided. Life is peppered with such situations. They will always be there, but putting learnable intelligence to work can help to keep them at bay. We need the revolution.

An Evolutionary Double Bind

But why do we need any such revolution? Why do we find ourselves in this fix? There is something distinctly odd about not performing up to par in our day-to-day thinking, and still more so in high-stakes circumstances. We are sophisticated organisms, the product of millions of years of evolution, with intelligence our most conspicuous and powerful adaptation. How can it be, as Stanovich puts it, that we commonly function below the level of our intellectual capacity?

The answer I will develop later in this book is that we as a species are caught in an evolutionary double bind. Along with our intellectual strengths come inherent intellectual weaknesses. To preview ideas detailed later: Our minds function in a very pattern-driven way. As we go through life, puzzle out problems, and gain experience, we store up patterns that work well for us. In meeting new situations, we automatically try to make a match to what we know and select a pattern from our storehouse that might apply. This matching process gets influenced not only by what patterns we have stored up but also by our goals, prejudices, and passions.

All this works very effectively most of the time. It keeps us functioning efficiently in light of our past experience. But inherent in the process is a tendency toward rapid, stereotyped, less appropriate responses when situations are unusually complex or novel or when they evoke biases. This shortfall is built into the very nature of the way our minds work.

It may seem odd that a fundamentally adaptive system would prove self-limiting in some ways. However, this is actually quite common. The mammalian immune system provides an excellent case in point. Most people are aware that the immune system detects invasive organisms and constructs countermeasures tailored to them. It is a powerful line of defense against disease, contributing greatly to our survival. However, fewer people are aware that a number of maladies called autoimmune diseases result when a person's own white blood cells fail to recognize normal cells and attack some constituent of the body itself. These include pernicious anemia (caused by a deficiency of vitamin B12), thyrotoxicosis (the condition resulting from an overactivity of the thyroid gland), systemic lupus erythematosus, some types of hepatitis, many forms of arthritis, some forms of chronic liver and kidney disease, and possibly even diabetes. In other words, although fundamentally highly adaptive, the immune system is not perfect. The same can be said of the natural pattern-making proclivities of the human mind.

There is a way out of this double bind, however. Perhaps we can learn to make calculatedly better use of our own minds, not always surrendering to our quick pattern-making proclivities but sometimes working strategically against them for the sake of more critical and creative thinking.

The idea that we can learn to put our thoughts in better order is as old as Plato and Aristotle, and as new as widespread contemporary efforts to teach more effective thinking practices in schools and seminars. It may seem an artificial enterprise -- and it is, because it works against natural proclivities. But for all its artificiality, it is no more inappropriate than people striving for a better diet or a healthier pattern of physical effort contrary to their natural leaning.

Three Mindware Questions

Basically this book is an argument for learnable intelligence. It's an argument that a revolution in our conceptions of intelligence is underway, that it's warranted, that we need it, and that we can carry it further. I want to outline a theory of learnable intelligence that says to what extent and in what ways our intelligence can be amplified.

One way to track the debate around this revolution is to use a metaphor, the idea of mindware. What is mindware? It is whatever people can learn that helps them to solve problems, make decisions, understand difficult concepts, and perform other intellectually demanding tasks better. To draw an analogy with computers, mindware is software for the mind -- the programs you run in your mind that enable you to do useful things with data stored in your memory. Or to make a more prosaic but equally apt analogy with cooking, mindware is like kitchenware, the equipment of the mind, the pots and pans, measuring spoons and spatulas, egg beaters and corkscrews that enable people to cook up something compelling out of the information at their disposal. Or to put it yet another way, mindware is whatever knowledge, understanding, and attitudes you have that support you in making the best use of your mind.

The idea of mindware suggests three key questions about intelligence that will help us to track the arguments about its nature and progress toward an expanded view of intelligence. I will call them the mindware questions:

MWQ #1. What mechanisms underlie intelligence?

MWQ #2. Can people learn more intelligence?

MWQ #3. What aspects of intelligence especially need attention?


The first asks about the mechanisms of intelligence -- the neural structures; the cognitive processes; the roles of knowledge, skill, belief, and attitude; whatever contributes to intelligent behavior. The second asks whether, in light of those mechanisms, intelligence or some aspects of intelligence can be learned. If intelligence is at least partly learnable, then the third question comes into play: What kinds or aspects or facets of intelligence most call for attention? To put it in mindware terms, what sorts of mindware do we most need?

Throughout this book, I will use the three mindware questions as a yardstick for measuring progress toward a theory of learnable intelligence. Goddard's list defines our starting point, because it suggests answers that grant nothing to the notion of learnable intelligence:

MWQ #1. What mechanisms underlie intelligence?

* The efficiency of the neural system as an information processing device, which is largely genetically determined, Goddard says.

MWQ #2. Can people learn more intelligence?

* No, because genetics determines basic intelligence.

MWQ #3. What aspects of intelligence especially need attention?

* The question is pointless, given the answer to question 2.

While those are Goddard's answers, throughout the three parts of this book I will build steadily toward a different set of answers with ample room for learnable intelligence. In particular, Part I, In Search of Intelligence, looks at the historical roots of the theory of intelligence, examines the case for Goddard's list, reviews fundamental criticisms of it, and synthesizes all this into the idea that we need to recognize three dimensions of intelligence. The first is neural intelligence, the contribution of neural efficiency to intelligent behavior, and a nod to Goddard's tradition. The second is experiential intelligence, the contribution of a storehouse of personal experience in diverse situations to intelligent behavior. The third is reflective intelligence, the contribution of knowledge, understanding, and attitudes about how to use our minds to intelligent behavior -- in other words, the contribution of mindware. The second and third are both learnable and make up learnable intelligence.

Part II, Learnable Intelligence on Trial, focuses on the need for and prospects of learnable intelligence, especially reflective intelligence. It details the case for troublingly frequent shortfalls in human thinking sketched earlier in this chapter. It provides an analysis of the evolutionary double bind that often leads us to think less effectively than our capacities allow. Finally, it reviews some well-known efforts to teach people better thinking -- to teach them reflective intelligence -- gauges their success, and analyzes a persistent debate about whether people can actually learn to think better.

Part III, What the Mind is Made Of, ranges beyond current theory to introduce a new way of conceptualizing what reflective intelligence is and how it operates. You can "know your way around" the good use of your mind in much the same sense that you can know your way around your neighborhood, the game of baseball, or the stock market. To acquire such knowledge, people can "learn their way around" important kinds of thinking, gaining concepts, beliefs, feelings, and patterns of action that allow them to handle problem solving, decision making, explanation, and other intellectually demanding activities better. This view is very different from the usual emphatically process-centered view of reflective intelligence, which sees it as a bundle of cognitive processes that need organization and expansion. Finally, in a concluding chapter, I explore areas of reflective intelligence that may have great social importance in the next decades.

Through twists and turns of evidence and argument, analysis and synthesis, in the course of this book I hope to make the case that most people can learn to function in a substantially more intelligent way. We can beat the evolutionary double bind and its intelligence traps if we invest our efforts in cultivating our intelligence.

Isn't the Revolution Over Yet?

But why make so much of this revolution in our conception of intelligence? Isn't it old news? Some might count this revolution as virtually over, saying "IQ? We don't believe in that any more!" As to cultivating better thinking, books and seminars on creative thinking are popular in business and other settings. Several states in the United States include thinking skills as part of their educational goals. Many textbooks have thinking-oriented questions at the end of each chapter. It looks as though people already believe in learnable intelligence and have set out to do something about it.

I cherish these signs of a thaw in our attitudes toward human intelligence. But, as a person who works professionally in this area, I have to view them more as the crocus that blooms here and there toward the end of winter than as spring itself. In point of fact, books and seminars on creative and other kinds of thinking reach relatively few people and often with rather poor models of better thinking. Very few schools mount persistent and effective efforts to cultivate students' thinking, despite the mention of thinking on state agendas. The thinking-oriented questions in textbooks by and large only make a token contribution. The idea of learnable intelligence has not penetrated our society widely and deeply.

Why not? Why is most of the revolution yet to come?

One reason is taken up in Chapter 4 and again from a different perspective in Chapter 9: Some psychologists skeptical about viewing intelligence as IQ are also skeptical about the learnability of reflective intelligence. While rejecting the idea that intelligence is mostly a matter of neural efficiency, they hold that especially intelligent behavior is a matter of learning how to think and act within innumerable particular situations, within different subject matters or professions for instance, what I call experiential intelligence. Since what is learnable about intelligence lacks generality, they say, people cannot expect to learn to be more intelligent in general, only situation by situation. I will argue later that this view is mistaken, although it has a point to make.

Another reason why the revolution is not over is that the old IQ lives! Many people believe firmly in intelligence as a fixed, genetically determined characteristic of themselves and others. Historically, many people have thought that some racial groups differ in their fundamental intellectual capacities. It is unpopular to express such a view today, but certainly the attitude persists. More broadly, a view of intelligence as fixed pervades our reasoning about human performance. For instance, when we or our children have difficulty with a demanding intellectual task, we commonly say "Well, it's just too hard. You either get it or you don't." We attribute failure to a fundamental lack of ability. Likewise, when people succeed conspicuously, we laud their talent and envy their genes. Curiously, this pattern of thinking figures much more in United States culture than, for instance, in Japan. Research shows that Japanese parents lay much more emphasis on the role of effort in success: The way to deal with a difficult problem or a puzzling concept is to persevere systematically until you have mastered it.

Anyone who doubts that IQ has contemporary champions need only turn to The Bell Curve, a 1994 book by Richard Herrnstein, late professor of psychology at Harvard University, and Charles Murray, a political scientist at the American Enterprise Institute. The Bell Curve takes IQ as the only reasonable conception of general cognitive ability and focuses on the relationship between low IQ and social ills such as poverty, unwed motherhood, crime, welfare dependency, and chronic unemployment. Among other things, the authors argue that racial differences in IQ are probably in part genetic and that education at present has little prospect of helping people to become more intelligent. While their position is not as stark as that of Goddard, it leans in his direction. One might almost call Herrnstein and Murray the new Goddards.

I disagree with Herrnstein and Murray on education, race, and other matters, as discussed in later chapters. Although intellectual talent is certainly a real phenomenon, I will argue that most people can learn to use whatever intellectual talents they have much better than they normally do. But whatever we make of Herrnstein's and Murray's viewpoint, its mere presence on the modern scene testifies to the continuing battle over the nature of human intelligence.

Another reason the revolution is not over is simple confusion. The public is not given a clear picture of what IQ theorists like Herrnstein and Murray say, what other scientists say about intelligence, and how it all fits together. Indeed, there is little appreciation of IQ as a construct, limited though it may be. In a recent conversation with a psychologist friend, I was startled to discover that he thought IQ was a completely discredited concept. He knew little about the basic pattern of findings behind the notion of IQ. On the contrary, IQ is probably the single most robust concept in the theory of intelligence, and any effort to expand our conception of intelligence needs to come to terms with it. In Part I of this book, I will try to make crystal clear what IQ means, what it implies, and what it does not imply.

Since the revolution is not over, perhaps to speak of a Copernican revolution is too hasty and too brash. It remains to be seen which theories will endure scientifically and empower us practically. Also, if a revolution is underway in our conceptions of intelligence, it is a leisurely one. Challenges to what I have called the classic view of intelligence are by no means new. The first scientist to measure intelligence, the French psychologist Alfred Binet, saw intelligence as a potpourri of different abilities rather than one genetically determined essence. But researchers in the United States such as Goddard, Terman, and Spearman promoted what came to be the dominant and classic position. So, from its very inception, the notion of intelligence has been controversial.

One can argue about whether all the earmarks of a Copernican revolution concerning intelligence are apparent. But certainly the spirit is right. To speak of a Copernican revolution evokes a spirit of adventure, of worlds in the making. It captures some of the excitement that many working in this area of human intelligence feel. It honors what may emerge as a new science fundamental to our understanding of ourselves and our best use of our own mental resources.

The Affirmative Revolution

From Copernicus on down, most Copernican revolutions have diminished our human sense of centrality and power, even as they have given us more panoramic views of the universe in which we live. Copernicus taught us that we do not reside at the center of things. The Darwinian revolution taught us that we evolved from primitive stock. The Freudian revolution declared our subjugation to the dark forces of the id within us.

The revolution underway in our conceptions of intelligence is certainly more modest than any of these. It will not shake things up as much. There is no startling proposal here that porpoises or planaria or poplar trees can acquire intelligence. The claim is much more modest: Human beings, manifestly the most intelligent life form on the planet, can become even more so. From a cosmic perspective, this is no great surprise.

Although a more modest revolution, in at least one respect it strikes a happy contrast with its better known siblings in the Copernican family. Far from taxing away yet more of the power and position we thought we had, this revolution has a restorative character. It says that we are not so boxed in by our genetic heritage. On the contrary, intelligence is something that can be cultivated and acquired. People can learn to think and act more intelligently.

Although the technical debates and educational reforms underlying this revolution occur far from the conventional campaigns, polls, and maneuvering of politics, this revolution even has its political side. Luis Alberto Machado, a minister of state of Venezuela, several years ago fostered the development of nationwide programs dedicated specifically to enhancing intelligence. In doing so, he spoke and wrote of the "revolution of intelligence." Machado saw a more cultivated intelligence as a political force, an enlightened electorate as a sharpened blade that could slice through bureaucratic complexities toward a more fruitful society. Not only individually but in our collectivity we can perhaps learn to think and act more intelligently.

This news, if correct, could not come at a better time. In the transition to the next millennium, the human race faces daunting problems on a global scale -- famine, overpopulation, political strife, ecological disintegration. At home, in schools, and in the workplace, people face more personal problems -- violence, harassment, rivalries, discrimination -- as well as the spiraling technical challenges of a complex civilization. As never before, the human race needs all the wit it can muster.

This revolution in intelligence is a revolution well-timed for the twenty-first century. At last, here is a Copernican revolution in the making that is empowering rather than disempowering, heartening rather than disheartening, the hopeful new science of learnable intelligence. May the revolution succeed, because we very much need it!

Copyright © 1995 by David N. Perkins

About The Author

David Perkins, Ph.D., is co-director of Harvard Project Zero, one of the foremost research centers in the country on children’s learning, and a professor at the Harvard Graduate School of Education.

Product Details

  • Publisher: Free Press (March 1, 1995)
  • Length: 390 pages
  • ISBN-13: 9781439105610


Raves and Reviews

"Moving beyond the tired debates within the IQ community, David Perkins daringly places mindfulness and reflection at the center of intelligence. With insight and humanity he shows us how we can all use our minds more effectively." -- Howard Gardner, Harvard University, author of Frames of Mind

"Outsmarting IQ, more than any other book currently available, combines in an integrative and exciting way what we know on the one hand about thinking, and on the other about intelligence. Perkins argues that theorists of intelligence need to go beyond not just IQ but mental processes as well, and to look at dispositions in thinking, which are both teachable and readily learnable. This is a book that anyone interested in intelligence will want to read." -- Robert J. Sternberg, Yale University, author of Defying the Crowd and Beyond IQ: The Triarchic Mind

"A brilliant and also a hopeful book about learnable -- and even teachable -- intelligence. Everyone concerned with finding a path through the IQ wars -- and that means parents and policymakers as well as teachers -- ought to read this book by a master educator." -- Israel Scheffler, Harvard University

"For years now, David Perkins has been commenting insightfully on developments in the field of cognitive education. How appropriate now that he brings us this long-awaited study of the contrast between the increasingly problematic concept of intelligence and the rapid strengthening of the concept of thinking! Parents, teachers, and administrators will all enthusiastically welcome this popularly written introduction to the educational approaches that, in the years ahead, will shape the way we learn and think. Outsmarting IQ provides a useful map of the newly emerging terrain that responsible educators should know." -- Matthew Lipman, Montclair State University, Director, Institute for the Advancement of Philosophy for Children

"If intelligence is learnable, as Perkins so persuasively and refreshingly argues in Outsmarting IQ, then reading this book is indeed going to make you more intelligent. And this process of becoming smarter by reading the book is going to be a genuine, intellectually entertaining adventure." -- Gavriel Saloman, Haifa University, editor of Educational Psychologist
