Chapter One: The Corporation's Rise to Dominance
Over the last 150 years the corporation has risen from relative obscurity to become the world's dominant economic institution. Today, corporations govern our lives. They determine what we eat, what we watch, what we wear, where we work, and what we do. We are inescapably surrounded by their culture, iconography, and ideology. And, like the church and the monarchy in other times, they posture as infallible and omnipotent, glorifying themselves in imposing buildings and elaborate displays. Increasingly, corporations dictate the decisions of their supposed overseers in government and control domains of society once firmly embedded within the public sphere. The corporation's dramatic rise to dominance is one of the remarkable events of modern history, not least because of the institution's inauspicious beginnings.
Long before Enron's scandalous collapse, the corporation, a fledgling institution, was engulfed in corruption and fraud. Throughout the late seventeenth and early eighteenth centuries, stockbrokers, known as "jobbers," prowled the infamous coffee shops of London's Exchange Alley, a maze of lanes between Lombard Street, Cornhill, and Birchin Lane, in search of credulous investors to whom they could sell shares in bogus companies. Such companies flourished briefly, nourished by speculation, and then quickly collapsed. Ninety-three of them traded between 1690 and 1695. By 1698, only twenty were left. In 1696 the commissioners of trade for England reported that the corporate form had been "wholly perverted" by the sale of company stock "to ignorant men, drawn in by the reputation, falsely raised and artfully spread, concerning the thriving state of [the] stock." Though the commissioners were appalled, they likely were not surprised.
Businessmen and politicians had been suspicious of the corporation from the time it first emerged in the late sixteenth century. Unlike the prevailing partnership form, in which relatively small groups of men, bonded together by personal loyalties and mutual trust, pooled their resources to set up businesses they ran as well as owned, the corporation separated ownership from management -- one group of people, directors and managers, ran the firm, while another group, shareholders, owned it. That unique design was believed by many to be a recipe for corruption and scandal. Adam Smith warned in The Wealth of Nations that because managers could not be trusted to steward "other people's money," "negligence and profusion" would inevitably result when businesses organized as corporations. Indeed, by the time he wrote those words in 1776, the corporation had been banned in England for more than fifty years. In 1720, the English Parliament, fed up with the epidemic of corporate high jinks plaguing Exchange Alley, had outlawed the corporation (though with some exceptions). It was the notorious collapse of the South Sea Company that had prompted it to act.
Formed in 1710 to carry on exclusive trade, including trade in slaves, with the Spanish colonies of South America, the South Sea Company was a scam from the very start. Its directors, some of the leading lights of political society, knew little about South America, had only the scantiest connection to the continent (apparently, one of them had a cousin who lived in Buenos Aires), and must have known that the King of Spain would refuse to grant them the necessary rights to trade in his South American colonies. As one director conceded, "unless the Spaniards are to be divested of common sense...abandoning their own commerce, throwing away the only valuable stake they have left in the world, and, in short, bent on their own ruin," they would never part with the exclusive power to trade in their own colonies. Yet the directors of the South Sea Company promised potential investors "fabulous profits" and mountains of gold and silver in exchange for common British exports, such as Cheshire cheese, sealing wax, and pickles.
Investors flocked to buy the company's stock, which rose dramatically, by sixfold in one year, and then quickly plummeted as shareholders, realizing that the company was worthless, panicked and sold. In 1720 -- the year a major plague hit Europe, public anxiety about which "was heightened," according to one historian, "by a superstitious fear that it had been sent as a judgment on human materialism" -- the South Sea Company collapsed. Fortunes were lost, lives were ruined, one of the company's directors, John Blunt, was shot by an angry shareholder, mobs crowded Westminster, and the king hastened back to London from his country retreat to deal with the crisis. The directors of the South Sea Company were called before Parliament, where they were fined, and some of them jailed, for "notorious fraud and breach of trust." Though one parliamentarian demanded they be sewn up in sacks, along with snakes and monkeys, and then drowned, they were, for the most part, spared harsh punishment. As for the corporation itself, in 1720 Parliament passed the Bubble Act, which made it a criminal offense to create a company "presuming to be a corporate body," and to issue "transferable stocks without legal authority."
Today, in the wake of corporate scandals similar to and every bit as nefarious as the South Sea bubble, it is unthinkable that a government would ban the corporate form. Even modest reforms -- such as a law requiring companies to list employee stock options as expenses in their financial reports, which might avoid the kind of misleadingly rosy financial statements that have fueled recent scandals -- seem unlikely from a U.S. federal government that has failed to match its strong words at the time of the scandals with equally strong actions. Though the Sarbanes-Oxley Act, signed into law in 2002 to redress some of the more blatant problems of corporate governance and accounting, provides welcome remedies, at least on paper, the federal government's general response to corporate scandals has been sluggish and timid at best. Comparing that response with the English Parliament's swift and draconian measures of 1720 reveals how much power corporations have amassed over the last three hundred years, and how far that power has weakened government's ability to control them. A fledgling institution that could be banned with the stroke of a legislative pen in 1720, the corporation now dominates society and government.
How did it become so powerful?
The genius of the corporation as a business form, and the reason for its remarkable rise over the last three centuries, was -- and is -- its capacity to combine the capital, and thus the economic power, of unlimited numbers of people. Joint-stock companies emerged in the sixteenth century, by which time it was clear that partnerships, limited to drawing capital from the relatively few people who could practicably run a business together, were inadequate for financing the new, though still rare, large-scale enterprises of nascent industrialization. In 1564 the Company of the Mines Royal was created as a joint-stock company, financed by twenty-four shares sold for £1,200 each; in 1565, the Company of Mineral and Battery Works raised its capital by making calls on thirty-six shares it had previously issued. The New River Company was formed as a joint-stock company in 1606 to transport fresh water to London, as were a number of other utilities. Fifteen joint-stock companies were operating in England in 1688, though none with more than a few hundred members. Corporations began to proliferate during the final decade of the seventeenth century, and the total amount of investment in joint-stock companies doubled as the business form became a popular vehicle for financing colonial enterprises. The partnership still remained the dominant form for organizing businesses, however, though the corporation would steadily gain on it and then overtake it.
In 1712, Thomas Newcomen invented a steam-driven machine to pump water out of a coal mine and unwittingly started the industrial revolution. Over the next century, steam power fueled the development of large-scale industry in England and the United States, expanding the scope of operations in mines, textiles (and the associated trades of bleaching, calico printing, dyeing, and calendering), mills, breweries, and distilleries. Corporations multiplied as these new larger-scale undertakings demanded significantly more capital investment than partnerships could raise. In postrevolutionary America, between 1781 and 1790, the number of corporations grew tenfold, from 33 to 328.
In England too, with the Bubble Act's repeal in 1825 and incorporation once again legally permitted, the number of corporations grew dramatically, and shady dealing and bubbles were once again rife in the business world. Joint-stock companies quickly became "the fashion of the age," as the novelist Sir Walter Scott observed at the time, and as such were fitting subjects for satire. Scott wryly pointed out that, as a shareholder in a corporation, an investor could make money by spending it (indeed, he likened the corporation to a machine that could fuel its operations with its own waste):
Such a person [an investor] buys his bread from his own Baking Company, his milk and cheese from his own Dairy Company...drinks an additional bottle of wine for the benefit of the General Wine Importation Company, of which he is himself a member. Every act, which would otherwise be one of mere extravagance, is, to such a person...reconciled to prudence. Even if the price of the article consumed be extravagant, and the quality indifferent, the person, who is in a manner his own customer, is only imposed upon for his own benefit. Nay, if the Joint-stock Company of Undertakers shall unite with the medical faculty...under the firm of Death and the Doctor, the shareholder might contrive to secure his heirs a handsome slice of his own death-bed and funeral expenses.
At the moment Scott was satirizing it, however, the corporation was poised to begin its ascent to dominance over the economy and society. And it would do so with the help of a new kind of steam-driven engine: the steam locomotive.
America's nineteenth-century railroad barons, men lionized by some and vilified by others, were the true creators of the modern corporate era. Because railways were mammoth undertakings requiring huge amounts of capital investment -- to lay track, manufacture rolling stock, and operate and maintain systems -- the industry quickly came to rely on the corporate form for financing its operations. In the United States, railway construction boomed during the 1850s and then exploded again after the Civil War, with more than one hundred thousand miles of track laid between 1865 and 1885. As the industry grew, so did the number of corporations. The same was true in England, where, between 1825 and 1849, the amount of capital raised by railways, mainly through joint-stock companies, increased from £200,000 to £230 million, more than one thousand-fold.
"One of the most important by-products of the introduction and extension of the railway system," observed M. C. Reed in Railways and the Growth of the Capital Market, was the part it played in "assisting the development of a national market for company securities." Railways, in both the United States and England, demanded more capital investment than could be provided by the relatively small coterie of wealthy men who invested in corporations at the start of the nineteenth century. By the middle of the century, with railway stocks flooding markets in both countries, middle-class people began, for the first time, to invest in corporate shares. As The Economist pronounced at the time, "everyone was in the stocks now...needy clerks, poor tradesman's apprentices, discarded service men and bankrupts -- all have entered the ranks of the great monied interest."
One barrier remained to broader public participation in stock markets, however: no matter how much, or how little, a person had invested in a company, he or she was personally liable, without limit, for the company's debts. Investors' homes, savings, and other personal assets would be exposed to claims by creditors if a company failed, meaning that a person risked financial ruin simply by owning shares in a company. Stockholding could not become a truly attractive option for the general public until that risk was removed, which it soon was. By the middle of the nineteenth century, business leaders and politicians broadly advocated changing the law to limit the liability of shareholders to the amounts they had invested in a company. If a person bought $100 worth of shares, they reasoned, he or she should be immune to liability for anything beyond that, regardless of what happened to the company. Supporters of "limited liability," as the concept came to be known, defended it as being necessary to attract middle-class investors into the stock market. "Limited liability would allow those of moderate means to take shares in investments with their richer neighbors," reported the Select Committee on Partnerships (England) in 1851, and that, in turn, would mean "their self-respect [would be] upheld, their intelligence encouraged and an additional motive given to preserve order and respect for the laws of property."
Ending class conflict by co-opting workers into the capitalist system, a goal the committee's latter comment subtly alludes to, was offered as a political justification for limited liability, alongside the economic one of expanding the pool of potential investors. An 1853 article in the Edinburgh Journal put the argument this way:
The workman does not understand the position of the capitalist. The remedy is, to put him in the way by practical experience....Working-men, once enabled to act together as the owners of a joint capital, will soon find their whole view of the relations between capital and labour undergo a radical alteration. They will learn what anxiety and toil it costs even to hold a small concern together in tolerable order...the middle and operative classes would derive great material and social good by the exercise of the joint-stock principle.
Limited liability had its detractors, however. On both sides of the Atlantic, critics opposed it mainly on moral grounds. Because it allowed investors to escape unscathed from their companies' failures, the critics believed it would undermine personal moral responsibility, a value that had governed the commercial world for centuries. With limited liability in place, investors could be recklessly unconcerned about their companies' fortunes, as Mr. Goldbury, a fictitious company promoter, explained in song in Gilbert and Sullivan's sharp satire of the corporation, Utopia Ltd:
Though a Rothschild you may be, in your own capacity,
As a Company you've come to utter sorrow,
But the liquidators say, "Never mind -- you needn't pay,"
So you start another Company Tomorrow!
People worried that limited liability would, as one parliamentarian speaking against its introduction in England said, attack "the first and most natural principle of commercial legislation...that every man was bound to pay the debts he had contracted, so long as he was able to do so" and that it would "enable persons to embark in trade with a limited chance of loss, but with an unlimited chance of gain" and thus encourage "a system of vicious and improvident speculation."
Despite such objections, limited liability was entrenched in corporate law, in England in 1856 and in the United States over the latter half of the nineteenth century (though at different times in different states). With the risks of investment in stocks now removed, at least in terms of how much money investors might be forced to lose, the way was cleared for broad popular participation in stock markets and for investors to diversify their holdings. Still, publicly traded corporations were relatively rare in the United States up until the end of the nineteenth century. Beyond the railway industry, leading companies tended to be family-owned, and if shares existed at all they were traded on a direct person-to-person basis, not in stock markets. By the early years of the twentieth century, however, large publicly traded corporations had become fixtures on the economic landscape.
Over two short decades, beginning in the 1890s, the corporation underwent a revolutionary transformation. It all started when New Jersey and Delaware ("the first state to be known as the home of corporations," according to its current secretary of state for corporations), sought to attract valuable incorporation business to their jurisdictions by jettisoning unpopular restrictions from their corporate laws. Among other things, they
- Repealed the rules that required businesses to incorporate only for narrowly defined purposes, to exist only for limited durations, and to operate only in particular locations;
- Substantially loosened controls on mergers and acquisitions; and
- Abolished the rule that one company could not own stock in another.
Other states, not wanting to lose out in the competition for incorporation business, soon followed with similar revisions to their laws. The changes prompted a flurry of incorporations as businesses sought the new freedoms and powers incorporation would grant them. Soon, however, with most meaningful constraints on mergers and acquisitions gone, a large number of small and medium-size corporations were quickly absorbed into a small number of very large ones -- 1,800 corporations were consolidated into 157 between 1898 and 1904. In less than a decade the U.S. economy had been transformed from one in which individually owned enterprises competed freely among themselves into one dominated by a relatively few huge corporations, each owned by many shareholders. The era of corporate capitalism had begun.
"Every tie in the road is the grave of a small stockholder," stated Newton Booth, a noted antimonopolist and railroad reformer, in 1873, when he was governor of California. Booth's message was clear: in large corporations stockholders had little, if any, power and control. By the early twentieth century, corporations were typically combinations of thousands, even hundreds of thousands, of broadly dispersed, anonymous shareholders. Unable to influence managerial decisions as individuals because their power was too diluted, they were also too broadly dispersed to act collectively. What shareholders thus lost in power over and control of large corporations, managers gained. In 1913, a congressional committee set up to investigate the "money trust," led by Congressman Arsène Pujo, reported:
None of the witnesses called was able to name an instance in the history of the country in which the stockholders had succeeded in overthrowing an existing management in any large corporation, nor does it appear that stockholders have ever even succeeded in so far as to secure the investigation of an existing management of a corporation to ascertain whether it has been well or honestly managed....[In] all great corporations with numerous and widely scattered stockholders...the management is virtually self-perpetuating and is able through the power of patronage, the indifference of stockholders and other influences to control a majority of stock.
Shareholders had, for all practical purposes, disappeared from the corporations they owned.
With shareholders, real people, effectively gone from corporations, the law had to find someone else, some other person, to assume the legal rights and duties firms needed to operate in the economy. That "person" turned out to be the corporation itself. As early as 1793, one corporate scholar outlined the logic of corporate personhood when he defined the corporation as
a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by the policy of law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common.
In partnerships, another scholar noted in 1825, "the law looks to the individuals"; in corporations, on the other hand, "it sees only the creature of the charter, the body corporate, and knows not the individuals."
By the end of the nineteenth century, through a bizarre legal alchemy, courts had fully transformed the corporation into a "person," with its own identity, separate from the flesh-and-blood people who were its owners and managers and empowered, like a real person, to conduct business in its own name, acquire assets, employ workers, pay taxes, and go to court to assert its rights and defend its actions. The corporate person had taken the place, at least in law, of the real people who owned corporations. Now viewed as an entity, "not imaginary or fictitious, but real, not artificial but natural," as it was described by one law professor in 1911, the corporation had been reconceived as a free and independent being. Gone was the centuries-old "grant theory," which had conceived of corporations as instruments of government policy and as dependent upon government bodies to create them and enable them to function. Along with the grant theory had also gone all rationales for encumbering corporations with burdensome restrictions. Conceived as natural entities analogous to human beings, the logic now ran, corporations should be created as free individuals. That logic informed the initiatives in New Jersey and Delaware, as well as the Supreme Court's decision in 1886 that, because they were "persons," corporations should be protected by the Fourteenth Amendment's rights to "due process of law" and "equal protection of the laws," rights originally entrenched in the Constitution to protect freed slaves.
As the corporation's size and power grew, so did the need to assuage people's fears of it. The corporation suffered its first full-blown legitimacy crisis in the wake of the early-twentieth-century merger movement, when, for the first time, many Americans realized that corporations, now huge behemoths, threatened to overwhelm their social institutions and governments. Corporations were now widely viewed as soulless leviathans -- uncaring, impersonal, and amoral. Suddenly, they were vulnerable to popular discontent and organized dissent (especially from a growing labor movement), as calls for more government regulation and even their dismantling were increasingly common. Business leaders and public relations experts soon realized that the institution's new powers and privileges demanded new public relations strategies.
In 1908, AT&T, one of America's largest corporations at the time and the parent company of the Bell System, which had a monopoly on telephone services in the United States, launched an advertising campaign, the first of its kind, that aimed to persuade a skeptical public to like and accept the company. In much the same way that law had transformed the corporation into a "person" to compensate for the disappearance of the real people within it, AT&T's campaign imbued the company with human values in an effort to overcome people's suspicions of it as a soulless and inhuman entity. "Bigness," worried one vice president at AT&T, tended to squeeze out of the corporation "the human understanding, the human sympathy, the human contacts, and the natural human relationships." It had convinced "the general public [that] a corporation is a thing." Another AT&T official believed it was necessary "to make the people understand and love the company. Not merely to be consciously dependent upon it -- not merely regard it as a necessity -- not merely to take it for granted -- but to love it -- to hold real affection for it." From 1908 into the late 1930s, AT&T trumpeted itself as a "friend and neighbor" and sought to give itself a human face by featuring real people from the company in its advertising campaigns. Employees, particularly telephone operators and linemen, appeared regularly in the company's advertisements, as did shareholders. One magazine advertisement, entitled "Our Shareholders," depicts a woman, presumably a widow, examining her AT&T share certificates as her two young children look on; another pronounces AT&T "a new democracy of public service ownership" that is "owned directly by the people -- controlled not by one, but controlled by all."
Other major corporations soon followed AT&T's lead. General Motors, for example, ran advertisements that, in the words of the agency responsible for them, aimed "to personalize the institution by calling it a family."
"The word 'corporation' is cold, impersonal and subject to misunderstanding and distrust," noted Alfred Swayne, the GM executive in charge of institutional advertising at the time, but "'Family' is personal, human, friendly. This is our picture of General Motors -- a big congenial household."
By the end of World War I, some of America's leading corporations, among them General Electric, Eastman Kodak, National Cash Register, Standard Oil, U.S. Rubber, and the Goodyear Tire & Rubber Company, were busily crafting images of themselves as benevolent and socially responsible. "New Capitalism," the term used to describe the trend, softened corporations' images with promises of good corporate citizenship and practices of better wages and working conditions. With citizens demanding that governments rein in corporate power, and with labor militancy rife as returning World War I veterans, who had risked their lives as soldiers, insisted upon better treatment as workers, proponents of the New Capitalism sought to demonstrate that corporations could be good without the coercive push of governments and unions.
A leader of the movement, Paul W. Litchfield, who presided over Goodyear Tire for thirty-two years through the middle part of the twentieth century, believed capitalism would not survive unless equality and cooperation between workers and capitalists replaced division and conflict. Though branded a socialist and a Marxist by some of his business peers at the time, Litchfield forged ahead with programs designed to promote the health, welfare, and education of his workers and their families, and to give his workers a greater voice in company affairs. One of his proudest achievements was a workers' Senate and House of Representatives, modeled after the national one, that had jurisdiction over employment issues, including wages. Litchfield defended his benevolent policies as necessary for Goodyear's success. "Goodyear has all about her the human quality," he said, "and it has been to this human quality fully as much as to her business methods, that Goodyear owes her meteoric rise in the ranks of American Industry."
Corporate social responsibility blossomed again during the 1930s as corporations suffered from adverse public opinion. Many people believed at the time that corporate greed and mismanagement had caused the Great Depression. They shared Justice Louis Brandeis's view, stated in a 1933 Supreme Court judgment, that corporations were "Frankenstein monsters" capable of doing evil. In response, business leaders embraced corporate social responsibility. It was the best strategy, they believed, to restore people's faith in corporations and reverse their growing fascination with big government. Gerard Swope, then president of General Electric, voiced a popular sentiment among big-business leaders when, in 1934, he said that "organized industry should take the lead, recognizing its responsibility to its employees, to the public, and to its shareholders rather than that democratic society should act through its government."
Adolf Berle and Gardiner Means had endorsed a similar idea two years earlier in their classic work The Modern Corporation and Private Property. The corporation, they argued, was "potentially (if not yet actually) the dominant institution of the modern world"; its managers had become "princes of industry," their companies akin to feudal fiefdoms. Because they had amassed such power over society, corporations and the men who managed them were now obliged to serve the interests of society as a whole, much as governments were, not just those of their shareholders. "[T]he 'control' of the great corporations should develop into a purely neutral technocracy," they wrote, "balancing a variety of claims by various groups in the community and assigning to each a portion of the income stream on the basis of public policy rather than private cupidity." Corporations would likely have to embrace this new approach, Berle and Means warned, "if the corporate system [was] to survive." Professor Edwin Dodd, another eminent scholar of the corporation at the time, was more skeptical about corporations becoming socially responsible, but he believed they risked losing their legitimacy, and thus their power, if they did not at least appear to do so. "Modern large-scale industry has given to the managers of our principal corporations enormous power," Dodd wrote in 1932 in the Harvard Law Review. "Desire to retain their present powers accordingly encourages [them] to adopt and disseminate the view that they are guardians of all the interests which the corporation affects and not merely servants of its absentee owners."
Despite corporate leaders' claims that they were capable of regulating themselves, beginning in 1933 President Franklin D. Roosevelt created the New Deal, a package of regulatory reforms designed to restore economic health by, among other things, curbing the powers and freedoms of corporations. As the first systematic attempt to regulate corporations and the foundation of the modern regulatory state, the New Deal was reviled by many business leaders at the time and even prompted a small group of them to plot a coup to overthrow Roosevelt's administration. Though the plot (which is more fully discussed in Chapter 4, as is the New Deal itself) failed, it was significant for reflecting the depth of hostility many business leaders felt for Roosevelt. The spirit of the New Deal, along with many of its regulatory regimes, nonetheless prevailed. For fifty years following its creation, through World War II, the postwar era, and the 1960s and 1970s, the growing power of corporations was offset, at least in part, by continued expansion of government regulation, trade unions, and social programs. Then, much as steam engines and railways had combined with new laws and ideologies to create the corporate behemoth one hundred years earlier, a new convergence of technology, law, and ideology -- economic globalization -- reversed the trend toward greater regulatory control of corporations and vaulted the corporation to unprecedented power and influence.
In 1973, the economy was shaken by a surge in oil prices due to the formation of the Organization of the Petroleum Exporting Countries (OPEC), which operated in cartel-like fashion to control the world's oil supply. High unemployment, runaway inflation, and deep recession soon followed. Prevailing economic policies, which, true to their New Deal lineage, had favored regulation and other modes of government intervention, came under sustained attack for their inability to deal with the crisis. Governments throughout the West began to embrace neoliberalism, which, like its laissez-faire predecessor, celebrated economic freedom for individuals and corporations and prescribed a limited role for government in the economy. When Margaret Thatcher became prime minister of Britain in 1979, and then Ronald Reagan president of the United States in 1980, it was clear that the economic era inspired by New Deal ideas and policies had come to an end. Over the next two decades, governments pursued neoliberalism's core policies of deregulation, privatization, spending cuts, and inflation reduction with increasing vigor. By the early 1990s, neoliberalism had become an economic orthodoxy.
In the meantime, technological innovations in transportation and communications had profoundly enhanced corporations' mobility and portability. Fast and large jet planes and new container-shipping techniques (which allowed for sea shipping to be smoothly integrated with rail and truck networks) drove down the costs and increased the speed and efficiency of transportation. Communications were similarly improved with innovations to long-distance phone networks, telex and fax technology, and, more recently, the creation of the Internet. Corporations, no longer tethered to their home jurisdictions, could now scour the earth for locations to produce goods and services at substantially lower costs. They could buy labor in poor countries, where it was cheap and where environmental standards were weak, and sell their products in wealthy countries, where people had disposable income and were prepared to pay decent prices for them. Costly tariffs had gradually come down since 1948, when the General Agreement on Tariffs and Trade (GATT) was introduced, enabling corporations to take advantage of their newfound mobility without suffering punishing financial penalties.
By leveraging their freedom from the bonds of location, corporations could now dictate the economic policies of governments. As Clive Allen, a vice president at Nortel Networks, a leading Canadian high-tech company, explained, companies "owe no allegiance to Canada....Just because we [Nortel Networks] were born there doesn't mean we'll remain there....The place has to remain attractive for us to be interested in staying there." To remain attractive, whether to keep investment within their jurisdictions or to lure new investment to them, governments would now have to compete among themselves to persuade corporations that they provided the most business-friendly policies. A resulting "race to the bottom" would see them ratchet down regulatory regimes -- particularly those that protected workers and the environment -- reduce taxes, and roll back social programs, often with reckless disregard for the consequences.
With the creation of the World Trade Organization (WTO) in 1995, the deregulatory logic of economic globalization was deepened. Given a mandate to enforce existing GATT standards, and also to create new ones that would bar regulatory measures that might restrict the flow of international trade, the WTO was poised to become a significant fetter on the economic sovereignty of nations. By the time tens of thousands of people spilled into the streets of Seattle in 1999 to protest against a meeting of WTO officials and member-state representatives, the organization had evolved into a powerful, secretive, and corporate-influenced overseer of governments' mandates to protect citizens and the environment from corporate harms.
When Enron collapsed and accounting firm Arthur Andersen's role in its misdeeds was revealed, people called for better regulatory oversight of the accounting industry. What few knew at the time, however, was that the U.S. government, through its membership in the WTO, had already relinquished some of its authority to fix the problem. Driven by a stated belief that "regulations can be an unnecessary, and usually unintended, barrier to trade in services" and in response to intense lobbying from industry groups and firms, the WTO in the late 1990s had established a set of "disciplines" designed to ensure that member states do not regulate accounting in ways that are "more trade restrictive than...necessary to fulfill a legitimate objective." In 1998, member states, including the United States, agreed to abide by these new rules, which do not formally come into full effect until 2005, and thus subjected themselves to standards imposed by, and soon to be adjudicated by, an outside and undemocratic body.
When the disciplines were first being considered, U.S. representatives inquired of WTO officials whether a law that prohibited accounting firms from working both as consultants and as auditors for the same company -- a law that might help avoid another Enron/Andersen debacle, and that has recently been enacted as part of the Sarbanes-Oxley Act of 2002 -- would contravene them. A final answer to the question must await a WTO ruling once the disciplines are officially operative, which likely will take the form of a tribunal's decision in a member state's complaint against the Act. But, in the meantime, the fact that the question even had to be asked demonstrates the disciplines' potential impact on government's authority to regulate the accounting industry and hence "the people's" democratic sovereignty over it.
Accounting regulation is not the only area in which the WTO has the authority to restrict governments' policy choices. On numerous occasions the organization has required nations, under threat of punishing penalties, to change or repeal laws designed to protect environmental, consumer, or other public interests. In one case, for example, a U.S. law that banned shrimp imports from producers that refused to use gear that protected sea turtles from being accidentally caught was deemed to violate WTO standards; in another case, an EU measure that banned production and imports of beef from cows treated with synthetic hormones was similarly treated. The full extent of the WTO's impact cannot be gauged from its formal decisions alone, however. As is true of any set of legal standards, WTO rules exert their strongest influence through informal channels. Governments might self-censor their behavior to ensure that they comply with the rules -- as the State of Maryland did when it scuttled a proposed law that would have barred it from buying products from companies doing business in Nigeria (while that country was under the rule of a cruel dictatorship) after warnings from the U.S. State Department that such a law could expose the United States to a WTO challenge. Governments can also use WTO standards to pressure other governments to change their policies, threatening to initiate a WTO complaint if they refuse to do so -- as the United States and Canada did to get the European Union to back off proposed regulations that would have banned the import of fur from animals caught in leg-hold traps and of cosmetics that had been tested on animals.
That the WTO's policies and decisions tend to champion corporations' interests is hardly surprising, given the privileged place and considerable influence industry groups enjoy within the organization. The trade and commerce ministers who represent the member states are usually "closely aligned with the commercial and financial interests of those in the advanced industrial countries," as Nobel laureate economist Joseph Stiglitz notes, and thus easy targets for corporations to influence. Corporations and industry groups also enjoy close relationships with the organization's bureaucrats and officials. "We want neither to be the secret girlfriend of the WTO nor should [our group] have to enter the World Trade Organization through the servant's entrance" is how one member of the International Chamber of Commerce, an influential group at the WTO, describes the special relationship between his organization -- and, one can infer, industry groups in general -- and the WTO.
Over its relatively short life, the WTO has become a significant fetter on nations' abilities to protect their citizens from corporate misdeeds. More generally, economic globalization, of which the WTO is just one element, has substantially enhanced corporations' abilities to evade the authority of governments. "Corporations have become sufficiently powerful to pose a threat to governments," says William Niskanen, chairman of the Cato Institute, and that is "particularly the case with respect to multinational corporations, who will have much less dependence upon the positions of particular governments, much less loyalty in that sense." As Ira Jackson, former director of the Center for Business and Government at Harvard's Kennedy School of Government, observes, corporations and their leaders have "displaced politics and politicians as...the new high priests and reigning oligarchs of our system." And, according to Samir Gibara, former CEO of Goodyear Tire, governments have "become powerless [in relation to corporations] compared to what they were before."
Corporations now govern society, perhaps more than governments themselves do; yet ironically, it is their very power, much of which they have gained through economic globalization, that makes them vulnerable. As is true of any ruling institution, the corporation now attracts mistrust, fear, and demands for accountability from an increasingly anxious public. Today's corporate leaders understand, as did their predecessors, that work is needed to regain and maintain the public's trust. And they, like their predecessors, are seeking to soften the corporation's image by presenting it as human, benevolent, and socially responsible. "It's absolutely fundamental that a corporation today has as much of a human and personal characteristic as anything else," says public relations czar Chris Komisarjevsky, CEO of Burson-Marsteller. "The smart corporations understand that people make comparisons in human terms...because that's the way people think, we think in terms that often are very, very personal....If you walked down the street with a microphone and a camera and you stopped [people] on the street...they will describe [corporations] in very human terms."
Today, corporations use "branding" to create unique and attractive personalities for themselves. Branding goes beyond strategies designed merely to associate corporations with actual human beings -- such as AT&T's early campaigns that featured workers and shareholders or the more recent use of celebrity endorsements (such as Nike's Michael Jordan advertisements) and corporate mascots (such as Ronald McDonald, Tony the Tiger, the Michelin Man, and Mickey Mouse). Corporations' brand identities are "personification[s]" of "who they are and where they've come from," says Clay Timon, chairman of Landor Associates, the world's largest and oldest branding firm. "Family magic" for Disney, "invent" for Hewlett-Packard, "sunshine foods" for Dole are a few examples of what Timon calls "brand drivers." "Corporations, as brands...have...soul[s]," says Timon, which is what enables them to create "intellectual and emotional bond[s]" with the groups they depend upon, such as consumers, employees, shareholders, and regulators.
Timon points to Landor's brand drivers for British Petroleum -- "progressive, performance, green, innovative" -- as evidence of how corporate environmental and social responsibility are emerging today as key branding themes. However, he says, even companies that do not explicitly brand themselves as such must now embrace corporate social responsibility. "Out of necessity," says Timon, "companies, whether they want it or not, have had to take on a social responsibility." And that is partly a result of their new status as dominant institutions. They must now show that they deserve to be free of governmental constraints and, indeed, to participate in governing society. "Corporations need to become more trustworthy," says Sam Gibara, a successor to social responsibility pioneer P. W. Litchfield. "There has been a transfer of authority from the government...to the corporation, and the corporation needs to assume that responsibility...and needs to really behave as a corporate citizen of the world; needs to respect the communities in which it operates, and needs to assume the self-discipline that, in the past, governments required from it."
Beginning in the mid-1990s, mass demonstrations against corporate power and abuse rocked North American and European cities. The protestors, part of a broader "civil society" movement, which also included nongovernmental organizations, community coalitions, and labor unions, targeted corporate harms to workers, consumers, communities, and the environment. Their concerns were different from those of post-Enron worriers, for whom shareholders' vulnerability to corrupt managers was paramount. But the two groups had something in common: they both believed the corporation had become a dangerous mix of power and unaccountability. Corporate social responsibility is offered today as an answer to such concerns. Now more than just a marketing strategy, though it is certainly that, it presents corporations as responsible and accountable to society and thus purports to lend legitimacy to their new role as society's rulers.
Copyright © 2004 by Joel Bakan