Nineteenth-century America was the closest thing to capitalism—a system in which government is limited to protecting individual rights—that has ever existed. There was no welfare state, no central bank, no fiat money, no deficit spending to speak of, no income tax for most of the century, and no federal regulatory agencies or antitrust laws until the end of the century. Consequently, total (federal, state, and local) government spending averaged a mere 3.26 percent of Gross Domestic Product (GDP).1 The Constitution’s protection of individual rights and limitation on the power of government gave rise to an economy in which individuals were free to pursue their own interests, to start new businesses, and to create as much wealth as their ability and ambition allowed. This near laissez-faire politico-economic system led to the freest, most innovative, and wealthiest nation in history.
Since the beginning of the 20th century, however, capitalism and freedom have been undermined by an explosion in the size and power of government: Total government spending has increased from 6.61 percent of GDP in 1907 to a projected 45.19 percent of GDP in 2009;2 the dollar has lost more than 95 percent of its value due to the Federal Reserve’s inflationary policies; top marginal income tax rates have been as high as 94 percent; entitlement programs now constitute more than half of the federal budget; and businesses are hampered and hog-tied by more than eighty thousand pages of regulations in the Federal Register.
What happened? How did America shift from a predominantly free-market economy to a heavily regulated mixed economy; from capitalism to welfare state; from limited government to big government? This article will survey the progression of laws, acts, programs, and interventions that brought America to its present state—and show their economic impact. Let us begin our survey by taking a closer look at the state of the country in the 19th century.
America’s Former Free Market
The Constitution established the political framework necessary for a free market. It provided for the protection of private property (the Fifth Amendment), including intellectual property (Article I, Section 8); the enforcement of private contracts (Article I, Section 10); and the establishment of sound (gold or silver)3 money (Article I, Sections 8 and 10). It prohibited the states from erecting trade barriers (Article I, Section 10), thereby establishing the whole nation as one large free-trade zone. It permitted direct taxes such as the income tax only if apportioned among the states on the basis of population (Article I, Sections 2 and 9), which made them very difficult to levy.4 Finally, it specifically enumerated and therefore limited Congress’s powers (Article I, Section 8), severely constraining the government’s power to intervene in the marketplace.
Federal regulatory agencies dictating how goods could be produced and traded did not exist. Rather than being forced to accept the questionable judgments of agencies such as the FDA, FTC, and USDA, participants in the marketplace were governed by the free-market principle of caveat emptor (let the buyer beware). As historian Larry Schweikart points out:
merchants stood ready to provide customers with as much information as they desired. . . . In contrast to the modern view of consumers as incompetent to judge the quality or safety of a product, caveat emptor treated consumers with respect, assuming that a person could spot shoddy workmanship. Along with caveat emptor went clear laws permitting suits for damage incurred by flawed goods.5
To be sure, 19th-century America was not a fully free market. Besides the temporary suspension of the gold standard and the income tax levied during the Civil War, the major exceptions to the free market in the 19th century were tariffs, national banking, and subsidies for “internal improvements” such as canals and railroads. These exceptions, however, were limited in scope and were accompanied by considerable debate about whether they should exist at all. Alexander Hamilton, Henry Clay, and Abraham Lincoln supported such interventions; Thomas Jefferson, Andrew Jackson, and John Tyler generally opposed them. These interventions (except for tariffs) were, as Jefferson, Jackson, and Tyler pointed out, unconstitutional. But history shows that they were also impractical. Tariffs were initially implemented, beginning with the Tariff Act of 1789, as a source of revenue—the main source in the 19th century—for the federal government. Pressure from northern manufacturers to implement tariffs for purposes of protection, however, led to the “Tariff of Abominations” (1828), which was scaled back by 1833 due to heavy opposition from the South. Tariff rates then remained relatively low—about 15 percent—until the Civil War. By 1864, average tariff rates had risen to 47.09 percent for protectionist reasons and remained elevated for the remainder of the century.6
As to national banking, the Second Bank of the United States’ charter expired in 1836, thereby paving the way for the free banking era—which lasted until a national banking system was instituted during the Civil War. By virtually every measure of bank health, this free banking era was the soundest in American history. In terms of capital adequacy, asset quality, liquidity, profitability, and prudent management, national banking proved to be inferior to free banking.7
As to subsidies for internal improvements, although private entrepreneurs financed and built most roads and many canals,8 state governments intervened in the 1820s to subsidize canal building—amending their constitutions to do so.9 However, most state-funded canals either went unfinished, generated little to no income, or went bankrupt. As a result, by 1860 most state constitutions were amended again to prohibit such subsidies.10 After the Civil War, federal subsidies for the transcontinental railroads caused similar problems—as well as corruption. Further, they proved to be a hindrance to, rather than a precondition of, a thriving railroad industry: James Jerome Hill’s Great Northern was the most successful of the transcontinental railroads, yet it was built without any subsidies or land grants.11
The foregoing interventions, though impractical, were motivated in part by a desire to help promote the development of business and industry. But lurking in the periphery, growing in popularity, and poised to fuel further government interference in the marketplace, was the ideology of collectivism—the notion that the individual must be subordinated to the collective or the “common good.” This idea was stated by economist Daniel Raymond in his 1820 textbook: “it is the duty of every citizen to forgo his own private advantage for the public good.”12 And as the 19th century progressed, this idea was increasingly cited as a justification for government intervention. One of the most important instances of this was the Supreme Court’s decision in Munn v. Illinois (1877). In the majority opinion, Chief Justice Morrison Waite declared:
Property does become clothed with a public interest when used in a manner to make it of public consequence. . . . When, therefore, one devotes his property to a use in which the public has an interest, he, in effect, grants to the public an interest in that use, and must submit to be controlled by the public for the common good. . . .13
Although the case applied only to the states, Munn undermined the sanctity of private property rights by establishing the precedent that property “clothed with a public interest” (i.e., any property related to business) is subject to government regulation and control. As a result, Munn helped pave the way for the two major assertions of federal control over the economy—the Interstate Commerce Act and the Sherman Antitrust Act—that would come in the Gilded Age.14
The “Gilded Age”
Many historians have disparagingly labeled the era that began with the end of the Civil War and extended into the early 20th century the “Gilded Age.” Yet the economic achievements of this largely laissez-faire period, during which America surpassed Britain as the world’s leading industrial nation, are arguably the most impressive in world history. From 1870 to 1910, real GDP increased at an average annual rate of 3.97 percent,15 while real wages in manufacturing doubled.16 From 1879 to 1910, because the output of goods grew at a faster rate than the money supply (the norm under a gold standard), consumer prices actually declined at an average annual rate of 0.16 percent.17
However, most intellectuals in the late 19th century were unimpressed. While the entrepreneurs and innovators were busy with their unprecedented creative and productive achievements, the journalists, economists, novelists, and other intellectuals castigated them as ruthless exploiters and called for government intervention into the economy. Henry Demarest Lloyd, for instance, wrote the anti-big business book Wealth Against Commonwealth (1894), in which he smeared Standard Oil and advocated the nationalization of trusts.18 Economist Henry George, author of the popular book Progress and Poverty (1879), advocated a 100 percent land tax.19 Thorstein Veblen denounced the rich for their “conspicuous consumption” and advocated a form of socialism in which the economy would be run by a small group of engineers.20 Edward Bellamy, the most popular author among a surge of utopian novelists, wrote the influential novel Looking Backward (1888), in which the main character falls asleep in 1887 and wakes up in 2000 to an America that has been transformed into a socialist utopia. The social gospel movement advocated government paternalism as a means of stamping out sin and ensuring social welfare. One of the social gospel movement’s leaders, Richard T. Ely, founded the American Economic Association, which stated in its 1885 proposed platform: “[W]e hold that the doctrine of laissez-faire is unsafe in politics and unsound in morals. . . .”21 And the populists, who later formed the political party of the same name, advocated the nationalization of several industries, a progressive income tax, and propping up agricultural prices via inflation.
Although the fortunes of the Gilded Age were not looted, but created through production and trade, few intellectuals recognized this fact. Instead, they held the wealth creators responsible for other people’s poverty. As one clergyman claimed, “Mr. [Andrew] Carnegie’s ‘progress’ is accompanied by the growing ‘poverty’ of his less fortunate fellow-countrymen.”22 Intellectuals such as Henry George asserted that Gilded Age wealth was necessarily gained through the coercive power of monopoly: “This element of monopoly, of appropriation and spoliation [plunder] will . . . be found largely to account for all great fortunes . . .”23
But coercive monopolies—which exist only when government restricts competition—were not a defining characteristic of the Gilded Age. Most of the great entrepreneurs who are today vilified as “robber barons”—including Carnegie, John D. Rockefeller, J.J. Hill, and Cornelius Vanderbilt—did not receive government protection from competition, did not use force against their competitors, and did not force anyone to do business with them. Although some industrialists benefited from subsidies and protective tariffs, most thrived not by means of government favor or force, but by means of intelligent production and voluntary trade. They provided superior products, dramatically increased output, and significantly reduced prices in their respective markets. Thanks primarily to Carnegie, for example, the price of steel dropped from $56 per ton in 1872 to $11.50 per ton by 1900.24
Most historians regard the growth of large-scale industrial enterprises or “trusts” as an unequivocal sign that the American economy was becoming less competitive and more monopolistic in the Gilded Age, but this view also runs counter to historical facts. In the early 19th century, large enterprises were economically unfeasible because high transportation costs prohibited integration of distant markets and operations. But this changed with the construction of the transcontinental railroads following the Civil War. Between 1860 and 1900, the number of miles of railroad track increased from 30,626 to 193,346,25 resulting in a decrease in average freight rates between 1865 and 1900—from 20 cents per ton-mile to 1.75 cents per ton-mile.26 This low-cost transportation system expanded and connected markets, many of which went from local or regional to national or global in scope. Many local businesses that had been relatively insulated from competition found themselves competing against large national firms that achieved lower costs through economies of scale. As a result, many local firms either went out of business or were acquired by larger firms. A wave of mergers took place between 1895 and 1904,27 but although the total number of firms in some industries declined, competition increased because previously insulated firms were now competing in a national or global market.
Despite these facts, however, the notion that big business wielded too much power persisted and—combined with the perception of increased inequality of wealth and income—helped to fuel growing public discontent and anti-big business sentiment. Some of the public’s discontent was sown by the intellectuals who asserted that all big businesses were coercive, exploitative monopolies. And some of it stemmed from legitimate indignation toward the subsidies, land grants, and other privileges granted to some railroads by the government—and the corruption to which they led. But rather than simply eliminate the subsidies, grants, and privileges, the government attempted in 1887 to assuage popular discontent by creating the first federal regulatory agency—the Interstate Commerce Commission (ICC)—to control railroad rates. (Because it used government force to eliminate price competition, the ICC established a quasi-cartel with actual monopolistic powers.)28 The ICC was soon followed by the Sherman Antitrust Act (1890), which was intended to mitigate the economic power of big business.29 The few sources of monopoly power that actually did exist—subsidies, protective tariffs, and the ICC—were caused not by a free market, but by government restrictions on competition.
Whereas earlier government intervention—tariffs, national banking, and internal improvement subsidies—was intended, in part, to promote the development of business and industry, the ICC and Sherman Antitrust Act were motivated solely by the idea that big business had too much power and needed to be reformed and regulated for purposes of the “common good.” By the late 19th century, the debate had shifted from whether the government should intervene in the economy to which interventions should be implemented.
The Progressive Era and World War I
By the beginning of the 20th century, the collectivist and anti-big business trends of the late 19th century culminated in what came to be known as the “Progressive Era.” “Progressivism” was a bipartisan movement that advocated social reconstruction through activist government. Whereas the Founding Fathers considered the concentration of political power to be a serious threat to freedom, the Progressives applauded such concentration as a means of ensuring freedom—“freedom,” that is, as they redefined it. According to the Progressive president Woodrow Wilson, freedom “is something more than being let alone. The program of a government of freedom must in these days be positive, not negative merely.”30
In his book The Promise of American Life (1909), Herbert Croly systematically presented the Progressive ideology. He attacked individualism and laissez-faire while exalting the moral necessity of collectivism and big government. The “traditional American confidence in individual freedom,” Croly lamented, “has resulted in a morally and socially undesirable distribution of wealth,” something that should be corrected by sacrificing and subordinating “the individual to the demand of a dominant and constructive national purpose.”31 By “constructive national purpose,” Croly meant the elimination of wealth inequality.
The Progressives eagerly sought to put their reformist ideas into practice. They implemented a plethora of new laws, interventions, and regulations, including the Elkins Act (1903) and the Hepburn Act (1906), both of which strengthened the power of the ICC to regulate railroads; the Clayton Antitrust Act (1914), which supplemented the Sherman Antitrust Act by outlawing price discrimination and other legitimate business practices; and the Federal Trade Commission Act (1914), which outlawed “unfair methods of competition” but provided no objective definition of that phrase.
The muckrakers directly inspired some of the Progressive legislation. For example, in his novel The Jungle (1906), Upton Sinclair portrayed horrors stemming from the exploitation of workers in Chicago’s meat-packing plants. Although the kinds of gruesome events depicted in the novel, such as workers falling into tanks and being ground up with animal parts, could not be confirmed by the Department of Agriculture or by a Congressional investigation,32 the novel spawned a public uproar that led in 1906 to the Meat Inspection Act and the Pure Food and Drug Act, both of which imposed strict requirements on food production and distribution.
Progressive president Theodore Roosevelt pushed for the aggressive enforcement of the antitrust laws, proclaiming, “We wish to control big business.”33 His “trust-busting” efforts ultimately led to the breakup of Standard Oil in 1911, despite the fact that the company was never a coercive monopoly to begin with, that it earned its market share by its superior productive prowess, and that, due to increasing competition, its market share had actually declined from 88 percent in 1890 to 64 percent by 1911.34 (For an excellent account of how Standard Oil legitimately achieved its success, see Alex Epstein’s article “Vindicating Capitalism: The Real History of the Standard Oil Company,” The Objective Standard, vol. 3, no. 2, Summer 2008.)
But the climax of the Progressive Era came in 1913 when two fateful measures were taken. The first was the creation of a central bank—the Federal Reserve—which marked the beginning of the end of sound money and of fiscal restraints on the federal government. The second was the passage of the Sixteenth Amendment, which authorized Congress to levy and collect income taxes without regard to the Constitution’s original apportionment and enumeration requirements. With the advent of the Fed and the Sixteenth Amendment, the means of financing big government—inflation, chronic deficit spending, and income taxation—were in place, just in time for the drastic measures that would be taken during World War I.
The ultimate Progressive crusade was the push to involve America in what was hailed as the “War to End All Wars.” President Wilson felt that America had a duty to make the world “safe for democracy.” “We have no selfish ends to serve,” he proclaimed. “We seek no . . . compensation for the sacrifices we shall freely make.”35 Herbert Croly, who had become the editor of the New Republic, justified the war on the grounds that the nation needed “the tonic of a serious moral adventure.”36 A war with “no selfish ends to serve” provided the “serious moral adventure” and “constructive national purpose” the Progressives were seeking.
Government intervention had been increasing since the Civil War, but even factoring in the ICC and antitrust, such intervention in the early 1900s was still relatively minor compared to what was to come. America’s decisive break with essentially limited government finally came with World War I. The Progressives transformed the American politico-economic system into what has aptly been called “war socialism.” In order to finance the war, the government raised the top marginal income tax rate to 77 percent, borrowed heavily (causing the national debt to soar from $1.5 billion in 1916 to $24 billion by 1919),37 and—via the Fed—doubled the money supply.38 The government established numerous acts, commissions, and boards—such as the National Defense Act (1916), the Army Appropriations Act (1916), and the War Industries Board (1917)—which, combined, granted the government extensive and unprecedented powers over the economy. The government proceeded to use these powers to allocate resources, fix prices, and take over entire industries, including the railroads and ocean shipping. In addition to trampling economic freedom, the government trampled civil liberties with the Selective Service Act (1917), which instituted military conscription; and the Sedition Act (1918), which made criticism of the government a criminal offense.
Many of the measures enacted during the war, such as nationalization of the railroads, were repealed after it was over, but the level of government intervention that remained was substantially higher than that prior to the war. For example, although the top marginal income tax rate was brought down from 77 percent to 25 percent in the 1920s, it was still much higher than the 7 percent rate that existed before the war began.39 More importantly, in cases such as Schenck v. United States (1919), the Supreme Court upheld most of the powers the government exercised during the war, establishing precedents for future legal proceedings.40
The Great Depression and the New Deal
After a sharp but brief recession from 1920 to 1921, a dramatic economic expansion and bull market in stocks characterized the remainder of the 1920s. Consumer prices were stable, real per capita income increased by 37.16 percent,41 stock prices more than tripled, and real GDP grew at an average annual rate of 4.68 percent.42 During this period, growing industrial use of electricity greatly increased productivity, time-saving electrical household appliances became commonplace, and most middle-class Americans bought their first cars and radios.
Unfortunately, the Roaring Twenties were followed by the Great Depression in the 1930s. The depression was and continues to be blamed on capitalism, but a comparison with previous economic downturns does not support this assessment. Prior to the Great Depression, America had many economic downturns, most lasting no more than eighteen months.43 The government typically took a relatively laissez-faire stance with respect to such recessions. During the 1837 recession, for example, President Martin Van Buren lauded the Founders for “wisely judg[ing] that the less government interferes with private pursuits the better for the general prosperity. It is not its legitimate object to . . . repair by direct grants of money or legislation in favor of particular pursuits.”44
The major difference between the Great Depression and all prior downturns—what made the depression so severe and lengthy—was massive government intervention. Ad hoc intervention bred uncertainty and a lack of confidence. With the government constantly changing the rules, businessmen could not rationally plan for the future because they had no idea what the government was going to do next. As a result, they postponed both new investments and the hiring of additional workers, thereby delaying economic recovery.
The notion that capitalism caused the Great Depression is reinforced by the myth that Herbert Hoover was a “do-nothing” laissez-faire president whose nonaction was to blame. Nothing could be further from the truth. In fact, Hoover was the most interventionist peacetime president up to that point. The initial economic downturn was caused by the Fed’s contraction of credit following a credit expansion it had engineered in the 1920s.45 Hoover’s interventionist policies, along with those of the Fed, turned what would otherwise have been a minor downturn into a major depression.
In 1930, Hoover signed the Smoot-Hawley tariff, which raised the average tariff rate to 59.1 percent, the highest it had ever been.46 By sparking a trade war with other countries (which responded with protectionist policies of their own), the rate increase squelched international trade, which led to the collapse of American agriculture. And the collapse of agriculture, in turn, contributed to a wave of bank failures: Because the portfolios of rural banks consisted predominantly of loans to farmers (due to legal restrictions on branch banking), they were not sufficiently diversified to weather the economic storm.
Hoover believed that a drop in wages would exacerbate the depression because it would result in a fall in purchasing power. He held a series of conferences with the nation’s top business leaders in which he “urged” (read: threatened) them to maintain wages.47 While prices and therefore revenues were falling, businesses, heeding Hoover’s threat, attempted to maintain wages, further squeezing profits and ultimately leaving them with no choice but to lay off workers. In 1931, when most corporations found themselves operating at a loss, workers were laid off in droves. According to one analysis, Hoover’s high wage policy, together with the Smoot-Hawley tariff, caused the unemployment rate in 1931 to be 9.8 percentage points higher than it otherwise would have been.48 By 1933, the unemployment rate peaked at a record 25 percent.
“[S]acrifice by all groups,” Hoover argued, “is essential to the salvation of the nation.”49 Hoover put this creed into practice through a host of additional interventions, including: deficit spending; the Agriculture Marketing Act (1929), which created the Federal Farm Board to prop up agricultural prices by restricting supply; the Davis-Bacon Act (1931), which artificially propped up wages by mandating that “prevailing wages” be paid on all government construction contracts; the Emergency Relief and Construction Act (1932), which created the Reconstruction Finance Corporation to fund public works projects; and the Revenue Act of 1932, which raised the top marginal income tax rate to 63 percent.50 (An “increase in taxes,” Hoover stated, is one of the “sacrifices which are a part of the country’s war on depression.”)51 During Hoover’s term, real output per capita dropped by 31 percent52 and, from peak to trough, the stock market fell by almost 90 percent.
The Fed also played a major role in causing the Great Depression. From 1929 to 1933, the money supply contracted by about one-third. Although the Fed is responsible for most of this contraction, part of it is attributable to rumors that were circulating shortly before Franklin Delano Roosevelt (FDR) took office that he was going to abandon the gold standard—rumors that turned out to be true. Panic set in, leading to bank runs in which the general public sought to withdraw its gold from the banks, thereby precipitating further bank failures and deflation.53
Hoover’s and the Fed’s policies caused the Great Depression, but FDR’s policies exacerbated and prolonged it. In fact, FDR’s “New Deal” simply built on and extended the policies that Hoover initiated. Rexford Tugwell, one of FDR’s top advisors, later admitted that “practically the whole New Deal was extrapolated from programs that Hoover started.”54
Like Hoover, FDR implemented widespread government intervention. In 1933, the National Industrial Recovery Act (NIRA) created a system of government-sanctioned industrial cartels.55 That same year, in order to prop up agricultural prices, the Agricultural Adjustment Act (AAA) authorized paying farmers to reduce production—by destroying crops and slaughtering livestock.56 The Gold Reserve Act (1934) took the nation off the gold standard, confiscated all gold held by the banks, and nullified all contractual gold clauses.57 Several public works programs, such as the Works Progress Administration (1935), simply transferred resources from the private to the public sector. And several acts under FDR’s “Second New Deal,” as it came to be called, laid the foundations of the welfare state and increased taxes well beyond Hoover’s increases. The Wagner Act (1935) gave broad new powers to unions and punished employer resistance to them as an “unfair labor practice”;58 the Social Security Act (1935) provided transfer payments for the old, the unemployed, and the “needy”; and the Fair Labor Standards Act (1938) created a minimum wage. During FDR’s first two terms in office, federal spending doubled,59 while the average rate of unemployment was 18 percent.60
The moral basis of the New Deal—like that of the policies of the Progressive Era—was altruism: the idea that being moral consists of self-sacrificially serving others. This idea was explicitly stated as the purpose of government policy: “‘Not for ourselves but for others.’ That motto can well be the inspiration of all of us,” proclaimed FDR, “not alone for the fine purposes of charity, but also for our guidance in our public and private service. Selfishness is without doubt the greatest danger that confronts our beloved country today.”61
Although the Supreme Court initially declared much of the New Deal legislation, including the NIRA and AAA, to be unconstitutional, the Court did an about-face (when Justice Owen J. Roberts started siding with the liberal judges) and began effectively rewriting the Constitution. “We are under a Constitution,” Chief Justice Charles Evans Hughes declared, “but the Constitution is what the judges say it is.”62 This subjectivist approach released the federal government from many of its constitutional restraints.
Two of the Court’s rulings are particularly noteworthy. In Helvering v. Davis (1937), the Court upheld the Social Security Act on the basis of the General Welfare Clause, giving Congress carte blanche to tax and spend on anything and everything it arbitrarily deems to promote the “general welfare.” (James Madison had argued that the General Welfare Clause was simply shorthand for Congress’s enumerated powers and certainly did not authorize taxing and spending for any purposes beyond those powers.)63 In Wickard v. Filburn (1942), the Court unanimously ruled on the basis of the Interstate Commerce Clause that a farmer growing wheat for his own consumption affects interstate commerce and is therefore subject to federal regulation.64 Explaining the decision, Justice Robert H. Jackson claimed that it “is within the federal power to regulate interstate commerce, if for no better reason than that the commerce clause is what Congress says it is.”65 This, along with other Court rulings, opened the door for the federal government to regulate any activity it pleases, thereby giving it unlimited power over the economy and the lives of citizens. Unsurprisingly, given the precedents set by the above rulings, not a single federal law was struck down as exceeding Congress’s power under the Commerce Clause from 1937 to 1995.66
Whereas the Great Depression provided FDR with an excuse for the New Deal, World War II provided him with an excuse for even more government intervention.
World War II and the Postwar Economic Recovery
Government intervention during World War II was similar to that during the First World War: “War socialism” once again took hold. “If ever there was a time,” FDR argued, “to subordinate individual . . . selfishness for the national good, that time is now.”67 Taxes, the national debt, and inflation went through the roof. Federal spending as a percentage of GDP surged from 11.18 percent in 1941 to a record 47.9 percent in 1945.68 The First War Powers Act (1941), the Second War Powers Act (1942), and the Economic Stabilization Act (1942) gave FDR additional broad powers over the economy, with which he proceeded to dictate prices and wages, allocate resources, and renationalize whole industries, including the railroads and coal mines. In addition to the involuntary servitude of the military draft, the government rounded up 120,000 innocent Japanese Americans and confined them to internment camps, a policy upheld by the Supreme Court.69
World War II did finally end the mass unemployment problem of the 1930s. Between 1940 and 1944, the unemployment rate fell from 14.6 percent to 1.2 percent because the government conscripted and employed millions of citizens to fight the war.70 Contrary to conventional wisdom, however, the war did not end the Great Depression. Although GDP grew substantially during this period, such growth is useless as a measure of economic progress given the dramatic rise in military expenditures that constituted most of it.71 In fact, the nation’s standard of living declined because a large proportion of economic activity was diverted from the production of consumer goods to the production of military goods. Many previously available consumer goods—such as automobiles and refrigerators—were simply not produced at all. Moreover, access to those goods that were still being produced was hampered by price controls and rationing.
John Maynard Keynes, the most influential economist of the 20th century, argued that massive government spending was necessary to end the depression, and, on the surface, the experience of World War II appeared to confirm his thesis. Consequently, the Employment Act of 1946 essentially instituted Keynesianism as official government policy, declaring that “it is the continuing policy and responsibility of the Federal Government to promote maximum employment, production, and purchasing power.”72 By legally obliging the government to intervene in the economy, the Employment Act officially outlawed a free market.
On the Keynesian premise, the reintroduction of millions of soldiers into the civilian workforce, along with the rapid drop in government spending after the war (reduced to 16.95 percent of GDP by 1947),73 should have depressed the economy. In actuality, during the postwar era the economy finally started to recover.74 The postwar economic expansion was spurred by the retrenchment of government spending and intervention after the war, as well as by the removal of the fog of political uncertainty that characterized the New Deal. It was also aided by a scaling back of protectionism under the General Agreement on Tariffs and Trade (1947) and by the Bretton Woods system (1944), which loosely linked the dollar back to gold and gave the Fed incentive to minimize inflation. With government intervention held somewhat in check, the postwar economic expansion would continue throughout the 1950s and 1960s.
The Great Society and “Stagflation”
In its 1960 political platform, the Democratic Party adopted the conception of rights that FDR had previously expounded in his “Economic Bill of Rights”—that is, rights as entitlements. Among other things, the platform stated that all Americans have a “right to a useful and remunerative job,” a “right to adequate medical care,” and a “right to adequate protection from the economic fears of old age, sickness, accidents, and unemployment.”75 This conception was at odds with that of the Founding Fathers, who held that every individual has a right to “life, liberty, and the pursuit of happiness,” not a right to specific outcomes or goods such as a “remunerative job” or “adequate medical care.”
This new conception of “rights” provided the rationalization for an avalanche of new legislation and entitlement spending under President Lyndon B. Johnson’s “Great Society” program. In his 1964 State of the Union address, LBJ proclaimed, “This administration, today, here and now, declares unconditional war on poverty in America.”76 This “war” consisted primarily of shifting the individual’s responsibility for supporting himself and his family from the individual to the government, which is funded, of course, by American taxpayers. This “Government must always be compassionate,” proclaimed LBJ. It is by the “great dedication of selfless men” that we help the “ill clad and ill fed and ill housed.”77
Under the Great Society, social welfare programs such as Aid to Families with Dependent Children (AFDC) were expanded and numerous new ones, such as Head Start and Food Stamps, were established. The collectivist crown jewels of the Great Society, however, were Medicare and Medicaid, which redistributed wealth in order to subsidize health insurance. Adjusted for inflation, federal welfare expenditures, including health-care entitlements, soared from $34.29 billion in 1965 to $203.77 billion by 1980.78
The number of pages in the Federal Register, which lists all proposed and new federal regulations, increased from 11,687 pages in 1960 to 61,283 pages by 1978.79 This burst of regulations violated the rights of businessmen, dramatically increased compliance costs, reduced productivity, and made American business internationally uncompetitive.80 Meanwhile, the government would compound the havoc it was already wreaking on American citizens by drastically changing the monetary system.
Leading up to the late 1960s, the Bretton Woods system had kept inflation relatively low. Designed so that foreign central banks could redeem dollars for gold, Bretton Woods gave the Fed some incentive to minimize inflation, since inflation would cause gold to flow out of the U.S. government’s coffers. It worked relatively well, and from 1945 to 1967, the average annual rate of inflation was 2.85 percent.81 But during the late 1960s, inflation started creeping higher and more gold flowed out of the country. President Richard Nixon responded by scrapping Bretton Woods, thereby completely severing the dollar from gold. In the first ten years (1972–1981) following the end of Bretton Woods, the average annual rate of inflation jumped to 9 percent.82 In an attempt to treat the symptoms of inflation (higher prices) rather than the cause (the Fed’s increase of the fiat money supply), Nixon implemented wage and price controls. High inflation caused double-digit interest rates, and price controls caused shortages, which led to the energy crisis of the 1970s, including its notoriously long lines at gas stations.
“A strong and healthy spirit,” Nixon declared, “means a willingness to sacrifice . . . when a short-term personal sacrifice is needed in the long-term public interest.”83 The government did indeed compel Americans to sacrifice, but the sacrifices were far from short-term. Out-of-control government spending—combined with heavy regulation, combined with high inflation, combined with wage and price controls—led to economic stagnation. Such was the economic equation of the 1970s, the first decade in the history of the nation during which the federal government ran a budget deficit every single year. A new concept—“stagflation”—was coined to denote the situation that had emerged, one that the Keynesians thought impossible: a combination of high unemployment and high inflation. The “misery index” measured stagflation by simply adding the rate of unemployment to the rate of inflation. This index peaked in June 1980 at a high of 21.98.84 All the government intervention of the 1960s and 1970s and the economic stagnation that it caused were reflected in stocks, which suffered a lengthy bear market from 1966 until the early 1980s, when a shift toward freer markets would once again foster economic prosperity.
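The misery index described above is simple arithmetic. A minimal sketch follows; the June 1980 component figures used here are approximate, illustrative values chosen to be consistent with the 21.98 peak cited in the text, not figures taken from the article itself:

```python
def misery_index(unemployment_pct: float, inflation_pct: float) -> float:
    """The misery index: the unemployment rate plus the inflation rate,
    both expressed in percentage points."""
    return unemployment_pct + inflation_pct

# Illustrative figures for June 1980 (roughly 7.6 percent unemployment and
# roughly 14.4 percent annual inflation), consistent with the cited 21.98 peak.
print(round(misery_index(7.58, 14.40), 2))  # 21.98
```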
The Short-Lived Move Toward Freer Markets
Starting in the early 1980s, a backlash against the excessive government intervention of the 1970s resulted in a modest move toward freer markets under President Ronald Reagan. During this period the Fed maintained a tighter monetary policy, which drove down inflation. Price controls on oil were scrapped, finally ending the energy crisis. The inheritance tax was abolished for most Americans, and the top marginal income tax rate was incrementally slashed from 70 percent to 28 percent. Several industries—including trucking, airlines, and telecommunications—were partially deregulated, fostering greater competition, increased output, and lower prices.85 And relaxed antitrust enforcement and innovative new uses of financial instruments, such as so-called “junk bonds,” facilitated a much needed restructuring of corporate America, which had become inefficient and uncompetitive due to the regulatory shackles of the previous decade.
These free-market policies helped the economy recover from the stagflation of the 1970s. By December 1986, the misery index dropped to 7.7.86 Although it is often ridiculed for being the “decade of greed,” the 1980s was a time of prosperity for all economic classes, not just the rich: From 1982 to 1989, the real income of the bottom fifth of households increased by 14.3 percent.87
This trend toward freer markets continued throughout the 1990s under President Bill Clinton. The North American Free Trade Agreement (1994) lowered tariffs with Mexico and Canada, which considerably increased international trade, resulting in greater employment, income, and economic growth.88 The Personal Responsibility and Work Opportunity Reconciliation Act (1996) instituted welfare reform: Block grants to the states eliminated an automatic entitlement to benefits, limits on the amount of time that benefits could be collected further reduced expenditures, and, in most cases, work was made a condition of eligibility for welfare. As a result, welfare caseloads declined significantly—yet without the corresponding increase in poverty that might have been expected. (In fact, poverty actually declined among children and female-headed households.)89 In addition to greater free trade and welfare reform, the capital gains tax rate was cut from 28 percent to 20 percent, and, in 1998, the federal government balanced its budget for the first time in decades.
From its low in 1982 to its high in 2000, the S&P 500 index increased by 1,384 percent. This great bull market was the result of the free-market policies of the 1980s and 1990s. To be clear, however, the U.S. economy was still far from a free market. Many interventions remained throughout this period, and the large budget deficits of the 1980s and income tax increases in the 1990s partially offset the positive effects of many of the free-market reforms. Still, the overall rate of government growth slowed considerably, as indicated by the drop in federal spending as a percentage of GDP, from 22.91 percent in 1982 to 18.23 percent in 2000.90
The Resurgence of Big Government and the Housing Crisis
The modest move toward freer markets that characterized the 1980s and 1990s came to an abrupt end in 2000 when Clinton, in the three months prior to his leaving office, added 26,542 pages of “midnight regulations”—enacted without Congressional approval—to the Federal Register.91 These regulations imposed burdensome costs on business, industry, and consumers. One example of the countless new rules was an “energy efficiency” requirement, which raised the cost of washing machines by a couple hundred dollars per unit.92
Numerous government interventions were then instituted by President George W. Bush and the Republican-controlled Congress. Sarbanes-Oxley (2002) granted the government the power to micromanage the accounting practices of all public companies, forcing them to expend time and money complying with a mass of new regulations. Bush’s faith-based initiatives expanded the welfare state by granting large sums of taxpayer money to religious organizations for charitable activities. Bush also enacted a prescription drug entitlement for seniors—the largest expansion of Medicare since the program’s inception. We are “committed to compassion for moral reasons,” Bush said. “We’ve had enough of the stale debate between big Government and indifferent Government. Government must be active enough to fund services for the poor.”93 Such compassion contributed to a massive increase in government expenditures: From 2001 to 2006, total federal spending grew from $1.86 trillion to $2.66 trillion, a 43 percent increase.94
In order to ease the 2001 recession—caused by the Fed’s interest rate hikes to rein in its own inflation—the Fed engaged, as it typically does during recessions, in an easy-money policy, pumping large quantities of credit into the financial markets and gradually dropping interest rates (specifically the federal funds rate) from 6.50 percent to 1 percent. Mortgage rates followed suit, significantly increasing the demand for housing. The resulting credit binge and the massive demand for mortgages, combined with the government’s aggressive promotion of home ownership, led numerous individuals to buy houses they could not afford.
Although it had been promoting home ownership for years, in 2002, the government set a goal of increasing home ownership from 65 percent to 70 percent of households.95 “To build an ownership society,” Bush proclaimed, “we’ll help even more Americans to buy homes.”96 The government’s primary method of promoting home ownership was to debase lending standards by enticing and pressuring lenders to extend mortgages to borrowers who were not creditworthy.97 The Community Reinvestment Act prohibited lenders from discriminating against those in high-risk neighborhoods. The Federal Housing Administration loosened down-payment requirements. Furthermore, at the behest of Congress, the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac), two government-sponsored enterprises, avidly purchased high-risk mortgages from lenders on the secondary mortgage market. Knowing that the high-risk mortgages would be lapped up by Fannie Mae and Freddie Mac, lenders had incentive to extend as many mortgages as possible, regardless of the creditworthiness of borrowers. Meanwhile, the “too-big-to-fail” doctrine encouraged excessive risk-taking by big banks that knew the government would bail them out if they got into trouble.
When the Fed finally raised interest rates (from 1 percent in 2004 to 5.25 percent by 2006) in order to avoid high inflation, the mortgage payments of those with adjustable-rate mortgages shot up, the demand for housing dried up, foreclosures multiplied, the credit crunch ensued, and heavily leveraged Wall Street firms such as Bear Stearns and Lehman Brothers collapsed. This financial crisis, caused by government intervention, led to a frantic and haphazard escalation of government intervention. The Fed resumed the easy-money policies that had fueled the credit binge in the first place, including slashing the federal funds rate (from 5.25 percent to 0–0.25 percent), slashing the discount rate (from 6.25 percent to 0.50 percent), creating numerous lending facilities, and injecting massive liquidity into the financial markets. On top of this, the government completely took over Fannie Mae and Freddie Mac, forced the merger of Bank of America and Merrill Lynch, bailed out AIG, bailed out the auto industry, and presided over the biggest bailout in history—the Emergency Economic Stabilization Act of 2008, a $700 billion “rescue package” that allowed the Treasury to quasi-nationalize America’s top banks by purchasing equity stakes in them against their will. Such bailouts came with the attached strings of greater government regulation, control, and oversight. Right after the $700 billion bailout was passed, the Dow Jones Industrial Average had the worst week in its history, plunging 18 percent. By propping up insolvent businesses, preventing unsound investments from being liquidated, and retarding necessary price adjustments, the government has only postponed recovery.
Within one month of taking office, President Barack Obama signed the American Recovery and Reinvestment Act of 2009, a $787 billion “stimulus package” that in part effectively rolled back the welfare reforms of the 1990s.98 Since then, Obama has intruded heavily into the private sector—firing the CEO of GM and sandbagging Chrysler’s senior secured creditors—and is pushing for even more aggressive antitrust enforcement. Meanwhile, Obama is working to institute health-care “reforms,” which will take us ever closer to socialized medicine; and is promising to sign the Waxman-Markey “cap-and-trade” bill (if it passes the Senate), which will dramatically increase energy costs and substantially retard economic growth.99
Obama rode into the White House on an undefined promise of “change.” Though this was predictable given his history, his ideology, and campaign slips such as his stated intention to “spread the wealth around,” Obama quickly demonstrated that the kind of change he seeks is an unprecedented expansion of the size, scope, and power of the federal government. This agenda, declared Obama, “will require not just new policies, but a new spirit of service and sacrifice.”100
Repudiating Big Government
History shows that all of the major economic problems of the last century were caused by government intervention. Such intervention has stifled economic growth, exacerbated unemployment, and given rise to inflation. This is what happens when the government violates individuals’ and businesses’ rights to produce and trade according to their own judgment.
Through its rights-violating laws and regulations, the U.S. government expropriates our income, restricts how we can use our property, overrides or nullifies our private contracts, and micromanages countless aspects of our lives. Given the incontrovertible fact that government intervention is destroying both the economy and personal liberty, why have so few Americans stepped forward to argue that the government should be strictly limited to its originally intended purpose, the protection of individual rights? The answer, in short, is that the dominant morality in America today is incompatible with such limited government.
Observe the historically consistent moral “justification” offered in defense of increasing the government’s power to control our lives. The Munn doctrine declared that individuals and businesses “must submit to be controlled by the public for the common good.” The Progressives proclaimed that the individual must be subordinated “to the demand of a dominant and constructive national purpose.” Wilson said, “We seek no compensation for the sacrifices we shall freely make.” Hoover argued that “sacrifice by all groups is essential to the salvation of the nation.” FDR and the New Dealers adhered to the motto, “Not for ourselves but for others,” and called for every citizen to “subordinate individual selfishness for the national good.” LBJ and the proponents of the Great Society declared that “government must always be compassionate.” Nixon posited that a “strong and healthy spirit means a willingness to sacrifice” for “the long-term public interest.” Bush was “committed to compassion for moral reasons,” holding that the “government must be active enough to fund services for the poor.” And Obama believes we need “a new spirit of service and sacrifice” so we can “spread the wealth” from those who have earned it to those who have not.
The unquestioned premise in each of these pronouncements is the moral propriety of altruism. Widespread acceptance of the morality of self-sacrifice is the cause of the rise of big government in America.
To the extent that Americans accept the idea that individuals and businesses have a moral duty to selflessly serve others or the so-called “common good,” Americans will continue to advocate a government that forces individuals and businesses to sacrifice accordingly; they will continue to elect politicians who seek to “spread the wealth around”; and they will continue to fuel the flames of big government that are consuming the land of liberty.
If Americans want to return this country to a rights-respecting republic with a properly limited government, if we want to reinstate freedom as the basic law of the land, then we must repudiate not only big government, but also—and more importantly—the morality on which such government depends: altruism, the morality of self-sacrifice.
The principle of individual rights is the recognition of the fact that each individual’s life belongs to him and must be maintained through his own thought and effort. The rights to life, liberty, property, and the pursuit of happiness are principles of individual freedom—principles identifying the moral prerogative of each individual to freely pursue his own life-serving values for his own sake, so long as he does not violate the same rights of others. These fundamental rights are not principles of self-sacrifice; they are principles of self-interest, and they can be defended only by means of a morality of self-interest.
If Americans want to return government to its proper size and function, then we must come to understand and embrace the morality of rational self-interest.101 This is where advocates of liberty must now, more than ever, focus their efforts.