Nineteenth-century America was the closest thing to capitalism—a system in which government is limited to protecting individual rights—that has ever existed. There was no welfare state, no central bank, no fiat money, no deficit spending to speak of, no income tax for most of the century, and no federal regulatory agencies or antitrust laws until the end of the century. Consequently, total (federal, state, and local) government spending averaged a mere 3.26 percent of Gross Domestic Product (GDP).1 The Constitution’s protection of individual rights and limitation on the power of government gave rise to an economy in which individuals were free to pursue their own interests, to start new businesses, and to create as much wealth as their ability and ambition allowed. This near laissez-faire politico-economic system led to the freest, most innovative, and wealthiest nation in history.
Since the beginning of the 20th century, however, capitalism and freedom have been undermined by an explosion in the size and power of government: Total government spending has increased from 6.61 percent of GDP in 1907 to a projected 45.19 percent of GDP in 2009;2 the dollar has lost more than 95 percent of its value due to the Federal Reserve’s inflationary policies; top marginal income tax rates have been as high as 94 percent; entitlement programs now constitute more than half of the federal budget; and businesses are hampered and hog-tied by more than eighty thousand pages of regulations in the Federal Register.
What happened? How did America shift from a predominantly free-market economy to a heavily regulated mixed economy; from capitalism to welfare state; from limited government to big government? This article will survey the progression of laws, acts, programs, and interventions that brought America to its present state—and show their economic impact. Let us begin our survey by taking a closer look at the state of the country in the 19th century.
America’s Former Free Market
The Constitution established the political framework necessary for a free market. It provided for the protection of private property (the Fifth Amendment), including intellectual property (Article I, Section 8), the enforcement of private contracts (Article I, Section 10), and the establishment of sound (gold or silver)3 money (Article I, Sections 8 and 10). It prohibited the states from erecting trade barriers (Article I, Section 9), thereby establishing the whole nation as one large free-trade zone. It permitted direct taxes such as the income tax only if apportioned among the states on the basis of population (Article I, Sections 2 and 9), which made them very difficult to levy.4 Finally, it specifically enumerated and therefore limited Congress’s powers (Article I, Section 8), severely constraining the government’s power to intervene in the marketplace.
Federal regulatory agencies dictating how goods could be produced and traded did not exist. Rather than being forced to accept the questionable judgments of agencies such as the FDA, FTC, and USDA, participants in the marketplace were governed by the free-market principle of caveat emptor (let the buyer beware). As historian Larry Schweikart points out:
merchants stood ready to provide customers with as much information as they desired. . . . In contrast to the modern view of consumers as incompetent to judge the quality or safety of a product, caveat emptor treated consumers with respect, assuming that a person could spot shoddy workmanship. Along with caveat emptor went clear laws permitting suits for damage incurred by flawed goods.5
To be sure, 19th-century America was not a fully free market. Besides the temporary suspension of the gold standard and the income tax levied during the Civil War, the major exceptions to the free market in the 19th century were tariffs, national banking, and subsidies for “internal improvements” such as canals and railroads. These exceptions, however, were limited in scope and were accompanied by considerable debate about whether they should exist at all. Alexander Hamilton, Henry Clay, and Abraham Lincoln supported such interventions; Thomas Jefferson, Andrew Jackson, and John Tyler generally opposed them. These interventions (except for tariffs) were, as Jefferson, Jackson, and Tyler pointed out, unconstitutional. But history shows that they were also impractical. Tariffs were initially implemented, beginning with the Tariff Act of 1789, as a source of revenue for the federal government—its main source in the 19th century. However, pressure from northern manufacturers to use tariffs for protection led to the “Tariff of Abominations” (1828), which was scaled back by 1833 due to heavy opposition from the South. Tariff rates then remained relatively low—about 15 percent—until the Civil War. By 1864, average tariff rates had risen to 47.09 percent for protectionist reasons, and they remained elevated for the remainder of the century.6
As to national banking, the Second Bank of the United States’ charter expired in 1836, thereby paving the way for the free banking era, which lasted until a national banking system was instituted during the Civil War. By virtually every measure of bank health—capital adequacy, asset quality, liquidity, profitability, and prudent management—this free banking era was the soundest in American history, and national banking proved to be inferior to free banking.7
As to subsidies for internal improvements, although private entrepreneurs financed and built most roads and many canals,8 state governments intervened in the 1820s to subsidize canal building—amending their constitutions to do so.9 However, most state-funded canals went unfinished, generated little to no income, or went bankrupt. As a result, by 1860 most state constitutions had been amended again, this time to prohibit such subsidies.10 After the Civil War, federal subsidies for the transcontinental railroads caused similar problems—as well as corruption. Further, such subsidies proved to be a hindrance to, rather than a precondition of, a thriving railroad industry: James Jerome Hill’s Great Northern, the most successful of the transcontinental railroads, was built without any subsidies or land grants.11
The foregoing interventions, though impractical, were motivated in part by a desire to promote the development of business and industry. But lurking on the periphery, growing in popularity, and poised to fuel further government interference in the marketplace was the ideology of collectivism—the notion that the individual must be subordinated to the collective or the “common good.” Economist Daniel Raymond stated this idea in his 1820 textbook: “it is the duty of every citizen to forgo his own private advantage for the public good.”12 As the 19th century progressed, this idea was increasingly cited as a justification for government intervention. One of the most important instances was the Supreme Court’s decision in Munn v. Illinois (1876). In the majority opinion, Chief Justice Morrison Waite declared:
Property does become clothed with a public interest when used in a manner to make it of public consequence. . . . When, therefore, one devotes his property to a use in which the public has an interest, he, in effect, grants to the public an interest in that use, and must submit to be controlled by the public for the common good. . . .13
Although the ruling applied only to state regulation, Munn undermined the sanctity of private property rights by establishing the precedent that property “clothed with a public interest” (i.e., any property related to business) is subject to government regulation and control. As a result, Munn helped pave the way for the two major assertions of federal control over the economy—the Interstate Commerce Act and the Sherman Antitrust Act—that would come in the Gilded Age.14 . . .