Author’s note: The following is adapted from a chapter of my book in progress, “The Inductive Method in Physics.” Whereas my article “The 19th-Century Atomic War” (TOS, Summer 2006) focused on the opposition to the atomic theory that arose from positivist philosophy, this article focuses on the evidence for the atomic theory and the epistemological criteria of proof. It is necessary to repeat some material from my earlier articles in TOS, but the repetition is confined mainly to the first few pages below.

Scientists need objective standards for evaluating theories. Nowhere is this need more apparent than in the strange history of the atomic theory of matter. Prior to the 19th century, there was little evidence for the theory—yet many natural philosophers believed that matter was made of atoms, and some even wasted their time constructing imaginative stories about the nature of the fundamental particles (e.g., René Descartes’s physics). Then, during the 19th century, a bizarre reversal occurred: As strong evidence for the theory accumulated rapidly, many scientists rejected the idea of atoms and even crusaded against it.

Both of these errors—the dogmatic belief that was unsupported by evidence, followed by the dogmatic skepticism that ignored abundant evidence—were based on false theories of knowledge. The early atomists were rationalists; they believed that knowledge can be acquired by reason alone, independent of sensory data. The 19th-century skeptics were modern empiricists; they believed that knowledge is merely a description of sensory data and therefore references to non-observable entities are meaningless. But scientific knowledge is neither the floating abstractions of rationalists nor the perceptual-level descriptions of empiricists; it is the grasp of causal relationships identified by means of the inductive method. In this article, we will see how the atomic nature of matter was identified as the fundamental cause that explains a wide range of narrower laws.

If we follow the idea of atoms from ancient Greece to the 19th century, one remarkable fact stands out. So long as the atomic theory was not induced from scientific data, it was entirely useless. For more than two millennia, scientists were unable to make any predictions or to devise any experiments based on the theory. It explained nothing and integrated nothing. Because the Greek idea of atoms did not derive from observed facts, it remained isolated from the real knowledge of those who investigated nature. If one tries to think about the implications of an arbitrary idea, one simply draws a blank; implications depend upon connections to the rest of one’s knowledge.

Everyone, including scientists, must start with the evidence available to the senses, and there is no direct perceptual evidence for the existence of atoms. On the perceptual level, matter appears to be continuous. At the early stages of science, questions about the ultimate, irreducible properties of matter—including the question of whether it is discrete on some imperceptible scale—do not legitimately arise. The questions that do arise from observations are very challenging: How do bodies move? What forces can they exert on each other? How do they change when heated or cooled? Why do objects appear colored, and how is colored light related to ordinary white light? When different materials react, what transformations can they undergo, and under what circumstances?

By 1800, after centuries of investigating such questions, scientists finally had the advanced knowledge that made the question of atoms meaningful and the answer possible. When the idea of atoms arose from observed facts, scientists had a context in which they could think about it, and therefore they could derive implications, make predictions, and design experiments. The result was a sudden flurry of scientific activity that quickly revealed an enormous depth and range of evidence in favor of the atomic composition of matter.

Chemical Elements and Atoms

Prior to the Enlightenment, there was no science of chemistry. There was practical knowledge about the extraction of metals, the synthesis of glass, and the dyeing of clothes. There were also premature attempts to reduce the bewildering variety of known materials to a few basic elements. The Greeks had supposed that all terrestrial matter was made of four elements: earth, water, air, and fire. Later, some alchemists tried to reduce “earth” to salt, mercury, and sulfur. But these early attempts were empty speculation, not scientific theory; such ideas were unsupported by the observations and incapable of explaining them.

The situation changed dramatically in the second half of the 18th century, when the method that had led to such spectacular success in physics was applied to chemistry. The father of modern chemistry, Antoine Lavoisier, wrote in a letter to Benjamin Franklin that his aim was “to follow as much as possible the torch of observation and experiment.” He added: “This course, which has not yet been followed in chemistry, led me to plan my book according to an absolutely new scheme, and chemistry has been brought much closer to experimental physics.”1

The first step in the science of chemistry was to make a clear distinction between pure substances and mixtures. Unlike mixtures, pure substances have well-defined and invariant properties. Under the same conditions, every sample of a pure substance will melt (or boil) at precisely the same temperature. Every such sample has the same hardness and the same mass density, and a given amount of heat will always cause the same rise in temperature (for a unit mass). Furthermore, when a portion of a substance undergoes a chemical reaction, the properties of the remaining part do not change. Thus the concept “substance” relied on the prior conceptualization of various physical and chemical properties, and the knowledge of how to measure those properties. The concept “mixture” could then be defined as a material composed of two or more substances.

The next key step was the division of substances into elements and compounds. This was made possible by the discovery that mass is conserved in chemical reactions, that is, the total weight of the reactants always equals the total weight of the products. It was found that some substances can be decomposed into two or more other substances with the same total weight, whereas others resist all such attempts at chemical decomposition. Those that cannot be decomposed are elements; those that can be decomposed are compounds—substances made of two or more elements. Armed with the principle of mass conservation and the method of quantitative analysis, chemistry was finally freed from arbitrary conjecture; the weight scale gave an objective verdict.

The “elements” of the Greeks could not withstand this new quantitative method. In 1774, air was discovered to be a mixture of nitrogen and oxygen. At about the same time, Lavoisier proved that combustion was the result of combining substances with oxygen, not the release of elementary “fire.” A few years later, water was shown to be a compound of two elements, hydrogen and oxygen. The remaining Greek element, earth, was shown to consist of many different substances. In total, chemists of the 18th century identified more than twenty elements.

Much of the confusion that had plagued early chemistry was built into its terminology. The same name was often used for different substances (e.g., all gases were referred to as various modifications of “air”). Conversely, different names were sometimes used to refer to the same substance, depending on how it was synthesized (e.g., the element antimony had at least four names). In other cases, compounds were named after their discoverer or the place where they were found; such names gave no clue to the composition of the substance. The situation was made worse by the legacy of the alchemists, who had viewed themselves as a secret cult and therefore used terminology that was intentionally obscure (e.g., “green lion,” “star regulus of Mars”).

Lavoisier took the lead in bringing order to this chaos. He recognized that true generalizations can be reached and expressed only by means of an objective language. He originated many of the modern names for elements, and his names for compounds identified their constituent elements. Substances were placed into wider groups (e.g., acids, alkalis, salts) according to their essential properties. He understood that concepts are not arbitrary conventions; they are integrations of similar particulars, and the groupings must be based on the facts. A word that refers haphazardly to a collection of dissimilar things can give rise only to error and confusion. On the other hand, Lavoisier noted: “A well-composed language . . . will not allow the teachers of chemistry to deviate from the course of nature; either they must reject the nomenclature or they must irresistibly follow the course marked out by it. The logic of the sciences is thus essentially dependent on their language.”2 He presented his new language—the chemical language we still use today—in a landmark book, Elements of Chemistry, published in 1789.

With the foundation provided by a quantitative method and an objective language, the chemists who followed Lavoisier made rapid progress in understanding how elements combine to form compounds. The next crucial discovery was made by Joseph Louis Proust, who devoted many years to studying various compounds of metals (e.g., carbonates, oxides, sulfides). In 1799, he announced the law of constant composition, which states that different samples of a compound always contain the same elements in the same proportions by mass. For example, he showed that copper carbonate always contains copper, oxygen, and carbon in the same mass ratio (roughly, five to four to one), regardless of how the sample was prepared in the laboratory or how it was isolated from nature.

Initially, the evidence supporting Proust’s law was strong but not conclusive. Another French chemist, Claude Louis Berthollet, claimed to find counterexamples. He pointed out that lead can react with a variable amount of oxygen, resulting in a material that changes color in a continuous manner. A similar example is provided by mercury dissolved in nitric acid, which also reacts with a variable amount of oxygen. However, Proust carefully analyzed these cases and showed that the products were mixtures, not compounds; lead forms three different oxides, and the mercury was reacting to form two distinct salts. By about 1805, after numerous alleged counterexamples had been identified as mixtures of two or more compounds, chemists accepted the law of constant composition.

Chemists of this period discovered that several other metals, like lead, combine with oxygen to form more than one compound (e.g., two oxides of tin, two of copper, and three of iron). Furthermore, this phenomenon was not limited to metal oxides; chemists identified two different gases made of carbon and oxygen, and another two gases made of carbon and hydrogen. When they carefully measured the weights of the combining elements in such cases, a crucial pattern emerged.

In 1803, John Dalton analyzed three gases composed of nitrogen and oxygen. The first is a colorless gas that has a pleasant odor and the capacity to cause hysterical laughter when inhaled; the second is colorless, nearly odorless, and has a significantly lower mass density; the third has the highest mass density of the three and a deep brown color at high temperatures. Quantitative analysis showed that the three gases are also distinguished by the relative weights of the two combining elements. If we consider samples of each gas that contain 1.75 grams of nitrogen, we find that the laughing gas contains one gram of oxygen, the second gas contains two grams, and the third gas contains four grams. Dalton found a similar result when he analyzed two different gases that are both composed of carbon and hydrogen; for samples containing the same weight of carbon, the weight of hydrogen contained in one of the gases is precisely twice that of the other. On the basis of such data, Dalton arrived at a new law: When two elements combine to form more than one compound, the weights of one element that combine with identical weights of the other are in simple multiple proportions.
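In modern notation, which Dalton did not yet possess (so this is a retrospective illustration rather than his own reasoning), the arithmetic corresponds to the formulas now assigned to these gases:

\[
\mathrm{N_2O}:\ \frac{m_\mathrm{O}}{m_\mathrm{N}}=\frac{16}{28}, \qquad
\mathrm{NO}:\ \frac{16}{14}, \qquad
\mathrm{NO_2}:\ \frac{32}{14},
\]

so a sample containing 1.75 grams of nitrogen holds 1, 2, or 4 grams of oxygen, respectively. The oxygen weights are whole-number multiples of one another, never fractions in between.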

As was the case with the law of constant composition, chemists did not immediately accept the law of multiple proportions when Dalton published his book, A New System of Chemical Philosophy, in 1808. There were three problems. First, more accurate data was needed to verify the integer mass ratios. Second, more instances of the law were needed, spanning the range from light gases to heavy metal compounds (Dalton had studied only a few gases). Third, apparent violations of the law needed to be resolved by showing that such cases always involve mixtures, not compounds. Thanks in large part to the outstanding work of a Swedish chemist, Jacob Berzelius, all three problems were solved in less than a decade. By 1816, Berzelius had proven the law of multiple proportions beyond any reasonable doubt.

This law gave the first clear evidence for the atomic theory of matter. It implies that when elements combine into compounds, they do so in discrete units of mass. A compound may contain one unit of an element, or two, or three, but never 2.63 such units. This is precisely what one would expect if each chemical element is composed of atoms with identical masses. The discrete nature of matter had finally revealed itself in observations.

The concept “atom” that emerged from observations differed from the old idea based on deductions from floating abstractions. In ancient Greece, the “atom” had been defined as the ultimate, immutable, irreducible unit of matter. However, Lavoisier pointed out that the Greek idea was vacuous because nothing was known about such ultimate particles. In contrast, the new scientific concept that took shape in the early 19th century was that of the chemical atom. Scientists came to understand that an atom had to be redefined as the smallest particle of an element that can enter into a chemical combination. The question of whether it is possible to break these chemical atoms into smaller constituents by non-chemical means had to be put aside and left open. With this understanding, the concept “atom” was given real content for the first time.

At about the same time that Dalton’s book was published, a French chemist named Joseph Louis Gay-Lussac discovered another law manifesting the discrete nature of chemical elements. Gay-Lussac found that the volumes of gases involved in a reaction can always be expressed as a ratio of small whole numbers. At a temperature above the boiling point of water, for example, one liter of oxygen will combine with two liters of hydrogen to give exactly two liters of steam. He cited several other instances of the law; for example, one liter of nitrogen will combine with three liters of hydrogen to give exactly two liters of ammonia gas; or, one liter of nitrogen can react with one liter of oxygen to produce two liters of nitric oxide.

Although Gay-Lussac described his law of combining gas volumes as “very favorable to the [atomic] theory,” he did not make the implications explicit.3 That task was left to an Italian scientist named Amedeo Avogadro.

A substance, according to the atomic theory, is composed of molecules—particles that consist of one or more atoms. Considered at the microscopic level, a chemical reaction occurs when a small number of molecules (usually two) come together and their atoms rearrange, thereby producing a small number of different molecules. Thus if we compare the number of molecules of each reactant and product, they must exist in ratios of small whole numbers. This is precisely what Gay-Lussac found for the volumes of gases involved in a reaction. Therefore, Avogadro reasoned, there must be a direct relationship between volume and number of molecules. In 1811, he proposed his hypothesis: Equal volumes of gases (at the same temperature and pressure) contain the same number of molecules. (Here the term “hypothesis” refers not to an arbitrary proposition but to one induced from evidence that is not yet conclusive.)

Avogadro’s hypothesis has extraordinary implications. It relates volume ratios—a measurable, macroscopic quantity—to the number of molecules involved in each individual, microscopic chemical reaction. Thus the fact that two liters of hydrogen react with one liter of oxygen to produce two liters of steam implies, according to Avogadro, that two hydrogen molecules react with one oxygen molecule to produce two water molecules. In many cases, his hypothesis enabled him to determine the number of atoms in each molecule. By comparing the various known reactions, he concluded that the common gases are diatomic; a hydrogen molecule consists of two hydrogen atoms bound together, and the same is true for oxygen and nitrogen. From the volume ratios and his hypothesis, Avogadro concluded that a water molecule is composed of two hydrogen atoms combined with an oxygen atom, and an ammonia molecule is three hydrogen atoms combined with a nitrogen atom. The measurement of macroscopic variables such as mass and volume was unveiling the hidden world of atoms and molecules.
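In modern chemical notation (again a retrospective convenience, not Avogadro's own symbolism), the steam reaction reads

\[
2\,\mathrm{H_2} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{H_2O},
\]

and the coefficients 2 : 1 : 2 are exactly the measured volume ratios. The bookkeeping balances only if hydrogen and oxygen are diatomic and each water molecule contains two hydrogen atoms and one oxygen atom.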

Scientists soon discovered another macroscopic variable that shed light on the microscopic realm. The “specific heat” of a substance is defined as the amount of heat required to raise the temperature of one gram of the substance by one degree. It was known that the specific heats of different elements vary over a wide range. When scientists compared equal weights of copper and lead, for example, they found that in order to cause the same rise in temperature, the copper requires roughly three times as much heat as the lead.

In 1819, two French physicists, Pierre Dulong and Alexis Petit, measured the specific heats of many pure metals and discovered a remarkable relationship. Rather than comparing equal weights of the different elements, they decided to compare samples with equal numbers of atoms. So they multiplied the measured specific heats by the relative atomic weights that had been determined by chemists. This simple calculation gives the specific heat for a particular number of atoms, rather than the specific heat for a unit mass.

When Dulong and Petit performed their careful measurements of specific heat and multiplied by the atomic weights, they arrived at very nearly the same number for all of the metallic elements they tested. In other words, they found that equal numbers of atoms absorb equal amounts of heat. Thus the wide variations in the specific heats of metals can be explained simply by differences in the number of atoms. This is impressive evidence for the atomic composition of matter.
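The pattern is easy to check with modern rounded values (an illustration in code, not Dulong and Petit's own table; the numbers are approximate):

```python
# Rough modern values (illustrative only): specific heat in calories per gram
# per degree Celsius, paired with the relative atomic weight.
metals = {
    "copper": (0.092, 63.5),
    "iron":   (0.107, 55.8),
    "silver": (0.056, 107.9),
    "lead":   (0.031, 207.2),
}

for name, (specific_heat, atomic_weight) in metals.items():
    # Heat capacity per unit mass times atomic weight gives the heat capacity
    # of a sample containing a fixed number of atoms.
    atomic_heat = specific_heat * atomic_weight
    print(f"{name:>6}: {atomic_heat:.1f}")

# Every product comes out near 6, even though the specific heats themselves
# differ by a factor of three.
```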

The Dulong-Petit law was shown to be approximately true for the vast majority of solid elements. However, one major exception was discovered: The specific heat of carbon is far lower than that predicted by the law. But because carbon was known to have other unusual properties, it was reasonable for scientists to accept the causal relationship between heat capacity and number of atoms, and to regard carbon as a special case in which additional, unknown causal factors played a major role.

One of the unusual properties of carbon had been discovered a few years earlier by Humphry Davy, an English chemist. The common form of pure carbon is the black, chalky substance we call graphite. However, carbon also exists in another solid form. In 1814, Davy used an enormous magnifying glass to heat and burn a diamond—and, to his amazement, it disintegrated into carbon dust. He had proven that graphite and diamond—materials with radically different properties—are composed of the same element. This is a rare phenomenon, but not unique; chemists also knew that tin exists as both a gray powder and a white, malleable metal. Different forms of the same element are called “allotropes.”

In the next decade, chemists discovered a similar phenomenon among compounds. In 1823, the German chemist Justus von Liebig carried out a quantitative analysis of silver fulminate; at about the same time, his colleague Friedrich Wohler analyzed silver cyanate. These compounds have very different chemical properties; the fulminate reacts explosively, whereas the cyanate does not. Yet, when the two chemists compared notes, they found that both compounds are made of silver, oxygen, carbon, and nitrogen in identical proportions. Soon thereafter, Wohler proved that ammonium cyanate and urea—which, again, have very different properties—are composed of identical proportions of hydrogen, nitrogen, carbon, and oxygen. Several other such cases were discovered in the 1820s. Different chemical compounds made of the same elements in identical proportions were named “isomers,” a term that derives from the Greek words meaning “equal parts.”

The existence of allotropes and isomers was recognized to be problematic for any continuum theory of matter. Such a theory would seem to imply that the same elements mixed together in the same proportions should always result in the same material. However, according to the atomic theory, the chemical properties of a molecule are determined by the elements and their arrangement; it is not merely the proportions of elements that are relevant, but also which atoms are attached to which other atoms and in what spatial configuration. Thus when Berzelius discussed isomers, he stated the implication clearly: “It would seem as if the simple atoms of which substances are composed may be united with each other in different ways.”4 As we shall see, isomers were destined to play an important role in the determination of many molecular structures and in the proof of the atomic theory.

About a decade after the discovery of isomers, yet another field provided evidence for the atomic composition of matter. Since 1800, when Alessandro Volta announced his invention of the electric battery, scientists had investigated the relation between chemical reactions and electricity. A battery is simply an arrangement of electrodes between which a chemical reaction occurs spontaneously, generating an electric current in any conductor that connects the electrodes. Scientists quickly discovered that they could reverse this process: Under specific conditions, an electric current can be used to charge electrodes and stimulate a chemical reaction in the solution between them. This reverse process is called electrolysis.

Recall that Dulong and Petit had discovered that heat capacity is proportional not to the mass but to the number of molecules contained in the sample. In the early 1830s, Michael Faraday found that a similar relationship exists between electricity and number of molecules. Consider, for example, an electrolytic solution of hydrochloric acid. Faraday discovered that a specific amount of electricity passing through the solution generates one gram of hydrogen gas at the negative electrode and thirty-six grams of chlorine gas at the positive electrode. It was known that hydrochloric acid contains hydrogen and chlorine in the mass ratio of one to thirty-six. Faraday concluded that the molecules of hydrochloric acid are broken into positively charged hydrogen “ions” (attracted to the negative electrode) and negatively charged chlorine “ions” (attracted to the positive electrode), with each atom carrying an equal magnitude of electric charge through the solution.

Faraday reached this view after conducting experiments with many different electrolytic solutions. In each case, he used the relative atomic weights to show that the quantity of electricity is always proportional to the number of molecules reacting at each electrode. “[T]here is an immensity of facts,” he wrote, “which justify us in believing that the atoms of matter are in some way endowed or associated with electrical powers. . . .”5 And, he added, the results of his electrolysis experiments seem to imply that “the atoms of bodies . . . have equal quantities of electricity naturally associated with them.”6
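In the modern form that later grew out of these experiments, Faraday's law of electrolysis can be written

\[
m \;=\; \frac{Q}{F}\cdot\frac{M}{z},
\]

where $m$ is the mass liberated at an electrode, $Q$ the quantity of electricity passed through the solution, $M$ the atomic weight, $z$ the number of charge units carried by each ion, and $F$ a universal constant, the same for every substance. The mass deposited per unit of charge depends only on $M/z$, which is just what one would expect if charge travels through the solution in discrete units attached to individual atoms.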

Actually, some of the elements in Faraday’s experiments were singly ionized, carrying one unit of electric charge, whereas others were doubly ionized, carrying two units of electric charge. Faraday overlooked this fact because in the cases of doubly ionized elements, he used atomic weights that were half of the correct value, an error that caused him to underestimate the electric charge per atom by a factor of two. In conjunction with the correct atomic weights, his experiments would have led him to the crucial discovery that the electric charges of ions vary in discrete units.

At this stage, there was still a great deal of controversy and confusion about the relative weights of atoms. In an invalid argument based mainly on “simplicity,” Dalton had concluded that a molecule of water contains one hydrogen atom and one oxygen atom. If hydrogen is assigned a weight of one unit, this incorrect molecular formula implies that the weight of oxygen is eight rather than sixteen. Because the weights of many other elements were measured relative to oxygen, this erroneous factor of two propagated through the table of atomic weights like an infectious disease.
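The arithmetic is simple. Water contains eight grams of oxygen for every gram of hydrogen, so

\[
\text{formula HO (Dalton)} \;\Rightarrow\; \text{weight of O} = 8, \qquad
\text{formula H}_2\text{O} \;\Rightarrow\; \text{weight of O} = 16.
\]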

The cure for this disease was Avogadro’s hypothesis. As we have already seen, Avogadro’s idea led to the correct molecular formula for water and thus to the correct atomic weight for oxygen. Unfortunately, two false premises prevented many scientists from grasping this crucial truth.

The first false premise was a hasty generalization regarding the nature of chemical bonding. Because compounds can be decomposed by electrolysis into positively and negatively charged ions, many scientists concluded that chemical bonding can be explained simply as an electrical attraction. For example, a molecule of hydrochloric acid was assumed to be held together by the attractive force between the electropositive hydrogen and the electronegative chlorine. All chemical bonding, according to this idea, must occur between atoms with different “electrical affinities,” so that an attractive force can arise between the positive and negative atoms.

Avogadro’s hypothesis, however, implies that the molecules of many gaseous elements are diatomic. For instance, Avogadro claimed that two atoms of hydrogen—both electropositive—bind together, and two atoms of chlorine—both electronegative—bind together. But, because like charges repel, the existence of such diatomic molecules seemed impossible according to the electrical theory of chemical bonding. As a result, many chemists—including leaders in the field such as Berzelius and Davy—rejected Avogadro’s hypothesis.

The second false premise concerned the physical nature of gases. Many scientists, including Dalton, thought that the pressure and elasticity of gases implied the existence of a repulsive force between the identical atoms of the gas. Dalton proved that in a gaseous mixture each elemental gas behaves independently; the total pressure is simply the sum of the individual pressures. Thus he concluded that there was no repulsive force between unlike gas atoms. However, because he thought there was such a force between identical atoms, he could not accept Avogadro’s hypothesis and its implication of diatomic molecules. How could two atoms that strongly repelled each other come together and bond?

Without Avogadro’s hypothesis, there was no explanation for the law of combining gas volumes and no way to arrive at unambiguous atomic weights. The wrong atomic weights led to myriad problems, from incorrect molecular formulas to clashes with the Dulong-Petit law of heat capacities. Thus the explanatory power and integrating function of the atomic theory seemed to be severely undercut. It took decades to solve this problem, despite the fact that the pieces of the solution were available the whole time. The reason is that chemists alone could not validate the pieces, put them together, and reach a fundamental theory of matter. As it turned out, they needed help from those who studied the fundamental science of matter—the physicists.

The Kinetic Theory of Gases

The study of heat and gases led physicists to the atomic theory, and eventually brought the sciences of physics and chemistry together into a unified whole.

In the 18th century, heat was widely believed to be a fluid (called “caloric”) that flowed from hot to cold bodies. At the end of the century, however, two experiments provided strong evidence that heat was not a substance, but rather internal motion of the matter composing the bodies. The first experiment was performed by Benjamin Thompson, known as Count Rumford; the second, by Humphry Davy.

In 1798, Rumford was supervising the manufacture of cannons at the military arsenal in Munich. He was surprised by the enormous amount of heat generated in the process of boring the cannons, and he decided to investigate the phenomenon. He placed a brass gun barrel in a wooden box containing cold water and bored it with a blunt steel drill. After about two and one-half hours, the water began to boil. The apparently inexhaustible supply of heat led Rumford to reject the caloric theory. As he put it: “Anything which any insulated body, or system of bodies, can continue to furnish without limitation, cannot possibly be a material substance.”7 In the experiment, what was being supplied to the system of bodies was motion; thus Rumford concluded that heat must be a form of internal motion.

In the following year, Davy was led to the same conclusion by means of an experiment in which the heated bodies were more carefully insulated. In an evacuated glass container that was kept very cold by means of an ammonia-ice bath, he contrived a means of vigorously rubbing together two blocks of ice. The ice melted, despite the fact that there was no possible source of “fluid heat.” Davy stated his conclusion emphatically: “It has then been experimentally demonstrated that caloric, or the matter of heat, does not exist.” Agreeing with Rumford, he added: “Heat . . . may be defined as a peculiar motion, probably a vibration, of the corpuscles of bodies. . . .”8

Although these experiments pointed the way toward a new understanding of heat, they did not influence physics significantly in the early 19th century for two reasons. First, many physicists were reluctant to give up the concept “caloric” because it seemed to explain the similarities between heat conduction and fluid flow. Second, the merely qualitative idea that heat was some unspecified form of internal motion held little explanatory power. So, until the quantitative relationship between heat and motion was identified, the idea lay dormant; it could not be integrated with mechanics, and its implications could not be investigated and exploited. When the idea was finally given mathematical form, it quickly led to crucial discoveries.

The quantitative relationship was established in a brilliant series of experiments conducted by James Joule in the mid-1840s. Joule designed an apparatus in which falling weights were used to turn paddle wheels in a container of water. In this arrangement, the external motion of the weights is converted into heat, raising the temperature of the water. He showed that the rise in temperature is proportional to the product of the weights and the distance they fall.

The paddle wheels were driven by two thirty-pound lead weights falling repeatedly about five feet. The temperature of the water was measured by thermometers that were accurate to within one-hundredth of a degree Fahrenheit, and Joule was extraordinarily careful to account for possible sources of error in the experiment (the heat absorbed by the container, the loss of motion when the weights impacted the floor, etc.). He concluded that a 772-pound weight falling a distance of one foot is capable of raising the temperature of one pound of water by one degree Fahrenheit, a result that is impressively close to the correct value of 778 pounds. In addition, he performed similar experiments using sperm oil and mercury instead of water, and showed that the conversion factor between external motion and heat is always the same.

Joule’s quantitative analysis could be integrated with Newtonian mechanics to provide insight into the specific function of motion that is related to a rise in temperature. In free fall, the product of the body’s weight and the distance fallen is equal to half the body’s mass multiplied by the square of its speed. Physicists refer to this function of motion as the body’s “kinetic energy.” Joule had proven that when the temperature of a material is raised by the conversion of motion into heat, the amount of external kinetic energy expended is proportional to the rise in temperature. Thus if temperature can be interpreted as a measure of the internal kinetic energy of molecules, then this process is simply one of converting external to internal kinetic energy.
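In modern symbols (a sketch of the integration, not Joule's own notation): a body of weight $W$ falling through a height $h$ acquires a speed $v$ such that

\[
W h \;=\; \tfrac{1}{2}\,m v^{2},
\]

and Joule's experiments established that when this motion is destroyed and converted into heat $\Delta Q$, the two are related by a fixed factor, $W h = J\,\Delta Q$, with $J$ equal to about 772 foot-pounds per British thermal unit by his measurement (778 by the modern value).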

J. J. Waterston, a Scottish physicist familiar with Joule’s work, took the next crucial step. Waterston turned his attention to the absorption of heat by gases, rather than by liquids or solids. It was known that gas volumes are enormous compared to the corresponding liquid volumes; for example, at standard atmospheric pressure and boiling temperature, steam occupies a volume about fifteen hundred times that of the same mass of liquid water. According to the atomic theory, this means that the distance between gas molecules is very large compared to their size. Thus, in opposition to Dalton, Waterston argued that any forces between the molecules must be negligibly small in the gaseous state. If so, each molecule will move with constant velocity until it collides with another gas molecule or with the walls of the container.

Waterston could also make a reasonable hypothesis about the nature of such collisions. It was known that when heat is transferred from hot bodies to cold bodies, the total amount of heat is conserved. It was also known that, in the case of elastic collisions, kinetic energy is conserved. So, if heat is a measure of the internal kinetic energy of molecules, then molecular collisions must be elastic in nature. Thus Waterston arrived at his simple model of gases: The molecules move freely except at impact, when they change their motion in such a way that both momentum and kinetic energy are conserved.9

On the basis of this model, Waterston was able to derive a law relating the pressure, volume, and energy of a gas. He showed that the product of pressure and volume is proportional to the number of molecules and the average kinetic energy of the molecules. This result integrated perfectly with what was already known about heat and gases. If temperature is equated with the average kinetic energy of molecules, then Waterston’s law says that the product of pressure and volume is proportional to temperature—which is precisely the gas law that had been proven experimentally by Jacques Charles almost sixty years earlier. Furthermore, Waterston’s law implies that at constant temperature the pressure of a gas is proportional to its mass density—a relationship that had also been proven experimentally decades earlier. Finally, the law implies that equal volumes of gases must contain equal numbers of molecules, provided that the pressure and temperature are the same: Avogadro’s hypothesis follows necessarily from this molecular model of gases and Newton’s laws. The explanatory power of Waterston’s analysis was astonishing. We can understand why he wrote: “It seems almost impossible now to escape from the inference that heat is essentially molecular [kinetic energy].”10
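In modern notation (a retrospective summary, not Waterston's own formulation), the law takes the form

\[
PV \;=\; \tfrac{2}{3}\,N\,\overline{\left(\tfrac{1}{2}mv^{2}\right)}:
\]

pressure times volume equals two-thirds of the number of molecules multiplied by their average kinetic energy. If the average kinetic energy is taken as the measure of temperature, this single equation contains the gas law mentioned above ($PV \propto T$), the proportionality of pressure to density at fixed temperature, and Avogadro's hypothesis (equal pressure, volume, and temperature imply equal $N$).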

The atomic law of gases had an implication, however, that initially struck many physicists as problematic. The law relates the average speed of gas molecules to the pressure and mass density of the gas, both of which can be measured. Thus the speed of the molecules could be calculated, and it was surprisingly high. For example, at standard temperature and pressure, the law implies that air molecules move at an average speed of nearly five hundred meters per second. However, it was known that when a gas with a strong odor is generated at one corner of a laboratory, a couple of minutes may pass before it is detected at the far side of the room. If gas molecules travel so quickly, what is the reason for the time delay?
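The figure of roughly five hundred meters per second follows directly from the law. Written per unit volume it gives $P = \tfrac{1}{3}\,\rho\,\overline{v^{2}}$, so with modern values for air at standard conditions (illustrative numbers):

\[
v_{\mathrm{rms}} \;=\; \sqrt{\frac{3P}{\rho}} \;\approx\; \sqrt{\frac{3\times 1.0\times 10^{5}\ \mathrm{Pa}}{1.3\ \mathrm{kg/m^{3}}}} \;\approx\; 480\ \mathrm{m/s}.
\]

That is fast enough to cross a laboratory in a small fraction of a second, which makes the observed delay all the more puzzling.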

The answer was first given in an 1858 paper by Rudolph Clausius, a German physicist. Clausius suggested that molecules travel only a very short distance before impacting other molecules. While traversing a room, the progress of a molecule is greatly slowed by billions of collisions and subsequent changes of direction. Thus he introduced the idea of the “mean free path” of a molecule: the average distance that a molecule travels before suffering a collision. He estimated that this distance, although perhaps one thousand times greater than the diameter of the molecule, was still very small compared to the macroscopic dimensions of a laboratory.

The idea of “mean free path” was seized on by the greatest theoretical physicist of the 19th century, James Clerk Maxwell, and it played a key role in his development of the atomic theory of gases. Maxwell was able to derive equations relating the mean free path to rates of gaseous diffusion and heat conduction. He showed that the diffusion rate is proportional to the product of the average speed of the molecules and their mean free path, and that the rate of heat conduction is proportional to the product of speed, mean free path, and heat capacity. These equations were soon verified by experiment, and Maxwell declared with satisfaction: “The numerical results . . . agree in a very remarkable manner with the formula derived from the kinetic theory.”11

Although this explanation for rates of diffusion and heat conduction was very impressive, the single most remarkable prediction of Maxwell’s theory concerned the resistance that gases offer to bodies moving through them. He showed that the resistance is proportional to the product of the mass density of a gas, the mean free path, and the average speed of the molecules. Considered in isolation, this relationship does not seem so surprising. However, the mean free path is itself inversely proportional to the number of molecules per unit volume in the gas. Combining these two equations leads to the conclusion that the resistance is independent of density: The atomic theory predicts that the resistance of a gas will remain the same even when most of the gas molecules are removed.
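In modern symbols the cancellation is easy to display (a sketch under the same assumptions, not Maxwell's own notation). The resistance, or viscosity, goes as $\eta \propto \rho\,\bar{v}\,\lambda$, while for molecules of a given size the mean free path goes as $\lambda \propto 1/n \propto m/\rho$; hence

\[
\eta \;\propto\; \rho\,\bar{v}\,\frac{m}{\rho} \;=\; m\,\bar{v},
\]

and the density drops out entirely.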

At first glance, this result seems to violate common sense. In a letter to a colleague, Maxwell commented: “This is certainly very unexpected, that the friction should be as great in a rare as in a dense gas.”12 After some reflection, however, Maxwell grasped why this result emerged from the atomic theory. The force of resistance is caused by collisions between the gas molecules and the moving body. But only those molecules that are within about one mean free path of the body can collide with it; the more distant molecules are shielded from the body by collisions with other molecules. When a portion of the gas is removed, this has the effect of increasing the mean free path. So, despite the fact that the total number of molecules has been decreased, the number of molecules resisting the motion of the moving body—those within one mean free path—remains constant.

This prediction provided a crucial test for the kinetic theory of gases. In 1860, when Maxwell derived the result and searched for data to verify it, he found that no adequate experiment had been done. So he resolved to do the experiment himself. He used a torsion pendulum in which the back and forth rotation of glass discs was resisted by air friction. The pendulum was enclosed in a glass container, and a pump was used to vary the air pressure in the container (which was measured by a mercury barometer). It was a dramatic triumph for the atomic theory when Maxwell proved that the damping rate of the pendulum remained constant as the air pressure was varied from one-half inch to thirty inches of mercury. The results were published in 1866. A generation later, one physicist commented: “[I]n the whole range of science there is no more beautiful or telling discovery than that gaseous [resistance] is the same at all densities.”13

The theory of gases had yet another interesting implication. It was shown that the diameter of molecules can be expressed in terms of two quantities: the mean free path in the gaseous state; and the “condensation coefficient,” which is the ratio of liquid to gas volume at the boiling point. The mean free path could be determined from the results of experiments measuring gaseous resistance, diffusion, or heat conduction. The condensation coefficients for several gases had also been measured. In 1865, the Austrian physicist Joseph Loschmidt put these data together and came to the conclusion that molecules are about one nanometer in diameter. A few years later, Maxwell improved the calculation and arrived at a somewhat smaller and more accurate estimate of molecular diameters.
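A back-of-the-envelope version of the reasoning (a sketch under crude assumptions, not Loschmidt's exact formula): if the liquid is imagined as the molecules themselves packed together as spheres of diameter $d$, the condensation coefficient is roughly $\varepsilon \approx n\,\tfrac{\pi}{6}d^{3}$, and the mean free path is $\lambda \approx 1/(\sqrt{2}\,n\pi d^{2})$. Eliminating the number density $n$ gives

\[
d \;\approx\; 6\sqrt{2}\,\varepsilon\,\lambda,
\]

so two measurable quantities fix the diameter.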

In 1870, William Thomson (a.k.a. Lord Kelvin) extended the analysis by showing that sizes of molecules could be estimated by several methods. To the above result, he added estimates based on the dispersion of light, contact electricity in metals, and capillary action in thin films such as soap bubbles. The results of the four independent methods were in agreement, and Kelvin noted that it was very impressive that such a vast range of data should all point to the same conclusion.

Physicists had done their part. And while they had been developing the atomic theory of gases, the chemists had not been idle. The middle decades of the 19th century were a period of extraordinary progress in chemistry, and the progress was dependent upon thinking in terms of atoms.

The Unification of Chemistry

Earlier, we left chemists facing a problem. They had discovered strong evidence in favor of the atomic theory, but false ideas concerning the nature of gases and chemical bonding had led many to deny the possibility of diatomic gas molecules—which led them to reject Avogadro’s hypothesis—which left them unable to arrive at the correct atomic weights—which led to a hornet’s nest of problems, including contradictions with the Dulong-Petit law of heat capacities. The fact that elemental gas molecules often consist of more than one atom had to be accepted before chemistry could free itself from the contradictions.

An important clue to resolving this problem came from the same type of experiments that had been used to justify the oversimplified ionic theory of bonding. When powerful batteries were employed in electrolysis experiments, some chemists noted an “odor of electricity” at the positive electrode. The same odor had been noticed previously in experiments that involved electrical discharges in air. Chemists realized that a new gas, which they named “ozone,” was being produced in such experiments.

The first breakthrough regarding the identity of this gas was made in 1845 by Jean de Marignac, a chemistry professor in Geneva. Marignac showed that ozone could be produced by an electrical discharge through ordinary oxygen gas, and he concluded that ozone must be an allotrope of oxygen. The existence of two very different oxygen gases seemed impossible to explain unless oxygen atoms could combine with other oxygen atoms. Here was an experimental result that directly contradicted the argument against diatomic gas molecules. Berzelius himself, one of the leading proponents of the ionic theory of bonding, recognized the importance of this discovery. It was no longer reasonable to deny that gas molecules could be combinations of identical atoms.

Thus the discovery of ozone, in combination with Waterston’s derivation of the gas law, should have led to the acceptance of Avogadro’s hypothesis in the 1840s. However, a consequence of the prevailing philosophical ideas was a strong bias against the atomic theory, and Waterston’s paper was never published.14 As a result, the widespread confusion about atomic weights persisted until 1856, when a German physicist, August Kroenig, repeated Waterston’s derivation and had it published.

Unlike Waterston, Kroenig did not explicitly point out that the atomic theory of gases provided a fundamental validation of Avogadro’s hypothesis. Nevertheless, at least one chemist grasped the implication. In 1858, Stanislao Cannizzaro presented the solution to the problem of atomic weights in a landmark paper titled “Sketch of a Course of Chemical Philosophy.” The paper is a model of clear thinking about an issue that had been mired in obscurity and contradictions for decades.

Cannizzaro showed how to integrate all the relevant data to arrive at one uniquely determined set of atomic weights. Avogadro’s hypothesis formed the centerpiece of his argument. Many of the most common elements—hydrogen, carbon, nitrogen, oxygen, sulfur, chlorine—combine in various ways to form gases. The vapor densities of the gases had been measured. Furthermore, in the compound gases, the percentage weight of each element was known. The product of the vapor density and the percentage weight gives the mass density of each element in the gas. Because the number of molecules is always the same for equal volumes (by Avogadro’s hypothesis), these mass densities are proportional to the number of atoms of the element that are contained in the gas molecule. Thus the atomic composition of gas molecules can be determined, and then the relative atomic weights of the elements can be deduced. Cannizzaro showed that if hydrogen (the lightest element) is assigned an atomic weight of one, then the weight of carbon is twelve, nitrogen is fourteen, oxygen is sixteen, and sulfur is thirty-two.
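The bookkeeping can be sketched in a few lines of code, here for carbon, using modern rounded values rather than Cannizzaro's actual table:

```python
# For each gaseous compound of carbon: its molecular weight on the scale H = 1
# (obtained from the measured vapor density via Avogadro's hypothesis) and the
# measured mass fraction of carbon it contains. Values are modern and rounded.
compounds = {
    "carbon monoxide":         (28.0, 0.429),
    "carbon dioxide":          (44.0, 0.273),
    "marsh gas (methane)":     (16.0, 0.750),
    "olefiant gas (ethylene)": (28.0, 0.857),
}

carbon_per_molecule = []
for name, (molecular_weight, carbon_fraction) in compounds.items():
    # Weight of carbon contained in one molecule of the compound.
    weight = molecular_weight * carbon_fraction
    carbon_per_molecule.append(weight)
    print(f"{name:>25}: {weight:.0f}")

# The results (12, 12, 12, 24, ...) are always whole-number multiples of the
# smallest value, which Cannizzaro took as the atomic weight of carbon.
print("atomic weight of carbon ~", round(min(carbon_per_molecule)))
```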

Once the atomic weights of these elements were known, the weights of most other elements could be determined relative to them. In cases where ambiguities persisted, Cannizzaro often resolved them by using the Dulong-Petit law of heat capacities. Typically, the candidate atomic weights differed by a factor of two, with one value conforming to the Dulong-Petit law and the other value contradicting it.

In 1860, Cannizzaro seized the opportunity to present his paper at a major conference held in Karlsruhe, Germany. This famous meeting became a turning point for 19th-century chemistry; within a few years, most chemists accepted the correct atomic weights. One professor of chemistry, Lothar Meyer, expressed his reaction to Cannizzaro’s paper in the following words: “It was as though scales fell from my eyes, doubt vanished, and was replaced by a feeling of peaceful certainty.”15

After their greatest obstacle had been removed, chemists made rapid progress over the next decade. With the correct atomic weights, they could determine the correct molecular formulas. A pattern emerged that led to the new concept “valence,” which refers to the capacity of an atom to combine with other atoms. For example, a carbon atom can combine with up to four other atoms, a nitrogen atom with three, an oxygen atom with two, and a hydrogen atom with one. Figuratively, valence can be thought of as the number of “hooks” an atom has for attaching to other atoms.

The concept “valence” was first introduced by the English chemist Edward Frankland, who had arrived at the idea while studying the various ways that carbon combines with metals.16 Frankland was one of the first chemists to make use of ball-and-wire models of molecular structure; in such models, the valence of an atom corresponds to the number of wires attached to the ball. In 1861, he wrote: “The behavior of the organo-metallic bodies teaches a doctrine which affects chemical compounds in general, and which may be called the doctrine of atomic saturation; each element is capable of combining with a certain number of atoms; and this number can never be exceeded. . . .”17 He first identified this “law of valency” in the 1850s, but in many cases the valences he initially assigned to atoms were wrong. In 1866, after the errors had been corrected, Frankland expressed his gratitude to Cannizzaro: “I do not forget how much this law in its present development owes to the labors of Cannizzaro. Indeed until the latter had placed the atomic weights of metallic elements upon their present consistent basis, the satisfactory development of the doctrine was impossible.”18

In 1869, an English journal published a review article about the atomic theory that referred to valence as “the new idea that is revolutionizing chemistry.”19 This was no exaggeration. Earlier that year, the Russian chemist Dmitry Mendeleev had proposed a systematic classification of the chemical elements, based on the properties of atomic weight and valence, “which unified and rationalized the whole effort of chemical investigation.”20

Several chemists had previously noted that natural groups of chemical elements possess similar properties. Six such groups (known at the time) are: (1) lithium, sodium, and potassium; (2) calcium, strontium, and barium; (3) fluorine, chlorine, bromine, and iodine; (4) oxygen, sulfur, selenium, and tellurium; (5) nitrogen, phosphorus, arsenic, and antimony; and (6) carbon, silicon, and tin. In each group, the elements have the same valence and similar electrical affinities.

Mendeleev constructed a table of all the known elements in order of increasing atomic weight, and he noticed that the similar elements recur at definite intervals. Thus he announced his “periodic law”: The properties of elements are periodic functions of their atomic weights. In the modern version of his table, which is presented in every introductory course in chemistry, the elements appear in order of ascending atomic weight in the horizontal rows; the vertical columns contain elements of the same valence.

Mendeleev pointed out that “vacant places occur for elements which perhaps shall be discovered in the course of time.” By grasping how other properties of an element are related to its atomic weight and valence, he realized that “it is possible to foretell the properties of an element still unknown.” For example, a gap under aluminum was filled when gallium was discovered in 1875; another gap under silicon was filled when germanium was discovered in 1886. Mendeleev predicted the properties of both these elements—and his predictions were remarkably accurate. With the construction of the periodic table, the chemical elements no longer stood apart in isolation; the atomic theory had made it possible to connect them together into an intelligible whole.21

The correct atomic weights and valences also led to breakthroughs in the new frontier of chemical investigation: the determination of molecular structure. In 1861, the Russian chemist Alexander Butlerov described the ambitious goal of this new research program: “Only one rational formula is possible for each compound, and when the general laws governing the dependence of chemical properties on chemical structure have been derived, this formula will represent all these properties.”22 It was not enough, Butlerov realized, to know the number of atoms and their identities; in order to understand the properties of a compound, the spatial arrangement of the atoms within the molecule must be determined. Among the first major triumphs of this program was the discovery of the molecular structure of benzene.

The existence of benzene had been known for decades. In 1825, the Portable Gas Company in London asked Michael Faraday to analyze a mysterious liquid by-product of the process that generated its illuminating gas. By distillation, he obtained the first sample of pure benzene, a compound that was destined to be very useful in both chemical theory and industrial practice. Faraday’s attempt at analysis, however, was not entirely successful. Without Avogadro’s hypothesis and with the assumption that the atomic weight of carbon is six rather than twelve, he arrived at the incorrect molecular formula of C2H.

After Cannizzaro’s work, the molecular formula for benzene was correctly identified as C6H6, but its structure was still a mystery. It was a very stable compound that could be converted into many derivatives without decomposing. This was surprising, given that the ratio of hydrogen to carbon was so low. In contrast, the compound acetylene (C2H2) has the same low hydrogen-carbon ratio and is highly reactive.

The study of benzene derivatives—compounds in which one or more of the hydrogen atoms are replaced by different atoms—revealed another interesting property. It was found, for example, that only one chlorobenzene compound is formed when a chlorine atom is substituted for any of the hydrogen atoms. Thus the hydrogen atoms must occupy indistinguishable positions in the structure, implying that the benzene molecule is highly symmetrical.

The molecular structure of benzene that explains both its stability and its symmetry was first proposed in 1861 by Loschmidt. The six carbon atoms form a symmetrical hexagonal ring, with one hydrogen atom attached to each carbon. The tetravalent nature of carbon implies that the ring must be held together by alternating single and double bonds; in this way, each carbon atom uses three of its bonds on adjacent carbon atoms and the fourth on the hydrogen atom.

Unfortunately, Loschmidt’s book was given only a small printing by an obscure publisher, and, consequently, few chemists read it. Most chemists learned about this proposed structure of benzene from a paper by the famous German chemist August Kekulé, which was published in 1865. Kekulé went much further than Loschmidt by conducting a lengthy study of many isomers of benzene derivatives. For example, he showed that dichlorobenzene, trichlorobenzene, and tetrachlorobenzene each exist in three different isomeric forms—exactly as predicted by the hexagonal ring structure. By 1872, Kekulé reported that “no example of isomerism among benzene derivatives had come to light which could not be completely explained by the difference in relative positions of the atoms substituted for hydrogen.”23 It was a landmark achievement: Chemists now had the knowledge and experimental techniques required to infer the spatial distribution of atoms in molecules. The molecules themselves could not be seen—but they could no longer hide, either.

The case of benzene was simplified by the fact that the molecule has a planar structure. In most molecules, the atoms are distributed in three dimensions. The study of three-dimensional molecular structures is called stereochemistry, and its origins can be traced back to the first great discovery of Louis Pasteur.

In 1846, Pasteur began a study of sodium ammonium tartrate, a crystalline substance that was known to be optically active (i.e., it rotates the plane of polarized light). When he viewed the small crystals with a magnifying glass, he noticed that they were subtly asymmetric; there was a small facet on one side that did not appear on the other side. He thought that this asymmetry might be the cause of the optical activity. So he decided to examine the crystals of sodium ammonium racemate, an optically inactive isomer of the tartrate. If the crystals of the racemate were symmetrical, he would have strong evidence that the asymmetry in the tartrate causes the rotation of the light.

Instead, Pasteur discovered that the racemic crystals came in two varieties, both asymmetrical, and each the mirror image of the other. Using tweezers, he painstakingly separated the two types of crystals. One of the crystals was identical to the tartrate, and it rotated the plane of polarized light in the same direction. The mirror-image crystal rotated the light by the same amount but in the opposite direction. The racemate is optically inactive because the effect of one crystal is canceled by the effect of its mirror image.

Why is this relevant to molecular structure? Because these isomeric compounds rotate the plane of polarized light even when they are dissolved in solution. The asymmetry that causes the rotation persists after the crystal is broken apart into its component molecules. Pasteur concluded that the asymmetry is a property of the molecules themselves. In 1860, he wrote: “Are the atoms . . . grouped on the spiral of a right handed helix, or are they situated at the apices of a regular tetrahedron, or are they disposed according to some other asymmetric arrangement? We do not know. But of this there is no doubt, that the atoms possess an asymmetric arrangement like that of an object and its mirror image.”24

The answer to Pasteur’s question was provided by Jacobus van’t Hoff in 1874. Van’t Hoff realized that the mirror-image structures, and the resulting optical activity, could be explained if the four bonds of a carbon atom pointed to the vertices of a tetrahedron (a triangular pyramid). In this arrangement, whenever four different atoms (or groups of atoms) are attached to the carbon in the center of the tetrahedron, it will be possible to create two isomers that are mirror images. The tetrahedral structure of carbon bonds explained Pasteur’s results and the other known instances of optical activity in organic compounds (e.g., in lactic acid or glyceraldehyde).

Furthermore, van’t Hoff’s symmetrical 3-D arrangement of carbon bonds provided the solution to another problem. The assumption of a planar structure with the four carbon bonds arranged in a square had led to the prediction of non-existent isomers. For example, consider methylene dichloride (dichloromethane), which has the formula CH2Cl2. The planar structure predicts two isomers: the chlorine atoms can occupy adjacent corners of the square or diagonally opposite corners. In the tetrahedral structure, however, all such arrangements are equivalent—which was consistent with the laboratory analysis that could isolate only one form of the compound.
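Van’t Hoff’s geometric argument can likewise be checked by brute force. The sketch below is an illustrative modern check (the vertex numbering and substituent labels are chosen only for convenience): it counts the distinct ways of attaching substituents to the four bonds of a central carbon, first with the bonds pointing to the corners of a square (where flipping the flat molecule over is allowed) and then with the bonds pointing to the vertices of a tetrahedron (where only rigid rotations are allowed, since turning the molecule into its mirror image would require breaking bonds). It finds two isomers of CH2Cl2 on the square, one on the tetrahedron, and two mirror-image forms when four different substituents are attached.

```python
from itertools import permutations

def parity(p):
    # Sign of a permutation: +1 for even, -1 for odd.
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

# The 12 rotations of a regular tetrahedron permute its vertices as the
# even permutations of {0, 1, 2, 3}.
TETRAHEDRON_ROTATIONS = [p for p in permutations(range(4)) if parity(p) == 1]

# The 8 symmetries of a square with vertices 0-1-2-3 in cyclic order:
# 4 rotations plus 4 flips (a flat molecule can be turned over).
SQUARE_SYMMETRIES = []
for r in range(4):
    rotation = tuple((i + r) % 4 for i in range(4))
    SQUARE_SYMMETRIES.append(rotation)
    SQUARE_SYMMETRIES.append(rotation[::-1])

def count_isomers(substituents, symmetries):
    # Two placements are the same isomer if some symmetry carries one into the other.
    classes = set()
    for placement in set(permutations(substituents)):
        smallest = min(tuple(placement[g[i]] for i in range(4)) for g in symmetries)
        classes.add(smallest)
    return len(classes)

ch2cl2 = ("H", "H", "Cl", "Cl")
print(count_isomers(ch2cl2, SQUARE_SYMMETRIES))       # 2: chlorines adjacent or opposite
print(count_isomers(ch2cl2, TETRAHEDRON_ROTATIONS))   # 1: all arrangements equivalent
print(count_isomers(("H", "F", "Cl", "Br"), TETRAHEDRON_ROTATIONS))  # 2: mirror images
```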

Two decades earlier, the goal of understanding the properties of compounds in terms of the spatial arrangement of atoms in the molecules had seemed unattainable. It had now been attained. The atomic theory had integrated the science of chemistry and demonstrated its explanatory power in dramatic fashion. In regard to van’t Hoff’s contribution, one historian writes: “By 1874 most chemists had already accepted the structure theory, and now here was definitive proof. To all intents and purposes the great debate about whether atoms and molecules truly existed, the roots of which could be traced back to Dalton, and before him to the ancients, was resolved.”25

This is true; by the mid-1870s, the evidence for the atomic theory was overwhelming. Reasonable doubts were no longer possible.

The Method of Proof

Nineteenth-century scientists discovered the fundamental nature of matter in the same way that 17th-century scientists discovered the fundamental laws of motion. Throughout this article, we have seen the same method at work—objective concepts functioning as “green-lights” to induction,26 the role of experiment and mathematics in identifying causal laws, and the grand-scale integration culminating in proof. Let us now analyze the steps that led to this magnificent achievement.

Lavoisier understood that language is more than a necessary precondition for expressing our thoughts about the world; our concepts entail a commitment to generalize about their referents, and thus they both direct and make possible our search for laws. As he put it, if chemists accepted his proposed language, “they must irresistibly follow the course marked out by it.”

Lavoisier’s language guided chemists down the path that eventually led to the atomic theory. His system mandated that certain questions be answered in order to properly conceptualize a material: Is it a substance or a mixture? If it is a substance, can it be decomposed into different elements? In cases where different substances are made of the same elements, how are the substances distinguished? These questions put chemists on a course to discover the laws of constant composition and multiple proportions, and eventually to identify the relative weights of atoms and the molecular formulas of compounds.

As was the case in physics, the path to modern chemistry first needed to be cleared by the elimination of invalid concepts. For example, Lavoisier led the battle against the concept “phlogiston,” which referred to the alleged element associated with fire. It was thought that a burning candle releases phlogiston into the surrounding air. If the candle is placed in a closed container, the flame is extinguished when the air becomes saturated and can absorb no more phlogiston. The remaining gas that would not support combustion—mostly nitrogen—was identified as normal air plus phlogiston. When the part of air that did support combustion—oxygen—was isolated, it was identified as normal air minus phlogiston. A similar error was made when water was first decomposed into hydrogen and oxygen. Hydrogen was identified as water plus phlogiston, and oxygen was identified as water minus phlogiston.

By quantitative analysis, however, it was discovered that a substance undergoing combustion gains weight while the surrounding air loses weight. Those who followed the course marked out by the concept “phlogiston” were then compelled to ascribe to this element a negative mass. If chemists had continued down this path and conceded the possibility of elements with negative mass, they would have given up the principle that enabled them to distinguish elements from compounds—and progress in chemistry would have come to an abrupt halt. Fortunately, most chemists dismissed the idea of “negative mass” as arbitrary. They followed Lavoisier when he rejected phlogiston and identified combustion as the process of a substance combining with oxygen.

Lavoisier’s chemical language provided part of the foundation, but we have seen that the atomic theory required the formation of many other concepts. Nearly every law that contributed to the proof of the theory made use of a crucial new concept: Avogadro’s hypothesis and the concept “molecule”; the Dulong-Petit law and the concepts “specific heat” and “atomic weight”; Faraday’s law of electrolysis and the concept “ion”; Waterston’s gas law and the concept “energy” (integrating motion and heat); Maxwell’s laws of gaseous transport processes and the concept “mean free path”; and Mendeleev’s periodic law and the concept “valence.” In each case, the facts gave rise to the need for a new concept, and then that concept made it possible to grasp a causal relationship that had been inaccessible without it.

Given a valid conceptual framework, the causal relationships were discovered by means of experiment. In some cases, they were induced directly from the experimental data (e.g., Charles’s law of gases, the law of constant composition, the law of multiple proportions, and the law of combining gas volumes). These laws were reached and validated independent of the atomic theory. In other cases, as illustrated by the kinetic theory of gases, laws were mathematically deduced from premises about atoms that were themselves based on experimental data; these laws were then confirmed by further experiments.

All of the experiments discussed in this article contributed to the proof of the atomic theory. Nevertheless, some of them stand out as crucial. The best example is Maxwell’s experiment showing that gaseous resistance is independent of density. The atomic theory offered a simple explanation for this surprising result, an explanation relying on the fact that gases consist of widely separated particles moving freely between collisions. In contrast, a continuum theory of matter would seem to imply that resistance is proportional to density (which is true in liquids). Thus this experiment isolated a case in which the atomic theory made a prediction that differentiated it from any continuum theory.
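A schematic modern version of that explanation makes the cancellation explicit (this is the textbook form of the argument, not Maxwell’s own notation). In an elementary kinetic model, a gas’s resistance to flow is proportional to the number density of molecules, their average speed, and the mean free path; but the mean free path is inversely proportional to the number density, so the density drops out:

```latex
% Elementary kinetic estimate of gaseous resistance to flow (viscosity).
% n: molecules per unit volume, m: molecular mass, \bar{v}: average speed,
% \lambda: mean free path, \sigma: collision cross-section.
\[
  \eta \;\approx\; \tfrac{1}{3}\, n\, m\, \bar{v}\, \lambda,
  \qquad
  \lambda \;\approx\; \frac{1}{\sqrt{2}\, n\, \sigma}
  \quad\Longrightarrow\quad
  \eta \;\approx\; \frac{m\, \bar{v}}{3\sqrt{2}\, \sigma},
\]
% which contains no n: compressing the gas puts more molecules to work,
% but shortens each molecule's free path by the same factor.
```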

A “crucial” experiment or observation confirms a prediction of one theory while contradicting the alternative theories. A good example from astronomy is provided by Galileo’s telescopic observations of Venus, which showed a full range of phases as the planet moved around the Sun. These observations were predicted by the heliocentric theory and they flatly contradicted Ptolemy’s theory. Another example can be found in optics: Newton’s recombination of the color spectrum to produce white light, which was predicted by his theory and contradicts the alternative theories proposed by Descartes and Robert Hooke.

Many philosophers of science deny that any particular experiment can be regarded as “crucial.” This rejection of crucial experiments can be traced back to Pierre Duhem and Willard Quine, and it is commonly referred to as the Duhem-Quine thesis. They argued that no single experiment can play a decisive role in validating or refuting a broad theory. In a trivial sense, this is true: An experiment—considered in isolation—cannot perform such a function. However, the results of a single experiment—when their implications are not evaded by arbitrary hypotheses and when they are judged within the total context of knowledge—can and regularly do play such a decisive role. The evaluation of the new data as crucial is dependent on the context.

Experiment provides entrance into mathematics by means of numerical measurement. At the beginning of an investigation, non-quantitative experiments may be performed in order to determine whether an effect occurs under specific circumstances. Such experiments characterize the early history of chemistry, electricity, and heat. As the investigation advances, however, the experiments must involve making numerical measurements. The discoveries described in this article were entirely dependent on measurements made with precision mass balances, thermometers, barometers, ammeters, apparatus for determining vapor densities, and so forth. The causal laws integrated by the atomic theory are equations that express relations among the numerical data.

The progress made in chemistry would not have been possible without mathematics. Just as an ancient shepherd could use his observations of the sky as a rough clock and compass, an early chemist could use his knowledge of reactions for the practical purpose of purifying metals or synthesizing dyes. In both cases, however, this pre-mathematical stage consisted of a very long list of separate items of knowledge—with no way to grasp the causal relationships that connect them together. Both of these fields became sciences only when numerical measurements were unified by laws expressed in mathematical form.

The same point is seen in the study of heat. The qualitative experiments of Rumford and Davy provided strong evidence that heat was a form of internal motion, yet the idea led nowhere for the next forty-five years. In contrast, when Joule measured the quantitative relationship between heat and motion, he opened the floodgates to further discoveries. Temperature was identified with the average kinetic energy of molecules, and within twenty years the kinetic theory of gases had explained an enormous range of data.

In physical science, qualitative truths are mere starting points; they can suggest a course of investigation, but that investigation is successful only when it arrives at a causal relationship among quantities. Then the power of mathematics is unleashed; connections can be identified between facts that had previously seemed entirely unrelated (e.g., elastic collisions in mechanics and heat flow between bodies, or the heat capacity of an element and the mass of its atoms, or the volumes of reacting gases and the composition of molecules, or heat conduction in gases and the size of molecules). Mathematics is the physical scientist’s means of integrating his knowledge.

Let us now consider the final payoff of the inductive method: the proof of a physical theory. A theory is a set of broad principles on which an entire subject is based; as such, it subsumes and explains the particular facts and narrower laws. Because of the abstract nature of a theory, it can be difficult for a scientist to decide when the evidence has culminated in proof. Yet this is the ultimate question of induction and therefore of the greatest practical importance. If a scientist’s standard of proof is too low, he may easily accept a false theory and pursue research that leads nowhere (the theories based on phlogiston and caloric are good examples). But the failure to accept a proven theory in the name of allegedly higher standards is equally disastrous; because further progress in the field is dependent on the theory, the scientist’s research will stagnate—and if he persists in pursuing such a course, he will find himself actively engaged in a war against the facts (as illustrated by many opponents of the atomic theory).

We have seen the evidence for atoms accumulate over seven decades. Of course, this progress did not stop in the mid-1870s; every decade thereafter, more discoveries added to the explanatory power of the theory. But these later discoveries did not contribute to the proof, which was already complete. Let us now review the evidence and identify the criteria of proof.

In the first third of the 19th century, four laws were discovered that supported the atomic nature of chemical elements: (1) elements combine in discrete units of mass to form compounds; (2) gaseous reactions involve integer units of volume; (3) a solid element has a heat capacity that is proportional to the number of discrete mass units (or atoms); (4) the amount of electricity generated by a chemical reaction is proportional to the number of discrete mass units that react at the electrodes. Furthermore, the discovery of allotropes and isomers provided additional evidence for the atomic theory, which had the potential to explain them as different spatial arrangements of the same atoms.
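The third of these laws can be concretized with a quick calculation using modern values rather than the original 1819 measurements: for a wide range of solid elements, the specific heat multiplied by the atomic weight, that is, the heat capacity per fixed number of atoms, comes out to roughly the same constant.

```latex
% Illustrative modern values (not Dulong and Petit's own data).
% c: specific heat in J g^{-1} K^{-1}; A: atomic weight in g mol^{-1}.
\[
  c \times A \;\approx\; \text{constant} \;\approx\; 25\ \mathrm{J\,mol^{-1}\,K^{-1}}
\]
\[
  \mathrm{Cu}:\; 0.39 \times 63.5 \approx 25,\qquad
  \mathrm{Fe}:\; 0.45 \times 55.8 \approx 25,\qquad
  \mathrm{Pb}:\; 0.13 \times 207 \approx 27.
\]
```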

These laws subsume a large amount of data, ranging across the fields of chemical reactions, heat, and electricity. By attributing to atoms or molecules the appropriate masses, gaseous volumes, quantities of heat, and electric charges, the atomic theory could offer explanations of all four laws. Why is this not enough to prove the theory?

The problem was that scientists had no independent reasons for assigning to atoms and molecules the properties that were required to explain these laws. Consider Avogadro’s hypothesis, which was introduced to explain the law of combining volumes. Initially, the hypothesis could not be connected to other knowledge about gases and, as we have seen, it clashed with widely held (but false) views about chemical bonding and the cause of gaseous pressure. A similar point applies to the atomic explanation offered by Dulong and Petit for their law of heat capacities. They could give no reason why each atom should absorb the same amount of heat, and they were unable to connect their hypothesis to other knowledge about heat and temperature. Thus, at this stage, the ideas of Avogadro and Dulong-Petit were promising but doubtful—which also meant that atomic weights and molecular formulas remained clouded in ambiguity.

We have seen how these doubts were eventually overcome by discoveries concerning the nature of heat and gases. Experiments studying the conversion of motion into heat provided strong evidence that temperature is a measure of internal kinetic energy. This idea integrated with the Dulong-Petit law; it was reasonable to expect atoms in thermal equilibrium to have the same average kinetic energy. Furthermore, when the idea was combined with a simple molecular model of gases, it could be used to derive the basic law relating the pressure, volume, and temperature of a gas. Avogadro’s hypothesis emerged from this analysis as a consequence—thus connecting the hypothesis to Charles’s law of gases and to Newton’s laws of motion.
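In condensed modern notation, the chain of reasoning described in this paragraph runs as follows (a schematic sketch, not the historical derivations themselves):

```latex
% p: pressure, V: volume, N: number of molecules, m: molecular mass,
% <v^2>: mean squared speed, T: absolute temperature, k: a constant.
\[
  pV \;=\; \tfrac{1}{3}\, N\, m\, \langle v^{2} \rangle
       \;=\; \tfrac{2}{3}\, N\, \bigl\langle \tfrac{1}{2} m v^{2} \bigr\rangle .
\]
% If temperature measures the average kinetic energy per molecule,
% so that <(1/2) m v^2> is proportional to T, then pV is proportional to NT:
\[
  pV \;=\; N k T .
\]
% Hence two gases at equal pressure, volume, and temperature contain equal
% numbers of molecules, which is Avogadro's hypothesis.
```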

This was nearly enough to transform Avogadro’s idea from a hypothesis into a law. However, reasonable doubts could still be raised about the simple “billiard-ball” model of gas molecules that had been assumed in the derivation of Charles’s law. The theory was not entirely convincing until it could explain other known properties of gases. Thus Maxwell’s work was crucial; when he developed and extended the model to explain gaseous transport processes (i.e., diffusion, heat conduction, and resistance), the range of data integrated by the kinetic theory of gases left no legitimate grounds for lingering skepticism.

Given the strength of this evidence, chemists were compelled to accept Avogadro’s idea along with its full implications. Was the atomic theory then proven? Not quite, for one reason. Chemistry had been in chaos for decades, and it took some time for chemists to use their new knowledge about atoms to integrate and explain the facts of their science. Once they had identified the correct atomic weights and valences, the development of the periodic table and the triumph of molecular structure theory completed the proof.

At this stage, the evidence for the atomic theory satisfied three criteria that are essential to the proof of any broad theory.

First, every concept and every generalization contained within the theory must be derived from observations by a valid method. A proven theory can have no arbitrary elements—no concepts such as “phlogiston” or “absolute space”—and no causal relationships that are unsupported by observational data. It is often claimed that a good theory correctly predicts some observations while contradicting none. This commonly stated criterion is necessary, of course, but very far from sufficient; it can be satisfied by theories that are false or even arbitrary. We have seen that the atomic theory has a different relationship to the data: Every law subsumed by the theory was rigorously induced from the results of experiments.

Second, a proven theory must form an integrated whole. It cannot be a conglomeration of independent parts that are freely adjusted to fit the data (as was the case in Ptolemaic astronomy). Rather, the various parts of the theory are interconnected and mutually reinforcing, so that the denial of any part leads to contradictions throughout the whole. A theory must have this characteristic in order to be necessitated by the evidence. Otherwise, one could not reach a conclusive evaluation about the relationship between the evidence and the theory; at best, one could evaluate only the relationship between the evidence and parts of the theory. On the other hand, when a theory is an integrated whole, then the evidence for any part is evidence for the whole.

By the mid-1870s, the atomic theory satisfied this criterion. Properties of atoms that had been hypothesized to explain one set of experimental facts were found to be indispensable in the explanation of other facts and laws. For instance, Avogadro’s hypothesis began as an explanation for a law governing the chemical reactions of gases—and then it became an essential part of a theory that derived the physical properties of gases from Newtonian mechanics. One cannot deny Avogadro’s hypothesis without contradicting the explanation for a half dozen laws and severing the connection between atoms and the laws of motion. Similarly, the atomic hypothesis of Dulong and Petit does not merely explain the heat capacities of solids; it became part of a theory relating heat to atomic motion in all materials. These ideas then led inexorably to a specific set of atomic weights and valences, which were shown to explain the relationships among the elements, the results of electrolysis experiments, and the properties of chemical compounds.

In order to further concretize this point, let us consider the consequences of making one small change to the theory. For instance, imagine that scientists refused to correct Faraday’s proposed molecular formula for benzene, C2H. Vapor density measurements of benzene gas would then contradict Avogadro’s hypothesis, which would have to be rejected along with the kinetic theory of gases that entails it. Furthermore, because this incorrect formula implies the wrong valence for carbon, chemists would also be compelled to reject Mendeleev’s periodic law and the theory of molecular structures in organic chemistry. The fallout from this one change would leave the entire atomic theory in ruins. When a theory is a whole, the parts cannot be freely adjusted; they are constrained by their relations to the rest of the theory and the facts on which it is based.
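A rough arithmetic check shows how quickly the contradiction would surface (the check uses modern atomic weights, H = 1 and C = 12, rather than the equivalent weights of Faraday’s day). Benzene vapor is about 39 times as dense as hydrogen gas; since hydrogen molecules are H2, Avogadro’s hypothesis fixes benzene’s molecular weight at about 78, which matches C6H6 and cannot be reconciled with any molecule written C2H:

```latex
% Vapor-density check with modern atomic weights (H = 1, C = 12).
\[
  M_{\text{benzene}} \;\approx\; 39 \times M(\mathrm{H_2}) \;=\; 39 \times 2 \;=\; 78,
  \qquad
  M(\mathrm{C_6H_6}) = 6(12) + 6(1) = 78,
  \qquad
  M(\mathrm{C_2H}) = 2(12) + 1 = 25 .
\]
```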

The third criterion pertains to the range of data integrated by the theory. The scope of a proven theory must be necessitated by the data from which it is induced—the theory must be no broader or narrower than required to integrate the data. This criterion is not independent of the first two; it simply makes explicit a crucial implication. If the theory is too broad, then one has made a leap beyond the evidence and thereby violated the first criterion (in such a case, the theory may be a legitimate hypothesis, but it cannot be regarded as proven). If the theory is too narrow, it will not achieve the integration described by the second criterion.

For instance, when Johannes Kepler integrated Tycho Brahe’s observations of the planets, he arrived at laws that were restricted to planetary motion. In order for Newton to reach universal laws, he had to include data on the movements of terrestrial bodies, moons, oceans, and comets. Similarly, in the early 19th century, when the atomic theory was supported only by data from chemistry, conclusions could be reached only about the discrete nature of chemical reactions. At that stage, it was a hypothesis that the theory could be generalized to a fundamental theory of matter.

In order to reach that generalization, scientists needed a range of data that compelled them to regard atoms as basic units of matter, not merely as the units of chemical reactions. The breakthrough occurred when physicists explained the nature of heat and gases in terms of atoms of a certain size moving in accordance with Newton’s laws. The laws of motion apply to matter qua matter, not to chemical elements qua chemical elements. When these physical properties of materials could be explained by atoms in motion, the atomic theory became a fundamental theory of matter that brought together into a whole the laws governing chemical reactions, motion, heat, electrical current, and the various properties of gases.

The three criteria describe the relationship between a proven theory and the evidence supporting it. When every aspect of the theory is induced from the data (not invented from imagination), and the theory forms a cognitive whole (not an independent collection of laws), and the scope of the theory is objectively derived from the range of data—then the theory truly is an integration (criterion 2) of the data (criterion 1), no more and no less (criterion 3).

A valid concept must satisfy similar criteria; it is derived from observations (not a product of fantasy), it is an integration of similar concretes (not a mere collection), and its definition must not be too broad or too narrow. There are many differences, of course, between a concept and a scientific theory. Yet the criteria of validation are similar, because these are broad principles that identify how a conceptual faculty properly forms a cognitive whole.

These criteria cannot be quantified. It would be ridiculous to claim, for example, that 514 pieces of evidence are required in order to prove a fundamental theory. Yet the criteria identify within narrow limits when the evidence culminates in proof. The atomic theory was obviously not proven prior to Maxwell’s climactic work on the kinetic theory of gases, published in 1866; whereas the necessary integration of physical and chemical data had clearly been achieved by the time Mendeleev’s prediction of gallium was confirmed in 1875. There may be room for rational debate about whether the atomic theory is properly evaluated as proven in 1870. But little is at stake in such a debate.

The claim that the atomic theory was proven by 1875 does not mean that it left no unanswered questions. On the contrary, the theory gave rise to an entire realm of very important unanswered questions. For example, measurements of heat capacity seemed to imply that diatomic molecules move rapidly and rotate at normal temperatures, but they do not vibrate as expected: Why not? There were many questions about the nature of chemical bonds, for example: Why do some atoms seem to have a variable valence, and how do these atoms differ from those with a constant valence? There were questions about the interaction of atoms and light: Why does each kind of atom emit and absorb light at specific characteristic wavelengths? Furthermore, there was evidence supporting the view that atoms contain electrically charged particles: If so, how could the charge be distributed in a way that is consistent with the stability of atoms? Such questions, however, did not cast doubt on the atomic theory; rather, they presupposed it. The questions pertained to the new frontier that the theory made possible: the investigation of atomic structure. The answers were discovered in the early decades of the 20th century.

The sub-microscopic world of atoms is accessible only by a very long and complex chain of reasoning. For more than two millennia, the idea of atoms was nothing more than a hope—a hope that someday, somehow, man’s mind would be able to reach far beyond what is given by his senses and grasp the fundamental nature of matter. This could not be done by means of a reckless leap into a cognitive void; scientists had to discover the method of proceeding step-by-step from observations to knowledge of atoms, while staying on solid ground. No shortcut of this process is possible or desirable; every step of the journey is its own reward, contributing a valuable piece of the knowledge that is condensed and integrated by the final theory.

Endnotes

1 Thomas L. Hankins, Science and the Enlightenment (New York: Cambridge University Press, 1985), p. 112.


2 Ibid., p. 109.

3 The World of the Atom, vol. 1, edited by Henry Boorse and Lloyd Motz (New York: Basic Books, Inc., 1966), p. 169.

4 J. R. Partington, A Short History of Chemistry (New York: Dover Publications, Inc., 1989), p. 204.

5 World of the Atom, p. 321.

6 Ibid., p. 327.

7 The Beginnings of Modern Science, edited by Holmes Boynton (Roslyn, NY: Walter J. Black, Inc., 1948), p. 198.

8 Humphry Davy, “An Essay on Heat, Light, and the Combinations of Light” in Contributions to Physical and Medical Knowledge, edited by T. Beddoes (Bristol, 1799), reprinted in Davy’s Collected Works, vol. 2 (London, 1839), p. 9.

9 Later researchers such as Clausius and Maxwell realized that Waterston’s model was oversimplified. Heat absorbed by a polyatomic gas does not merely increase the speed of the molecules; it can also increase their rate of rotation and vibration. Fortunately, the basic law of gases depends only on the average speed, and, therefore, Waterston’s model was adequate for his purpose. In order to understand the heat capacities of gases, the other motions must be taken into account.

10 Stephen G. Brush, The Kind of Motion We Call Heat (Amsterdam: Elsevier Science B.V., 1986), p. 146.

11 The Scientific Papers of James Clerk Maxwell, vol. 2, edited by W. A. Niven (New York: Dover Publications, 1965), pp. 344–45.

12 Brush, Kind of Motion We Call Heat, p. 190.

13 Ibid., p. 191.

14 David Harriman, “The 19th-Century Atomic War,” The Objective Standard, vol. 1, no. 2 (Summer 2006), p. 83.

15 World of the Atom, p. 278.

16 Frankland originated the concept, but used the term “atomicity” instead of “valence.” The word “valence” came into use in the late 1860s.

17 W. G. Palmer, A History of the Concept of Valency to 1930 (London: Cambridge University Press, 1965), p. 34.

18 Ibid., p. 14.

19 Ibid., p. 27.

20 Ibid., p. 76.

21 Cecil J. Schneer, Mind and Matter (New York: Grove Press, Inc., 1969), p. 178.

22 Alexander Butlerov, “On the Chemical Structure of Substances,” reprinted in Journal of Chemical Education, 48 (1971), pp. 289–91.

23 Palmer, History of the Concept of Valency, p. 62.

24 John Hudson, The History of Chemistry (New York: Chapman & Hall, 1992), p. 148.

25 John Buckingham, Chasing the Molecule (England: Sutton Publishing, 2004), p. 206.

26 For more on the subject of inductive “green-lights,” see my article “Induction and Experimental Method,” in the Spring 2007 issue of The Objective Standard.

