Dark Matter Mysteries

There is more to the Universe than meets the eye—way more. Over the past quarter century, it has become clear that all the points of light in the night sky, the stars, the Milky Way, the nebulae, all the distant galaxies, when added up with the nonluminous dust, constitute only a small fraction of the total energy density of the Universe. In fact, “normal” matter, like the stuff of which we are made—star dust—contributes only 4% to everything that is. The rest is something else, something different, something that doesn’t show up in the most sophisticated laboratory experiments, not even the Large Hadron Collider [1]. It is unmeasurable on terrestrial scales, and even at the scale of our furthest probe—the Voyager 1 spacecraft that left our solar system several years ago—there have been no indications of deviations from Newton’s law of gravity. To the highest precision we can achieve, it is invisible and non-interacting on any scale smaller than our stellar neighborhood. Perhaps it can never be detected in any direct sense. If so, then how do we know it is there? The answer comes from galactic trajectories. The motions in and of galaxies have been, and continue to be, the principal laboratory for the investigation of cosmic questions about the dark matter of the universe.

Today, the nature of Dark Matter is one of the greatest mysteries in physics, and the search for direct detection of Dark Matter is one of physics’ greatest pursuits.

 

Island Universes

The nature of the Milky Way was a mystery through most of human history. To the ancient Greeks it was the milky circle (γαλαξίας κύκλος, galaxias kyklos) and to the Romans it was literally the milky way (via lactea). Aristotle, in his Meteorologica, briefly suggested that the Milky Way might be composed of a large number of distant stars, but then rejected that idea in favor of a wisp, exhaled like breath on a cold morning, from the stars. The Milky Way is unmistakable on a clear dark night to anyone who looks up, far away from city lights. It was a constant companion through most of human history, like the constant stars, until electric lights extinguished it from much of the world in the past hundred years. Geoffrey Chaucer, in his Hous of Fame (1380), proclaimed “See yonder, lo, the Galaxyë Which men clepeth the Milky Wey, For hit is whyt.” (See yonder, lo, the galaxy which men call the Milky Way, for it is white.).


Hubble image of one of the galaxies in the Coma Cluster, the cluster of galaxies that Fritz Zwicky used to announce that the universe contained a vast amount of dark matter.

Aristotle was fated, again, to be corrected by Galileo. Using his telescope in 1610, Galileo was the first to resolve a vast field of individual faint stars in the Milky Way. This led Immanuel Kant, in 1755, to propose that the Milky Way Galaxy was a rotating disk of stars held together by Newtonian gravity like the disk of the solar system, but much larger. He went on to suggest that the faint nebulae might be other far distant galaxies, which he called “island universes”. The first direct evidence that nebulae were distant galaxies came in 1917 with the observation of a supernova in the Andromeda Galaxy by Heber Curtis. Based on the brightness of the supernova, he estimated that the Andromeda Galaxy was over a million light years away, but uncertainty in the distance measurement kept the door open for the possibility that it was still part of the Milky Way, and hence the possibility that the Milky Way was the Universe.

The question of the nature of the nebulae hinged on the problem of measuring distances across vast amounts of space. Along a line of sight, there is no yardstick to tell how far away something is, so other methods must be used. Stellar parallax, for instance, can gauge the distance to nearby stars by measuring slight changes in the apparent positions of the stars as the Earth changes its position around the Sun through the year. This effect was used successfully for the first time in 1838 by Friedrich Bessel, and by the year 2000 more than a hundred thousand stars had their distances measured using stellar parallax. Recent advances in satellite observatories have extended the reach of stellar parallax to a distance of about 10,000 light years from the Sun, but this is still only a tenth of the diameter of the Milky Way. To measure distances to the far side of our own galaxy, or beyond, requires something else.
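
In modern units (a standard textbook relation, not something from the post itself), the parallax method reduces to a simple reciprocal:

\[
d\,[\text{parsecs}] = \frac{1}{p\,[\text{arcseconds}]}, \qquad 1\ \text{parsec} \approx 3.26\ \text{light years},
\]

so a star with a measured parallax of 0.1 arcseconds lies at 10 parsecs, about 33 light years, while a parallax of one milliarcsecond corresponds to roughly 3,260 light years.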

Because of Henrietta Leavitt

In 1908 Henrietta Leavitt, working at the Harvard Observatory as one of the famous female “computers”, discovered that stars whose luminosities oscillate with a steady periodicity, stars known as Cepheid variables, have a relationship between the period of oscillation and the average luminosity of the star [2]. By measuring the distance to nearby Cepheid variables using stellar parallax, the absolute brightness of the Cepheid could be calibrated, and Cepheids could then be used as “standard candles”. This meant that by observing the period of oscillation and the brightness of a distant Cepheid, the distance to the star could be calculated. Edwin Hubble (1889 – 1953), working at the Mount Wilson observatory in Pasadena, CA, observed Cepheid variables in several of the brightest nebulae in the night sky. In 1925 he announced his observation of individual Cepheid variables in Andromeda and calculated that Andromeda was more than a million light years away, more than 10 Milky Way diameters (the actual number is about 25 Milky Way diameters). This meant that Andromeda was a separate galaxy and that the Universe was made of more than just our local cluster of stars. Once this door was opened, the known Universe expanded quickly up to a hundred Milky Way diameters as Hubble measured the distances to scores of our neighboring galaxies in the Virgo galaxy cluster. However, it was more than just our knowledge of the universe that was expanding.
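
The standard-candle logic can be sketched in a few lines of Python (a minimal illustration with made-up numbers, not Leavitt's or Hubble's actual data): two Cepheids with the same pulsation period have the same intrinsic luminosity, so the inverse-square law converts a ratio of apparent brightnesses into a ratio of distances.

```python
# Standard-candle sketch (illustrative, assumed numbers only): same period
# implies same luminosity, so flux ratios give distance ratios.
import math

def standard_candle_distance(d_ref_ly, flux_ref, flux_far):
    """Distance to a far Cepheid, given a same-period reference Cepheid
    at known distance d_ref_ly, using the inverse-square law."""
    return d_ref_ly * math.sqrt(flux_ref / flux_far)

# Hypothetical calibration: a nearby Cepheid at 3,000 light years (from parallax)
# appears 100,000 times brighter than a same-period Cepheid in a distant nebula.
print(standard_candle_distance(3.0e3, 1.0e5, 1.0))   # ~9.5e5 light years
```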

Armed with measurements of galactic distances, Hubble was in a unique position to relate those distances to the speeds of the galaxies by combining his distance measurements with spectroscopic observations of the light spectra made by other astronomers. These galaxy emission spectra could be used to measure the Doppler effect on the light emitted by the stars of the galaxy. The Doppler effect, first proposed by Christian Doppler (1803 – 1853) in 1843, causes the wavelength of emitted light to be shifted to the red for objects receding from an observer, and shifted to the blue for objects approaching an observer. The amount of spectral shift is directly proportional to the object’s speed. Doppler’s original proposal was to use this effect to measure the speed of binary stars, which is indeed performed routinely today by astronomers for just this purpose, but in Doppler’s day spectroscopy was not precise enough to accomplish this. However, by the time Hubble was making his measurements, optical spectroscopy had become a precision science, and the Doppler shift of the galaxies could be measured with great accuracy. In 1929 Hubble announced the discovery of a proportional relationship between the distance to the galaxies and their Doppler shift. What he found was that the galaxies [3] are receding from us with speeds proportional to their distance [4]. Hubble himself made no claims at that time about what these data meant from a cosmological point of view, but others quickly noted that this Hubble effect could be explained if the universe were expanding.
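
In modern notation, and using today's value of the Hubble constant rather than Hubble's original (much larger) one, the chain of reasoning is:

\[
z = \frac{\Delta\lambda}{\lambda_0}, \qquad v \approx c\,z \ \ (v \ll c), \qquad v = H_0\, d .
\]

A galaxy with a redshift of z = 0.01, for example, recedes at about 3,000 km/s, which for H0 ≈ 70 km/s/Mpc places it at roughly 43 Mpc, or about 140 million light years.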

Einstein’s Mistake

The state of the universe had been in doubt ever since Heber Curtis observed the supernova in the Andromeda galaxy in 1917. Einstein published a paper that same year in which he sought to resolve a problem that had appeared in the solution to his field equations. It appeared that the universe should either be expanding or contracting. Because the night sky literally was the firmament, it went against the mentality of the times to think of the universe as something intrinsically unstable, so Einstein fixed it with an extra term in his field equations, adding something called the cosmological constant, denoted by the Greek lambda (Λ). This extra term put the universe into a static equilibrium, and Einstein could rest easy with his firm trust in the firmament. However, a few years later, in 1922, the Russian physicist and mathematician Alexander Friedmann (1888 – 1925) published a paper that showed that Einstein’s static equilibrium was actually unstable, meaning that small perturbations away from the current energy density would either grow or shrink. This same result was found independently by the Belgian astronomer Georges Lemaître in 1927, who suggested that not only was the universe expanding, but that it had originated in a singular event (now known as the Big Bang). Einstein was dismissive of Lemaître’s proposal and even quipped “Your calculations are correct, but your physics is atrocious.” [5] But after Hubble published his observation on the red shifts of galaxies in 1929, Lemaître pointed out that the redshifts would be explained by an expanding universe. Although Hubble himself never fully adopted this point of view, Einstein immediately saw it for what it was—a clear and simple explanation for a basic physical phenomenon that he had foolishly overlooked. Einstein retracted his cosmological constant in embarrassment and gave his support to Lemaître’s expanding universe. Nonetheless, Einstein’s physical intuition was never too far from the mark, and the cosmological constant has been resurrected in recent years in the form of Dark Energy. However, something else, both remarkable and disturbing, reared its head in the intervening years—Dark Matter.

Fritz Zwicky: Gadfly Genius

It is difficult to write about important advances in astronomy and astrophysics of the 20th century without tripping over Fritz Zwicky. As the gadfly genius that he was, he had a tendency to shoot close to the mark, or at least some of his many crazy ideas tended to be right. He was also in the right place at the right time, at the Mt. Wilson Observatory near Caltech, with regular access to the world’s largest telescope. Shortly after Hubble proved that the nebulae were other galaxies and used Doppler shifts to measure their speeds, Zwicky (with Walter Baade) began a study of as many galactic speeds and distances as they could. He was able to construct a three-dimensional map of the galaxies in the relatively nearby Coma galaxy cluster, together with their velocities. He then deduced that the galaxies in this isolated cluster were gravitationally bound to each other, performing a whirling dance in each other’s thrall, like stars in globular star clusters in our Milky Way. But there was a serious problem.

Star clusters display average speeds and average gravitational potentials that are nicely balanced, a result predicted from a theorem of mechanics that was named the Virial Theorem by Rudolf Clausius in 1870. The Virial Theorem states that the average kinetic energy of a system of many bodies is directly related to the average potential energy of the system. By applying the Virial Theorem to the galaxies of the Coma cluster, Zwicky found that the dynamics of the galaxies were badly out of balance. The galaxy speeds were far too fast relative to the gravitational potential—so fast, in fact, that the galaxies should have flown off away from each other and not been bound at all. To reconcile this discrepancy of the galactic speeds with the obvious fact that the galaxies were gravitationally bound, Zwicky postulated that there was unobserved matter present in the cluster that supplied the missing gravitational potential. The amount of missing potential was very large, and Zwicky’s calculations predicted that there was 400 times as much invisible matter, which he called “dark matter”, as visible. With his usual flair for the dramatic, Zwicky announced his findings to the world in 1933, but the world shrugged—after all, it was just Zwicky.
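
A back-of-the-envelope version of Zwicky's argument looks like this (a sketch with deliberately rough, assumed numbers, not his actual data): the Virial Theorem turns a measured velocity dispersion and cluster size into a total gravitating mass, which can then be compared with the mass of the visible stars.

```python
# Virial-theorem sketch (assumed, order-of-magnitude numbers): 2<T> = -<U>
# implies a total mass of roughly M ~ sigma^2 * R / G for a bound cluster.
G     = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # kg
MPC   = 3.086e22           # m

sigma  = 1.0e6             # velocity dispersion ~1000 km/s (assumed)
radius = 1.0 * MPC         # cluster radius ~1 Mpc (assumed)

M_virial = sigma**2 * radius / G
print(f"virial mass   ~ {M_virial / M_SUN:.1e} solar masses")

# Luminous mass: ~1000 galaxies with ~1e9 solar masses of visible stars each
# (a deliberately crude, assumed figure; Zwicky's own bookkeeping differed,
# which is why his ratio came out near 400).
M_luminous = 1000 * 1e9 * M_SUN
print(f"luminous mass ~ {M_luminous / M_SUN:.1e} solar masses")
print(f"dark-to-luminous ratio ~ {M_virial / M_luminous:.0f}")
```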

Nonetheless, Zwicky’s and Baade’s observations of the structure of the Coma cluster, and the calculations using the Virial Theorem, were verified by other astronomers. Something was clearly happening in the Coma cluster, but other scientists and astronomers did not have the courage or vision to make the bold assessment that Zwicky had. The problem of the Coma cluster, and a growing number of additional galaxy clusters that have been studied during the succeeding years, was to remain a thorn in the side of gravitational theory through half a century, and indeed remains a thorn to the present day. It is an important clue to a big question about the nature of gravity, which is arguably the least understood of the four forces of nature.

Vera Rubin: Galaxy Rotation Curves

Galactic clusters are among the largest coherent structures in the observable universe, and there are many questions about their origin and dynamics. Smaller gravitationally bound structures that can be handled more easily are individual galaxies themselves. If something important was missing in the dynamics of galactic clusters, perhaps the dynamics of the stars in individual galaxies could help shed light on the problem. In the late 1960s and early 1970s Vera Rubin at the Carnegie Institution of Washington used newly developed spectrographs to study the speeds of stars in individual galaxies. From simple Newtonian dynamics it is well understood that the speed of stars as a function of distance from the galactic center should increase with increasing distance up to the average radius of the galaxy, and then should decrease at larger distances. This trend in speed as a function of radius is called a rotation curve. As Rubin constructed the rotation curves for many galaxies, the increase of speed with increasing radius at small radii emerged as a clear trend, but the stars farther out in the galaxies were all moving far too fast. In fact, they were moving so fast that they exceeded the escape velocity and should have flown off into space long ago. This disturbing pattern was repeated consistently in one rotation curve after another.
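
The discrepancy is easy to see with a short calculation (a minimal sketch with assumed round numbers, not Rubin's data). For a circular orbit, Newton gives v(r) = sqrt(G·M(<r)/r), so once the radius encloses essentially all the visible mass the speed should fall off as 1/sqrt(r); a flat rotation curve instead demands that the enclosed mass keep growing in proportion to r.

```python
# Rotation-curve sketch (assumed numbers): Keplerian falloff from the visible
# mass versus the mass required to hold a flat curve at ~220 km/s.
import math

G     = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30           # kg
KPC   = 3.086e19           # m

M_visible = 1.0e11 * M_SUN # visible mass, assumed concentrated inside ~10 kpc
v_flat    = 220.0e3        # observed flat rotation speed, m/s (typical value)

for r_kpc in [10, 20, 30, 40, 50]:
    r = r_kpc * KPC
    v_kepler = math.sqrt(G * M_visible / r) / 1e3     # km/s from visible mass alone
    M_needed = v_flat**2 * r / G / M_SUN              # mass needed for the flat curve
    print(f"r = {r_kpc:2d} kpc: Keplerian v = {v_kepler:5.1f} km/s, "
          f"flat curve needs M(<r) ~ {M_needed:.1e} M_sun")
```

With these assumed numbers the flat curve at 50 kpc requires five to six times the visible mass, which is the scale of the discrepancy discussed next.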

A simple fix to the problem of the rotation curves is to assume that there is significant mass present in every galaxy that is not observable either as luminous matter or as interstellar dust. In other words, there is unobserved matter, dark matter, in all galaxies that keeps all their stars gravitationally bound. Estimates of the amount of dark matter needed to fix the rotation curves give about five times as much dark matter as observable matter. This is not the same factor of 400 that Zwicky had estimated for the Coma cluster, but it is still a surprisingly large number. In short, 80% of the mass of a galaxy is not normal. It is neither a perturbation nor an artifact, but something fundamental and large. In fact, there is so much dark matter in the Universe that it must have a major effect on the overall curvature of space-time according to Einstein’s field equations. One of the best probes of the large-scale structure of the Universe is the afterglow of the Big Bang, known as the cosmic microwave background (CMB).

The Big Bang

The Big Bang was incredibly hot, but as the Universe expanded, its temperature cooled. About 379,000 years after the Big Bang, the Universe cooled sufficiently that the plasma of electrons and nuclei that filled space at that time condensed primarily into hydrogen. Plasma is charged and hence is opaque to photons.  Hydrogen, on the other hand, is neutral and transparent. Therefore, when the hydrogen condensed, the thermal photons suddenly flew free and have traveled unimpeded ever since, continuing to cool, until today the thermal glow has reached about three degrees above absolute zero. Photons in thermal equilibrium with this low temperature have an average wavelength of a few millimeters corresponding to microwave frequencies, which is why the afterglow of the Big Bang got its CMB name.
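
As a quick check (standard blackbody physics rather than anything specific to this post), Wien's displacement law ties the temperature of the afterglow to its typical wavelength:

\[
\lambda_{\text{peak}} = \frac{b}{T} \approx \frac{2.9\times 10^{-3}\ \text{m K}}{2.7\ \text{K}} \approx 1\ \text{mm},
\]

squarely in the microwave band. At recombination, when the temperature was near 3000 K, the same radiation peaked near one micron, in the near infrared; the roughly thousandfold expansion since then has stretched it by the same factor.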

The CMB is amazingly uniform when viewed from any direction in space, but it is not perfectly uniform. At the level of 0.005 percent, there are variations in the temperature depending on the location on the sky. These fluctuations in background temperature are called the CMB anisotropy, and they play an important role in constraining current models of the Universe. For instance, the average angular size of the fluctuations is related to the overall curvature of the Universe. This is because, in the early Universe, not all regions were in causal contact with each other, owing to the finite age of the Universe and the finite speed of light. This set an original spatial size to thermal discrepancies. As the Universe continued to expand, the size of the regional variations expanded with it, and the sizes observed today would appear larger or smaller, depending on how the universe is curved. Therefore, to measure the energy density of the Universe, and hence to find its curvature, required measurements of the CMB temperature that were accurate to better than a part in 10,000.
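
Roughly speaking (a standard summary added here for clarity, not a derivation from the text), the test compares an angle,

\[
\theta \approx \frac{r_s}{d_A},
\]

where r_s is the size of the largest regions that could equilibrate before recombination (the sound horizon) and d_A is the angular-diameter distance to the CMB. In a flat universe the typical hot and cold patches subtend about one degree; positive curvature would make them appear larger, negative curvature smaller.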

 

Andrew Lange and Paul Richards: The Lambda and the Omega

In graduate school at Berkeley in 1982, my first graduate research assistantship was in the group of Paul Richards, one of the world leaders in observational cosmology. One of his senior graduate students at the time, Andrew Lange, was sharp and charismatic and leading an ambitious project to measure the cosmic background radiation on an experiment borne by a Japanese sounding rocket. My job was to create a set of far-infrared dichroic beamsplitters for the spectrometer. A few days before launch, a technician noticed that the explosive bolts on the rocket nose-cone had expired. When fired, these would open the cone and expose the instrument at high altitude to the CMB. The old bolts were duly replaced with fresh ones. On launch day, the instrument and the sounding rocket worked perfectly, but the explosive bolts failed to fire, and the spectrometer made excellent measurements of the inside of the nose cone all the way up and all the way down until it sank into the Pacific Ocean. I left Paul’s cosmology group for a more promising career in solid state physics under the direction of Eugene Haller and Leo Falicov, but Paul and Andrew went on to great fame with high-altitude balloon-borne experiments that flew at 40,000 feet, above most of the atmosphere, to measure the CMB anisotropy.

By the late nineties, Andrew was established as a professor at Caltech. He was co-leading an experiment called BOOMerANG that flew a high-altitude balloon around Antarctica, while Paul was leading an experiment called MAXIMA that flew a balloon from Palestine, Texas. The two experiments had originally been coordinated together, but operational differences turned the former professor/student team into competitors to see who would be the first to measure the shape of the Universe through the CMB anisotropy.  BOOMerANG flew in 1997 and again in 1998, followed by MAXIMA that flew in 1998 and again in 1999. In early 2000, Andrew and the BOOMerANG team announced that the Universe was flat, confirmed quickly by an announcement by MAXIMA [BoomerMax]. This means that the energy density of the Universe is exactly critical, and there is precisely enough gravity to balance the expansion of the Universe. This parameter is known as Omega (Ω).  What was perhaps more important than this discovery was the announcement by Paul’s MAXIMA team that the amount of “normal” baryonic matter in the Universe made up only about 4% of the critical density. This is a shockingly small number, but agreed with predictions from Big Bang nucleosynthesis. When combined with independent measurements of Dark Energy known as Lambda (Λ), it also meant that about 25% of the energy density of the Universe is made up of Dark Matter—about five times more than ordinary matter. Zwicky’s Dark Matter announcement of 1933, virtually ignored by everyone, had been 75 years ahead of its time [6].

Dark Matter Pursuits

Today, the nature of Dark Matter is one of the greatest mysteries in physics, and the search for direct detection of Dark Matter is one of physics’ greatest pursuits. The indirect evidence for Dark Matter is incontestable—the CMB anisotropy, matter filaments in the early Universe, the speeds of galaxies in bound clusters, rotation curves of stars in galaxies, gravitational lensing—all of these agree and confirm that most of the gravitational mass of the Universe is Dark. But what is it? The leading idea today is that it consists of weakly interacting particles, called cold dark matter (CDM). The dark matter particles pass right through you without ever disturbing a single electron. This is unlike unseen cosmic rays that are also passing through your body at the rate of several per second, leaving ionized trails like bullet holes through your flesh. Dark matter passes undisturbed through the entire Earth. This is not entirely unbelievable, because neutrinos, which are part of “normal” matter, also mostly pass through the Earth without interaction. Admittedly, the physics of neutrinos is not completely understood, but if ordinary matter can interact so weakly, then dark matter is just more extreme and perhaps not so strange. Of course, this makes detection of dark matter a big challenge. If a particle exists that won’t interact with anything, then how would you ever measure it? There are a lot of clever physicists with good ideas for how to do it, but none of the ideas are easy, and none have worked yet.

[1] As of the writing of this chapter, Dark Matter has not been observed in particle form, but only through gravitational effects at large (galactic) scales.

[2] Leavitt, Henrietta S. “1777 Variables in the Magellanic Clouds”. Annals of Harvard College Observatory. LX(IV) (1908) 87-110

[3] Excluding the local group of galaxies that include Andromeda and Triangulum that are gravitationally influenced by the Milky Way.

[4] Hubble, Edwin (1929). “A relation between distance and radial velocity among extra-galactic nebulae”. PNAS 15 (3): 168–173.

[5] Deprit, A. (1984). “Monsignor Georges Lemaître”. In A. Barger (ed). The Big Bang and Georges Lemaître. Reidel. p. 370.

[6] I was amazed to read in Science magazine in 2004 or 2005, in a section called “Nobel Watch”, that Andrew Lange was a candidate for the Nobel Prize for his work on BoomerAng.  Around that same time I invited Paul Richards to Purdue to give our weekly physics colloquium.  There was definitely a buzz going around that the BoomerAng and MAXIMA collaborations were being talked about in Nobel circles.  The next year, the Nobel Prize of 2006 was indeed awarded for work on the Cosmic Microwave Background, but to Mather and Smoot for their earlier work on the COBE satellite.

Wave-Particle Duality and Hamilton’s Physics

Wave-particle duality was one of the greatest early challenges to quantum physics, partially clarified by Bohr’s Principle of Complementarity, but never easily grasped even today.  Yet long before Einstein proposed the indivisible quantum of light (later to be called the photon by the chemist Gilbert Lewis), wave-particle duality was firmly embedded in the foundations of the classical physics of mechanics.

Light led the way to mechanics more than once in the history of physics.

 

Willebrord Snel van Royen

The Dutch physicist Willebrord Snel van Royen in 1621 derived an accurate mathematical description of the refraction of beams of light at a material interface in terms of sine functions, but he did not publish.  Fifteen years later, as Descartes was looking for an example to illustrate his new method of analytic geometry, he discovered the same law, unaware of Snel’s prior work.  In France the law is known as the Law of Descartes.  In the Netherlands (and much of the rest of the world) it is known as Snell’s Law.  Both Snell and Descartes based their work on a corpuscular view of light.  The brilliant Fermat adopted corpuscles when he developed his principle of least time to explain the law of Descartes in 1662.  Yet Fermat was forced to assume that the corpuscles traveled slower in the denser material even though it was generally accepted that light should travel faster in denser media, just as sound did.  Seventy-five years later, Maupertuis continued the tradition when he developed his principle of least action and applied it to light corpuscles traveling faster through denser media, just as Descartes had prescribed.


The wave view of Snell’s Law (on the left). The source resides in the medium with higher speed. As the wave fronts impinge on the interface to a medium with lower speed, the wave fronts in the slower medium flatten out, causing the ray perpendicular to the wave fronts to tilt downwards. The particle view of Snell’s Law (on the right). The momentum of the particle in the second medium is larger than in the first, but the transverse components of the momentum (the x-components) are conserved, causing a tilt downwards of the particle’s direction as it crosses the interface. [i]
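
In modern notation (a standard comparison added for clarity), the two pictures give the same sine law but opposite predictions for the speed of light in the denser medium:

\[
\text{wave:}\quad \frac{\sin\theta_1}{\sin\theta_2} = \frac{v_1}{v_2},
\qquad\qquad
\text{corpuscle:}\quad p_1\sin\theta_1 = p_2\sin\theta_2 \;\Rightarrow\; \frac{\sin\theta_1}{\sin\theta_2} = \frac{v_2}{v_1}.
\]

Because light bends toward the normal in the denser medium (θ2 < θ1), the wave picture requires the light to slow down there, as Fermat assumed, while the corpuscle picture requires it to speed up, as Descartes and Maupertuis assumed.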

Maupertuis’ paper applying the principle of least action to the law of Descartes was a critical juncture in the development of dynamics.  His assumption of faster speeds in denser material was wrong, but he got the right answer because of the way he defined action for light.  Encouraged by the success of his (incorrect) theory, Maupertuis extended the principle of least action to mechanical systems, and this time used the right theory to get the right answers.  Despite Maupertuis’ misguided aspirations to become a physicist of equal stature to Newton, he was no mathematician, and he welcomed (and somewhat appropriated) the contributions of Leonhard Euler on the topic, who established the mathematical foundations for the principle of least action.  This work, in turn, attracted the attention of the Italian mathematician Lagrange, who developed a general new approach (Lagrangian mechanics) to mechanical systems that included the principle of least action as a direct consequence of his equations of motion.  This was the first time that light led the way to classical mechanics.  A hundred years after Maupertuis, it was time again for light to lead the way to a deeper mechanics known as Hamiltonian mechanics.

Young Hamilton

William Rowan Hamilton (1805—1865) was a prodigy as a boy who knew parts of thirteen languages by the time he was thirteen years old. These were Greek, Latin, Hebrew, Syriac, Persian, Arabic, Sanskrit, Hindoostanee, Malay, French, Italian, Spanish, and German. In 1823 he entered Trinity College of Dublin University to study science. In his second and third years, he won the University’s top prizes for Greek and for mathematical physics, a run which may have extended to his fourth year—but he was offered the position of Andrews Professor of Astronomy at Dublin and Royal Astronomer of Ireland—not to be turned down at the early age of 21.


Title of Hamilton’s first paper on his characteristic function as a new method that applied his theory from optics to the theory of mechanics, including Lagrangian mechanics as a special case.

His research into mathematical physics concentrated on the theory of rays of light. Augustin-Jean Fresnel (1788—1827) had recently passed away, leaving behind a wave theory of light that provided a starting point for many effects in optical science, but which lacked broader generality. Hamilton developed a rigorous mathematical framework that could be applied to optical phenomena of the most general nature. This led to his theory of the Characteristic Function, based on principles of the variational calculus of Euler and Lagrange, that predicted the refraction of rays of light, like trajectories, as they passed through different media or across boundaries. In 1832 Hamilton predicted a phenomenon called conical refraction, which would cause a single ray of light entering a biaxial crystal to refract into a luminous cone.

Mathematical physics of that day typically followed experimental science. There were so many observed phenomena in so many fields that demanded explanation, that the general task of the mathematical physicist was to explain phenomena using basic principles followed by mathematical analysis. It was rare for the process to work the other way, for a theorist to predict a phenomenon never before observed. Today we take this as very normal. Einstein’s fame was primed by his prediction of the bending of light by gravity—but only after the observation of the effect by Eddington four years later was Einstein thrust onto the world stage. The same thing happened to Hamilton when his friend Humphrey Lloyd observed conical refraction, just as Hamilton had predicted. After that, Hamilton was revered as one of the most ingenious scientists of his day.

Following the success of conical refraction, Hamilton turned from optics to pursue a striking correspondence he had noted in his Characteristic Function that applied to mechanical trajectories as well as it did to rays of light. In 1834 and 1835 he published two papers, On a General Method in Dynamics (I and II) [ii], in which he reworked the theory of Lagrange by beginning with the principle of varying action, which is now known as Hamilton’s Principle. Hamilton’s principle is related to Maupertuis’ principle of least action, but it was a more rigorous and more general approach for deriving the Euler-Lagrange equations.  Hamilton’s Principal Function allowed the trajectories of particles to be calculated in complicated situations that were challenging for a direct solution by Lagrange’s equations.
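
In modern notation (not Hamilton's original symbols), the principle of varying action and the equations that grew out of it read:

\[
\delta \int_{t_1}^{t_2} L(q, \dot q, t)\, dt = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = 0,
\]

and, after trading the velocities for momenta through the Hamiltonian $H(q,p,t) = \sum_i p_i \dot q_i - L$, the equations of motion take the symmetric first-order form

\[
\dot q_i = \frac{\partial H}{\partial p_i}, \qquad \dot p_i = -\frac{\partial H}{\partial q_i}.
\]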

The importance that these two papers had on the future development of physics would not be clear until 1842 when Carl Gustav Jacob Jacobi helped to interpret them and augment them, turning them into a methodology for solving dynamical problems. Today, the Hamiltonian approach to dynamics is central to all of physics, and thousands of physicists around the world mention his name every day, possibly more often than they mention Einstein’s.

[i] Reprinted from D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford, 2018)

[ii] W. R. Hamilton, “On a general method in dynamics I,” Phil. Trans. Roy. Soc., pp. 247-308, 1834; W. R. Hamilton, “On a general method in dynamics II,” Phil. Trans. Roy. Soc., pp. 95-144, 1835.

Huygens’ Tautochrone

In February of 1662, Pierre de Fermat wrote a paper Synthesis ad refractiones that explained Descartes-Snell’s Law of light refraction by finding the least time it took for light to travel between two points. This famous approach is now known as Fermat’s principle, and it motivated other searches for minimum principles. A few years earlier, in 1656, Christiaan Huygens had invented the pendulum clock [1], and he began a ten-year study of the physics of the pendulum. He was well aware that the pendulum clock does not keep exact time—as the pendulum swings wider, the period of oscillation slows down. He began to search for a path for the pendulum mass that would keep the period the same (and make pendulum clocks more accurate), and he discovered a trajectory along which a mass would arrive at the same position in the same time no matter where it was released on the curve. That such a curve could exist was truly remarkable, and it promised to make highly accurate timepieces.

It made minimization problems a familiar part of physics—they became part of the mindset, leading ultimately to the principle of least action.

This curve is known as a tautochrone (literally: same or equal time) and Huygens provided a geometric proof in his Horologium Oscillatorium sive de motu pendulorum (1673) that the curve was a cycloid. A cycloid is the curve traced by a point on the rim of a circular wheel as the wheel rolls without slipping along a straight line. Huygens invented such a pendulum in which the mass executed a cycloid curve. It was a mass on a flexible yet inelastic string that partially wrapped itself around a solid bumper on each half swing. In principle, whether the pendulum swung gently, or through large displacements, the time would be the same. Unfortunately, friction along the contact of the string with the bumper prevented the pendulum from achieving this goal, and the tautochronic pendulum did not catch on.


Fig. 1 Huygens’ isochronous pendulum.  The time it takes the pendulum bob to follow the cycloid arc is independent of the pendulum’s amplitude, unlike the circular arc, for which the period lengthens for larger excursions.
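
The equal-time property is easy to verify numerically (a minimal sketch under assumed parameters, not Huygens' geometric proof): integrate the motion of a bead constrained to a cycloid of generating-circle radius a and check that the descent time to the lowest point is always π·sqrt(a/g), no matter where the bead is released.

```python
# Numerical check of the tautochrone property (assumed parameters a and g).
# A bead released from rest anywhere on an inverted cycloid
#   x = a*(theta - sin(theta)),  y = a*(1 - cos(theta))   (y measured downward)
# reaches the bottom (theta = pi) after the same time, pi*sqrt(a/g).
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81     # m/s^2
a = 0.25     # generating-circle radius, m (assumed)

def bead(t, state):
    theta, thetadot = state
    # Equation of motion for a bead constrained to the cycloid (from its Lagrangian)
    thetaddot = (g - a * thetadot**2) * np.cos(theta / 2) / (2 * a * np.sin(theta / 2))
    return [thetadot, thetaddot]

def reached_bottom(t, state):
    return state[0] - np.pi            # fires when theta = pi (lowest point)
reached_bottom.terminal = True
reached_bottom.direction = 1

print("predicted descent time:", np.pi * np.sqrt(a / g))
for theta0 in [0.3, 1.0, 2.0, 3.0]:    # different release points along the curve
    sol = solve_ivp(bead, [0, 10], [theta0, 0.0], events=reached_bottom,
                    rtol=1e-10, atol=1e-12)
    print(f"released at theta0 = {theta0:.1f}: descent time = {sol.t_events[0][0]:.6f} s")
```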

The solution of the tautochrone curve of equal time led naturally to a search for the curve of least time, known as the brachistochrone curve for a particle subject to gravity, like a bead sliding on a frictionless wire between two points. Johann Bernoulli published a challenge to find the brachistochrone in 1696 in the scientific journal Acta Eruditorum that had been founded in 1682 by Leibniz in Germany in collaboration with Otto Mencke. Leibniz envisioned the journal to be a place where new ideas in the natural sciences and mathematics could be published and disseminated rapidly, and it included letters and commentaries, acting as a communication hub to help establish a community of scholars across Europe. In reality, it was the continental response to the Philosophical Transactions of the Royal Society in England.  Naturally, the Acta and the Philosophical Transactions would later take partisan sides in the priority dispute between Leibniz and Newton for the development of the calculus.

When Bernoulli published his brachistochrone challenge in the June issue of 1696, it was read immediately by the leading mathematicians of the day, many of whom took up the challenge and replied. The problem was solved and published in the May 1697 issue of the Acta by no fewer than five correspondents, including Johann Bernoulli, Jakob Bernoulli (Johann’s brother), Isaac Newton, Gottfried Leibniz and Ehrenfried Walther von Tschirnhaus. Their approaches varied, but all found the same solution. Johann and Jakob each considered the problem as the path of a light beam in a medium whose speed varied with depth. Just as in the tautochrone, the solved curve was a cycloid. The path of fastest time always started with a vertical path that allowed the fastest acceleration, and the point of greatest depth always was at the point of greatest horizontal speed.
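
The Bernoullis' optical trick can be reconstructed in a few lines of modern notation (a sketch, not their original wording). Treat the falling bead as a light ray moving through a medium whose speed grows with depth as v = sqrt(2gy); Fermat's principle then demands a depth-independent Snell constant (with θ measured from the vertical, so sin θ = 1/sqrt(1 + y'²)):

\[
\frac{\sin\theta}{v} = \frac{1}{\sqrt{2gy}\,\sqrt{1+y'^2}} = \text{const}
\quad\Longrightarrow\quad
y\,(1 + y'^2) = 2a,
\]

and the curve satisfying this differential equation is the cycloid x = a(θ − sin θ), y = a(1 − cos θ) generated by a rolling circle of radius a.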

The brachistochrone problem led to the invention of the variational calculus, with first steps by Jakob Bernoulli and later more rigorous approaches by Euler.  However, its real importance is that it made minimization problems a familiar part of physics—they became part of the mindset, leading ultimately to the principle of least action.

[1] Galileo conceived of a pendulum clock in 1641, and his son Vincenzo started construction, but it was never finished.  Huygens submitted and received a patent in 1657 for a practical escape mechanism on pendulum clocks that is still used today.


Geometry as Motion

Nothing seems as static and as solid as geometry—there is even a subfield of geometry known as “solid geometry”. Geometric objects seem fixed in time and in space. Yet the very first algebraic description of geometry was born out of kinematic constructions of curves as René Descartes undertook the solution of an ancient Greek problem posed by Pappus of Alexandria (c. 290 – c. 350) that had remained unsolved for over a millennium. In the process, Descartes invented coordinate geometry.

Descartes used kinematic language in the process of drawing curves, and he even talked about the speed of the moving point. In this sense, Descartes’ curves are trajectories.

The problem of Pappus relates to the construction of what were known as loci, or what today we call curves or functions. A locus is a smooth collection of points. For instance, the intersection of two fixed lines in a plane is a point. But if you allow one of the lines to move continuously in the plane, the intersection between the moving line and the fixed line sweeps out a continuous succession of points that describe a curve—in this case a new line. The problem posed by Pappus was to find the appropriate curve, or locus, when multiple lines are allowed to move continuously in the plane in such a way that their movements are related by given ratios. It can be shown easily in the case of two lines that the curves that are generated are other lines. As the number of lines increases to three or four lines, the loci become the conic sections: circle, ellipse, parabola and hyperbola. Pappus then asked what one would get if there were five such lines—what type of curves were these? This was the problem that attracted Descartes.

What Descartes did—the step that was so radical that it reinvented geometry—was to fix lines in position rather than merely in length. To us, in the 21st century, such an act appears so obvious as to remove any sense of awe. But by fixing a line in position, choosing a fixed origin on that line to which other points on the line were referenced by their distance from that origin, and referencing other lines by their positions relative to the first line, these distances could be viewed as unknown quantities whose solution could be sought through algebraic means. This was Descartes’ breakthrough that today is called “analytic geometry”—algebra could be used to find geometric properties.

Newton too viewed mathematical curves as living things that changed in time, which was one of the central ideas behind his fluxions—literally curves in flux.

Today, we would call the “locations” of the points their “coordinates”, and Descartes is almost universally credited with the discovery of the Cartesian coordinate system. Cartesian coordinates are the well-known grids of points, defined by the x-axis and the y-axis placed at right angles to each other, at whose intersection is the origin. Each point on the plane is defined by a pair of numbers, usually represented as (x, y). However, there are no grids or orthogonal axes in Descartes’ Géométrie, and there are no pairs of numbers defining locations of points. About the most Cartesian-like element that can be recognized in Descartes’ La Géométrie is the line of reference AB, as in Fig. 1.


Fig. 1 The first figure in Descartes’ Géométrie that defines 3 lines that are placed in position relative to the point marked A, which is the origin. The point C is one point on the locus that is to be found such that it satisfies given relationships to the 3 lines.

 

In his radical new approach to loci, Descartes used kinematic language in the process of drawing the curves, and he even talked about the speed of the moving point. In this sense, Descartes’ curves are trajectories, time-dependent things. Important editions of Descartes’ Discourse were published in two volumes in 1659 and 1661 which were read by Newton as a student at Cambridge. Newton also viewed mathematical curves as living things that changed in time, which was one of the central ideas behind his fluxions—literally curves in flux.

 

Descartes’ Odd Geometry

René Descartes was an unlikely candidate to revolutionize geometry. He began his career as a mercenary soldier, his mind wrapped around things like war and women, which are far from problems of existence and geometry. Descartes’ strange conversion from a life of action to a life of mind occurred on the night of November 10-11 in 1619 while he was bivouacked in an army encampment in Bavaria as a mercenary early in the Thirty Years’ War (1618—1648). On that night, Descartes dreamed that exact rational thought, even mathematical method, could be applied to problems of philosophy. This became his life’s work, and because he was a man of exceptional talent, he succeeded in exceptional ways.

Even Descartes’ footnotes were capable of launching new fields of thought.

Descartes left his mercenary employment and established himself in the free-thinking republic of the Netherlands which was ending the long process of casting off the yoke of Spanish rule towards the end of the Eighty Years’ War (1568—1648). In 1623, he settled in The Hague, a part of the republic that had been free of Spanish troops for many years, and after a brief absence (during which he witnessed the Siege of La Rochelle by Cardinal Richelieu), he returned to the Netherlands in 1628, at the age of 32. He remained in the Netherlands, moving often, taking classes or teaching classes at the Universities of Leiden and Utrecht until 1649, when he was enticed away by Queen Christina of Sweden to colder climes and ultimately to his death.


Descartes’ original curve (AC), constructed on non-orthogonal (oblique) x and y coordinates (La Géométrie, 1637)

Descartes’ years in the Netherlands were the most productive epoch of his life as he created his philosophy and pursued his dream’s promise. He embarked on an ambitious project to codify his rational philosophy to gain a full understanding of natural philosophy. He called this work Treatise on the World, known in short as Le Monde, and it quite naturally adopted Copernicus’ heliocentric view of the solar system, which by that time had become widely accepted in learned circles even before Galileo’s publication in 1632 of his Dialogue Concerning the Two Chief World Systems. However, when Galileo was convicted in 1633 of suspicion of heresy (See Galileo Unbound, Oxford University Press, 2018), Descartes abruptly abandoned his plans to publish Le Monde, despite being in the Netherlands where he was well beyond the reach of the Church. It was, after all, the Dutch publisher Elzevir who published Galileo’s last work on the Two Sciences in 1638 when no Italian publishers would touch it. However, Descartes remained a devout Catholic throughout his life and had no desire to oppose its wishes. Despite this setback, Descartes continued to work on less controversial parts of his project, and in 1637 he published three essays preceded by a short introduction.

The introduction was called the Discourse on the Method (which contained his famous cogito ergo sum), and the three essays were La Dioptrique on optics, Les Météores on atmosphere and weather and finally La Géométrie on geometry in which he solved a problem posed by Pappus of Alexandria in the fourth century AD. Descartes sought to find a fundamental set of proven truths that would serve as the elements one could use in a deductive method to derive higher-level constructs. It was partially as an exercise in deductive reasoning that he sought to solve the classical mathematics problem posed by Pappus. La Géométrie was published as an essay following the much loftier Discourse, so even Descartes’ footnotes were capable of launching new fields of thought. The new field is called analytical geometry, also known as Cartesian or coordinate geometry, in which algebra is applied to geometric problems. Today, coordinates and functions are such natural elements of mathematics and physics, that it is odd to think that they emerged as demonstrations of abstract philosophy.

Bibliography:  R. Descartes, D. E. Smith, and M. L. Latham, The geometry of René Descartes. Chicago: Open Court Pub. Co., 1925.

 

The Oxford Scholars

 

Oxford University, and specifically Merton College, was a site of intense intellectual ferment in the middle of the Medieval Period around the time of Chaucer. A string of natural philosophers, today called the Oxford Scholars or the Oxford Calculators, began to explore early ideas of motion, taking the first bold steps beyond Aristotle. They were the first “physicists” (although that term would not be used until our own time) and laid the foundation upon which Galileo would build the first algebraic law of physics.

It is hard to imagine today what it was like doing mathematical physics in the fourteenth century. Mathematical symbolism did not exist in any form. Nothing stood for anything else, as we routinely use in algebra, and there were no equations, only proportions.

Thomas Bradwardine (1290 – 1349) was the first of the Scholars, arriving at Oxford around 1320. He came from a moderately wealthy family from Sussex on the southern coast of England not far from where the Anglo-Saxon king Harold lost his kingdom and his life at the Battle of Hastings. The life of a scholar was not lucrative, so Bradwardine supported himself mainly through the royal patronage of Edward III, for whom he was chaplain and confessor during Edward’s campaigns in France, eventually becoming the Archbishop of Canterbury, although he died of the plague returning from Avignon before he could take up the position. When not campaigning or playing courtier, Bradwardine found time to develop a broad-ranging program of study that spanned from logic and theology to physics.


Merton College, Oxford (attribution: Andrew Shiva / Wikipedia)

Bradwardine began a reanalysis of an apparent paradox that stemmed from Aristotle’s theory of motion. As anyone with experience pushing a heavy box across a floor knows, the box does not move until sufficient force is applied. Today we say that the applied force must exceed the static force of friction. However, this everyday experience is at odds with Aristotle’s theory that placed motion in inverse proportion to the resistance. In this theory, only an infinite resistance could cause zero motion, yet the box does eventually move if enough force is applied. Bradwardine sought to resolve this paradox. Within the scholastic tradition, Aristotle was always assumed to have understood the truth, even if fourteenth-century scholars could not understand it themselves. Therefore, Bradwardine constructed a mathematical “explanation” that could preserve Aristotle’s theory of proportion while still accounting for the fact that the box does not move.

It is hard to imagine today what it was like doing mathematical physics in the fourteenth century. Mathematical symbolism did not exist in any form. Nothing stood for anything else, as we routinely use in algebra, and there were no equations, only proportions. The introduction of algebra into Europe through Arabic texts was a hundred years away. Not even Euclid or Archimedes had been rediscovered by Bradwardine’s day, so all he had to work with was Pythagorean theory of ratios and positive numbers—even negative numbers did not exist—and the only mathematical tools at his disposal were logic and language. Nonetheless, armed only with these sparse tools, Bradwardine constructed a verbal argument that the proportion of impressed force to resistance must itself be as a proportionality between speeds. As awkward as this sounds in words, it is a first intuitive step towards the concept of an exponential relationship—a power law. In Bradwardine’s rule for motion, the relationships among force, resistance and speed were like compounding interest on a loan. Bradwardine’s rule is not correct physics, because the exponential function is never zero, and because motion does not grow exponentially, but it did introduce the intuitive idea that a physical process could be nonlinear (using our modern terminology), changing from small effects to large effects disproportionate to the change in the cause. Therefore, the importance of Bradwardine was more his approach than his result. He applied mathematical reasoning to a problem of kinetics and set the stage for mathematical science.
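
In the symbols Bradwardine did not have (the reconstruction historians of science commonly use), his rule states that doubling the speed requires squaring the ratio of force to resistance:

\[
\frac{F_2}{R_2} = \left(\frac{F_1}{R_1}\right)^{V_2/V_1},
\qquad\text{equivalently}\qquad
V \propto \log\!\left(\frac{F}{R}\right),
\]

so that the speed drops to zero when the force merely equals the resistance, which is how the rule accounts for the box that refuses to move.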

A few years after Bradwardine had devised his rule of motion, a young mathematician named William Heytesbury (1313—1373) arrived as a fellow of Merton College. In the years that they overlapped at Oxford, one can only speculate what was transmitted from the senior to the junior fellow, but by 1335 Heytesbury had constructed a theory of continuous magnitudes and their continuous changes that included motion as a subset. The concept of the continuum had been a great challenge for Aristotelian theory, leading to many paradoxes or sophisms, like Zeno’s paradox that supposedly proved the impossibility of motion. Heytesbury shrewdly recognized that the problem was the ill-defined idea of instantaneous rate of change.

Heytesbury was just as handicapped as Bradwardine in his lack of mathematical notation, but worse, he was handicapped by the Aristotelian injunction against taking ratios of unlike qualities. According to Aristotle, proportions must only be taken of like qualities, such as one linear length to another, or one mass to another. To take a proportion of a mass to a length was nonsense. Today we call it “division” (more accurately a distribution), and mass divided by length is a linear mass density. Therefore, because speed is distance divided by time, no such ratio was possible in Heytesbury’s day because distance and time are unlike qualities. Heytesbury ingeniously got around this restriction by considering the linear distances covered by two moving objects in equal times. The resulting linear distances were similar qualities and could thereby be related by proportion. The ratio of the distances becomes the ratio of speeds, even though speed itself could not be defined directly. This was a first step, a new tool. Using this conceit, Heytesbury was able to go much farther, to grapple with the problem of nonuniform motion and hence the more general concept of instantaneous speed.

In the language of calculus (developed by Newton and Leibniz 300 years later), instantaneous speed is a ratio of an element of length to an element of time in the limit as the elements vanish uniformly. In the language of Heytesbury (Latin), instantaneous speed is simply the average speed between two neighboring speeds (still working with ratios of distances traversed in equal times). And those neighboring speeds are similarly the averages of their neighbors, until one reaches zero speed on one end and final speed on the other. Heytesbury called this kind of motion difform as opposed to uniform motion.

A special case of difform motion was uniformly difform motion—uniform acceleration. Acceleration was completely outside the grasp of Aristotelian philosophers, even Heytesbury, but he could imagine a speed that changed uniformly. This requires that the extra distance travelled during the succeeding time unit relative to the distance travelled during the current time unit has a fixed value. He then showed, without equations, using only his math-like language, that if a form changes uniformly in time (constant rate of change) then the average value of the form over a fixed time is equal to the average of the initial and final values. This work had a tremendous importance, not only for the history of mathematics, but also for the history of physics, because when the form in question is speed, then this represents the discovery of the mean speed theorem for the case of uniform acceleration. The mean speed theorem is often attributed to Galileo, who proved the theorem as part of his law of fall, and he deserves the attribution because there is an important difference in context. Heytesbury was not a scientist nor even a natural philosopher. He was a logician interested in sophisms that arose in discussions of Aristotle. The real purpose of Heytesbury’s analysis was to show that paradoxes like that of Zeno could be resolved within the Aristotelian system. He certainly was not thinking of falling bodies, whereas Galileo was.
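
Written with the symbols Heytesbury lacked, the mean speed theorem for uniform acceleration from an initial speed v_0 to a final speed v_f over a time t is simply

\[
d = \bar v\, t = \frac{v_0 + v_f}{2}\, t,
\]

which, once free fall is recognized as uniform acceleration (v_0 = 0, v_f = g t), becomes Galileo's law of fall $d = \tfrac{1}{2} g t^2$.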

Not long after Heytesbury demonstrated the mean speed theorem, he was joined at Merton College by yet another young fellow, Richard Swineshead (fl. 1340-1354). Bradwardine was already gone, but his reputation and his memory survived in the person of Heytesbury, and Swineshead became another member in the tradition of the Merton Scholars. He was perhaps the most adept at mathematics of the three, and he published several monumental treatises that mathematically expanded upon both Bradwardine and Heytesbury, systematizing their results and disseminating them in published accounts that spread across scholastic Europe—all still without formulas, symbols or equations. For these works, he became known as The Calculator. By consolidating and documenting the work of the Oxford Scholars, his influence on the subsequent history of thought was considerable, as he was widely read by later mathematicians, including Leibniz, who had a copy of Heytesbury in his personal library.

(To read more about the Oxford Scholars, and their connections with their contemporaries in Paris, see Chapter 3 of Galileo Unbound (Oxford University Press, 2018).)

 

A Wealth of Motions: Six Generations in the History of the Physics of Motion


Since Galileo launched his trajectory, there have been six broad generations that have traced the continuing development of concepts of motion. These are: 1) Universal Motion; 2) Phase Space; 3) Space-Time; 4) Geometric Dynamics; 5) Quantum Coherence; and 6) Complex Systems. These six generations were not all sequential, many evolving in parallel over the centuries, borrowing from each other, and there surely are other ways one could divide up the story of dynamics. But these six generations capture the grand concepts and the crucial paradigm shifts that are Galileo’s legacy, taking us from Galileo’s trajectory to the broad expanses across which physicists practice physics today.

Universal Motion emerged as a new concept when Isaac Newton proposed his theory of universal gravitation by which the force that causes apples to drop from trees is the same force that keeps the Moon in motion around the Earth, and the Earth in motion around the Sun. This was a bold step because even in Newton’s day, some still believed that celestial objects obeyed different laws. For instance, it was only through the work of Edmund Halley, a contemporary and friend of Newton’s, that comets were understood to travel in elliptical orbits obeying the same laws as the planets. Universal Motion included ideas of momentum from the start, while concepts of energy and potential, which fill out this first generation, took nearly a century to develop in the hands of many others, like Leibniz and Euler and the Bernoullis. This first generation was concluded by the masterwork of the Italian-French mathematician Joseph-Louis Lagrange, who also planted the seed of the second generation.

The second generation, culminating in the powerful and useful Phase Space, also took more than a century to mature. It began when Lagrange divorced dynamics from geometry, establishing generalized coordinates as surrogates to directions in space. Ironically, by discarding geometry, Lagrange laid the foundation for generalized spaces, because generalized coordinates could be anything, coming in any units and in any number, each coordinate having its companion velocity, doubling the dimension for every freedom. The Austrian physicist Ludwig Boltzmann expanded the number of dimensions to the scale of Avogadro’s number of particles, and he discovered the conservation of phase space volume, an invariance of phase space that stays the same even as 10^23 atoms (Avogadro’s number) in ideal gases follow their random trajectories. The idea of phase space set the stage for statistical mechanics and for a new probabilistic viewpoint of mechanics that would extend into chaotic motions.

The French mathematician Henri Poincaré got a glimpse of chaotic motion in 1890 as he rushed to correct an embarrassing mistake in his manuscript that had just won a major international prize. The mistake was mathematical, but the consequences were profoundly physical, beginning the long road to a theory of chaos that simmered, without boiling, for nearly seventy years until computers became common lab equipment. Edward Lorenz of MIT, working on models of the atmosphere in the early 1960s, used one of the earliest scientific computers to expose the beauty and the complexity of chaotic systems. He discovered that the computer simulations were exponentially sensitive to the initial conditions, and the joke became that a butterfly flapping its wings in China could cause hurricanes in the Atlantic. In his computer simulations, Lorenz discovered what today is known as the Lorenz butterfly, an example of something called a “strange attractor”. But the term chaos is a bit of a misnomer, because chaos theory is primarily about finding what things are shared in common, or are invariant, among seemingly random-acting systems.

The third generation in concepts of motion, Space-Time, is indelibly linked with Einstein’s special theory of relativity, but Einstein was not its originator. Space-time was the brainchild of the gifted but short-lived Prussian mathematician Hermann Minkowski, who had been attracted from Königsberg to the mathematical powerhouse at the University in Göttingen, Germany around the turn of the 20th Century by David Hilbert. Minkowski was an expert in invariant theory, and when Einstein published his special theory of relativity in 1905 to explain the Lorentz transformations, Minkowski recognized a subtle structure buried inside the theory. This structure was related to Riemann’s metric theory of geometry, but it had the radical feature that time appeared as one of the geometric dimensions. This was a drastic departure from all former theories of motion that had always separated space and time: trajectories had been points in space that traced out a continuous curve as a function of time. But in Minkowski’s mind, trajectories were invariant curves, and although their mathematical representation changed with changing point of view (relative motion of observers), the trajectories existed in a separate unchanging reality, not mere functions of time, but eternal. He called these trajectories world lines. They were static structures in a geometry that is today called Minkowski space. Einstein at first was highly antagonistic to this new view, but he relented, and later he so completely adopted space-time in his general theory that today Minkowski is almost forgotten, his echo heard softly in expressions of the Minkowski metric that is the background to Einstein’s warped geometry that bends light and captures errant spacecraft.

The fourth generation in the development of concepts of motion, Geometric Dynamics, began when an ambitious French physicist with delusions of grandeur, the historically ambiguous Pierre Louis Maupertuis, returned from a scientific boondoggle to Lapland, where he had measured the flattening of the Earth in defense of Newtonian physics over Cartesian. Skyrocketed to fame by the success of the expedition, he began his second act by proposing the Principle of Least Action, a principle by which all motion seeks to be most efficient by taking a geometric path that minimizes a physical quantity called action. In this principle, Maupertuis saw both a universal law that could explain all of physical motion and a path for himself to eternal fame in the company of Galileo and Newton. Unfortunately, his high hopes were dashed through personal conceit and nasty intrigue, and most physicists today don’t even recognize his name. But the idea of least action struck a deep chord that reverberates throughout physics. It is the first and fundamental example of a minimum principle, of which there are many. For instance, minimum potential energy identifies points of system equilibrium, and paths of minimum distance are geodesics. In dynamics, minimizing the time-integrated difference between kinetic and potential energies identifies the trajectories that systems actually follow, and minimizing distance through space-time warped by mass and energy density identifies the paths of falling objects.
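To see the least-action idea do real work, here is a minimal numerical sketch of my own, with an arbitrary mass, flight time, and grid: it discretizes the action for a ball thrown straight up, the sum over time steps of kinetic minus potential energy, pins the endpoints, and hands the interior points to a generic off-the-shelf minimizer (scipy.optimize.minimize). The path the minimizer settles on is Galileo’s parabola.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the Principle of Least Action for a ball in uniform gravity:
# discretize S = sum over steps of [ (1/2) m (dy/dt)^2 - m g y ] * dt,
# fix the endpoints, and let a generic minimizer choose the interior points.
# The mass, flight time, and grid size are arbitrary choices for the illustration.

m, g = 1.0, 9.8
T, N = 2.0, 41                       # total flight time and number of grid points
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]
y_start, y_end = 0.0, 0.0            # thrown up and caught at the same height

def action(y_interior):
    y = np.concatenate(([y_start], y_interior, [y_end]))
    v = np.diff(y) / dt                          # velocity on each segment
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (y[:-1] + y[1:])   # potential at segment midpoints
    return np.sum((kinetic - potential) * dt)

# Start from a deliberately "wrong" straight-line guess for the interior points.
guess = np.zeros(N - 2)
path = minimize(action, guess).x

exact = 0.5 * g * t * (T - t)                    # Galileo's parabola for these endpoints
full_path = np.concatenate(([y_start], path, [y_end]))
print("max deviation from the parabola:", np.max(np.abs(full_path - exact)))
```

The agreement is not a numerical accident: setting the gradient of the discretized action to zero forces the discrete second derivative of the path to equal -g, which is exactly the discrete statement of free fall.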

Maupertuis’ fundamentally important idea was picked up by Euler and Lagrange, who expanded it into the language of differential geometry. This was the language of Bernhard Riemann, a gifted and shy German mathematician whose geometry was adopted by physicists to describe motion as a geodesic, the shortest path, like a great-circle route on the Earth, through an abstract dynamical space defined by kinetic energy and potentials. In this view, it is the geometry of the abstract dynamical space that imposes Galileo’s simple parabolic form on freely falling objects. Einstein took this viewpoint farther than anyone before him, showing how mass and energy warp space and how free objects near gravitating bodies move along geodesic curves defined by the shape of space. This brought trajectories to a new level of abstraction, as space itself became the cause of motion. Prior to general relativity, motion occurred in space. Afterwards, motion was caused by space. In this sense, gravity is not a force, but a path down which everything falls.

The fifth generation of concepts of motion, Quantum Coherence, increased the level of abstraction yet again in the comprehension of trajectories, ushering in difficult concepts like wave-particle duality and quantum interference. Quantum interference underlies many of the counter-intuitive properties of quantum systems, including the possibility for a quantum system to be in two or more states at the same time, and for quantum computers to crack codes once thought unbreakable. But this new perspective came with a cost, introducing fundamental uncertainties locked in a battle of trade-offs: as one measurement becomes more certain, another becomes more uncertain.

Einstein distrusted Heisenberg’s uncertainty principle, not because he disagreed with its veracity, but because he felt it was more a statement of ignorance than of fundamental unknowability. In support of Einstein, Schrödinger devised a thought experiment meant as a reductio ad absurdum, in which a cat is placed in a box with a vial of poison that is broken if a quantum particle decays. The cruel fate of Schrödinger’s cat, who might or might not be poisoned, hinges on whether someone opens the lid and looks inside. Once the box is opened, there is one world in which the cat is alive and another in which the cat is dead, two worlds that spring into existence at that moment, a bizarre state of affairs from the point of view of a pragmatist. This is where Richard Feynman jumped into the fray and redefined the idea of a trajectory in a radically new way, showing that a quantum trajectory is not a single path, like Galileo’s parabola, but the combined effect of the quantum particle taking all possible paths simultaneously. Feynman established this new view of quantum trajectories in his doctoral thesis under the direction of John Archibald Wheeler at Princeton. By adapting Maupertuis’ Principle of Least Action to quantum mechanics, Feynman showed how every particle takes every possible path, simultaneously, with every path interfering in such a way that only the path with the most constructive interference is observed. In the quantum view, the deterministic trajectory of the cannonball evaporates into a cloud of probable trajectories.
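A crude way to see why only paths near the classical one survive (my sketch, not Feynman’s derivation): take a one-parameter family of paths between fixed endpoints, the free-fall parabola plus a sinusoidal wiggle of amplitude a, compute the action of each, and compare the phases exp(iS/ħ). Near a = 0 the action is stationary, so the phases line up; for larger wiggles they spin rapidly and cancel. The value of ħ used here is artificially large so the effect shows up with a one-parameter family of paths, and all the other numbers are arbitrary.

```python
import numpy as np

# Minimal sketch of Feynman's sum over paths for a ball in uniform gravity.
# Paths: the classical parabola plus a sinusoidal wiggle of amplitude a that
# vanishes at the endpoints.  Each path contributes a phase exp(i S / hbar);
# hbar is set artificially large so the interference is visible.  All numbers
# are arbitrary choices for the illustration.

m, g, T, hbar = 1.0, 9.8, 2.0, 1.0
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]
y_classical = 0.5 * g * t * (T - t)          # the path Galileo would predict

def action(y):
    v = np.diff(y) / dt
    lagrangian = 0.5 * m * v**2 - m * g * 0.5 * (y[:-1] + y[1:])
    return np.sum(lagrangian * dt)

amplitudes = np.linspace(-10.0, 10.0, 2001)  # wiggle amplitudes, one per path
phases = np.array([np.exp(1j * action(y_classical + a * np.sin(np.pi * t / T)) / hbar)
                   for a in amplitudes])

near = np.abs(amplitudes) < 1.0              # paths close to the classical one
print("per-path coherence, near-classical paths:", abs(phases[near].mean()))
print("per-path coherence, wiggly paths:        ", abs(phases[~near].mean()))
# Phases from paths near the parabola line up (coherence close to 1), while the
# wiggly paths spin through many cycles and largely cancel one another.
```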

In our current complex times, the sixth generation in the evolution of concepts of motion explores Complex Systems. Lorenz’s Butterfly has more to it than butterflies, because Life is the greatest complex system of our experience and our existence. We are the end result of a cascade of self-organizing events that began half a billion years after Earth coalesced out of the nebula, leading to the emergence of consciousness only about 100,000 years ago, a fact that lets us sit here now and wonder about it all. That we are conscious is perhaps no accident. Ever since the first amino acids coagulated in a muddy pool, life has been marching steadily uphill, toward a high mountain peak in a fitness landscape. Every advantage a species gained over its environment and its competitors exerted a pressure on all the other species in the ecosystem that pushed them to gain advantages of their own.

The modern field of evolutionary dynamics spans a wide range of scales across a wide range of abstractions. It treats genes and mutations on DNA in much the same way it treats the slow drift of languages and the emergence of new dialects. It treats games and social interactions the same way it does the evolution of cancer. Evolutionary dynamics is the direct descendant of chaos theory that turned butterflies into hurricanes, but the topics it treats are special to us as evolved species, and as potential victims of disease. The theory has evolved its own visualizations, such as the branches in the tree of life and the high mountain tops in fitness landscapes separated by deep valleys. Evolutionary dynamics draws, in a fundamental way, on dynamic processes in high dimensions, without which it would be impossible to explain how something as complex as human beings could have arisen from random mutations.
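As one concrete example of the high-dimensional dynamics the field leans on, here is a minimal sketch of replicator dynamics, a standard workhorse of evolutionary game theory (my choice of illustration; the essay does not single it out): each type’s share of the population grows when its fitness beats the population average, with a toy payoff matrix standing in for the ecosystem.

```python
import numpy as np

# Minimal sketch of replicator dynamics: each type's share of the population
# grows in proportion to how much its fitness exceeds the population average.
# The payoff matrix below is an arbitrary toy example.

A = np.array([[0.0, 2.0, 1.0],     # payoff of type i against type j
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])

x = np.array([0.6, 0.3, 0.1])      # initial population shares (sum to 1)
dt = 0.01

for step in range(5001):
    fitness = A @ x                        # fitness of each type in the current mix
    average = x @ fitness                  # population-average fitness
    x = x + dt * x * (fitness - average)   # replicator equation, forward Euler step
    x = x / x.sum()                        # guard against numerical drift
    if step % 1000 == 0:
        print(f"t = {step*dt:5.1f}   shares = {np.round(x, 3)}")
```

The same update rule works unchanged whether the “types” are genes, dialects, strategies in a game, or clones of a tumor; only the payoff matrix changes, which is part of why the framework spans such a wide range of scales.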

These six generations in the development of dynamics are unlikely to be the last; new generations may arise as physicists pursue the eternal quest for the truth behind the structure of reality.