Posts by David D. Nolte

E. M. Purcell Distinguished Professor of Physics and Astronomy at Purdue University

Who Invented the Quantum? Einstein vs. Planck

Albert Einstein defies condensation—it is impossible to condense his approach, his insight, his motivation—into a single word like “genius”.  He was complex, multifaceted, contradictory, revolutionary as well as conservative.  Some of his work was so simple that it is hard to understand why no one else did it first, even when they were right in the middle of it.  Lorentz and Poincaré spring to mind—they had been circling the ideas of spacetime for decades—but never stepped back to see what the simplest explanation could be.  Einstein did, and his special relativity was simple and beautiful, and the math is just high-school algebra.  On the other hand, parts of his work—like gravitation—are so embroiled in mathematics and the religion of general covariance that they remain opaque to physics neophytes 100 years later and are usually reserved for graduate study.

            Yet there is a third thread in Einstein’s work that relies on pure intuition—neither simple nor complicated—but almost impossible to grasp how he made his leap.  This is the case when he proposed the real existence of the photon—the quantum particle of light.  For ten years after this proposal, it was considered by almost everyone to be his greatest blunder. It even came up when Planck was nominating Einstein for membership in the German Academy of Science. Planck said

That he may sometimes have missed the target of his speculations, as for example, in his hypothesis of light quanta, cannot really be held against him.

In this single statement, we have the father of the quantum being criticized by the father of the quantum discontinuity.

Max Planck’s Discontinuity

In histories of the development of quantum theory, the German physicist Max Planck (1858—1947) is characterized as an unlikely revolutionary.  He was an establishment man, in the stolid German tradition, who was already embedded in his career, in his forties, holding a coveted faculty position at the University of Berlin.  In his research, he was responding to a theoretical challenge issued by Kirchhoff decades earlier, in 1860, to find the function of temperature and wavelength that described and explained the observed spectrum of radiating bodies.  Planck was not looking for a revolution.  In fact, he was looking for the opposite.  One of his motivations in studying the thermodynamics of electromagnetic radiation was to rebut the statistical theories of Boltzmann.  Planck had never been convinced by the atomistic and discrete approach Boltzmann had used to explain entropy and the second law of thermodynamics.  With the continuum of light radiation he thought he had the perfect system that would show how entropy behaved in a continuous manner, without the need for discrete quantities.

Therefore, Planck’s original intentions were to use blackbody radiation to argue against Boltzmann—to set back the clock.  For this reason, not only was Planck an unlikely revolutionary, he was a counter-revolutionary.  But Planck was a revolutionary because that is what he did, whatever his original intentions were, and he accepted his role as a revolutionary when he had the courage to stand in front of his scientific peers and propose a quantum hypothesis that lay at the heart of physics.

            Blackbody radiation, at the end of the nineteenth century, was a topic of keen interest and had been measured with high precision.  This was in part because it was such a “clean” system, having fundamental thermodynamic properties independent of any of the material properties of the black body, unlike the so-called ideal gases, which always showed some dependence on the molecular properties of the gas. The high-precision measurements of blackbody radiation were made possible by new developments in spectrometers at the end of the century, as well as infrared detectors that allowed very precise and repeatable measurements to be made of the spectrum across broad ranges of wavelengths. 

In 1893 the German physicist Wilhelm Wien (1864—1928) had used adiabatic expansion arguments to derive what became known as Wien’s Displacement Law, which showed that the peak wavelength of blackbody radiation varies inversely with the temperature.  Later, in 1896, he showed that the high-frequency behavior could be described by an exponential function of temperature and wavelength that required no other properties of the blackbody.  This was approaching the solution of Kirchhoff’s challenge of 1860 seeking a universal function.  However, at lower frequencies Wien’s approximation failed to match the measured spectrum.  In mid-1900, Planck was able to define a single functional expression that described the experimentally observed spectrum.  Planck had succeeded in describing black-body radiation, but he had not satisfied Kirchhoff’s second condition—to explain it.

To explain the blackbody spectrum, Planck modeled the emitting body as a set of ideal oscillators.  As an expert in the Second Law, Planck derived the functional form for the radiation spectrum, from which he found the entropy of the oscillators that produced the spectrum.  However, once he had the form for the entropy, he needed to explain why it took that specific form.  In this sense, he was working backwards from a known solution rather than forwards from first principles.  Planck was at an impasse.  He struggled but failed to find any continuum theory that could work.

Then Planck turned to Boltzmann’s statistical theory of entropy, the same theory that he had previously avoided and had hoped to discredit.  He described this as “an act of despair … I was ready to sacrifice any of my previous convictions about physics.”  In Boltzmann’s expression for entropy, it was necessary to “count” possible configurations of states.  But counting can only be done if the states are discrete.  Therefore, he lumped the energies of the oscillators into discrete ranges, or bins, that he called “quanta”.  The size of the bins was proportional to the frequency of the oscillator, and the proportionality constant had the units of Maupertuis’ quantity of action, so Planck called it the “quantum of action”. Finally, based on this quantum hypothesis, Planck derived the functional form of black-body radiation.
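In modern notation, the result is the Planck radiation law for the spectral energy density,

$$ u(f, T) = \frac{8\pi h f^3}{c^3}\,\frac{1}{e^{hf/k_B T} - 1} $$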

Planck presented his findings at a meeting of the German Physical Society in Berlin on December 14, 1900, introducing the word quantum (plural quanta) into physics from the Latin word that means quantity [1].  It was a casual meeting, and while the attendees knew they were seeing an intriguing new physical theory, there was no sense of a revolution.  But Planck himself was aware that he had created something fundamentally new.  The radiation law of cavities depended on only two physical properties—the temperature and the wavelength—and on two constants—Boltzmann’s constant k_B and a new constant that later became known as Planck’s constant h = ΔE/f = 6.6 × 10^-34 J·s.  By combining these two constants with other fundamental constants, such as the speed of light, Planck was able to establish accurate values for long-sought constants of nature, like Avogadro’s number and the charge of the electron.

            Although Planck’s quantum hypothesis in 1900 explained the blackbody radiation spectrum, his specific hypothesis was that it was the interaction of the atoms and the light field that was somehow quantized.  He certainly was not thinking in terms of individual quanta of the light field.

Figure. Einstein and Planck at a dinner held by Max von Laue in Berlin on Nov. 11, 1931.

Einstein’s Quantum

When Einstein analyzed the properties of the blackbody radiation in 1905, using his deep insight into statistical mechanics, he was led to the inescapable conclusion that light itself must be quantized in amounts E = hf, where h is Planck’s constant and f is the frequency of the light field.  Although this equation is exactly the same as Planck’s from 1900, the meaning was completely different.  For Planck, this was the discreteness of the interaction of light with matter.  For Einstein, this was the quantum of light energy—whole and indivisible—just as if the light quantum were a particle with particle properties.  For this reason, we can answer the question posed in the title of this Blog—Einstein takes the honor of being the inventor of the quantum.

Einstein’s clarity of vision is a marvel to behold even to this day.  His special talent was to take simple principles, ones that are almost trivial and beyond reproach, and to derive something profound.  In Special Relativity, he simply assumed the constancy of the speed of light and derived Lorentz’s transformations that had originally been based on abstruse electromagnetic arguments about the electron.  In General Relativity, he assumed that free fall represented an inertial frame, and he concluded that gravity must bend light.  In quantum theory, he assumed that the low-density limit of Planck’s theory had to be consistent with light in thermal equilibrium with the blackbody container, and he concluded that light itself must be quantized into packets of indivisible energy quanta [2].  One immediate consequence of this conclusion was his simple explanation of the photoelectric effect, for which the energy of an electron ejected from a metal by ultraviolet irradiation is a linear function of the frequency of the radiation.  Einstein published his theory of the quanta of light [3] as one of his four famous 1905 articles in Annalen der Physik in his Annus Mirabilis.

Figure. In the photoelectric effect a photon is absorbed by an electron bound in a metal, promoting it to a free electron that moves with a maximum kinetic energy given by the difference between the photon energy and the work function W of the metal. The energy of the photon is absorbed as a whole quantum, proving that light is composed of quantized corpuscles that are today called photons.
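Stated as an equation, the maximum kinetic energy of the ejected electron is

$$ E_{\rm max} = hf - W $$

which is linear in the frequency f with a slope equal to Planck’s constant h.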

Einstein’s theory of light quanta was controversial and was slow to be accepted.  It is ironic that in 1914, when Einstein was being considered for a position at the University of Berlin, Planck himself, as he championed Einstein’s case to the faculty, implored his colleagues to accept Einstein despite his ill-conceived theory of light quanta [4].  This comment by Planck goes far to show how Planck, father of the quantum revolution, did not fully grasp, even by 1914, the fundamental nature and consequences of his original quantum hypothesis.  At about the same time, the American physicist Robert Millikan (1868—1953) performed a precise experimental measurement of the photoelectric effect, with the ostensible intention of proving Einstein wrong, but he accomplished just the opposite—providing clean experimental evidence confirming Einstein’s theory of the photoelectric effect.

The Stimulated Emission of Light

About a year after Millikan demonstrated that the quantum of energy associated with light absorption is absorbed whole and indivisible, Einstein took a further step in his theory of the light quantum. In 1916 he published a paper in the proceedings of the German Physical Society that explored how light would remain in a state of thermodynamic equilibrium when interacting with atoms that have discrete energy levels. Once again he used simple arguments, this time using the principle of detailed balance, to derive a new and unanticipated property of light—stimulated emission!

Figure. The stimulated emission of light. An excited state is stimulated to emit an identical photon when the electron transitions to its ground state.

The stimulated emission of light occurs when an electron is in an excited state of a quantum system, like an atom, and an incident photon stimulates the emission of a second photon that has the same energy and phase as the first photon. If there are many atoms in the excited state, then this process leads to a chain reaction as 1 photon produces 2, and 2 produce 4, and 4 produce 8, etc. This exponential gain in photons with the same energy and phase is the origin of laser radiation. At the time that Einstein proposed this mechanism, lasers were still nearly half a century in the future, but he was led to this conclusion by extremely simple arguments about transition rates.

Figure. Section of Einstein’s 1916 paper that describes the absorption and emission of light by atoms with discrete energy levels [5].

Detailed balance is a principle that states that in thermal equilibrium all fluxes are balanced. In the case of atoms with ground states and excited states, this principle requires that as many transitions occur from the ground state to the excited state as from the excited state to the ground state. The crucial new element that Einstein introduced was to distinguish spontaneous emission from stimulated emission. Just as the probability to absorb a photon must be proportional to the photon density, there must be an equivalent process that de-excites the atom and that also must be proportional to the photon density. In addition, an electron must be able to spontaneously emit a photon with a rate that is independent of photon density. This leads to distinct coefficients in the transition rate equations that are today called the “Einstein A and B coefficients”. The B coefficients relate to the photon density, while the A coefficient relates to spontaneous emission.

Figure. Section of Einstein’s 1917 paper that derives the equilibrium properties of light interacting with matter. The “B”-coefficient for transition from state m to state n describes stimulated emission. [6]

Using the principle of detailed balance together with his A and B coefficients as well as Boltzmann factors describing the number of excited states relative to ground state atoms in equilibrium at a given temperature, Einstein was able to derive an early form of what is today called the Bose-Einstein occupancy function for photons.

Derivation of the Einstein A and B Coefficients

Detailed balance requires the rate from m to n to be the same as the rate from n to m
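$$ A_{mn} N_m + B_{mn}\,\rho\, N_m = B_{nm}\,\rho\, N_n $$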

where the first term is the spontaneous emission rate from the excited state m to the ground state n, the second term is the stimulated emission rate, and the third term (on the right) is the absorption rate from n to m. The numbers in each state are N_m and N_n, and the density of photons is ρ. The number in the excited state relative to the number in the ground state is given by the Boltzmann factor
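$$ \frac{N_m}{N_n} = e^{-(E_m - E_n)/k_B T} = e^{-hf/k_B T} $$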

Assuming that the stimulated transition coefficient from n to m is the same as that from m to n, and inserting the Boltzmann factor, yields
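$$ \rho = \frac{A_{mn}/B_{mn}}{e^{hf/k_B T} - 1} $$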

The Planck density of photons for ΔE = hf is
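$$ \rho(f) = \frac{8\pi h f^3}{c^3}\,\frac{1}{e^{hf/k_B T} - 1} $$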

which yields the final relation between the spontaneous emission coefficient and the stimulated emission coefficient
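$$ A_{mn} = \frac{8\pi h f^3}{c^3}\, B_{mn} $$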

The total emission rate is
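$$ R_{m\to n} = \left(A_{mn} + B_{mn}\,\rho\right) N_m = A_{mn}\left(1 + \bar{p}\right) N_m $$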

where p-bar is the average photon number per mode in the cavity. One of the striking aspects of this derivation is that no assumptions are made about the physical mechanisms that determine the coefficient B. Only arguments of detailed balance are required to arrive at these results.

Einstein’s Quantum Legacy

Einstein was awarded the Nobel Prize in 1921 for the photoelectric effect, not for the photon nor for any of Einstein’s other theoretical accomplishments.  Even in 1921, the quantum nature of light remained controversial.  It was only in 1923, after the American physicist Arthur Compton (1892—1962) showed that energy and momentum were conserved in the scattering of photons from electrons, that the quantum nature of light began to be accepted.  A few years later, in 1926, the quantum of light was named the “photon” by the American physical chemist Gilbert Lewis (1875—1946).

A blog article like this, which attributes the invention of the quantum to Einstein rather than Planck, must say something about the irony of this attribution.  If Einstein is the father of the quantum, he ultimately was led to disinherit his own brainchild.  His final and strongest argument against the quantum properties inherent in the Copenhagen Interpretation was his famous EPR paper which, against his expectations, launched the concept of entanglement that underlies the coming generation of quantum computers.


Einstein’s Quantum Timeline

1900 – Planck’s quantum discontinuity for the calculation of the entropy of blackbody radiation.

1905 – Einstein’s “Miracle Year”. Proposes the light quantum.

1911 – First Solvay Conference on the theory of radiation and quanta.

1913 – Bohr’s quantum theory of hydrogen.

1914 – Einstein becomes a member of the German Academy of Science.

1915 – Millikan measurement of the photoelectric effect.

1916 – Einstein proposes stimulated emission.

1921 – Einstein receives Nobel Prize for the photoelectric effect. Third Solvay Conference on atoms and electrons.

1927 – Heisenberg’s uncertainty relation. Fifth Solvay International Conference on Electrons and Photons in Brussels. “First” Bohr-Einstein debate on indeterminacy in quantum theory.

1930 – Sixth Solvay Conference on magnetism. “Second” Bohr-Einstein debate.

1935 – Einstein-Podolsky-Rosen (EPR) paper on the completeness of quantum mechanics.


Selected Einstein Quantum Papers

Einstein, A. (1905). “Generation and conversion of light with regard to a heuristic point of view.” Annalen Der Physik 17(6): 132-148.

Einstein, A. (1907). “Die Plancksche Theorie der Strahlung und die Theorie der spezifischen Wärme.” Annalen der Physik 22: 180–190.

Einstein, A. (1909). “On the current state of radiation problems.” Physikalische Zeitschrift 10: 185-193.

Einstein, A. and O. Stern (1913). “An argument for the acceptance of molecular agitation at absolute zero.” Annalen Der Physik 40(3): 551-560.

Einstein, A. (1916). “Strahlungs-Emission und -Absorption nach der Quantentheorie.” Verh. Deutsch. Phys. Ges. 18: 318.

Einstein, A. (1917). “Quantum theory of radiation.” Physikalische Zeitschrift 18: 121-128.

Einstein, A., B. Podolsky and N. Rosen (1935). “Can quantum-mechanical description of physical reality be considered complete?” Physical Review 47(10): 777-780.


Notes

[1] M. Planck, “Elementary quanta of matter and electricity,” Annalen Der Physik, vol. 4, pp. 564-566, Mar 1901.

[2] Klein, M. J. (1964). Einstein’s First Paper on Quanta. The Natural Philosopher. D. A. Greenberg and D. E. Gershenson. New York, Blaisdell. 3.

[3] A. Einstein, “Generation and conversion of light with regard to a heuristic point of view,” Annalen Der Physik, vol. 17, pp. 132-148, Jun 1905.

[4] Chap. 2 in “Mind at Light Speed”, by David Nolte (Free Press, 2001)

[5] Einstein, A. (1916). “Strahlungs-Emission und -Absorption nach der Quantentheorie.” Verh. Deutsch. Phys. Ges. 18: 318.

[6] Einstein, A. (1917). “Quantum theory of radiation.” Physikalische Zeitschrift 18: 121-128.

Looking Under the Hood of the Generalized Stokes Theorem

Everyone who has taken classes in physics or engineering knows that the most magical of all vector identities (and there are so many vector identities) are Green’s theorem in 2D, and Stokes’ and Gauss’ theorem in 3D.  These theorems have the magical ability to take an integral over some domain and replace it with a simpler integral over the boundary of the domain.  For instance, the vector form of Stokes’ theorem in 3D is
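$$ \oint_C \mathbf{F}\cdot d\mathbf{r} = \iint_S \left(\nabla\times\mathbf{F}\right)\cdot d\mathbf{A} $$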

for the curl of a vector field, where S is the surface domain, and C is the closed loop surrounding the domain.

Maybe the most famous application of these theorems is to convert Maxwell’s equations of electromagnetism from their differential form to their integral form.  For instance, we can start with the differential version for the curl of the B-field and integrate over a surface
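$$ \iint_S \left(\nabla\times\mathbf{B}\right)\cdot d\mathbf{A} = \mu_0 \iint_S \left(\mathbf{J} + \varepsilon_0\,\frac{\partial \mathbf{E}}{\partial t}\right)\cdot d\mathbf{A} $$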

then applying Stokes’ theorem in 3D (or Green’s theorem in 2D), which converts the two-dimensional surface integral into a one-dimensional integral around the closed loop bounding the surface, yields the integral form of Ampere’s law
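$$ \oint_C \mathbf{B}\cdot d\mathbf{r} = \mu_0 I_{\rm enc} + \mu_0\varepsilon_0\,\frac{d}{dt}\iint_S \mathbf{E}\cdot d\mathbf{A} $$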

Stokes’ theorem has the important property that it converts a high-dimensional integral into a lower-dimensional integral over the closed boundary of the original domain. Stokes’ theorem in component form is

In the case of Green’s theorem in 2D, the principle is easy to explain by the oriented vector character of the integrals and the notion of dividing a domain into small elements with oriented edges. In the case of nonzero circulation, all internal edges of smaller regions cancel pairwise until the outer boundary is reached, where a macroscopic circulation persists along all the outer edges. Similarly in Gauss’ theorem in 3D, the flux of a vector through the face of one element is equal and opposite to the flux through the adjacent element, canceling out pairwise until the outer boundary is reached and the net flux is finite summed over the outer elements. This pairwise cancellation on adjacent subdomains, continuing until the outer boundary is reached, is the essential property of Stokes’ theorem, and it extends to spaces of any dimension and to general manifolds that need not be Euclidean.

Figure. Principle of Stokes’ theorem. The circulation from all internal edges cancels out. But on the boundary, all edges add together for a macroscopic circulation.

George Stokes and the Cambridge Tripos

Since 1824, the mathematics course at Cambridge University has held a yearly exam called the Tripos to identify the top graduating mathematics student.  The winner of the contest is called the Senior Wrangler, and in the 1800’s the Senior Wrangler received a level of public fame and admiration for intellectual achievement that is somewhat like the fame reserved today for star athletes.  Famous Senior Wranglers include George Airy, John Herschel, Arthur Cayley, Lord Rayleigh, Arthur Eddington, J. E. Littlewood, Peter Guthrie Tait and Joseph Larmor.

Figure. Sir George Gabriel Stokes, 1st Baronet

In his second year at Cambridge, Stokes had begun studying under William Hopkins (1793 – 1866), and in 1841 George Stokes became Senior Wrangler the same year he won the Smith’s Prize in mathematics.  The Tripos tested primarily on bookwork, while the Smith’s Prize tested on originality.  To achieve top scores on both designated the student as the most capable and creative mathematician of his class.  Stokes was immediately offered a fellowship at Pembroke College, allowing him to teach and study whatever he willed. Within eight years he was chosen for the Lucasian Chair of Mathematics. The Lucasian Chair of Mathematics at Cambridge is one of the most famous academic chairs in the world.  The first Lucasian professor was Isaac Barrow in 1664, followed by Isaac Newton, who held the post for 33 years.  Other famous Lucasian professors were George Airy, Charles Babbage, Joseph Larmor, Paul Dirac, and Stephen Hawking. Among the many fields to which Stokes made important contributions was hydrodynamics, where he derived Stokes’ Law of Drag.

In 1854 Stokes was one of the Cambridge professors setting exam questions for the Tripos. In a letter to Stokes, William Thomson (later Lord Kelvin) suggested putting on the exam the task of extending Green’s Theorem to three dimensions and proving the theorem, and Stokes obliged. That year the Tripos consisted of 16 papers spread over 8 days, totaling over 40 hours of effort on 211 questions. One of the candidates for Senior Wrangler that year was James Clerk Maxwell, but he was narrowly beaten out by Edward Routh (1831 – 1907). Routh became famous, but not as famous as Maxwell, who later applied Stokes’ Theorem to derive the equations of electrodynamics.

The Fundamental Theorem of Calculus

One of the first and simplest theorems that any student of intro calculus is taught is the Fundamental Theorem of Calculus
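$$ \int_a^b f(x)\,dx = F(b) - F(a) $$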

where F is called the “antiderivative” of the function f . The interpretation of the Fundamental Theorem is extremely simple:  The integral of a function over a domain is equal to its antiderivative evaluated at the boundary of the domain.  Generalizing this theorem a bit, it says that evaluating an integral over a domain is the same thing as evaluating a lower-dimensional quantity over the boundary of the domain.  The Fundamental Theorem of Calculus sounds a lot like Green’s Theorem or Stokes’ Theorem!  And in fact, they are all part of the same principle.  To understand this principle, we have to look into differential forms and the use of Grassmann’s wedge product and exterior algebra (the subject of my previous blog post).

Differential Forms

Just as in the case of the exterior algebra, the fundamental identities defined for differential forms are given by
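$$ dx\wedge dy = -\,dy\wedge dx \qquad\qquad dx\wedge dx = 0 $$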

A differential 1-form α and a differential 2-form β can be expressed as
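$$ \alpha = a\,dx + b\,dy + c\,dz \qquad\qquad \beta = p\,dy\wedge dz + q\,dz\wedge dx + r\,dx\wedge dy $$

where a, b, c and p, q, r are scalar functions of position (the particular letters are just generic labels).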

The key to understanding why the wedge product shows up in this definition is to recognize that the operation of producing a product of differentials is only defined for the wedge product.  Within the language of differential forms, the symbol dxdy has no meaning, despite the fact that this symbol shows up routinely when integrating.  In fact, integrals that use the expression dxdy are ambiguous, because the oriented surface must be inferred from the context of the integral rather than given.  This is why integration over multiple variables should actually be performed using differential forms, though it is rarely (or never) stated in lower-level calculus classes.

Integration of Differential Forms

Line integrals, as in the Fundamental Theorem of Calculus, are obvious and unique.  However, as soon as we move to integrals over areas, the wedge product is needed.  This is because a general area is oriented.  If you think of a plane defined by z = 0, the surface element dxdy can be oriented along either the positive z-axis or the negative z-axis.  Which one should you take?  The answer is: don’t make the choice.  Work with differential forms, and the integral may be over dx^dy or dy^dx, depending on the exterior analysis that produced the integral in the first place.  One is the negative of the other.  You take the element as it arises from the algebra, and you cannot go wrong!

As an example, we can use differential forms to express a surface integral correctly as
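$$ \iint_S f \;\, dq\wedge dp $$

(here f is a generic integrand, and the oriented area element has been taken, for illustration, in the order dq^dp).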

If you make the substitutions: x = (p-q)/2 and y = (p+q)/2, then dp = dx + dy and dq = dy – dx and
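$$ dq\wedge dp = (dy - dx)\wedge(dx + dy) = dy\wedge dx - dx\wedge dy = -2\; dx\wedge dy $$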

which yields
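$$ \iint_S f \;\, dq\wedge dp = -2 \iint_S f \;\, dx\wedge dy $$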

In this case, you will recognize that the factor of -2 is just the Jacobian of the transformation.  Working this way with differential forms makes transformation simple, like a book-keeping trick, and safe, so you just follow the algebra through without needing to make choices.

Exterior Differentiation

The exterior derivative of the 1-form α (defined above) is defined as
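$$ d\alpha = da\wedge dx + db\wedge dy + dc\wedge dz $$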

where the exterior derivative turns a differential r-form into a differential (r+1)-form.  For instance, in 3D
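$$ d\alpha = \left(\frac{\partial c}{\partial y} - \frac{\partial b}{\partial z}\right) dy\wedge dz + \left(\frac{\partial a}{\partial z} - \frac{\partial c}{\partial x}\right) dz\wedge dx + \left(\frac{\partial b}{\partial x} - \frac{\partial a}{\partial y}\right) dx\wedge dy $$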

This should look very familiar to you.  If we expressly make the equivalence
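$$ \int_S d\alpha = \oint_C \alpha $$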

where the integral on the left is a surface integral over a domain, and the integral on the right is a line integral over a line bounding the domain, then
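$$ \iint_S \left(\nabla\times\mathbf{F}\right)\cdot d\mathbf{A} = \oint_C \mathbf{F}\cdot d\mathbf{r} \qquad \text{with } \mathbf{F} = (a, b, c) $$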

This is just the curl theorem (Stokes’ theorem).

Figure. Stokes Theorem in 3D vector form and general form.

Taking the dimension up one notch, consider the differential 2-form β where
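$$ d\beta = \left(\frac{\partial p}{\partial x} + \frac{\partial q}{\partial y} + \frac{\partial r}{\partial z}\right) dx\wedge dy\wedge dz $$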

This again looks very familiar, and if we write down the equivalence
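$$ \int_V d\beta = \oint_S \beta $$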

then we immediately have the divergence theorem.

We can even find other vector identities using these differential forms.  For instance, if we start with a 2-form expressed as
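$$ \beta = d\alpha \qquad\Longrightarrow\qquad d\beta = d(d\alpha) = 0 $$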

then we have proven the vector identity
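$$ \nabla\cdot\left(\nabla\times\mathbf{F}\right) = 0 $$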

stating that the divergence of a curl must vanish.  This is like playing games with simple algebra to prove profound theorems in vector calculus!

Figure. Exterior differentiation of a differential 1-form to yield a differential 2-form.

Stokes’ Theorem in Higher Dimensions

The power of differential forms is their ability to generalize automatically to higher dimensions. The components of a differential 1-form can span any number of dimensions, and exterior differentiation yields the familiar curl theorem in any number of dimensions

But exterior differentiation of a differential 2-form in 4D gives a mixed expression that is neither a pure curl nor a pure divergence

The differential 3-form in 4D under exterior differentiation yields the 4D divergence

although the orientations of the 3D boundary elements must be chosen appropriately.

Differential Forms in 4D Electromagnetics

As long as we are working with differential forms and Stokes’ Theorem, let’s finish up by looking at Maxwell’s electromagnetic equations as four-dimensional equations in spacetime.  First, construct the 2-form using the displacement field D and the magnetic intensity H.

The differential of this two-form creates a lot of terms, such as

This can be simplified by collecting like terms to

Renaming each coefficient so that

yields two of Maxwell’s equations
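$$ \nabla\cdot\mathbf{D} = \rho \qquad\qquad \nabla\times\mathbf{H} - \frac{\partial\mathbf{D}}{\partial t} = \mathbf{J} $$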

To find the other two Maxwell equations, start with the 1-form
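$$ \alpha = A_x\,dx + A_y\,dy + A_z\,dz - \phi\,dt \qquad \text{(vector potential } \mathbf{A}\text{ and scalar potential } \phi\text{; the sign of the } \phi\,dt \text{ term is a convention)} $$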

and try the derivation yourself!

Differentiating yields a differential two-form. Then identify the curl of the vector potential as the B-field, etc., to derive the other two Maxwell equations
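$$ \nabla\cdot\mathbf{B} = 0 \qquad\qquad \nabla\times\mathbf{E} + \frac{\partial\mathbf{B}}{\partial t} = 0 $$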

Bibliography

Vargas, J. G., Differential Geometry for Physicists and Mathematicians: Moving Frames and Differential Forms: From Euclid Past Riemann. 2014; p 1-293.

Hermann Grassmann's Nimble Wedge Product

          

Hyperspace is neither a fiction nor an abstraction. Every interaction we have with our every-day world occurs in high-dimensional spaces of objects and coordinates and momenta. This dynamical hyperspace—also known as phase space—is as real as mathematics, and physics in phase space can be calculated and used to predict complex behavior. Although phase space can extend to thousands of dimensions, our minds are incapable of thinking even in four dimensions—we have no ability to visualize such things. 

Grassmann was convinced that he had discovered a fundamentally new type of mathematics—he actually had.

Part of the trick of doing physics in high dimensions is having the right tools and symbols with which to work.  For high-dimensional math and physics, one such indispensable tool is Hermann Grassmann’s wedge product. When I first saw the wedge product, probably in some graduate-level dynamics textbook, it struck me as a little cryptic.  It is sort of like a vector product, but not quite, and it operated on things that had an intimidating name— “forms”. I kept trying to “understand” forms as if they were types of vectors.  After all, under special circumstances, forms and wedges did produce some vector identities.  It was only after I actually stepped back and asked myself how they were constructed that I realized that forms and wedge products were just a simple form of algebra, called exterior algebra. Exterior algebra is an especially useful form of algebra with simple rules.  It goes far beyond vectors while harking back to a time before vectors even existed.

Hermann Grassmann: A Backwater Genius

We are so accustomed to working with oriented objects, like vectors that have a tip and tail, that it is hard to think of a time when that wouldn’t have been natural.  Yet in the mid 1800’s, almost no one was thinking of orientations as a part of geometry, and it took real genius to conceive of oriented elements, how to manipulate them, and how to represent them graphically and mathematically.  At a time when some of the greatest mathematicians lived—Weierstrass, Möbius, Cauchy, Gauss, Hamilton—it turned out to be a high school teacher from a backwater in Prussia who developed the theory for the first time.

Hermann Grassmann

Hermann Grassmann was the son of a high school teacher at the Gymnasium in Stettin, Prussia (now Szczecin, Poland), and he inherited his father’s position, but at a lower level.  Despite his lack of background and training, he had serious delusions of grandeur, aspiring to teach mathematics at the university in Berlin, even when he was only allowed to teach the younger high school students basic subjects.  Nonetheless, Grassmann embarked on a program to educate himself, attending classes at Berlin in mathematics.  As part of the requirements to be allowed to teach mathematics to the senior high-school students, he had to submit a thesis on an appropriate topic.

Modern Szczecin.

For years, he had been working on an idea that had originally come from his father about a mathematical theory that could manipulate abstract objects or concepts.  He had taken this vague thought and had slowly developed it into a rigorous mathematical form with symbols and manipulations.  His mind was one of those that could permute endlessly, and he defined and discovered dozens of different ways that objects could be defined and combined, and he wrote them all down in a tome of excessive size and complexity.  When it was time to submit the thesis to the examiners, he had created a broad new system of algebra—at a time when no one recognized what a new algebra even meant, especially not his examiners, who could understand none of it.  Fortunately, Grassmann had been corresponding with the famous German mathematician August Möbius over his ideas, and Möbius was encouraging and supportive, and the examiners accepted his thesis and allowed him to teach the upperclassmen at his high school.

The Gymnasium in Stettin

            Encouraged by his success, Grassmann hoped that Möbius would help him climb even higher to teach in Berlin.  Convinced that he had discovered a fundamentally new type of mathematics (he actually had), he decided to publish his thesis as a book under the title Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (The Theory of Linear Extension, a New Branch of Mathematics).  He published it out of his own pocket.  It is some measure of his delusion that he had thousands printed, but almost none sold, and piles of the books were stored away to be used later as scrap paper. Möbius likewise distanced himself from Grassmann and his obsessive theories. Discouraged, Grassmann turned his back on mathematics, though he later achieved fame in the field of linguistics.  (For more on Grassmann’s ideas and struggle for recognition, see Chapter 4 of Galileo Unbound).

Excerpt from Grassmann’s Ausdehnungslehre (Google Books).

The Odd Identity of Nicholas Bourbaki

If you look up the publication history of the famous French mathematician, Nicolas Bourbaki, you will be amazed to see a publication history that spans from 1935 to 2018 — more than 80 years of publications!  But if you look in the obituaries, you will see that he died in 1968.  It’s pretty impressive to still be publishing 50 years after your death.  J. R. R. Tolkien has been doing that regularly, but few others spring to mind.

Actually, you have been duped!  Nicolas is a fiction, constructed as a hoax by a group of French mathematicians who were nonetheless deadly serious about the need for a rigorous foundation on which to educate the new wave of mathematicians in the mid 20th century.  The group was formed during a mathematics meeting in 1934, organized by André Weil and joined by Henri Cartan (son of Élie Cartan), Claude Chevalley, Jean Coulomb, Jean Delsarte, Jean Dieudonné, Charles Ehresmann, René de Possel, and Szolem Mandelbrojt (uncle of Benoit Mandelbrot).  They picked the last name of a French general, and Weil’s wife named him Nicolas.  The group began publishing books under this pseudonym in 1935 and has continued until the present time.  While their publications were entirely serious, the group from time to time had fun with mild hoaxes, such as posting his obituary on one occasion and a wedding announcement of his daughter on another.

The wedge product symbol took several years to mature.  Élie Cartan’s book on differential forms published in 1945 used brackets to denote the product instead of the wedge. In Chevalley’s book of 1946, he does not use the wedge, but uses a small square, and the book Chevalley wrote in 1951, “Introduction to the Theory of Algebraic Functions of One Variable”, still uses a small square.  But in 1954, Chevalley uses the wedge symbol in his book on Spinors.  He refers to his own book of 1951 (which did not use the wedge) and also to the 1943 version of Bourbaki. The few existing copies of the 1943 Algebra by Bourbaki lie in obscure European libraries. The 1973 edition of the book does indeed use the wedge, although I have yet to get my hands on the original 1943 version. Therefore, the wedge symbol seems to have originated with Chevalley sometime between 1951 and 1954 and gained widespread use after that.

Exterior Algebra

Exterior algebra begins with the definition of an operation on elements.  The elements, for example (u, v, w, x, y, z, etc.), are drawn from a vector space in its most abstract form as “tuples”, such that x = [x_1, x_2, x_3, …, x_n] in an n-dimensional space.  On these elements there is an operation called the “wedge product”, the “exterior product”, or the “Grassmann product”.  It is denoted, for example between two elements x and y, as x^y.  It captures the sense of orientation through anti-commutativity, such that
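$$ x\wedge y = -\,y\wedge x $$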

As simple as this definition is, it sets up virtually all later manipulations of vectors and their combinations.  For instance, we can immediately prove (try it yourself) that the wedge product of a vector element with itself equals zero
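$$ x\wedge x = 0 $$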

Once the elements of the vector space have been defined, it is possible to define “forms” on the vector space.  For instance, a 1-form, also known as a vector, is any function
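$$ u = a\,\hat{x} + b\,\hat{y} + c\,\hat{z} $$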

where a, b, c are scalar coefficients.  The wedge product of two 1-forms
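$$ u\wedge v = (a\,\hat{x} + b\,\hat{y} + c\,\hat{z})\wedge(d\,\hat{x} + e\,\hat{y} + f\,\hat{z}) = (bf - ce)\,\hat{y}\wedge\hat{z} + (cd - af)\,\hat{z}\wedge\hat{x} + (ae - bd)\,\hat{x}\wedge\hat{y} $$

(writing the second 1-form v with scalar coefficients d, e, f)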

yields a 2-form, also known as a bivector.  This specific example makes a direct connection to the cross product in 3-space as
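$$ u\times v = (bf - ce)\,\hat{x} + (cd - af)\,\hat{y} + (ae - bd)\,\hat{z} $$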

where the unit vectors are mapped onto the 2-forms
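$$ \hat{x} \leftrightarrow \hat{y}\wedge\hat{z} \qquad\quad \hat{y} \leftrightarrow \hat{z}\wedge\hat{x} \qquad\quad \hat{z} \leftrightarrow \hat{x}\wedge\hat{y} $$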

Indeed, many of the vector identities of 3-space can be expressed in terms of exterior products, but these are just special cases, and the wedge product is more general.  For instance, while the triple vector cross product is not associative, the wedge product is associative
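$$ (u\wedge v)\wedge w = u\wedge(v\wedge w) $$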

which can give it an advantage when performing algebra on r-forms.  Expressing the wedge product in terms of vector components
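$$ u\wedge v = (u_1\,e_1 + u_2\,e_2 + u_3\,e_3)\wedge(v_1\,e_1 + v_2\,e_2 + v_3\,e_3) = \sum_{i<j}\left(u_i v_j - u_j v_i\right)\, e_i\wedge e_j $$

(with the basis elements now written as e_1, e_2, e_3)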

yields the immediate generalization to any number of dimensions (using the Einstein summation convention)
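$$ u\wedge v = u_i\, v_j \; e_i\wedge e_j = \tfrac{1}{2}\left(u_i v_j - u_j v_i\right)\, e_i\wedge e_j $$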

In this way, the wedge product expresses relationships in any number of dimensions.

            A 3-form is constructed as the wedge product of 3 vectors
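$$ u\wedge v\wedge w = \epsilon_{ijk}\, u_i\, v_j\, w_k \;\; e_1\wedge e_2\wedge e_3 $$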

where the Levi-Civita permutation symbol has been introduced such that
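$$ \epsilon_{ijk} = \begin{cases} +1 & (i,j,k) \text{ an even permutation of } (1,2,3) \\ -1 & (i,j,k) \text{ an odd permutation of } (1,2,3) \\ \;\;0 & \text{if any index is repeated} \end{cases} $$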

Note that in 3-space there can be no 4-form, because one of the basis elements would be repeated, rendering the product zero.  Therefore, the most general multilinear form for 3-space is
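$$ \omega = c_0 + c_1\,e_1 + c_2\,e_2 + c_3\,e_3 + c_{12}\,e_1\wedge e_2 + c_{23}\,e_2\wedge e_3 + c_{31}\,e_3\wedge e_1 + c_{123}\,e_1\wedge e_2\wedge e_3 $$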

with 2^3 = 8 elements: one scalar, three 1-forms, three 2-forms and one 3-form.  In 4-space there are 2^4 = 16 elements: one scalar, four 1-forms, six 2-forms, four 3-forms and one 4-form.  So, the number of elements rises exponentially with the dimension of the space.

At this point, we have developed a rich multilinear structure, all based on the simple anti-commutativity of elements x^y = -y^x.  This structure is known as an exterior (or Grassmann) algebra, and it is closely related to the Clifford algebra, named after William Kingdon Clifford (1845-1879), second wrangler at Cambridge and close friend of Arthur Cayley.  But the wedge product is not just algebra—there is also a straightforward geometric interpretation of wedge products that make them useful when extending theories of surfaces and volumes into higher dimensions.

Geometric Interpretation

In Euclidean space, a cross product is related to areas and volumes of parallelepipeds. Wedge products are more general than cross products and they generalize the idea of areas and volumes to higher dimensions. As an illustration, an area 2-form is shown in Fig. 1 and a volume 3-form in Fig. 2.

Fig. 1 Area 2-form showing how the area of a parallelogram is related to the wedge product. The 2-form is an oriented area perpendicular to the unit vector.
Fig. 2 A volume 3-form in Euclidean space. The volume of the parallelepiped is equal to the magnitude of the wedge product of the three vectors u, v, and w.

The wedge product is not limited to 3 dimensions nor to Euclidean spaces. This is the power and the beauty of Grassmann’s invention. It also generalizes naturally to differential geometry of manifolds producing what are called differential forms. When integrating in higher dimensions or on non-Euclidean manifolds, the most appropriate approach is to use wedge products and differential forms, which will be the topic of my next blog on the generalized Stokes’ theorem.

Further Reading

1.         Dieudonné, J., The Tragedy of Grassmann. Séminaire de Philosophie et Mathématiques 1979, fascicule 2, 1-14.

2.         Fearnley-Sander, D., Hermann Grassmann and the Creation of Linear Algegra. American Mathematical Monthly 1979, 86 (10), 809-817.

3.         Nolte, D. D., Galileo Unbound: A Path Across Life, the Universe and Everything. Oxford University Press: 2018.

4.         Vargas, J. G., Differential Geometry for Physicists and Mathematicians: Moving Frames and Differential Forms: From Euclid Past Riemann. 2014; p 1-293.

Introduction to Modern Dynamics: Chaos, Networks, Space and Time

The second edition of Introduction to Modern Dynamics: Chaos, Networks, Space and Time publishes this week (November 18, 2019), available from Oxford University Press and Amazon.

Most physics majors will use modern dynamics in their careers: nonlinearity, chaos, network theory, econophysics, game theory, neural nets, geodesic geometry, among many others.

The first edition of Introduction to Modern Dynamics (IMD) was an upper-division junior-level mechanics textbook at the level of Thornton and Marion (Classical Dynamics of Particles and Systems) and Taylor (Classical Mechanics).  IMD helped lead an emerging trend in physics education to update the undergraduate physics curriculum.  Conventional junior-level mechanics courses emphasized Lagrangian and Hamiltonian physics, but notably missing from the classic subjects are modern dynamics topics that most physics majors will use in their careers: nonlinearity, chaos, network theory, econophysics, game theory, neural nets, geodesic geometry, among many others.  These are the topics at the forefront of physics that drive high-tech businesses and start-ups, which is where more than half of all physicists work. IMD introduced these modern topics to junior-level physics majors in an accessible form that allowed them to master the fundamentals to prepare them for the modern world.

The second edition (IMD2) continues that trend by expanding the chapters to include additional material and topics.  It rearranges several of the introductory chapters for improved logical flow and expands them to include key conventional topics that were missing in the first edition (e.g., Lagrange undetermined multipliers and expanded examples of Lagrangian applications).  It is also an opportunity to correct several typographical errors and other errata that students have identified over the past several years.  The second edition also has expanded homework problems.

The goal of IMD2 is to strengthen the sections on conventional topics (that students need to master to take their GREs) to make IMD2 attractive as a mainstream physics textbook for broader adoption at the junior level, while continuing the program of updating the topics and approaches that are relevant for the roles that physicists play in the 21st century.

(New Chapters and Sections highlighted in red.)

New Features in Second Edition:

Second Edition Chapters and Sections

Part 1 Geometric Mechanics

• Expanded development of Lagrangian dynamics

• Lagrange multipliers

• More examples of applications

• Connection to statistical mechanics through the virial theorem

• Greater emphasis on action-angle variables

• The key role of adiabatic invariants

Part 1 Geometric Mechanics

Chapter 1 Physics and Geometry

1.1 State space and dynamical flows

1.2 Coordinate representations

1.3 Coordinate transformation

1.4 Uniformly rotating frames

1.5 Rigid-body motion

Chapter 2 Lagrangian Mechanics

2.1 Calculus of variations

2.2 Lagrangian applications

2.3 Lagrange’s undetermined multipliers

2.4 Conservation laws

2.5 Central force motion

2.6 Virial Theorem

Chapter 3 Hamiltonian Dynamics and Phase Space

3.1 The Hamiltonian function

3.2 Phase space

3.3 Integrable systems and action–angle variables

3.4 Adiabatic invariants

Part 2 Nonlinear Dynamics

• New section on non-autonomous dynamics

• Entire new chapter devoted to Hamiltonian mechanics

• Added importance to Chirikov standard map

• The important KAM theory of “constrained chaos” and solar system stability

• Degeneracy in Hamiltonian chaos

• A short overview of quantum chaos

• Rational resonances and the relation to KAM theory

• Synchronized chaos

Part 2 Nonlinear Dynamics

Chapter 4 Nonlinear Dynamics and Chaos

4.1 One-variable dynamical systems

4.2 Two-variable dynamical systems

4.3 Limit cycles

4.4 Discrete iterative maps

4.5 Three-dimensional state space and chaos

4.6 Non-autonomous (driven) flows

4.7 Fractals and strange attractors

Chapter 5 Hamiltonian Chaos

5.1 Perturbed Hamiltonian systems

5.2 Nonintegrable Hamiltonian systems

5.3 The Chirikov Standard Map

5.4 KAM Theory

5.5 Degeneracy and the web map

5.6 Quantum chaos

Chapter 6 Coupled Oscillators and Synchronization

6.1 Coupled linear oscillators

6.2 Simple models of synchronization

6.3 Rational resonances

6.4 External synchronization

6.5 Synchronization of Chaos

Part 3 Complex Systems

• New emphasis on diffusion on networks

• Epidemic growth on networks

• A new section of game theory in the context of evolutionary dynamics

• A new section on general equilibrium theory in economics

Part 3 Complex Systems

Chapter 7 Network Dynamics

7.1 Network structures

7.2 Random network topologies

7.3 Synchronization on networks

7.4 Diffusion on networks

7.5 Epidemics on networks

Chapter 8 Evolutionary Dynamics

8.1 Population dynamics

8.2 Virus infection and immune deficiency

8.3 Replicator Dynamics

8.4 Quasi-species

8.5 Game theory and evolutionary stable solutions

Chapter 9 Neurodynamics and Neural Networks

9.1 Neuron structure and function

9.2 Neuron dynamics

9.3 Network nodes: artificial neurons

9.4 Neural network architectures

9.5 Hopfield neural network

9.6 Content-addressable (associative) memory

Chapter 10 Economic Dynamics

10.1 Microeconomics and equilibrium

10.2 Macroeconomics

10.3 Business cycles

10.4 Random walks and stock prices (optional)

Part 4 Relativity and Space–Time

• Relativistic trajectories

• Gravitational waves

Part 4 Relativity and Space–Time

Chapter 11 Metric Spaces and Geodesic Motion

11.1 Manifolds and metric tensors

11.2 Derivative of a tensor

11.3 Geodesic curves in configuration space

11.4 Geodesic motion

Chapter 12 Relativistic Dynamics

12.1 The special theory

12.2 Lorentz transformations

12.3 Metric structure of Minkowski space

12.4 Relativistic trajectories

12.5 Relativistic dynamics

12.6 Linearly accelerating frames (relativistic)

Chapter 13 The General Theory of Relativity and Gravitation

13.1 Riemann curvature tensor

13.2 The Newtonian correspondence

13.3 Einstein’s field equations

13.4 Schwarzschild space–time

13.5 Kinematic consequences of gravity

13.6 The deflection of light by gravity

13.7 The precession of Mercury’s perihelion

13.8 Orbits near a black hole

13.9 Gravitational waves

Synopsis of 2nd Ed. Chapters

Chapter 1. Physics and Geometry (Sample Chapter)

This chapter has been rearranged relative to the 1st edition to provide a more logical flow of the overarching concepts of geometric mechanics that guide the subsequent chapters.  The central role of coordinate transformations is strengthened, as is the material on rigid-body motion with expanded examples.

Chapter 2. Lagrangian Mechanics (Sample Chapter)

Much of the structure and material is retained from the 1st edition while adding two important sections.  The section on applications of Lagrangian mechanics adds many direct examples of the use of Lagrange’s equations of motion.  An additional new section covers the important topic of Lagrange’s undetermined multipliers

Chapter 3. Hamiltonian Dynamics and Phase Space (Sample Chapter)

The importance of Hamiltonian systems and dynamics merits a stand-alone chapter.  The topics from the 1st edition are expanded in this new chapter, including a new section on adiabatic invariants that plays an important role in the development of quantum theory.  Some topics are de-emphasized from the 1st edition, such as general canonical transformations and the symplectic structure of phase space, although the specific transformation to action-angle coordinates is retained and amplified.

Chapter 4. Nonlinear Dynamics and Chaos

The first part of this chapter is retained from the 1st edition with numerous minor corrections and updates of figures.  The second part of the IMD 1st edition, treating Hamiltonian chaos, will be expanded into the new Chapter 5.

Chapter 5. Hamiltonian Chaos

This new stand-alone chapter expands on the last half of Chapter 3 of the IMD 1st edition.  The physical character of Hamiltonian chaos is sufficiently distinct from dissipative chaos that it deserves its own chapter.  It is also a central topic of interest for complex systems that are either conservative or that have integral invariants, such as our N-body solar system that played such an important role in the history of chaos theory beginning with Poincaré.  The new chapter highlights Poincaré’s homoclinic tangle, illustrated by the Chirikov Standard Map.  The Standard Map is an excellent introduction to KAM theory, which is one of the crowning achievements of the theory of dynamical systems by Kolmogorov, Arnold and Moser, connecting to deeper aspects of synchronization and rational resonances that drive the structure of systems as diverse as the rotation of the Moon and the rings of Saturn.  This is also a perfect lead-in to the next chapter on synchronization.  An optional section at the end of this chapter briefly discusses quantum chaos to show how Hamiltonian chaos can be extended into the quantum regime.

Chapter 6. Synchronization

This is an updated version of the IMD 1st ed. chapter.  It has a reduced initial section on coupled linear oscillators, retaining the key ideas about linear eigenmodes but removing some irrelevant details in the 1st edition.  A new section is added that defines and emphasizes the importance of quasi-periodicity.  A new section on the synchronization of chaotic oscillators is added.

Chapter 7. Network Dynamics

This chapter rearranges the structure of the chapter from the 1st edition, moving synchronization on networks earlier to connect from the previous chapter.  The section on diffusion and epidemics is moved to the back of the chapter and expanded in the 2nd edition into two separate sections on these topics, adding new material on discrete matrix approaches to continuous dynamics.

Chapter 8. Neurodynamics and Neural Networks

This chapter is retained from the 1st edition with numerous minor corrections and updates of figures.

Chapter 9. Evolutionary Dynamics

Two new sections are added to this chapter.  A section on game theory and evolutionary stable solutions introduces core concepts of evolutionary dynamics that merge well with the other topics of the chapter such as the pay-off matrix and replicator dynamics.  A new section on nearly neutral networks introduces new types of behavior that occur in high-dimensional spaces which are counterintuitive but important for understanding evolutionary drift.

Chapter 10.  Economic Dynamics

This chapter will be significantly updated relative to the 1st edition.  Most of the sections will be rewritten with improved examples and figures.  Three new sections will be added.  The 1st edition section on consumer market competition will be split into two new sections describing the Cournot duopoly and Pareto optimality in one section, and Walras’ Law and general equilibrium theory in another section.  The concept of the Pareto frontier in economics is becoming an important part of biophysical approaches to population dynamics.  In addition, new trends in economics are drawing from general equilibrium theory, first introduced by Walras in the nineteenth century, but now merging with modern ideas of fixed points and stable and unstable manifolds.  A third new section is added on econophysics, highlighting the distinctions that contrast economic dynamics (phase space dynamical approaches to economics) from the emerging field of econophysics (statistical mechanics approaches to economics).

Chapter 11. Metric Spaces and Geodesic Motion

 This chapter is retained from the 1st edition with several minor corrections and updates of figures.

Chapter 12. Relativistic Dynamics

This chapter is retained from the 1st edition with minor corrections and updates of figures.  More examples will be added, such as invariant mass reconstruction.  The connection between relativistic acceleration and Einstein’s equivalence principle will be strengthened.

Chapter 13. The General Theory of Relativity and Gravitation

This chapter is retained from the 1st edition with minor corrections and updates of figures.  A new section will derive the properties of gravitational waves, given the spectacular success of LIGO and the new field of gravitational astronomy.

Homework Problems:

All chapters will have expanded and updated homework problems.  Many of the homework problems from the 1st edition will remain, but the number of problems at the end of each chapter will be nearly doubled, while removing some of the less interesting or problematic problems.

Bibliography

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019)

The Physics of Life, the Universe and Everything (In One Easy Equation)

Everyone knows that the answer to life, the universe and everything is “42”.  But if it’s the question that you want, then you can either grab a towel and a copy of The Hitchhiker’s Guide to the Galaxy, or you can go into physics and begin the search for yourself.

What you may find is that the question boils down to an extremely simple formula
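$$ \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) $$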

This innocuous-looking equation carries such riddles, such surprises, such unintuitive behavior that it can become the object of study for life.  This equation is called a vector flow equation, and it can be used to capture the essential physics of economies, neurons, ecosystems, networks, and even orbits of photons around black holes.  This equation is to modern dynamics what F = ma was to classical mechanics.  It is the starting point for understanding complex systems.

The Phase Space of Everything

The apparent simplicity of the “flow equation” masks the complexity it contains.  It is a vector equation because each “dimension” is a variable of a complex system.  Many systems of interest may have only a few variables, but ecosystems and economies and social networks may have hundreds or thousands of variables.  Expressed in component format, the flow equation is
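$$ \dot{x}^a = f^a\!\left(x^1, x^2, \ldots, x^n\right) $$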

where the superscript spans the number of variables.  But even this masks all that can happen with such an equation. Each of the functions f^a can be entirely different from each other, and can be any type of function, whether polynomial, rational, algebraic, transcendental or composite, although they must be single-valued.  They are generally nonlinear, and the limitless ways that functions can be nonlinear is where the richness of the flow equation comes from.

The vector flow equation is an ordinary differential equation (ODE) that can be solved for specific trajectories as initial value problems.  A single set of initial conditions defines a unique trajectory.  For instance, the trajectory for a 4-dimensional example is described as the column vector
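$$ \mathbf{x}(t) = \begin{bmatrix} x^1(t) \\ x^2(t) \\ x^3(t) \\ x^4(t) \end{bmatrix} $$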

which is the single-parameter position vector to a point in phase space, also called state space.  The point sweeps through successive configurations as a function of its single parameter—time.  This trajectory is also called an orbit.  In classical mechanics, the focus has tended to be on the behavior of specific orbits that arise from a specific set of initial conditions.  This is the classic “rock thrown from a cliff” problem of introductory physics courses.  However, in modern dynamics, the focus shifts away from individual trajectories to encompass the set of all possible trajectories.

Why is Modern Dynamics part of Physics?

If finding the solutions to the “x-dot equals f” vector flow equation is all there is to do, then this would just be a math problem—the solution of ODEs.  There are plenty of gems for mathematicians to look for, and there is an entire field of study in mathematics called “dynamical systems”, but this would not be “physics”.  Physics as a profession is separate and distinct from mathematics, although the two are sometimes confused.  Physics uses mathematics as its language and as its toolbox, but physics is not mathematics.  Physics is done best when it is done qualitatively—this means with scribbles done on napkins in restaurants or on the back of envelopes while waiting in line. Physics is about recognizing relationships and patterns. Physics is about identifying the limits to scaling properties where the physics changes when scales change. Physics is about the mapping of the simplest possible mathematics onto behavior in the physical world, and recognizing when the simplest possible mathematics is a universal that applies broadly to diverse systems that seem different, but that share the same underlying principles.

So, granted that solving ODEs is not physics, there is still a tremendous amount of good physics that can be done by solving ODEs. ODE solvers become the modern physicist’s experimental workbench, providing data output from numerical experiments that can test the dependence on parameters in ways that real-world experiments might not be able to access. Physical intuition can be built on such simulations as the engaged physicist begins to “understand” how the system behaves and can explain what will happen as the values of parameters are changed.
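As a minimal sketch of this numerical workbench (not taken from the book, and using the van der Pol oscillator simply as a stand-in for any two-dimensional flow), a few lines of Python with scipy are enough to sweep a parameter and compare the resulting trajectories:

import numpy as np
from scipy.integrate import solve_ivp
from matplotlib import pyplot as plt

def vanderpol(t, x, mu):
    # the flow f(x) for the van der Pol oscillator
    return [x[1], mu*(1 - x[0]**2)*x[1] - x[0]]

t_eval = np.linspace(0, 40, 4000)
for mu in [0.1, 1.0, 4.0]:        # three numerical "experiments" at different parameter values
    sol = solve_ivp(vanderpol, (0, 40), [1.0, 0.0], t_eval=t_eval, args=(mu,))
    plt.plot(sol.y[0], sol.y[1], label='mu = '+str(mu))
plt.xlabel('x1')
plt.ylabel('x2')
plt.legend()
plt.show()

Each choice of the parameter mu is a separate numerical experiment, and watching how the orbit changes with mu is exactly the kind of intuition-building that the solver makes possible.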

In the following sections, three examples of modern dynamics are introduced with a preliminary study, including Python code. These examples are galactic dynamics, synchronized networks and ecosystems. Despite their very different natures, their descriptions using dynamical flows share common features and illustrate the beauty and depth of behavior that can be explored with simple equations.

Galactic Dynamics

One example of the power and beauty of the vector flow equation and its set of all solutions in phase space is called the Hénon-Heiles model of the motion of a star within a galaxy.  Of course, this is a terribly complicated problem that involves tens of billions of stars, but if you average over the gravitational potential of all the other stars, and throw in a couple of conservation laws, the resulting potential can look surprisingly simple.  The motion in the plane of this galactic potential takes two configuration coordinates (x, y) with two associated momenta (px, py) for a total of four dimensions.  The flow equations in four-dimensional phase space are simply

Fig. 1 The 4-dimensional phase space flow equations of a star in a galaxy. The terms in light blue are a simple two-dimensional harmonic oscillator. The terms in magenta are the nonlinear contributions from the stars in the galaxy.

where the terms in the light blue box describe a two-dimensional simple harmonic oscillator (SHO), which is a linear oscillator, modified by the terms in the magenta box that represent the nonlinear galactic potential.  The orbits of this Hamiltonian system are chaotic, and because there is no dissipation in the model, a single orbit will continue forever within certain ranges of phase space governed by energy conservation, but never quite repeating.
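Written out explicitly, these are the four equations implemented in the flow_deriv function of Hamilton4D.py below (case 1):

dx/dt = px
dy/dt = py
dpx/dt = −x − 2εxy
dpy/dt = −y − ε(x² − y²)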

Fig. 2 Two-dimensional Poincaré section of sets of trajectories in four-dimensional phase space for the Hénon-Heiles galactic dynamics model. The perturbation parameter is ε = 0.3411 and the energy E = 1.

Hamilton4D.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Hamilton4D.py
Created on Wed Apr 18 06:03:32 2018

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from scipy import integrate
from matplotlib import pyplot as plt
from matplotlib import cm
import time
import os

plt.close('all')

# model_case 1 = Heiles
# model_case 2 = Crescent
print(' ')
print('Hamilton4D.py')
print('Case: 1 = Heiles')
print('Case: 2 = Crescent')
model_case = int(input('Enter the Model Case (1-2)'))

if model_case == 1:
    E = 1       # Heiles: 1, 0.3411   Crescent: 0.05, 1
    epsE = 0.3411   # 3411
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -x - epsE*(2*x*y)
        d = -y - epsE*(x**2 - y**2)
        return[a,b,c,d]
else:
    E = .1       #   Crescent: 0.1, 1
    epsE = 1   
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -(epsE*(y-2*x**2)*(-4*x) + x)
        d = -(y-epsE*2*x**2)
        return[a,b,c,d]
    
prms = np.sqrt(E)
pmax = np.sqrt(2*E)    
            
# Potential Function
if model_case == 1:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(x**2*y - 0.33333*y**3) 
else:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(2*x**4 - 2*x**2*y) 

fig = plt.figure(1)
contr = plt.contourf(V,100, cmap=cm.coolwarm, vmin = 0, vmax = 10)
fig.colorbar(contr, shrink=0.5, aspect=5)    
fig = plt.show()

repnum = 250
mulnum = 64/repnum

np.random.seed(1)
for reploop  in range(repnum):
    px1 = 2*(np.random.random((1))-0.499)*pmax
    py1 = np.sign(np.random.random((1))-0.499)*np.real(np.sqrt(2*(E-px1**2/2)))
    xp1 = 0
    yp1 = 0
    
    x_y_z_w0 = [xp1, yp1, px1, py1]
    
    tspan = np.linspace(1,1000,10000)
    x_t = integrate.odeint(flow_deriv, x_y_z_w0, tspan)
    siztmp = np.shape(x_t)
    siz = siztmp[0]

    if reploop % 50 == 0:
        plt.figure(2)
        lines = plt.plot(x_t[:,0],x_t[:,1])
        plt.setp(lines, linewidth=0.5)
        plt.show()
        time.sleep(0.1)
        #os.system("pause")

    y1 = x_t[:,0]
    y2 = x_t[:,1]
    y3 = x_t[:,2]
    y4 = x_t[:,3]
    
    py = np.zeros(shape=(2*repnum,))
    yvar = np.zeros(shape=(2*repnum,))
    cnt = -1
    last = y1[1]
    for loop in range(2,siz):
        if (last < 0)and(y1[loop] > 0):
            cnt = cnt+1
            del1 = -y1[loop-1]/(y1[loop] - y1[loop-1])
            py[cnt] = y4[loop-1] + del1*(y4[loop]-y4[loop-1])
            yvar[cnt] = y2[loop-1] + del1*(y2[loop]-y2[loop-1])
            last = y1[loop]
        else:
            last = y1[loop]
 
    plt.figure(3)
    lines = plt.plot(yvar,py,'o',ms=1)
    plt.show()
    
if model_case == 1:
    plt.savefig('Heiles')
else:
    plt.savefig('Crescent')
    

Networks, Synchronization and Emergence

A central paradigm of nonlinear science is the emergence of patterns and organized behavior from seemingly random interactions among underlying constituents.  Emergent phenomena are among the most awe inspiring topics in science.  Crystals are emergent, forming slowly from solutions of reagents.  Life is emergent, arising out of the chaotic soup of organic molecules on Earth (or on some distant planet).  Intelligence is emergent, and so is consciousness, arising from the interactions among billions of neurons.  Ecosystems are emergent, based on competition and symbiosis among species.  Economies are emergent, based on the transfer of goods and money spanning scales from the local bodega to the global economy.

One of the common underlying properties of emergence is the existence of networks of interactions.  Networks and network science are topics of great current interest driven by the rise of the World Wide Web and social networks.  But networks are ubiquitous and have long been a topic of research into complex and nonlinear systems.  Networks provide a scaffold for understanding many emergent systems, allowing one to think of isolated elements, like molecules or neurons, that interact with many others, like the neighbors in a crystal or distant synaptic connections.

From the point of view of modern dynamics, the state of a node can be a variable or a “dimension” and the interactions among links define the functions of the vector flow equation.  Emergence is then something that “emerges” from the dynamical flow as many elements interact through complex networks to produce simple or emergent patterns.

Synchronization is a form of emergence that happens when lots of independent oscillators, each vibrating at their own personal frequency, are coupled together to push and pull on each other, entraining all the individual frequencies into one common global oscillation of the entire system.  Synchronization plays an important role in the solar system, explaining why the Moon always shows one face to the Earth, why Saturn’s rings have gaps, and why asteroids are mainly kept away from colliding with the Earth.  Synchronization plays an even more important function in biology where it coordinates the beating of the heart and the functioning of the brain.

One of the most dramatic examples of synchronization is the Kuramoto synchronization phase transition. This occurs when a large set of individual oscillators with differing natural frequencies interact with each other through a weak nonlinear coupling.  For small coupling, all the individual nodes oscillate at their own frequency.  But as the coupling increases, there is a sudden coalescence of all the frequencies into a single common frequency.  This mechanical phase transition, called the Kuramoto transition, has many of the properties of a thermodynamic phase transition, including a solution that utilizes mean field theory.

Fig. 3 The Kuramoto model for the nonlinear coupling of N simple phase oscillators. The term in light blue is the simple phase oscillator. The term in magenta is the global nonlinear coupling that connects each oscillator to every other.
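In equation form (as implemented in the coupleN function of Kuramoto.py below), each phase evolves as

dθk/dt = ωk + g Σj sin(θj − θk)

where ωk is the natural frequency of oscillator k, g is the coupling coefficient of each link, and the sum runs over all the oscillators linked to oscillator k.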

The simulation of 20 Poincaré phase oscillators with global coupling is shown in Fig. 4 as a function of increasing coupling coefficient g. The original individual frequencies are spread randomly. The oscillators with similar frequencies are the first to synchronize, forming small clumps that then synchronize with other clumps of oscillators, until all oscillators are entrained to a single compromise frequency. The Kuramoto phase transition is not sharp in this case because the value of N = 20 is too small. If the simulation is run for 200 oscillators, there is a sudden transition from unsynchronized to synchronized oscillation at a threshold value of g.

Fig. 4 The Kuramoto model for 20 Poincaré oscillators showing the frequencies as a function of the coupling coefficient.

The Kuramoto phase transition is one of the most important fundamental examples of modern dynamics because it illustrates many facets of nonlinear dynamics in a very simple way. It highlights the importance of nonlinearity, the simplification of phase oscillators, the use of mean field theory, the underlying structure of the network, and the example of a mechanical analog to a thermodynamic phase transition. It also has analytical solutions because of its simplicity, while still capturing the intrinsic complexity of nonlinear systems.

Kuramoto.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat May 11 08:56:41 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

# https://www.python-course.eu/networkx.php
# https://networkx.github.io/documentation/stable/tutorial.html
# https://networkx.github.io/documentation/stable/reference/functions.html

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import networkx as nx
from UserFunction import linfit
import time

tstart = time.time()

plt.close('all')

Nfac = 20   # 25
N = 20      # 50
width = 0.2

# function: omegout, yout = coupleN(G)
def coupleN(G):

    # function: yd = flow_deriv(x_y)
    def flow_deriv(y,t0):
                
        yp = np.zeros(shape=(N,))
        for omloop  in range(N):
            temp = omega[omloop]
            linksz = G.nodes[omloop]['numlink']
            for cloop in range(linksz):
                cindex = G.nodes[omloop]['link'][cloop]
                g = G.nodes[omloop]['coupling'][cloop]

                temp = temp + g*np.sin(y[cindex]-y[omloop])
            
            yp[omloop] = temp
        
        yd = np.zeros(shape=(N,))
        for omloop in range(N):
            yd[omloop] = yp[omloop]
        
        return yd
    # end of function flow_deriv(x_y)

    mnomega = 1.0
    
    for nodeloop in range(N):
        omega[nodeloop] = G.nodes[nodeloop]['element']
    
    x_y_z = omega    
    
    # Settle-down Solve for the trajectories
    tsettle = 100
    t = np.linspace(0, tsettle, tsettle)
    x_t = integrate.odeint(flow_deriv, x_y_z, t)
    x0 = x_t[tsettle-1,0:N]
    
    t = np.linspace(1,1000,1000)
    y = integrate.odeint(flow_deriv, x0, t)
    siztmp = np.shape(y)
    sy = siztmp[0]
        
    # Fit the frequency
    m = np.zeros(shape = (N,))
    w = np.zeros(shape = (N,))
    mtmp = np.zeros(shape=(4,))
    btmp = np.zeros(shape=(4,))
    for omloop in range(N):
        
        if np.remainder(sy,4) == 0:
            mtmp[0],btmp[0] = linfit(t[0:sy//2],y[0:sy//2,omloop]);
            mtmp[1],btmp[1] = linfit(t[sy//2+1:sy],y[sy//2+1:sy,omloop]);
            mtmp[2],btmp[2] = linfit(t[sy//4+1:3*sy//4],y[sy//4+1:3*sy//4,omloop]);
            mtmp[3],btmp[3] = linfit(t,y[:,omloop]);
        else:
            sytmp = 4*(sy//4)    # integer multiple of 4 so the slices below use integer indices
            mtmp[0],btmp[0] = linfit(t[0:sytmp//2],y[0:sytmp//2,omloop]);
            mtmp[1],btmp[1] = linfit(t[sytmp//2+1:sytmp],y[sytmp//2+1:sytmp,omloop]);
            mtmp[2],btmp[2] = linfit(t[sytmp//4+1:3*sytmp//4],y[sytmp//4+1:3*sytmp//4,omloop]);
            mtmp[3],btmp[3] = linfit(t[0:sytmp],y[0:sytmp,omloop]);

        
        #m[omloop] = np.median(mtmp)
        m[omloop] = np.mean(mtmp)
        
        w[omloop] = mnomega + m[omloop]
     
    omegout = m
    yout = y
    
    return omegout, yout
    # end of function: omegout, yout = coupleN(G)



Nlink = N*(N-1)//2      
omega = np.zeros(shape=(N,))
omegatemp = width*(np.random.rand(N)-1)
meanomega = np.mean(omegatemp)
omega = omegatemp - meanomega
sto = np.std(omega)

nodecouple = nx.complete_graph(N)

lnk = np.zeros(shape = (N,), dtype=int)
for loop in range(N):
    nodecouple.nodes[loop]['element'] = omega[loop]
    nodecouple.nodes[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.nodes[loop]['numlink'] = np.size(list(nx.neighbors(nodecouple,loop)))
    lnk[loop] = np.size(list(nx.neighbors(nodecouple,loop)))

avgdegree = np.mean(lnk)
mnomega = 1

facval = np.zeros(shape = (Nfac,))
yy = np.zeros(shape=(Nfac,N))
xx = np.zeros(shape=(Nfac,))
for facloop in range(Nfac):
    print(facloop)
    facoef = 0.2

    fac = facoef*(16*facloop/(Nfac))*(1/(N-1))*sto/mnomega
    for nodeloop in range(N):
        nodecouple.nodes[nodeloop]['coupling'] = np.zeros(shape=(lnk[nodeloop],))
        for linkloop in range (lnk[nodeloop]):
            nodecouple.nodes[nodeloop]['coupling'][linkloop] = fac

    facval[facloop] = fac*avgdegree
    
    omegout, yout = coupleN(nodecouple)                           # Here is the subfunction call for the flow

    for omloop in range(N):
        yy[facloop,omloop] = omegout[omloop]

    xx[facloop] = facval[facloop]

plt.figure(1)
lines = plt.plot(xx,yy)
plt.setp(lines, linewidth=0.5)
plt.show()

elapsed_time = time.time() - tstart
print('elapsed time = ',format(elapsed_time,'.2f'),'secs')

The Web of Life

Ecosystems are among the most complex systems on Earth.  The complex interactions among hundreds or thousands of species may lead to steady homeostasis in some cases, to growth and collapse in other cases, and to oscillations or chaos in yet others.  But the definition of species can be broad and abstract, referring to businesses and markets in economic ecosystems, or to cliques and acquaintances in social ecosystems, among many other examples.  These systems are governed by the laws of evolutionary dynamics that include fitness and survival as well as adaptation.

The dimensionality of the dynamical spaces for these systems extends to hundreds or thousands of dimensions—far too complex to visualize when thinking in four dimensions is already challenging.  Yet there are shared principles and common behaviors that emerge even here.  Many of these can be illustrated in a simple three-dimensional system that is represented by a triangular simplex that can be easily visualized, and then generalized back to ultra-high dimensions once they are understood.

A simplex is a closed (N-1)-dimensional geometric figure that describes a zero-sum game (game theory is an integral part of evolutionary dynamics) among N competing species.  For instance, a two-simplex is a triangle that captures the dynamics among three species.  Each vertex of the triangle represents the situation when the entire ecosystem is composed of a single species.  Anywhere inside the triangle represents the situation when all three species are present and interacting.

A classic model of interacting species is the replicator equation. It allows for a fitness-based proliferation and for trade-offs among the individual species. The replicator dynamics equations are shown in Fig. 5.

Fig. 5 Replicator dynamics has a surprisingly simple form, but with surprisingly complicated behavior. The key elements are the fitness and the payoff matrix. The fitness relates to how likely the species will survive. The payoff matrix describes how one species gains at the loss of another (although symbiotic relationships also occur).
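In symbols (as implemented in the flow_deriv function of Trirep.py below), the replicator equations are

dxi/dt = xi ( fi − φ )        with    fi = Σj Aij xj    and    φ = Σi fi xi

where xi is the fraction of the population belonging to species i, A is the payoff matrix, fi is the fitness of species i, and φ is the average fitness of the whole population.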

The population dynamics on the 2D simplex are shown in Fig. 6 for several different pay-off matrices. The matrix values are shown in color and help interpret the trajectories. For instance, the simplex on the upper-right shows a fixed point center. This reflects the antisymmetric character of the pay-off matrix around the diagonal. The stable spiral on the lower-left has a nearly antisymmetric pay-off matrix, but with unequal off-diagonal magnitudes. The other two cases show central saddle points with stable fixed points on the boundary. A very large variety of behaviors are possible for this very simple system. The Python program is shown in Trirep.py.

Fig. 6 Payoff matrix and population simplex for four random cases: Upper left is an unstable saddle. Upper right is a center. Lower left is a stable spiral. Lower right is a marginal case.

Trirep.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
trirep.py
Created on Thu May  9 16:23:30 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def tripartite(x,y,z):

    sm = x + y + z
    xp = x/sm
    yp = y/sm
    
    f = np.sqrt(3)/2
    
    y0 = f*xp
    x0 = -0.5*xp - yp + 1;
    
    plt.figure(2)
    lines = plt.plot(x0,y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1],[0, 0],'k',linewidth=1)
    plt.plot([0, 0.5],[0, f],'k',linewidth=1)
    plt.plot([1, 0.5],[0, f],'k',linewidth=1)
    plt.show()
    

def solve_flow(y,tspan):
    def flow_deriv(y, t0):
    #"""Compute the time-derivative ."""
    
        f = np.zeros(shape=(N,))
        for iloop in range(N):
            ftemp = 0
            for jloop in range(N):
                ftemp = ftemp + A[iloop,jloop]*y[jloop]
            f[iloop] = ftemp
        
        phitemp = phi0          # Can adjust this from 0 to 1 to stabilize (but Nth population is no longer independent)
        for loop in range(N):
            phitemp = phitemp + f[loop]*y[loop]
        phi = phitemp
        
        yd = np.zeros(shape=(N,))
        for loop in range(N-1):
            yd[loop] = y[loop]*(f[loop] - phi);
        
        if np.abs(phi0) < 0.01:             # average fitness maintained at zero
            yd[N-1] = y[N-1]*(f[N-1]-phi);
        else:                                     # non-zero average fitness
            ydtemp = 0
            for loop in range(N-1):
                ydtemp = ydtemp - yd[loop]
            yd[N-1] = ydtemp
       
        return yd

    # Solve for the trajectories
    t = np.linspace(0, tspan, 701)
    x_t = integrate.odeint(flow_deriv,y,t)
    return t, x_t

# model_case 1 = zero diagonal
# model_case 2 = zero trace
# model_case 3 = asymmetric (zero trace)
print(' ')
print('trirep.py')
print('Case: 1 = antisymm zero diagonal')
print('Case: 2 = antisymm zero trace')
print('Case: 3 = random')
model_case = int(input('Enter the Model Case (1-3)'))

N = 3
asymm = 3      # 1 = zero diag (replicator eqn)   2 = zero trace (autocatylitic model)  3 = random (but zero trace)
phi0 = 0.001            # average fitness (positive number) damps oscillations
T = 100;


if model_case == 1:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
    A = Atemp            # zero-diagonal antisymmetric payoff matrix

elif model_case == 2:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
        Atemp[yloop,yloop] = 2*(0.5 - np.random.random(1))
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N
        
else:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
        
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N

plt.figure(3)
im = plt.matshow(A,3,cmap='seismic')  # colormap options: hsv, seismic, bwr
cbar = im.figure.colorbar(im)

M = 20
delt = 1/M
ep = 0.01;

tempx = np.zeros(shape = (3,))
for xloop in range(M):
    tempx[0] = delt*(xloop)+ep;
    for yloop in range(M-xloop):
        tempx[1] = delt*yloop+ep
        tempx[2] = 1 - tempx[0] - tempx[1]
        
        x0 = tempx/np.sum(tempx);          # initial populations
        
        tspan = 70
        t, x_t = solve_flow(x0,tspan)
        
        y1 = x_t[:,0]
        y2 = x_t[:,1]
        y3 = x_t[:,2]
        
        plt.figure(1)
        lines = plt.plot(t,y1,t,y2,t,y3)
        plt.setp(lines, linewidth=0.5)
        plt.show()
        plt.ylabel('X Position')
        plt.xlabel('Time')

        tripartite(y1,y2,y3)

Topics in Modern Dynamics

These three examples are just the tip of the iceberg. The topics in modern dynamics are almost numberless. Any system that changes in time is a potential object of study in modern dynamics. Here is a list of a few topics that spring to mind.

Bibliography

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019) (The physics and the derivations of the equations for the examples in this blog can be found here.)

Publication Date for the Second Edition: November 18, 2019

D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018) (The historical origins of the examples in this blog can be found here.)

George Stokes’ Law of Drag

The triumvirate of Cambridge University in the mid-1800’s consisted of three towering figures of mathematics and physics:  George Stokes (1819 – 1903), William Thomson (1824 – 1907) (Lord Kelvin), and James Clerk Maxwell (1831 – 1879).  Their discoveries and methodology changed the nature of natural philosophy, turning it into the subject that today we call physics.  Stokes was the elder, establishing himself as the predominant expert in British mathematical physics, setting the tone for his close friend Thomson (close in age and temperament) as well as the younger Maxwell and many other key figures of 19th century British physics.

            George Gabriel Stokes was born in County Sligo in Ireland as the youngest son of the rector of Skreen parish of the Church of Ireland.  No miraculous stories of his intellectual acumen seem to come from his childhood, as they did for the likes of William Hamilton (1805 – 1865) or George Green (1793 – 1841).  Stokes was a good student, attending school in Skreen, then Dublin and Bristol before entering Pembroke College Cambridge in 1837.  It was towards the end of his time at Cambridge that he emerged as a top mathematics student and as a candidate for Senior Wrangler.

https://upload.wikimedia.org/wikipedia/commons/c/cb/Skreen_Church_-_geograph.org.uk_-_307483.jpg
Church of Ireland church in Skreen, County Sligo, Ireland

The Cambridge Wrangler

Since 1748, the mathematics course at Cambridge University has held a yearly contest to identify the top graduating mathematics student.  The winner of the contest is called the Senior Wrangler, and in the 1800’s the Senior Wrangler received a level of public fame and admiration for intellectual achievement that is somewhat like the fame reserved today for star athletes.  In 1824 the mathematics course was reorganized into the Mathematical Tripos, and the contest became known as the Tripos Exam.  The depth and length of the exam were legendary.  For instance, in 1854 when Edward Routh (1831 – 1907) beat out Maxwell for Senior Wrangler, the Tripos consisted of 16 papers spread over 8 days, totaling over 40 hours and 211 questions.  The winner typically scored less than 50%.  Famous Senior Wranglers include George Airy, John Herschel, Arthur Cayley, Lord Rayleigh, Arthur Eddington, J. E. Littlewood, Peter Guthrie Tait and Joseph Larmor.

Pembroke College, Cambridge

            In his second year at Cambridge, Stokes had begun studying under William Hopkins (1793 – 1866).  It was common for mathematics students to have private tutors to prep for the Tripos exam, and Tripos tutors were sometimes as famous as the Senior Wranglers themselves, especially if a tutor (like Hopkins) was to have several students win the exam.  George Stokes became Senior Wrangler in 1841, and the same year he won the Smith’s Prize in mathematics.  The Tripos tested primarily on bookwork, while the Smith’s Prize tested on originality.  To achieve top scores on both designated the student as the most capable and creative mathematician of his class.  Stokes was immediately offered a fellowship at Pembroke College allowing him to teach and study whatever he willed.

Part I of the Tripos Exam 1890.

            After Stokes graduated, Hopkins suggested that Stokes study hydrodynamics.  This may have been motivated in part by Hopkins’ own interest in hydraulic problems in geology, but it was also a prescient suggestion, because hydrodynamics was poised for a revolution.

The Early History of Hydrodynamics

Leonardo da Vinci (1452 – 1519) believed that an artist, to capture the essence of a subject, needed to understand its fundamental nature.  Therefore, when he was captivated by the idea of portraying the flow of water, he filled his notebooks with charcoal studies of the whorls and vortices of turbulent torrents and waterfalls.  He was a budding experimental physicist, recording data on the complex phenomenon of hydrodynamics.  Yet Leonardo was no mathematician, and although his understanding of turbulent flow was deep, he did not have the theoretical tools to explain what he saw.  Two centuries later, Daniel Bernoulli (1700 – 1782) provided the first mathematical description of water flowing smoothly in his Hydrodynamica (1738).  However, the modern language of calculus was only beginning to be used at that time, preventing Daniel from providing a rigorous derivation. 

            As for nearly all nascent mathematical theories of the mid 1700’s, whether they be Newtonian dynamics or the calculus of variations or number and graph theory or population dynamics or almost anything, the person who placed the theory on firm mathematical foundations, using modern notions and notations, was Leonhard Euler (1707 – 1783).  In 1752 Euler published a treatise that described the mathematical theory of inviscid flow—meaning flow without viscosity.  Euler’s chief result is
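∂φ/∂t + v²/2 + p/ρ + gz = f(t)        (written here in modern notation)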

where ρ is the density, v is the velocity, p is pressure, z is the height of the fluid and φ is a velocity potential, while f(t) is a stream function that depends only on time.  If the flow is in steady state, the time derivative vanishes, and the stream function is a constant.  The key to the inviscid approximation is the dominance of momentum in fast flow, as opposed to creeping flow in which viscosity dominates.  Euler’s equation, which expresses the well-known Bernoulli principle, works well under fast laminar conditions, but under slower flow conditions, internal friction ruins the inviscid approximation.

            The violation of the inviscid flow approximation became one of the important outstanding problems in theoretical physics in the early 1800’s.  For instance, the flow of water around ships’ hulls was a key technological problem in the strategic need for speed under sail.  In addition, understanding the creation and propagation of water waves was critical for the safety of ships at sea.  For the growing empire of the British Isles, built on the power of their navy, the physics of hydrodynamics was more than an academic pursuit, and their archenemy, the French, were leading the way.

The French Analysts

In 1713 when Newton won his priority dispute with Leibniz over the invention of calculus, it had the unintended consequence of setting back British mathematics and physics for over a hundred years.  Perhaps lulled into complacency by their perceived superiority, Cambridge and Oxford continued teaching classical mathematics, and natural philosophy became dogmatic as Newton’s Principia became canon.  Meanwhile Continental mathematical analysis went through a fundamental transformation.  Inspired by Newton’s Principia rather than revering it, mathematicians such as the Swiss-German Leonhard Euler, the Frenchwoman Émilie du Châtelet and the Italian Joseph Lagrange pushed mathematical physics far beyond Newton by developing Leibniz’ methods and notations for calculus.

The mathematicians Newton, Navier and Stokes

            By the early 1800’s, the leading mathematicians of Europe were in the French school led by Pierre-Simon Laplace along with Joseph Fourier, Siméon Denis Poisson and Augustin-Louis Cauchy.  In their hands, functional analysis was going through rapid development, both theoretically and applied, far surpassing British mathematics.  It was by reading the French analysts in the 1820’s that the Englishman George Green finally helped bring British mathematics back up to speed.

            One member of the French school was the French engineer Claude-Louis Navier (1785 – 1836).  He was educated at the Ecole Polytechnique and the School for Roads and Bridges where he became one of the leading architects for bridges in France.  In addition to his engineering tasks, he also was involved in developing principles of work and kinetic energy that aided the later work of Coriolis, who was one of the first physicists to recognize the explicit interplay between kinetic energy and potential energy.  One of Navier’s specialties was hydraulic engineering, and he edited a new edition of a classic work on hydraulics.  In the process, he became aware of serious deficiencies in the theoretical treatment of creeping flow, especially with regards to dissipation.  By adopting a molecular approach championed by Poisson, including appropriate boundary conditions, he derived a correction to the Euler flow equations that included a new term with a new material property of viscosity

Navier-Stokes Equation
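In modern vector notation (the form Navier actually wrote was expressed through his molecular constants, but the physical content is the same), the incompressible Navier-Stokes equation reads

ρ ( ∂v/∂t + (v·∇)v ) = −∇p + μ∇²v

where μ is the viscosity.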

Navier published his new flow equation in 1823, but the publication was followed by years of nasty in-fighting as his assumptions were assaulted by Poisson and others.  This acrimony is partly to blame for why Navier was not hailed alone as the discoverer of this equation, which today bears the name “Navier-Stokes Equation”.

Stokes’ Hydrodynamics

Despite the lead of the French mathematicians over the British in mathematical rigor, they were also bogged down by their insistence on mechanistic models that operated through microscale action-reaction forces.  This was true for their theories of elasticity and hydrodynamics as well as the luminiferous ether.  George Green in England would change this.  While Green was inspired by French mathematics, he made an important shift in thinking in which the fields became the quantities of interest rather than molecular forces.  Differential equations describing macroscale phenomena could be “true” independently of any microscale mechanics.  His theories on elasticity and light propagation relied on no underlying structure of matter or ether.  Underlying models could change, but the differential equations remained true.  Maxwell’s equations, a pinnacle of 19th-century British mathematical physics, were field equations that required no microscopic models, although Maxwell and others later tried to devise models of the ether.

            George Stokes admired Green and adopted his mathematics and outlook on natural philosophy.  When he turned his attention to hydrodynamic flow, he adopted a continuum approach that initially did not rely on molecular interactions to explain viscosity and drag.  He replicated Navier’s results, but this time without relying on any underlying microscale physics.  Yet this only took him so far.  To explain some of the essential features of fluid pressures he had to revert to microscopic arguments of isotropy to explain why displacements were linear and why flow at a boundary ceased.  However, once these functional dependences were set, the remainder of the problem was pure continuum mechanics, establishing the Navier-Stokes equation for incompressible flow.  Stokes went on to apply no-slip boundary conditions for fluids flowing through pipes of different geometric cross sections to calculate flow rates as well as pressure drops along the pipe caused by viscous drag.

            Stokes then turned to experimental results to explain why a pendulum slowly oscillating in air lost amplitude due to dissipation.  He reasoned that when the flow of air around the pendulum bob and stiff rod was slow enough the inertial effects would be negligible, simplifying the Navier-Stokes equation.  He calculated the drag force on a spherical object moving slowly through a viscous liquid and obtained the now famous law known as Stokes’ Law of Drag
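F = 6πμRv

(with μ the viscosity, R the radius of the sphere and v its speed)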

in which the drag force increases linearly with speed and is proportional to viscosity.  With dramatic flair, he used his new law to explain why water droplets in clouds float buoyantly until they become big enough to fall as rain.

The Lucasian Chair of Mathematics

There are rare individuals who become especially respected for the breadth and depth of their knowledge.  In our own time, only recently past, Stephen Hawking embodied the ideal of the eminent (almost clairvoyant) scientist pushing his field to the extremes with the deepest understanding, while also being one of the most famous science popularizers of his day as well as an important chronicler of the history of physics.  In his own time, Stokes was held in virtually the same level of esteem. 

            Just as Stephen Hawking and Isaac Newton held the Lucasian Chair of Mathematics at Cambridge, Stokes became the Lucasian chair in 1849 and held the chair until his death in 1903.  He was offered the chair in part because of the prestige he held as first wrangler and Smith’s prize winner, but also because of his imposing grasp of the central fields of his time. The Lucasian Chair of Mathematics at Cambridge is one of the most famous academic chairs in the world.  It was established by Charles II in 1664, and the first Lucasian professor was Isaac Barrow followed by Isaac Newton who held the post for 33 years.  Other famous Lucasian professors were Airy, Babbage, Larmor, Dirac as well as Hawking.  During his tenure, Stokes made central contributions to hydrodynamics (as we have seen), but also the elasticity of solids, the behavior of waves in elastic solids, the diffraction of light, problems in light, gravity, sound, heat, meteorology, solar physics, and chemistry.  Perhaps his most famous contribution was his explanation of fluorescence, for which he won the Rumford Medal.  Certainly, if the Nobel Prize had existed in his time, he would have been a Nobel Laureate.

Derivation of Stokes’ Law

The flow field of an incompressible fluid around a smooth spherical object has zero divergence and satisfies Laplace’s equation.  This allows the stream velocities to take the form in spherical coordinates
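In the standard convention for the Stokes stream function in spherical coordinates (the convention assumed here), these are

vr = (1/(r² sin θ)) ∂ψ/∂θ        vθ = −(1/(r sin θ)) ∂ψ/∂r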

where the velocity components are defined in terms of the stream function ψ.   The partial derivatives of pressure satisfy the equations

where the second-order operator is

The vanishing of the Laplacian of the stream function

allows the function to take the form

The no-slip boundary condition on the surface of the sphere, as well as the asymptotic velocity field far from the sphere taking the form v·cosθ, gives the solution

Using this expression in the first equations yields the velocities, pressure and shear

The force on the sphere is obtained by integrating the pressure and the shear stress over the surface of the sphere.  The two contributions are
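Fpressure = 2πμRv        and        Fshear = 4πμRv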

Adding these together gives the final expression for Stokes’ Law
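F = Fpressure + Fshear = 6πμRv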

where two thirds of the force is caused by the shear stress and one third by the pressure.

Stokes flow around a sphere. On the left is the pressure. On the right is the y-component of the flow velocity.

Stokes Timeline

  • 1819 – Born County Sligo Parish of Skreen
  • 1837 – Entered Pembroke College Cambridge
  • 1841 – Graduation, Senior Wrangler, Smith’s Prize, Fellow of Pembroke
  • 1845 – Viscosity
  • 1845 – Viscoelastic solid and the luminiferous ether
  • 1845 – Ether drag
  • 1846 – Review of hydrodynamics (including French references)
  • 1847 – Water waves
  • 1847 – Expansion in periodic series (Fourier)
  • 1848 – Jelly theory of the ether
  • 1849 – Lucasian Professorship Cambridge
  • 1849 – Geodesy and Clairaut’s theorem
  • 1849 – Dynamical theory of diffraction
  • 1850 – Damped pendulum, explanation of clouds (water droplets)
  • 1850 – Haidinger’s brushes
  • 1850 – Letter from Kelvin (Thomson) to Stokes on a theorem in vector calculus
  • 1852 – Stokes’ 4 polarization parameters
  • 1852 – Fluorescence and Rumford medal
  • 1854 – Stokes sets “Stokes theorem” for the Smith’s Prize Exam
  • 1857 – Marries
  • 1857 – Effect of wind on sound intensity
  • 1861 – Hankel publishes “Stokes theorem”
  • 1880 – Form of highest waves
  • 1885 – President of Royal Society
  • 1887 – Member of Parliament
  • 1889 – Knighted as baronet by Queen Victoria
  • 1893 – Copley Medal
  • 1903 – Dies
  • 1945 – Cartan establishes modern form of Stokes’ theorem using differential forms

Further Reading

Darrigol, O., Worlds of Flow: A History of Hydrodynamics from the Bernoullis to Prandtl (Oxford University Press: Oxford, 2005).  This is an excellent technical history of hydrodynamics.

How Number Theory Protects You from the Chaos of the Cosmos

We are exceedingly fortunate that the Earth lies in the Goldilocks zone.  This zone is the range of orbital radii of a planet around its sun for which water can exist in a liquid state.  Water is the universal solvent, and it may be a prerequisite for the evolution of life.  If we were too close to the sun, water would evaporate as steam.  And if we were too far, it would be locked in perpetual ice.  As it is, the Earth has had wild swings in its surface temperature.  There was once a time, more than 650 million years ago, when the entire Earth’s surface froze over.  Fortunately, the liquid oceans remained liquid, and life that already existed on Earth was able to persist long enough to get to the Cambrian explosion.  Conversely, Venus may once have had liquid oceans and maybe even nascent life, but too much carbon dioxide turned the planet into an oven and boiled away its water (a fate that may await our own Earth if we aren’t careful).  What has saved us so far is the stability of our orbit, our steady distance from the Sun that keeps our water liquid and life flourishing.  Yet it did not have to be this way. 

The regions of regular motion associated with irrational numbers act as if they were a barrier, restricting the range of chaotic orbits and protecting other nearby orbits from the chaos.

Our solar system is a many-body problem.  It consists of three large gravitating bodies (Sun, Jupiter, Saturn) and several minor ones (such as Earth).   Jupiter does influence our orbit, and if it were only a few times more massive than it actually is, then our orbit would become chaotic, varying in distance from the sun in unpredictable ways.  And if Jupiter were only about 20 times bigger than it actually is, there is a possibility that it would perturb the Earth’s orbit so strongly that it could eject the Earth from the solar system entirely, sending us flying through interstellar space, where we would slowly cool until we became a permanent ice ball.  What can protect us from this terrifying fate?  What keeps our orbit stable despite the fact that we inhabit a many-body solar system?  The answer is number theory!

The Most Irrational Number

What is the most irrational number you can think of? 

Is it: pi = 3.1415926535897932384626433 ? 

Or Euler’s number: e = 2.7182818284590452353602874 ?

How about: sqrt(3) = 1.73205080756887729352744634 ?

These are all perfectly good irrational numbers.  But how do you choose the “most irrational” number?  The answer is fairly simple.  The most irrational number is the one that is least well approximated by a ratio of integers.  For instance, it is possible to get close to pi through the ratio 22/7 = 3.1428 which differs from pi by only 4 parts in ten thousand.  Or Euler’s number 87/32 = 2.7188 differs from e by only 2 parts in ten thousand.  Yet 87 and 32 are much bigger than 22 and 7, so it may be said that e is more irrational than pi, because it takes ratios of larger integers to get a good approximation.  So is there a “most irrational” number?  The answer is yes.  The Golden Ratio.

The Golden ratio can be defined in many ways, but its most common expression is given by
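φ = (1 + √5)/2 = 1.6180339887…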

It is the hardest number to approximate with a ratio of small integers.  For instance, to get a number that is as close as one part in ten thousand to the golden mean takes the ratio 89/55.  This result may seem obscure, but there is a systematic way to find the ratios of integers that approximate an irrational number: these best rational approximations are known as convergents, and they are obtained from continued fractions.
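As a minimal sketch (not part of the original discussion), a few lines of Python are enough to generate the convergents of any number and to check the ratios quoted above:

from fractions import Fraction
import math

def convergents(x, n):
    # expand x into its first n continued-fraction terms
    terms = []
    val = x
    for _ in range(n):
        a = math.floor(val)
        terms.append(a)
        frac = val - a
        if frac == 0:
            break
        val = 1/frac
    # build the convergents from the partial quotients
    convs = []
    h0, h1 = 1, terms[0]
    k0, k1 = 0, 1
    convs.append(Fraction(h1, k1))
    for a in terms[1:]:
        h0, h1 = h1, a*h1 + h0
        k0, k1 = k1, a*k1 + k0
        convs.append(Fraction(h1, k1))
    return convs

phi = (1 + 5**0.5)/2
print(convergents(math.pi, 4))    # 3, 22/7, 333/106, 355/113
print(convergents(phi, 10))       # ratios of successive Fibonacci numbers, ending at 89/55

The convergents of the golden mean are ratios of successive Fibonacci numbers, which is why they close in on φ so slowly.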

Continued fractions were invented by John Wallis in 1695, introduced in his book Opera Mathematica.  The continued fraction for pi is
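π = 3 + 1/(7 + 1/(15 + 1/(1 + 1/(292 + …))))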

An alternate form of displaying this continued fraction is with the expression
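π = [3; 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, …]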

The irrational character of pi is captured by the seemingly random integers in this string. However, there can be regular structure in irrational numbers. For instance, a different continued fraction for pi is

that has a surprisingly simple repeating pattern.

The continued fraction for the golden mean has an especially simple repeating form
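φ = 1 + 1/(1 + 1/(1 + 1/(1 + …)))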

or
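φ = [1; 1, 1, 1, 1, …]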

This continued fraction converges more slowly than the continued fraction of any other number. Hence, the Golden Ratio can be considered, using this criterion, to be the most irrational number.

If the Golden Ratio is the most irrational number, how does that save us from the chaos of the cosmos? The answer to this question is KAM!

Kolmogorov, Arnold and Moser: KAM Theory

KAM is an acronym made from the first initials of three towering mathematicians of the 20th century: Andrey Kolmogorov (1903 – 1987), his student Vladimir Arnold (1937 – 2010), and Jürgen Moser (1928 – 1999).

In 1954, Kolmogorov, considered to be the greatest living mathematician at that time, was invited to give the plenary lecture at a mathematics conference. To the surprise of the conference organizers, he chose to talk on what seemed like a very mundane topic: the question of the stability of the solar system. This had been the topic which Poincaré had attempted to solve in 1890 when he first stumbled on chaotic dynamics. The question had remained open, but the general consensus was that the many-body nature of the solar system made it intrinsically unstable, even for only three bodies.

Against all expectations, Kolmogorov proposed that despite the general chaotic behavior of the three–body problem, there could be “islands of stability” which were protected from chaos, allowing some orbits to remain regular even while other nearby orbits were highly chaotic. He even outlined an approach to a proof of his conjecture, though he had not carried it through to completion.

The proof of Kolmogorov’s conjecture was supplied over the next 10 years through the work of the German mathematician Jürgen Moser and by Kolmogorov’s former student Vladimir Arnold. The proof hinged on the successive ratios of integers that approximate irrational numbers. With this work KAM showed that indeed some orbits are actually protected from neighboring chaos by relying on the irrationality of the ratio of orbital periods.

Resonant Ratios

Let’s go back to the simple model of our solar system that consists of only three bodies: the Sun, Jupiter and Earth. The period of Jupiter’s orbit is 11.86 years, but if it were instead exactly 12 years, then its period would be in a 12:1 ratio with the Earth’s period. This ratio of integers is called a “resonance”, although in this case it is fairly mismatched. But if this ratio were a ratio of small integers like 5:3, it would mean that Jupiter traveled around the sun 3 times in 15 years while the Earth went around 5 times. And every 15 years, the two planets would align. This kind of resonance with ratios of small integers creates a strong gravitational perturbation that alters the orbit of the smaller planet. If the perturbation is strong enough, it could disrupt the Earth’s orbit, creating a chaotic path that might ultimately eject the Earth completely from the solar system.

What KAM discovered is that as the resonance ratio becomes a ratio of large integers, like 87:32, then the planets have a hard time aligning, and the perturbation remains small. A surprising part of this theory is that a nearby orbital ratio might be 5:2 = 2.5, which is not very different from 87:32 ≈ 2.72. Yet the 5:2 resonance can produce strong chaos, while the 87:32 resonance is almost immune. This way, it is possible to have both chaotic orbits and regular orbits coexisting in the same dynamical system. An irrational orbital ratio protects the regular orbits from chaos. The next question is, how irrational does the orbital ratio need to be to guarantee safety?

You probably already guessed the answer to this question–the answer must be the Golden Ratio. If this is indeed the most irrational number, then it cannot be approximated very well with ratios of small integers, and this is indeed the case. In a three-body system, the most stable orbital ratio would be a ratio of 1.618034. But the more general question of what is “irrational enough” for an orbit to be stable against a given perturbation is much harder to answer. This is the field of Diophantine Analysis, which addresses other questions as well, such as Fermat’s Last Theorem.

KAM Twist Map

The dynamics of three-body systems are hard to visualize directly, so there are tricks that help bring the problem into perspective. The first trick, invented by Henri Poincaré, is called the first return map (or the Poincaré section). This is a way of reducing the dimensionality of the problem by one dimension. But for three bodies, even if they are all in a plane, this still can be complicated. Another trick, called the restricted three-body problem, is to assume that there are two large masses and a third small mass. This way, the dynamics of the two-body system is unaffected by the small mass, so all we need to do is focus on the dynamics of the small body. This brings the dynamics down to two dimensions (the position and momentum of the third body), which is very convenient for visualization, but the dynamics still need solutions to differential equations. So the final trick is to replace the differential equations with simple difference equations that are solved iteratively.

A simple discrete iterative map that captures the essential behavior of the three-body problem begins with action-angle variables that are coupled through a perturbation. Variations on this model have several names: the Twist Map, the Chirikov Map and the Standard Map. The essential mapping is
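J_{n+1} = J_n + ε sin θ_n
θ_{n+1} = θ_n + J_{n+1}        (mod 2π)

These are exactly the two update lines that appear in the Python code below, where rnew plays the role of the action and thnew the angle.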

where J is an action variable (like angular momentum) paired with the angle variable. Initial conditions for the action and the angle are selected, and then all later values are obtained by iteration. The perturbation parameter is given by ε. If ε = 0 then all orbits are perfectly regular and circular. But as the perturbation increases, the open orbits split up into chains of closed (periodic) orbits. As the perturbation increases further, chaotic behavior emerges. The situation for ε = 0.9 is shown in the figure below. There are many regular periodic orbits as well as open orbits. Yet there are simultaneously regions of chaotic behavior. This figure shows an intermediate case where regular orbits can coexist with chaotic ones. The key is the orbital period ratio. For orbital ratios that are sufficiently irrational, the orbits remain open and regular. But for orbital ratios that are ratios of small integers, the perturbation is strong enough to drive the dynamics into chaos.

Arnold Twist Map (also known as a Chirikov map) for ε = 0.9 showing the chaos that has emerged at the hyperbolic point, but there are still open orbits that are surprisingly circular (unperturbed) despite the presence of strongly chaotic orbits nearby.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Oct. 2, 2019
@author: nolte
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
plt.close('all')

eps = 0.9

np.random.seed(2)
plt.figure(1)
for eloop in range(0,50):

    rlast = np.pi*(1.5*np.random.random()-0.5)
    thlast = 2*np.pi*np.random.random()

    orbit = int(200*(rlast+np.pi/2))
    rplot = np.zeros(shape=(orbit,))
    thetaplot = np.zeros(shape=(orbit,))
    x = np.zeros(shape=(orbit,))
    y = np.zeros(shape=(orbit,))    
    for loop in range(0,orbit):
        rnew = rlast + eps*np.sin(thlast)
        thnew = np.mod(thlast+rnew,2*np.pi)
        
        rplot[loop] = rnew
        thetaplot[loop] = np.mod(thnew-np.pi,2*np.pi) - np.pi            
          
        rlast = rnew
        thlast = thnew
        
        x[loop] = (rnew+np.pi+0.25)*np.cos(thnew)
        y[loop] = (rnew+np.pi+0.25)*np.sin(thnew)
        
    plt.plot(x,y,'o',ms=1)

plt.savefig('StandMapTwist')

The twist map is shown in the figure below for three values of ε. For ε = 0.2, most orbits are open, with one elliptic point and its associated hyperbolic point. At ε = 0.9 the periodic elliptic point is still stable, but the hyperbolic point has generated a region of chaotic orbits. There is still a remnant open orbit that is associated with an orbital period ratio at the Golden Ratio. However, by ε = 0.97, even this most stable orbit has broken up into a chain of closed orbits as the chaotic regions expand.

Twist map for three levels of perturbation.

Safety in Numbers

In our solar system, governed by gravitational attractions, the square of the orbital period increases as the cube of the average radius (Kepler’s third law). Consider the restricted three-body problem of the Sun and Jupiter with the Earth as the third body. If we analyze the stability of the Earth’s orbit as a function of distance from the Sun, the orbital ratio relative to Jupiter would change smoothly. Near our current position, it would be in a 12:1 resonance, but as we moved farther from the Sun, this ratio would decrease. When the orbital period ratio is sufficiently irrational, then the orbit would be immune to Jupiter’s pull. But as the orbital ratio approaches ratios of integers, the effect gets larger. Close enough to Jupiter there would be a succession of radii that had regular motion separated by regions of chaotic motion. The regions of regular motion associated with irrational numbers act as if they were a barrier, restricting the range of chaotic orbits and protecting more distant orbits from the chaos. In this way numbers, rational versus irrational, protect us from the chaos of our own solar system.

A dramatic demonstration of the orbital resonance effect can be seen with the asteroid belt. The many small bodies act as probes of the orbital resonances. The repetitive tug of Jupiter opens gaps in the distribution of asteroid radii, with major gaps, called Kirkwood Gaps, opening at orbital ratios of 3:1, 5:2, 7:3 and 2:1. These gaps are the radii where chaotic behavior occurs, while the regions in between are stable. Most asteroids spend most of their time in the stable regions, because chaotic motion tends to sweep them out of the regions of resonance. This mechanism for the Kirkwood gaps is the same physics that produces gaps in the rings of Saturn at resonances with the many moons of Saturn.

The gaps in the asteroid distributions caused by orbital resonances with Jupiter. Ref. Wikipedia

Further Reading

For a detailed history of the development of KAM theory, see Chapter 9 Butterflies to Hurricanes in Galileo Unbound (Oxford University Press, 2018).

For a more detailed mathematical description of the KAM theory, see Chapter 5, Hamiltonian Chaos, in Introduction to Modern Dynamics, 2nd edition (Oxford University Press, 2019).

See also:

Dumas, H. S., The KAM Story: A friendly introduction to the content, history and significance of Classical Kolmogorov-Arnold-Moser Theory. World Scientific: 2014.

Arnold, V. I., From superpositions to KAM theory. Vladimir Igorevich Arnold. Selected Papers 1997, PHASIS, 60, 727–740.