Science 1916: A Hundred-year Time Capsule

In one of my previous blog posts, as I was searching for Schwarzschild's original papers on Einstein's field equations and quantum theory, I obtained a copy of the January 1916 – June 1916 volume of the Proceedings of the Royal Prussian Academy of Sciences through interlibrary loan.  The extremely thick volume arrived at Purdue about a week after I ordered it online.  It came from Oberlin College in Ohio, which had received it as a gift in 1928 from the library of Professor Friedrich Loofs of the University of Halle in Germany.  Loofs had been the Haskell Lecturer at Oberlin for the 1911-1912 academic year.

As I browsed through the volume looking for Schwarzschild's papers, I was amused to find a cornucopia of turn-of-the-century science topics recorded in its pages.  There were papers on the overbite and lips of marsupials.  There were papers on forgotten languages.  There were papers on ancient Greek texts.  On the origins of religion.  On the philosophy of abstraction.  Histories of Indian dramas.  Reflections on cancer.  But what I found most amazing was a snapshot of the field of physics and mathematics in 1916, with historic papers by historic scientists who changed how we view the world.  Here is a snapshot in time and in space, a period of only six months from a single journal, containing papers by a list of authors that reads like a who's who of physics.

In 1916 there were three major centers of science in the world with leading science publications: London with the Philosophical Magazine and Proceedings of the Royal Society; Paris with the Comptes Rendus of the Académie des Sciences; and Berlin with the Proceedings of the Royal Prussian Academy of Sciences and Annalen der Physik. In Russia, there were the scientific Journals of St. Petersburg, but the Bolshevik Revolution was brewing that would overwhelm that country for decades.  And in 1916 the academic life of the United States was barely worth noticing except for a few points of light at Yale and Johns Hopkins. 

Berlin in 1916 was embroiled in war, but science proceeded relatively unmolested.  The six-month volume of the Proceedings of the Royal Prussian Academy of Sciences contains a number of gems.  Schwarzschild was one of the most prolific contributors, publishing three papers in just this half-year volume, plus his obituary written by Einstein.  But joining Schwarzschild in this volume were Einstein, Planck, Born, Warburg, Frobenius, and Rubens among others—a pantheon of German scientists mostly cut off from the rest of the world at that time, but single-mindedly following their individual threads woven deep into the fabric of the physical world.

Karl Schwarzschild (1873 – 1916)

Schwarzschild had the unenviable yet effective motivation of his impending death to spur him to complete several projects that he must have known would make his name immortal.  In this six-month volume he published his three most important papers.  The first (pg. 189) was on the exact solution of Einstein's field equations of general relativity.  The solution was for the restricted case of a point mass, yet the derivation yielded the Schwarzschild radius that later became known as the event horizon of a non-rotating black hole.  The second paper (pg. 424) extended the general relativity solutions to a spherically symmetric incompressible liquid mass.

Schwarzschild’s solution to Einstein’s field equations for a point mass.

          

Schwarzschild’s extension of the field equation solutions to a finite incompressible fluid.
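For reference, the point-mass (vacuum) solution as it is usually written today, in modern Schwarzschild coordinates rather than the coordinates of the 1916 paper (a sketch):

\[ ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2 dt^2 + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2 + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \]

The Schwarzschild radius r_s = 2GM/c^2 appears as the radius at which the metric coefficients blow up, later identified with the event horizon.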

The subject, content and success of these two papers were wholly unexpected from this observational astronomer, stationed on the Russian front during WWI calculating trajectories for German bombardments.  He would not have been considered a theoretical physicist but for the importance of his results and the sophistication of his methods.  Within only a year after Einstein published his general theory, based as it was on the complicated tensor calculus of Levi-Civita, Christoffel and Ricci-Curbastro that had taken Einstein years to master, Schwarzschild found a solution that had evaded even Einstein.

Schwarzschild's third and final paper (pg. 548) was on an entirely different topic, again outside his official field of astronomy, and it positioned all future theoretical work in quantum physics to be phrased in the language of Hamiltonian dynamics and phase space.  He proved that action-angle coordinates are the only acceptable canonical coordinates to use when quantizing dynamical systems.  This paper answered a central question that had been nagging Bohr, Einstein and Ehrenfest for years—how to quantize dynamical coordinates.  Despite the simple way that Bohr's quantized hydrogen atom is taught in modern physics, there was an ambiguity in the quantization conditions even for this simple single-electron atom.  The ambiguity arose from the numerous canonical coordinate transformations that were admissible, yet which led to different forms of quantized motion.

Schwarzschild’s proposal of action-angle variables for quantization of dynamical systems.

Schwarzschild's doctoral thesis had been a theoretical topic in astrophysics that applied the celestial mechanics theories of Henri Poincaré to binary star systems.  Within Poincaré's theory were integral invariants that were conserved quantities of the motion.  When a dynamical system has as many conserved quantities as degrees of freedom, each degree of freedom carries its own integral invariant.  In this unexpected last paper, Schwarzschild showed how canonical transformation to action-angle coordinates produces a unique representation in terms of action variables (whose dimensions are the same as Planck's constant).  These action coordinates, with their associated cyclical angle variables, are the only unambiguous representations that can be quantized.  The important points of this paper were amplified a few months later in a publication by Schwarzschild's friend Paul Epstein (1883 – 1966), solidifying this approach to quantum mechanics.  Paul Ehrenfest (1880 – 1933) continued this work later in 1916 by defining adiabatic invariants whose quantum numbers remain unchanged under slowly varying conditions, and the program started by Schwarzschild was definitively completed by Paul Dirac (1902 – 1984) at Cambridge at the dawn of quantum mechanics in 1925.
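A sketch of the idea in modern notation (not Schwarzschild's own): for a separable system, each action variable is the loop integral of its conjugate momentum over one cycle of the corresponding angle, and the quantum conditions are imposed on the actions alone,

\[ J_k = \oint p_k \, dq_k = n_k h , \qquad n_k = 0, 1, 2, \ldots \]

so that each action carries the dimensions of Planck's constant while its conjugate angle advances uniformly in time.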

Albert Einstein (1879 – 1955)

In 1916 Einstein was mopping up after publishing his definitive field equations of general relativity the year before.  His interests were still cast wide, not restricted only to this latest project.  In the 1916 Jan. to June volume of the Prussian Academy Einstein published two papers.  Each is remarkably short relative to the other papers in the volume, yet the importance of the papers may stand in inverse proportion to their length.

The first paper (pg. 184) is placed right before Schwarzschild's first paper on February 3.  The subject of the paper is the expression of Maxwell's equations in four-dimensional spacetime.  It is notable that Einstein mentions Hermann Minkowski (1864 – 1909) in the first sentence of the paper.  When Minkowski proposed his bold structure of spacetime in 1908, Einstein had been one of his harshest critics, writing letters to the editor about the absurdity of thinking of space and time as a single interchangeable coordinate system.  This is ironic, because Einstein today is perhaps best known for the special relativity properties of spacetime, yet he was slow to adopt the spacetime viewpoint.  Einstein only came around to spacetime when he realized around 1910 that a general approach to relativity required the mathematical structure of tensor manifolds, and Minkowski had provided just such a manifold—the pseudo-Riemannian manifold of spacetime.  Einstein subsequently adopted spacetime with a passion and became its greatest champion, calling out Minkowski where possible to give him his due, although Minkowski had already died tragically of a burst appendix in 1909.

Relativistic energy density of electromagnetic fields.

The importance of Einstein’s paper hinges on his derivation of the electromagnetic field energy density using electromagnetic four vectors.  The energy density is part of the source term for his general relativity field equations.  Any form of energy density can warp spacetime, including electromagnetic field energy.  Furthermore, the Einstein field equations of general relativity are nonlinear as gravitational fields modify space and space modifies electromagnetic fields, producing a coupling between gravity and electromagnetism.  This coupling is implicit in the case of the bending of light by gravity, but Einstein’s paper from 1916 makes the connection explicit. 
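In modern notation (a sketch in SI units, using the field tensor F rather than Einstein's 1916 notation), the electromagnetic contribution to the source of the field equations is the stress-energy tensor

\[ T^{\mu\nu} = \frac{1}{\mu_0} \left( F^{\mu\alpha} F^{\nu}{}_{\alpha} - \frac{1}{4} \eta^{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} \right) \]

whose T^{00} component is the familiar field energy density (\epsilon_0 E^2 + B^2/\mu_0)/2.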

Einstein's second paper (pg. 688) is even shorter and hence one of the most daring publications of his career.  Because the field equations of general relativity are nonlinear, they are not easy to solve exactly, and Einstein was exploring approximate solutions under conditions of slow speeds and weak fields.  In this "non-relativistic" limit the metric tensor separates into a Minkowski background metric on which a small metric perturbation rides.  This small perturbation obeys a wave equation for a disturbance of the gravitational field that propagates at the speed of light.  Hence, in the June 22 issue of the Prussian Academy in 1916, Einstein predicted the existence and the properties of gravitational waves.  Exactly one hundred years later, in 2016, the LIGO collaboration announced the detection of gravitational waves generated by the merger of two black holes.

Einstein’s weak-field low-velocity approximation solutions of his field equations.
Einstein’s prediction of gravitational waves.
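A sketch of the weak-field limit in modern notation: writing the metric as a small perturbation of the Minkowski background,

\[ g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1 , \]

the linearized field equations in the appropriate gauge reduce to a wave equation for the (trace-reversed) perturbation,

\[ \Box \, \bar{h}_{\mu\nu} = -\frac{16 \pi G}{c^4} T_{\mu\nu} , \]

whose vacuum solutions are ripples of the metric propagating at the speed of light.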

Max Planck (1858 – 1947)

Max Planck served as secretary of the Prussian Academy in 1916 yet remained fully engaged in his research.  Although he had launched the quantum revolution with his quantum hypothesis of 1900, he was not a major proponent of quantum theory even as late as 1916.  His primary interests lay in thermodynamics and the origins of entropy, following the theoretical approaches of Ludwig Boltzmann (1844 – 1906).  In 1916 he was interested in how best to partition phase space as a way to count states and calculate entropy from first principles.  His paper in the 1916 volume (pg. 653) calculated the entropy of monatomic bodies.

Counting microstates by Planck.
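The counting idea, written here in the modern Boltzmann-Planck notation rather than necessarily the form used in the 1916 paper (a sketch): the entropy follows from the number of microstates W,

\[ S = k_B \ln W , \]

with W obtained by partitioning phase space into cells of volume h per degree of freedom.  Fixing the cell size at Planck's constant is what makes the entropy absolute, rather than defined only up to an additive constant.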

Max Born (1882 – 1970)

Max Born would become one of the leading champions of the quantum mechanical revolution, based at the University of Göttingen in the 1920's.  But in 1916 he was on leave from the University of Berlin, working on ranging for artillery.  Yet he still pursued his academic interests, like Schwarzschild.  On pg. 614 of the Proceedings of the Prussian Academy, Born published a paper on anisotropic liquids, such as liquid crystals, and the effect of electric fields on them.  It is astonishing to think that so many of the flat-panel displays we have today, whether on our watches or smart phones, are technological descendants of work Born did at the beginning of his career.

Born on liquid crystals.

Ferdinand Frobenius (1849 – 1917)

Like Schwarzschild, Frobenius was at the end of his career in 1916 and would pass away one year later.  But unlike Schwarzschild, his career had been a long one: he received his doctorate under Weierstrass and explored elliptic functions, differential equations, number theory and group theory.  One of the papers that established him in group theory appears in the May 4th issue on page 542, where he explores the composition series of a group.

Frobenius on groups.
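As a reminder of the object in question (a standard definition, not specific to Frobenius's paper), a composition series of a finite group G is a chain of subgroups

\[ G = G_0 \triangleright G_1 \triangleright \cdots \triangleright G_n = \{ e \} \]

in which each G_{i+1} is normal in G_i and each factor group G_i / G_{i+1} is simple; by the Jordan-Hölder theorem these simple factors are unique up to reordering.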

Heinrich Rubens (1865 – 1922)

Max Planck owed his quantum breakthrough in part to the exquisitely accurate experimental measurements made by Heinrich Rubens on black-body radiation.  It was only because of the precise shape of what came to be called the Planck spectrum that Planck could say with such confidence that his theory of quantized radiation interactions fit Rubens' spectrum so perfectly.  In 1916 Rubens was at the University of Berlin, having taken the position vacated by Paul Drude in 1906.  He was a specialist in infrared spectroscopy, and on page 167 of the Proceedings he describes the long-wavelength spectrum of steam and its consequences for the quantum theory.

Rubens and the infrared spectrum of steam.

Emil Warburg (1846 – 1931)

Emil Warburg's fame rests primarily on being the father of Otto Warburg, who won the 1931 Nobel Prize in Physiology or Medicine.  On page 314 Warburg reports on photochemical processes in hydrogen bromide (HBr) gas.  In an obscure and very indirect way, I am an academic descendant of Emil Warburg.  One of his students was Robert Pohl, a famous early researcher in solid state physics sometimes called the "father of solid state physics".  Pohl was in the physics department at Göttingen in the 1920's along with Born and Franck during the golden age of quantum mechanics.  Robert Pohl's son, Robert Otto Pohl, was my professor for introductory electromagnetism when I was a sophomore at Cornell University in 1978; the course used the textbook by the Nobel laureate Edward Purcell, a quirky volume in the Berkeley Physics Course series.  This makes Emil Warburg my professor's father's professor.

Warburg on photochemistry.

Papers in the 1916 Vol. 1 of the Prussian Academy of Sciences

Schulze,  Alt– und Neuindisches

Orth,  Zur Frage nach den Beziehungen des Alkoholismus zur Tuberkulose

Schulze,  Die Erhebungen auf der Lippen- und Wangenschleimhaut der Säugetiere

von Wilamowitz-Moellendorff,  Die Samia des Menandros

Engler,  Bericht über das >>Pflanzenreich<<

von Harnack,  Bericht über die Ausgabe der griechischen Kirchenväter der drei ersten Jahrhunderte

Meinecke,  Germanischer und romanischer Geist im Wandel der deutschen Geschichtsauffassung

Rubens und Hettner,  Das langwellige Wasserdampfspektrum und seine Deutung durch die Quantentheorie

Einstein,  Eine neue formale Deutung der Maxwellschen Feldgleichungen der Elektrodynamik

Schwarzschild,  Über das Gravitationsfeld eines Massenpunktes nach der Einsteinschen Theorie

Helmreich,  Handschriftliche Verbesserungen zu dem Hippokratesglossar des Galen

Prager,  Über die Periode des veränderlichen Sterns RR Lyrae

Holl,  Die Zeitfolge des ersten origenistischen Streits

Lüders,  Zu den Upanisads. I. Die Samvargavidya

Warburg,  Über den Energieumsatz bei photochemischen Vorgängen in Gasen. VI.

Hellmann,  Über die ägyptischen Witterungsangaben im Kalender von Claudius Ptolemaeus

Meyer-Lübke,  Die Diphthonge im Provenzalischen

Diels,  Über die Schrift Antipocras des Nikolaus von Polen

Müller und Sieg,  Maitrisimit und >>Tocharisch<<

Meyer,  Ein altirischer Heilsegen

Schwarzschild,  Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit nach der Einsteinschen Theorie

Brauer,  Die Verbreitung der Hyracoiden

Correns,  Untersuchungen über Geschlechtsbestimmung bei Distelarten

Brahn,  Weitere Untersuchungen über Fermente in der Leber von Krebskranken

Erdmann,  Methodologische Konsequenzen aus der Theorie der Abstraktion

Bang,  Studien zur vergleichenden Grammatik der Türksprachen. I.

Frobenius,  Über die  Kompositionsreihe einer Gruppe

Schwarzschild,  Zur Quantenhypothese

Fischer und Bergmann,  Über neue Galloylderivate des Traubenzuckers und ihren Vergleich mit der Chebulinsäure

Schuchhardt,  Der starke Wall und die breite, zuweilen erhöhte Berme bei frühgeschichtlichen Burgen in Norddeutschland

Born,  Über anisotrope Flüssigkeiten

Planck,  Über die absolute Entropie einatomiger Körper

Haberlandt,  Blattepidermis und Lichtperzeption

Einstein,  Näherungsweise Integration der Feldgleichungen der Gravitation

Lüders,  Die Saubhikas.  Ein Beitrag zur Geschichte des indischen Dramas

The Three-Body Problem, Longitude at Sea, and Lagrange’s Points

When Newton developed his theory of universal gravitation, the first problem he tackled was Kepler’s elliptical orbits of the planets around the sun, and he succeeded beyond compare.  The second problem he tackled was of more practical importance than the tracks of distant planets, namely the path of the Earth’s own moon, and he was never satisfied. 

Newton’s Principia and the Problem of Longitude

Measuring the precise location of the moon at very exact times against the backdrop of the celestial sphere was a method for ships at sea to find their longitude.  Yet the moon’s orbit around the Earth is irregular, and Newton recognized that because gravity was universal, every planet exerted a force on each other, and the moon was being tugged upon by the sun as well as by the Earth.

Newton’s attempt with the Moon was his last significant scientific endeavor

            In Propositions 65 and 66 of Book 1 of the Principia, Newton applied his new theory to attempt to pin down the moon’s trajectory, but was thwarted by the complexity of the three bodies of the Earth-Moon-Sun system.  For instance, the force of the sun on the moon is greater than the force of the Earth on the moon, which raised the question of why the moon continued to circle the Earth rather than being pulled away to the sun. Newton correctly recognized that it was the Earth-moon system that was in orbit around the sun, and hence the sun caused only a perturbation on the Moon’s orbit around the Earth.  However, because the Moon’s orbit is approximately elliptical, the Sun’s pull on the Moon is not constant as it swings around in its orbit, and Newton only succeeded in making estimates of the perturbation. 
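A quick order-of-magnitude check, using modern values for illustration: the ratio of the Sun's pull on the Moon to the Earth's pull on the Moon is

\[ \frac{F_{\mathrm{Sun}}}{F_{\mathrm{Earth}}} = \frac{M_{\mathrm{Sun}}}{M_{\mathrm{Earth}}} \left( \frac{r_{\mathrm{Earth-Moon}}}{r_{\mathrm{Sun-Moon}}} \right)^2 \approx \left( 3.3 \times 10^5 \right) \left( \frac{3.84 \times 10^5 \ \mathrm{km}}{1.5 \times 10^8 \ \mathrm{km}} \right)^2 \approx 2.2 , \]

so the Sun pulls on the Moon more than twice as hard as the Earth does, yet the Moon stays with the Earth because the Earth-Moon pair falls around the Sun together.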

Unsatisfied with his results in the Principia, Newton tried again, beginning in the summer of 1694, but the problem was too great even for him.  In 1702 he published his research, as far as he was able to take it, on the orbital trajectory of the Moon.  He could pin down the motion to within 10 arc minutes, but this was not accurate enough for reliable navigation, representing an uncertainty of over 10 kilometers at sea—error enough to run aground at night on unseen shoals.  Newton's attempt with the Moon was his last significant scientific endeavor, and afterwards this great scientist withdrew into administrative activities and other occult interests that consumed his remaining time.

Race for the Moon

            The importance of the Moon for navigation was too pressing to ignore, and in the 1740’s a heated competition to be the first to pin down the Moon’s motion developed among three of the leading mathematicians of the day—Leonhard Euler, Jean Le Rond D’Alembert and Alexis Clairaut—who began attacking the lunar problem and each other [1].  Euler in 1736 had published the first textbook on dynamics that used the calculus, and Clairaut had recently returned from Lapland with Maupertuis.  D’Alembert, for his part, had placed dynamics on a firm physical foundation with his 1743 textbook.  Euler was first to publish with a lunar table in 1746, but there remained problems in his theory that frustrated his attempt at attaining the required level of accuracy.  

            At nearly the same time Clairaut and D’Alembert revisited Newton’s foiled lunar theory and found additional terms in the perturbation expansion that Newton had neglected.  They rushed to beat each other into print, but Clairaut was distracted by a prize competition for the most accurate lunar theory, announced by the Russian Academy of Sciences and refereed by Euler, while D’Alembert ignored the competition, certain that Euler would rule in favor of Clairaut.  Clairaut won the prize, but D’Alembert beat him into print. 

The rivalry over the moon did not end there.  Clairaut continued to improve lunar tables by combining theory and observation, while D'Alembert remained more purely theoretical.  A growing animosity between Clairaut and D'Alembert spilled out into the public eye and became a daily topic of conversation in the Paris salons.  The difference in their approaches matched the difference in their personalities, with the more flamboyant and pragmatic Clairaut disdaining the purist approach and philosophy of D'Alembert.  Clairaut succeeded in publishing improved lunar theory and tables in 1752, followed by Euler in 1753, while D'Alembert's interests were drawn away towards his activities for Diderot's Encyclopedia.

The battle over the Moon in the late 1740's was carried out on the battlefield of perturbation theory.  To lowest order, the orbit of the Moon around the Earth is a Keplerian ellipse, and the effect of the Sun, though creating problems for the use of the Moon for navigation, produces only a small modification—a perturbation—of its overall motion.  Within a decade or two, the accuracy of perturbation-theory calculations, combined with empirical observations, had improved to the point that lunar tables allowed ships to locate their longitude to within a kilometer at sea.  The most accurate tables were made by Tobias Mayer, who was posthumously awarded a prize of 3000 pounds by the British Parliament in 1763 for the determination of longitude at sea.  Euler received 300 pounds for helping Mayer with his calculations.  This was the same prize that was coveted by the famous clockmaker John Harrison and depicted so brilliantly in Dava Sobel's Longitude (1995).

Lagrange Points

Several years later, in 1772, Lagrange discovered an interesting special solution to the planar three-body problem in which three massive points each execute an elliptical orbit around the center of mass of the system, configured so that their positions always coincide with the vertices of an equilateral triangle [2].  He found a more important special solution in the restricted three-body problem, in which a massless third body has two stable equilibrium points in the combined gravitational potential of two massive bodies.  These two stable equilibrium points are known as the L4 and L5 Lagrange points.  Small objects can orbit these points, and in the Sun-Jupiter system they are occupied by the Trojan asteroids.  Similarly, stable Lagrange points exist in the Earth-Moon system where space stations or satellites could be parked.

For the special case of circular orbits with constant angular frequency ω, the motion of the third mass is governed by a Lagrangian whose potential is time dependent, because the two larger masses are themselves in motion.  Lagrange approached the problem by adopting a rotating reference frame in which the two larger masses m1 and m2 lie at rest along the line joining their centers.  In the rotating frame the potential becomes time independent, at the cost of two additional contributions to the Lagrangian: a Coriolis term and a centrifugal term, the latter of which combines with gravity to form an effective potential.
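A sketch of the two Lagrangians for the planar restricted problem, in notation chosen here (third mass m3 at position (x, y), the two primaries at positions r1(t) and r2(t)): in the inertial frame,

\[ L = \tfrac{1}{2} m_3 \left( \dot{x}^2 + \dot{y}^2 \right) + \frac{G m_1 m_3}{|\mathbf{r} - \mathbf{r}_1(t)|} + \frac{G m_2 m_3}{|\mathbf{r} - \mathbf{r}_2(t)|} , \]

while in the frame rotating at ω, where the primaries are at rest,

\[ L' = \tfrac{1}{2} m_3 \left( \dot{x}^2 + \dot{y}^2 \right) + m_3 \omega \left( x \dot{y} - y \dot{x} \right) + \tfrac{1}{2} m_3 \omega^2 \left( x^2 + y^2 \right) + \frac{G m_1 m_3}{r_1} + \frac{G m_2 m_3}{r_2} . \]

The middle term produces the Coriolis force, and the centrifugal term combines with the two gravitational terms into the effective potential (per unit mass) \( \Phi_{\mathrm{eff}} = -\tfrac{1}{2} \omega^2 (x^2 + y^2) - G m_1 / r_1 - G m_2 / r_2 \), whose critical points are the five Lagrange points.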

Fig. Effective potential for the planar three-body problem and the five Lagrange points where the gradient of the effective potential equals zero. The Lagrange points are displayed on a horizontal cross section of the potential energy shown with equipotential lines. The large circle in the center is the Sun. The smaller circle on the right is a Jupiter-like planet. The points L1, L2 and L3 are saddle-point equilibria and hence unstable. The points L4 and L5 are stable points that can collect small masses that orbit these Lagrange points.

The effective potential is shown in the figure for m1 = 10 m2.  There are five locations where the gradient of the effective potential equals zero.  The point L1 is the equilibrium position between the two larger masses.  The points L2 and L3 are at positions where the centrifugal force balances the gravitational attraction to the two larger masses.  These are also the points that separate local orbits around a single mass from global orbits that circle the two-body system.  The last two Lagrange points, L4 and L5, each sit at a vertex of an equilateral triangle whose other two vertices are at the positions of the larger masses.  The first three Lagrange points are saddle points.  The last two are maxima of the effective potential.
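As a rough numerical illustration, the three collinear Lagrange points can be found by locating the zeros of the gradient of the effective potential along the line of centers.  This is only a sketch; the masses, separation, and bracketing intervals below are arbitrary demonstration values, not the mass ratio of the figure.

import numpy as np
from scipy.optimize import brentq

# Demonstration values with G = 1: primary mass, secondary mass, separation
G, m1, m2, a = 1.0, 1.0, 0.1, 1.0
omega2 = G*(m1 + m2)/a**3          # circular-orbit (Kepler) angular frequency squared
x1 = -m2*a/(m1 + m2)               # primary position on the x-axis (center of mass at origin)
x2 =  m1*a/(m1 + m2)               # secondary position

def dPhi_dx(x):
    # x-derivative of the effective potential along the line of centers (y = 0)
    return (G*m1*(x - x1)/abs(x - x1)**3
          + G*m2*(x - x2)/abs(x - x2)**3
          - omega2*x)

# Each collinear point is bracketed by a sign change of the derivative
L1 = brentq(dPhi_dx, x1 + 1e-3, x2 - 1e-3)   # between the two masses
L2 = brentq(dPhi_dx, x2 + 1e-3, x2 + a)      # beyond the secondary
L3 = brentq(dPhi_dx, x1 - 2*a, x1 - 1e-3)    # beyond the primary
print(L1, L2, L3)

(L4 and L5 need no search: they sit at the equilateral-triangle vertices.)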

L1 lies between Earth and the sun, at about 1 million miles from Earth.  L1 gets an uninterrupted view of the sun and is occupied by the Solar and Heliospheric Observatory (SOHO) and the Deep Space Climate Observatory.  L2 also lies about a million miles from Earth, but in the opposite direction from the sun.  At this point, with the Earth, moon and sun behind it, a spacecraft can get a clear view of deep space.  NASA's Wilkinson Microwave Anisotropy Probe (WMAP) measured the cosmic microwave background radiation left over from the Big Bang from this spot.  The James Webb Space Telescope will move into this region in 2021.


[1] Gutzwiller, M. C. (1998). “Moon-Earth-Sun: The oldest three-body problem.” Reviews of Modern Physics 70(2): 589-639.

[2] J.-L. Lagrange, Essai sur le problème des trois corps (1772), Oeuvres de Lagrange, tome 6.

Vladimir Arnold’s Cat Map

The 1960's are known as a time of cultural revolution, but perhaps less known is the revolution that occurred in the science of dynamics.  Three towering figures of that revolution were Stephen Smale (1930 – ) at Berkeley, Andrey Kolmogorov (1903 – 1987) in Moscow, and Kolmogorov's student Vladimir Arnold (1937 – 2010).  Arnold was only 20 years old in 1957 when he solved Hilbert's thirteenth problem (showing that any continuous function of several variables can be constructed from a finite number of two-variable functions).  Only a few years later, his work on the problem of small denominators in dynamical systems provided the finishing touches on the long-elusive explanation of the stability of the solar system (the problem for which Poincaré won the King Oscar Prize in mathematics in 1889, when he discovered chaotic dynamics).  This theory is known as KAM theory, from the initials of Kolmogorov, Arnold and Moser [1].  Building on his breakthrough in celestial mechanics, Arnold's work through the 1960's remade the theory of Hamiltonian systems, creating a shift in perspective that has permanently altered how physicists look at dynamical systems.

Hamiltonian Physics on a Torus

Traditionally, Hamiltonian physics is associated with systems of inertial objects that conserve the sum of kinetic and potential energy, in other words, conservative non-dissipative systems.  But a modern view (after Arnold) of Hamiltonian systems sees them as hyperdimensional mathematical mappings that conserve volume.  The space that these mappings inhabit is phase space, and the conservation of phase-space volume is known as Liouville’s Theorem [2].  The geometry of phase space is called symplectic geometry, and the universal position that symplectic geometry now holds in the physics of Hamiltonian mechanics is largely due to Arnold’s textbook Mathematical Methods of Classical Mechanics (1974, English translation 1978) [3]. Arnold’s famous quote from that text is “Hamiltonian mechanics is geometry in phase space”. 

One of the striking aspects of this textbook is the reduction of phase-space geometry to the geometry of a hyperdimensional torus for a large class of Hamiltonian systems.  If there are as many conserved quantities as there are degrees of freedom in a Hamiltonian system, then the system is called "integrable" (because the equations of motion can be integrated, yielding a constant of the motion for each degree of freedom).  It is then possible to map the physics onto a hyperdimensional torus through the transformation of dynamical coordinates into what are known as "action-angle" coordinates [4].  Each independent angle has an associated action that is conserved during the motion of the system.  The periodicity of the dynamical angle coordinate makes it possible to identify it with the angular coordinate of a multi-dimensional torus.  Therefore, every integrable Hamiltonian system can be mapped to motion on a multi-dimensional torus (one dimension for each degree of freedom of the system).
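In action-angle form (a standard sketch), the Hamiltonian of an integrable system depends on the actions alone, so each angle advances at a constant rate:

\[ H = H(J_1, \ldots, J_N), \qquad \dot{\theta}_i = \frac{\partial H}{\partial J_i} = \omega_i(J), \qquad \dot{J}_i = -\frac{\partial H}{\partial \theta_i} = 0 . \]

Each conserved action labels one circle of the N-dimensional torus on which the trajectory winds.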

Actually, integrable Hamiltonian systems are among the most boring dynamical systems you can imagine. They literally just go in circles (around the torus). But as soon as you add a small perturbation that cannot be integrated they produce some of the most complex and beautiful patterns of all dynamical systems. It was Arnold’s focus on motions on a torus, and perturbations that shift the dynamics off the torus, that led him to propose a simple mapping that captured the essence of Hamiltonian chaos.

The Arnold Cat Map

Motion on a two-dimensional torus is defined by two angles, and trajectories on a two-dimensional torus are simple helices.  If the periodicities of the motion in the two angles have an integer ratio, the helix repeats itself.  However, if the ratio of periods (also known as the winding number) is irrational, then the helix never repeats and passes arbitrarily close to any point on the surface of the torus.  This last case leads to an "ergodic" system, a term introduced by Boltzmann to describe a physical system whose trajectory fills phase space.  The behavior of a helix with rational or irrational winding number is not terribly interesting, though.  It is just an orbit going in circles like an integrable Hamiltonian system.  The helix can never even cross itself.

However, if you could add a new dimension to the torus (or add a new degree of freedom to the dynamical system), then the helix could pass over or under itself by moving into the new dimension. By weaving around itself, a trajectory can become chaotic, and the set of many trajectories can become as mixed up as a bowl of spaghetti. This can be a little hard to visualize, especially in higher dimensions, but Arnold thought of a very simple mathematical mapping that captures the essential motion on a torus, preserving volume as required for a Hamiltonian system, but with the ability for regions to become all mixed up, just like trajectories in a nonintegrable Hamiltonian system.

A unit square is isomorphic to a two-dimensional torus. This means that there is a one-to-one mapping of each point on the unit square to each point on the surface of a torus. Imagine taking a sheet of paper and forming a tube out of it. One of the dimensions of the sheet of paper is now an angle coordinate that is cyclic, going around the circumference of the tube. Now if the sheet of paper is flexible (like it is made of thin rubber) you can bend the tube around and connect the top of the tube with the bottom, like a bicycle inner tube. The other dimension of the sheet of paper is now also an angle coordinate that is cyclic. In this way a flat sheet is converted (with some bending) into a torus.

Arnold's key idea was to create a transformation that takes the torus into itself, preserving volume, yet including the ability for regions to pass around each other.  Arnold accomplished this with a simple linear map of the unit square, taken modulo 1 so that the square maps back into itself.  The transformation can also be written as a matrix acting on the coordinates, followed by taking the modulus.  The transformation matrix is called a Floquet matrix, and its determinant is equal to unity, which ensures that volume is conserved.
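In one common convention (a sketch; orderings and signs vary among sources), the cat map and its matrix form are

\[ x_{n+1} = (x_n + y_n) \bmod 1, \qquad y_{n+1} = (x_n + 2 y_n) \bmod 1 , \]

\[ \begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x_n \\ y_n \end{pmatrix} \bmod 1 , \qquad \det \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = 1 . \]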

Arnold decided to illustrate this mapping by using a crude image of the face of a cat (See Fig. 1). Successive applications of the transformation stretch and shear the cat, which is then folded back into the unit square. The stretching and folding preserve the volume, but the image becomes all mixed up, just like mixing in a chaotic Hamiltonian system, or like an immiscible dye in water that is stirred.

Fig. 1 Arnold’s illustration of his cat map from pg. 6 of V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (Benjamin, 1968) [5]
Fig. 2 Arnold Cat Map operation is an iterated succession of stretching with shear of a unit square, and translation back to the unit square. The mapping preserves and mixes areas, and is invertible.

Recurrence

When the transformation matrix is applied to continuous values, it produces a continuous range of transformed values that become stretched thinner and thinner until the unit square is uniformly mixed.  However, if the unit square is discrete, made up of pixels, then something very different happens (see Fig. 3).  The image of the cat in this case is composed of a 50×50 array of pixels.  For early iterations, the image becomes stretched and mixed, but at iteration 50 there are 4 low-resolution upside-down versions of the cat, and at iteration 75 the cat fully reforms, but upside-down.  Continuing on, the cat eventually reappears fully reformed and upright at iteration 150.  Therefore, the discrete case displays a recurrence and the mapping is periodic.  Calculating the period of the cat map on lattices can lead to interesting patterns, especially when the lattice size is a prime number [6].

Fig. 3 A discrete cat map has a recurrence period. This example with a 50×50 lattice has a period of 150.
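A minimal sketch of the discrete recurrence, using the map convention written above (conventions vary, and the recurrence period depends on both the convention and the lattice size):

import numpy as np

def cat_map_period(N):
    # Smallest k for which the cat-map matrix M satisfies M^k = identity (mod N),
    # i.e. the iteration after which every pixel of an N x N image returns home.
    M = np.array([[1, 1], [1, 2]], dtype=np.int64)
    A = M.copy()
    for k in range(1, 10*N*N):               # generous upper bound on the period
        if np.array_equal(A % N, np.eye(2, dtype=np.int64)):
            return k
        A = (A @ M) % N
    return None

print(cat_map_period(50))    # with this convention, a 50 x 50 lattice recurs after 150 steps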

The Cat Map and the Golden Mean

The golden mean, or the golden ratio, 1.618033988749895..., is never far away when working with Hamiltonian systems.  Because the golden mean is the "most irrational" of all irrational numbers, it plays an essential role in KAM theory on the stability of the solar system.  In the case of Arnold's cat map it shows up in several ways.  For instance, the eigenvalues of the transformation matrix are powers of the golden mean, with the remarkable property that their product is exactly unity, which guarantees conservation of area.
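Explicitly, for the matrix convention written above (a sketch):

\[ \lambda_{\pm} = \frac{3 \pm \sqrt{5}}{2} = \varphi^{\pm 2}, \qquad \lambda_{+} \lambda_{-} = \det \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = 1 , \]

where φ = (1 + √5)/2 is the golden mean.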


Selected V. I. Arnold Publications

Arnold, V. I. "Functions of 3 variables." Doklady Akademii Nauk SSSR 114(4): 679-681 (1957)

Arnold, V. I. "Generation of quasi-periodic motion from a family of periodic motions." Doklady Akademii Nauk SSSR 138(1): 13 (1961)

Arnold, V. I. "Stability of equilibrium position of a Hamiltonian system of ordinary differential equations in general elliptic case." Doklady Akademii Nauk SSSR 137(2): 255 (1961)

Arnold, V. I. "Behaviour of an adiabatic invariant when Hamilton's function is undergoing a slow periodic variation." Doklady Akademii Nauk SSSR 142(4): 758 (1962)

Arnold, V. I. "Classical theory of perturbations and problem of stability of planetary systems." Doklady Akademii Nauk SSSR 145(3): 487 (1962)

Arnold, V. I. and Y. G. Sinai. "Small perturbations of automorphisms of a torus." Doklady Akademii Nauk SSSR 144(4): 695 (1962)

Arnold, V. I. "Small denominators and problems of the stability of motion in classical and celestial mechanics (in Russian)." Usp. Mat. Nauk 18: 91-192 (1963)

Arnold, V. I. and A. L. Krylov. "Uniform distribution of points on a sphere and some ergodic properties of solutions to linear ordinary differential equations in a complex region." Doklady Akademii Nauk SSSR 148(1): 9 (1963)

Arnold, V. I. "Instability of dynamical systems with many degrees of freedom." Doklady Akademii Nauk SSSR 156(1): 9 (1964)

Arnold, V. "Sur une propriété topologique des applications globalement canoniques de la mécanique classique." Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences 261(19): 3719 (1965)

Arnold, V. I. "Applicability conditions and error estimation by averaging for systems which go through resonances in the course of evolution." Doklady Akademii Nauk SSSR 161(1): 9 (1965)


Bibliography

[1] Dumas, H. S. The KAM Story: A friendly introduction to the content, history and significance of Classical Kolmogorov-Arnold-Moser Theory, World Scientific. (2014)

[2] See Chapter 6, “The Tangled Tale of Phase Space” in Galileo Unbound (D. D. Nolte, Oxford University Press, 2018)

[3] V. I. Arnold, Mathematical Methods of Classical Mechanics (Nauk 1974, English translation Springer 1978)

[4] See Chapter 3, “Hamiltonian Dynamics and Phase Space” in Introduction to Modern Dynamics, 2nd ed. (D. D. Nolte, Oxford University Press, 2019)

[5] V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (Benjamin, 1968)

[6] Gaspari, G. "The Arnold cat map on prime lattices." Physica D 73(4): 352-372 (1994)

The Wonderful World of Hamiltonian Maps

Hamiltonian systems are freaks of nature.  Unlike the everyday world we experience, which is full of dissipation and inefficiency, Hamiltonian systems live in a world free of loss.  Despite how rare this situation is for us, this unnatural state occurs in two common extremes: orbital mechanics and quantum mechanics.  In the case of orbital mechanics, dissipation does exist, most commonly in tidal effects, but the effects of dissipation in the orbits of moons and planets take eons to accumulate, making these systems effectively free of dissipation on shorter time scales.  Quantum mechanics is strictly free of dissipation, but there is a strong caveat: ALL quantum states need to be included in the quantum description.  This includes the coupling of discrete quantum states to their environment.  Although it is possible to isolate quantum systems to a large degree, it is never possible to isolate them completely; they interact with the quantum states of their environment, if only through the black-body radiation from their container, even if that container is cooled to millikelvin temperatures.  Such interactions involve so many degrees of freedom that the net effect behaves like dissipation.  The origin of quantum decoherence, which poses such a challenge for practical quantum computers, is precisely this entanglement of quantum systems with their environment.

Liouville’s theorem plays a central role in the explanation of the entropy and ergodic properties of ideal gases, as well as in Hamiltonian chaos.

Liouville’s Theorem and Phase Space

A middle ground of practically ideal Hamiltonian mechanics can be found in the dynamics of ideal gases. This is the arena where Maxwell and Boltzmann first developed their theories of statistical mechanics using Hamiltonian physics to describe the large numbers of particles.  Boltzmann applied a result he learned from Jacobi’s Principle of the Last Multiplier to show that a volume of phase space is conserved despite the large number of degrees of freedom and the large number of collisions that take place.  This was the first derivation of what is today known as Liouville’s theorem.

Close-up of the Lozi Map with B = -1 and C = 0.5.

In 1838 Joseph Liouville, a pure mathematician, was interested in classes of solutions of differential equations.  In a short paper, he showed that for one class of differential equations one could define a property that remained invariant under the time evolution of the system.  This purely mathematical result was expanded upon by Jacobi, a major commentator on Hamilton's new theory of dynamics who contributed much of the mathematical structure that we associate today with Hamiltonian mechanics.  Jacobi recognized that Hamilton's equations were of the same class as the ones studied by Liouville, and that the conserved property was a product of differentials.  In the mid-1800's the language of multidimensional spaces had yet to be invented, so Jacobi recognized neither the conserved quantity as a volume element nor the arena within which the dynamics occurred as a geometric space.  Boltzmann recognized both, and he was the first to establish the principle of conservation of phase-space volume.  He named the principle after Liouville, even though it was Boltzmann himself who found its natural place within the physics of Hamiltonian systems [1].

Liouville's theorem plays a central role in the explanation of the entropy of ideal gases, as well as in Hamiltonian chaos.  In a system with numerous degrees of freedom, a small volume of initial conditions is stretched and folded by the dynamical equations as the system evolves.  The stretching and folding is like what happens to dough in a baker's hands.  The volume of the dough never changes, but after a long time a small spot of food coloring will eventually come as close to any part of the dough as you wish.  This analogy is part of the motivation for ergodic systems, and this kind of mixing is characteristic of Hamiltonian systems, in which trajectories can diffuse throughout the phase-space volume … usually.

Interestingly, when the number of degrees of freedom is not so large, there is a middle ground of Hamiltonian systems for which some initial conditions lead to chaotic trajectories, while other initial conditions produce completely regular behavior.  For the right kind of system, the regular behavior can hem in the irregular behavior, restricting it to finite regions.  This was a major finding of KAM theory [2], named after Kolmogorov, Arnold and Moser, which helped explain the regions of regular motion separating regions of chaotic motion as illustrated in Chirikov's Standard Map.

Discrete Maps

Hamilton's equations are ordinary differential equations that define a continuous Hamiltonian flow in phase space.  These equations can be solved using standard techniques, such as Runge-Kutta.  However, a much simpler approach for exploring Hamiltonian chaos uses discrete maps that represent the Poincaré first-return map, also known as the Poincaré section.  Testing that a discrete map satisfies Liouville's theorem is as simple as checking that the determinant of the Floquet matrix is equal to unity.  When the dynamics are represented in a Poincaré plane, these maps are called area-preserving maps.
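Concretely (a sketch): for a planar map (x, y) → (f(x, y), g(x, y)), the area-preservation condition is a unit Jacobian determinant,

\[ \left| \det \begin{pmatrix} \partial f / \partial x & \partial f / \partial y \\ \partial g / \partial x & \partial g / \partial y \end{pmatrix} \right| = 1 , \]

which is the discrete-map counterpart of Liouville's theorem.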

There are many famous examples of area-preserving maps in the plane.  The Chirikov Standard Map is one of the best known and is often used to illustrate KAM theory.  It is a discrete representation of a kicked rotor, while a kicked harmonic oscillator leads to the Web Map.  The Henon Map was developed to explain the orbits of stars in galaxies.  The Lozi Map is a version of the Henon Map that is more accessible analytically.  And the Cat Map was devised by Vladimir Arnold to illustrate what is today called Arnold Diffusion.  All of these maps display classic signatures of (low-dimensional) Hamiltonian chaos, with periodic orbits hemming in regions of chaotic orbits; they are collected in the table below.

Chirikov Standard Map: kicked rotor
Web Map: kicked harmonic oscillator
Henon Map: stellar trajectories in galaxies
Lozi Map: simplified Henon map
Cat Map: Arnold Diffusion

Table:  Common examples of area-preserving maps.

Lozi Map

My favorite area-preserving discrete map is the Lozi Map.  I first stumbled on it at the very back of Steven Strogatz's wonderful book on nonlinear dynamics [3]; it is one of the last exercises of the last chapter.  The map is particularly simple, but it leads to rich dynamics, both regular and chaotic.  The map is area-preserving when |B| = 1.  The constant C can be varied, but the choice C = 0.5 works nicely, and B = -1 produces a beautiful nested structure, as shown in the figure.
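A sketch of the map equations, written to match the update rules in the Python code below:

\[ x_{n+1} = 1 + y_n - C \, |x_n| , \qquad y_{n+1} = B \, x_n , \]

with Jacobian determinant

\[ \det \begin{pmatrix} -C \, \mathrm{sgn}(x_n) & 1 \\ B & 0 \end{pmatrix} = -B , \]

so the map preserves area whenever |B| = 1.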

Iterated Lozi map for B = -1 and C = 0.5.  Each color is a distinct trajectory.  Many regular trajectories exist that corral regions of chaotic trajectories.  Trajectories become more chaotic farther away from the center.

Python Code for the Lozi Map

"""
Created on Wed May  2 16:17:27 2018
@author: nolte
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

B = -1
C = 0.5

np.random.seed(2)
plt.figure(1)

for eloop in range(0,100):

    xlast = np.random.normal(0,1,1)
    ylast = np.random.normal(0,1,1)

    xnew = np.zeros(shape=(500,))
    ynew = np.zeros(shape=(500,))
    for loop in range(0,500):
        xnew[loop] = 1 + ylast - C*abs(xlast)
        ynew[loop] = B*xlast
        xlast = xnew[loop]
        ylast = ynew[loop]
        
    plt.plot(np.real(xnew),np.real(ynew),'o',ms=1)
    plt.xlim(xmin=-1.25,xmax=2)
    plt.ylim(ymin=-2,ymax=1.25)
        
plt.savefig('Lozi')

References:

[1] D. D. Nolte, “The Tangled Tale of Phase Space”, Chapter 6 in Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018)

[2] H. S. Dumas, The KAM Story: A Friendly Introduction to the Content, History, and Significance of Classical Kolmogorov-Arnold-Moser Theory (World Scientific, 2014)

[3] S. H. Strogatz, Nonlinear Dynamics and Chaos (WestView Press, 1994)

How to Weave a Tapestry from Hamiltonian Chaos

While virtually everyone recognizes the famous Lorenz "Butterfly", the strange attractor that is one of the central icons of chaos theory, in my opinion Hamiltonian chaos generates far more interesting patterns.  This is because Hamiltonian systems conserve phase-space volume, stretching and folding small volumes of initial conditions as they evolve in time until they span large sections of phase space.  Hamiltonian chaos is usually displayed as multi-color Poincaré sections (also known as first-return maps) that are created when a set of single trajectories, each represented by a single color, pierce the Poincaré plane over and over again.

The archetype of all Hamiltonian systems is the harmonic oscillator.


A Hamiltonian tapestry generated from the Web Map for K = 0.616 and q = 4.

Periodically-Kicked Hamiltonian

The classic Hamiltonian system, perhaps the archetype of all Hamiltonian systems, is the harmonic oscillator.  The physics of the harmonic oscillator is taught in the most elementary courses, because every stable system in the world is approximated, to lowest order, as a harmonic oscillator.  Being the simplest dynamical system, one would think that it held no surprises.  But surprisingly, it can create the most beautiful tapestries of color when pulsed periodically and mapped onto the Poincaré plane.

The Hamiltonian of the periodically kicked harmonic oscillator can be converted into the Web Map, an iterative mapping on the Poincaré plane.
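A sketch of the mapping, written to match the update rules in the Python code below, with α = 2π/q and kick strength K:

\[ x_{n+1} = \left( x_n + K \sin y_n \right) \cos\alpha + y_n \sin\alpha , \]
\[ y_{n+1} = -\left( x_n + K \sin y_n \right) \sin\alpha + y_n \cos\alpha . \]

Each iteration is a kick of strength K followed by a rotation through the angle α in the phase plane.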

There can be resonance between the sequence of kicks and the natural oscillator frequency such that α = 2π/q. At these resonances, intricate web patterns emerge. The Web Map produces a web of stochastic layers when plotted on an extended phase plane. The symmetry of the web is controlled by the integer q, and the stochastic layer width is controlled by the perturbation strength K.


A tapestry for q = 6.

Web Map Python Program

Iterated maps are easy to implement in code.  Here is a simple Python code that generates maps of different types.  You can play with the coupling constant K and the periodicity q.  For small K, the tapestries are mostly regular.  But as the coupling K increases, stochastic layers emerge.  When q is a small even number, tapestries with regular symmetry are generated.  However, when q is an odd small integer, the tapestries turn into quasi-crystals.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Web map: tapestries from a periodically kicked harmonic oscillator.
@author: nolte
"""

import numpy as np
from matplotlib import pyplot as plt

plt.close('all')

phi = (1 + np.sqrt(5))/2   # golden mean
K = 1 - phi     # coupling strength; try (K, q) = (0.618, 4), (0.618, 5), (0.618, 7), (1.2, 4)
q = 4           # symmetry of the web; try 4, 5, 6, 7
alpha = 2*np.pi/q

np.random.seed(2)
plt.figure(1)

for eloop in range(1000):           # 1000 random initial conditions, one color each

    xlast = 50*np.random.random()
    ylast = 50*np.random.random()

    xnew = np.zeros(300)
    ynew = np.zeros(300)

    for loop in range(300):         # kick of strength K, then rotation through alpha
        xnew[loop] = (xlast + K*np.sin(ylast))*np.cos(alpha) + ylast*np.sin(alpha)
        ynew[loop] = -(xlast + K*np.sin(ylast))*np.sin(alpha) + ylast*np.cos(alpha)
        xlast = xnew[loop]
        ylast = ynew[loop]

    plt.plot(xnew, ynew, 'o', ms=1)

plt.xlim(-60, 60)
plt.ylim(-60, 60)

plt.title('WebMap')
plt.savefig('WebMap')

References and Further Reading

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time (Oxford, 2015)

G. M. Zaslavsky, Hamiltonian Chaos and Fractional Dynamics (Oxford, 2005)