Vladimir Arnold’s Cat Map

The 1960s are known as a time of cultural revolution, but perhaps less well known is the revolution that occurred in the science of dynamics during that decade.  Three towering figures of that revolution were Stephen Smale (1930 – ) at Berkeley, Andrey Kolmogorov (1903 – 1987) in Moscow and his student Vladimir Arnold (1937 – 2010).  Arnold was only 20 years old in 1957 when he solved Hilbert’s thirteenth problem (showing that any continuous function of several variables can be constructed from a finite number of functions of two variables).  Only a few years later, his work on the problem of small denominators in dynamical systems provided the finishing touches on the long-elusive explanation of the stability of the solar system (the problem for which Poincaré won the King Oscar Prize in mathematics in 1889, when he discovered chaotic dynamics).  This theory is known as KAM theory, after the initials of Kolmogorov, Arnold and Moser [1].  Building on his breakthrough in celestial mechanics, Arnold’s work through the 1960s remade the theory of Hamiltonian systems, creating a shift in perspective that has permanently altered how physicists look at dynamical systems.

Hamiltonian Physics on a Torus

Traditionally, Hamiltonian physics is associated with systems of inertial objects that conserve the sum of kinetic and potential energy, in other words, conservative non-dissipative systems.  But a modern view (after Arnold) of Hamiltonian systems sees them as hyperdimensional mathematical mappings that conserve volume.  The space that these mappings inhabit is phase space, and the conservation of phase-space volume is known as Liouville’s Theorem [2].  The geometry of phase space is called symplectic geometry, and the universal position that symplectic geometry now holds in the physics of Hamiltonian mechanics is largely due to Arnold’s textbook Mathematical Methods of Classical Mechanics (1974, English translation 1978) [3]. Arnold’s famous quote from that text is “Hamiltonian mechanics is geometry in phase space”. 

One of the striking aspects of this textbook is the reduction of phase-space geometry to the geometry of a hyperdimensional torus for a large class of Hamiltonian systems.  If there are as many conserved quantities as there are degrees of freedom in a Hamiltonian system, then the system is called “integrable” (because you can integrate the equations of motion to find constants of the motion).  It is then possible to map the physics onto a hyperdimensional torus through the transformation of dynamical coordinates into what are known as “action-angle” coordinates [4].  Each independent angle has an associated action that is conserved during the motion of the system.  The periodicity of the dynamical angle coordinate makes it possible to identify it with the angular coordinate of a multi-dimensional torus.  Therefore, every integrable Hamiltonian system can be mapped to motion on a multi-dimensional torus (one dimension for each degree of freedom of the system).
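The one-dimensional harmonic oscillator gives the simplest (and standard) example of this transformation. For $H = p^2/2m + m\omega^2 x^2/2$, the action is the phase-space area enclosed by an orbit (divided by $2\pi$), and the angle advances uniformly in time:

$$ J = \frac{1}{2\pi}\oint p\,dx = \frac{E}{\omega}, \qquad H = \omega J, \qquad \theta(t) = \omega t + \theta_0 $$

Each fixed value of J defines a closed loop in phase space, and in a system with several degrees of freedom each action-angle pair supplies one angular coordinate of the torus.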

Actually, integrable Hamiltonian systems are among the most boring dynamical systems you can imagine. They literally just go in circles (around the torus). But as soon as you add a small perturbation that cannot be integrated they produce some of the most complex and beautiful patterns of all dynamical systems. It was Arnold’s focus on motions on a torus, and perturbations that shift the dynamics off the torus, that led him to propose a simple mapping that captured the essence of Hamiltonian chaos.

The Arnold Cat Map

Motion on a two-dimensional torus is defined by two angles, and trajectories on a two-dimensional torus are simple helices. If the ratio of the periods of the motion in the two angles is rational, the helix repeats itself. However, if the ratio of periods (also known as the winding number) is irrational, then the helix never repeats and passes arbitrarily close to every point on the surface of the torus. This last case leads to an “ergodic” system, a term introduced by Boltzmann to describe a physical system whose trajectory fills phase space. The behavior of a helix with rational or irrational winding number is not terribly interesting. It’s just an orbit going in circles like an integrable Hamiltonian system. The helix can never even cross itself.

However, if you could add a new dimension to the torus (or add a new degree of freedom to the dynamical system), then the helix could pass over or under itself by moving into the new dimension. By weaving around itself, a trajectory can become chaotic, and the set of many trajectories can become as mixed up as a bowl of spaghetti. This can be a little hard to visualize, especially in higher dimensions, but Arnold thought of a very simple mathematical mapping that captures the essential motion on a torus, preserving volume as required for a Hamiltonian system, but with the ability for regions to become all mixed up, just like trajectories in a nonintegrable Hamiltonian system.

A unit square is isomorphic to a two-dimensional torus. This means that there is a one-to-one mapping of each point on the unit square to each point on the surface of a torus. Imagine taking a sheet of paper and forming a tube out of it. One of the dimensions of the sheet of paper is now an angle coordinate that is cyclic, going around the circumference of the tube. Now if the sheet of paper is flexible (like it is made of thin rubber) you can bend the tube around and connect the top of the tube with the bottom, like a bicycle inner tube. The other dimension of the sheet of paper is now also an angle coordinate that is cyclic. In this way a flat sheet is converted (with some bending) into a torus.

Arnold’s key idea was to create a transformation that takes the torus into itself, preserving volume, yet including the ability for regions to pass around each other. Arnold accomplished this with the simple map

$$ q_{n+1} = (q_n + p_n) \bmod 1, \qquad p_{n+1} = (q_n + 2 p_n) \bmod 1 $$

where the modulus 1 takes the unit square into itself. This transformation can also be expressed as a matrix

$$ \begin{pmatrix} q_{n+1} \\ p_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} q_n \\ p_n \end{pmatrix} $$

followed by taking modulus 1. The transformation matrix is called a Floquet matrix, and the determinant of the matrix is equal to unity, which ensures that volume is conserved.

Arnold decided to illustrate this mapping by using a crude image of the face of a cat (See Fig. 1). Successive applications of the transformation stretch and shear the cat, which is then folded back into the unit square. The stretching and folding preserve the volume, but the image becomes all mixed up, just like mixing in a chaotic Hamiltonian system, or like an immiscible dye in water that is stirred.

Fig. 1 Arnold’s illustration of his cat map from pg. 6 of V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (Benjamin, 1968) [5]
Fig. 2 Arnold Cat Map operation is an iterated succession of stretching with shear of a unit square, and translation back to the unit square. The mapping preserves and mixes areas, and is invertible.
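A minimal numerical sketch (my own code, not Arnold’s) makes the mixing easy to see: start with a tight cluster of points on the unit square and iterate the map a handful of times.

import numpy as np

def cat_map(q, p):
    # One iteration of Arnold's cat map on the unit square (torus)
    return (q + p) % 1.0, (q + 2.0*p) % 1.0

# A tight cluster of 1000 points near the center of the square
rng = np.random.default_rng(0)
q = 0.5 + 0.01*rng.standard_normal(1000)
p = 0.5 + 0.01*rng.standard_normal(1000)

for n in range(6):
    q, p = cat_map(q, p)

# After a few iterations the cluster has been stretched along the unstable
# eigendirection and folded back until it covers the square almost uniformly
print(np.histogram2d(q, p, bins=4, range=[[0, 1], [0, 1]])[0])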

Recurrence

When the transformation matrix is applied to continuous values, it produces a continuous range of transformed values that become thinner and thinner until the unit square is uniformly mixed. However, if the unit square is discrete, made up of pixels, then something very different happens (see Fig. 3). The image of the cat in this case is composed of a 50×50 array of pixels. For early iterations, the image becomes stretched and mixed, but at iteration 50 there are 4 low-resolution upside-down versions of the cat, and at iteration 75 the cat fully reforms, but is upside-down. Continuing on, the cat eventually reappears fully reformed and upright at iteration 150. Therefore, the discrete case displays a recurrence and the mapping is periodic. Calculating the period of the cat map on lattices can lead to interesting patterns, especially if the lattice dimension is a prime number [6].

Fig. 3 A discrete cat map has a recurrence period. This example with a 50×50 lattice has a period of 150.
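The recurrence period is easy to compute by brute force. The following sketch (the function name is my own) iterates the integer version of the map on an N×N lattice of pixels until every pixel returns to its starting position:

import numpy as np

def cat_map_period(N):
    # Integer cat map on an N x N lattice: (q, p) -> (q + p, q + 2p) mod N
    q, p = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    q0, p0 = q.copy(), p.copy()
    for k in range(1, 100000):
        q, p = (q + p) % N, (q + 2*p) % N
        if np.array_equal(q, q0) and np.array_equal(p, p0):
            return k
    return None   # no recurrence within the iteration cap

print(cat_map_period(50))   # prints 150, matching Fig. 3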

The Cat Map and the Golden Mean

The golden mean, or the golden ratio, $\varphi = (1 + \sqrt{5})/2 = 1.618033988749895…$, is never far away when working with Hamiltonian systems. Because the golden mean is the “most irrational” of all irrational numbers, it plays an essential role in KAM theory on the stability of the solar system. In the case of Arnold’s cat map, it pops up in several ways. For instance, the transformation matrix has eigenvalues

$$ \lambda_{\pm} = \frac{3 \pm \sqrt{5}}{2} $$

with the remarkable property that

$$ \lambda_{+} = \varphi^2, \qquad \lambda_{-} = \frac{1}{\varphi^2}, \qquad \lambda_{+}\lambda_{-} = 1 $$

which guarantees conservation of area.
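A quick numerical check (a sketch using numpy) confirms both properties:

import numpy as np

M = np.array([[1, 1], [1, 2]])       # cat-map transformation matrix
lam = np.linalg.eigvals(M)           # approximately [0.3820, 2.6180]
phi = (1 + np.sqrt(5)) / 2           # the golden mean

print(lam, lam.prod())               # the product equals det(M) = 1
print(phi**2, 1/phi**2)              # matches the two eigenvalues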


Selected V. I. Arnold Publications

Arnold, V. I. “Functions of 3 variables.” Doklady Akademii Nauk SSSR 114(4): 679-681. (1957)

Arnold, V. I. “Generation of quasi-periodic motion from a family of periodic motions.” Doklady Akademii Nauk SSSR 138(1): 13. (1961)

Arnold, V. I. “Stability of equilibrium position of a Hamiltonian system of ordinary differential equations in general elliptic case.” Doklady Akademii Nauk SSSR 137(2): 255. (1961)

Arnold, V. I. “Behaviour of an adiabatic invariant when Hamilton’s function is undergoing a slow periodic variation.” Doklady Akademii Nauk SSSR 142(4): 758. (1962)

Arnold, V. I. “Classical theory of perturbations and problem of stability of planetary systems.” Doklady Akademii Nauk SSSR 145(3): 487. (1962)

Arnold, V. I. and Y. G. Sinai. “Small perturbations of automorphisms of a torus.” Doklady Akademii Nauk SSSR 144(4): 695. (1962)

Arnold, V. I. “Small denominators and problems of the stability of motion in classical and celestial mechanics (in Russian).” Usp. Mat. Nauk. 18: 91-192. (1963)

Arnold, V. I. and A. L. Krylov. “Uniform distribution of points on a sphere and some ergodic properties of solutions to linear ordinary differential equations in complex region.” Doklady Akademii Nauk SSSR 148(1): 9. (1963)

Arnold, V. I. “Instability of dynamical systems with many degrees of freedom.” Doklady Akademii Nauk SSSR 156(1): 9. (1964)

Arnold, V. “Sur une propriété topologique des applications globalement canoniques de la mécanique classique.” Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences 261(19): 3719. (1965)

Arnold, V. I. “Applicability conditions and error estimation by averaging for systems which go through resonances in course of evolution.” Doklady Akademii Nauk SSSR 161(1): 9. (1965)


Bibliography

[1] Dumas, H. S. The KAM Story: A friendly introduction to the content, history and significance of Classical Kolmogorov-Arnold-Moser Theory, World Scientific. (2014)

[2] See Chapter 6, “The Tangled Tale of Phase Space” in Galileo Unbound (D. D. Nolte, Oxford University Press, 2018)

[3] V. I. Arnold, Mathematical Methods of Classical Mechanics (Nauka, 1974; English translation Springer, 1978)

[4] See Chapter 3, “Hamiltonian Dynamics and Phase Space” in Introduction to Modern Dynamics, 2nd ed. (D. D. Nolte, Oxford University Press, 2019)

[5] V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (Benjamin, 1968)

[6] Gaspari, G. “The Arnold cat map on prime lattices.” Physica D: Nonlinear Phenomena 73(4): 352-372. (1994)


This Blog Post is a Companion to the undergraduate physics textbook Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019) introducing Lagrangians and Hamiltonians, chaos theory, complex systems, synchronization, neural networks, econophysics and Special and General Relativity.

The Iconic Eikonal and the Optical Path

Nature loves the path of steepest descent.  Place a ball on a smooth curved surface and release it, and it will instantaneously accelerate in the direction of steepest descent.  Shoot a laser beam from an oblique angle onto a piece of glass to hit a target inside, and the path taken by the beam is the one that reaches the target in the least time.  Diffract a stream of electrons from the surface of a crystal, and quantum detection events are greatest at the positions where the peaks and troughs of the de Broglie waves interfere constructively.  The first example is Newton’s second law.  The second example is Fermat’s principle and Snell’s Law.  The third example is Feynman’s path-integral formulation of quantum mechanics.  They all share a common minimization principle, the principle of least action: the path of a dynamical system is the one that minimizes a property known as “action”.

The Eikonal Equation is the “F = ma” of ray optics.  Its solutions describe the paths of light rays through complicated media.

         The principle of least action, first proposed by the French physicist Maupertuis through mechanical analogy, became a principle of Lagrangian mechanics in the hands of Lagrange, but was still restricted to mechanical systems of particles.  The principle was generalized forty years later by Hamilton, who began by considering the propagation of light waves and ended by transforming mechanics into a study of pure geometry divorced from forces and inertia.  Optics played a key role in the development of mechanics, and mechanics returned the favor by giving optics the Eikonal Equation, whose solutions describe the paths of light rays through complicated media.

Malus’ Theorem

Anyone who has taken a course in optics knows that Étienne-Louis Malus (1775-1812) discovered the polarization of light, but little else is taught about this French mathematician, one of the savants Napoleon took along when he invaded Egypt in 1798.  After experiencing numerous horrors of war and plague, Malus returned to France damaged but wiser.  He discovered the polarization of light in the fall of 1808 as he was playing with crystals of Iceland spar at sunset and happened to view the last rays of the sun reflected from the windows of the Luxembourg Palace.  Iceland spar produces double images in natural light because it is birefringent.  Malus discovered that he could extinguish one of the double images of the Luxembourg windows by rotating the crystal a certain way, demonstrating that light is polarized by reflection.  The degree to which light is extinguished as a function of the angle of the polarizing crystal is known as Malus’ Law:

$$ I = I_0 \cos^2\theta $$

where $I_0$ is the maximum transmitted intensity and $\theta$ is the rotation angle of the polarizing crystal.

Frontispiece to the Description de l’Égypte, the first volume published by Joseph Fourier in 1808 based on the report of the savants of l’Institut d’Égypte that included Monge, Fourier and Malus, among many other French scientists and engineers.

         Malus had picked up an interest in the general properties of light and imaging during lulls in his ordeal in Egypt.  (To read about Malus’ misadventures during Napoleon’s campaign in Egypt, see Chapter 1 of Interference.) He was an emissionist following his compatriot Laplace, rather than an undulationist following Thomas Young.  It is ironic that the French scientists were staunchly supporting Newton on the nature of light, while the British scientist Thomas Young was trying to upend Newtonian optics.  Almost all physicists at that time were emissionists, only a few years after Young’s double-slit experiment of 1804, and few serious scientists accepted Young’s theory of the wave nature of light until Fresnel and Arago supplied the rigorous theory and experimental proofs much later in 1819.

Malus’ Theorem states that rays perpendicular to an initial surface are perpendicular to a later surface after reflection in an optical system. This theorem is the starting point for the Eikonal ray equation, as well as for modern applications in adaptive optics. This figure shows a propagating aberrated wavefront that is “compensated” by a deformable mirror to produce a tight focus.

         As a prelude to his later discovery of polarization, Malus had earlier proven a theorem about the trajectories that particles of light take through an optical system.  One of the key questions about the particles of light in an optical system was how they formed images.  The physics of light particles moving through lenses was too complex to treat at that time, but reflection was relatively easy based on the simple reflection law.  Malus proved mathematically that after reflection from a curved mirror, a set of rays perpendicular to an initial nonplanar surface remains perpendicular to a later surface (a property closely related to the conservation of optical étendue).  This is known as Malus’ Theorem, and he thought it only held true after a single reflection, but later mathematicians proved that it remains true even after an arbitrary number of reflections, even in cases when the rays intersect to form an optical effect known as a caustic.  The mathematics of caustics would catch the interest of an Irish mathematician and physicist who helped launch a new field of mathematical physics.

Étienne-Louis Malus

Hamilton’s Characteristic Function

William Rowan Hamilton (1805 – 1865) was a child prodigy who taught himself thirteen languages by the time he was thirteen years old (with the help of his linguist uncle), but mathematics became his primary focus at Trinity College Dublin.  His mathematical prowess was so great that he was made the Astronomer Royal of Ireland while still an undergraduate student.  He also became fascinated by the theory of envelopes of curves and in particular by the mathematics of caustic curves in optics.

         In 1823, at the age of 18, he wrote a paper titled Caustics that was read to the Royal Irish Academy.  In this paper, Hamilton gave an exceedingly simple proof of Malus’ Theorem, but that was perhaps the simplest part of the paper.  Other aspects were mathematically obscure, and reviewers requested further additions and refinements before publication.  Over the next four years, as Hamilton expanded this work on optics, he developed a new theory of optics, the first part of which was published as Theory of Systems of Rays in 1827, with two following supplements completed by 1833 but never published.

         Hamilton’s most important contribution to optical theory (and eventually to mechanics) he called his characteristic function.  By applying Fermat’s principle of least time, which he recast as his own principle of stationary action, he sought to find a single unique function that characterized every path through an optical system.  By first proving Malus’ Theorem and then applying the theorem to any system of rays using the principle of stationary action, he was able to construct two partial differential equations whose solution, if it could be found, defined every ray through the optical system.  This result was completely general and could be extended to include curved rays passing through inhomogeneous media.  Because it mapped input rays to output rays, it was the most general characterization of any defined optical system.  The characteristic function defined surfaces of constant action whose normal vectors were the rays of the optical system.  Today these surfaces of constant action are called the Eikonal function (but how it got its name is the next chapter of this story).  Using his characteristic function, Hamilton predicted a phenomenon known as conical refraction in 1832, which was subsequently observed, launching him to a level of fame unusual for an academic.

         Once Hamilton had established his principle of stationary action of curved light rays, it was an easy step to extend it to apply to mechanical systems of particles with curved trajectories.  This step produced his most famous work On a General Method in Dynamics published in two parts in 1834 and 1835 [1] in which he developed what became known as Hamiltonian dynamics.  As his mechanical work was extended by others including Jacobi, Darboux and Poincaré, Hamilton’s work on optics was overshadowed, overlooked and eventually lost.  It was rediscovered when Schrödinger, in his famous paper of 1926, invoked Hamilton’s optical work as a direct example of the wave-particle duality of quantum mechanics [2]. Yet in the interim, a German mathematician tackled the same optical problems that Hamilton had seventy years earlier, and gave the Eikonal Equation its name.

Bruns’ Eikonal

The German mathematician Heinrich Bruns (1848-1919) was engaged chiefly with the measurement of the Earth, or geodesy.  He was a professor of mathematics in Berlin and later in Leipzig.  One claim to fame is that one of his graduate students was Felix Hausdorff [3], who would go on to much greater fame in the fields of set theory and measure theory (the Hausdorff dimension was a precursor to the fractal dimension).  Possibly motivated by his studies with Hausdorff on the refraction of light by the atmosphere, Bruns became interested in Malus’ Theorem for the same reasons and with the same goals as Hamilton, yet he was unaware of Hamilton’s work in optics.

         The mathematical process of creating “images”, in the sense of a mathematical mapping, made Bruns think of the Greek word εικων, which literally means “icon” or “image”, and he published a small book in 1895 with the title Das Eikonal in which he derived a general equation for the path of rays through an optical system.  His approach was heavily geometrical and is not easily recognized as an equation arising from variational principles.  It rediscovered most of the results of Hamilton’s paper on the Theory of Systems of Rays and was thus not groundbreaking in the sense of new discovery.  But it did reintroduce the world to the problem of systems of rays, and his name Eikonal for the equations of the ray paths stuck and was used with increasing frequency in subsequent years.  Arnold Sommerfeld (1868 – 1951) was one of the early proponents of the Eikonal equation and recognized its connection with action principles in mechanics.  He discussed the Eikonal equation in a 1911 optics paper with Runge [4] and in 1916 used action principles to extend Bohr’s model of the hydrogen atom [5].  While the Eikonal approach was not used often at first, it became popular in the 1960s when computational optics made numerical solutions possible.

Lagrangian Dynamics of Light Rays

In physical optics, one of the most important properties of a ray passing through an optical system is known as the optical path length (OPL).  The OPL is the central quantity used in problems of interferometry, and it is the central property that appears in Fermat’s principle that leads to Snell’s Law.  The OPL played an important role in the history of the calculus when Johann Bernoulli in 1697 used the path taken by a light ray as an analogy to derive the brachistochrone curve – the curve of least time taken by a particle moving between two points.

            The OPL between two points in a refractive medium is the sum of the piecewise product of the refractive index n with infinitesimal elements of the path length ds.  In integral form, this is expressed as

$$ \mathrm{OPL} = \int n(x_a)\,\sqrt{\dot{x}_b\dot{x}_b}\; ds $$

where the “dot” is a derivative with respect to s.  The optical Lagrangian is recognized as

$$ L = n(x_a)\,\sqrt{\dot{x}_b\dot{x}_b} $$

The Lagrangian is inserted into the Euler equations to yield (after some algebra, see Introduction to Modern Dynamics pg. 336)

$$ \frac{d}{ds}\!\left(n\,\frac{dx_a}{ds}\right) = \frac{\partial n}{\partial x_a} $$

This is a second-order ordinary differential equation in the variables $x_a$ that define the ray path through the system.  It is literally a “trajectory” of the ray, and the Eikonal equation becomes the F = ma of ray optics.
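The algebra is short enough to display. The Euler equations $d/ds\,(\partial L/\partial \dot{x}_a) = \partial L/\partial x_a$ applied to this Lagrangian give

$$ \frac{\partial L}{\partial \dot{x}_a} = \frac{n\,\dot{x}_a}{\sqrt{\dot{x}_b\dot{x}_b}}, \qquad \frac{\partial L}{\partial x_a} = \frac{\partial n}{\partial x_a}\sqrt{\dot{x}_b\dot{x}_b} $$

and choosing the parameter s to be the arc length, so that $\sqrt{\dot{x}_b\dot{x}_b} = 1$, reduces the result to the ray equation above.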

Hamiltonian Optics

In a paraxial system (in which the rays never make large angles relative to the optic axis) it is common to select the position z as a single parameter to define the curve of the ray path, so that the optical path length along the trajectory is

$$ \mathrm{OPL} = \int n(x, y, z)\,\sqrt{1 + \dot{x}^2 + \dot{y}^2}\; dz $$

where the derivatives are with respect to z, and the effective Lagrangian is recognized as

$$ L = n(x, y, z)\,\sqrt{1 + \dot{x}^2 + \dot{y}^2} $$

The Hamiltonian formulation is derived from the Lagrangian by defining an optical Hamiltonian as the Legendre transform of the Lagrangian.  To start, the Lagrangian is expressed in terms of the generalized coordinates and momenta.  The generalized optical momenta are defined as

$$ p_a = \frac{\partial L}{\partial \dot{x}_a} = \frac{n\,\dot{x}_a}{\sqrt{1 + \dot{x}^2 + \dot{y}^2}} $$

This relationship leads to an alternative expression for the Eikonal equation (also known as the scalar Eikonal equation) expressed as

$$ \left(\frac{\partial S}{\partial x}\right)^2 + \left(\frac{\partial S}{\partial y}\right)^2 + \left(\frac{\partial S}{\partial z}\right)^2 = n^2(x,y,z) $$

where S(x,y,z) = const. defines the eikonal function.  The momentum vectors $\vec{p} = \nabla S$ are perpendicular to the surfaces of constant S, which are recognized as the wavefronts of a propagating wave.

            The Lagrangian can be restated as a function of the generalized momenta as

$$ L = \frac{n^2}{\sqrt{n^2 - p_x^2 - p_y^2}} $$

and the Legendre transform that takes the Lagrangian into the Hamiltonian is

$$ H = p_x\dot{x} + p_y\dot{y} - L = -\sqrt{n^2 - p_x^2 - p_y^2} $$

The trajectory of the rays is the solution to Hamilton’s equations of motion applied to this Hamiltonian

$$ \frac{dx_a}{dz} = \frac{\partial H}{\partial p_a}, \qquad \frac{dp_a}{dz} = -\frac{\partial H}{\partial x_a} $$
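The algebra behind the Legendre transform is compact. Writing $u = \sqrt{1 + \dot{x}^2 + \dot{y}^2}$, the squared momentum is

$$ p_x^2 + p_y^2 = \frac{n^2(u^2 - 1)}{u^2} \quad\Longrightarrow\quad \frac{n}{u} = \sqrt{n^2 - p_x^2 - p_y^2} $$

so that

$$ H = p_x\dot{x} + p_y\dot{y} - L = \frac{n(u^2 - 1)}{u} - n u = -\frac{n}{u} = -\sqrt{n^2 - p_x^2 - p_y^2} $$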

Light Orbits

If the optical rays are restricted to the x-y plane, then Hamilton’s equations of motion can be expressed relative to the path length ds, and the momenta are $p_a = n\,dx_a/ds$.  The ray equations are (simply expressing the two second-order Eikonal equations as four first-order equations)

$$ \dot{x} = \frac{p_x}{n}, \qquad \dot{y} = \frac{p_y}{n}, \qquad \dot{p}_x = \frac{\partial n}{\partial x}, \qquad \dot{p}_y = \frac{\partial n}{\partial y} $$

where the dot is a derivative with respect to the path-length element ds.

As an example, consider a radial refractive index profile in the x-y plane

$$ n(r) = 1 + e^{-r^2/2\sigma^2} $$

where r is the radius on the x-y plane. Putting this refractive index profile into the Eikonal equations creates a two-dimensional orbit in the x-y plane. The Eikonal Equation is the “F = ma” of ray optics: its solutions describe the paths of light rays through complicated media, including the phenomenon of gravitational lensing (see my blog post) and the orbits of photons around black holes (see my other blog post).

By David D. Nolte, May 30, 2019

Gaussian refractive index profile in the x-y plane. From raysimple.py.
Ray orbits around the center of the Gaussian refractive index profile. From raysimple.py.

Python Code: raysimple.py

The following Python code solves for individual trajectories. (Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
raysimple.py
Created on Tue May 28 11:50:24 2019
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from matplotlib import cm

plt.close('all')

# selection 1 = Gaussian
# selection 2 = Donut
selection = 1

print(' ')
print('raysimple.py')

def refindex(x,y):
    
    if selection == 1:
        
        # Gaussian refractive index profile and its gradient (nx, ny)
        sig = 10
        
        n = 1 + np.exp(-(x**2 + y**2)/2/sig**2)
        nx = (-x/sig**2)*np.exp(-(x**2 + y**2)/2/sig**2)
        ny = (-y/sig**2)*np.exp(-(x**2 + y**2)/2/sig**2)
        
    elif selection == 2:
        
        # Donut (ring-shaped) profile n = 1 + 0.3 r exp(-r^2/2 sig^2)
        # and its gradient by the product rule
        sig = 10
        r2 = (x**2 + y**2)
        r1 = np.sqrt(r2)
        expon = np.exp(-r2/2/sig**2)    # local variable (not an attribute on np)
        
        n = 1 + 0.3*r1*expon
        nx = 0.3*r1*(-x/sig**2)*expon + 0.3*expon*x/r1
        ny = 0.3*r1*(-y/sig**2)*expon + 0.3*expon*y/r1
    
    return [n,nx,ny]


def flow_deriv(x_y_z,tspan):
    # Ray equations: xdot = px/n, ydot = py/n, pxdot = dn/dx, pydot = dn/dy
    x, y, px, py = x_y_z
    
    n, nx, ny = refindex(x,y)
    
    yp = np.zeros(shape=(4,))
    yp[0] = px/n
    yp[1] = py/n
    yp[2] = nx
    yp[3] = ny
    
    return yp
                
# Sample the refractive index on a grid for a contour plot
V = np.zeros(shape=(100,100))
for xloop in range(100):
    xx = -20 + 40*xloop/100
    for yloop in range(100):
        yy = -20 + 40*yloop/100
        n,nx,ny = refindex(xx,yy) 
        V[yloop,xloop] = n

fig = plt.figure(1)
contr = plt.contourf(V,100, cmap=cm.coolwarm, vmin = 1, vmax = 3)
fig.colorbar(contr, shrink=0.5, aspect=5)    
plt.show()


v1 = 0.707      # Change this initial condition
v2 = np.sqrt(1-v1**2)
y0 = [12, 0, v1, v2]     # Initial [x, y, px, py]; change these initial conditions

tspan = np.linspace(1,1700,1700)

y = integrate.odeint(flow_deriv, y0, tspan)

plt.figure(2)
lines = plt.plot(y[1:1550,0],y[1:1550,1])
plt.setp(lines, linewidth=0.5)
plt.show()
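As a quick sanity check (a sketch that assumes the arrays y and the function refindex from the script above are still in scope), the combination px² + py² − n², which these ray equations conserve, should remain constant along each computed ray:

# Conservation check: d/ds(px^2 + py^2 - n^2) = 0 for the ray equations above
n_along = np.array([refindex(xx, yy)[0] for xx, yy in zip(y[:, 0], y[:, 1])])
C = y[:, 2]**2 + y[:, 3]**2 - n_along**2
print(C.min(), C.max())   # nearly equal if the integration is accurate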


New from Oxford University Press: Interference and the History of Light and Optics (2023)

Read the stories of the scientists and engineers who tamed light and used it to probe the universe.



Bibliography

An excellent textbook on geometric optics from Hamilton’s point of view is K. B. Wolf, Geometric Optics in Phase Space (Springer, 2004). Another is H. A. Buchdahl, An Introduction to Hamiltonian Optics (Dover, 1992).

A rather older textbook on geometrical optics is by J. L. Synge, Geometrical Optics: An Introduction to Hamilton’s Method (Cambridge University Press, 1962) showing the derivation of the ray equations in the final chapter using variational methods. Synge takes a dim view of Bruns’ term “Eikonal” since Hamilton got there first and Bruns was unaware of it.

A book that makes an especially strong case for the Optical-Mechanical analogy of Fermat’s principle, connecting the trajectories of mechanics to the paths of optical rays, is Darryl Holm, Geometric Mechanics: Part I: Dynamics and Symmetry (Imperial College Press, 2008).

The Eikonal ray equation is derived from the geodesic equation (or rather as a geodesic equation) in D. D. Nolte, Introduction to Modern Dynamics, 2nd ed. (Oxford, 2019).


References

[1] Hamilton, W. R. “On a general method in dynamics I.” Mathematical Papers, I, 103-161: 247-308. (1834); Hamilton, W. R. “On a general method in dynamics II.” Mathematical Papers, I, 103-161: 95-144. (1835)

[2] Schrödinger, E. “Quantisierung als Eigenwertproblem (Quantization as an eigenvalue problem).” Annalen der Physik 79(6): 489-527. (1926)

[3] For the fateful story of Felix Hausdorff (aka Paul Mongré) see Chapter 9 of Galileo Unbound (Oxford, 2018).

[4] Sommerfeld, A. and J. Runge. “The application of vector calculations on the basis of geometric optics.” Annalen Der Physik 35(7): 277-298. (1911)

[5] Sommerfeld, A. “The quantum theory of spectral lines.” Annalen Der Physik 51(17): 1-94. (1916)



Freeman Dyson’s Quantum Odyssey

In the fall semester of 1947, a brilliant young British mathematician arrived at Cornell University to begin a yearlong fellowship paid by the British Commonwealth.  Freeman Dyson (1923 –) had received an undergraduate degree in mathematics from Cambridge University and was considered to be one of their brightest graduates.  With strong recommendations, he arrived to work with Hans Bethe on quantum electrodynamics.  He made rapid progress on a relativistic model of the Lamb shift, inadvertently intimidating many of his fellow graduate students with his mathematical prowess.  On the other hand, someone who intimidated him was Richard Feynman.

Initially, Dyson considered Feynman to be a bit of a buffoon and slacker, but he started to notice that Feynman could calculate QED problems in a few lines that took him pages.

Freeman Dyson at Princeton in 1972.

I think like most science/geek types, my first introduction to the unfettered mind of Freeman Dyson was through the science fiction novel Ringworld by Larry Niven. The Dyson ring, or Dyson sphere, was conceived by Dyson when he was thinking about the ultimate fate of civilizations and their increasing need for energy. The greatest source of energy on a stellar scale is of course a star, and Dyson envisioned an advanced civilization capturing all that emitted stellar energy by building a solar collector with a radius the size of a planetary orbit. He published the paper “Search for Artificial Stellar Sources of Infra-Red Radiation” in the prestigious magazine Science in 1960. The practicality of such a scheme has to be seriously questioned, but it is a classic example of how easily he thinks outside the box, taking simple principles and extrapolating them to extreme consequences until the box looks like a speck of dust. I got a first-hand chance to see his way of thinking when he gave a physics colloquium at Cornell University in 1980 when I was an undergraduate there. Hans Bethe still had his office at that time in the Newman laboratory. I remember walking by and looking into his office, catching a glimpse of him editing a paper at his desk. The topic of Dyson’s talk was the fate of life in the long-term evolution of the universe. His arguments were so simple they could not be refuted, yet the consequences for the way life would need to evolve in extreme time were unimaginable … it was a bizarre and mind-blowing experience for me as an undergrad … and an example of the strange worlds that can be imagined through simple physics principles.

Initially, as Dyson settled into his life at Cornell under Bethe, he considered Feynman to be a bit of a buffoon and slacker, but he started to notice that Feynman could calculate QED problems in a few lines that took him pages.  Dyson paid closer attention to Feynman, eventually spending more of his time with him than with Bethe, and realized that Feynman had invented an entirely new way of calculating quantum effects that used cartoons as a form of bookkeeping to reduce the complexity of many calculations.  Dyson still did not fully understand how Feynman was doing it, but knew that Feynman’s approach was giving all the right answers.  Around that time, he also began to read about Schwinger’s field-theory approach to QED, following Schwinger’s approach as far as he could, but always coming away with the feeling that it was too complicated and required too much math—even for him!

Road Trip Across America

That summer, Dyson had time to explore America for the first time because Bethe had gone on an extended trip to Europe.  It turned out that Feynman was driving his car to New Mexico to patch things up with an old flame from his Los Alamos days, so Dyson was happy to tag along.  For days, as they drove across the US, they talked about life and physics and QED.  Dyson had Feynman all to himself and began to see daylight in Feynman’s approach, and to understand that it might be consistent with Schwinger’s and Tomonaga’s field theory approach.  After leaving Feynman in New Mexico, he travelled to the University of Michigan where Schwinger gave a short course on QED, and he was able to dig deeper, talking with him frequently between lectures. 

At the end of the summer, it had been arranged that he would spend the second year of his fellowship at the Institute for Advanced Study in Princeton, where Oppenheimer was the new head.  As a final lark before beginning that new phase of his studies he spent a week at Berkeley.  The visit there was uneventful, and he did not find the same kind of open camaraderie that he had found with Bethe in the Newman Laboratory at Cornell, but it left him time to think.  And the more he thought about Schwinger and Feynman, the more convinced he became that the two were equivalent.  On the long bus ride back east from Berkeley, as he half dozed and half looked out the window, he had an epiphany.  He saw all at once how to draw the map from one to the other.  What was more, he realized that many of Feynman’s techniques were much simpler than Schwinger’s, which would significantly simplify lengthy calculations.  By the time he arrived in Chicago, he was ready to write it all down, and by the time he arrived in Princeton, he was ready to publish.  It took him only a few weeks to do it, working with an intensity that he had never experienced before.  When he was done, he sent the paper off to the Physical Review [1].

Dyson knew that he had achieved something significant even though he was essentially just a second-year graduate student, at least from the point of view of the American post-graduate system.  Cambridge was a little different, and Dyson’s degree there was more than the standard bachelor’s degree here.  Nonetheless, he was now under the auspices of the Institute for Advanced Study, where Einstein had his office, and he had sent off an unsupervised manuscript for publication without any imprimatur from the powers that be.  The specific power that mattered most was Oppenheimer, who arrived a few days after Dyson had submitted his manuscript.  When he greeted Oppenheimer, he was excited and pleased to hand him a copy.  Oppenheimer, on the other hand, was neither excited nor pleased to receive it.  Oppenheimer had formed a particularly bad opinion of Feynman’s form of QED at the conference held in the Poconos (to read about Feynman’s disaster at the Poconos conference, see my blog) half a year earlier and did not think that this brash young grad student could save it.  Dyson, for his part, was taken aback.  No one who has ever met Dyson would ever call him brash, but in this case he fought for a higher cause, writing a bold memo to Oppenheimer—that terrifying giant of a personality—outlining the importance of the Feynman theory.

Battle for the Heart of Quantum Field Theory 

Oppenheimer decided to give Dyson a chance, and arranged for a series of seminars where Dyson could present the story to the assembled theory group at the Institute, but Dyson could make little headway.  Every time he began to make progress, Oppenheimer would bring it crashing to a halt with scathing questions and criticisms.  This went on for weeks, until Bethe visited from Cornell.  Bethe by then was working with the Feynman formalism himself.  As Bethe lectured in front of Oppenheimer, he seeded his talk with statements such as “surely they had all seen this from Dyson”, and Dyson took the opportunity to pipe up that he had not been allowed to get that far.  After Bethe left, Oppenheimer relented, arranging for Dyson to give three seminars in one week.  The seminars each went on for hours, but finally Dyson got to the end of it.  The audience shuffled out of the seminar room with no energy left for discussions or arguments.  Later that day, Dyson found a note in his box from Oppenheimer saying “Nolo contendere”—Dyson had won!

With that victory under his belt, Dyson was in a position to communicate the new methods to a small army of postdocs at the Institute, supervising their progress on many outstanding problems in quantum electrodynamics that had resisted calculations using the complicated Schwinger-Tomonaga theory.  Feynman, by this time, had finally published two substantial papers on his approach[2], which added to the foundation that Dyson was building at Princeton.  Although Feynman continued to work for a year or two on QED problems, the center of gravity for these problems shifted solidly to the Institute for Advanced Study and to Dyson.  The army of postdocs that Dyson supervised helped establish the use of Feynman diagrams in QED, calculating ever higher-order corrections to electromagnetic interactions.  These same postdocs were among the first batch of wartime-trained theorists to move into faculty positions across the US, bringing the method of Feynman diagrams with them, adding to the rapid dissemination of Feynman diagrams into many aspects of theoretical physics that extend far beyond QED [3].

As a graduate student at Berkeley in the 1980s I ran across a very simple-looking equation called “the Dyson equation” in our graduate textbook on relativistic quantum mechanics by Sakurai. The Dyson equation is the extraordinarily simple expression of an infinite series of Feynman diagrams that describes how an electron interacts with itself through the emission of virtual photons that link to virtual electron-positron pairs. This process leads to the propagator Green’s function for the electron and is the starting point for including the simple electron in more complex particle interactions.
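In symbols (a standard way to write it, with $G_0$ the bare electron propagator and $\Sigma$ the self-energy that sums the irreducible diagrams), the infinite geometric series of diagrams collapses into a single self-consistent equation:

$$ G = G_0 + G_0\,\Sigma\,G \quad\Longrightarrow\quad G = \left(G_0^{-1} - \Sigma\right)^{-1} $$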

The Dyson equation for the single-electron Green’s function represented as an infinite series of Feynman diagrams.

I had no feel for the use of the Dyson equation, barely limping through relativistic quantum mechanics, until a few years later when I was working at Lawrence Berkeley Lab with Mirek Hamera, a visiting scientist from Warsaw, Poland, who introduced me to the Haldane-Anderson model that applied to a project I was working on for my PhD. Using the theory, with Dyson’s equation at its heart, we were able to show that tightly bound electrons on transition-metal impurities in semiconductors acted as internal reference levels that allowed us to measure internal properties of semiconductors that had never been accessible before. A few years later, I used Dyson’s equation again when I was working on small precipitates of arsenic in the semiconductor GaAs, using the theory to describe an accordion-like ladder of electron states that can occur within the semiconductor bandgap when a nano-sphere takes on multiple charges [4].

The Coulomb ladder of deep energy states of a nano-sphere in GaAs calculated using self-energy principles first studied by Dyson.

I last saw Dyson when he gave the Hubert James Memorial Lecture at Purdue University in 1996. The title of his talk was “How the Dinosaurs Might Have Been Saved: Detection and Deflection of Earth-Impacting Bodies”. As always, his talk was wild and wide ranging, using the simplest possible physics to derive the most dire consequences of our continued existence on this planet.


[1] Dyson, F. J. “The radiation theories of Tomonaga, Schwinger, and Feynman.” Physical Review 75(3): 486-502. (1949)

[2] Feynman, R. P. “The theory of positrons.” Physical Review 76(6): 749-759. (1949); Feynman, R. P. “Space-time approach to quantum electrodynamics.” Physical Review 76(6): 769-789. (1949)

[3] Kaiser, D., K. Ito and K. Hall. “Spreading the tools of theory: Feynman diagrams in the USA, Japan, and the Soviet Union.” Social Studies of Science 34(6): 879-922. (2004)

[4] Nolte, D. D. “Mesoscopic point-like defects in semiconductors.” Phys. Rev. B 58(12): 7994. (1998)

Georg Duffing’s Equation

Although coal and steam launched the industrial revolution, gasoline and controlled explosions have sustained it for over a century.  After early precursors, the internal combustion engine that we recognize today came to life in 1876 from the German engineers Otto and Daimler with later variations by Benz and Diesel.  In the early 20th century, the gasoline engine was replacing coal and oil in virtually all mobile conveyances and had become a major industry attracting the top mechanical engineering talent.  One of those talents was the German engineer Georg Duffing (1861 – 1944) whose unlikely side interest in the quantum mechanics revolution brought him to Berlin to hear lectures by Max Planck, where he launched his own revolution in nonlinear oscillators.

The publication of this highly academic book by a nonacademic would establish Duffing as the originator of one of the most iconic oscillators in modern dynamics.

An Academic Non-Academic

Georg Duffing was born in 1861 in the German town of Waldshut on the border with Switzerland north of Zurich.  Within a year the family moved to Mannheim near Heidelberg where Georg received a good education in mathematics as well as music.  His mathematical interests attracted him to engineering, and he built a reputation that led to an invitation to work at Westinghouse in the United States in 1910.  When he returned to Germany he set himself up as a consultant and inventor with the freedom to move where he wished.  In early 1913 he wished to move to Berlin where Max Planck was lecturing on the new quantum mechanics at the University.  He was always searching for new knowledge, and sitting in on Planck’s lectures must have made him feel like he was witnessing the beginnings of a new era.            

At that time Duffing was interested in problems related to brakes, gears and engines.  In particular, he had become fascinated by vibrations that often were the limiting factors in engine performance.  He stripped the problem of engine vibration down to its simplest form, and he began a careful and systematic study of nonlinear oscillations.  While in Berlin, he became acquainted with Prof. Meyer at the University, who had a mechanical engineering laboratory.  Meyer let Duffing perform his experiments in the lab on the weekends, sometimes accompanied by his eldest daughter.  By 1917 he had compiled a systematic investigation of various nonlinear effects in oscillators and had written a manuscript that collected all of this theoretical and experimental work.  He extended this into a small book that he published with Vieweg & Sohn in 1918, to be purchased for a price of 5 marks [1].   The publication of this highly academic book by a nonacademic would establish Duffing as the originator of one of the most iconic oscillators in modern dynamics.

Fig. 1 Cover of Duffing’s 1918 publication on nonlinear oscillators.

Duffing’s Nonlinear Oscillator

The mathematical and technical focus of Duffing’s book was low-order nonlinear corrections to the linear harmonic oscillator.  In one case, he considered a spring that either became stiffer or softer as it stretched.  This happens when a cubic term is added to the usual linear Hooke’s law.  In another case, he considered a spring that was stiffer in one direction than another, making the stiffness asymmetric.  This happens when a quadratic term is added.  These terms are shown in Fig. 2 from Duffing’s book.  The top equation is a free oscillation, and the bottom equation has a harmonic forcing function.  These were the central equations that Duffing explored, plus the addition of damping that he considered in a later chapter as shown in Fig. 3. The book lays out systematically, chapter by chapter, approximate and series solutions to the nonlinear equations, and in special cases described analytically exact solutions (such as for the nonlinear pendulum).

Fig. 2 Duffing’s equations without damping for free oscillation and driven oscillation with quadratic (producing an asymmetric potential) and cubic (producing stiffening or softening) corrections to the spring force.
Fig. 3 Inclusion of damping in the case with cubic corrections to the spring force.

Duffing was a practical engineer as well as a mathematical one, and he built experimental systems to test his solutions.  An engineering drawing of his experimental test apparatus is shown in Fig. 4. The small test pendulum is at S in the figure. The large pendulum at B is the drive pendulum, chosen to be much heavier than the test pendulum so that it can deliver a steady harmonic force through spring F1 to the test system. The cubic nonlinearity of the test system was controlled through the choice of the length of the test pendulum, and the quadratic nonlinearity (the asymmetry) was controlled by allowing the equilibrium angle to be shifted from vertical. The relative strength of the quadratic and cubic terms was adjusted by changing the position of the mass at G. Duffing derived expressions for all the coefficients of the equations in Fig. 2 in terms of experimentally controlled variables. Using this apparatus, Duffing verified his solutions to good accuracy for various special cases.

Fig. 4 Duffing’s experimental system he used to explore and verify his equations and solutions.

           Duffing’s book is a masterpiece of careful systematic investigation, beginning in general terms, and then breaking the problem down into its special cases, finding solutions for each one with accurate experimental verifications. These attributes established the importance of this little booklet in the history of science and technology, but because it was written in German, most of the early citations were by German scientists.  The first use of Duffing’s name in association with the nonlinear oscillator problem occurred in 1928 [2], and the first reference to him in an English-language work came in a book by Timoshenko [3].  The first use of the phrase “Duffing Equation” specifically to describe an oscillator with a linear and cubic restoring force was in 1942 in a series of lectures presented at Brown University [4], and this nomenclature had become established by the end of that decade [5].  Although Duffing had devoted considerable attention in his book to the quadratic term for an asymmetric oscillator, the term “Duffing Equation” now refers to the stiffening and softening problem rather than to the asymmetric problem.

Fig. 5 The Duffing equation is generally expressed as a harmonic oscillator (first three terms plus the harmonic drive) modified by a cubic nonlinearity:

$$ \ddot{x} + \delta\dot{x} + \alpha x + \beta x^3 = \gamma\cos(\omega t) $$

Duffing Rediscovered

Nonlinear oscillations remained mainly in the realm of engineering for nearly half a century, until a broad spectrum of physical scientists began to discover deep secrets hiding behind the simple equations.  In 1963 Edward Lorenz (1917 – 2008) of MIT published a paper that showed how simple nonlinearities in three equations describing the atmosphere could produce deterministic behavior that appeared to be completely chaotic.  News of this paper spread as researchers in many seemingly unrelated fields began to see similar signatures in chemical reactions, turbulence, electric circuits and mechanical oscillators.  By 1972, when Lorenz was invited to give a talk on the “Butterfly Effect”, the science of chaos was emerging as a new frontier in physics, and in 1975 it was given its name “chaos theory” by James Yorke (1941 – ).  By 1976 it had become one of the hottest new areas of science.

        Through the period of the emergence of chaos theory, the Duffing oscillator was known to be one of the archetypical nonlinear oscillators.  A particularly attractive aspect of the general Duffing equations is the possibility of studying a “double-well” potential.  This happens when the “alpha” in the equation in Fig. 5 is negative and the “beta” is positive.  The double-well potential has a long history in physics, both classical and modern, because it represents a “two-state” system that exhibits bistability, bifurcations, and hysteresis.  For a fixed “beta” the potential energy as a function of “alpha” is shown in Fig. 6.  The bifurcation cascades of the double-well Duffing equation were investigated by Philip Holmes (1945 – ) in 1976 [6], and the properties of the strange attractor were demonstrated in 1979 [7] by Yoshisuke Ueda (1936 – ).  Holmes, and others, continued to do detailed work on the chaotic properties of the Duffing oscillator, helping to make it one of the most iconic systems of chaos theory.

Fig. 6 Potential energy of the Duffing Oscillator. The position variable is x, and changing alpha is along the other axis. For positive beta and alpha the potential is a quartic. For positive beta and negative alpha the potential is a double well.
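A short sketch (my own parameter choices, matching those in the code below) reproduces the double-well case of this potential, whose negative gradient supplies the restoring force in the Duffing equation:

import numpy as np
import matplotlib.pyplot as plt

alpha, beta = -1.0, 1.0                  # double well: alpha < 0, beta > 0
x = np.linspace(-2, 2, 400)
U = 0.5*alpha*x**2 + 0.25*beta*x**4      # force F = -dU/dx = -alpha*x - beta*x**3

plt.plot(x, U)
plt.xlabel('x')
plt.ylabel('U(x)')
plt.title('Double-well potential of the Duffing oscillator')
plt.show()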

Python Code for the Duffing Oscillator: Duffing.py

This Python code uses the simple ODE solver on the driven-damped Duffing double-well oscillator to display the configuration-space trajectories and the Poincaré map of the strange attractor. (Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Duffing.py
Created on Wed May 21 06:03:32 2018
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('Duffing.py')

# Duffing oscillator: xddot + gam*xdot + alpha*x + beta*x**3 = delta*cos(w*t)
alpha = -1        # negative alpha gives the double-well potential
beta = 1          # positive beta gives the stiffening cubic nonlinearity
delta = 0.3       # drive amplitude
gam = 0.15        # damping coefficient
w = 1             # drive frequency
def flow_deriv(x_y_z,tspan):
    # State is (x, v, phase); the drive phase z = w*t is carried along
    # while the forcing term uses tspan directly
    x, y, z = x_y_z
    a = y
    b = delta*np.cos(w*tspan) - alpha*x - beta*x**3 - gam*y
    c = w
    return [a,b,c]
                
T = 2*np.pi/w

px1 = np.random.rand(1)
xp1 = np.random.rand(1)
w1 = 0

x_y_z = [xp1, px1, w1]

# Integrate through a long transient to settle onto the attractor
t = np.linspace(0, 2000, 40000)
x_t = integrate.odeint(flow_deriv, x_y_z, t)
x0 = x_t[-1,:]          # use the final state as the new initial condition

tspan = np.linspace(1,20000,400000)
x_t = integrate.odeint(flow_deriv, x0, tspan)
siztmp = np.shape(x_t)
siz = siztmp[0]

y1 = x_t[:,0]
y2 = x_t[:,1]
y3 = x_t[:,2]
    
plt.figure(2)
lines = plt.plot(y1[1:2000],y2[1:2000],'ko',ms=1)
plt.setp(lines, linewidth=0.5)
plt.show()

for cloop in range(0,3):

    # Strobe phase for this Poincare section (three phases of the drive period)
    phase = np.pi*cloop/3

    repnum = 5000
    px = np.zeros(shape=(2*repnum,))
    xvar = np.zeros(shape=(2*repnum,))
    cnt = -1
    # Detect each crossing of the strobe phase and linearly interpolate
    # the state variables to the crossing time
    testwt = np.mod(tspan-phase,T) - 0.5*T
    last = testwt[1]
    for loop in range(2,siz):
        if (last < 0) and (testwt[loop] > 0):
            cnt = cnt+1
            del1 = -testwt[loop-1]/(testwt[loop] - testwt[loop-1])
            px[cnt] = (y2[loop]-y2[loop-1])*del1 + y2[loop-1]
            xvar[cnt] = (y1[loop]-y1[loop-1])*del1 + y1[loop-1]
            last = testwt[loop]
        else:
            last = testwt[loop]
 
    plt.figure(3)
    if cloop == 0:
        lines = plt.plot(xvar,px,'bo',ms=1)
    elif cloop == 1:
        lines = plt.plot(xvar,px,'go',ms=1)
    else:
        lines = plt.plot(xvar,px,'ro',ms=1)
        
    plt.show()

plt.savefig('Duffing')

Fig. 7 Strange attractor of the double-well Duffing equation for three selected phases.




References

[1] G. Duffing, Erzwungene Schwingungen bei veränderlicher Eigenfrequenz und ihre technische Bedeutung, Vieweg & Sohn, Braunschweig, 1918.

[2] Lachmann, K. “Duffing’s vibration problem.” Mathematische Annalen 99: 479-492. (1928)

[3] S. Timoshenko, Vibration Problems in Engineering, D. Van Nostrand Company, Inc.,New York, 1928.

[4] K.O. Friedrichs, P. Le Corbeiller, N. Levinson, J.J. Stoker, Lectures on Non-Linear Mechanics delivered at Brown University, New York, 1942.

[5] Kovacic, I. and M. J. Brennan, Eds. The Duffing Equation: Nonlinear Oscillators and their Behavior. Chichester, United Kingdom, Wiley. (2011)

[6] Holmes, P. J. and D. A. Rand. “Bifurcations of Duffing’s equation – Application of catastrophe theory.” Journal of Sound and Vibration 44(2): 237-253. (1976)

[7] Ueda, Y. “Randomly transitional phenomena in the system governed by Duffing’s equation.” Journal of Statistical Physics 20(2): 181-196. (1979)

Feynman and the Dawn of QED

In the years immediately following the Japanese surrender at the end of WWII, before the horror and paranoia of global nuclear war had time to sink into the psyche of the nation, atomic scientists were the rock stars of their times.  Not only had they helped end the war with a decisive stroke, they were also the geniuses who were going to lead the US and the World into a bright new future of possibilities.  To help kick off the new era, the powers in Washington proposed to hold a US meeting modeled on the European Solvay Congresses.  The invitees would be a select group of the leading atomic physicists: invitation only!  The conference was held at the Rams Head Inn on Shelter Island, at the far end of Long Island, New York, in June of 1947.  The two dozen scientists arrived in a motorcade with police escort and national press coverage.  Richard Feynman was one of the select invitees, although he had done little fundamental work beyond his doctoral thesis with Wheeler.  This would be his first real chance to expound on his path-integral formulation of quantum mechanics.  It was also his first conference where he was with all the big guns.  Oppenheimer and Bethe were there, as well as Wheeler and Kramers, von Neumann and Pauling.  It was an august crowd and an auspicious occasion.

Shelter Island and the Foundations of Quantum Mechanics

            The topic that had been selected for the conference was Foundations of Quantum Mechanics, which at that time meant quantum electrodynamics, known as QED, a theory that was at the forefront of theoretical physics, but mired in theoretical difficulties.  Specifically, it was waist deep in infinities that cropped up in calculations that went beyond the lowest order.  The theorists could do back-of-the-envelope calculations with ease and arrive quickly at rough numbers that closely matched experiment, but as soon as they tried to be more accurate, results diverged, mainly because of the self-energy of the electron, which was the problem that Wheeler and Feynman had started on at the beginning of his doctoral studies [1].  As long as experiments had only limited resolution, the calculations were often good enough.  But at the Shelter Island conference, Willis Lamb, a theorist-turned-experimentalist from Columbia University, announced the highest resolution atomic spectroscopy of atomic hydrogen ever attained, and there was a deep surprise in the experimental results.

An obvious photo-op at Shelter Island with, left to right: W. Lamb, Abraham Pais, John Wheeler (holding paper), Richard P. Feynman (holding pen), Herman Feshbach and Julian Schwinger.

            Hydrogen, of course, is the simplest of all atoms.  This was the atom that launched Bohr’s model, inspired Heisenberg’s matrix mechanics and proved Schrödinger’s wave mechanics.  Deviations from the classical Bohr levels, measured experimentally, were the testing grounds for Dirac’s relativistic quantum theory that had enjoyed unparalleled success until Lamb’s presentation at Shelter Island.  Lamb showed there was an exceedingly small energy splitting of about 200 parts in a billion that amounted to a wavelength of 28 cm in the microwave region of the electromagnetic spectrum.  This splitting was not predicted, nor could it be described, by the formerly successful relativistic Dirac theory of the electron. 

The audience was abuzz with excitement.  Here was a very accurate measurement that stood ready for the theorists to test their theories on.  In the discussions, Oppenheimer guessed that the splitting was likely caused by electromagnetic interactions related to the self energy of the electron.  Victor Weisskopf of MIT with Julian Schwinger of Harvard suggested that, although the total energy calculations of each level might be infinite, the difference in energy ΔE should be finite.  After all, in spectroscopy it is only the energy difference that is measured experimentally.  Absolute energies are not accessible directly to experiment.  The trick was how to subtract one infinity from another in a consistent way to get a finite answer.  Many of the discussions in the hallways, as well as many of the presentations, revolved around this question.  For instance, Kramers suggested that there should be two masses in the electron theory—one is the observed electron mass seen in experiments, and the second is a type of internal or bare mass of the electron to be used in perturbation calculations.

On the train ride upstate after the Shelter Island Conference, Hans Bethe took out his pen and a sheaf of paper and started scribbling down ideas about how to use mass renormalization, subtracting infinity from infinity in a precise and consistent way to get finite answers in the QED calculations.  He made surprising progress, and by the time the train pulled into the station at Schenectady he had achieved a finite calculation in reasonable agreement with the Lamb shift.  Oppenheimer had been right that the Lamb shift was electromagnetic in origin, and the suggestion by Weisskopf and Schwinger that the energy difference would be finite was indeed the correct approach.  Bethe was thrilled with his own progress and quickly wrote up a draft and sent copies to Oppenheimer and Weisskopf [2].  Oppenheimer’s reply was gracious, but Weisskopf initially bristled because he also had tried the calculations after the conference, but had failed where Bethe had succeeded.  On the other hand, both pointed out to Bethe that his calculation was non-relativistic, and that a relativistic calculation was still needed.

When Bethe returned to Cornell, he told Feynman about the success of his calculations but that a relativistic version was still missing.  Feynman told him on the spot that he knew how to do it and that he would have it the next day.  Feynman’s optimism was based on the new approach to relativistic quantum electrodynamics that he had been developing with the aid of his newly invented “Feynman Diagrams”.  Despite his optimism, he hit a snag that evening as he tried to calculate the self-energy of the electron.  When he met with Bethe the next day, they both tried to reconcile the calculations with Feynman’s new approach, but they failed to find a path through the calculations that made sense.  Somewhat miffed, because he knew that his approach should work, Feynman got down to work in a way that he had usually avoided (he had always liked finding the “easy” path through tough problems).  Over several intense months, he began to see how it all would work out.

At the same time that Feynman was making progress on his work, word arrived at Cornell of progress being made by Julian Schwinger at Harvard.  Schwinger was a mathematical prodigy like Feynman, and also like Feynman had grown up in New York City, but they came from very different neighborhoods and had very different styles.  Schwinger was a formalist who pursued everything with precision and mathematical rigor.  He lectured calmly without notes in flawless presentations.  Feynman, on the other hand, did his physics by feel.  He made intuitive guesses and checked afterwards if they were right, testing ideas through trial and error.  His lectures ranged widely, with great energy, without structure, following wherever the ideas might lead.  This difference in approach and style between Schwinger and Feynman would have embarrassing consequences at the upcoming sequel to the Shelter Island conference that was to be held in late March 1948 at a resort in the Pocono Mountains in Pennsylvania.

The Conference in the Poconos

The Pocono conference was poised to be for the theorists Schwinger and Feynman what Shelter Island had been for the experimentalists Rabi and Lamb—a chance to drop bombshells.  There was a palpable buzz leading up to the conference, with advance word coming from Schwinger about his successful calculation of the g-factor of the electron and the Lamb shift.  In addition to the attendees who had been at Shelter Island, the Pocono conference was attended by Bohr and Dirac—two of the giants who had invented quantum mechanics.  Schwinger presented first.  He had developed a rigorous mathematical method to remove the infinities from QED, enabling him to make detailed calculations of the QED corrections—a significant achievement—but the method was terribly complicated and tedious.  His presentation went on for many hours in his carefully crafted style, without notes, delivered like a speech.  Even so, the audience grew restless, and whenever Schwinger tried to justify his work on physical grounds, Bohr would speak up, and arguments among the attendees would ensue, after which Schwinger would say that all would become clear at the end.  Finally, he came to the end, where only Fermi and Bethe had followed him.  The rest of the audience was in a daze.

Feynman was nervous.  It had seemed to him that Schwinger’s talk had gone badly, despite Schwinger’s careful preparation.  Furthermore, the audience was spent and not in a mood to hear anything challenging.  Bethe suggested that if Feynman stuck to the math instead of the physics, then the audience might not interrupt so much.  So Feynman restructured his talk in the short break before he was to begin.  Unfortunately, Feynman’s strength was in physical intuition, and although he was no slouch at math, he was guided by visualization and by trial and error.  Many of the steps in his method worked (he knew this because they gave the correct answers and because he could “feel” they were correct), but he did not have all the mathematical justifications.  What he did have was a completely new way of thinking about quantum electromagnetic interactions and a new way of making calculations that were far simpler and faster than Schwinger’s.  The challenge was that he relied on space-time graphs in which “unphysical” things were allowed to occur, and in fact were required to occur, as part of the sum over many histories of his path integrals.  For instance, a key element in the approach was allowing electrons to travel backwards in time as positrons.  In addition, a process in which the electron and positron annihilate into a single photon, and then the photon decays into an electron-positron pair, is not allowed by energy and momentum conservation, but this is a possible history that must add to the sum.  As long as the time between the photon emission and decay is short enough to satisfy Heisenberg’s uncertainty principle, there is no violation of physics.
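The loophole Feynman was invoking is the energy–time uncertainty relation, stated here in its standard modern form:

\[ \Delta E \, \Delta t \gtrsim \frac{\hbar}{2}, \]

so an intermediate state that violates energy conservation by an amount ΔE can persist only for a time of order ℏ/ΔE before it must be reabsorbed into the sum over histories.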

Feynman’s first published “Feynman Diagram” in the Physical Review (1949) [3].  (Photograph reprinted from “Galileo Unbound” (D. Nolte, Oxford University Press, 2018).)

            None of this was familiar to the audience, and the talk quickly derailed.  Dirac pestered him with questions that he tried to deflect, but Dirac persisted like a raven pecking at dead meat.  A question was raised about the Pauli exclusion principle, about whether an orbital could have three electrons instead of the required two, and Feynman said that it could (all histories were possible and had to be summed over), an answer that dismayed the audience.  Finally, as Feynman was drawing another of his space-time graphs showing electrons as lines, Bohr rose to his feet and asked whether Feynman had forgotten Heisenberg’s uncertainty principle that made it impossible to even talk about an electron trajectory.  It was hopeless.  Bohr had not understood that the diagrams were a shorthand notation not to be taken literally.  The audience gave up and so did Feynman.  The talk just fizzled out.  It was a disaster.

At the close of the Pocono conference, Schwinger was the hero, and his version of QED appeared to be the right approach [4].  Oppenheimer, the reigning king of physics, former head of the successful Manhattan Project and newly selected to head the prestigious Institute for Advanced Study at Princeton, had been thoroughly impressed by Schwinger and thoroughly disappointed by Feynman.  When Oppenheimer returned to Princeton, a letter was waiting for him in the mail from a colleague he knew in Japan by the name of Sin-Itiro Tomonaga [5].  In the letter, Tomonaga described work he had completed, unbeknownst to anyone in the US or Europe, on a renormalized QED.  His results and approach were similar to Schwinger’s but had been accomplished independently in the virtual vacuum that surrounded Japan after the end of the war.  His results cemented the Schwinger–Tomonaga approach to QED, further elevating it above the odd-ball Feynman scratchings.  Oppenheimer immediately circulated the news of Tomonaga’s success to all the attendees of the Pocono conference.  It appeared that Feynman was destined to be a footnote, but the prevailing winds were about to change as Feynman retreated to Cornell.  In defeat, Feynman found the motivation to establish his simplified yet powerful version of quantum electrodynamics.  He published his approach in 1948 and 1949, a method that surpassed Schwinger’s and Tomonaga’s in conceptual clarity and ease of calculation.  This work catapulted Feynman to the pinnacles of fame, making him, after Einstein, the physicist whose name was most recognizable to the man in the street in the latter half of the twentieth century (helped by a series of books that mythologized his exploits [6]).



For more on the history of Feynman and quantum mechanics, read Galileo Unbound from Oxford University Press.


References


[1] See Chapter 8 “On the Quantum Footpath”, Galileo Unbound (Oxford, 2018)

[2] Schweber, S. S. QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton, NJ: Princeton University Press. (1994)

[3] Feynman, R. P. “Space-time Approach to Quantum Electrodynamics.” Physical Review 76(6): 769-789. (1949)

[4] Schwinger, J. “On Quantum-Electrodynamics and the Magnetic Moment of the Electron.” Physical Review 73(4): 416-417. (1948)

[5] Tomonaga, S. “On Infinite Field Reactions in Quantum Field Theory.” Physical Review 74(2): 224-225. (1948)

[6] Feynman, R. P. Surely You’re Joking, Mr. Feynman!: Adventures of a Curious Character. Ralph Leighton (contributor), Edward Hutchings (editor). W. W. Norton. (1985)


Dirac: From Quantum Field Theory to Antimatter

Paul Adrien Maurice Dirac (1902 – 1984) was given the moniker of “the strangest man” by Niels Bohr while he was reminiscing about the many great scientists with whom he had worked over the years [1].  It is a moniker that resonates with the innumerable “Dirac stories” that abound in the mythology of the hallways of physics departments around the world.  Dirac was awkward, shy, a loner, rarely said anything, was completely literal, had not the slightest comprehension of art or poetry, nor any clear understanding of human interpersonal interaction.  Dirac was also brilliant, providing the theoretical foundation for the central paradigm of modern physics—quantum field theory.  The discovery of the Higgs boson in 2012, a human achievement that capped nearly a century of scientific endeavor, rests solidly on the theory of quantum fields that permeate space.  The Higgs particle, when it pops into existence at the Large Hadron Collider in Geneva, is a singular quantum excitation of the Higgs field, a field that usually resides in a vacuum state, frothing with quantum fluctuations that imbue all particles—and you and me—with mass.  The Higgs field is Dirac’s legacy.

… all of a sudden he had a new equation with four-dimensional space-time symmetry.

Copenhagen and Bohr

Although Dirac as a young scientist was initially enthralled with relativity theory, he was working under Ralph Fowler (1889 – 1944) in the physics department at Cambridge in 1925 when he had the chance to read advanced proofs of Heisenberg’s matrix mechanics paper.  This chance event launched him on his own trajectory in quantum theory.  After Dirac was awarded his doctorate from Cambridge in 1926, he received a stipend that sent him to work with Niels Bohr (1885 – 1962) in Copenhagen—ground zero of the new physics. During his time there, Dirac became famous for taking long walks across Copenhagen as he played about with things in his mind, performing mental juggling of abstract symbols, envisioning how they would permute and act.  His attention was focused on the electromagnetic field and how it interacted with the quantized states of atoms.  Although the electromagnetic field was the classical field of light, it was also the quantum field of Einstein’s photon, and he wondered how the quantized harmonic oscillators of the electromagnetic field could be generated by quantum wavefunctions acting as operators.  But acting on what?  He decided that, to generate a photon, the wavefunction must operate on a state that had no photons—the ground state of the electromagnetic field known as the vacuum state.

In late 1926, nearing the end of his stay in Copenhagen with Bohr, Dirac put these thoughts into their appropriate mathematical form and began work on two successive manuscripts.  The first manuscript contained the theoretical details of the non-commuting electromagnetic field operators.  He called the process of generating photons out of the vacuum “second quantization”.  This phrase is a bit of a misnomer, because there is no specific “first quantization” per se, although he was probably thinking of the quantized energy levels of Schrödinger and Heisenberg.  In second quantization, the classical field of electromagnetism is converted to an operator that generates quanta of the associated quantum field out of the vacuum (and also annihilates photons back into the vacuum).  The creation operators can be applied again and again to build up an N-photon state that obeys Bose–Einstein statistics, as required by the photon’s integer spin, in agreement with Planck’s blackbody radiation law.
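In modern notation (a summary sketch, not the symbols of Dirac’s manuscript), the annihilation and creation operators act on the photon-number states as

\[ \hat{a}\,|n\rangle = \sqrt{n}\,|n-1\rangle, \qquad \hat{a}^{\dagger}|n\rangle = \sqrt{n+1}\,|n+1\rangle, \qquad [\hat{a},\hat{a}^{\dagger}] = 1, \]

with the vacuum defined by \( \hat{a}\,|0\rangle = 0 \), so that repeated application of \( \hat{a}^{\dagger} \) to the vacuum builds up the N-photon states.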

            Dirac then went further to show how an interaction of the quantized electromagnetic field with quantized energy levels involved the annihilation and creation of photons as they promoted electrons to higher atomic energy levels, or demoted them through stimulated emission.  Very significantly, Dirac’s new theory explained the spontaneous emission of light from an excited electron level as a direct physical process that creates a photon carrying away the energy as the electron falls to a lower energy level.  Spontaneous emission had been explained first by Einstein more than ten years earlier when he derived the famous A and B coefficients, but Einstein’s arguments were based on the principle of detailed balance, which is a thermodynamic argument.  It is impressive that Einstein’s deep understanding of thermodynamics and statistical mechanics could allow him to derive the necessity of both spontaneous and stimulated emission, but the physical mechanism for these processes was inferred rather than derived. Dirac, in late 1926, had produced the first direct theory of photon exchange with matter.  This was the birth of quantum electrodynamics, known as QED, and the birth of quantum field theory [2].
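For comparison (a standard textbook result, added here for context), Einstein’s detailed-balance argument fixed only the ratio of the spontaneous emission rate A to the stimulated emission coefficient B,

\[ \frac{A_{21}}{B_{21}} = \frac{8\pi h \nu^{3}}{c^{3}}, \]

with B defined relative to the spectral energy density of the radiation; Dirac’s quantized field derived the emission processes themselves from first principles.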

Fig. 1 Paul Dirac in his early days.

Göttingen and Born

Dirac’s next stop on his postdoctoral fellowship was in Göttingen to work with Max Born (1882 – 1970) and the large group of theoreticians and mathematicians who were like electrons in a cloud orbiting around the nucleus represented by the new quantum theory.  Göttingen was second only to Copenhagen as the Mecca for quantum theorists.  Hilbert was there and von Neumann too, as well as the brash American J. Robert Oppenheimer (1904 – 1967) who was finishing his PhD with Born.  Dirac and Oppenheimer struck up an awkward friendship.  Oppenheimer was considered arrogant by many others in the group, but he was in awe of Dirac who arrived with his manuscript on quantum electrodynamics ready for submission.  Oppenheimer struggled at first to understand Dirac’s new approach to quantizing fields, but he quickly grasped the importance, as did Pascual Jordan (1902 – 1980), who was also in Göttingen.

Jordan had already worked on ideas very close to Dirac’s on the quantization of fields.  He and Dirac seemed to be going down the same path, independently arriving at very similar conclusions around the same time.  In fact, Jordan was often a step ahead of Dirac, tending to publish just before Dirac, as with non-commuting matrices, transformation theory and the relationship of canonical transformations to second quantization.  However, Dirac’s paper on quantum electrodynamics was a masterpiece in clarity and comprehensiveness, launching a new field in a way that Jordan had not yet achieved with his own work.  But because of the closeness of Jordan’s thinking to Dirac’s, he was able to see immediately how to extend Dirac’s approach.  Within the year, he published a series of papers that established the formalism of quantum electrodynamics as well as quantum field theory.  With Pauli, he systematized the operators for creation and annihilation of photons [3].  With Wigner, he developed second quantization for de Broglie matter waves, defining creation and annihilation operators that obeyed the Pauli exclusion principle of electrons [4].  Jordan was on a roll, forging ahead of Dirac on extensions of quantum electrodynamics and field theory, but Dirac was about to eclipse Jordan once and for all.
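In later notation, the content of the Jordan–Wigner construction (sketched here in modern form) is that fermion operators anticommute,

\[ \{\hat{c}_i, \hat{c}_j^{\dagger}\} = \delta_{ij}, \qquad \{\hat{c}_i^{\dagger}, \hat{c}_j^{\dagger}\} = 0 \;\;\Rightarrow\;\; (\hat{c}_i^{\dagger})^{2} = 0, \]

so the operator that would create two electrons in the same state vanishes identically: the Pauli exclusion principle is built directly into the algebra.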

St. John’s at Cambridge

            At the end of the Spring semester in 1927, Dirac was offered a position as a fellow of St. John’s College at Cambridge, which he accepted, returning to England to begin his life as a college professor.  During the summer and into the Fall, Dirac returned to his first passion in physics, relativity, which had yet to be successfully incorporated into quantum physics.  Oskar Klein and Walter Gordon had made initial attempts at formulating relativistic quantum theory, but they could not correctly incorporate the spin properties of the electron, and their wave equation had the bad habit of producing negative probabilities.  Probabilities went negative because the Klein-Gordon equation had two time derivatives instead of one.  The reason it had two (while the non-relativistic Schrödinger equation has only one) is because space-time symmetry required the double space derivative of the Schrödinger equation to be paired with a double time derivative.  Dirac, with creative insight, realized that the problem could be flipped by requiring the single time derivative to be paired with a single space derivative.  The problem was that a single space derivative did not seem to make any sense [5].
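The contrast is easiest to see with the two equations side by side (standard modern free-particle forms):

\[ i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi \qquad \text{(Schrödinger: first order in time, second order in space)} \]

\[ \frac{1}{c^{2}}\frac{\partial^{2}\psi}{\partial t^{2}} - \nabla^{2}\psi + \frac{m^{2}c^{2}}{\hbar^{2}}\,\psi = 0 \qquad \text{(Klein–Gordon: second order in both)} \]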

St. John’s College at Cambridge

As Dirac puzzled how to get an equation with only single derivatives, he was playing around with Pauli spin matrices and hit on a simple identity that related the spin matrices to the electron momentum.  At first he could not get the identity to apply to four-dimensional relativistic momenta using the usual 2×2 spin matrices.  Then he realized that four-dimensional space-time could be captured if he expanded Pauli’s 2×2 spin matrices to 4×4 spin matrices, and all of a sudden he had a new equation with four-dimensional space-time symmetry with single derivatives on space and time.  As a test of his new equation, he calculated fine details of the experimentally-measured hydrogen spectrum, known as the fine structure, which had resisted theoretical explanation, and he derived answers in close agreement with experiment.  He also showed that the electron had spin-1/2, and he calculated its magnetic moment.  He finished his manuscript at the end of the Fall semester in 1927, and the paper was published in early 1928 [6].  His relativistic quantum wave equation was an instant sensation, becoming known for all time as “the Dirac Equation”.  He had succeeded in finding a correct and long-sought relativistic quantum theory where many others had failed, such as Oskar Klein and Walter Gordon.  It was a crowning achievement, placing Dirac firmly in the firmament of the quantum theorists.

Fig. 2 The relativistic Dirac equation. The wavefunction is a four-component spinor. The gamma-del product is a 4×4 matrix operator. The time and space derivatives are both first-order operators.
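In modern covariant notation the equation of the figure reads

\[ \left( i\hbar\,\gamma^{\mu}\partial_{\mu} - mc \right)\psi = 0, \]

where ψ is the four-component spinor and the γ^μ are Dirac’s 4×4 matrices, with the time and space derivatives both entering at first order.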

Antimatter

In the process of ridding the Klein-Gordon equation of negative probability, which Dirac found abhorrent, his new equation created an infinite number of negative energy states, which he did not find abhorrent.  It is perhaps a matter of taste what one theorist is willing to accept over another, and for Dirac, negative energies were better than negative probabilities.  Even so, one needed to deal with an infinite number of negative energy states in quantum theory, because they are available to quantum transitions.  In 1929 and 1930, as Dirac was writing his famous textbook on quantum theory, he became intrigued by the similarity between the positive and negative electron states of the vacuum and the energy levels of valence electrons on atoms.  An electron in a state outside a filled electron shell behaves very much like a single-electron atom, like sodium and lithium with their single valence electrons.  Conversely, an atomic shell that has one electron less than a full complement can be described as having a “hole” that behaves “as if” it were a positive particle.  It is like a bubble in water.  As water sinks, the bubble rises to the top of the water level.  For electrons, if all the electrons go one way in an electric field, then the hole goes the opposite direction, like a positive charge.

Dirac took this analogy of nearly-filled atomic shells and applied it to the vacuum states of the electron, viewing the filled negative energy states like the filled electron shells of atoms.  If there is a missing electron, a hole in this infinite sea, then it would behave as if it had positive charge.  Initially, Dirac speculated that the “hole” was the proton, and he even wrote a paper on that possibility.  But Oppenheimer pointed out that the idea was inconsistent with observations, especially the inability of the electron and proton to annihilate, and that the ground state of the infinite electron sea must be completely filled.  Hermann Weyl further pointed out that the electron-proton theory did not have the correct symmetry, and Dirac had to rethink.  In early 1931 he hit on an audacious solution to the puzzle.  What if the hole in the infinite negative energy sea did not just behave like a positive particle, but actually was a positive particle, a new particle that Dirac dubbed the “anti-electron”?  The anti-electron would have the same mass as the electron, but would have positive charge.  He suggested that such particles might be generated in high-energy collisions in vacuum, and he finished his paper with the suggestion that there also could be an anti-proton with the mass of the proton but with negative charge.  In this singular paper, titled “Quantised Singularities in the Electromagnetic Field” and published in 1931, Dirac predicted the existence of antimatter.  A year later the positron was discovered by Carl David Anderson at Cal Tech.  Anderson had originally called the particle the positive electron, but a journal editor of the Physical Review changed it to positron, and the new name stuck.

Fig. 3 An electron-positron pair is created by the absorption of a photon (gamma ray). A positron can be viewed as a hole in the sea of filled negative-energy electron states. (Momentum conservation is satisfied if a nearby heavy particle takes up the recoil momentum.)

The prediction and subsequent experimental validation of antimatter stands out in the history of physics in the 20th Century.  In previous centuries, theory was performed mainly in the service of experiment, explaining interesting new observed phenomena either as consequences of known physics, or creating new physics to explain the observations.  Quantum theory, revolutionary as a way of understanding nature, was developed to explain spectroscopic observations of atoms and molecules and gases.  Similarly, the precession of the perihelion of Mercury was a well-known phenomenon when Einstein used his newly developed general relativity to explain it.  As a counter example, Einstein’s prediction of the deflection of light by the Sun was something new that emerged from theory.  This is one reason why Einstein became so famous after Eddington’s expedition to observe the deflection of apparent star locations during the total eclipse.  Einstein had predicted something that had never been seen before.  Dirac’s prediction of the existence of antimatter is similarly a triumph of rational thought, following the mathematical representation of reality to an inevitable conclusion that cannot be ignored, no matter how wild and initially unimaginable it is.  Dirac went on to receive the Nobel Prize in Physics in 1933, sharing the prize that year with Schrödinger (Heisenberg won it the previous year in 1932).


Read the stories behind the history of quantum field theory, in Galileo Unbound from Oxford University Press


References

[1] Farmelo, G., “The Strangest Man: The Hidden Life of Paul Dirac” (Basic Books, 2011)

[2] Dirac, P. A. M. (1927). “The quantum theory of the emission and absorption of radiation.” Proceedings of the Royal Society of London Series A 114(767): 243-265;  Dirac, P. A. M. (1927). “The quantum theory of dispersion.” Proceedings of the Royal Society of London Series A 114(769): 710-728.

[3] Jordan, P. and W. Pauli, Jr. (1928). “On the quantum electrodynamics of charge-free fields” (“Zur Quantenelektrodynamik ladungsfreier Felder”). Zeitschrift für Physik 47(3-4): 151-173.

[4] Jordan, P. and E. Wigner (1928). “On the Pauli exclusion principle” (“Über das Paulische Äquivalenzverbot”). Zeitschrift für Physik 47(9-10): 631-651.

[5] This is because two space derivatives measure the curvature of the wavefunction, which is related to the kinetic energy of the electron.

[6] Dirac, P. A. M. (1928). “The quantum theory of the electron.” Proceedings of the Royal Society of London Series A 117(778): 610-624;  Dirac, P. A. M. (1928). “The quantum theory of the electron – Part II.” Proceedings of the Royal Society of London Series A 118(779): 351-361.

Physicists in Revolution: Arago, Riemann, Jacobi and Doppler

The opening episode of Victoria on Masterpiece Theatre (PBS) this season finds the queen confronting widespread unrest among her subjects who are pressing for more freedoms and more say in government. Louis-Philippe, former King of France, has been deposed in the February Revolution of 1848 in Paris, and his presence at the Royal Palace does not help the situation.

In 1848 a wave of spontaneous revolution swept across Europe.  It was not a single revolution of many parts, but many separate revolutions with similar goals.  Two essential disruptions of life occurred in the early 1800’s.  The first was the partitioning of Europe at the Congress of Vienna from 1814 to 1815, presided over by Prince Metternich of Austria, that had carved up Napoleon’s conquests and sought to establish a stable order based on the old ideal of absolute monarchy.  In the process, nationalities were separated or suppressed.  The second was the industrialization of Europe in the early 1800’s that created economic upheaval, with masses of working poor fleeing effective serfdom in the fields and flocking to the cities.  Wages fell, food became scarce, legions of the poor and starving bloomed.  Because of these influences, European society had become unstable, supercooled beyond a phase transition and waiting for a seed or catalyst to crystallize the continent into a new state of matter.

When the wave came, physicists across Europe were caught in the upheaval.  Some were caught up in the fervor and turned their attention to national service, some lost their standing and their positions during the inevitable reactionary backlash, others got the opportunities of their careers.  It was difficult for anyone to be untouched by the 1848 revolutions, and physicists were no exception.

The Spontaneous Fire of Revolution

The extraordinary wave of revolution was sparked by a small rebellion in Sicily in January 1848 that sought to overturn the ruling Bourbons.  It was a small rebellion of little direct consequence to Europe, but it succeeded in establishing a liberal democracy in an independent state that stood as a symbol of what could be achieved by a determined populace.  The people of Paris took notice, and in the sudden and unanticipated February Revolution, the French constitutional monarchy under Louis-Philippe was overthrown in a few days and replaced by the French Second Republic.  The shock of Louis-Philippe’s fall reverberated across Europe, feared by those in power and welcomed by those who sought a new world order.  Nationalism, liberalism, socialism and communism were on the rise, and the opportunity to change the world seemed to have arrived.  The Five Days of Milan in Italy, the March Revolution of the German states, the Polish rebellion against Prussia, and the Young Irelander Rebellion in Ireland were all consequences of the unstable conditions and the unprecedented opportunities for the people to enact change.  None of these uprisings was coordinated by any central group.  They were the spontaneous consequence of similar preconditions that existed across nearly all the states of Europe.

Arago and the February Revolution in Paris

The French were no newcomers to street rebellions.  Paris had a history of armed conflict between citizens manning barricades and the superior forces of the powers that be.  The unforgettable scene in Les Misérables of Marius at the barricade and Jean Valjean’s rescue through the sewers of Paris was based on the 1832 June Rebellion in Paris.  Yet this event was merely an echo of the much larger rebellion of 1830 that had toppled the unpopular monarchy of Charles X, followed by the ascension of the Bourgeois Monarch Louis-Philippe at the start of the July Monarchy.  Eighteen years later, Louis-Philippe was still on the throne and the masses were ready again for a change.  Alexis de Tocqueville saw the change coming and remarked, “We are sleeping together in a volcano. … A wind of revolution blows, the storm is on the horizon.”  The storm would sweep up a generation of participants, including the French physicist Francois Arago (1786 – 1853).

Lamartine in front of the Town Hall of Paris on 25 February 1848 (Image by Henri Félix Emmanuel Philippoteaux in public domain).

Arago is one of the under-appreciated French physicists of the 1800’s.  This may be because so many of his peers have become icons in the history of physics: Fourier, Fresnel, Poisson, Laplace, Malus, Biot and Foucault.  The one place where his name appears—the Spot of Arago—was not exclusively his discovery, but rather was an experimental demonstration of an effect derived by Poisson using Fresnel’s new theory of diffraction.  Poisson derived the phenomenon as a means to show the absurdity of Fresnel’s undulatory theory of light, but Arago’s experimental demonstration turned the tables on Poisson and the emissionists (followers of Newton’s particulate theory of light).  Yet Arago played a role behind the scenes as a supporter and motivator of some of the most important discoveries in optics.  In particular, it was Arago’s encouragement and support of the (at that time) unknown Fresnel that helped establish the Fresnel theory of diffraction and the wave nature of light.  Together, Arago and Fresnel established the transverse nature of the light wave, and Arago is also the little-known discoverer of optical rotation.  As a young scientist, he attempted to measure the drift of the ether, a null experiment that foreshadowed the epochal experiments of Michelson and Morley 80 years later.  In his later years, Arago proposed the methodology for measuring the speed of light in both stationary and moving materials, which became the basis for the important measurements of the speed of light by Fizeau and Foucault (who also attempted to measure ether drift).

In addition to his duties as the director of the National Observatory and as the perpetual secretary of the Academie des Sciences (replacing Fourier), he entered politics in 1830 when he was elected as a member of the chamber of deputies.  At the fall of Louis-Phillipe in the February Revolution of 1848, he was appointed as a member of the steering committee of the newly formed government of the French Second Republic, and he was named head of the Marine and Colonies as well as the head of the Department of War.  Although he was a staunch republican and supporter of the people, his position put him in direct conflict with the later stages of the revolutions of 1848. 

The population of Paris became disenchanted with the conservative trends in the Second Republic.  In June of 1848 barricades were again erected in the streets of Paris, this time in opposition to the Republic.  Forces were drawn up on both sides, although many of the Republican forces defected to the insurgents, and attempts were made to mediate the conflict.  Arago himself approached the barricade on the rue Soufflot near the Pantheon to implore its defenders to disperse.  It is a measure of the respect Arago held with the people that they replied, “Monsieur Arago, we are full of respect for you, but you have no right to reproach us.  You have never been hungry.  You don’t know what poverty is.” [1]  When Arago finally withdrew, he feared that death and carnage were inevitable.  They came at noon on June 23 when the barricade at Porte Saint-Denis was attacked by the National Guards.  This started a general onslaught of all the barricades by Republican forces that left 1,500 workers dead in the streets and more than 11,000 arrested.  Arago resigned from the steering committee on June 24, although he continued to work in the government until the coup d’État of December 1851 by Louis Napoleon, the nephew of Napoleon Bonaparte, who a year later declared himself Napoleon III, Emperor of the Second French Empire.  Louis Napoleon demanded that all government workers take an oath of allegiance to him, but Arago refused.  Yet such was the respect that Arago commanded that Louis Napoleon let him continue unmolested as the astronomer of the Bureau des Longitudes.

Riemann and Jacobi and the March Revolution in Berlin

The February Revolution of Paris was followed a month later by the March Revolutions of the German States.  The center of the German-speaking world at that time was Vienna, and a demonstration by students broke out in Vienna on March 13.  Emperor Ferdinand, following the advice of Metternich, called out the army, which fired on the crowd, killing several protestors.  Throngs rallied to the protest and arms were distributed, readying for a fight.  Rather than risk unrestrained bloodshed, the emperor dismissed Metternich, who went into exile in London (following closely the footsteps of the French Louis-Philippe).  Within the week, the revolutionary fervor had spread to Berlin, where a student uprising marched on the royal palace of King Frederick Wilhelm IV on March 18.  They were met by 20,000 troops.

The March 1848 revolution in Berlin (Image in the public domain).

Not all university students were liberals and revolutionaries, and numerous student groups formed to support the King.  One of the students in these loyalist groups was a shy mathematician who joined a student militia to protect the King.  Bernhard Riemann (1826 – 1866) had come to the University of Berlin after spending a short time in the Mathematics department at the University in Göttingen.  Despite the presence of Gauss there, the mathematics department was not considered strong (this would change dramatically in about 50 years when Göttingen became the center of German mathematics with the arrival of Felix Klein, Karl Schwarzschild and Hermann Minkowski).  At Berlin, Riemann attended lectures by Steiner, Jacobi, Dirichlet and Eisenstein.

On the night of the uprising, a nervous Riemann found himself among a group of students, few more than 20 years old, guarding the quarters of the King, not knowing what would unfold.  They spent a sleepless night that dawned on the chaos and carnage at the barricades at Alexanderplatz, with hundreds of citizens dead.  King Frederick Wilhelm was caught off guard by the events, and he assured the citizens that he would reorganize the government and yield to the demonstrators’ demands for parliamentary elections, a constitution, and freedom of the press.  Two days later the king attended a mass funeral for the fallen, attended by his generals and ministers who wore the German revolutionary tricolor of black, red and gold.  This ploy worked, and the unrest in Berlin died away before the king was forced to abdicate.  This must have relieved Riemann immensely, because the entire episode was completely outside his usual meek and mild character.  Yet the character of all the unrelated 1848 revolutions had one thing in common: a sharp division among the populace between the liberals and the conservatives.  As Riemann had elected to join with the loyalists, one of his professors picked the other side.

Carl Gustav Jacob Jacobi (1804 – 1851) had been born in Potsdam and had obtained his first faculty position at the University of Königsberg where he was soon ranked among the top mathematicians in Europe.  However, in his early thirties he was stricken with diabetes, and the harsh winters of Königsberg became too difficult to bear.  He returned to the milder climate of Berlin to a faculty position at the university when the wave of revolution swept over the city.  Jacobi was a liberal thinker and was caught up in the movement, attending meetings at the Constitution Club.  Once the danger to Wilhelm IV had passed, the reactionary forces took their revenge, and Jacobi’s teaching stipend was suspended.  When he threatened to move to the University of Vienna, the royalists relented, so Jacobi too was able to weather the storm.

The surprising footnote to this story is that Jacobi delivered lectures on a course on the application of differential equations to mechanics in the winter semester of 1847 – 1848 right in the midst of the political turmoil.  His participation in the extraordinary political events of that time apparently did not hamper him from giving one of the most extraordinary sets of lectures in mathematical physics.  Jacobi’s lectures of 1848 were the greatest advance in mathematical physics since Euler had reinterpreted Newton a hundred years earlier.  This is where Jacobi expanded on the work of Hamilton, establishing what is today called the Hamilton-Jacobi theory of dynamics.  He also derived and proved, using Liouville’s theorem of 1838, that the volume of phase space was an invariant in a conservative dynamical system [2].  It is tempting to imagine Jacobi returning home late at night, after rousing discussions of revolution at the Constitution Club, to set to work on his own revolutionary theories in physics.
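For reference, the central equation of that theory, in the form it is taught today, is

\[ \frac{\partial S}{\partial t} + H\!\left(q_1,\dots,q_n,\ \frac{\partial S}{\partial q_1},\dots,\frac{\partial S}{\partial q_n},\ t\right) = 0, \]

a single partial differential equation for Hamilton’s principal function S whose complete solution generates the trajectories of the mechanical system.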

Doppler and the Hungarian Revolution

Among all the states of Europe, the revolutions of 1848 posed the greatest threat to the Austrian Empire, which was a bureaucratic state entangling scores of diverse nationalities sprawled across the largest state of Europe.  The Austrian Empire was the remnant of the Holy Roman Empire that had succumbed to the Napoleonic invasion.  The lands that were controlled by Austria, after Metternich engineered the Congress of Vienna, included Poles, Ukrainians, Romanians, Germans, Czechs, Slovaks, Hungarians, Slovenes, Serbs, Albanians and more.  Holding this diverse array of peoples together was already a challenge, and the revolutions of 1848 carried with them strong feelings of nationalism.  The revolutions spreading across Europe were the perfect catalyst to set off the Hungarian Revolution that grew into a war for independence, and the fierce fighting across Hungary could not be avoided even by cloistered physicists.

Christian Doppler (1803 – 1853) had moved in 1847 from Prague (where he had proposed what came to be called the Doppler effect in 1842 to the Royal Bohemian Society of Sciences) to the Academy of Mines and Forests in Schemnitz (modern Banská Štiavnica in Slovakia, but then part of the Kingdom of Hungary) with more pay and less work.  His health had been failing, and the strenuous duties at Prague had taken their toll.  If the goal of this move to an obscure school far from the center of Austrian power had been to lead a peaceful life, Doppler’s plans were sorely upset.

The news of the protests in Vienna arrived in Schemnitz on the 17th of March, and student demonstrations commenced immediately.  Amidst the uncertainty, Doppler requested a leave of absence from the summer semester and returned to Vienna.  It is not clear why he went there, whether to be near the center of excitement, or to take advantage of the free time to pursue his own researches.  While in Vienna he read a treatise before the Academy on galvano-electric effects.  He returned to Schemnitz in the Fall to relative peace, until the 12th of December, when the Hungarians refused to acknowledge the new Emperor Franz Josef in Vienna, who had replaced his uncle Ferdinand after the latter was forced to abdicate, and the Hungarian war for independence began.

Görgey’s troops crossing the Sturec pass. Their ability to evade the Austrian pursuit was legendary (Image by Keiss Károly in the public domain).

One of Doppler’s former students from his days in Prague was appointed to command the newly formed Hungarian army.  General Arthur Görgey (1818 – 1916) moved to take possession of the northern mining towns (present day Slovakia) and occupied Schemnitz.  When Görgey learned that his old teacher was in the town he sent word to Doppler to meet him at his headquarters.  Meeting with a revolutionary and rebel could have marked Doppler as a traitor in Vienna, but he decided to meet him anyway, taking along one of his colleagues as a “witness” that the discussions were purely academic.  This meeting opens an interesting unsolved question in the history of physics.

Around this time Doppler was interested in the dynamical properties of the pendulum for cases when the suspension wire was exceptionally long.  Experiments on such extreme pendula could provide insight into changes in gravity with height as well as the effects of the motion of the Earth.  For instance, Coriolis had published his paper on forces in rotating frames many years earlier in 1835.  Because Schemnitz was a mining town, there was ample access to deep mine shafts in which to set up a pendulum with a very long wire.  This is where the story becomes murky.  Within the family of Doppler’s descendants there are stories of Doppler setting up such an experiment, and even a nighttime visit to the Doppler house by Görgey.  The pendulum was thought to be one of the topics discussed by Doppler and Görgey at their first meeting, and Görgey (from his life as a scientist prior to becoming a revolutionary general) had arrived to help with the experiment [3].

This story is significant for two reasons.  First, it would be astounding to think of General Görgey taking a break from the revolution to do some physics for fun.  Görgey has not been graced by history with a benevolent reputation.  He was known as a hard and sometimes vicious leader, and towards the end of the short-lived Hungarian Revolution he displaced the President Kossuth to become the dictator of Hungary.  The second reason, which is important for the history of physics, is that if Doppler had performed this experiment in 1848, it would have preceded the famous experiment by Foucault by more than two years.  However, the paper published by Doppler around this time on the dynamics of the pendulum did not mention the experiment, and it remains an open question in the history of physics whether Doppler may have had priority over Foucault.
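What such an experiment would measure is the slow rotation of the pendulum’s swing plane caused by the rotation of the Earth.  In the form Foucault later made famous (added here for context), the precession rate at latitude λ is

\[ \Omega = \omega_{\oplus} \sin\lambda, \]

where ω⊕ is the Earth’s sidereal rotation rate; at the latitude of Schemnitz (roughly 48° N) the swing plane would turn by about 11 degrees per hour.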

The Austrian Imperial Army laid siege to Schemnitz and commenced a short bombardment that displaced Görgey and his troops from the town.  Even as Schemnitz was being liberated, a letter arrived informing Doppler that his old mentor Stampfer at the University of Vienna was retiring and that he had been chosen to be his replacement.  The March Revolution had led to the abdication of the previous Austrian emperor and his replacement by the more liberal-minded Franz Josef who was interested in restructuring the educational system in the Austrian empire.  On the advice of Doppler’s supporters who were in the new government, the Institute of Physics was formed and Doppler was named as its first director.  He arrived in the spring of 1850 to take up his new post.

The Legacy of 1848

Despite the early successes and optimism of the revolutions of 1848, reactionary forces were quick to reverse many of the advances made for universal suffrage, constitutional government, freedom of the press, and freedom of expression.  In most cases, monarchs either retained power or soon returned.  Even the reviled Metternich returned to Vienna from exile in London in 1851.  Yet as is so often the case, once a door has been opened it is difficult to shut it again.  The pressure for reforms continued long after the revolutions faded away, and by 1870 many of the specific demands of the people had been instituted by most of the European states.  Russia was an exception, which may explain why the inevitable Russian Revolution half a century later was so severe.            

The revolutions of 1848 cannot be said to have had a long-lasting impact on the progress of physics, although they certainly had a direct impact on the lives of selected physicists.  The most lasting effect of the revolutions on science was the restructuring of educational systems, not only in Austria, but in many of the European states.  This was perhaps one of the first times when the social and economic benefits of science education to the national welfare were understood and implemented across Europe, although a similar recognition had occurred earlier during the French Revolution, for instance leading to the founding of the Ecole Polytechnique.  The most important, though subtle, effect of the revolutions of 1848 on society was the shift away from autocratic rule to democracy, and the freeing of expression and thought from rigid bounds.  The coming revolution in physics at the turn of the next century may have been helped a little by the revolutionary spirit that still echoed from 1848.


[1] pg. 201, Mike Rapport, “1848: Year of Revolution” (Basic Books, 2008)

[2] D. D. Nolte, The Tangled Tale of Phase Space, Chap. 6 in Galileo Unbound (Oxford University Press, 2018)

[3] Schuster, P. Moving the Stars: Christian Doppler, His Life, His Works and Principle, and the World After. Pöllauberg, Austria: Living Edition. (2005)



Chandrasekhar’s Limit

Arthur Eddington was the complete package—an observationalist with the mathematical and theoretical skills to understand Einstein’s general theory, and the ability to construct the theory of the internal structure of stars.  He was Zeus in Olympus among astrophysicists.  He always had the last word, and he stood with Einstein firmly opposed to the Schwarzschild singularity.  In 1924 he published a theoretical paper deriving a new coordinate frame (now known as Eddington-Finkelstein coordinates) in which the singularity at the Schwarzschild radius is removed.  At the time, he took this to mean that the singularity did not exist and that gravitational cutoff was not possible [1].  It would seem that the possibility of dark stars (black holes) had been put to rest.  Both Eddington and Einstein said so!  But just as they were writing the obituary of black holes, a strange new form of matter was emerging from astronomical observations that would challenge the views of these giants.

Something wonderful, but also a little scary, happened when Chandrasekhar included the relativistic effects in his calculation.

White Dwarf

Binary star systems have always held a certain fascination for astronomers.  If your field of study is the (mostly) immutable stars, then the stars that do move provide some excitement.  The attraction of binaries is the same thing that makes them important astrophysically—they are dynamic.  While many double stars are observed in the night sky (a few had been noted by Galileo), some of these are just coincidental alignments of near and far stars.  However, William Herschel began cataloging binary stars in 1779 and became convinced in 1802 that at least some of them must be gravitationally bound to each other.  He carefully measured the positions of binary stars over many years and confirmed that these stars showed relative changes in position, proving that they were gravitationally bound binary star systems [2].  The first orbit of a binary star, Xi Ursae Majoris, was computed in 1827 by Félix Savary.  Finding the orbit of a binary star system provides a treasure trove of useful information about the pair of stars.  Not only can the masses of the stars be determined, but their radii and densities also can be estimated.  Furthermore, by combining this information with the distance to the binaries, it was possible to develop a relationship between mass and luminosity for all stars, even single stars.  Therefore, binaries became a form of measuring stick for crucial stellar properties.
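The mass determination rests on Kepler’s third law applied to the relative orbit (the standard relation, added here for context): once the orbital period P and the semi-major axis a are known, the total mass follows from

\[ M_1 + M_2 = \frac{4\pi^{2} a^{3}}{G P^{2}}, \]

and the individual masses follow from the ratio of each star’s distance from the common center of mass.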

Comparison of Earth to a white dwarf star with a mass equal to the Sun. They have comparable radii but radically different densities.

One of the binary star systems that Herschel discovered was the pair known as 40 Eridani B/C, which he observed on January 31, 1783.  Of this pair, 40 Eridani B was very dim compared to its companion.  More than a century later, in 1910 when spectrographs were first being used routinely on large telescopes, the spectrum of 40 Eridani B was found to be of an unusual white spectral class.  In the same year, the low luminosity companion of Sirius, known as Sirius B, which shared the same unusual white spectral class, was evaluated in terms of its size and mass and was found to be exceptionally small and dense [3].  In fact, it was too small and too dense to be believed at first, because the densities were beyond any known or even conceivable matter.  The mass of Sirius B is around the mass of the Sun, but its radius is comparable to the radius of the Earth, making the white star about ten thousand times denser than the core of the Sun.  Eddington at first felt the same way about white dwarfs that he felt about black holes, but he was eventually swayed by the astrophysical evidence.  By 1922 many of these small white stars had been discovered, called white dwarfs, and their incredibly large densities had been firmly established.  In his famous book on stellar structure [4], he noted the strange paradox:  As a star cools, its pressure must decrease, as all gases must do as they cool, and the star would shrink, yet the pressure required to balance the force of gravity to stabilize the star against continued shrinkage must increase as the star gets smaller.  How can pressure decrease and yet increase at the same time?  In 1926, on the eve of the birth of quantum mechanics, Eddington could conceive of no mechanism that could resolve this paradox.  So he noted it as an open problem in his book and sent it to press.
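A rough check of that density claim, using round numbers added here for illustration: a solar mass packed into an Earth-sized sphere has a mean density of

\[ \bar{\rho} \approx \frac{M_{\odot}}{\tfrac{4}{3}\pi R_{\oplus}^{3}} \approx \frac{2\times 10^{30}\ \mathrm{kg}}{1.1\times 10^{21}\ \mathrm{m^{3}}} \approx 2\times 10^{9}\ \mathrm{kg/m^{3}}, \]

about ten thousand times the roughly 1.5 × 10⁵ kg/m³ at the center of the Sun.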

Subrahmanyan Chandrasekhar

Three years after the publication of Eddington’s book, an eager and excited nineteen-year-old graduate of the University of Madras in India boarded a steamer bound for England.  Subrahmanyan Chandrasekhar (1910 – 1995) had been accepted for graduate studies at Cambridge University.  The voyage in 1930 took eighteen days via the Suez Canal, and he needed something to do to pass the time.  He had with him Eddington’s book, which he carried like a bible, and he also had a copy of a breakthrough article written by R. H. Fowler that applied the new theory of quantum mechanics to the problem of dense matter composed of ions and electrons [5].  Fowler showed how the Pauli exclusion principle for electrons, which obey Fermi-Dirac statistics, created an energetic sea of electrons in their lowest energy state, called electron degeneracy.  This degeneracy was a fundamental quantum property of matter, and carried with it an intrinsic pressure unrelated to thermal properties.  Chandrasekhar realized that this was a pressure mechanism that could balance the force of gravity in a cooling star and might resolve Eddington’s paradox of the white dwarfs.  As the steamer moved ever closer to England, Chandrasekhar derived the new balance between gravitational pressure and electron degeneracy pressure and found the radius of the white dwarf as a function of its mass.  The critical step in Chandrasekhar’s theory, conceived alone on the steamer at sea with access to just a handful of books and papers, was the inclusion of special relativity with the quantum physics.  This was necessary because the densities were so high and the electrons were so energetic that they attained speeds approaching the speed of light.
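The relativistic correction matters because it changes how degeneracy pressure stiffens with density.  In standard scaling form (a modern summary, not Chandrasekhar’s notation), the degeneracy pressure behaves as

\[ P_{\mathrm{deg}} \propto \rho^{5/3} \ \ \text{(non-relativistic)}, \qquad P_{\mathrm{deg}} \propto \rho^{4/3} \ \ \text{(ultra-relativistic)}, \]

while hydrostatic balance against gravity requires a central pressure scaling as \( P \sim G M^{2/3} \rho^{4/3} \).  In the relativistic regime both sides scale with density in exactly the same way, so a balance is possible at only one particular mass, and stars heavier than that cannot be supported.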

Something wonderful, but also a little scary, happened when Chandrasekhar included the relativistic effects in his calculation.  He discovered that electron degeneracy pressure could balance the force of gravity if the mass of the white dwarf were smaller than about 1.4 times the mass of the Sun.  But if the dwarf was more massive than this, then even the electron degeneracy pressure would be insufficient to fight gravity, and the star would continue to collapse.  To what?  Schwarzschild’s singularity was one possibility.  Chandrasekhar wrote up two papers on his calculations, and when he arrived in England, he showed them to Fowler, who was to be his advisor at Cambridge.  Fowler was genuinely enthusiastic about  the first paper, on the derivation of the relativistic electron degeneracy pressure, and it was submitted for publication.  The second paper, on the maximum sustainable mass for a white dwarf, which reared the ugly head of Schwarzschild’s singularity, made Fowler uncomfortable, and he sat on the paper, unwilling to give his approval for publication in the leading British astrophysical journal.  Chandrasekhar grew annoyed, and in frustration sent it, without Fowler’s approval, to an American journal, where “The Maximum Mass of Ideal White Dwarfs” was published in 1931 [6].  This paper, written in eighteen days on a steamer at sea, established what became known as the Chandrasekhar limit, for which Chandrasekhar would win the 1983 Nobel Prize in Physics, but not before he was forced to fight major battles for its acceptance.

The Chandrasekhar limit expressed in terms of the Planck Mass and the mass of a proton. The limit is approximately 1.4 times the mass of the Sun. White dwarfs with masses larger than the limit cannot balance gravitational collapse by relativistic electron degeneracy.
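In the spirit of the caption, the limit can be written, up to a numerical factor of order unity, as

\[ M_{\mathrm{Ch}} \sim \frac{m_{\mathrm{Pl}}^{3}}{(\mu_e m_p)^{2}}, \qquad m_{\mathrm{Pl}} = \sqrt{\frac{\hbar c}{G}}, \]

where m_Pl is the Planck mass, m_p the proton mass, and μ_e ≈ 2 the number of nucleons per electron in the star; with the order-unity prefactor from the full calculation this evaluates to about 1.4 solar masses.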

Chandrasekhar versus Eddington

Initially there was almost no response to Chandrasekhar’s paper.  Frankly, few astronomers had the theoretical training needed to understand the physics.  Eddington was one exception, which was why he held such stature in the community.  The big question therefore was:  Was Chandrasekhar’s theory correct?  During the three years it took to obtain his PhD, Chandrasekhar met frequently with Eddington, who was also at Cambridge, and with colleagues outside the university, and they all encouraged Chandrasekhar to tackle the more difficult problem of combining internal stellar structure with his theory.  This could not be done with pen and paper, but required numerical calculation.  Eddington was in possession of an early electromechanical calculator, and he loaned it to Chandrasekhar to do the calculations.  After many months of tedious work, Chandrasekhar was finally ready to confirm his theory at the January 1935 meeting of the Royal Astronomical Society.

The young Chandrasekhar stood up and gave his results in an impeccable presentation before an auditorium crowded with his peers.  But as he left the stage, he was shocked when Eddington himself rose to give the next presentation.  Eddington proceeded to criticize and reject Chandrasekhar’s careful work, proposing instead a garbled mash-up of quantum theory and relativity that would eliminate Chandrasekhar’s limit and hence prevent collapse to the Schwarzschild singularity.  Chandrasekhar sat mortified in the audience.  After the session, many of his friends and colleagues came up to him to give their condolences—if Eddington, the leader of the field and one of the few astronomers who understood Einstein’s theories, said that Chandrasekhar was wrong, then that was that.  Badly wounded, Chandrasekhar was faced with a dire choice.  Should he fight against the reputation of Eddington, fight for the truth of his theory?  But he was at the beginning of his career and could ill afford to pit himself against the giant.  So he turned his back on the problem of stellar death, and applied his talents to the problem of stellar evolution. 

Chandrasekhar went on to have an illustrious career, spent mostly at the University of Chicago (far from Cambridge), and he did eventually return to his limit as it became clear that Eddington was wrong.  In fact, many at the time already suspected Eddington was wrong and were seeking the answer to the next question: If white dwarfs cannot support themselves under gravity and must collapse, what do they collapse to?  In Pasadena at the California Institute of Technology, an astrophysicist named Fritz Zwicky thought he knew the answer.

Fritz Zwicky’s Neutron Star

Fritz Zwicky (1898 – 1974) was an irritating and badly flawed genius.  What made him so irritating was that he knew he was a genius and never let anyone forget it.  What made him badly flawed was that he never cared much for weight of evidence.  It was the ideas that mattered—let lesser minds do the tedious work of filling in the cracks.  And what made him a genius was that he was often right!  Zwicky pushed the envelope—he loved extremes.  The more extreme a theory was, the more likely he was to favor it—like his proposal for dark matter.  Most of his colleagues considered him to be a buffoon and borderline crackpot.  He was tolerated by no one—no one except his steadfast collaborator of many years, Walter Baade (until they nearly came to blows on the eve of World War II).  Baade was a German astronomer trained at Göttingen who had recently arrived at Cal Tech.  He was exceptionally well informed on the latest advances in a broad range of fields.  Where Zwicky made intuitive leaps, often unsupported by evidence, Baade would provide the context.  Baade was a walking Wikipedia for Zwicky, and together they changed the face of astrophysics.

Zwicky and Baade submitted an abstract to the American Physical Society Meeting in 1933, which Kip Thorne has called “…one of the most prescient documents in the history of physics and astronomy” [7].  In the abstract, Zwicky and Baade introduced, for the first time, the existence of supernovae as a separate class of novae and estimated the total energy output of these cataclysmic events, including the possibility that they are the source of some cosmic rays.  They introduced the idea of a neutron star, a star composed purely of neutrons, only a year after Chadwick discovered the neutron’s existence, and they strongly suggested that a supernova is produced by the transformation of a star into a neutron star.  A neutron star would have a mass similar to that of the Sun, but would have a radius of only tens of kilometers.  If the mass density of white dwarfs was hard to swallow, the density of a neutron star was a billion times greater!  It would take nearly thirty years before each of the assertions made in this short abstract was proven true, but Zwicky certainly had a clear view, tempered by Baade, of where the field of astrophysics was headed.  But no one listened to Zwicky.  He was too aggressive and backed up his wild assertions with too little substance.  Therefore, neutron stars simmered on the back burner until more substantial physicists could address their properties more seriously.

Two substantial physicists who had the talent and skills that Zwicky lacked were Lev Landau in Moscow and Robert Oppenheimer at Berkeley.  Landau derived the properties of a neutron star in 1937 and published the results to great fanfare.  He was not aware of Zwicky’s work, and he called them neutron cores, because he hypothesized that they might reside at the core of ordinary stars like the Sun.  Oppenheimer, working with a Canadian graduate student, George Volkoff, at Berkeley, showed that Landau’s idea about stellar cores was not correct, but that the general idea of a neutron core, or rather a neutron star, was correct [8].  Once Oppenheimer was interested in neutron stars, he kept going and asked the same question about neutron stars that Chandrasekhar had asked about white dwarfs:  Is there a maximum size for neutron stars beyond which they must collapse?  The answer to this question used the same quantum mechanical degeneracy pressure (now provided by neutrons rather than electrons) and gravitational compaction as the problem of white dwarfs, but it required detailed understanding of nuclear forces, which in 1938 were only beginning to be understood.  However, Oppenheimer knew enough to make a good estimate of the nuclear binding contribution to the total internal pressure and came to a similar conclusion for neutron stars as Chandrasekhar had made for white dwarfs.  There was indeed a maximum mass of a neutron star, a Chandrasekhar-type limit of about three solar masses.  Beyond this mass, even the degeneracy pressure of neutrons could not support gravitational pressure, and the neutron star must collapse.  In Oppenheimer’s mind it was clear what it must collapse to—a black hole (known as gravitational cut-off at that time).  This was to lead Oppenheimer and John Wheeler to their famous confrontation over the existence of black holes, which Oppenheimer won, but Wheeler took possession of the battlefield [9].

Derivation of the Relativistic Chandrasekhar Limit

White dwarfs are created from the balance between gravitational compression and the degeneracy pressure of electrons caused by the Pauli exclusion principle.  When a star collapses gravitationally, the matter becomes so dense that the electrons begin to fill up quantum states until all the lowest-energy states are filled and no more electrons can be added.  This results in a balance that stabilizes the gravitational collapse, and the result is a white dwarf with a mass density a million times larger than that of the Sun.

If the electrons remained non-relativistic, then there would be no upper limit to the size of a star that could form a white dwarf.  However, electrons become relativistic at high enough compaction.  If the initial star is too massive, the electron degeneracy pressure saturates at its relativistic limit and can no longer keep the matter from compacting further, and even the white dwarf will collapse (to a neutron star or a black hole).  The largest mass that can be supported by a white dwarf is known as the Chandrasekhar limit.

A simplified derivation of the Chandrasekhar limit begins by defining the total energy as the kinetic energy of the degenerate Fermi electron gas plus the gravitational potential energy of a uniform sphere of mass M and radius R

$$E(R) = E_{kin} - \frac{3}{5}\frac{GM^2}{R}$$

The kinetic energy of the degenerate Fermi gas has the relativistic expression (expanded here in the ultra-relativistic limit, keeping the first correction from the electron mass)

$$E_{kin} = \frac{3}{4}N\hbar c\,k_F\left(1 + \frac{m_e^2 c^2}{\hbar^2 k_F^2}\right)$$

where the Fermi k-vector can be expressed as a function of the radius of the white dwarf and the total number of electrons in the star, as

$$k_F = \left(3\pi^2 n\right)^{1/3} = \left(\frac{9\pi N}{4}\right)^{1/3}\frac{1}{R}$$

If the star is composed of pure hydrogen, then the mass of the star is expressed in terms of the total number of electrons and the mass of the proton

$$M = N m_p$$

The total energy of the white dwarf is minimized by taking its derivative with respect to the radius of the star

$$\frac{dE}{dR} = -\frac{1}{R^2}\left[\frac{3}{4}\hbar c\left(\frac{9\pi}{4}\right)^{1/3}N^{4/3} - \frac{3}{5}G m_p^2 N^2\right] + \frac{3}{4}\frac{m_e^2 c^3}{\hbar}\left(\frac{4}{9\pi}\right)^{1/3}N^{2/3}$$

When the derivative is set to zero, the term in brackets becomes

$$\frac{3}{4}\hbar c\left(\frac{9\pi}{4}\right)^{1/3}N^{4/3} - \frac{3}{5}G m_p^2 N^2 = \frac{3}{4}\frac{m_e^2 c^3}{\hbar}\left(\frac{4}{9\pi}\right)^{1/3}N^{2/3}R^2$$

This is solved for the radius at which the electron degeneracy pressure stabilizes the gravitational pressure

$$R = \left(\frac{9\pi}{4}\right)^{1/3}\frac{\hbar}{m_e c}\,N^{1/3}\sqrt{1 - \frac{4}{5}\left(\frac{4}{9\pi}\right)^{1/3}\frac{G m_p^2}{\hbar c}\,N^{2/3}}$$

This is the relativistic radius-mass expression for the size of the stabilized white dwarf as a function of the mass (or total number of electrons). One of the astonishing results of this calculation is the merging of astronomically large numbers (the mass of stars) with both relativity and quantum physics. The radius of the white dwarf is actually expressed as a multiple of the Compton wavelength of the electron!

The expression in the square root becomes smaller as the mass of the star increases, and there is an upper bound to the mass of the star beyond which the argument in the square root goes negative.  This upper bound is the Chandrasekhar limit, defined when the argument equals zero

$$N_{Ch}^{2/3} = \frac{5}{4}\left(\frac{9\pi}{4}\right)^{1/3}\frac{\hbar c}{G m_p^2}$$

This gives the final expression for the Chandrasekhar limit, expressed in terms of the Planck mass $m_{Pl} = \sqrt{\hbar c / G}$

$$M_{Ch} = N_{Ch}\,m_p = \left(\frac{5}{4}\right)^{3/2}\left(\frac{9\pi}{4}\right)^{1/2}\frac{m_{Pl}^3}{m_p^2}$$

This expression is only approximate, but it does contain the essential physics and magnitude. This limit is on the order of a solar mass. A more realistic numerical calculation yields a limiting mass of about 1.4 times the mass of the Sun. For white dwarfs larger than this value, the electron degeneracy is insufficient to support the gravitational pressure, and the star will collapse to a neutron star or a black hole.
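As a quick check on the magnitude, the Planck-mass combination alone already lands on the stellar scale (using $m_{Pl} \approx 2.18\times10^{-8}$ kg and $m_p \approx 1.67\times10^{-27}$ kg):

$$\frac{m_{Pl}^3}{m_p^2} = \frac{\left(2.18\times10^{-8}\ \text{kg}\right)^3}{\left(1.67\times10^{-27}\ \text{kg}\right)^2} \approx 3.7\times10^{30}\ \text{kg} \approx 1.9\,M_{\odot}$$

The order-unity numerical factors from the derivation, and from the realistic composition of the star, adjust this to the quoted 1.4 solar masses.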

By David D. Nolte, Jan. 7, 2019


[1] The fact that Eddington coordinates removed the singularity at the Schwarzschild radius was first pointed out by Lemaître in 1933.  A local observer passing through the Schwarzschild radius would experience no divergence in local properties, even though a distant observer would see that in-falling observer becoming length contracted and time dilated.  This point of view of an in-falling observer was explained in 1958 by Finkelstein, who also pointed out that the Schwarzschild radius is an event horizon.

[2] William Herschel (1803), Account of the Changes That Have Happened, during the Last Twenty-Five Years, in the Relative Situation of Double-Stars; With an Investigation of the Cause to Which They Are Owing, Philosophical Transactions of the Royal Society of London 93, pp. 339–382 (Motion of binary stars)

[3] Boss, L. (1910). Preliminary General Catalogue of 6188 stars for the epoch 1900. Carnegie Institution of Washington. (Mass and radius of Sirius B)

[4] Eddington, A. S. (1927). Stars and Atoms. Clarendon Press. LCCN 27015694.

[5] Fowler, R. H. (1926). “On dense matter”. Monthly Notices of the Royal Astronomical Society 87: 114. Bibcode:1926MNRAS..87..114F. (Quantum mechanics of degenerate matter).

[6] Chandrasekhar, S. (1931). “The Maximum Mass of Ideal White Dwarfs”. The Astrophysical Journal 74: 81. Bibcode: 1931ApJ....74...81C. doi:10.1086/143324. (Mass limit of white dwarfs).

[7] Kip Thorne (1994) Black Holes & Time Warps: Einstein’s Outrageous Legacy (Norton). pg. 174

[8] Oppenheimer was aware of Zwicky’s proposal because he had a joint appointment between Berkeley and Cal Tech.

[9] See Chapter 7, “The Lens of Gravity” in Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018).



George Green’s Theorem

For a thirty-year-old miller’s son with only one year of formal education, George Green had a strange hobby—he read papers in mathematics journals, mostly from France.  This was his escape from a dreary life running a flour mill on the outskirts of Nottingham, England, in 1823.  The tall windmill owned by his father required 24-hour attention, with farmers depositing their grain at all hours and the mechanisms and sails needing constant upkeep.  During his one year in school, when he was eight years old, he had become fascinated by maths, and he nurtured this interest after leaving school one year later, stealing away to the top floor of the mill to pore over books he scavenged, devouring and exhausting all that English mathematics had to offer.  By the time he was thirty, his father’s business had become highly successful, providing George with enough wages to become a paying member of the private Nottingham Subscription Library, with access to the Transactions of the Royal Society as well as to foreign journals.  This simple event changed his life and changed the larger world of mathematics.

Green’s windmill in Sneinton, England.

French Analysis in England

George Green was born in Nottinghamshire, England.  No record of his birth exists, but he was baptized in 1793, which may be assumed to be the year of his birth.  His father was a baker in Nottingham, but the food riots of 1800 forced him to move outside of the city to the town of Sneinton, where he bought a house and built an industrial-scale windmill to grind flour for his business.  He prospered enough to send his eight-year-old son to Robert Goodacre’s Academy located on Upper Parliament Street in Nottingham.  Green was exceptionally bright, and after one year in school he had absorbed most of what the Academy could teach him, including a smattering of Latin and Greek as well as French, along with what simple math was offered.  Once he was nine, his schooling was over, and he took up the responsibility of helping his father run the mill, which he did faithfully, though unenthusiastically, for the next 20 years.  As the milling business expanded, his father hired a mill manager who took part of the burden off George.  The manager had a daughter, Jane Smith, and in 1824 she had her first child with Green.  Six more children were born to the couple over the following fifteen years, though they never married.

Without adopting any microscopic picture of how electric or magnetic fields are produced or how they are transmitted through space, Green could still derive rigorous properties that are independent of any details of the microscopic model.

During the 20 years after leaving Goodacre’s Academy, Green never gave up learning what he could, teaching himself to read French readily as well as mastering English mathematics.  The 1700’s and early 1800’s had been a relatively stagnant period for English mathematics.  After the priority dispute between Newton and Leibniz over the invention of the calculus, English mathematics had become isolated from continental advances.  This was part snobbery, but also part handicap, as the English school struggled with Newton’s awkward fluxions while the continental mathematicians worked with Leibniz’ more fruitful differential notation.  One notable exception was Brook Taylor, who developed the Taylor series (and who grew up on the opposite end of the economic spectrum from Green, see my Blog on Taylor).  The French mathematicians of the early 1800’s, however, were especially productive, producing such works as those of Lagrange, Laplace and Poisson.

One block away from where Green lived stood the Free Grammar School overseen by headmaster John Toplis.  Toplis was a Cambridge graduate on a minor mission to update the teaching of mathematics in England, well aware that the advances on the continent were passing England by.  For instance, Toplis translated Laplace’s mathematically advanced Mécanique Céleste from French into English.  Toplis was also well aware of the work of the other French mathematicians and maintained an active scholarly output that eventually brought him back to Cambridge as Dean of Queens’ College in 1819, when Green was 26 years old.  There is no record whether Toplis and Green knew each other, but their close proximity and common interests point to a natural acquaintance.  One can speculate that Green may even have sought Toplis out, given his insatiable desire to learn more mathematics, and it is likely that Toplis would have introduced Green to the vibrant French school of mathematics.

By the time Green joined the Nottingham Subscription Library, he must already have been well trained in basic mathematics, and membership in the library allowed him to request loans of foreign journals (sort of like Interlibrary Loan today).  With his library membership beginning in 1823, Green absorbed the latest advances in differential equations and must have begun forming a new viewpoint of the uses of mathematics in the physical sciences.  This was around the same time that he was beginning his family with Jane as well as continuing to run his father’s mill, so his mathematical hobby was relegated to the dark hours of the night.  Nonetheless, he made steady progress over the next five years as his ideas took rough shape and were refined, until finally he took pen to paper, and this uneducated miller’s son began a masterpiece that would change the history of mathematics.

Essay on Mathematical Analysis of Electricity and Magnetism

By 1827 Green’s free-time hobby was about to bear fruit, and he took out a modest advertisement to announce its forthcoming publication.  Because he was an unknown, and unknown to any of the local academics (Toplis had already gone back to Cambridge), he chose vanity publishing and published out of pocket.  An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism was printed in March of 1828, and there were 51 subscribers, mostly from among the members of the Nottingham Subscription Library, who bought it at 7 shillings and 6 pence per copy, probably out of curiosity or sympathy rather than interest.  Few, if any, could have recognized that Green’s little essay contained several revolutionary elements.

Fig. 1 Cover page of George Green’s Essay

The topic of the essay was not remarkable, treating mathematical problems of electricity and magnetism, a subject much in vogue at that time.  As background, he had read works by Cavendish, Poisson, Arago, Laplace, Fourier, Cauchy and Thomas Young (probably Young’s Course of Lectures on Natural Philosophy and the Mechanical Arts (1807)).  He paid close attention to Laplace’s treatment of celestial mechanics and gravitation, which had obvious strong analogs to electrostatics and the Coulomb force because of the common inverse square dependence.

One radical contribution in Green’s essay was his introduction of the potential function—one of the first uses of the concept of a potential function in mathematical physics—and he gave it its modern name.  Others had used similar constructions, such as Euler [1], D’Alembert [2], Laplace [3] and Poisson [4], but the use had been implicit rather than explicit.  Green shifted the potential function to the forefront, as a central concept from which one could derive other phenomena.  Another radical contribution from Green was his use of the divergence theorem.  This has tremendous utility, because it relates a volume integral to a surface integral.  It was one of the first examples of how measuring something over a closed surface could determine a property contained within the enclosed volume.  Gauss’ law is the most common example of this, where measuring the electric flux through a closed surface determines the amount of enclosed charge.  Lagrange in 1762 [5] and Gauss in 1813 [6] had used forms of the divergence theorem in the context of gravitation, but Green applied it to electrostatics, where it has become known as Gauss’ law and is one of the four Maxwell equations.  Yet another contribution was Green’s use of linear superposition to determine the potential of a continuous charge distribution, integrating the potential of a point charge over a continuous charge distribution.  This was equivalent to defining what is today called a Green’s function, which is a common method to solve partial differential equations.
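In modern notation (standard forms, not Green’s original symbols), the potential and the divergence theorem applied to electrostatics read

$$\mathbf{E} = -\nabla V, \qquad \oint_S \mathbf{E}\cdot d\mathbf{A} = \int_V \left(\nabla\cdot\mathbf{E}\right) dV = \frac{Q_{\text{enc}}}{\varepsilon_0}$$

which is exactly the statement that a measurement over a closed surface determines the charge contained in the enclosed volume.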

A subtle, but no less influential, contribution of Green’s Essay was his adoption of a mathematical approach to a physics problem based on the fundamental properties of the mathematical structure rather than on any underlying physical model.  Without adopting any microscopic picture of how electric or magnetic fields are produced or how they are transmitted through space, he could still derive rigorous properties that are independent of any details of the microscopic model.  For instance, the inverse square law of both electrostatics and gravitation is a fundamental property of the divergence theorem (a mathematical theorem) in three-dimensional space.  There is no need to consider what space is composed of, such as the many differing models of the ether that were being proposed around that time.  He would apply this same fundamental mathematical approach in his later career as a Cambridge mathematician to explain the laws of reflection and refraction of light.

George Green: Cambridge Mathematician

A year after the publication of the Essay, Green’s father died a wealthy man, his milling business having become very successful.  Green inherited the family fortune, and he was finally able to leave the mill and begin devoting his energy to mathematics.  Around the same time he began working on mathematical problems with the support of Sir Edward Bromhead.  Bromhead was a Nottingham peer who had been one of the 51 subscribers to Green’s published Essay.  As a graduate of Cambridge he was friends with Herschel, Babbage and Peacock, and he recognized the mathematical genius in this self-educated miller’s son.  The two men spent two years working together on a pair of publications, after which Bromhead used his influence to open doors at Cambridge.

In 1832, at the age of 40, George Green enrolled as an undergraduate student in Gonville and Caius College at Cambridge.  Despite his concerns over his lack of preparation, he won the first-year mathematics prize.  In 1837 he graduated as fourth wrangler, only two positions behind the future famous mathematician James Joseph Sylvester (1814 – 1897).  Based on his work he was elected a fellow of the Cambridge Philosophical Society in 1840.  Green had finally become what he had dreamed of being for his entire life—a professional mathematician.

Green’s later papers continued the analytical-dynamics trend he had established in his Essay by applying mathematical principles to the reflection and refraction of light.  Cauchy had built microscopic models of the vibrating ether to explain and derive the Fresnel reflection and transmission coefficients, attempting to understand the structure of the ether.  But Green developed a mathematical theory that was independent of microscopic models of the ether.  He believed that microscopic models could shift and change as newer models refined the details of older ones.  If a theory depended on the microscopic interactions among the model constituents, then it too would need to change with the times.  By developing a theory based on analytical dynamics, founded on fundamental principles such as minimization principles and geometry, one could construct a theory that could stand the test of time, even as the microscopic understanding changed.  This approach to mathematical physics was prescient, foreshadowing the geometrization of physics in the late 1800’s that would lead ultimately to Einstein’s theory of General Relativity.

Green’s Theorem and Green’s Function

Green died in 1841 at the age of 47, and his Essay was mostly forgotten.  Four years later, in 1845, a young William Thomson (later Lord Kelvin) was graduating from Cambridge and about to travel to Paris to meet with the leading mathematicians of the age.  As he was preparing for the trip, he stumbled across a mention of Green’s Essay but could find no copy in the Cambridge archives.  Fortunately, one of the professors had a copy that he lent Thomson.  When Thomson showed the work to Liouville and Sturm, it caused a sensation, and Thomson later had the Essay republished in Crelle’s Journal, finally bringing the work and Green’s name into the mainstream.

In physics and mathematics it is common to name theorems or laws in honor of a leading figure, even if they had little to do with the exact form of the theorem.  This sometimes has the effect of obscuring the historical origins of the theorem.  A classic example of this is the naming of Liouville’s theorem on the conservation of phase-space volume after Liouville, who never knew of phase space, but who had published a small theorem in pure mathematics in 1838, unrelated to mechanics, that inspired Jacobi and later Boltzmann to derive the form of Liouville’s theorem that we use today.  The same is true of Green’s Theorem and Green’s Function.  The form of the theorem known as Green’s theorem was first presented by Cauchy [7] in 1846 and later proved by Riemann [8] in 1851.  The equation is named in honor of Green, who was one of the early mathematicians to show how to relate an integral of a function over one manifold to an integral of the same function over a manifold whose dimension differed by one.  This property is a consequence of the Generalized Stokes Theorem (named after George Stokes), of which the Kelvin-Stokes Theorem, the Divergence Theorem and Green’s Theorem are special cases.

Fig. 2 Green’s theorem and its relationship with the Kelvin-Stokes theorem, the Divergence theorem and the Generalized Stokes theorem (expressed in differential forms)
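In modern notation, the relationships sketched in Fig. 2 are as follows.  Green’s theorem in the plane,

$$\oint_{\partial D}\left(L\,dx + M\,dy\right) = \iint_D \left(\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y}\right) dx\,dy,$$

together with the Kelvin-Stokes theorem and the divergence theorem, are all special cases of the Generalized Stokes Theorem for a differential form $\omega$ on a manifold $\Omega$ with boundary $\partial\Omega$:

$$\int_{\Omega} d\omega = \int_{\partial\Omega} \omega$$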

Similarly, the use of Green’s function for the solution of partial differential equations was inspired by Green’s use of the superposition of point potentials integrated over a continuous charge distribution.  The Green’s function came into more general use in the late 1800’s and entered the mainstream of physics in the mid 1900’s [9].

Fig. 3 The application of Green’s function to solve a linear operator problem, and an example applied to Poisson’s equation.
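In modern notation, the method sketched in Fig. 3 solves a linear problem $\mathcal{L}u = f$ by first finding the response to a point source,

$$\mathcal{L}\,G(\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}'), \qquad u(\mathbf{r}) = \int G(\mathbf{r},\mathbf{r}')\,f(\mathbf{r}')\,d^3r'$$

For Poisson’s equation $\nabla^2\phi = -\rho/\varepsilon_0$, the Green’s function is $G = -1/(4\pi|\mathbf{r}-\mathbf{r}'|)$, which recovers Green’s superposition of point potentials:

$$\phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r'$$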

By David D. Nolte, Dec. 26, 2018


[1] L. Euler, Novi Commentarii Acad. Sci. Petropolitanae, 6 (1761)

[2] J. d’Alembert, Opuscules mathématiques, 1, Paris (1761)

[3] P.S. Laplace, Hist. Acad. Sci. Paris (1782)

[4] S.D. Poisson, “Remarques sur une équation qui se présente dans la théorie des attractions des sphéroïdes”, Nouveau Bull. Soc. Philomathique de Paris, 3 (1813) pp. 388–392

[5] Lagrange (1762) “Nouvelles recherches sur la nature et la propagation du son” (New researches on the nature and propagation of sound), Miscellanea Taurinensia (also known as Mélanges de Turin), 2: 11–172

[6] C. F. Gauss (1813) “Theoria attractionis corporum sphaeroidicorum ellipticorum homogeneorum methodo nova tractata”, Commentationes societatis regiae scientiarium Gottingensis recentiores, 2: 355–378

[7] A. Cauchy (1846) “Sur les intégrales qui s’étendent à tous les points d’une courbe fermée” (On integrals that extend over all of the points of a closed curve), Comptes rendus, 23: 251–255

[8] B. Riemann (1851) Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse (Basis for a general theory of functions of a variable complex quantity), (Göttingen, Germany: Adalbert Rente, 1867)

[9] J. Schwinger (1993). “The Greening of Quantum Field Theory: George and I”. arXiv:hep-ph/9310283

Dark Matter Mysteries

There is more to the Universe than meets the eye—way more.  Over the past quarter century, it has become clear that all the points of light in the night sky, the stars, the Milky Way, the nebulae, all the distant galaxies, when added up with the nonluminous dust, constitute only a small fraction of the total energy density of the Universe.  In fact, “normal” matter, like the stuff of which we are made—star dust—contributes only 4% to everything that is.  The rest is something else, something different, something that doesn’t show up in the most sophisticated laboratory experiments, not even the Large Hadron Collider [1].  It is unmeasurable on terrestrial scales, and even at the scale of our furthest probe—the Voyager I spacecraft that left our solar system several years ago—there have been no indications of deviations from Newton’s law of gravity.  To the highest precision we can achieve, it is invisible and non-interacting on any scale smaller than our stellar neighborhood.  Perhaps it can never be detected in any direct sense.  If so, then how do we know it is there?  The answer comes from galactic trajectories.  The motions in and of galaxies have been, and continue to be, the principal laboratory for the investigation of cosmic questions about the dark matter of the universe.

Today, the nature of Dark Matter is one of the greatest mysteries in physics, and the search for direct detection of Dark Matter is one of physics’ greatest pursuits.

Island Universes

The nature of the Milky Way was a mystery through most of human history.  To the ancient Greeks it was the milky circle (γαλαξίας κύκλος, galaxias kyklos) and to the Romans it was literally the milky way (via lactea).  Aristotle, in his Meteorologica, briefly suggested that the Milky Way might be composed of a large number of distant stars, but then rejected that idea in favor of a wisp, exhaled like breath on a cold morning, from the stars.  The Milky Way is unmistakable on a clear dark night to anyone who looks up, far away from city lights.  It was a constant companion through most of human history, like the constant stars, until electric lights extinguished it from much of the world in the past hundred years.  Geoffrey Chaucer, in his Hous of Fame (1380), proclaimed “See yonder, lo, the Galaxyë Which men clepeth the Milky Wey, For hit is whyt.” (See yonder, lo, the galaxy which men call the Milky Way, for it is white.)


Hubble image of one of the galaxies in the Coma Cluster of galaxies that Fritz Zwicky used to announce that the universe contained a vast amount of dark matter.

Aristotle was fated, again, to be corrected by Galileo.  Using his telescope in 1610, Galileo was the first to resolve a vast field of individual faint stars in the Milky Way.  This led Immanuel Kant, in 1755, to propose that the Milky Way Galaxy was a rotating disk of stars held together by Newtonian gravity like the disk of the solar system, but much larger.  He went on to suggest that the faint nebulae might be other far distant galaxies, which he called “island universes”.  The first direct evidence that nebulae were distant galaxies came in 1917 with the observation of novae in the Andromeda Galaxy by Heber Curtis.  Based on the brightness of these novae, he estimated that the Andromeda Galaxy was over a million light years away, but uncertainty in the distance measurement kept the door open for the possibility that it was still part of the Milky Way, and hence the possibility that the Milky Way was the Universe.

The question of the nature of the nebulae hinged on the problem of measuring distances across vast amounts of space.  By line of sight, there is no yard stick to tell how far away something is, so other methods must be used.  Stellar parallax, for instance, can gauge the distance to nearby stars by measuring slight changes in the apparent positions of the stars as the Earth changes its position around the Sun through the year.  This effect was used successfully for the first time in 1838 by Friedrich Bessel, and by the year 2000 more than a hundred thousand stars had their distances measured using stellar parallax.  Recent advances in satellite observatories have extended the reach of stellar parallax to a distance of about 10,000 light years from the Sun, but this is still only a tenth of the diameter of the Milky Way.  To measure distances to the far side of our own galaxy, or beyond, requires something else.
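The parallax geometry reduces to a simple reciprocal rule: a star whose apparent position shifts by a parallax angle $p$ (in arcseconds) as the Earth moves one astronomical unit off the Sun-star axis lies at a distance

$$d\,[\text{parsec}] = \frac{1}{p\,[\text{arcsec}]}$$

where one parsec is about 3.26 light years.  The smallest measurable angle sets the furthest measurable distance, which is why parallax runs out of reach so quickly.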

Because of Henrietta Leavitt

In 1908 Henrietta Leavitt, working at the Harvard Observatory as one of the famous female “computers”, discovered that stars whose luminosities oscillate with a steady periodicity, stars known as Cepheid variables, have a relationship between the period of oscillation and the average luminosity of the star [2].  By measuring the distance to nearby Cepheid variables using stellar parallax, the absolute brightness of the Cepheid could be calibrated, and Cepheids could then be used as “standard candles”.  This meant that by observing the period of oscillation and the brightness of a distant Cepheid, the distance to the star could be calculated.  Edwin Hubble (1889 – 1953), working at the Mount Wilson Observatory in Pasadena, CA, observed Cepheid variables in several of the brightest nebulae in the night sky.  In 1925 he announced his observation of individual Cepheid variables in Andromeda and calculated that Andromeda was more than a million light years away, more than 10 Milky Way diameters (the actual number is about 25 Milky Way diameters).  This meant that Andromeda was a separate galaxy and that the Universe was made of more than just our local cluster of stars.  Once this door was opened, the known Universe expanded quickly up to a hundred Milky Way diameters as Hubble measured the distances to scores of our neighboring galaxies in the Virgo galaxy cluster.  However, it was more than just our knowledge of the universe that was expanding.

Armed with measurements of galactic distances, Hubble was in a unique position to relate those distances to the speeds of the galaxies by combining his distance measurements with spectroscopic observations of the light spectra made by other astronomers.  These galaxy emission spectra could be used to measure the Doppler effect on the light emitted by the stars of the galaxy.  The Doppler effect, first proposed by Christian Doppler (1803 – 1853) in 1842, causes the wavelength of emitted light to be shifted to the red for objects receding from an observer, and shifted to the blue for objects approaching an observer.  The amount of spectral shift is directly proportional to the object’s speed.  Doppler’s original proposal was to use this effect to measure the speed of binary stars, which is indeed performed routinely today by astronomers for just this purpose, but in Doppler’s day spectroscopy was not precise enough to accomplish this.  However, by the time Hubble was making his measurements, optical spectroscopy had become a precision science, and the Doppler shift of the galaxies could be measured with great accuracy.  In 1929 Hubble announced the discovery of a proportional relationship between the distance to the galaxies and their Doppler shift.  What he found was that the galaxies [3] are receding from us with speeds proportional to their distance [4].  Hubble himself made no claims at that time about what these data meant from a cosmological point of view, but others quickly noted that this Hubble effect could be explained if the universe were expanding.
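In modern notation, the two relations Hubble combined are, for speeds well below the speed of light,

$$z = \frac{\lambda_{\text{obs}} - \lambda_{\text{emit}}}{\lambda_{\text{emit}}} \approx \frac{v}{c}, \qquad v = H_0\, d$$

so a measured redshift $z$ gives a recession speed, and the proportionality constant $H_0$ (the Hubble constant) ties that speed to Hubble’s measured distances.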

Einstein’s Mistake

The state of the universe had been in doubt ever since Heber Curtis observed the novae in the Andromeda galaxy in 1917.  Einstein published a paper that same year in which he sought to resolve a problem that had appeared in the solution to his field equations.  It appeared that the universe should either be expanding or contracting.  Because the night sky literally was the firmament, it went against the mentality of the times to think of the universe as something intrinsically unstable, so Einstein fixed it with an extra term in his field equations, adding something called the cosmological constant, denoted by the Greek lambda (Λ).  This extra term put the universe into a static equilibrium, and Einstein could rest easy with his firm trust in the firmament.  However, a few years later, in 1922, the Russian physicist and mathematician Alexander Friedmann (1888 – 1925) published a paper that showed that Einstein’s static equilibrium was actually unstable, meaning that small perturbations away from the current energy density would either grow or shrink.  This same result was found independently by the Belgian astronomer Georges Lemaître in 1927, who suggested that not only was the universe expanding, but that it had originated in a singular event (now known as the Big Bang).  Einstein was dismissive of Lemaître’s proposal and even quipped “Your calculations are correct, but your physics is atrocious.” [5]  But after Hubble published his observation on the red shifts of galaxies in 1929, Lemaître pointed out that the redshifts would be explained by an expanding universe.  Although Hubble himself never fully adopted this point of view, Einstein immediately saw it for what it was—a clear and simple explanation for a basic physical phenomenon that he had foolishly overlooked.  Einstein retracted his cosmological constant in embarrassment and gave his support to Lemaître’s expanding universe.  Nonetheless, Einstein’s physical intuition was never too far from the mark, and the cosmological constant has been resurrected in recent years in the form of Dark Energy.  However, something else, both remarkable and disturbing, reared its head in the intervening years—Dark Matter.

Fritz Zwicky: Gadfly Genius

It is difficult to write about important advances in astronomy and astrophysics of the 20th century without tripping over Fritz Zwicky.  As the gadfly genius that he was, he had a tendency to shoot close to the mark, or at least some of his many crazy ideas tended to be right.  He was also in the right place at the right time, at the Mt. Wilson Observatory near Cal Tech, with regular access to the world’s largest telescope.  Shortly after Hubble proved that the nebulae were other galaxies and used Doppler shifts to measure their speeds, Zwicky (with Baade) began a study of as many galactic speeds and distances as they could.  He was able to construct a three-dimensional map of the galaxies in the relatively nearby Coma galaxy cluster, together with their velocities.  He then deduced that the galaxies in this isolated cluster were gravitationally bound to each other, performing a whirling dance in each other’s thrall, like stars in globular star clusters in our Milky Way.  But there was a serious problem.

Star clusters display average speeds and average gravitational potentials that are nicely balanced, a result predicted from a theorem of mechanics that was named the Virial Theorem by Rudolf Clausius in 1870.  The Virial Theorem states that the average kinetic energy of a system of many bodies is directly related to the average potential energy of the system.  By applying the Virial Theorem to the galaxies of the Coma cluster, Zwicky found that the dynamics of the galaxies were badly out of balance.  The galaxy kinetic energies were far too large relative to the gravitational potential—the galaxies were moving so fast, in fact, that they should have flown off away from each other and not been bound at all.  To reconcile this discrepancy of the galactic speeds with the obvious fact that the galaxies were gravitationally bound, Zwicky postulated that there was unobserved matter present in the cluster that supplied the missing gravitational potential.  The amount of missing potential was very large, and Zwicky’s calculations predicted that there was 400 times as much invisible matter, which he called “dark matter”, as visible.  With his usual flair for the dramatic, Zwicky announced his findings to the World in 1933, but the World shrugged—after all, it was just Zwicky.
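In rough form (a standard back-of-the-envelope version, not Zwicky’s exact numbers), the Virial Theorem for a gravitationally bound system reads

$$2\langle T\rangle + \langle U\rangle = 0 \quad\Rightarrow\quad M\sigma^2 \sim \frac{GM^2}{R} \quad\Rightarrow\quad M_{\text{virial}} \sim \frac{\sigma^2 R}{G}$$

where $\sigma$ is the measured velocity dispersion of the galaxies and $R$ is the cluster radius.  When $M_{\text{virial}}$ far exceeds the mass inferred from the cluster’s light, the difference must be dark.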

Nonetheless, Zwicky’s and Baade’s observations of the structure of the Coma cluster, and the calculations using the Virial Theorem, were verified by other astronomers. Something was clearly happening in the Coma cluster, but other scientists and astronomers did not have the courage or vision to make the bold assessment that Zwicky had. The problem of the Coma cluster, and a growing number of additional galaxy clusters that have been studied during the succeeding years, was to remain a thorn in the side of gravitational theory through half a century, and indeed remains a thorn to the present day. It is an important clue to a big question about the nature of gravity, which is arguably the least understood of the four forces of nature.

Vera Rubin: Galaxy Rotation Curves

Galactic clusters are among the largest coherent structures in the observable universe, and there are many questions about their origin and dynamics.  Smaller gravitationally bound structures that can be handled more easily are individual galaxies themselves.  If something important was missing in the dynamics of galactic clusters, perhaps the dynamics of the stars in individual galaxies could help shed light on the problem.  In the late 1960’s and early 1970’s Vera Rubin at the Carnegie Institution of Washington used newly developed spectrographs to study the speeds of stars in individual galaxies.  From simple Newtonian dynamics it is well understood that the speed of stars as a function of distance from the galactic center should increase with increasing distance up to the average radius of the galaxy, and then should decrease at larger distances.  This trend in speed as a function of radius is called a rotation curve.  As Rubin constructed the rotation curves for many galaxies, the increase of speed with increasing radius at small radii emerged as a clear trend, but the stars farther out in the galaxies were all moving far too fast.  In fact, they are moving so fast that they exceed escape velocity and should have flown off into space long ago.  This disturbing pattern was repeated consistently in one rotation curve after another.
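The Newtonian expectation is a one-line estimate (a minimal sketch for a circular orbit of radius $r$ around the enclosed mass $M(r)$):

$$\frac{v^2}{r} = \frac{G M(r)}{r^2} \quad\Rightarrow\quad v(r) = \sqrt{\frac{G M(r)}{r}}$$

Outside the luminous disk, $M(r)$ should stop growing, giving the Keplerian falloff $v \propto r^{-1/2}$; the flat curves Rubin measured, $v(r)\approx\text{const}$, instead imply $M(r)\propto r$—mass that keeps accumulating where no light is seen.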

A simple fix to the problem of the rotation curves is to assume that there is significant mass present in every galaxy that is not observable either as luminous matter or as interstellar dust.  In other words, there is unobserved matter, dark matter, in all galaxies that keeps all their stars gravitationally bound.  Estimates of the amount of dark matter needed to fix the rotation curves call for about five times as much dark matter as observable matter.  This is not the same factor of 400 that Zwicky had estimated for the Coma cluster, but it is still a surprisingly large number.  In short, 80% of the mass of a galaxy is not normal.  It is neither a perturbation nor an artifact, but something fundamental and large.  In fact, there is so much dark matter in the Universe that it must have a major effect on the overall curvature of space-time according to Einstein’s field equations.  One of the best probes of the large-scale structure of the Universe is the afterglow of the Big Bang, known as the cosmic microwave background (CMB).

The Big Bang

The Big Bang was incredibly hot, but as the Universe expanded, its temperature cooled.  About 379,000 years after the Big Bang, the Universe cooled sufficiently that the electron-ion plasma that filled space at that time condensed primarily into hydrogen.  Plasma is charged and hence is opaque to photons.  Hydrogen, on the other hand, is neutral and transparent.  Therefore, when the hydrogen condensed, the thermal photons suddenly flew free, unimpeded, and have continued unimpeded, continuing to cool, until today the thermal glow has reached about three degrees above absolute zero.  Photons in thermal equilibrium with this low temperature have an average wavelength of a few millimeters corresponding to microwave frequencies, which is why the afterglow of the Big Bang got its CMB name.
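Wien’s displacement law pins down that wavelength scale directly:

$$\lambda_{\text{peak}} = \frac{b}{T} \approx \frac{2.9\times10^{-3}\ \text{m·K}}{2.7\ \text{K}} \approx 1\ \text{mm}$$

placing the peak of the relic blackbody spectrum squarely in the microwave band.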

The CMB is amazingly uniform when viewed from any direction in space, but it is not perfectly uniform.  At the level of 0.005 percent, there are variations in the temperature depending on the location on the sky.  These fluctuations in background temperature are called the CMB anisotropy, and they play an important role in helping to interpret current models of the Universe.  For instance, the average angular size of the fluctuations is related to the overall curvature of the Universe.  This is because, in the early Universe, not all parts of it were in communication with each other, because of its finite age and the finite speed of light.  This set an original spatial scale for the thermal discrepancies.  As the Universe continued to expand, the size of these regional variations expanded with it, and the sizes observed today appear larger or smaller depending on how the universe is curved.  Therefore, to measure the energy density of the Universe, and hence to find its curvature, required measurements of the CMB temperature that were accurate to better than a part in 10,000.

Andrew Lange and Paul Richards: The Lambda and the Omega

In graduate school at Berkeley in 1982, my first graduate research assistantship was in the group of Paul Richards, one of the world leaders in observational cosmology.  One of his senior graduate students at the time, Andrew Lange, was sharp and charismatic, and was leading an ambitious project to measure the cosmic background radiation with an experiment borne by a Japanese sounding rocket.  My job was to create a set of far-infrared dichroic beamsplitters for the spectrometer.  A few days before launch, a technician noticed that the explosive bolts on the rocket nose-cone had expired.  When fired, these would open the cone and expose the instrument at high altitude to the CMB.  The old bolts were duly replaced with fresh ones.  On launch day, the instrument and the sounding rocket worked perfectly, but the explosive bolts failed to fire, and the spectrometer made excellent measurements of the inside of the nose cone all the way up and all the way down until it sank into the Pacific Ocean.  I left Paul’s cosmology group for a more promising career in solid state physics under the direction of Eugene Haller and Leo Falicov, but Paul and Andrew went on to great fame with high-altitude balloon-borne experiments that flew at nearly 40 kilometers, above most of the atmosphere, to measure the CMB anisotropy.

By the late nineties, Andrew was established as a professor at Cal Tech.  He was co-leading an experiment called BOOMerANG that flew a high-altitude balloon around Antarctica, while Paul was leading an experiment called MAXIMA that flew a balloon from Palestine, Texas.  The two experiments had originally been coordinated together, but operational differences turned the former professor/student team into competitors to see who would be the first to measure the shape of the Universe through the CMB anisotropy.  BOOMerANG flew in 1997 and again in 1998, followed by MAXIMA, which flew in 1998 and again in 1999.  In early 2000, Andrew and the BOOMerANG team announced that the Universe was flat, confirmed quickly by an announcement by MAXIMA [BoomerMax].  This means that the energy density of the Universe is exactly critical, and there is precisely enough gravity to balance the expansion of the Universe.  This parameter is known as Omega (Ω).  What was perhaps more important than this discovery was the announcement by Paul’s MAXIMA team that the amount of “normal” baryonic matter in the Universe made up only about 4% of the critical density.  This is a shockingly small number, but it agreed with predictions from Big Bang nucleosynthesis.  When combined with independent measurements of Dark Energy known as Lambda (Λ), it also meant that about 25% of the energy density of the Universe is made up of Dark Matter—about five times more than ordinary matter.  Zwicky’s Dark Matter announcement of 1933, virtually ignored by everyone, had been nearly 70 years ahead of its time [6].

Dark Matter Pursuits

Today, the nature of Dark Matter is one of the greatest mysteries in physics, and the search for direct detection of Dark Matter is one of physics’ greatest pursuits.  The indirect evidence for Dark Matter is incontestable—the CMB anisotropy, matter filaments in the early Universe, the speeds of galaxies in bound clusters, rotation curves of stars in galaxies, gravitational lensing—all of these agree and confirm that most of the gravitational mass of the Universe is Dark.  But what is it?  The leading idea today is that it consists of weakly interacting particles, called cold dark matter (CDM).  The dark matter particles pass right through you without ever disturbing a single electron.  This is unlike the unseen cosmic rays that are also passing through your body at the rate of several per second, leaving ionized trails like bullet holes through your flesh.  Dark matter passes undisturbed through the entire Earth.  This is not entirely unbelievable, because neutrinos, which are part of “normal” matter, also mostly pass through the Earth without interaction.  Admittedly, the physics of neutrinos is not completely understood, but if ordinary matter can interact so weakly, then dark matter is just more extreme and perhaps not so strange.  Of course, this makes detection of dark matter a big challenge.  If a particle exists that won’t interact with anything, then how would you ever measure it?  There are a lot of clever physicists with good ideas for how to do it, but none of the ideas are easy, and none have worked yet.

[1] As of the writing of this chapter, Dark Matter has not been observed in particle form, but only through gravitational effects at large (galactic) scales.

[2] Leavitt, Henrietta S. “1777 Variables in the Magellanic Clouds”. Annals of Harvard College Observatory. LX(IV) (1908) 87-110

[3] Excluding the local group of galaxies, including Andromeda and Triangulum, which are gravitationally influenced by the Milky Way.

[4] Hubble, Edwin (1929). “A relation between distance and radial velocity among extra-galactic nebulae”. PNAS 15 (3): 168–173.

[5] Deprit, A. (1984). “Monsignor Georges Lemaître”. In A. Barger (ed). The Big Bang and Georges Lemaître. Reidel. p. 370.

[6] I was amazed to read in Science magazine in 2004 or 2005, in a section called “Nobel Watch”, that Andrew Lange was a candidate for the Nobel Prize for his work on BOOMerANG.  Around that same time I invited Paul Richards to Purdue to give our weekly physics colloquium.  There was definitely a buzz going around that the BOOMerANG and MAXIMA collaborations were being talked about in Nobel circles.  The next year, the Nobel Prize of 2006 was indeed awarded for work on the Cosmic Microwave Background, but to Mather and Smoot for their earlier work on the COBE satellite.