Science 1916: A Hundred-year Time Capsule

In one of my previous blog posts, as I was searching for Schwarzschild’s original papers on Einstein’s field equations and quantum theory, I obtained a copy of the January 1916 – June 1916 volume of the Proceedings of the Royal Prussian Academy of Sciences through interlibrary loan.  The extremely thick volume arrived at Purdue about a week after I ordered it online.  It came from Oberlin College in Ohio, which had received it as a gift in 1928 from the library of Professor Friedrich Loofs of the University of Halle in Germany.  Loofs had been the Haskell Lecturer at Oberlin for the 1911–1912 academic year.

As I browsed through the volume looking for Schwarzschild’s papers, I was amused to find a cornucopia of turn-of-the-century science topics recorded in its pages.  There were papers on the overbite and lips of marsupials.  There were papers on forgotten languages.  There were papers on ancient Greek texts.  On the origins of religion.  On the philosophy of abstraction.  Histories of Indian dramas.  Reflections on cancer.  But what I found most amazing was a snapshot of the field of physics and mathematics in 1916, with historic papers by historic scientists who changed how we view the world.  Here is a snapshot in time and in space, a period of only six months from a single journal, containing papers from a list of authors that reads like a who’s who of physics.

In 1916 there were three major centers of science in the world with leading science publications: London with the Philosophical Magazine and Proceedings of the Royal Society; Paris with the Comptes Rendus of the Académie des Sciences; and Berlin with the Proceedings of the Royal Prussian Academy of Sciences and Annalen der Physik. In Russia, there were the scientific Journals of St. Petersburg, but the Bolshevik Revolution was brewing that would overwhelm that country for decades.  And in 1916 the academic life of the United States was barely worth noticing except for a few points of light at Yale and Johns Hopkins. 

Berlin in 1916 was embroiled in war, but science proceeded relatively unmolested.  The six-month volume of the Proceedings of the Royal Prussian Academy of Sciences contains a number of gems.  Schwarzschild was one of the most prolific contributors, publishing three papers in just this half-year volume, plus his obituary written by Einstein.  But joining Schwarzschild in this volume were Einstein, Planck, Born, Warburg, Frobenius, and Rubens among others—a pantheon of German scientists mostly cut off from the rest of the world at that time, but single-mindedly following their individual threads woven deep into the fabric of the physical world.

Karl Schwarzschild (1873 – 1916)

Schwarzschild had the unenviable yet effective motivation of his impending death to spur him to complete several projects that he must have known would make his name immortal.  In this six-month volume he published his three most important papers.  The first (pg. 189) was an exact solution to Einstein’s field equations of general relativity.  The solution was for the restricted case of a point mass, yet the derivation yielded the Schwarzschild radius that later became known as the event horizon of a non-rotating black hole.  The second paper (pg. 424) expanded the general relativity solutions to a spherically symmetric incompressible liquid mass.
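For reference, the result being described is usually quoted today in the following modern form (a sketch in current notation, not a transcription of Schwarzschild’s original 1916 variables):

\[ ds^2 = -\left(1 - \frac{r_s}{r}\right) c^2\, dt^2 + \frac{dr^2}{1 - r_s/r} + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right), \qquad r_s = \frac{2GM}{c^2} \]

where r_s is the Schwarzschild radius that marks the event horizon of a non-rotating black hole.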

Schwarzschild’s solution to Einstein’s field equations for a point mass.

          

Schwarzschild’s extension of the field equation solutions to a finite incompressible fluid.

The subject, content and success of these two papers were wholly unexpected from this observational astronomer stationed on the Russian Front during WWI calculating trajectories for German bombardments.  He would not have been considered a theoretical physicist but for the importance of his results and the sophistication of his methods.  Within only a year after Einstein published his general theory, based as it was on the complicated tensor calculus of Levi-Civita, Christoffel and Ricci-Curbastro that had taken him years to master, Schwarzschild found a solution that evaded even Einstein.

Schwarzschild’s third and final paper (pg. 548) was on an entirely different topic, still not in his official field of astronomy, that positioned all future theoretical work in quantum physics to be phrased in the language of Hamiltonian dynamics and phase space.  He proved that action-angle coordinates were the only acceptable canonical coordinates to be used when quantizing dynamical systems.  This paper answered a central question that had been nagging Bohr and Einstein and Ehrenfest for years—how to quantize dynamical coordinates.  Despite the simple way that Bohr’s quantized hydrogen atom is taught in modern physics, there was an ambiguity in the quantization conditions even for this simple single-electron atom.  The ambiguity arose from the numerous possible canonical coordinate transformations that were admissible, yet which led to different forms of quantized motion. 
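In modern notation, the quantization rule at issue is usually written in terms of action integrals (a standard textbook form, given here for orientation rather than as a transcription of Schwarzschild’s paper):

\[ J_k = \oint p_k \, dq_k = n_k h, \qquad k = 1, \dots, f \]

where the (q_k, p_k) are canonical coordinate–momentum pairs and the n_k are integers.  The ambiguity Schwarzschild resolved was which choice of canonical pairs makes these conditions unambiguous.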

Schwarzschild’s proposal of action-angle variables for quantization of dynamical systems.

Schwarzschild’s doctoral thesis had been a theoretical topic in astrophysics that applied the celestial mechanics theories of Henri Poincaré to binary star systems.  Within Poincaré’s theory were integral invariants that were conserved quantities of the motion.  When a dynamical system had as many constraints as degrees of freedom, then every coordinate had an integral invariant.  In this unexpected last paper from Schwarzschild, he showed how canonical transformation to action-angle coordinates produced a unique representation in terms of action variables (whose dimensions are the same as Planck’s constant).  These action coordinates, with their associated cyclical angle variables, are the only unambiguous representations that can be quantized.  The important points of this paper were amplified a few months later in a publication by Schwarzschild’s friend Paul Epstein (1871 – 1939), solidifying this approach to quantum mechanics.  Paul Ehrenfest (1880 – 1933) continued this work later in 1916 by defining adiabatic invariants whose quantum numbers remain unchanged under slowly varying conditions, and the program started by Schwarzschild was definitively completed by Paul Dirac (1902 – 1984) at the dawn of quantum mechanics in 1925.

Albert Einstein (1879 – 1955)

In 1916 Einstein was mopping up after publishing his definitive field equations of general relativity the year before.  His interests were still cast wide, not restricted only to this latest project.  In the 1916 Jan. to June volume of the Prussian Academy Einstein published two papers.  Each is remarkably short relative to the other papers in the volume, yet the importance of the papers may stand in inverse proportion to their length.

The first paper (pg. 184) is placed right before Schwarzschild’s first paper on February 3.  The subject of the paper is the expression of Maxwell’s equations in four-dimensional spacetime.  It is notable and ironic that Einstein mentions Hermann Minkowski (1864 – 1909) in the first sentence of the paper.  When Minkowski proposed his bold structure of spacetime in 1908, Einstein had been one of his harshest critics, writing letters to the editor about the absurdity of thinking of space and time as a single interchangeable coordinate system.  This is ironic because Einstein today is perhaps best known for the special relativity properties of spacetime, yet he was slow to adopt the spacetime viewpoint.  Einstein only came around to spacetime when he realized around 1910 that a general approach to relativity required the mathematical structure of tensor manifolds, and Minkowski had provided just such a manifold—the pseudo-Riemannian manifold of spacetime.  Einstein subsequently adopted spacetime with a passion and became its greatest champion, calling out Minkowski where possible to give him his due, although Minkowski had already died tragically of a burst appendix in 1909.

Relativistic energy density of electromagnetic fields.

The importance of Einstein’s paper hinges on his derivation of the electromagnetic field energy density using electromagnetic four vectors.  The energy density is part of the source term for his general relativity field equations.  Any form of energy density can warp spacetime, including electromagnetic field energy.  Furthermore, the Einstein field equations of general relativity are nonlinear as gravitational fields modify space and space modifies electromagnetic fields, producing a coupling between gravity and electromagnetism.  This coupling is implicit in the case of the bending of light by gravity, but Einstein’s paper from 1916 makes the connection explicit. 
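To see what “energy density as a source term” looks like in formulas, the standard modern statement (given here as orientation, not as a transcription of Einstein’s 1916 notation) is that the electromagnetic stress-energy tensor enters the field equations on the same footing as any other form of energy:

\[ T^{\mu\nu}_{\rm EM} = \frac{1}{\mu_0}\left( F^{\mu\alpha} F^{\nu}{}_{\alpha} - \tfrac{1}{4}\, g^{\mu\nu} F_{\alpha\beta}F^{\alpha\beta} \right), \qquad G^{\mu\nu} = \frac{8\pi G}{c^4}\, T^{\mu\nu} \]

so any electromagnetic field contributes to the curvature of spacetime.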

Einstein’s second paper (pg. 688) is even shorter and hence one of the most daring publications of his career.  Because the field equations of general relativity are nonlinear, they are not easy to solve exactly, and Einstein was exploring approximate solutions under conditions of slow speeds and weak fields.  In this “non-relativistic” limit the metric tensor separates into a background Minkowski metric plus a small metric perturbation.  This small perturbation obeys a wave equation for a disturbance of the gravitational field that propagates at the speed of light.  Hence, in the June 22 issue of the Prussian Academy proceedings in 1916, Einstein predicted the existence and the properties of gravitational waves.  Exactly one hundred years later in 2016, the LIGO collaboration announced the detection of gravitational waves generated by the merger of two black holes.
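A sketch of the linearization being described, in modern textbook notation (Einstein’s 1916 paper used a somewhat different gauge convention):

\[ g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \quad |h_{\mu\nu}| \ll 1, \qquad \Box\, \bar h_{\mu\nu} = -\frac{16\pi G}{c^4}\, T_{\mu\nu} \]

so that, in vacuum, the trace-reversed perturbation \(\bar h_{\mu\nu}\) obeys a wave equation whose disturbances propagate at the speed of light.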

Einstein’s weak-field low-velocity approximation solutions of his field equations.
Einstein’s prediction of gravitational waves.

Max Planck (1858 – 1947)

Max Planck served as the secretary of the Prussian Academy in 1916 yet was still fully active in his research.  Although he had launched the quantum revolution with his quantum hypothesis of 1900, he was not a major proponent of quantum theory even as late as 1916.  His primary interests lay in thermodynamics and the origins of entropy, following the theoretical approaches of Ludwig Boltzmann (1844 – 1906).  In 1916 he was interested in how best to partition phase space as a way to count states and calculate entropy from first principles.  His paper in the 1916 volume (pg. 653) calculated the absolute entropy of monatomic bodies.
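The counting problem Planck was addressing can be stated compactly (a standard modern statement, included only to show where the phase-space partition enters):

\[ S = k_B \ln W \]

where W is the number of microstates.  Counting W for a continuous phase space requires dividing it into elementary cells, and the size of that cell (ultimately Planck’s constant h per coordinate-momentum pair) is what fixes the absolute, rather than merely relative, value of the entropy.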

Counting microstates by Planck.

Max Born (1882 – 1970)

Max Born was to be one of the leading champions of the quantum mechanical revolution based at the University of Göttingen in the 1920’s.  But in 1916 he was on leave from the University of Berlin working on sound ranging for artillery.  Yet he still pursued his academic interests, like Schwarzschild.  On pg. 614 in the Proceedings of the Prussian Academy, Born published a paper on anisotropic liquids, such as liquid crystals, and the effect of electric fields on them.  It is astonishing to think that so many of the flat-panel displays we have today, whether on our watches or smart phones, are technological descendants of work by Born at the beginning of his career.

Born on liquid crystals.

Ferdinand Frobenius (1849 – 1917)

Like Schwarzschild, Frobenius was at the end of his career in 1916 and would pass away one year later, but unlike Schwarzschild, his career had been a long one: he had received his doctorate under Weierstrass and explored elliptic functions, differential equations, number theory and group theory.  One of his contributions to group theory appears in the May 4th issue on page 542, where he explores the composition series of a group.

Frobenius on groups.

Heinrich Rubens (1865 – 1922)

Max Planck owed his quantum breakthrough in part to the exquisitely accurate experimental measurements made by Heinrich Rubens on black body radiation.  It was only because of the precise shape of what came to be called the Planck spectrum that Planck could say with such confidence that his theory of quantized radiation interactions fit Rubens’ spectrum so perfectly.  In 1916 Rubens was at the University of Berlin, having taken the position vacated by Paul Drude in 1906.  He was a specialist in infrared spectroscopy, and on page 167 of the Proceedings he describes the spectrum of steam and its consequences for the quantum theory.

Rubens and the infrared spectrum of steam.

Emil Warburg (1846 – 1931)

Emil Warburg’s fame today rests primarily on being the father of Otto Warburg, who won the 1931 Nobel prize in physiology.  On page 314 Warburg reports on photochemical processes in hydrogen bromide (HBr) gas.  In an obscure and very indirect way, I am an academic descendant of Emil Warburg.  One of his students was Robert Pohl, a famous early researcher in solid state physics sometimes called the “father of solid state physics”.  Pohl was in the physics department at Göttingen in the 1920’s along with Born and Franck during the golden age of quantum mechanics.  Robert Pohl’s son, Robert Otto Pohl, was my professor for introductory electromagnetism when I was a sophomore at Cornell University in 1978; the course used a textbook by the Nobel laureate Edward Purcell, a quirky volume of the Berkeley Physics Course series.  This makes Emil Warburg my professor’s father’s professor.

Warburg on photochemistry.

Papers in the 1916 Vol. 1 of the Prussian Academy of Sciences

Schulze,  Alt– und Neuindisches

Orth,  Zur Frage nach den Beziehungen des Alkoholismus zur Tuberkulose

Schulze,  Die Erhabenheiten auf der Lippen- und Wangenschleimhaut der Säugetiere

von Wilamowitz-Moellendorff,  Die Samia des Menandros

Engler,  Bericht über das >>Pflanzenreich<<

von Harnack,  Bericht über die Ausgabe der griechischen Kirchenväter der drei ersten Jahrhunderte

Meinecke,  Germanischer und romanischer Geist im Wandel der deutschen Geschichtsauffassung

Rubens und Hettner,  Das langwellige Wasserdampfspektrum und seine Deutung durch die Quantentheorie

Einstein,  Eine neue formale Deutung der Maxwellschen Feldgleichungen der Elektrodynamik

Schwarzschild,  Über das Gravitationsfeld eines Massenpunktes nach der Einsteinschen Theorie

Helmreich,  Handschriftliche Verbesserungen zu dem Hippokratesglossar des Galen

Prager,  Über die Periode des veränderlichen Sterns RR Lyrae

Holl,  Die Zeitfolge des ersten origenistischen Streits

Lüders,  Zu den Upanisads. I. Die Samvargavidya

Warburg,  Über den Energieumsatz bei photochemischen Vorgängen in Gasen. VI.

Hellman,  Über die ägyptischen Witterungsangaben im Kalender von Claudius Ptolemaeus

Meyer-Lübke,  Die Diphthonge im Provenzalischen

Diels,  Über die Schrift Antipocras des Nikolaus von Polen

Müller und Sieg,  Maitrisimit und >>Tocharisch<<

Meyer,  Ein altirischer Heilsegen

Schwarzschild,  Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit nach der Einsteinschen Theorie

Brauer,  Die Verbreitung der Hyracoiden

Correns,  Untersuchungen über Geschlechtsbestimmung bei Distelarten

Brahn,  Weitere Untersuchungen über Fermente in der Leber von Krebskranken

Erdmann,  Methodologische Konsequenzen aus der Theorie der Abstraktion

Bang,  Studien zur vergleichenden Grammatik der Türksprachen. I.

Frobenius,  Über die  Kompositionsreihe einer Gruppe

Schwarzschild,  Zur Quantenhypothese

Fischer und Bergmann,  Über neue Galloylderivate des Traubenzuckers und ihren Vergleich mit der Chebulinsäure

Schuchhardt,  Der starke Wall und die breite, zuweilen erhöhte Berme bei frühgeschichtlichen Burgen in Norddeutschland

Born,  Über anisotrope Flüssigkeiten

Planck,  Über die absolute Entropie einatomiger Körper

Haberlandt,  Blattepidermis und Lichtperzeption

Einstein,  Näherungsweise Integration der Feldgleichungen der Gravitation

Lüders,  Die Saubhikas.  Ein Beitrag zur Geschichte des indischen Dramas

Karl Schwarzschild’s Radius: How Fame Eclipsed a Physicist’s own Legacy

In an ironic twist of the history of physics, Karl Schwarzschild’s fame has eclipsed his own legacy.  If asked who Karl Schwarzschild (1873 – 1916) was, you would probably say he’s the guy who solved Einstein’s Field Equations of General Relativity and discovered the radius of black holes.  You may also know that he accomplished this Herculean feat while dying slowly behind the German lines on the Eastern Front in WWI.  But ask what else he did, and you would probably come up blank.  Yet Schwarzschild was one of the most wide-ranging physicists at the turn of the 20th century, which is saying something, because it places him in the same pantheon as Planck, Lorentz, Poincaré and Einstein.  Let’s take a look at the part of his career that hides in the shadow of his own radius.

A Radius of Interest

Karl Schwarzschild was born in Frankfurt, Germany, shortly after the Franco-Prussian war thrust Prussia onto the world stage as a major political force in Europe.  His family were Jewish merchants of longstanding reputation in the city, and Schwarzschild’s childhood was spent in the vibrant Jewish community.  One of his father’s friends was a professor at a university in Frankfurt, whose son, Paul Epstein (1871 – 1939), became a close friend of Karl’s at the Gymnasium.  Schwarzschild and Epstein would partially shadow each other’s careers despite the fact that Schwarzschild became an astronomer while Epstein became a famous mathematician and number theorist.  This was in part because Schwarzschild had a large radius of interests that spanned the breadth of current mathematics and science, practicing both experiment and theory.

Schwarzschild’s application of the Hamiltonian formalism for quantum systems set the stage for the later adoption of Hamiltonian methods in quantum mechanics. He came dangerously close to stating the uncertainty principle that catapulted Heisenberg to fame.

By the time Schwarzschild was sixteen, he had taught himself the mathematics of celestial mechanics to such depth that he published two papers on the orbits of binary stars.  He also became fascinated by astronomy and purchased lenses and other materials to construct his own telescope.  His interests were helped along by Epstein, who was two years older and whose father had his own private observatory.  When Epstein went to study at the University of Strasbourg (then part of the German Empire) Schwarzschild followed him.  But Schwarzschild’s main interest in astronomy diverged from Epstein’s main interest in mathematics, and Schwarzschild transferred to the University of Munich where he studied under Hugo von Seeliger (1849 – 1924), the premier German astronomer of his day.  Epstein remained at Strasbourg where he studied under Bruno Christoffel (1829 – 1900) and eventually became a professor, but he was forced to relinquish the post when Strasbourg was ceded to France after WWI.

The Birth of Stellar Interferometry

Until the Hubble Space Telescope was launched in 1990, no star had ever been resolved as a direct image.  Within a year of its launch, using its spectacular resolving power, the Hubble optics resolved—just barely—the red supergiant Betelgeuse.  No other star (other than the Sun) is close enough or big enough to image the stellar disk, even for the Hubble far above our atmosphere.  The reason is that the diameter of the Hubble’s optics—as large as it is at 2.4 meters—still produces a diffraction pattern that smears the image so that stars cannot be resolved.  Yet information on the size of a distant object is encoded as phase in the light waves that are emitted from the object, and this phase information is accessible to interferometry.

The first physicist who truly grasped the power of optical interferometry and who understood how to design the first interferometric metrology systems was the French physicist Armand Hippolyte Louis Fizeau (1819 – 1896).  Fizeau became interested in the properties of light when he collaborated with his friend Léon Foucault (1819 – 1868) on early uses of photography.  The two then embarked on a measurement of the speed of light but had a falling out before the experiment could be finished, and both continued the pursuit independently.  Fizeau achieved the first measurement using a rapidly rotating toothed wheel [1], while Foucault came in second using a more versatile system with a spinning mirror [2].  Yet Fizeau surpassed Foucault in optical design and became an expert in interference effects.  Interference apparatus had been developed earlier by Augustin Fresnel (the Fresnel bi-prism, 1819), Humphrey Lloyd (Lloyd’s mirror, 1834) and Jules Jamin (Jamin’s interferential refractor, 1856).  They had found ways of redirecting light using refraction and reflection to cause interference fringes.  But Fizeau was one of the first to recognize that each emitting region of a light source was coherent with itself, and he used this insight, together with lenses, to design the first interferometer.

Fizeau’s interferometer used a lens with a tight focal spot masked off by an opaque screen with two open slits.  When the masked lens device was focused on an intense light source it produced two parallel pencils of light that were mutually coherent but spatially separated.  Fizeau used this apparatus to measure the speed of light in moving water in 1859 [3].

Fig. 1  Optical configuration of the source element of the Fizeau refractometer.

The working principle of the Fizeau refractometer is shown in Fig. 1.  The light source is at the bottom, and it is reflected by the partially-silvered beam splitter to pass through the lens and the mask containing two slits.  (Only the light paths that pass through the double-slit mask on the lens are shown in the figure.)  The slits produce two pencils of mutually coherent light that pass through a system (in the famous Fizeau ether drag experiment it was along two tubes of moving water) and are returned through the same slits, and they intersect at the view port where they produce interference fringes.  The fringe spacing is set by the separation of the two slits in the mask.  The Rayleigh region of the lens defines a region of spatial coherence even for a so-called “incoherent” source.  Therefore, this apparatus, by use of the lens, could convert an incoherent light source into a coherent probe to test the refractive index of test materials, which is why it was called a refractometer. 

Fizeau became adept at thinking of alternative optical designs of his refractometer and alternative applications.  In an address to the French Physical Society in 1868 he suggested that the double-slit mask could be used on a telescope to determine sizes of distant astronomical objects [4].  There were several subsequent attempts to use Fizeau’s configuration in astronomical observations, but none were conclusive and hence were not widely known.

An optical configuration and astronomical application that was very similar to Fizeau’s idea was proposed by Albert Michelson in 1890 [5].  He built the apparatus and used it to successfully measure the size of several moons of Jupiter [6].  The configuration of the Michelson stellar interferometer is shown in Fig. 2.  Light from a distant star passes through two slits in the mask in front of the collecting optics of a telescope.  When the two pencils of light intersect at the view port, they produce interference fringes.  Because of the finite size of the stellar source, the fringes are partially washed out.  By adjusting the slit separation, a certain separation can be found where the fringes completely wash out.  The size of the star is then related to the separation of the slits for which the fringe visibility vanishes.  This simple principle allows this type of stellar interferometry to measure the size of stars that are large and relatively close to Earth.  However, if stars are too far away even this approach cannot be used to measure their sizes because telescopes aren’t big enough.  This limitation is currently being bypassed by the use of long-baseline optical interferometers.
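The quantitative condition behind “the fringes completely wash out” is worth stating (a standard result for a uniformly bright stellar disk, quoted here for orientation; Michelson’s papers derive it in detail):

\[ V(d) = \left| \frac{2 J_1(\pi \theta d / \lambda)}{\pi \theta d / \lambda} \right| , \qquad V = 0 \;\; \text{first when} \;\; \theta = 1.22\,\frac{\lambda}{d} \]

where θ is the angular diameter of the star, d is the slit separation, λ is the wavelength, and J_1 is a Bessel function.  Measuring the slit separation at the first null of the fringe visibility therefore gives the stellar angular size directly.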

Fig. 2  Optical configuration of the Michelson stellar interferometer.  Fringes at the view port are partially washed out by the finite size of the star.  By adjusting the slit separation, the fringes can be made to vanish entirely, yielding an equation that can be solved for the size of the star.

One of the open questions in the history of interferometry is whether Michelson was aware of Fizeau’s proposal for the stellar interferometer made in 1868.  Michelson was well aware of Fizeau’s published research and acknowledged him as a direct inspiration of his own work in interference effects.  But Michelson was unaware of the unpublished undercurrents within the French school of optical interference.  When he visited Paris in 1881, he met with many of the leading figures in this school (including Lippmann and Cornu), but there is no mention or any evidence that he met with Fizeau.  By this time Fizeau’s wife had passed away, and Fizeau spent most of his time in seclusion at his home outside Paris.  Therefore, it is unlikely that he would have been present during Michelson’s visit.  Because Michelson viewed Fizeau with such awe and respect, if he had met him, he most certainly would have mentioned it.  Therefore, Michelson’s invention of the stellar interferometer can be considered with some confidence to be a case of independent discovery.  It is perhaps not surprising that he hit on the same idea that Fizeau had in 1868, because Michelson was one of the few physicists who understood coherence and interference at the same depth as Fizeau.

Schwarzschild’s Stellar Interferometer

The physics of the Michelson stellar interferometer is very similar to the physics of Young’s double slit experiment.  The two slits in the aperture mask of the telescope objective act to produce a simple sinusoidal interference pattern at the image plane of the optical system.  The size of the stellar diameter is determined by using the wash-out effect of the fringes caused by the finite stellar size.  However, it is well known to physicists who work with diffraction gratings that a multiple-slit interference pattern has a much greater resolving power than a simple double slit. 
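To put a number on that resolving-power claim (a standard diffraction-grating estimate, included here for context):

\[ \Delta\theta_{\rm peak} \approx \frac{\lambda}{N d} \]

so for N slits of spacing d the principal interference maxima are roughly N/2 times narrower than the fringes of a double slit with the same spacing, which is what allows a multiple-slit mask to resolve a close double image directly rather than merely detect a loss of fringe visibility.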

This realization must have hit von Seeliger and Schwarzschild, working together at Munich, when they saw the publication of Michelson’s theoretical analysis of his stellar interferometer in 1890, followed by his use of the apparatus to measure the size of Jupiter’s moons.  Schwarzschild and von Seeliger realized that by replacing the double-slit mask with a multiple-slit mask, the widths of the interference maxima would be much narrower.  Such a diffraction mask on a telescope would cause a star to produce a multiple set of images on the image plane of the telescope associated with the multiple diffraction orders.  More interestingly, if the target were a binary star, the diffraction would produce two sets of diffraction maxima—a double image!  If the “finesse” of the grating is high enough, the binary star separation could be resolved as a doublet in the diffraction pattern at the image, and the separation could be measured, giving the angular separation of the two stars of the binary system.  Such an approach to the binary separation would be a direct measurement, which was a distinct and clever improvement over the indirect Michelson configuration that required finding the extinction of the fringe visibility. 

Schwarzschild enlisted the help of a fine German instrument maker to create a multiple-slit system that had an adjustable slit separation.  The device is shown in Fig. 3 from Schwarzschild’s 1896 publication on the use of the stellar interferometer to measure the separation of binary stars [7].  The device is ingenious.  By rotating the chain around the gear on the right-hand side of the apparatus, the two metal plates with four slits could be raised or lowered, causing the projection onto the objective plane to have variable slit spacings.  In the operation of the telescope, the changing height of the slits does not matter, because they are near a conjugate optical plane (the entrance pupil) of the optical system.  Using this adjustable multiple-slit system, Schwarzschild (and two colleagues he enlisted) made multiple observations of well-known binary star systems, and they calculated the star separations.  Several of their published results are shown in Fig. 4.

Fig. 3  Illustration from Schwarzschild’s 1896 paper describing an improvement of the Michelson interferometer for measuring the separation of binary star systems Ref. [7].
Fig. 4  Data page from Schwarzschild’s 1896 paper measuring the angular separation of two well-known binary star systems: gamma Leonis and xi Ursae Majoris. Ref. [7]

Schwarzschild’s publication demonstrated one of the very first uses of stellar interferometry—well before Michelson himself used his own configuration to measure the diameter of Betelgeuse in 1920.  Schwarzschild’s major achievement was performed before he had received his doctorate, on a topic orthogonal to his dissertation topic.  Yet this fact is virtually unknown to the broader physics community outside of astronomy.  If he had not become so famous later for his solution of Einstein’s field equations, Schwarzschild nonetheless might have been famous for his early contributions to stellar interferometry.  But even this was not the end of his unique contributions to physics.

Adiabatic Physics

As Schwarzschild worked for his doctorate under von Seeliger, his dissertation topic was on new theories by Henri Poincaré (1854 – 1912) on celestial mechanics.  Poincaré had made a big splash on the international stage with the publication of his prize-winning memoire in 1890 on the three-body problem.  This is the publication where Poincaré first described what would later become known as chaos theory.  The memoire was followed by his volumes on “New Methods in Celestial Mechanics” published between 1892 and 1899.  Poincaré’s work on celestial mechanics was based on his earlier work on the theory of dynamical systems, where he developed important invariant theorems, including integral invariants that generalize Liouville’s theorem on the conservation of phase-space volume.  Schwarzschild applied Poincaré’s theorems to problems in celestial orbits.  He took his doctorate in 1896 and received a post at an astronomical observatory outside Vienna.

While at Vienna, Schwarzschild made his most important sustained contributions to the science of astronomy.  Astronomical observations had been dominated for centuries by the human eye, but photographic techniques had been making steady inroads since the time of Hermann Carl Vogel (1841 – 1907) in the 1880’s at the Potsdam observatory.  Photographic plates were used primarily to record star positions but were known to be unreliable for recording stellar intensities.  Schwarzschild developed an “out-of-focus” technique that blurred the star’s image, making it larger and easier to measure through the density of the exposed and developed photographic emulsion.  In this way, Schwarzschild measured the magnitudes of 367 stars.  Two of these stars had variable magnitudes that he was able to record and track.  Schwarzschild correctly explained the intensity variation as caused by steady oscillations in heating and cooling of the stellar atmosphere.  This work established the properties of these Cepheid variables, which would become some of the most important “standard candles” for the measurement of cosmological distances.  Based on the importance of this work, Schwarzschild returned to Munich as a teacher in 1899 and subsequently was appointed in 1901 as the director of the observatory at Göttingen established by Gauss eighty years earlier.

Schwarzschild’s years at Göttingen brought him into contact with some of the greatest mathematicians and physicists of that era.  The mathematicians included Felix Klein, David Hilbert and Hermann Minkowski.  The physicists included von Laue, a student of Woldemar Voigt.  This period was one of several “golden ages” of Göttingen.  The first golden age was the time of Gauss and Riemann in the mid-1800’s.  The second golden age, when Schwarzschild was present, began when Felix Klein arrived at Göttingen and attracted the top mathematicians of the time.  The third golden age of Göttingen was the time of Born and Jordan and Heisenberg at the birth of quantum mechanics in the mid 1920’s.

In 1906, the Austrian physicist Paul Ehrenfest, freshly out of his PhD under the supervision of Boltzmann, arrived at Göttingen only weeks before Boltzmann took his own life.  Felix Klein at Göttingen had been relying on Boltzmann to provide a comprehensive review of statistical mechanics for the Mathematical Encyclopedia, so he now entrusted this project to the young Ehrenfest.  It was a monumental task, which was to take him and his physicist wife Tatyana nearly five years to complete.  Part of the delay was the desire by the Ehrenfests to close some open problems that remained in Boltzmann’s work.  One of these was a mechanical theorem of Boltzmann’s that identified properties of statistical mechanical systems that remained unaltered through a very slow change in system parameters.  These properties would later be called adiabatic invariants by Einstein.

Ehrenfest recognized that Wien’s displacement law, which had been a guiding light for Planck and his theory of black body radiation, had originally been derived by Wien using classical principles related to slow changes in the volume of a cavity.  Ehrenfest was struck by the fact that such slow changes would not induce changes in the quantum numbers of the quantized states, and hence that the quantum numbers must be adiabatic invariants of the black body system.  This not only explained why Wien’s displacement law continued to hold under quantum as well as classical considerations, but it also explained why Planck’s quantization of the energy of his simple oscillators was the only possible choice.  For a classical harmonic oscillator, the ratio of the energy of oscillation to the frequency of oscillation is an adiabatic invariant, which is immediately recognized as Planck’s quantum condition E/ν = nh.

Ehrenfest published his observations in 1913 [8], the same year that Bohr published his theory of the hydrogen atom.  Ehrenfest immediately applied the theory of adiabatic invariants to Bohr’s model and discovered that the quantum condition for the quantized energy levels was again an adiabatic invariant of the electron orbits, and not merely a consequence of integer multiples of angular momentum, which had seemed somewhat ad hoc.

After eight exciting years at Göttingen, Schwarzschild was offered the directorship of the Potsdam Observatory in 1909, the post previously held by the famous German astronomer Carl Vogel, who had made the first confirmed measurements of the optical Doppler effect.  Schwarzschild accepted and moved to Potsdam with his young family.  His son Martin Schwarzschild would follow him into the profession, becoming a famous astronomer at Princeton University and a theorist of stellar structure.  At the outbreak of WWI, Schwarzschild joined the German army out of a sense of patriotism.  Because of his advanced education he was made an artillery officer with the job of calculating artillery trajectories, and after a short time on the Western Front in Belgium he was transferred to the Eastern Front in Russia.  Though he was not in the trenches, he was in the midst of the chaos to the rear of the front.  Despite this situation, he found time to pursue his science through the year 1915.

Schwarzschild was intrigued by Ehrenfest’s paper on adiabatic invariants and their similarity to several of the invariant theorems of Poincaré that he had studied for his doctorate.  Up until this time, mechanics had been mostly pursued through the Lagrangian formalism, which could easily handle generalized forces associated with dissipation.  But celestial mechanics deals with conservative systems for which the Hamiltonian formalism is a more natural approach.  In particular, the Hamilton-Jacobi canonical transformations made it particularly easy to find pairs of generalized coordinates that had simple periodic behavior.  In his published paper [9], Schwarzschild called these “Action-Angle” coordinates because one was the action integral that was well known from the principle of “Least Action”, and the other was like an angle variable that changed steadily in time (see Fig. 5).  Action-angle coordinates have come to form the foundation of many of the properties of Hamiltonian chaos, Hamiltonian maps, and Hamiltonian tapestries.
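For orientation, the action-angle construction Schwarzschild used can be summarized as follows (standard modern form; his paper calls the variables Wirkungsvariable and Winkelvariable, as noted in Fig. 5):

\[ J_k = \oint p_k \, dq_k , \qquad \dot w_k = \frac{\partial H(J)}{\partial J_k} = \nu_k = \text{const} \]

so each action J_k is a constant of the motion while its conjugate angle w_k advances uniformly at the orbital frequency ν_k, and quantization amounts to setting each J_k equal to an integer multiple of h.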

Fig. 5  Description of the canonical transformation to action-angle coordinates (Ref. [9] pg. 549). Schwarzschild names the new coordinates “Wirkungsvariable” and “Winkelvariable”.

During lulls in bombardments, Schwarzschild translated the Hamilton-Jacobi methods of celestial mechanics to apply them to the new quantum mechanics of the Bohr orbits.  The phrase “quantum mechanics” had not yet been coined (that would come ten years later in a paper by Max Born), but it was clear that the Bohr quantization conditions were a new type of mechanics.  The periodicities that were inherent in the quantum systems were natural properties that could be mapped onto the periodicities of the angle variables, while Ehrenfest’s adiabatic invariants could be mapped onto the slowly varying action integrals.  Schwarzschild showed that action-angle coordinates were the only allowed choice of coordinates, because they enabled the separation of the Hamilton-Jacobi equations and hence provided the correct quantization conditions for the Bohr electron orbits.  Later, when Sommerfeld published his quantized elliptical orbits in 1916, the multiplicity of quantum conditions and orbits caused concern, but Ehrenfest came to the rescue, showing that each of Sommerfeld’s quantum conditions was precisely one of Schwarzschild’s action-integral invariants of the classical electron dynamics [10].

The works by Schwarzschild, and a closely related paper that amplified his ideas published by his friend Paul Epstein several months later [11], were the first to show the power of the Hamiltonian formulation of dynamics for quantum systems, foreshadowing the future importance of Hamiltonians for quantum theory.  An essential part of the Hamiltonian formalism is the concept of phase space.  In his paper, Schwarzschild showed that the phase space of quantum systems was divided into small but finite elementary regions whose areas were equal to Planck’s constant h (see Fig. 6).  The areas were products of a small change in momentum coordinate Δp and a corresponding small change in position coordinate Δx.  Therefore, the product Δx·Δp = h.  This observation, made in 1915 by Schwarzschild, was only one step away from Heisenberg’s uncertainty relation, twelve years before Heisenberg discovered it.  However, in 1915 Born’s probabilistic interpretation of quantum mechanics had not yet been made, nor the idea of measurement uncertainty, so Schwarzschild did not have the appropriate context in which to make the leap to the uncertainty principle.  However, by introducing the action-angle coordinates as well as the Hamiltonian formalism applied to quantum systems, with the natural structure of phase space, Schwarzschild laid the foundation for the future developments in quantum theory made by the next generation.
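To make the “one step away” concrete, compare the two statements side by side (modern forms, not Schwarzschild’s notation):

\[ \Delta x\,\Delta p = h \;\;\text{(elementary phase-space cell of the old quantum theory)} \qquad \text{vs.} \qquad \Delta x\,\Delta p \ge \frac{\hbar}{2} \;\;\text{(Heisenberg, 1927)} \]

The first is a statement about how finely phase space can be partitioned when counting quantum states; the second is a statement about the statistical spread of measurement outcomes, which required Born’s probabilistic interpretation before it could even be formulated.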

Fig. 6  Expression of the division of phase space into elemental areas of action equal to h (Ref. [9] pg. 550).

All Quiet on the Eastern Front

Towards the end of his second stay in Munich in 1900, prior to joining the Göttingen faculty, Schwarzschild had presented a paper at a meeting of the German Astronomical Society held in Heidelberg in August.  The topic was unlike anything he had tackled before.  It considered the highly theoretical question of whether the universe was non-Euclidean, and more specifically whether it had curvature.  He concluded from observation that if the universe were curved, its radius of curvature had to be larger than roughly 50 light-years if the geometry were hyperbolic, or roughly 2000 light-years if it were elliptical.  Schwarzschild was working out ideas of differential geometry and applying them to the universe at large at a time when Einstein was just graduating from the ETH, where he skipped his math classes and had his friend Marcel Grossmann take notes for him.

The topic of Schwarzschild’s talk tells an important story about the warping of historical perspective by the “great man” syndrome.  In this case the great man is Einstein, who is today given all the credit for discovering the warping of space.  His development of General Relativity is often portrayed as the work of a lone genius in the wilderness performing a blazing act of creation out of the void.  In fact, non-Euclidean geometry had been around for some time by 1900—five years before Einstein’s Special Theory and ten years before his first publications on the General Theory.  Gauss had developed the idea of the intrinsic curvature of a manifold fifty years earlier, an idea amplified by Riemann.  By the turn of the century alternative geometries were all the rage, and Schwarzschild considered whether there were sufficient astronomical observations to set limits on the size of the curvature of the universe.  But revisionist history is just as prevalent in physics as in any field, and when someone like Einstein becomes so big in the mind’s eye, his shadow makes it difficult to see all the people standing behind him.

This is not meant to take away from the feat that Einstein accomplished.  The General Theory of Relativity, published by Einstein in its full form in 1915, was spectacular [12].  Einstein had taken vague notions about curved spaces and had made them specific, mathematically rigorous and intimately connected with physics through the mass-energy source term in his field equations.  His mathematics had gone beyond even what his mathematician friend and former collaborator Grossmann could achieve.  Yet Einstein’s field equations were nonlinear tensor differential equations in which the warping of space depended on the strength of the energy fields, while the configuration of those energy fields depended in turn on the warping of space.  This type of nonlinear equation is difficult to solve in general terms, and Einstein was not immediately aware of how to find the solutions to his own equations.

Therefore, it was no small surprise to him when he received a letter from the Eastern Front from an astronomer he barely knew who had found a solution—a simple solution (see Fig. 7)—to his field equations.  Einstein probably wondered how he could have missed it, but he was generous and presented the letter to the Prussian Academy of Sciences, in whose Proceedings it was published in 1916 [13].

Fig. 7  Schwarzschild’s solution of the Einstein Field Equations (Ref. [13] pg. 194).

In the same paper, Schwarzschild used his exact solution to derive the expression for the precession of the perihelion of Mercury that Einstein had only calculated approximately.  The dynamical equations for Mercury are shown in Fig. 8.

Fig. 8  Explanation for the precession of the perihelion of Mercury ( Ref. [13]  pg. 195)

Schwarzschild’s solution to Einstein’s Field Equations of General Relativity was not a general solution, even for a point mass.  His solution contained constants of integration that could take arbitrary values, such as the characteristic length scale that Schwarzschild called “alpha”.  It was David Hilbert who later expanded upon Schwarzschild’s work, giving the general solution and naming the characteristic length scale (where the metric diverges) after Schwarzschild.  This is where the term “Schwarzschild radius” came from, and it stuck.  In fact it stuck so well that Schwarzschild’s radius has now eclipsed much of the rest of Schwarzschild’s considerable accomplishments.

Unfortunately, Schwarzschild’s accomplishments were cut short when he contracted an autoimmune disease that may have been hereditary.  It is ironic that in the carnage of the Eastern Front, it was a genetic disease that caused his death at the age of 42.  He was already suffering from the effects of the disease as he worked on his last publications.  He was sent home from the front to his family in Potsdam, where he passed away several months later, having shepherded his final two papers through the publication process.  His last paper, on the action-angle variables in quantum systems, was published on the day that he died.

Schwarzschild’s Legacy

Schwarzschild’s legacy was assured when he solved Einstein’s field equations and Einstein communicated it to the world. But his hidden legacy is no less important.

Schwarzschild’s application of the Hamiltonian formalism of canonical transformations and phase space for quantum systems set the stage for the later adoption of Hamiltonian methods in quantum mechanics. He came dangerously close to stating the uncertainty principle that catapulted Heisenberg to later fame, although he could not express it in probabilistic terms because he came too early.

Schwarzschild is considered to be the greatest German astronomer of the last hundred years.  This is based in part on his work at the birth of stellar interferometry and in part on his development of stellar photometry and the calibration of the Cepheid variable stars that went on to revolutionize our view of our place in the universe.  Solving Einstein’s field equations was just a sideline for him, a hobby to occupy his active and curious mind.


[1] Fizeau, H. L. (1849). “Sur une expérience relative à la vitesse de propagation de la lumière.” Comptes rendus de l’Académie des sciences 29: 90–92, 132.

[2] Foucault, J. L. (1862). “Détermination expérimentale de la vitesse de la lumière: parallaxe du Soleil.” Comptes rendus de l’Académie des sciences 55: 501–503, 792–796.

[3] Fizeau, H. (1859). “Sur les hypothèses relatives à l’éther lumineux.” Ann. Chim. Phys.  Ser. 4 57: 385–404.

[4] Fizeau, H. (1868). “Prix Bordin: Rapport sur le concours de l’annee 1867.” C. R. Acad. Sci. 66: 932.

[5] Michelson, A. A. (1890). “I. On the application of interference methods to astronomical measurements.” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 30(182): 1-21.

[6] Michelson, A. A. (1891). “Measurement of Jupiter’s Satellites by Interference.” Nature 45(1155): 160-161.

[7] Schwarzschild, K. (1896). “Über Messung von Doppelsternen durch Interferenzen.” Astron. Nachr. 139 (no. 3335).

[8] P. Ehrenfest, “Een mechanische theorema van Boltzmann en zijne betrekking tot de quanta theorie (A mechanical theorem of Boltzmann and its relation to the theory of energy quanta),” Verslag van de Gewone Vergaderingen der Wis- en Natuurkundige Afdeeling, vol. 22, pp. 586-593, 1913.

[9] Schwarzschild, K. (1916). “Zur Quantenhypothese.” Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften: 548-568.

[10] P. Ehrenfest, “Adiabatic invariants and quantum theory,” Annalen der Physik, vol. 51, pp. 327-352, Oct 1916.

[11] Epstein, P. S. (1916). “The quantum theory.” Annalen Der Physik 51(18): 168-188.

[12] Einstein, A. (1915). “On the general theory of relativity.” Sitzungsberichte Der Koniglich Preussischen Akademie Der Wissenschaften: 778-786.

[13] Schwarzschild, K. (1916). “Über das Gravitationsfeld eines Massenpunktes nach der Einstein’schen Theorie.” Sitzungsberichte der Königlich-Preussischen Akademie der Wissenschaften: 189.

How to Teach General Relativity to Undergraduate Physics Majors

As a graduate student in physics at Berkeley in the 1980’s, I took General Relativity (aka GR) from Bruno Zumino, a world-famous physicist known as one of the originators of supersymmetry (not to be confused with the super-asymmetry of Cooper-Fowler Big Bang Theory fame).  The class textbook was Gravitation and cosmology: principles and applications of the general theory of relativity, by Steven Weinberg, another world-famous physicist, in this case known for unifying the weak force with electromagnetism into the electroweak theory.  With so much expertise at hand, how could I fail to absorb the simple essence of general relativity?

The answer is that I failed miserably.  Somehow, I managed to pass the course, but I walked away with nothing!  And it bugged me for years.  What was so hard about GR?  It took me almost a decade of teaching undergraduate physics classes at Purdue in the 90’s before I realized that my biggest obstacle had been language:  I kept mistaking the words and terms of GR as if they were English.  Words like “general covariance” and “contravariant” and “contraction” and “covariant derivative”.  They sounded like English, with lots of “co” prefixes that were hard to keep straight, but they actually are part of a very different language that I call Physics-ese.

Physics-ese is a language that has lots of words that sound like English, so you think you know what the words mean, but the words sometimes have meanings opposite to what you would guess.  And the meanings in Physics-ese are precisely defined, not something that can be left to interpretation.  I learned this while teaching the intro courses to non-majors, because so many times when the students were confused, it turned out that they had mistaken a textbook jargon term for English.  If you told them that the word wasn’t English, but just a token standing for a well-defined object or process, it would unshackle them from their misconceptions.

Then, in the early 00’s when I started to explore the physics of generalized trajectories related to some of my own research interests, I realized that the primary obstacle to my learning anything in the Gravitation course was Physics-ese.   So this raised the question in my mind: what would it take to teach GR to undergraduate physics majors in a relatively painless manner?  This is my answer. 

More on this topic can be found in Chapter 11 of the textbook IMD2: Introduction to Modern Dynamics, 2nd Edition, Oxford University Press, 2019

Trajectories as Flows

One of the culprits for my mind block learning GR was Newton himself.  His ubiquitous second law, taught as F = ma, is surprisingly misleading if one wants to have a more general understanding of what a trajectory is.  This is particularly the case for light paths, which can be bent by gravity, yet clearly cannot have any forces acting on them. 

The way to fix this is subtle yet simple.  First, express Newton’s second law as the pair of first-order equations

dx/dt = p/m        dp/dt = F

which is actually closer to the way that Newton expressed the law in his Principia.  In three dimensions for a single particle, these equations represent a 6-dimensional dynamical space called phase space: three coordinate dimensions and three momentum dimensions.  Then generalize the vector quantities, like the position vector, to be expressed as x^a for the six dynamics variables: x, y, z, px, py, and pz.

Now, as part of Physics-ese, putting the index as a superscript instead of as a subscript turns out to be a useful notation when working in higher-dimensional spaces.  This superscript is called a “contravariant index” which sounds like English but is uninterpretable without a Physics-ese-to-English dictionary.  All “contravariant index” means is “column vector component”.  In other words, x^a is just the position vector expressed as a column vector whose six components are x, y, z, px, py and pz.

This superscripted index is called a “contravariant” index, but seriously dude, just forget that “contravariant” word from Physics-ese and just think “index”.  You already know it’s a column vector.

Then Newton’s second law becomes the single flow equation

dx^a/dt = F^a(x^b)

where the index a runs from 1 to 6, and the function F^a is a vector function of the dynamic variables.  To spell it out, this is the set of six equations

dx/dt = px/m        dpx/dt = Fx
dy/dt = py/m        dpy/dt = Fy
dz/dt = pz/m        dpz/dt = Fz

so it’s a lot easier to write it in the one-line form with the index notation.

The simple index notation equation is in the standard form for what is called, in Physics-ese, a “mathematical flow”.  It is an ODE that can be solved for any set of initial conditions for a given trajectory.  Or a whole field of solutions can be considered in a phase-space portrait that looks like the flow lines of hydrodynamics.  The phase-space portrait captures the essential physics of the system, whether it is a rock thrown off a cliff, or a photon orbiting a black hole.  But to get to that second problem, it is necessary to look deeper into the way that space is described by any set of coordinates, especially if those coordinates are changing from location to location.
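As a concrete illustration of treating a trajectory as a flow (a minimal sketch using standard scientific-Python tools, with made-up parameter values; this is a quick illustration, not code from IMD2), here is the rock-thrown-off-a-cliff problem written exactly in the dx^a/dt = F^a form and handed to a numerical ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g = 1.0, 9.8  # assumed mass (kg) and gravitational acceleration (m/s^2)

def flow(t, X):
    """The 6-D phase-space flow dx^a/dt = F^a(x^b) for a projectile."""
    x, y, z, px, py, pz = X
    return [px / m, py / m, pz / m,   # dx/dt = p/m
            0.0, 0.0, -m * g]         # dp/dt = F (gravity only)

# initial condition: thrown horizontally off a 20 m cliff at 5 m/s
X0 = [0.0, 0.0, 20.0, 5.0 * m, 0.0, 0.0]
sol = solve_ivp(flow, [0.0, 3.0], X0, max_step=0.01)

# first index where the rock has dropped below the base of the cliff
hit = np.argmax(sol.y[2] < 0.0)
print("landing x-position ~", sol.y[0, hit], "m")
```

The same solver call works unchanged for any flow F^a; only the right-hand side changes when the problem changes, which is the point of putting every trajectory, from thrown rocks to photons near black holes, into this standard form.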

What’s so Fictitious about Fictitious Forces?

Freshmen physics students are routinely admonished for talking about “centrifugal” forces (rather than centripetal) when describing circular motion, usually with the statement that centrifugal forces are fictitious—only appearing to be forces when the observer is in the rotating frame.  The same is said for the Coriolis force.  Yet for being such a “fictitious” force, the Coriolis effect is what drives hurricanes and the colossal devastation they cause.  Try telling a hurricane victim that they were wiped out by a fictitious force!  Looking closer at the Coriolis force is a good way of understanding how taking derivatives of vectors leads to effects often called “fictitious”, and it opens the door to some of the simpler techniques of differential geometry.

To start, consider a vector in a uniformly rotating frame.  Such a frame is called “non-inertial” because of the centripetal acceleration associated with the uniform rotation.  For an observer in the rotating frame, vectors are attached to the frame, like pinning them down to the coordinate axes, but the axes themselves are changing in time (when viewed by an external observer in a fixed frame).  If the primed frame is the external fixed frame, then a position in the rotating frame is

r′ = R + r = R + x^a e_a

where R is the position vector of the origin of the rotating frame and r is the position in the rotating frame relative to the origin.  The funny notation on the last term is called in Physics-ese a “contraction”, but it is just a simple inner product, or dot product, between the components of the position vector and the basis vectors.  A basis vector is like the old-fashioned i, j, k of vector calculus indicating unit basis vectors pointing along the x, y and z axes.  The format with one index up and one down in the product means to do a summation.  This is known as the Einstein summation convention, so it’s just

x^a e_a = x e_x + y e_y + z e_z

Taking the time derivative of the position vector gives

dr′/dt = dR/dt + d(x^a e_a)/dt

and by the chain rule this must be

dr′/dt = dR/dt + (dx^a/dt) e_a + x^a (de_a/dt)

where the last term has a time derivative of a basis vector.  This is non-zero because in the rotating frame the basis vector is changing orientation in time.  This term is non-inertial and can be shown fairly easily (see IMD2 Chapter 1) to be

which is where the centrifugal force comes from.  This shows how a so-called fictitious force arises from a derivative of a basis vector.  The fascinating point of this is that in GR, the force of gravity arises in almost the same way, making it tempting to call gravity a fictitious force, despite the fact that it can kill you if you fall out a window.  The question is, how does gravity arise from simple derivatives of basis vectors?
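For completeness, here is the standard rotating-frame result that this argument is pointing toward (a textbook sketch, not a transcription of the equations in IMD2):

\[ \frac{d\mathbf{e}_a}{dt} = \boldsymbol{\omega}\times\mathbf{e}_a \;\;\Rightarrow\;\; x^a\,\frac{d\mathbf{e}_a}{dt} = \boldsymbol{\omega}\times\mathbf{r} \]

and differentiating once more gives the familiar extra terms in the acceleration,

\[ \mathbf{a}' = \mathbf{a} + 2\,\boldsymbol{\omega}\times\mathbf{v} + \boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}) \]

Moving these terms to the force side of Newton’s law is what produces the Coriolis force −2mω×v and the centrifugal force −mω×(ω×r), both generated purely by time derivatives of the basis vectors.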

The Geodesic Equation

To teach GR to undergraduates, you cannot expect them to have taken a course in differential geometry, because most of them just don’t have the time in their schedule to take such an advanced mathematics course.  In addition, there is far more taught in differential geometry than is needed to make progress in GR.  So the simple approach is to teach what they need to understand GR with as little differential geometry as possible, expressed with clear English-to-Physics-ese translations. 

For example, consider the partial derivative of a vector expressed in index notation as

Taking the partial derivative, using the always-necessary chain rule, is

where the second term is just like the extra time-derivative term that showed up in the derivation of the Coriolis force.  The basis vector of a general coordinate system may change size and orientation as a function of position, so this derivative is not in general zero.  Because the derivative of a basis vector is so central to the ideas of GR, these derivatives are given their own symbol.  It is

where the new “Gamma” symbol is called a Christoffel symbol.  It has lots of indexes, both up and down, which looks daunting, but it can be interpreted as the beta-th derivative of the alpha-th component of the mu-th basis vector.  The partial derivative is now

For those of you who noticed that some of the indexes flipped from alpha to mu and vice versa, you’re right!  Swapping repeated indexes in these “contractions” is allowed and helps make derivations a lot easier, which is probably why Einstein invented this notation in the first place.

The last step in taking a partial derivative of a vector is to isolate a single vector component V^a as
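One compact way to write this component equation (consistent with the definitions above) is

$$
\nabla_{\beta}V^{\alpha} = \partial_{\beta}V^{\alpha} + \Gamma^{\alpha}_{\ \mu\beta}\,V^{\mu},
$$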

where a new symbol, the del-operator, has been introduced.  This del-operator acting on the component is known as the “covariant derivative” of the vector component.  Again, forget the “covariant” part and just think “gradient”.  Namely, taking the gradient of a vector in general includes changes in the vector component as well as changes in the basis vector.

Now that you know how to take the partial derivative of a vector using Christoffel symbols, you are ready to generate the central equation of General Relativity:  The geodesic equation. 

Everyone knows that a geodesic is the shortest path between two points, like a great circle route on the globe.  But it also turns out to be the straightest path, which can be derived using an idea known as “parallel transport”.  To start, consider transporting a vector along a curve in a flat metric.  The equation describing this process is

Because the Christoffel symbols are zero in a flat space (in Cartesian coordinates), the covariant derivative and the partial derivative are equal, giving

If the vector is transported parallel to itself, then there is no change in V along the curve, so that

Finally, recognizing

and substituting this in gives
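Gathered into one chain (a sketch, with s the parameter along the curve and V^α the transported components):

$$
\frac{dV^{\alpha}}{ds} + \Gamma^{\alpha}_{\ \mu\beta}\,V^{\mu}\frac{dx^{\beta}}{ds} = 0,
\qquad
V^{\alpha} = \frac{dx^{\alpha}}{ds}
\quad\Longrightarrow\quad
\frac{d^{2}x^{\alpha}}{ds^{2}} + \Gamma^{\alpha}_{\ \mu\beta}\,\frac{dx^{\mu}}{ds}\frac{dx^{\beta}}{ds} = 0 .
$$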

This is the geodesic equation! 

Fig. 1 The geodesic equation describes force-free motion through a metric space. The curvature of the trajectory is analogous to acceleration, and the generalized gradient is analogous to a force. The geodesic equation is the “F = ma” of GR.

Putting this in the standard form of a flow gives the geodesic flow equations
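In first-order form (writing u^α for the tangent components), a common way to state the flow is

$$
\frac{dx^{\alpha}}{ds} = u^{\alpha}, \qquad
\frac{du^{\alpha}}{ds} = -\,\Gamma^{\alpha}_{\ \mu\beta}\,u^{\mu}u^{\beta} .
$$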

The flow defines an ordinary differential equation that defines a curve that carries its own tangent vector onto itself.  The curve is parameterized by a parameter s that can be identified with path length.  It is the central equation of GR, because it describes how an object follows a force-free trajectory, like free fall, in any general coordinate system.  It can be applied to simple problems like the Coriolis effect, or it can be applied to seemingly difficult problems, like the trajectory of a light path past a black hole.

The Metric Connection

Arriving at the geodesic equation is a major accomplishment, and you have done it in just a few pages of this blog.  But there is still an important missing piece before we are doing General Relativity of gravitation.  We need to connect the Christoffel symbol in the geodesic equation to the warping of space-time around a gravitating object. 

The warping of space-time by matter and energy is another central piece of GR and is often the central focus of a graduate-level course on the subject.  This part of GR does have its challenges leading up to Einstein’s Field Equations that explain how matter makes space bend.  But at an undergraduate level, it is sufficient to just describe the bent coordinates as a starting point, then use the geodesic equation to solve for so many of the cool effects of black holes.

So, stating the way that matter bends space-time is as simple as writing down the length element for the Schwarzschild metric of a spherical gravitating mass as
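In spherical coordinates (t, r, θ, φ), and adopting the common (−, +, +, +) sign convention, the line element can be written as

$$
ds^{2} = -\left(1-\frac{R_S}{r}\right)c^{2}dt^{2} + \left(1-\frac{R_S}{r}\right)^{-1}dr^{2} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right),
$$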

where RS = 2GM/c2 is the Schwarzschild radius.  (The connection between the metric tensor gab and the Christoffel symbols can be found in Chapter 11 of IMD2.)  It takes only a little work to find that

This means that if we have the Schwarzschild metric, all we have to do is take first partial derivatives and we will arrive at the Christoffel symbols that go into the geodesic equation.  Solving for any type of force-free trajectory is then just a matter of solving ODEs with initial conditions (performed routinely with numerical ODE solvers in Python, Matlab, Mathematica, etc.).
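As a minimal sketch of what such a calculation can look like in Python (the equatorial-plane Christoffel symbols below are the standard Schwarzschild ones in geometric units, while the photon initial conditions are illustrative choices, not taken from IMD2):

```python
# Minimal sketch: integrating the geodesic flow for the Schwarzschild metric
# in the equatorial plane with scipy. Geometric units G = c = 1, so the
# Schwarzschild radius is r_s = 2M. The setup below is illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0  # mass of the central body in geometric units

def geodesic_rhs(lam, y):
    """dx/dlam = u and du/dlam = -Gamma u u for (t, r, phi) at theta = pi/2."""
    t, r, phi, ut, ur, uphi = y
    f = 1.0 - 2.0 * M / r                       # metric function 1 - r_s/r
    dut = -(2.0 * M / (r**2 * f)) * ut * ur     # Gamma^t_{tr} term
    dur = (-(M * f / r**2) * ut**2              # Gamma^r_{tt} term
           + (M / (r**2 * f)) * ur**2           # Gamma^r_{rr} term
           + r * f * uphi**2)                   # Gamma^r_{phi phi} term
    duphi = -(2.0 / r) * ur * uphi              # Gamma^phi_{r phi} term
    return [ut, ur, uphi, dut, dur, duphi]

# Hypothetical initial conditions: a photon far from the mass, moving inward,
# with angular motion set (approximately) by an impact parameter b.
r0, phi0, b = 50.0, 0.0, 8.0
ur0, uphi0 = -1.0, b / r0**2
f0 = 1.0 - 2.0 * M / r0
ut0 = np.sqrt(ur0**2 / f0**2 + r0**2 * uphi0**2 / f0)   # null condition ds^2 = 0

sol = solve_ivp(geodesic_rhs, [0.0, 200.0],
                [0.0, r0, phi0, ut0, ur0, uphi0],
                max_step=0.1, rtol=1e-8)
print("final r and phi along the ray:", sol.y[1, -1], sol.y[2, -1])
```

Plotting x = r cos φ and y = r sin φ along the solution then traces out the bent path of the light ray.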

The first problem we will tackle using the geodesic equation is the deflection of light by gravity.  This is the quintessential problem of GR because there cannot be any gravitational force on a photon, yet the path of the photon surely must bend in the presence of gravity.  This is possible through the geodesic motion of the photon through warped space time.  I’ll take up this problem in my next Blog.

Is the Future of Quantum Computing Bright?

There is a very real possibility that quantum computing is, and always will be, a technology of the future.  Yet if it is ever to be the technology of the now, then it needs two things: practical high-performance implementation and a killer app.  Both of these will require technological breakthroughs.  Whether this will be enough to make quantum computing real (commercializable) was the topic of a special symposium at the Conference on Lasers and Electro-Optics (CLEO) held in San Jose the week of May 6, 2019. 

Quantum computing is stuck in a sort of limbo between hype and hope, pitched with incredible (unbelievable?) claims, yet supported by tantalizing laboratory demonstrations. 

            The symposium had panelists from many top groups working in quantum information science, including Jerry Chow (IBM), Mikhail Lukin (Harvard), Jelena Vuckovic (Stanford), Birgitta Whaley (Berkeley) and Jungsang Kim (IonQ).  The moderator Ben Eggleton (U Sydney) posed the question to the panel: “Will Quantum Computing Actually Work?”.  My Blog for this week is a report, in part, of what they said, and also what was happening in the hallways and the scientific sessions at CLEO.  My personal view after listening and watching this past week is that the future of quantum computers is optics.

Einstein’s Photons

It is either ironic or obvious that the central figure behind quantum computing is Albert Einstein.  It is obvious because Einstein provided the fundamental tools of quantum computing by creating both quanta and entanglement (the two key elements of any quantum computer).  It is ironic because Einstein turned his back on quantum mechanics, and he “invented” entanglement precisely to argue that quantum mechanics was an incomplete theory. 

            The actual quantum revolution did not begin with Max Planck in 1900, as so many Modern Physics textbooks attest, but with Einstein in 1905.  This was his “miracle year” when he published 5 seminal papers, each of which solved one of the greatest outstanding problems in the physics of the time.  In one of those papers he used simple arguments based on statistics, combined with the properties of light emission, to propose — actually to prove — that light is composed of quanta of energy (later to be named “photons” by Gilbert Lewis in 1926).  Although Planck’s theory of blackbody radiation contained quanta implicitly through the discrete actions of his oscillators in the walls of the cavity, Planck vigorously rejected the idea that light itself came in quanta.  He even apologized for Einstein, as he was proposing Einstein for membership in the Berlin Academy, saying that he should be admitted despite his grave error of believing in light quanta.  When Millikan set out in 1914 to prove experimentally that Einstein was wrong about photons by performing exquisite experiments on the photoelectric effect, he actually ended up proving that Einstein was right after all, which brought Einstein the Nobel Prize in 1921.

            In the early 1930s, after a series of intense and public debates with Bohr over the meaning of quantum mechanics, Einstein had had enough of the “Copenhagen Interpretation” of quantum mechanics.  In league with Schrödinger, who deeply disliked Heisenberg’s version of quantum mechanics, they proposed two of the most iconic problems of quantum mechanics.  Schrödinger launched, as a laughable parody, his eponymously-named “Schrödinger’s Cat”, and Einstein launched what has become known as entanglement.  Each was intended to show the absurdity of quantum mechanics and drive a nail into its coffin, but each has been embraced so thoroughly by physicists that Schrödinger and Einstein are given the praise and glory for inventing these touchstones of quantum science. Schrödinger’s cat and entanglement both lie at the heart of the problems and the promise of quantum computers.

Between Hype and Hope

Quantum computing is stuck in a sort of limbo between hype and hope, pitched with incredible (unbelievable?) claims, yet supported by tantalizing laboratory demonstrations.  In the midst of the current revival in quantum computing interest (the first wave of interest in quantum computing was in the 1990s, see “Mind at Light Speed“), the US Congress has passed a house resolution to fund quantum computing efforts in the United States with a commitment of $1B.  This comes on the heels of commercial efforts in quantum computing by big players like IBM, Microsoft and Google, and also is partially in response to China’s substantial financial commitment to quantum information science.  These acts, and the infusion of cash, will supercharge efforts on quantum computing.  But this comes with real danger of creating a bubble.  If there is too much hype, and if the expensive efforts under-deliver, then the bubble will burst, putting quantum computing back by decades.  This has happened before, as in the telecom and fiber optics bubble of Y2K that burst in 2001.  The optics industry is still recovering from that crash nearly 20 years later.  The quantum computing community will need to be very careful in managing expectations, while also making real strides on some very difficult and long-range problems.

            This was part of what the discussion at the CLEO symposium centered on.  Despite the charge by Eggleton to “be real” and avoid the hype, there was plenty of hype going around on the panel and plenty of optimism, tempered by caution.  I admit that there is reason for cautious optimism.  Jerry Chow showed IBM’s very real quantum computer (with a very small number of qubits) that can be accessed through the cloud by anyone.  They even built a user interface to allow users to write their own quantum programs.  Jungsang Kim of IonQ was equally optimistic, showing off their trapped-ion quantum computer with dozens of ions acting as individual qubits.  Admittedly Chow and Kim have vested interests in their own products, but the technology is certainly impressive.  One of the sharpest critics, Mikhail Lukin of Harvard, was surprisingly also one of the most optimistic. He made clear that talk of scalable quantum computers in the near future is nonsense.  Yet he is part of a Harvard-MIT collaboration that has constructed a 51-qubit array of trapped atoms that sets a world record.  Although it cannot be used for quantum computing, it was used to simulate a complex many-body physics problem, and it found an answer that could not be calculated or predicted using conventional computers.

            The panel did come to a general consensus about quantum computing that highlights the specific challenges that the field will face as it is called upon to deliver on its hyperbole.  They each echoed an idea known as the “supremacy plot”, which is a two-axis graph of number of qubits and number of operations (also called circuit depth).  The graph has one region that is not interesting, one region that is downright laughable (at the moment), and one final area of great hope.  The region of no interest lies in the range of large numbers of qubits but low numbers of operations, or large numbers of operations on a small number of qubits.  Each of these extremes can easily be calculated on conventional computers and hence is of no practical interest.  The region that is laughable is the area of large numbers of qubits and large numbers of operations.  No one suggested that this area can be accessed in even the next 10 years.  The region that everyone is eager to reach is the region of “quantum supremacy”.  This consists of quantum computers that have enough qubits and enough operations that they cannot be simulated by classical computers.  When asked where this region is, the panel consensus was that it would require more than 50 qubits and more than hundreds or thousands of operations.  What makes this so exciting is that there are real technologies that are now approaching this region–and they are based on light.

The Quantum Supremacy Chart: Plot of the number of qubits and the circuit depth (number of operations or gates) in a quantum computer. The red region (“Zzzzzzz”) is where classical computers can do as well. The purple region (“Ha Ha Ha”) is a dream. The middle region (“Wow”) is the region of hope, which may soon be reached by trapped atoms and optics.

Chris Monroe’s Perfect Qubits

The second plenary session at CLEO featured the recent Nobel prize winners Art Ashkin, Donna Strickland and Gerard Mourou, who won the 2018 Nobel prize in physics for laser applications.  (Donna Strickland is only the third woman to win the Nobel prize in physics.)  The warm-up band for these headliners was Chris Monroe, founder of the start-up company IonQ out of the University of Maryland.  Monroe outlined the general layout of their quantum computer, which is based on trapped atoms that he called “perfect qubits”.  Each trapped atom is literally an atomic clock with the kind of exact precision that atomic clocks come with.  The quantum properties of these atoms are as perfect as is needed for any quantum computation, and the limits on the performance of the current IonQ system are entirely caused by the classical controls that trap and manipulate the atoms.  This is where the efforts of their rapidly growing R&D team are focused.

            If trapped atoms are the perfect qubit, then the perfect quantum communication channel is the photon.  The photon in vacuum is the quintessential messenger, propagating forever and interacting with nothing.  This is why experimental cosmologists can see the photons originating from the Big Bang nearly 14 billion years ago (actually from about 380,000 years after the Big Bang, when the Universe became transparent).  In a quantum computer based on trapped atoms as the gates, photons become the perfect wires.

            On the quantum supremacy chart, Monroe plotted the two main quantum computing technologies: solid state (based mainly on superconductors but also some semiconductor technology) and trapped atoms.  The challenges to solid state quantum computers come with the scale-up to the range of 50 qubits or more that will be needed to cross the frontier into quantum supremacy.  The inhomogeneous nature of solid state fabrication, as perfected as it is for the transistor, is a central problem for a solid state solution to quantum computing.  Furthermore, by scaling up the number of solid state qubits, it is extremely difficult to simultaneously increase the circuit depth.  In fact, circuit depth is likely to decrease (initially) as the number of qubits rises because of the two-dimensional interconnect problem that is well known to circuit designers.  Trapped atoms, on the other hand, have the advantages of the perfection of atomic clocks that can be globally interconnected through perfect photon channels, and scaling up the number of qubits can go together with increased circuit depth–at least in the view of Monroe, who admittedly has a vested interest.  But he was speaking before an audience of several thousand highly-trained and highly-critical optics specialists, and no scientist in front of such an audience will make a claim that cannot be supported (although the reality is always in the caveats).

The Future of Quantum Computing is Optics

The state of the art in the photonic control of light now rivals the sophistication of the electronic control of electrons in circuits.  Each is driven by enormous real-world applications: electronics by the consumer electronics and computer market, and photonics by the telecom industry.  Having a technology attached to a major world-wide market is a guarantee that progress is made relatively quickly with the advantages of economy of scale.  The commercial driver is profits, and the driver for funding agencies (who support quantum computing) is their mandate to foster competitive national economies that create jobs and improve standards of living.

            The yearly CLEO conference is one of the top conferences in laser science in the world, drawing in thousands of laser scientists who are working on photonic control.  Integrated optics is one of the current hot topics.  It brings many of the resources of the electronics industry to bear on photonics.  Solid state optics is mostly concerned with quantum properties of matter and its interaction with photons, and this year’s CLEO conference hosted many focused sessions on quantum sensors, quantum control, quantum information and quantum communication.  The level of external control of quantum systems is increasing at a spectacular rate.  Sitting in the audience at CLEO you get the sense that you are looking at the embryonic stages of vast new technologies that will be enlisted in the near future for quantum computing.  The challenge is, there are so many variants that it is hard to know which of these nascent technologies will win and change the world.  But the key to technological progress is diversity (as it is for society), because it is the interplay and cross-fertilization among the diverse technologies that drives each forward, and even technologies that recede away still contribute to the advances of the winning technology. 

            The expert panel at CLEO on the future of quantum computing punctuated their moments of hype with moments of realism as they called for new technologies to solve some of the current barriers to quantum computers.  Walking out of the panel discussion that night, and walking into one of the CLEO technical sessions the next day, you could almost connect the dots.  The enabling technologies being requested by the panel are literally being built by the audience.

            In the end, the panel had a surprisingly prosaic argument in favor of the current push to build a working quantum computer.  It is an echo of the movie Field of Dreams, with the famous quote “If you build it they will come”.  That was the plea made by Lukin, who argued that by putting quantum computers into the hands of users, then the killer app that will drive the future economics of quantum computers likely will emerge.  You don’t really know what to do with a quantum computer until you have one.

            Given the “perfect qubits” of trapped atoms, and the “perfect photons” of the communication channels, combined with the dizzying assortment of quantum control technologies being invented and highlighted at CLEO, it is easy to believe that the first large-scale quantum computers will be based on light.

Chandrasekhar’s Limit

Arthur Eddington was the complete package—an observationalist with the mathematical and theoretical skills to understand Einstein’s general theory, and the ability to construct the theory of the internal structure of stars.  He was Zeus in Olympus among astrophysicists.  He always had the last word, and he stood with Einstein firmly opposed to the Schwarzschild singularity.  In 1924 he published a theoretical paper in which he derived a new coordinate frame (now known as Eddington-Finkelstein coordinates) in which the singularity at the Schwarzschild radius is removed.  At the time, he took this to mean that the singularity did not exist and that gravitational cut off was not possible [1].  It would seem that the possibility of dark stars (black holes) had been put to rest.  Both Eddington and Einstein said so!  But just as they were writing the obituary of black holes, a strange new form of matter was emerging from astronomical observations that would challenge the views of these giants.

Something wonderful, but also a little scary, happened when Chandrasekhar included the relativistic effects in his calculation.

White Dwarf

Binary star systems have always held a certain fascination for astronomers.  If your field of study is the (mostly) immutable stars, then the stars that do move provide some excitement.  The attraction of binaries is the same thing that makes them important astrophysically—they are dynamic.  While many double stars are observed in the night sky (a few had been noted by Galileo), some of these are just coincidental alignments of near and far stars.  However, William Herschel began cataloging binary stars in 1779 and became convinced in 1802 that at least some of them must be gravitationally bound to each other.  He carefully measured the positions of binary stars over many years and confirmed that these stars showed relative changes in position, proving that they were gravitationally bound binary star systems [2].  The first orbit of a binary star was computed in 1827 by Félix Savary for the orbit of Xi Ursae Majoris.  Finding the orbit of a binary star system provides a treasure trove of useful information about the pair of stars.  Not only can the masses of the stars be determined, but their radii and densities also can be estimated.  Furthermore, by combining this information with the distance to the binaries, it was possible to develop a relationship between mass and luminosity for all stars, even single stars.  Therefore, binaries became a form of measuring stick for crucial stellar properties.

Comparison of Earth to a white dwarf star with a mass equal to the Sun. They have comparable radii but radically different densities.

One of the binary star systems that Herschel discovered was the pair known as 40 Eridani B/C, which he observed on January 31 in 1783.  Of this pair, 40 Eridani B was very dim compared to its companion.  More than a century later, in 1910 when spectrographs were first being used routinely on large telescopes, the spectrum of 40 Eridani B was found to be of an unusual white spectral class.  In the same year, the low luminosity companion of Sirius, known as Sirius B, which shared the same unusual white spectral class, was evaluated in terms of its size and mass and was found to be exceptionally small and dense [3].  In fact, it was too small and too dense to be believed at first, because the densities were beyond any known or even conceivable matter.  The mass of Sirius B is around the mass of the Sun, but its radius is comparable to the radius of the Earth, making the white star about ten thousand times denser than the core of the Sun.  Eddington at first felt the same way about white dwarfs that he felt about black holes, but he was eventually swayed by the astrophysical evidence.  By 1922 many of these small white stars had been discovered, called white dwarfs, and their incredibly large densities had been firmly established.  In his famous book on stellar structure [4], he noted the strange paradox:  As a star cools, its pressure must decrease, as all gases must do as they cool, and the star would shrink, yet the pressure required to balance the force of gravity to stabilize the star against continued shrinkage must increase as the star gets smaller.  How can pressure decrease and yet increase at the same time?  In 1926, on the eve of the birth of quantum mechanics, Eddington could conceive of no mechanism that could resolve this paradox.  So he noted it as an open problem in his book and sent it to press.

Subrahmanyan Chandrasekhar

Three years after the publication of Eddington’s book, an eager and excited nineteen-year-old graduate of the University of Madras in India boarded a steamer bound for England.  Subrahmanyan Chandrasekhar (1910—1995) had been accepted for graduate studies at Cambridge University.  The voyage in 1930 took eighteen days via the Suez Canal, and he needed something to do to pass the time.  He had with him Eddington’s book, which he carried like a bible, and he also had a copy of a breakthrough article written by R. H. Fowler that applied the new theory of quantum mechanics to the problem of dense matter composed of ions and electrons [5].  Fowler showed how the Pauli exclusion principle for electrons, which obey Fermi-Dirac statistics, creates an energetic sea of electrons in their lowest energy states, a condition called electron degeneracy.  This degeneracy was a fundamental quantum property of matter, and carried with it an intrinsic pressure unrelated to thermal properties.  Chandrasekhar realized that this was a pressure mechanism that could balance the force of gravity in a cooling star and might resolve Eddington’s paradox of the white dwarfs.  As the steamer moved ever closer to England, Chandrasekhar derived the new balance between gravitational pressure and electron degeneracy pressure and found the radius of the white dwarf as a function of its mass.  The critical step in Chandrasekhar’s theory, conceived alone on the steamer at sea with access to just a handful of books and papers, was the inclusion of special relativity with the quantum physics.  This was necessary, because the densities were so high and the electrons were so energetic, that they attained speeds approaching the speed of light. 

Something wonderful, but also a little scary, happened when Chandrasekhar included the relativistic effects in his calculation.  He discovered that electron degeneracy pressure could balance the force of gravity if the mass of the white dwarf were smaller than about 1.4 times the mass of the Sun.  But if the dwarf was more massive than this, then even the electron degeneracy pressure would be insufficient to fight gravity, and the star would continue to collapse.  To what?  Schwarzschild’s singularity was one possibility.  Chandrasekhar wrote up two papers on his calculations, and when he arrived in England, he showed them to Fowler, who was to be his advisor at Cambridge.  Fowler was genuinely enthusiastic about  the first paper, on the derivation of the relativistic electron degeneracy pressure, and it was submitted for publication.  The second paper, on the maximum sustainable mass for a white dwarf, which reared the ugly head of Schwarzschild’s singularity, made Fowler uncomfortable, and he sat on the paper, unwilling to give his approval for publication in the leading British astrophysical journal.  Chandrasekhar grew annoyed, and in frustration sent it, without Fowler’s approval, to an American journal, where “The Maximum Mass of Ideal White Dwarfs” was published in 1931 [6].  This paper, written in eighteen days on a steamer at sea, established what became known as the Chandrasekhar limit, for which Chandrasekhar would win the 1983 Nobel Prize in Physics, but not before he was forced to fight major battles for its acceptance.

The Chandrasekhar limit expressed in terms of the Planck Mass and the mass of a proton. The limit is approximately 1.4 times the mass of the Sun. White dwarfs with masses larger than the limit cannot balance gravitational collapse by relativistic electron degeneracy.

Chandrasekhar versus Eddington

Initially there was almost no response to Chandrasekhar’s paper.  Frankly, few astronomers had the theoretical training needed to understand the physics.  Eddington was one exception, which was why he held such stature in the community.  The big question therefore was:  Was Chandrasekhar’s theory correct?  During the three years it took to obtain his PhD, Chandrasekhar met frequently with Eddington, who was also at Cambridge, and with colleagues outside the university, and they all encouraged Chandrasekhar to tackle the more difficult problem of combining internal stellar structure with his theory.  This could not be done with pen and paper, but required numerical calculation.  Eddington was in possession of an early mechanical calculator, and he loaned it to Chandrasekhar to do the calculations.  After many months of tedious work, Chandrasekhar was finally ready to confirm his theory at the January 1935 meeting of the Royal Astronomical Society. 

The young Chandrasekhar stood up and gave his results in an impeccable presentation before an auditorium crowded with his peers.  But as he left the stage, he was shocked when Eddington himself rose to give the next presentation.  Eddington proceeded to criticize and reject Chandrasekhar’s careful work, proposing instead a garbled mash-up of quantum theory and relativity that would eliminate Chandrasekhar’s limit and hence prevent collapse to the Schwarzschild singularity.  Chandrasekhar sat mortified in the audience.  After the session, many of his friends and colleagues came up to him to give their condolences—if Eddington, the leader of the field and one of the few astronomers who understood Einstein’s theories, said that Chandrasekhar was wrong, then that was that.  Badly wounded, Chandrasekhar was faced with a dire choice.  Should he fight against the reputation of Eddington, fight for the truth of his theory?  But he was at the beginning of his career and could ill afford to pit himself against the giant.  So he turned his back on the problem of stellar death, and applied his talents to the problem of stellar evolution. 

Chandrasekhar went on to have an illustrious career, spent mostly at the University of Chicago (far from Cambridge), and he did eventually return to his limit as it became clear that Eddington was wrong.  In fact, many at the time already suspected Eddington was wrong and were seeking the answer to the next question: If white dwarfs cannot support themselves under gravity and must collapse, what do they collapse to?  In Pasadena at the California Institute of Technology, an astrophysicist named Fritz Zwicky thought he knew the answer.

Fritz Zwicky’s Neutron Star

Fritz Zwicky (1898—1974) was an irritating and badly flawed genius.  What made him so irritating was that he knew he was a genius and never let anyone forget it.  What made him badly flawed was that he never cared much for the weight of evidence.  It was the ideas that mattered—let lesser minds do the tedious work of filling in the cracks.  And what made him a genius was that he was often right!  Zwicky pushed the envelope—he loved extremes.  The more extreme a theory was, the more likely he was to favor it—like his proposal for dark matter.  Most of his colleagues considered him to be a buffoon and borderline crackpot.  He was tolerated by no one—no one except his steadfast collaborator of many years, Walter Baade (until they nearly came to blows on the eve of World War II).  Baade was a German astronomer trained at Göttingen and recently arrived in Pasadena.  He was exceptionally well informed on the latest advances in a broad range of fields.  Where Zwicky made intuitive leaps, often unsupported by evidence, Baade would provide the context.  Baade was a walking Wikipedia for Zwicky, and together they changed the face of astrophysics.

Zwicky and Baade submitted an abstract to the American Physical Society Meeting in 1933, which Kip Thorne has called “…one of the most prescient documents in the history of physics and astronomy” [7].  In the abstract, Zwicky and Baade introduced, for the first time, the existence of supernovae as a separate class of nova and estimated the total energy output of these cataclysmic events, including the possibility that they are the source of some cosmic rays.  They introduced the idea of a neutron star, a star composed purely of neutrons, only a year after Chadwick discovered the neutron’s existence, and they strongly suggested that a supernova is produced by the transformation of a star into a neutron star.  A neutron star would have a mass similar to that of the Sun, but would have a radius of only tens of kilometers.  If the mass density of white dwarfs was hard to swallow, the density of a neutron star was a billion times greater!  It would take nearly thirty years before each of the assertions made in this short abstract was proven true, but Zwicky certainly had a clear view, tempered by Baade, of where the field of astrophysics was headed.  But no one listened to Zwicky.  He was too aggressive and backed up his wild assertions with too little substance.  Therefore, neutron stars simmered on the back burner until more substantial physicists could address their properties more seriously.

Two substantial physicists who had the talent and skills that Zwicky lacked were Lev Landau in Moscow and Robert Oppenheimer at Berkeley.  Landau derived the properties of a neutron star in 1937 and published the results to great fanfare.  He was not aware of Zwicky’s work, and he called them neutron cores, because he hypothesized that they might reside at the core of ordinary stars like the Sun.  Oppenheimer, working with a Canadian graduate student George Volkoff at Berkeley, showed that Landau’s idea about stellar cores was not correct, but that the general idea of a neutron core, or rather neutron star, was correct [8].  Once Oppenheimer was interested in neutron stars, he kept going and asked the same question about neutron stars that Chandrasekhar had asked about white dwarfs:  Is there a maximum size for neutron stars beyond which they must collapse?  The answer to this question used the same quantum mechanical degeneracy pressure (now provided by neutrons rather than electrons) and gravitational compaction as the problem of white dwarfs, but it required detailed understanding of nuclear forces, which in 1938 were only beginning to be understood.  However, Oppenheimer knew enough to make a good estimate of the nuclear binding contribution to the total internal pressure and came to a similar conclusion for neutron stars as Chandrasekhar had made for white dwarfs.  There was indeed a maximum mass of a neutron star, a Chandrasekhar-type limit of about three solar masses.  Beyond this mass, even the degeneracy pressure of neutrons could not support gravitational pressure, and the neutron star must collapse.  In Oppenheimer’s mind it was clear what it must collapse to—a black hole (known as gravitational cut-off at that time). This was to lead Oppenheimer and John Wheeler to their famous confrontation over the existence of black holes, which Oppenheimer won, but Wheeler took possession of the battle field [9].

Derivation of the Relativistic Chandrasekhar Limit

White dwarfs are created from the balance between gravitational compression and the degeneracy pressure of electrons caused by the Pauli exclusion principle. When a star collapses gravitationally, the matter becomes so dense that the electrons begin to fill up quantum states until all the lowest-energy states are filled and no more electrons can be added. This produces a pressure that stabilizes the gravitational collapse, and the result is a white dwarf with a mass density a million times larger than that of the Sun.

If the electrons remained non-relativistic, then there would be no upper limit for the size of a star that would form a white dwarf. However, because electrons become relativistic at high enough compaction, if the initial star is too massive, the electron degeneracy pressure becomes limited relativistically and cannot keep the matter from compacting more, and even the white dwarf will collapse (to a neutron star or a black hole). The largest mass that can be supported by a white dwarf is known as the Chandrasekhar limit.

A simplified derivation of the Chandrasekhar limit begins by defining the total energy as the kinetic energy of the degenerate Fermi electron gas plus the gravitational potential energy

The kinetic energy of the degenerate Fermi gas has the relativistic expression


where the Fermi k-vector can be expressed as a function of the radius of the white dwarf and the total number of electrons in the star, as

If the star is composed of pure hydrogen, then the mass of the star is expressed in terms of the total number of electrons and the mass of the proton
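Gathering these ingredients in one place (a rough sketch for a uniform-density ball of N electrons and N protons, keeping the leading relativistic term in the electron energy plus its first correction; the order-one prefactors depend on these approximations):

$$
E(R) = E_{\mathrm{kin}} + E_{\mathrm{grav}}, \qquad
E_{\mathrm{grav}} \simeq -\frac{3}{5}\frac{GM^{2}}{R}, \qquad
E_{\mathrm{kin}} \simeq \frac{3}{4}N\left(\hbar c\,k_F + \frac{m_e^{2}c^{3}}{\hbar k_F}\right),
$$

with

$$
k_F = \left(\frac{9\pi N}{4}\right)^{1/3}\frac{1}{R}, \qquad M = N\,m_p .
$$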

The total energy of the white dwarf is minimized by taking its derivative with respect to the radius of the star
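In the sketch above (abbreviating κ ≡ (9π/4)^{1/3}), that derivative is

$$
\frac{dE}{dR} = -\frac{1}{R^{2}}\left[\frac{3}{4}\kappa\,\hbar c\,N^{4/3} - \frac{3}{5}G\,m_p^{2}\,N^{2}\right] + \frac{3}{4\kappa}\,\frac{m_e^{2}c^{3}}{\hbar}\,N^{2/3} .
$$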

When the derivative is set to zero, the term in brackets becomes
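In this sketch that condition reads

$$
\left[\frac{3}{4}\kappa\,\hbar c\,N^{4/3} - \frac{3}{5}G\,m_p^{2}\,N^{2}\right] = \frac{3}{4\kappa}\,\frac{m_e^{2}c^{3}}{\hbar}\,N^{2/3}\,R^{2} .
$$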

This is solved for the radius for which the electron degeneracy pressure stabilizes the gravitational pressure
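In the same sketch (with ħ/m_e c the reduced Compton wavelength of the electron), the result is

$$
R = \kappa\,\frac{\hbar}{m_e c}\,N^{1/3}\,\sqrt{1 - \frac{4}{5\kappa}\,\frac{G m_p^{2}}{\hbar c}\,N^{2/3}} .
$$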

This is the relativistic radius-mass expression for the size of the stabilized white dwarf as a function of the mass (or total number of electrons). One of the astonishing results of this calculation is the merging of astronomically large numbers (the mass of stars) with both relativity and quantum physics. The radius of the white dwarf is actually expressed as a multiple of the Compton wavelength of the electron!

The expression in the square root becomes smaller as the mass of the star increases, and there is an upper bound to the mass of the star beyond which the argument in the square root goes negative. This upper bound is the Chandrasekhar limit, defined when the argument equals zero
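In the sketch above this condition is

$$
\frac{4}{5\kappa}\,\frac{G m_p^{2}}{\hbar c}\,N_{\mathrm{Ch}}^{2/3} = 1
\qquad\Longrightarrow\qquad
N_{\mathrm{Ch}} = \left(\frac{5\kappa}{4}\right)^{3/2}\left(\frac{\hbar c}{G\,m_p^{2}}\right)^{3/2} .
$$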

This gives the final expression for the Chandrasekhar limit (expressed in terms of the Planck mass)
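With m_Pl = √(ħc/G) the Planck mass, the sketch gives

$$
M_{\mathrm{Ch}} = N_{\mathrm{Ch}}\,m_p = \left(\frac{5\kappa}{4}\right)^{3/2}\frac{(\hbar c/G)^{3/2}}{m_p^{2}} = \left(\frac{5\kappa}{4}\right)^{3/2}\frac{m_{\mathrm{Pl}}^{3}}{m_p^{2}} .
$$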

This expression is only approximate, but it does contain the essential physics and magnitude. This limit is on the order of a solar mass. A more realistic numerical calculation yields a limiting mass of about 1.4 times the mass of the Sun. For white dwarfs larger than this value, the electron degeneracy is insufficient to support the gravitational pressure, and the star will collapse to a neutron star or a black hole.


[1] The fact that Eddington coordinates removed the singularity at the Schwarzschild radius was first pointed out by Lemaitre in 1933.  A local observer passing through the Schwarzschild radius would experience no divergence in local properties, even though a distant observer would see that in-falling observer becoming length contracted and time dilated. This point of view of an in-falling observer was explained in 1958 by Finkelstein, who also pointed out that the Schwarzschild radius is an event horizon.

[2] William Herschel (1803), Account of the Changes That Have Happened, during the Last Twenty-Five Years, in the Relative Situation of Double-Stars; With an Investigation of the Cause to Which They Are Owing, Philosophical Transactions of the Royal Society of London 93, pp. 339–382 (Motion of binary stars)

[3] Boss, L. (1910). Preliminary General Catalogue of 6188 stars for the epoch 1900. Carnegie Institution of Washington. (Mass and radius of Sirius B)

[4] Eddington, A. S. (1927). Stars and Atoms. Clarendon Press. LCCN 27015694.

[5] Fowler, R. H. (1926). “On dense matter”. Monthly Notices of the Royal Astronomical Society 87: 114. Bibcode:1926MNRAS..87..114F. (Quantum mechanics of degenerate matter).

[6] Chandrasekhar, S. (1931). “The Maximum Mass of Ideal White Dwarfs”. The Astrophysical Journal 74: 81. Bibcode:1931ApJ....74...81C. doi:10.1086/143324. (Mass limit of white dwarfs).

[7] Kip Thorne (1994) Black Holes & Time Warps: Einstein’s Outrageous Legacy (Norton). pg. 174

[8] Oppenheimer was aware of Zwicky’s proposal because he had a joint appointment between Berkeley and Cal Tech.

[9] See Chapter 7, “The Lens of Gravity” in Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018).



A Wealth of Motions: Six Generations in the History of the Physics of Motion


Since Galileo launched his trajectory, there have been six broad generations that have traced the continuing development of concepts of motion. These are: 1) Universal Motion; 2) Phase Space; 3) Space-Time; 4) Geometric Dynamics; 5) Quantum Coherence; and 6) Complex Systems. These six generations were not all sequential, many evolving in parallel over the centuries, borrowing from each other, and there surely are other ways one could divide up the story of dynamics. But these six generations capture the grand concepts and the crucial paradigm shifts that are Galileo’s legacy, taking us from Galileo’s trajectory to the broad expanses across which physicists practice physics today.

Universal Motion emerged as a new concept when Isaac Newton proposed his theory of universal gravitation by which the force that causes apples to drop from trees is the same force that keeps the Moon in motion around the Earth, and the Earth in motion around the Sun. This was a bold step because even in Newton’s day, some still believed that celestial objects obeyed different laws. For instance, it was only through the work of Edmund Halley, a contemporary and friend of Newton’s, that comets were understood to travel in elliptical orbits obeying the same laws as the planets. Universal Motion included ideas of momentum from the start, while concepts of energy and potential, which fill out this first generation, took nearly a century to develop in the hands of many others, like Leibniz and Euler and the Bernoullis. This first generation was concluded by the masterwork of the Italian-French mathematician Joseph-Louis Lagrange, who also planted the seed of the second generation.

The second generation, culminating in the powerful and useful Phase Space, also took more than a century to mature. It began when Lagrange divorced dynamics from geometry, establishing generalized coordinates as surrogates to directions in space. Ironically, by discarding geometry, Lagrange laid the foundation for generalized spaces, because generalized coordinates could be anything, coming in any units and in any number, each coordinate having its companion velocity, doubling the dimension for every freedom. The Austrian physicist Ludwig Boltzmann expanded the number of dimensions to the scale of Avogadro’s number of particles, and he discovered the conservation of phase space volume, an invariance of phase space that stays the same even as 10^23 atoms (Avogadro’s number) in ideal gases follow their random trajectories. The idea of phase space set the stage for statistical mechanics and for a new probabilistic viewpoint of mechanics that would extend into chaotic motions.

The French mathematician Henri Poincaré got a glimpse of chaotic motion in 1890 as he rushed to correct an embarrassing mistake in his manuscript that had just won a major international prize. The mistake was mathematical, but the consequences were profoundly physical, beginning the long road to a theory of chaos that simmered, without boiling, for nearly seventy years until computers became common lab equipment. Edward Lorenz of MIT, working on models of the atmosphere in the early 1960s, used one of the earliest scientific computers to expose the beauty and the complexity of chaotic systems. He discovered that the computer simulations were exponentially sensitive to the initial conditions, and the joke became that a butterfly flapping its wings in China could cause hurricanes in the Atlantic. In his computer simulations, Lorenz discovered what today is known as the Lorenz butterfly, an example of something called a “strange attractor”. But the term chaos is a bit of a misnomer, because chaos theory is primarily about finding what things are shared in common, or are invariant, among seemingly random-acting systems.

The third generation in concepts of motion, Space-Time, is indelibly linked with Einstein’s special theory of relativity, but Einstein was not its originator. Space-time was the brain child of the gifted but short-lived Prussian mathematician Hermann Minkowski, who had been attracted from Königsberg to the mathematical powerhouse at the University in Göttingen, Germany around the turn of the 20th Century by David Hilbert. Minkowski was an expert in invariant theory, and when Einstein published his special theory of relativity in 1905 to explain the Lorentz transformations, Minkowski recognized a subtle structure buried inside the theory. This structure was related to Riemann’s metric theory of geometry, but it had the radical feature that time appeared as one of the geometric dimensions. This was a drastic departure from all former theories of motion that had always separated space and time: trajectories had been points in space that traced out a continuous curve as a function of time. But in Minkowski’s mind, trajectories were invariant curves, and although their mathematical representation changed with changing point of view (relative motion of observers), the trajectories existed in a separate unchanging reality, not mere functions of time, but eternal. He called these trajectories world lines. They were static structures in a geometry that is today called Minkowski space. Einstein at first was highly antagonistic to this new view, but he relented, and later he so completely adopted space-time in his general theory that today Minkowski is almost forgotten, his echo heard softly in expressions of the Minkowski metric that is the background to Einstein’s warped geometry that bends light and captures errant space craft.

The fourth generation in the development of concepts of motion, Geometric Dynamics, began when an ambitious French physicist with delusions of grandeur, the historically ambiguous Pierre Louis Maupertuis, returned from a scientific boondoggle to Lapland where he measured the flattening of the Earth in defense of Newtonian physics over Cartesian. Skyrocketed to fame by the success of the expedition, he began his second act by proposing the Principle of Least Action, a principle by which all motion seeks to be most efficient by taking a geometric path that minimizes a physical quantity called action. In this principle, Maupertuis saw both a universal law that could explain all of physical motion, as well as a path for himself to gain eternal fame in the company of Galileo and Newton. Unfortunately, his high hopes were dashed through personal conceit and nasty intrigue, and most physicists today don’t even recognize his name. But the idea of least action struck a deep chord that reverberates throughout physics. It is the first and fundamental example of a minimum principle, of which there are many. For instance, minimum potential energy identifies points of system equilibrium, and paths of minimum distances are geodesic paths. In dynamics, minimization of the difference between potential and kinetic energies identifies the dynamical paths of trajectories, and minimization of distance through space-time warped by mass and energy density identifies the paths of falling objects.

Maupertuis’ fundamentally important idea was picked up by Euler and Lagrange, who expanded it, and it was later expressed in the language of differential geometry. This was the language of Bernhard Riemann, a gifted and shy German mathematician whose mathematical language was adopted by physicists to describe motion as a geodesic, the shortest path like a great-circle route on the Earth, in an abstract dynamical space defined by kinetic energy and potentials. In this view, it is the geometry of the abstract dynamical space that imposes Galileo’s simple parabolic form on freely falling objects. Einstein took this viewpoint farther than any before him, showing how mass and energy warped space and how free objects near gravitating bodies move along geodesic curves defined by the shape of space. This brought trajectories to a new level of abstraction, as space itself became the cause of motion. Prior to general relativity, motion occurred in space. Afterwards, motion was caused by space. In this sense, gravity is not a force, but is like a path down which everything falls.

The fifth generation of concepts of motion, Quantum Coherence, increased abstraction yet again in the comprehension of trajectories, ushering in difficult concepts like wave-particle duality and quantum interference. Quantum interference underlies many of the counter-intuitive properties of quantum systems, including the possibility for quantum systems to be in two or more states at the same time, and for quantum computers to crack unbreakable codes. But this new perspective came with a cost, introducing fundamental uncertainties that are locked in a battle of trade-offs as one measurement becomes more certain and others become more uncertain.

Einstein distrusted Heisenberg’s uncertainty principle, not that he disagreed with its veracity, but he felt it was more a statement of ignorance than a statement of fundamental unknowability. In support of Einstein, Schrödinger devised a thought experiment that was meant to be a reduction to absurdity in which a cat is placed in a box with a vial of poison that would be broken if a quantum particle decays. The cruel fate of Schrödinger’s cat, who might or might not be poisoned, hinges on whether or not someone opens the lid and looks inside. Once the box is opened, there is one world in which the cat is alive and another world in which the cat is dead. These two worlds spring into existence when the box is opened—a bizarre state of affairs from the point of view of a pragmatist. This is where Richard Feynman jumped into the fray and redefined the idea of a trajectory in a radically new way by showing that a quantum trajectory is not a single path, like Galileo’s parabola, but the combined effect of the quantum particle taking all possible paths simultaneously. Feynman established this new view of quantum trajectories in his thesis dissertation under the direction of John Archibald Wheeler at Princeton. By adapting Maupertuis’ Principle of Least Action to quantum mechanics, Feynman showed how every particle takes every possible path—simultaneously—every path interfering in such a way that only the path with the most constructive interference is observed. In the quantum view, the deterministic trajectory of the cannon ball evaporates into a cloud of probable trajectories.

In our current complex times, the sixth generation in the evolution of concepts of motion explores Complex Systems. Lorenz’s Butterfly has more to it than butterflies, because Life is the greatest complex system of our experience and our existence. We are the end result of a cascade of self-organizing events that began half a billion years after Earth coalesced out of the nebula, leading to the emergence of consciousness only about 100,000 years ago—a fact that lets us sit here now and wonder about it all. That we are conscious is perhaps no accident. Once the first amino acids coagulated in a muddy pool, we have been marching steadily uphill, up a high mountain peak in a fitness landscape. Every advantage a species gained over its environment and over its competitors exerted a type of pressure on all the other species in the ecosystem that caused them to gain their own advantage.

The modern field of evolutionary dynamics spans a wide range of scales across a wide range of abstractions. It treats genes and mutations on DNA in much the same way it treats the slow drift of languages and the emergence of new dialects. It treats games and social interactions the same way it does the evolution of cancer. Evolutionary dynamics is the direct descendant of chaos theory that turned butterflies into hurricanes, but the topics it treats are special to us as evolved species, and as potential victims of disease. The theory has evolved its own visualizations, such as the branches in the tree of life and the high mountain tops in fitness landscapes separated by deep valleys. Evolutionary dynamics draws, in a fundamental way, on dynamic processes in high dimensions, without which it would be impossible to explain how something as complex as human beings could have arisen from random mutations.

These six generations in the development of dynamics are not likely to stop, and future generations may arise as physicists pursue the eternal quest for the truth behind the structure of reality.