The Future of Quantum Computing is Bright

There is a very real possibility that quantum computing is, and always will be, a technology of the future.  Yet if it is ever to be the technology of the now, then it needs two things: a practical high-performance implementation and a killer app.  Both of these will require technological breakthroughs.  Whether this will be enough to make quantum computing real (commercializable) was the topic of a special symposium at the Conference on Lasers and Electro-Optics (CLEO) held in San Jose the week of May 6, 2019.

            The symposium had panelists from many top groups working in quantum information science, including Jerry Chow (IBM), Mikhail Lukin (Harvard), Jelena Vuckovic (Stanford), Birgitta Whaley (Berkeley) and Jungsang Kim (IonQ).  The moderator Ben Eggleton (U Sydney) posed the question to the panel: “Will Quantum Computing Actually Work?”  My blog this week is a report, in part, of what they said, and also of what was happening in the hallways and the scientific sessions at CLEO.  My personal view, after listening and watching this past week, is that the future of quantum computers is optics.

Einstein’s Photons

It is either ironic or obvious that the central figure behind quantum computing is Albert Einstein.  It is obvious because Einstein provided the fundamental tools of quantum computing by creating both quanta and entanglement (the two key elements of any quantum computer).  It is ironic because Einstein turned his back on quantum mechanics, and he “invented” entanglement in order to argue that quantum mechanics was an incomplete science.

            The actual quantum revolution did not begin with Max Planck in 1900, as so many Modern Physics textbooks attest, but with Einstein in 1905.  This was his “miracle year,” when he published five seminal papers, each of which solved one of the greatest outstanding problems in the physics of the time.  In one of those papers he used simple arguments based on statistics, combined with the properties of light emission, to propose — actually to prove — that light is composed of quanta of energy (later to be named “photons” by Gilbert Lewis in 1926).  Although Planck’s theory of blackbody radiation contained quanta implicitly through the discrete actions of his oscillators in the walls of the cavity, Planck vigorously rejected the idea that light itself came in quanta.  He even apologized for Einstein, as he was proposing Einstein for membership in the Berlin Academy, saying that Einstein should be admitted despite his grave error of believing in light quanta.  When Millikan set out in 1914 to prove experimentally that Einstein was wrong about photons by performing exquisite experiments on the photoelectric effect, he ended up proving that Einstein was right after all, which brought Einstein the Nobel Prize in 1921.

            In the early 1930’s, after a series of intense and public debates with Bohr over the meaning of quantum mechanics, Einstein had had enough of the “Copenhagen Interpretation” of quantum mechanics.  In league with Schrödinger, who deeply disliked Heisenberg’s version of quantum mechanics, the two of them proposed two of the most iconic problems in quantum mechanics.  Schrödinger launched, as a laughable parody, his eponymous “Schrödinger’s Cat,” and Einstein launched what has become known as entanglement.  Each was intended to show the absurdity of quantum mechanics and drive a nail into its coffin, but each has been embraced so thoroughly by physicists that Schrödinger and Einstein are given the praise and glory for inventing these touchstones of quantum science.  Schrödinger’s cat and entanglement both lie at the heart of the problems and the promise of quantum computers.

Between Hype and Hope

Quantum computing is stuck in a sort of limbo between hype and hope, pitched with incredible (unbelievable?) claims, yet supported by tantalizing laboratory demonstrations.  In the midst of the current revival of interest in quantum computing (the first wave of interest was in the 1990’s, see “Mind at Light Speed“), the US Congress has passed a House resolution to fund quantum computing efforts in the United States with a commitment of $1B.  This comes on the heels of commercial efforts in quantum computing by big players like IBM, Microsoft and Google, and it is also partially in response to China’s substantial financial commitment to quantum information science.  These acts, and the infusion of cash, will supercharge efforts on quantum computing.  But this comes with a real danger of creating a bubble.  If there is too much hype, and if the expensive efforts under-deliver, then the bubble will burst, putting quantum computing back by decades.  This has happened before, as in the telecom and fiber-optics bubble of Y2K that burst in 2001.  The optics industry is still recovering from that crash nearly 20 years later.  The quantum computing community will need to be very careful in managing expectations, while also making real strides on some very difficult and long-range problems.

            This was part of what the discussion at the CLEO symposium centered on.  Despite the charge by Eggleton to “be real” and avoid the hype, there was plenty of hype going around on the panel, and plenty of optimism tempered by caution.  I admit that there is reason for cautious optimism.  Jerry Chow showed IBM’s very real quantum computer (with a very small number of qubits) that can be accessed through the cloud by anyone.  They even built a user interface to allow users to write their own quantum programs.  Jungsang Kim of IonQ was equally optimistic, showing off their trapped-ion quantum computer with dozens of trapped ions acting as individual qubits.  Admittedly, Chow and Kim have vested interests in their own products, but the technology is certainly impressive.  One of the sharpest critics, Mikhail Lukin of Harvard, was surprisingly also one of the most optimistic.  He made clear that talk of scalable quantum computers in the near future is nonsense.  Yet he is part of a Harvard-MIT collaboration that has constructed a 51-qubit array of trapped atoms that sets a world record.  Although it cannot be used for quantum computing, it was used to simulate a complex many-body physics problem, and it found an answer that could not be calculated or predicted using conventional computers.
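            To give a concrete sense of what writing a small quantum program looks like, here is a minimal sketch in Qiskit, IBM’s open-source quantum SDK (an illustration assuming a recent Qiskit installation, not the specific cloud interface shown at the panel).  It prepares the simplest entangled state, a two-qubit Bell pair:

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# A Hadamard followed by a CNOT turns |00> into the Bell state (|00> + |11>)/sqrt(2)
qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into an equal superposition
qc.cx(0, 1)   # entangle qubit 0 with qubit 1

print(qc.draw())                         # text diagram of the two-gate circuit
print(Statevector.from_instruction(qc))  # ideal (noise-free) output state

            On cloud-accessible hardware, the same circuit would instead be submitted to a backend for execution, and measurement statistics would come back in place of the ideal statevector.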

            The panel did come to a general consensus about quantum computing that highlights the specific challenges the field will face as it is called upon to deliver on its hyperbole.  They each echoed an idea known as the “supremacy plot,” which is a two-axis graph of the number of qubits versus the number of operations (also called the circuit depth).  The graph has one region that is not interesting, one region that is downright laughable (at the moment), and one final area of great hope.  The region of no interest lies in the range of large numbers of qubits but low numbers of operations, or large numbers of operations on a small number of qubits.  Each of these extremes can easily be calculated on conventional computers and hence is of no practical interest.  The region that is laughable is the area of large numbers of qubits and large numbers of operations.  No one suggested that this area can be accessed in even the next 10 years.  The region that everyone is eager to reach is the region of “quantum supremacy”.  This consists of quantum computers that have enough qubits and enough operations that they cannot be simulated by classical computers.  When asked where this region is, the panel consensus was that it would require more than 50 qubits and more than hundreds or thousands of operations.  What makes this so exciting is that there are real technologies that are now approaching this region–and they are based on light.

The Quantum Supremacy Chart: Plot of the number of qubits and the circuit depth (number of operations or gates) in a quantum computer. The red region (“Zzzzzzz”) is where classical computers can do as well. The purple region (“Ha Ha Ha”) is a dream. The middle region (“Wow”) is the region of hope, which may soon be reached by trapped atoms and optics.

Chris Monroe’s Perfect Qubits

The second plenary session at CLEO featured the recent Nobel prize winners Art Ashkin, Donna Strickland and Gerard Mourou, who won the 2018 Nobel prize in physics for laser applications.  (Donna Strickland is only the third woman to win the Nobel prize in physics.)  The warm-up band for these headliners was Chris Monroe, founder of the start-up company IonQ out of the University of Maryland.  Monroe outlined the general layout of their quantum computer, which is based on trapped atoms that he called “perfect qubits”.  Each trapped atom is literally an atomic clock, with the kind of exacting precision that atomic clocks provide.  The quantum properties of these atoms are as perfect as is needed for any quantum computation, and the limits on the performance of the current IonQ system are entirely set by the classical controls that trap and manipulate the atoms.  This is where the efforts of their rapidly growing R&D team are focused.

            If trapped atoms are the perfect qubit, then the perfect quantum communication channel is the photon.  The photon in vacuum is the quintessential messenger, propagating forever and interacting with nothing.  This is why experimental cosmologists can see the photons originating from the Big Bang nearly 14 billion years ago (actually from about 380,000 years after the Big Bang, when the Universe became transparent).  In a quantum computer based on trapped atoms as the gates, photons become the perfect wires.

            On the quantum supremacy chart, Monroe plotted the two main quantum computing technologies: solid state (based mainly on superconductors but also some semiconductor technology) and trapped atoms.  The challenges to solid state quantum computers come with the scale-up to the range of 50 qubits or more that will be needed to cross the frontier into quantum supremacy.  The inhomogeneous nature of solid state fabrication, as perfected as it is for the transistor, is a central problem for a solid state solution to quantum computing.  Furthermore, as the number of solid state qubits is scaled up, it is extremely difficult to simultaneously increase the circuit depth.  In fact, circuit depth is likely to decrease (initially) as the number of qubits rises because of the two-dimensional interconnect problem that is well known to circuit designers.  Trapped atoms, on the other hand, have the advantages of the perfection of atomic clocks that can be globally interconnected through perfect photon channels, and scaling up the number of qubits can go together with increased circuit depth–at least in the view of Monroe, who admittedly has a vested interest.  But he was speaking before an audience of several thousand highly-trained and highly-critical optics specialists, and no scientist in front of such an audience will make a claim that cannot be supported (although the reality is always in the caveats).

The Future of Quantum Computing is Optics

The state of the art in the photonic control of light now rivals the sophistication of the electronic control of electrons in circuits.  Each is driven by big-world applications: electronics by the consumer electronics and computer market, and photonics by the telecom industry.  Having a technology attached to a major world-wide market guarantees that progress is made relatively quickly, with the advantages of economies of scale.  The commercial driver is profits, and the driver for funding agencies (who support quantum computing) is their mandate to foster competitive national economies that create jobs and improve standards of living.

            The yearly CLEO conference is one of the top conferences in laser science in the world, drawing in thousands of laser scientists who are working on photonic control.  Integrated optics is one of the current hot topics.  It brings many of the resources of the electronics industry to bear on photonics.  Solid state optics is mostly concerned with the quantum properties of matter and its interaction with photons, and this year’s CLEO conference hosted many focused sessions on quantum sensors, quantum control, quantum information and quantum communication.  The level of external control of quantum systems is increasing at a spectacular rate.  Sitting in the audience at CLEO, you get the sense that you are looking at the embryonic stages of vast new technologies that will be enlisted in the near future for quantum computing.  The challenge is that there are so many variants that it is hard to know which of these nascent technologies will win and change the world.  But the key to technological progress is diversity (as it is for society), because it is the interplay and cross-fertilization among the diverse technologies that drives each forward, and even technologies that recede still contribute to the advances of the winning technology.

            The expert panel at CLEO on the future of quantum computing punctuated their moments of hype with moments of realism as they called for new technologies to solve some of the current barriers to quantum computers.  Walking out of the panel discussion that night, and walking into one of the CLEO technical sessions the next day, you could almost connect the dots.  The enabling technologies being requested by the panel are literally being built by the audience.

            In the end, the panel had a surprisingly prosaic argument in favor of the current push to build a working quantum computer.  It is an echo of the movie Field of Dreams, with its famous line “If you build it, they will come”.  That was the plea made by Lukin, who argued that by putting quantum computers into the hands of users, the killer app that will drive the future economics of quantum computers will likely emerge.  You don’t really know what to do with a quantum computer until you have one.

            Given the “perfect qubits” of trapped atoms, and the “perfect photons” of the communication channels, combined with the dizzying assortment of quantum control technologies being invented and highlighted at CLEO, it is easy to believe that the first large-scale quantum computers will be based on light.

Chandrasekhar’s Limit

Arthur Eddington was the complete package—an observationalist with the mathematical and theoretical skills to understand Einstein’s general theory, and the ability to construct the theory of the internal structure of stars.  He was Zeus in Olympus among astrophysicists.  He always had the last word, and he stood with Einstein firmly opposed to the Schwarzschild singularity.  In 1924 he published a theoretical paper in which he derived a new coordinate frame (now known as Eddington-Finkelstein coordinates) in which the singularity at the Schwarzschild radius is removed.  At the time, he took this to mean that the singularity did not exist and that gravitational cut off was not possible [1].  It would seem that the possibility of dark stars (black holes) had been put to rest.  Both Eddington and Einstein said so!  But just as they were writing the obituary of black holes, a strange new form of matter was emerging from astronomical observations that would challenge the views of these giants.

White Dwarf

Binary star systems have always held a certain fascination for astronomers.  If your field of study is the (mostly) immutable stars, then the stars that do move provide some excitement.  The attraction of binaries is the same thing that makes them important astrophysically—they are dynamic.  While many double stars are observed in the night sky (a few had been noted by Galileo), some of these are just coincidental alignments of near and far stars.  However, William Herschel began cataloging binary stars in 1779 and became convinced in 1802 that at least some of them must be gravitationally bound to each other.  He carefully measured the positions of binary stars over many years and confirmed that these stars showed relative changes in position, proving that they were gravitationally bound binary star systems [2].  The first orbit of a binary star was computed in 1827 by Félix Savary, for the orbit of Xi Ursae Majoris.  Finding the orbit of a binary star system provides a treasure trove of useful information about the pair of stars.  Not only can the masses of the stars be determined, but their radii and densities also can be estimated.  Furthermore, by combining this information with the distance to the binaries, it was possible to develop a relationship between mass and luminosity for all stars, even single stars.  Therefore, binaries became a form of measuring stick for crucial stellar properties.

Comparison of Earth to a white dwarf star with a mass equal to the Sun. They have comparable radii but radically different densities.

One of the binary star systems that Herschel discovered was the pair known as 40 Eridani B/C, which he observed on January 31, 1783.  Of this pair, 40 Eridani B was very dim compared to its companion.  More than a century later, in 1910, when spectrographs were first being used routinely on large telescopes, the spectrum of 40 Eridani B was found to be of an unusual white spectral class.  In the same year, the low-luminosity companion of Sirius, known as Sirius B, which shared the same unusual white spectral class, was evaluated in terms of its size and mass and was found to be exceptionally small and dense [3].  In fact, it was too small and too dense to be believed at first, because the densities were beyond any known or even conceivable matter.  The mass of Sirius B is around the mass of the Sun, but its radius is comparable to the radius of the Earth, making the white star about ten thousand times denser than the core of the Sun.  Eddington at first felt the same way about white dwarfs that he felt about black holes, but he was eventually swayed by the astrophysical evidence.  By 1922 many of these small white stars had been discovered, called white dwarfs, and their incredibly large densities had been firmly established.  In his famous book on stellar structure [4], he noted the strange paradox:  As a star cools, its pressure must decrease, as the pressure of all gases must as they cool, and the star would shrink, yet the pressure required to balance the force of gravity and stabilize the star against continued shrinkage must increase as the star gets smaller.  How can pressure decrease and yet increase at the same time?  In 1926, on the eve of the birth of quantum mechanics, Eddington could conceive of no mechanism that could resolve this paradox.  So he noted it as an open problem in his book and sent it to press.

Subrahmanyan Chandrasekhar

Three years after the publication of Eddington’s book, an eager and excited nineteen-year-old graduate of the University of Madras, India, boarded a steamer bound for England.  Subrahmanyan Chandrasekhar (1910—1995) had been accepted for graduate studies at Cambridge University.  The voyage in 1930 took eighteen days via the Suez Canal, and he needed something to do to pass the time.  He had with him Eddington’s book, which he carried like a bible, and he also had a copy of a breakthrough article written by R. H. Fowler that applied the new theory of quantum mechanics to the problem of dense matter composed of ions and electrons [5].  Fowler showed how the Pauli exclusion principle for electrons, which obey Fermi-Dirac statistics, created an energetic sea of electrons in their lowest energy states, called electron degeneracy.  This degeneracy was a fundamental quantum property of matter, and carried with it an intrinsic pressure unrelated to thermal properties.  Chandrasekhar realized that this was a pressure mechanism that could balance the force of gravity in a cooling star and might resolve Eddington’s paradox of the white dwarfs.  As the steamer moved ever closer to England, Chandrasekhar derived the new balance between gravitational pressure and electron degeneracy pressure and found the radius of the white dwarf as a function of its mass.  The critical step in Chandrasekhar’s theory, conceived alone on the steamer at sea with access to just a handful of books and papers, was the inclusion of special relativity with the quantum physics.  This was necessary because the densities were so high, and the electrons so energetic, that they attained speeds approaching the speed of light.

Something wonderful, but also a little scary, happened when Chandrasekhar included the relativistic effects in his calculation.  He discovered that electron degeneracy pressure could balance the force of gravity if the mass of the white dwarf were smaller than about 1.4 times the mass of the Sun.  But if the dwarf was more massive than this, then even the electron degeneracy pressure would be insufficient to fight gravity, and the star would continue to collapse.  To what?  Schwarzschild’s singularity was one possibility.  Chandrasekhar wrote up two papers on his calculations, and when he arrived in England, he showed them to Fowler, who was to be his advisor at Cambridge.  Fowler was genuinely enthusiastic about  the first paper, on the derivation of the relativistic electron degeneracy pressure, and it was submitted for publication.  The second paper, on the maximum sustainable mass for a white dwarf, which reared the ugly head of Schwarzschild’s singularity, made Fowler uncomfortable, and he sat on the paper, unwilling to give his approval for publication in the leading British astrophysical journal.  Chandrasekhar grew annoyed, and in frustration sent it, without Fowler’s approval, to an American journal, where “The Maximum Mass of Ideal White Dwarfs” was published in 1931 [6].  This paper, written in eighteen days on a steamer at sea, established what became known as the Chandrasekhar limit, for which Chandrasekhar would win the 1983 Nobel Prize in Physics, but not before he was forced to fight major battles for its acceptance.

The Chandrasekhar limit expressed in terms of the Planck Mass and the mass of a proton. The limit is approximately 1.4 times the mass of the Sun. White dwarfs with masses larger than the limit cannot balance gravitational collapse by relativistic electron degeneracy.
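Up to a numerical prefactor of order unity (worked out in the derivation below), the limit referred to in this caption has the form

\[
M_{\rm Ch} \;\sim\; \frac{m_{\rm Pl}^{3}}{m_p^{2}} \;=\; \left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{m_p^{2}}\,,
\]

where m_Pl is the Planck mass and m_p the mass of the proton; the full calculation brings this to approximately 1.4 solar masses.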

Chandrasekhar versus Eddington

Initially there was almost no response to Chandrasekhar’s paper.  Frankly, few astronomers had the theoretical training needed to understand the physics.  Eddington was one exception, which was why he held such stature in the community.  The big question therefore was:  Was Chandrasekhar’s theory correct?  During the three years it took to obtain his PhD, Chandrasekhar met frequently with Eddington, who was also at Cambridge, and with colleagues outside the university, and they all encouraged Chandrasekhar to tackle the more difficult problem of combining internal stellar structure with his theory.  This could not be done with pen and paper, but required numerical calculation.  Eddington was in possession of an early calculating machine, and he loaned it to Chandrasekhar to do the calculations.  After many months of tedious work, Chandrasekhar was finally ready to confirm his theory at the January 1935 meeting of the Royal Astronomical Society.

The young Chandrasekhar stood up and gave his results in an impeccable presentation before an auditorium crowded with his peers.  But as he left the stage, he was shocked when Eddington himself rose to give the next presentation.  Eddington proceeded to criticize and reject Chandrasekhar’s careful work, proposing instead a garbled mash-up of quantum theory and relativity that would eliminate Chandrasekhar’s limit and hence prevent collapse to the Schwarzschild singularity.  Chandrasekhar sat mortified in the audience.  After the session, many of his friends and colleagues came up to him to give their condolences—if Eddington, the leader of the field and one of the few astronomers who understood Einstein’s theories, said that Chandrasekhar was wrong, then that was that.  Badly wounded, Chandrasekhar was faced with a dire choice.  Should he fight against the reputation of Eddington, fight for the truth of his theory?  But he was at the beginning of his career and could ill afford to pit himself against the giant.  So he turned his back on the problem of stellar death, and applied his talents to the problem of stellar evolution. 

Chandrasekhar went on to have an illustrious career, spent mostly at the University of Chicago (far from Cambridge), and he did eventually return to his limit as it became clear that Eddington was wrong.  In fact, many at the time already suspected that Eddington was wrong and were seeking the answer to the next question: If white dwarfs cannot support themselves under gravity and must collapse, what do they collapse to?  In Pasadena, at the California Institute of Technology, an astrophysicist named Fritz Zwicky thought he knew the answer.

Fritz Zwicky’s Neutron Star

Fritz Zwicky (1898—1974) was an irritating and badly flawed genius.  What made him so irritating was that he knew he was a genius and never let anyone forget it.  What made him badly flawed was that he never cared much for the weight of evidence.  It was the ideas that mattered—let lesser minds do the tedious work of filling in the cracks.  And what made him a genius was that he was often right!  Zwicky pushed the envelope—he loved extremes.  The more extreme a theory was, the more likely he was to favor it—like his proposal for dark matter.  Most of his colleagues considered him to be a buffoon and a borderline crackpot.  He was tolerated by no one—no one except his steadfast collaborator of many years, Walter Baade (until they nearly came to blows on the eve of World War II).  Baade was a German astronomer trained at Göttingen who had recently arrived in Pasadena.  He was exceptionally well informed on the latest advances in a broad range of fields.  Where Zwicky made intuitive leaps, often unsupported by evidence, Baade would provide the context.  Baade was a walking Wikipedia for Zwicky, and together they changed the face of astrophysics.

Zwicky and Baade submitted an abstract to the American Physical Society Meeting in 1933, which Kip Thorne has called “…one of the most prescient documents in the history of physics and astronomy” [7].  In the abstract, Zwicky and Baade introduced, for the first time, the existence of supernovae as a separate class of nova and estimated the total energy output of these cataclysmic events, including the possibility that they are the source of some cosmic rays.  They introduced the idea of a neutron star, a star composed purely of neutrons, only a year after Chadwick discovered the neutron’s existence, and they strongly suggested that a supernova is produced by the transformation of a star into a neutron star.  A neutron star would have a mass similar to that of the Sun, but would have a radius of only tens of kilometers.  If the mass density of white dwarfs was hard to swallow, the density of a neutron star was a billion times greater!  It would take nearly thirty years before each of the assertions made in this short abstract was proven true, but Zwicky certainly had a clear view, tempered by Baade, of where the field of astrophysics was headed.  But no one listened to Zwicky.  He was too aggressive and backed up his wild assertions with too little substance.  Therefore, neutron stars simmered on the back burner until more substantial physicists could address their properties more seriously.

Two substantial physicists who had the talent and skills that Zwicky lacked were Lev Landau in Moscow and Robert Oppenheimer at Berkeley.  Landau derived the properties of a neutron star in 1937 and published the results to great fanfare.  He was not aware of Zwicky’s work, and he called them neutron cores, because he hypothesized that they might reside at the core of ordinary stars like the Sun.  Oppenheimer, working with a Canadian graduate student, George Volkoff, at Berkeley, showed that Landau’s idea about stellar cores was not correct, but that the general idea of a neutron core, or rather a neutron star, was correct [8].  Once Oppenheimer was interested in neutron stars, he kept going and asked the same question about neutron stars that Chandrasekhar had asked about white dwarfs:  Is there a maximum size for neutron stars beyond which they must collapse?  The answer to this question used the same quantum mechanical degeneracy pressure (now provided by neutrons rather than electrons) and gravitational compaction as the problem of white dwarfs, but it required a detailed understanding of nuclear forces, which in 1938 were only beginning to be understood.  However, Oppenheimer knew enough to make a good estimate of the nuclear binding contribution to the total internal pressure and came to a similar conclusion for neutron stars as Chandrasekhar had for white dwarfs.  There was indeed a maximum mass of a neutron star, a Chandrasekhar-type limit of about three solar masses.  Beyond this mass, even the degeneracy pressure of neutrons could not support the gravitational pressure, and the neutron star must collapse.  In Oppenheimer’s mind it was clear what it must collapse to—a black hole (known as gravitational cut-off at that time).  This was to lead Oppenheimer and John Wheeler to their famous confrontation over the existence of black holes, which Oppenheimer won, but Wheeler took possession of the battlefield [9].

Derivation of the Relativistic Chandrasekhar Limit

White dwarfs are created from the balance between gravitational compression and the degeneracy pressure of electrons caused by the Pauli exclusion principle. When a star collapses gravitationally, the matter becomes so dense that the electrons fill all of the lowest-energy quantum states, and the exclusion principle forbids squeezing them any further without pushing electrons to higher momenta. The resulting degeneracy pressure stabilizes the gravitational collapse, and the result is a white dwarf with a mass density a million times larger than that of the Sun.

If the electrons remained non-relativistic, then there would be no upper limit to the mass of a star that could form a white dwarf. However, electrons become relativistic at high enough compaction. If the initial star is too massive, the electron degeneracy pressure is limited relativistically and cannot keep the matter from compacting further, and even the white dwarf will collapse (to a neutron star or a black hole). The largest mass that can be supported by electron degeneracy pressure is known as the Chandrasekhar limit.

A simplified derivation of the Chandrasekhar limit begins by defining the total energy as the kinetic energy of the degenerate Fermi electron gas plus the gravitational potential energy
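One way to write this (a minimal sketch, keeping only factors of order unity, with N the number of electrons, R the stellar radius and M the stellar mass) is

\[
E(R) \;=\; E_{\rm kin}(R) + E_{\rm grav}(R), \qquad E_{\rm grav}(R) \;\approx\; -\,\frac{G M^{2}}{R}\,.
\]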

The kinetic energy of the degenerate Fermi gas has the relativistic expression
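In the simplest version of this step, each of the N electrons is assigned the Fermi momentum, and the radius-independent rest energy is carried along since it does not affect the minimization:

\[
E_{\rm kin} \;\approx\; N \sqrt{\left(\hbar c\, k_F\right)^{2} + \left(m_e c^{2}\right)^{2}}\,,
\]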


where the Fermi k-vector can be expressed as a function of the radius of the white dwarf and the total number of electrons in the star, as
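For N electrons filling a uniform sphere of radius R, with electron density n_e = 3N/(4πR³), the standard result is

\[
k_F \;=\; \left(3\pi^{2} n_e\right)^{1/3} \;=\; \left(\frac{9\pi N}{4}\right)^{1/3} \frac{1}{R}\,.
\]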

If the star is composed of pure hydrogen, then the mass of the star is expressed in terms of the total number of electrons and the mass of the proton
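With one proton per electron, and neglecting the electron mass relative to the proton mass,

\[
M \;=\; N\, m_p\,.
\]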

The total energy of the white dwarf is minimized by taking its derivative with respect to the radius of the star
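Writing the Fermi wavevector as k_F = C N^{1/3}/R with C = (9π/4)^{1/3}, the derivative of the sketch energy above is

\[
\frac{dE}{dR} \;=\; -\,\frac{N \left(\hbar c\, C N^{1/3}\right)^{2}}{R^{3}\,\sqrt{\left(\hbar c\, C N^{1/3}/R\right)^{2} + \left(m_e c^{2}\right)^{2}}} \;+\; \frac{G M^{2}}{R^{2}}\,.
\]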

When the derivative is set to zero, the condition for mechanical equilibrium becomes
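(in the simplified sketch used here)

\[
\frac{N \left(\hbar c\, C N^{1/3}\right)^{2}}{\sqrt{\left(\hbar c\, C N^{1/3}\right)^{2} + \left(m_e c^{2} R\right)^{2}}} \;=\; G M^{2}\,.
\]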

This is solved for the radius for which the electron degeneracy pressure stabilizes the gravitational pressure
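Solving the condition above for R, and using M = N m_p together with the Planck mass m_Pl = (ħc/G)^{1/2} and the reduced Compton wavelength of the electron λ_C = ħ/(m_e c), gives, in this sketch,

\[
R \;\approx\; C\, N^{1/3}\, \lambda_C \sqrt{\, C^{2} \left(\frac{m_{\rm Pl}}{m_p}\right)^{4} N^{-4/3} \;-\; 1 \,}\,.
\]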

This is the relativistic radius-mass expression for the size of the stabilized white dwarf as a function of the mass (or total number of electrons). One of the astonishing results of this calculation is the merging of astronomically large numbers (the mass of stars) with both relativity and quantum physics. The radius of the white dwarf is actually expressed as a multiple of the Compton wavelength of the electron!

The expression in the square root becomes smaller as the size of the star increases, and there is an upper bound to the mass of the star beyond which the argument in the square root goes negative. This upper bound is the Chandrasekhar limit defined when the argument equals zero
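In the sketch above, the argument of the square root vanishes when

\[
N_{\max} \;=\; C^{3/2} \left(\frac{m_{\rm Pl}}{m_p}\right)^{3} \;=\; \left(\frac{9\pi}{4}\right)^{1/2} \left(\frac{m_{\rm Pl}}{m_p}\right)^{3}\,.
\]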

This gives the final expression for the Chandrasekhar limit (expressed in terms of the Planck mass)
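Multiplying the maximum electron number by the proton mass, and again keeping only an order-unity prefactor in this rough treatment,

\[
M_{\rm Ch} \;=\; N_{\max}\, m_p \;\sim\; \frac{m_{\rm Pl}^{3}}{m_p^{2}} \;=\; \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{m_p^{2}}\,.
\]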

This expression is only approximate, but it does contain the essential physics and magnitude. This limit is on the order of a solar mass. A more realistic numerical calculation yields a limiting mass of about 1.4 times the mass of the Sun. For white dwarfs larger than this value, the electron degeneracy is insufficient to support the gravitational pressure, and the star will collapse to a neutron star or a black hole.


[1] The fact that Eddington coordinates removed the singularity at the Schwarzschild radius was first pointed out by Lemaitre in 1933.  A local observer passing through the Schwarzschild radius would experience no divergence in local properties, even though a distant observer would see that in-falling observer becoming length contracted and time dilated. This point of view of an in-falling observer was explained in 1958 by Finkelstein, who also pointed out that the Schwarzschild radius is an event horizon.

[2] William Herschel (1803), Account of the Changes That Have Happened, during the Last Twenty-Five Years, in the Relative Situation of Double-Stars; With an Investigation of the Cause to Which They Are Owing, Philosophical Transactions of the Royal Society of London 93, pp. 339–382 (Motion of binary stars)

[3] Boss, L. (1910). Preliminary General Catalogue of 6188 stars for the epoch 1900. Carnegie Institution of Washington. (Mass and radius of Sirius B)

[4] Eddington, A. S. (1927). Stars and Atoms. Clarendon Press. LCCN 27015694.

[5] Fowler, R. H. (1926). “On dense matter”. Monthly Notices of the Royal Astronomical Society 87: 114. Bibcode:1926MNRAS..87..114F. (Quantum mechanics of degenerate matter).

[6] Chandrasekhar, S. (1931). “The Maximum Mass of Ideal White Dwarfs”. The Astrophysical Journal 74: 81. Bibcode: 1931ApJ....74...81C. doi:10.1086/143324. (Mass limit of white dwarfs).

[7] Kip Thorne (1994) Black Holes & Time Warps: Einstein’s Outrageous Legacy (Norton). pg. 174

[8] Oppenheimer was aware of Zwicky’s proposal because he had a joint appointment between Berkeley and Cal Tech.

[9] See Chapter 7, “The Lens of Gravity” in Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018).



A Wealth of Motions: Six Generations in the History of the Physics of Motion

Since Galileo launched his trajectory, there have been six broad generations that have traced the continuing development of concepts of motion. These are: 1) Universal Motion; 2) Phase Space; 3) Space-Time; 4) Geometric Dynamics; 5) Quantum Coherence; and 6) Complex Systems. These six generations were not all sequential, many evolving in parallel over the centuries, borrowing from each other, and there surely are other ways one could divide up the story of dynamics. But these six generations capture the grand concepts and the crucial paradigm shifts that are Galileo’s legacy, taking us from Galileo’s trajectory to the broad expanses across which physicists practice physics today.

Universal Motion emerged as a new concept when Isaac Newton proposed his theory of universal gravitation by which the force that causes apples to drop from trees is the same force that keeps the Moon in motion around the Earth, and the Earth in motion around the Sun. This was a bold step because even in Newton’s day, some still believed that celestial objects obeyed different laws. For instance, it was only through the work of Edmund Halley, a contemporary and friend of Newton’s, that comets were understood to travel in elliptical orbits obeying the same laws as the planets. Universal Motion included ideas of momentum from the start, while concepts of energy and potential, which fill out this first generation, took nearly a century to develop in the hands of many others, like Leibniz and Euler and the Bernoullis. This first generation was concluded by the masterwork of the Italian-French mathematician Joseph-Louis Lagrange, who also planted the seed of the second generation.

The second generation, culminating in the powerful and useful Phase Space, also took more than a century to mature. It began when Lagrange divorced dynamics from geometry, establishing generalized coordinates as surrogates to directions in space. Ironically, by discarding geometry, Lagrange laid the foundation for generalized spaces, because generalized coordinates could be anything, coming in any units and in any number, each coordinate having its companion velocity, doubling the dimension for every freedom. The Austrian physicist Ludwig Boltzmann expanded the number of dimensions to the scale of Avogadro’s number of particles, and he discovered the conservation of phase space volume, an invariance of phase space that stays the same even as 10^23 atoms (Avogadro’s number) in ideal gases follow their random trajectories. The idea of phase space set the stage for statistical mechanics and for a new probabilistic viewpoint of mechanics that would extend into chaotic motions.

The French mathematician Henri Poincaré got a glimpse of chaotic motion in 1890 as he rushed to correct an embarrassing mistake in his manuscript that had just won a major international prize. The mistake was mathematical, but the consequences were profoundly physical, beginning the long road to a theory of chaos that simmered, without boiling, for nearly seventy years until computers became common lab equipment. Edward Lorenz of MIT, working on models of the atmosphere in the early 1960s, used one of the earliest scientific computers to expose the beauty and the complexity of chaotic systems. He discovered that the computer simulations were exponentially sensitive to the initial conditions, and the joke became that a butterfly flapping its wings in China could cause hurricanes in the Atlantic. In his computer simulations, Lorenz discovered what today is known as the Lorenz butterfly, an example of something called a “strange attractor”. But the term chaos is a bit of a misnomer, because chaos theory is primarily about finding what things are shared in common, or are invariant, among seemingly random-acting systems.

The third generation in concepts of motion, Space-Time, is indelibly linked with Einstein’s special theory of relativity, but Einstein was not its originator. Space-time was the brain child of the gifted but short-lived Prussian mathematician Hermann Minkowski, who had been attracted from Königsberg to the mathematical powerhouse at the University of Göttingen, Germany, around the turn of the 20th Century by David Hilbert. Minkowski was an expert in invariant theory, and when Einstein published his special theory of relativity in 1905 to explain the Lorentz transformations, Minkowski recognized a subtle structure buried inside the theory. This structure was related to Riemann’s metric theory of geometry, but it had the radical feature that time appeared as one of the geometric dimensions. This was a drastic departure from all former theories of motion that had always separated space and time: trajectories had been points in space that traced out a continuous curve as a function of time. But in Minkowski’s mind, trajectories were invariant curves, and although their mathematical representation changed with changing point of view (relative motion of observers), the trajectories existed in a separate unchanging reality, not mere functions of time, but eternal. He called these trajectories world lines. They were static structures in a geometry that is today called Minkowski space. Einstein at first was highly antagonistic to this new view, but he relented, and later he so completely adopted space-time in his general theory that today Minkowski is almost forgotten, his echo heard softly in expressions of the Minkowski metric that is the background to Einstein’s warped geometry that bends light and captures errant spacecraft.

The fourth generation in the development of concepts of motion, Geometric Dynamics, began when an ambitious French physicist with delusions of grandeur, the historically ambiguous Pierre Louis Maupertuis, returned from a scientific boondoggle to Lapland where he measured the flatness of the Earth in defense of Newtonian physics over Cartesian. Skyrocketed to fame by the success of the expedition, he began his second act by proposing the Principle of Least Action, a principle by which all motion seeks to be most efficient by taking a geometric path that minimizes a physical quantity called action. In this principle, Maupertuis saw both a universal law that could explain all of physical motion and a path for himself to gain eternal fame in the company of Galileo and Newton. Unfortunately, his high hopes were dashed through personal conceit and nasty intrigue, and most physicists today don’t even recognize his name. But the idea of least action struck a deep chord that reverberates throughout physics. It is the first and fundamental example of a minimum principle, of which there are many. For instance, minimum potential energy identifies points of system equilibrium, and paths of minimum distance are geodesic paths. In dynamics, minimization of the difference between kinetic and potential energies identifies the dynamical paths of trajectories, and minimization of distance through space-time warped by mass and energy density identifies the paths of falling objects.

Maupertuis’ fundamentally important idea was picked up by Euler and Lagrange, who expanded it through the language of differential geometry. This was the language of Bernhard Riemann, a gifted and shy German mathematician whose mathematical language was adopted by physicists to describe motion as a geodesic, the shortest path like a great-circle route on the Earth, in an abstract dynamical space defined by kinetic energy and potentials. In this view, it is the geometry of the abstract dynamical space that imposes Galileo’s simple parabolic form on freely falling objects. Einstein took this viewpoint farther than any before him, showing how mass and energy warped space and how free objects near gravitating bodies move along geodesic curves defined by the shape of space. This brought trajectories to a new level of abstraction, as space itself became the cause of motion. Prior to general relativity, motion occurred in space. Afterwards, motion was caused by space. In this sense, gravity is not a force, but is like a path down which everything falls.

The fifth generation of concepts of motion, Quantum Coherence, increased abstraction yet again in the comprehension of trajectories, ushering in difficult concepts like wave-particle duality and quantum interference. Quantum interference underlies many of the counter-intuitive properties of quantum systems, including the possibility for quantum systems to be in two or more states at the same time, and for quantum computers to crack unbreakable codes. But this new perspective came with a cost, introducing fundamental uncertainties that are locked in a battle of trade-offs as one measurement becomes more certain and others become more uncertain.

Einstein distrusted Heisenberg’s uncertainty principle, not that he disagreed with its veracity, but he felt it was more a statement of ignorance than a statement of fundamental unknowability. In support of Einstein, Schrödinger devised a thought experiment that was meant to be a reduction to absurdity in which a cat is placed in a box with a vial of poison that would be broken if a quantum particle decays. The cruel fate of Schrödinger’s cat, who might or might not be poisoned, hinges on whether or not someone opens the lid and looks inside. Once the box is opened, there is one world in which the cat is alive and another world in which the cat is dead. These two worlds spring into existence when the box is opened—a bizarre state of affairs from the point of view of a pragmatist. This is where Richard Feynman jumped into the fray and redefined the idea of a trajectory in a radically new way by showing that a quantum trajectory is not a single path, like Galileo’s parabola, but the combined effect of the quantum particle taking all possible paths simultaneously. Feynman established this new view of quantum trajectories in his doctoral dissertation under the direction of John Archibald Wheeler at Princeton. By adapting Maupertuis’ Principle of Least Action to quantum mechanics, Feynman showed how every particle takes every possible path—simultaneously—every path interfering in such a way that only the path with the most constructive interference is observed. In the quantum view, the deterministic trajectory of the cannon ball evaporates into a cloud of probable trajectories.

In our current complex times, the sixth generation in the evolution of concepts of motion explores Complex Systems. Lorenz’s Butterfly has more to it than butterflies, because Life is the greatest complex system of our experience and our existence. We are the end result of a cascade of self-organizing events that began half a billion years after Earth coalesced out of the nebula, leading to the emergence of consciousness only about 100,000 years ago—a fact that lets us sit here now and wonder about it all. That we are conscious is perhaps no accident. Once the first amino acids coagulated in a muddy pool, we have been marching steadily uphill, up a high mountain peak in a fitness landscape. Every advantage a species gained over its environment and over its competitors exerted a type of pressure on all the other species in the ecosystem that caused them to gain their own advantage.

The modern field of evolutionary dynamics spans a wide range of scales across a wide range of abstractions. It treats genes and mutations on DNA in much the same way it treats the slow drift of languages and the emergence of new dialects. It treats games and social interactions the same way it does the evolution of cancer. Evolutionary dynamics is the direct descendant of chaos theory that turned butterflies into hurricanes, but the topics it treats are special to us as evolved species, and as potential victims of disease. The theory has evolved its own visualizations, such as the branches in the tree of life and the high mountain tops in fitness landscapes separated by deep valleys. Evolutionary dynamics draws, in a fundamental way, on dynamic processes in high dimensions, without which it would be impossible to explain how something as complex as human beings could have arisen from random mutations.

These six generations in the development of dynamics are not likely to stop, and future generations may arise as physicists pursue the eternal quest for the truth behind the structure of reality.