At the turn of the New Year, as I look back at the breakthroughs in physics of the previous year, sifting through the candidates, I usually narrow it down to about 4 to 6 that I find personally compelling (see, for instance, 2023, 2022). In a given year, they may be related to things like supersolids, condensed atoms, or quantum entanglement. Often they relate to those awful, embarrassing gaps in physics knowledge that we give euphemistic names to, like “Dark Energy” and “Dark Matter” (although in the end they may be neither energy nor matter). But this year, as I sifted, I was struck by how many of the “physics” advances of the past year were focused on pushing limits—lower temperatures, more qubits, larger distances.
If you want something that is eventually useful, then engineering is the way to go, and many of the potential breakthroughs of 2024 did require heroic efforts. But if you are looking for a paradigm shift—a new way of seeing or thinking about our reality—then bigger, better and farther won’t give you that. We may be pushing the boundaries, but the thinking stays the same.
Therefore, for 2024, I have replaced “breakthrough” with a single “prospect” that may force us to change our thinking about the universe and the fundamental forces behind it.
This prospect is the weakening of dark energy over time.
It is a “prospect” because it is not yet absolutely confirmed. If it is confirmed in the next few years, then it changes our view of reality. If it is not confirmed, then it still forces us to think harder about fundamental questions, pointing where to look next.
Einstein’s Cosmological “Constant”
Like so much of physics today, the origins of this story go back to Einstein. At the height of WWI in 1917, as Einstein was working in Berlin, he “tweaked” his new theory of general relativity to allow the universe to be static. The tweak came in the form of a parameter he labelled Lambda (Λ), providing a counterbalance against the gravitational collapse of the universe, which at the time was assumed to have a time-invariant density. This cosmological “constant” of spacetime represented a pressure that kept the universe inflated like a balloon.
Fig. 1 Einstein’s “Field Equations” for the universe containing expressions for curvature, the metric tensor and energy density. Spacetime is warped by energy density, and trajectories within the warped spacetime follow geodesic curves. When Λ = 0, only gravitational attraction is present. When Λ ≠ 0, a “repulsive” background force exerts a pressure on spacetime, keeping it inflated like a balloon.
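For readers who want the symbols, the field equations sketched in Fig. 1 take the standard form (in one common sign convention):

```latex
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
```

Here G_{μν} is the curvature (Einstein) tensor, g_{μν} the metric tensor, and T_{μν} the energy density (stress-energy) tensor; the Λ term supplies the balloon-like background pressure.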
Later, in 1929 when Edwin Hubble discovered that the universe was not static but was expanding, and not only expanding, but apparently on a free trajectory originating at some point in the past (the Big Bang), Einstein zeroed out his cosmological constant, viewing it as one of his greatest blunders.
And so it stood until 1998 when two teams announced that the expansion of the universe is accelerating—and Einstein’s cosmological constant was back in. In addition, measurements of the energy density of the universe showed that the cosmological constant was contributing around 68% of the total energy density; this dominant component has been given the name Dark Energy. One of the ways to measure Dark Energy is through BAO.
Baryon Acoustic Oscillations (BAO)
If the goal of science communication is to be transparent, and to engage the public in the heroic pursuit of pure science, then the moniker Baryon Acoustic Oscillations (BAO) was perhaps the wrong turn of phrase. “Cosmic Ripples” might have been a better analogy (and a bit more poetic).
In the early moments after the Big Bang, slight density fluctuations set up a balance of opposing effects between gravitational attraction, which tends to clump matter, and the homogenizing effect of the hot photon background, which tends to disperse ionized matter. Matter consists of dark matter as well as the matter we are composed of, known as baryonic matter. Only baryonic matter can be ionized and interact with photons, so only photons and baryons experience this balance. As the universe expanded, an initial clump of baryons and photons expanded outward together, like the ripples on a millpond caused by a thrown pebble. And because the early universe had many clumps (and anti-clumps where density was lower than average), the millpond ripples were like those from a gentle rain with many expanding ringlets overlapping.
Fig. 2 Overlapping ripples showing galaxies formed along the shells. The size of the shells is set by the speed of “sound” in the universe. From [Ref].
Fig. 3 Left. Galaxies formed on acoustic ringlets like drops of dew on a spider’s web. Right. Many ringlets overlapping. The characteristic size of the ringlets can still be extracted statistically. From [Ref].
Then, about 400,000 years after the Big Bang, as the universe expanded and cooled, it got cold enough that free electrons and ionized baryons combined into neutral atoms that are transparent to light. Light suddenly flew free, decoupled from the matter that had constrained it. Removing the balance between light and matter in the BAO caused the baryonic ripples to freeze in place, as if a sudden arctic blast froze the millpond in an instant. The residual clumps of matter in the early universe became clumps of galaxies in the modern universe that we can see and measure. We can also see the effects of those clumps on the temperature fluctuations of the cosmic microwave background (CMB).
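The characteristic ripple size (the “sound horizon”) can be estimated on the back of an envelope. The sketch below is a deliberately naive toy calculation, assuming a sound speed of c/√3 in the photon-baryon fluid and ignoring the expansion of the universe while the sound travels; the numbers (380,000 years, redshift 1090) are standard round figures, not from this article.

```python
import math

# Toy estimate of the BAO "ripple" size (the sound horizon).
# Assumptions: sound speed c/sqrt(3) in the photon-baryon fluid,
# recombination at t ~ 380,000 yr, redshift z ~ 1090.
c_km_s = 2.998e5          # speed of light, km/s
t_rec_yr = 3.8e5          # approximate age at recombination, years
z_rec = 1090              # approximate redshift of recombination

cs = c_km_s / math.sqrt(3)               # sound speed, km/s
km_per_Mpc = 3.086e19
sec_per_yr = 3.156e7

# Naive physical distance sound travels by recombination (ignores expansion)
d_phys_km = cs * t_rec_yr * sec_per_yr
d_phys_Mpc = d_phys_km / km_per_Mpc

# Stretch to today's (comoving) size
d_comoving_Mpc = d_phys_Mpc * (1 + z_rec)

print(f"physical sound horizon ~ {d_phys_Mpc:.3f} Mpc")
print(f"comoving ripple size   ~ {d_comoving_Mpc:.0f} Mpc")
```

This crude estimate gives a comoving scale of order 70 Mpc; the real calculation, which integrates the sound speed through the expanding radiation-dominated era, gives about 150 Mpc, but the order of magnitude is right.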
Between these two—the BAO and the CMB—it is possible to measure cosmic distances, and with those distances, to measure how fast the universe is expanding.
Acceleration Slowing
The Dark Energy Spectroscopic Instrument (DESI) on top of Kitt Peak in Arizona is measuring the distances to millions of galaxies using automated arrays containing thousands of optical fibers. In one year it measured the distances to about 6 million galaxies.
Fig. 4 The Kitt Peak observatory, the site of DESI. From [Ref].
By focusing on seven “epochs” in galaxy formation in the universe, it measures the sizes of the BAO ripples over time, ranging in ages from 3 billion to 11 billion years ago. (The universe is about 13.8 billion years old.) The relative sizes are then compared to the predictions of the LCDM (Lambda-Cold-Dark-Matter) model. This is the “consensus” model of the day—agreed upon as being “most likely” to explain observations. If Dark Energy is a true constant, then the relative sizes of the ripples should all be the same, regardless of how far back in time we look.
But what the DESI data show is that the relative sizes in more recent times (the past few billion years) are smaller than LCDM predicts. Given that LCDM attributes the accelerating expansion of the universe to Dark Energy, this means that Dark Energy has been slightly weaker over the past few billion years than it was 10 billion years ago: it is weakening, or “thawing”.
The measurements as they stand today are shown in Fig. 5, plotting the relative sizes as a function of look-back time, with a dashed line showing the deviation from the LCDM prediction. The error bars in the figure are not yet that impressive, and statistical fluctuations may be causing the trend, so it might be erased by more measurements. But the BAO results have been augmented by recent measurements of supernovae (SNe) that provide additional support for thawing Dark Energy. Combined, the BAO+SNe results currently stand at about 3.4 sigma. The gold standard for “discovery” is about 5 sigma, so there is still room for this effect to disappear. So stay tuned—the final answer may be known within a few years.
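The kind of comparison DESI makes can be sketched numerically. The toy model below compares cosmic distances for a constant cosmological constant (w = −1) against a “thawing” dark energy using the common CPL parameterization w(a) = w0 + wa(1 − a). The parameter values (Ωm = 0.31, w0 = −0.8, wa = −0.7) are illustrative assumptions for this sketch, not the DESI best fit.

```python
import math

# Hedged sketch: expansion history for a constant cosmological constant
# versus a "thawing" dark energy with w(a) = w0 + wa*(1 - a).
Om = 0.31              # matter density fraction today (illustrative)
Ode = 1.0 - Om         # dark-energy fraction (flat universe assumed)

def E(z, w0=-1.0, wa=0.0):
    """Dimensionless Hubble rate H(z)/H0 for flat w0-wa dark energy."""
    a = 1.0 / (1.0 + z)
    # rho_DE(a)/rho_DE(today) for CPL: a^(-3(1+w0+wa)) * exp(-3 wa (1-a))
    de = a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))
    return math.sqrt(Om * (1.0 + z) ** 3 + Ode * de)

def comoving_distance(z, w0=-1.0, wa=0.0, steps=2000):
    """Comoving distance in units of c/H0 (trapezoid-rule integration)."""
    dz = z / steps
    s = 0.5 * (1.0 / E(0.0, w0, wa) + 1.0 / E(z, w0, wa))
    for i in range(1, steps):
        s += 1.0 / E(i * dz, w0, wa)
    return s * dz

z = 0.5  # a "recent epoch": a few billion years of look-back time
d_lcdm = comoving_distance(z)                    # constant Lambda
d_thaw = comoving_distance(z, w0=-0.8, wa=-0.7)  # illustrative thawing model

print(f"d(z=0.5), LCDM    : {d_lcdm:.4f} c/H0")
print(f"d(z=0.5), thawing : {d_thaw:.4f} c/H0")
print(f"ratio thaw/LCDM   : {d_thaw / d_lcdm:.4f}")
```

The thawing model yields slightly smaller distances at recent epochs than constant-Λ LCDM, which is the qualitative sense of the DESI trend described above; the real analysis fits the BAO scale at each epoch rather than this simple distance ratio.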
Fig. 5 Seven “epochs” in the evolution of galaxies in the universe. This plot shows relative galactic distances as a function of time looking back towards the Big Bang (older times closer to the Big Bang are to the right side of the graph). In more recent times, relative distances are smaller than predicted by the consensus theory known as Lambda-Cold-Dark-Matter (LCDM), suggesting that Dark Energy is slightly weaker today than it was billions of years ago. The three left-most data points (with error bars from early 2024) are below the LCDM line. From [Ref].
Fig. 6 Annotated version of the previous figure. From [Ref].
The Future of Physics
The gravitational constant G is considered to be a constant property of nature, as is Planck’s constant h, and the charge of the electron e. None of these fundamental properties of physics are viewed as time dependent and none can be derived from basic principles. They are simply constants of our reality. But if Λ is time dependent, then it is not a fundamental constant and should be derivable and explainable.
These days, the physics breakthroughs in the news that really catch the eye tend to be Astro-centric. Partly, this is due to the new data coming from the James Webb Space Telescope, which is the flashiest and newest toy of the year in physics. But also, this is part of a broader trend in physics that we see in the interest statements of physics students applying to graduate school. With the Higgs business winding down for high energy physics, and solid state physics becoming more engineering, the frontiers of physics have pushed to the skies, where there seem to be endless surprises.
To be sure, quantum information physics (a hot topic) and AMO (atomic and molecular optics) are performing herculean feats in the laboratories. But even there, Bose-Einstein condensates are simulating the early universe, and quantum computers are simulating worm holes—tipping their hat to astrophysics!
So here are my picks for the top physics breakthroughs of 2023.
The Early Universe
The James Webb Space Telescope (JWST) has come through big on all of its promises! They said it would revolutionize the astrophysics of the early universe, and they were right. As of 2023, all astrophysics textbooks describing the early universe and the formation of galaxies are now obsolete, thanks to JWST.
Foremost among the discoveries is how fast the universe took up its current form. Galaxies condensed much earlier than expected, as did supermassive black holes. Everything that we thought took billions of years seems to have happened in only about one-tenth of that time (incredibly fast on cosmic time scales). The new JWST observations blow away the status quo on the early universe, and now the astrophysicists have to go back to the chalkboard.
If LIGO and the first detection of gravitational waves was the huge breakthrough of 2015, detecting something so faint that it took a century to build an apparatus sensitive enough to detect it, then the newest observations of gravitational waves, detected through the timing of galactic pulsars, present a whole new level of gravitational wave physics.
By using the exquisitely precise timing of distant pulsars, astrophysicists have been able to detect a din of gravitational waves washing back and forth across the universe. These waves came from supermassive black hole mergers in the early universe. As the waves stretch and compress the space between us and distant pulsars, the arrival times of pulsar pulses detected at the Earth vary a tiny but measurable amount, heralding the passing of a gravitational wave.
This approach is a form of statistical optics, in contrast to the original direct detection, which was a form of interferometry. These are complementary techniques in optics research, just as they will be complementary forms of gravitational wave astronomy. Statistical optics (and fluctuation analysis) provides spectral density functions which can yield ensemble averages in the large-N limit. This can answer questions about large ensembles that single-event interferometric detection cannot address. Conversely, interferometric detection provides the details of individual events in ways that statistical optics cannot. The two complementary techniques, moving forward, will provide a much clearer picture of gravitational wave physics and the conditions in the universe that generate the waves.
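The statistical signature that pulsar timing arrays look for has a definite shape: the Hellings-Downs curve, which gives the expected correlation between the timing residuals of two pulsars as a function of their angular separation on the sky. A minimal sketch of the standard formula (not specific to any one collaboration's analysis):

```python
import math

# The Hellings-Downs curve: expected correlation between timing residuals
# of a pulsar pair separated by angle theta, for an isotropic
# gravitational-wave background (standard quadrupolar formula).
def hellings_downs(theta_deg):
    """Expected correlation for a pulsar pair separated by theta (degrees)."""
    x = (1.0 - math.cos(math.radians(theta_deg))) / 2.0
    if x == 0.0:
        return 0.5  # limit as the separation angle goes to zero
    return 1.5 * x * math.log(x) - x / 4.0 + 0.5

for theta in (0, 30, 49.2, 90, 120, 180):
    print(f"theta = {theta:6.1f} deg -> correlation = {hellings_downs(theta):+.3f}")
```

The curve starts at +0.5 for nearby pulsar pairs, crosses zero near 49 degrees, dips negative around 90 degrees, and rises again to +0.25 for pulsars on opposite sides of the sky. Seeing this specific pattern across many pulsar pairs is what distinguishes a gravitational-wave background from ordinary timing noise.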
Phosphorus on Enceladus
Planetary science is the close cousin to the more distant field of cosmology, but being close to home also makes it more immediate. The search for life outside the Earth stands as one of the greatest scientific quests of our day. We are almost certainly not alone in the universe, and life may be as close as Enceladus, the icy moon of Saturn.
Scientists have been studying data from the Cassini spacecraft that observed Saturn close-up for over a decade from 2004 to 2017. Enceladus has a subsurface liquid ocean that generates plumes of tiny ice crystals that erupt like geysers from fissures in the solid surface. The ocean remains liquid because of internal tidal heating caused by the large gravitational forces of Saturn.
The Cassini spacecraft flew through the plumes and analyzed their content using its Cosmic Dust Analyzer. While the ice crystals from Enceladus were already known to contain organic compounds, the science team discovered that they also contain phosphorus. This is the least abundant element within the molecules of life, but it is absolutely essential, providing the backbone chemistry of DNA as well as of the phospholipids that form cell membranes.
With this discovery, all the essential building blocks of life are known to exist on Enceladus, along with a liquid ocean that is likely to be in chemical contact with rocky minerals on the ocean floor, possibly providing the kind of environment that could promote the emergence of life on a planet other than Earth.
Simulating the Expanding Universe in a Bose-Einstein Condensate
Putting the universe under a microscope in a laboratory may have seemed a foolish dream, until a group at the University of Heidelberg did just that. It isn’t possible to make a real universe in the laboratory, but by adjusting the properties of an ultra-cold collection of atoms known as a Bose-Einstein condensate, the research group was able to create a type of local space whose internal metric has a curvature, like curved space-time. Furthermore, by controlling the inter-atomic interactions of the condensate with a magnetic field, they could cause the condensate to expand or contract, mimicking different scenarios for the evolution of our own universe. By adjusting the type of expansion that occurs, the scientists could create hypotheses about the geometry of the universe and test them experimentally, something that could never be done in our own universe. This could lead to new insights into the behavior of the early universe and the formation of its large-scale structure.
This is the only breakthrough I picked that is not related to astrophysics (although even this effect may have played a role in the very early universe).
Entanglement is one of the hottest topics in physics today (although the idea is 89 years old) because of the crucial role it plays in quantum information physics. The field was recognized with the 2022 Nobel Prize in Physics, awarded to John Clauser, Alain Aspect and Anton Zeilinger.
Direct observations of entanglement have mostly been restricted to optics (where entangled photons are easily created and detected), to atomic and molecular physics, and to the solid state.
But entanglement eluded high-energy physics (which is quantum matter personified) until 2023 when the ATLAS Collaboration at the LHC (Large Hadron Collider) in Geneva posted a manuscript on arXiv that reported the first observation of entanglement in the decay products of top quarks.
Fig. Thresholds for entanglement detection in decays from top quarks. Image credit.
Quarks interact so strongly (literally, through the strong force) that entangled quarks experience very rapid decoherence, and entanglement effects virtually disappear in their decay products. However, top quarks decay so rapidly that their entanglement properties can be transferred to their decay products, producing measurable effects in the downstream detection. This is what the ATLAS team detected.
While this discovery won’t make quantum computers any better, it does open up a new perspective on high-energy particle interactions, and may even have contributed to the properties of the primordial soup during the Big Bang.
It may be hard to get excited about nothing … unless nothing is the whole ball game.
The only way we can really know what is, is by knowing what isn’t. Nothing is the backdrop against which we measure something. Experimentalists spend almost as much time doing control experiments, where nothing happens (or nothing is supposed to happen) as they spend measuring a phenomenon itself, the something.
Even the universe, full of so much something, came out of nothing during the Big Bang. And today the energy density of nothing, so-called Dark Energy, is blowing our universe apart, propelling it ever faster to a bitter cold end.
So here is a brief history of nothing, tracing how we have understood what it is, where it came from, and where it is today.
With sturdy shoulders, space stands opposing all its weight to nothingness. Where space is, there is being.
Friedrich Nietzsche
40,000 BCE – Cosmic Origins
This is a human history, about how we homo sapiens try to understand the natural world around us, so the first step in a history of nothing is the Big Bang of human consciousness that occurred sometime between 100,000 – 40,000 years ago. Some sort of collective phase transition happened in our thought process when we seem to have become aware of our own existence within the natural world. This time frame coincides with the beginning of representational art and ritual burial. This is also likely the time when human language skills reached their modern form, and when logical arguments–stories–were first told to explain our existence and origins.
Roughly speaking, two origin stories emerged from this time. One assumes that what is has always been, either continuously or cyclically. Buddhism and Hinduism are part of this tradition, as are many of the origin philosophies of Indigenous North Americans. The other assumes that there was a beginning, when everything came out of nothing. Abrahamic faiths (Let there be light!) subscribe to this creatio ex nihilo. What came before creation? Nothing!
500 BCE – Leucippus and Democritus Atomism
The Greek philosopher Leucippus and his student Democritus, living around 500 BCE, were the first to lay out the atomic theory in which the elements of substance were indivisible atoms of matter, and between the atoms of matter was void. The different materials around us were created by the different ways that these atoms collide and cluster together. Plato later took up related ideas, developing his own geometric version of atomism in his Timaeus.
300 BCE – Aristotle Vacuum
Aristotle is famous for arguing, in his Physics Book IV, Section 8, that nature abhors a vacuum (horror vacui) because any void would be immediately filled by the imposing matter surrounding it. He also argued more philosophically that nothing, by definition, cannot exist.
1644 – Rene Descartes Vortex Theory
Fast forward a millennium and a half, and theories of existence were finally achieving a level of sophistication that can be called “scientific”. Rene Descartes followed Aristotle’s views of the vacuum, but he extended them to the vacuum of space, filling it with an incompressible fluid in his Principles of Philosophy (1644). In such a fluid, as in water, relative motion can only occur by shear, leading to vortices. Descartes was a better philosopher than mathematician, so it took Christiaan Huygens to apply mathematics to vortex motion to “explain” the gravitational effects of the solar system.
1654 – Otto von Guericke Vacuum Pump
Otto von Guericke is one of those hidden gems of the history of science, a person who almost no-one remembers today, but who was far in advance of his own day. He was a powerful politician, holding the position of Burgomeister of the city of Magdeburg for more than 30 years, helping to rebuild it after it was sacked during the Thirty Years War. He was also a diplomat, playing a key role in the reorientation of power within the Holy Roman Empire. How he had free time is anyone’s guess, but he used it to pursue scientific interests that spanned from electrostatics to his invention of the vacuum pump.
With a succession of vacuum pumps, each better than the last, von Guericke was like a kid in a toy factory, pumping the air out of anything he could find. In the process, he showed that a vacuum would extinguish a flame and could raise water in a tube.
His most famous demonstration was, of course, the Magdeburg hemispheres. In 1657 he fabricated two 20-inch hemispheres that he attached together with a vacuum seal and used his vacuum pump to evacuate the air from inside. He then attached chains from the hemispheres to a team of eight horses on each side, for a total of 16 horses, who were unable to separate the hemispheres. This dramatically demonstrated that air exerts a force on surfaces, and that Aristotle and Descartes were wrong—nature did allow a vacuum!
1687 – Isaac Newton Action at a Distance
When it came to the vacuum, Newton was agnostic. His universal theory of gravitation posited action at a distance, but the intervening medium played no direct role.
Nothing comes from nothing, Nothing ever could.
Rogers and Hammerstein, The Sound of Music
This might suggest that Newton had nothing to say about the vacuum, but his other major work, his Opticks, established particles as the elements of light rays. Such light particles travelled easily through vacuum, so the particle theory of light came down on the empty side of space.
Statue of Isaac Newton by Sir Eduardo Paolozzi based on a painting by William Blake. Image Credit
1821 – Augustin Fresnel Luminiferous Aether
Today, we tend to think of Thomas Young as the chief proponent of the wave nature of light, going against the towering reputation of his own countryman Newton; his courage and insight are admirable. But it was Augustin Fresnel who put mathematics to the theory. It was also Fresnel, working with his friend Francois Arago, who established that light waves are purely transverse.
For these contributions, Fresnel stands as one of the greatest physicists of the 1800’s. But his transverse light waves gave birth to one of the greatest red herrings of that century—the luminiferous aether. The argument went something like this, “if light is waves, then just as sound is oscillations of air, light must be oscillations of some medium that supports it – the luminiferous aether.” Arago searched for effects of this aether in his astronomical observations, but he didn’t see it, and Fresnel developed a theory of “partial aether drag” to account for Arago’s null measurement. Hippolyte Fizeau later confirmed the Fresnel “drag coefficient” in his famous measurement of the speed of light in moving water. (For the full story of Arago, Fresnel and Fizeau, see Chapter 2 of “Interference”. [1])
But the transverse character of light also required that this unknown medium must have some stiffness to it, like solids that support transverse elastic waves. This launched almost a century of alternative ideas of the aether that drew in such stellar actors as George Green, George Stokes and Augustin Cauchy with theories spanning from complete aether drag to zero aether drag with Fresnel’s partial aether drag somewhere in the middle.
1849 – Michael Faraday Field Theory
Michael Faraday was one of the most intuitive physicists of the 1800’s. He worked by feel and mental images rather than by equations and proofs. He took nothing for granted, able to see what his experiments were telling him instead of looking only for what he expected.
This talent allowed him to see lines of force when he mapped out the magnetic field around a current-carrying wire. Physicists before him, including Ampere who developed a mathematical theory for the magnetic effects of a wire, thought only in terms of Newton’s action at a distance. All forces were central forces that acted in straight lines. Faraday’s experiments told him something different. The magnetic lines of force were circular, not straight. And they filled space. This realization led him to formulate his theory for the magnetic field.
Others at the time rejected this view, until William Thomson (the future Lord Kelvin) wrote a letter to Faraday in 1845 telling him that he had developed a mathematical theory for the field. He suggested that Faraday look for effects of fields on light, which Faraday found just one month later when he observed the rotation of the polarization of light when it propagated in a high-index material subject to a high magnetic field. This effect is now called Faraday Rotation and was one of the first experimental verifications of the direct effects of fields.
Nothing is more real than nothing.
Samuel Beckett
In 1849, Faraday stated his theory of fields in its strongest form, suggesting that fields in empty space were the repository of magnetic phenomena rather than magnets themselves [2]. He also proposed a theory of light in which the electric and magnetic fields induced each other in repeated succession without the need for a luminiferous aether.
1861 – James Clerk Maxwell Equations of Electromagnetism
James Clerk Maxwell pulled the various electric and magnetic phenomena together into a single grand theory, although the four succinct “Maxwell Equations” were condensed by Oliver Heaviside from Maxwell’s original 20 equations (written using Hamilton’s awkward quaternions) down to the 4 vector equations that we know and love today.
One of the most significant and surprising things to come out of Maxwell’s equations was the speed of electromagnetic waves, which matched closely with the known speed of light, providing near-certain proof that light was electromagnetic waves.
However, the propagation of electromagnetic waves in Maxwell’s theory did not rule out the existence of a supporting medium—the luminiferous aether. It was still not clear whether fields could exist in a pure vacuum, or whether they were like the stress fields in solids, requiring a medium.
Late in his life, just before he died, Maxwell pointed out that no measurement of relative speed through the aether performed on the moving Earth could detect effects that were first order in the speed of the Earth; any deviations would be second order. He considered that such second-order effects would be far too small ever to detect, but Albert Michelson had different ideas.
1887 – Albert Michelson Null Experiment
Albert Michelson was convinced of the existence of the luminiferous aether, and he was equally convinced that he could detect it. In 1880, working in the basement of the Potsdam Observatory outside Berlin, he operated his first interferometer in a search for evidence of the motion of the Earth through the aether. He had built the interferometer, what has come to be called a Michelson Interferometer, months earlier in the laboratory of Hermann von Helmholtz in the center of Berlin, but the footfalls of the horse carriages outside the building disturbed the measurements too much—Potsdam was quieter.
But he could find no difference in his interference fringes as he oriented the arms of his interferometer parallel and orthogonal to the Earth’s motion. A simple calculation told him that his interferometer design should have been able to detect it—just barely—so the null experiment was a puzzle.
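That “simple calculation” is worth sketching. Rotating the interferometer by 90 degrees should shift the fringes by roughly ΔN ≈ 2Lv²/(λc²), where L is the arm length, λ the wavelength, and v the Earth’s orbital speed. The numbers below are approximate assumptions (an arm length of about 1.2 m is roughly that of the 1881 instrument):

```python
# Back-of-envelope fringe shift Michelson expected when rotating his
# interferometer by 90 degrees, if the aether were at rest in the Sun's
# frame: Delta N ~ 2 L v^2 / (lambda c^2). Numbers are approximate.
L = 1.2            # arm length in meters (roughly the 1881 instrument)
lam = 500e-9       # wavelength of the light, meters
v = 3.0e4          # Earth's orbital speed, m/s
c = 3.0e8          # speed of light, m/s

delta_N = 2 * L * v**2 / (lam * c**2)
print(f"expected fringe shift ~ {delta_N:.3f} of a fringe")
```

A shift of a few hundredths of a fringe was just at the edge of what the Potsdam instrument could resolve, which is why the ten-fold more sensitive 1887 apparatus made the null result so compelling.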
Seven years later, again in a basement (this time in a student dormitory at Western Reserve College in Cleveland, Ohio), Michelson repeated the experiment with an interferometer that was ten times more sensitive. He did this in collaboration with Edward Morley. But again, the results were null. There was no difference in the interference fringes regardless of which way he oriented his interferometer. Motion through the aether was undetectable.
(Michelson has a fascinating backstory, complete with firestorms (literally) and the Wild West and a moment when he was almost committed to an insane asylum against his will by a vengeful wife. To read all about this, see Chapter 4: After the Gold Rush in my recent book Interference (Oxford, 2023)).
The Michelson-Morley experiment did not create the crisis in physics that it is sometimes credited with. They published their results, and the physics world took it in stride. Voigt and Fitzgerald and Lorentz and Poincaré toyed with various ideas to explain it away, but there had already been so many different models, from complete drag to no drag, that a few more theories just added to the bunch.
But they all had their heads in a haze. It took an unknown patent clerk in Switzerland to blow away the wisps and bring the problem into the crystal clear.
1905 – Albert Einstein Relativity
So much has been written about Albert Einstein’s “miracle year” of 1905 that it has lapsed into a form of physics mythology. Looking back, it seems like his own personal Big Bang, springing forth out of the vacuum. He published 5 papers that year, each one launching a new approach to physics on a bewildering breadth of problems from statistical mechanics to quantum physics, from electromagnetism to light … and of course, Special Relativity [3].
Whereas the others, Voigt and Fitzgerald and Lorentz and Poincaré, were trying to reconcile measurements of the speed of light in relative motion, Einstein just replaced all that musing with a simple postulate, his second postulate of relativity theory:
2. Any ray of light moves in the “stationary” system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body. Hence …
Albert Einstein, Annalen der Physik, 1905
And the rest was just simple algebra—in complete agreement with Michelson’s null experiment, and with Fizeau’s measurement of the so-called Fresnel drag coefficient, while also leading to the famous E = mc2 and beyond.
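The agreement with Fizeau is easy to check numerically: relativistic velocity addition reproduces Fresnel’s “partial drag” to high accuracy. A minimal sketch, with illustrative numbers for water:

```python
# Einstein's velocity-addition formula reproduces the Fresnel "drag
# coefficient" (1 - 1/n^2) that Fizeau measured for light in moving water.
# The water speed here is illustrative.
c = 2.998e8      # speed of light in vacuum, m/s
n = 1.333        # refractive index of water
v = 10.0         # speed of the water, m/s

u_exact = (c / n + v) / (1 + v / (n * c))      # relativistic addition
u_fresnel = c / n + v * (1 - 1 / n**2)         # Fresnel's "partial drag"

print(f"relativistic : {u_exact:.6f} m/s")
print(f"Fresnel drag : {u_fresnel:.6f} m/s")
print(f"difference   : {abs(u_exact - u_fresnel):.2e} m/s")
```

The two expressions differ only at order v²/c², far below what Fizeau could measure, so what looked like evidence for a partially dragged aether was really the first experimental glimpse of relativistic velocity addition.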
There is no aether. Electromagnetic waves are self-supporting in vacuum—changing electric fields induce changing magnetic fields that induce, in turn, changing electric fields—and so it goes.
The vacuum is vacuum—nothing! Except that it isn’t. It is still full of things.
1931 – P. A. M. Dirac Antimatter
The Dirac equation is the famous end-product of P. A. M. Dirac’s search for a relativistic form of the Schrödinger equation. It replaces the asymmetric use in Schrödinger’s form of a second spatial derivative and a first time derivative with Dirac’s form using only first derivatives that are compatible with relativistic transformations [4].
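In covariant notation, the equation Dirac arrived at can be written:

```latex
\left( i\hbar\, \gamma^{\mu} \partial_{\mu} - m c \right) \psi = 0
```

where the four γ matrices satisfy {γ^μ, γ^ν} = 2η^{μν}, which is what makes an equation with only first derivatives compatible with the relativistic energy-momentum relation.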
One of the immediate consequences of this equation is a solution that has negative energy. At first puzzling and hard to interpret [5], Dirac eventually hit on the amazing proposal that these negative energy states correspond to real particles paired with ordinary particles. For instance, the state associated with the electron was an anti-electron, a particle with the same mass as the electron, but with positive charge. Furthermore, when an electron and an anti-electron meet, the pair can annihilate, converting their combined mass energy into the energy of gamma rays. This audacious proposal was confirmed by the American physicist Carl Anderson who discovered the positron in 1932.
The existence of particles and anti-particles, combined with Heisenberg’s uncertainty principle, suggests that vacuum fluctuations can spontaneously produce electron-positron pairs that would then annihilate within a time set by the pair’s mass energy, Δt ≈ ħ/(2mc²).
Although this is an exceedingly short time (about 10⁻²¹ seconds), it means that the vacuum is not empty, but contains a frothing sea of particle-antiparticle pairs popping into and out of existence.
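The timescale follows directly from the energy-time uncertainty relation. A quick sketch with standard physical constants:

```python
# Estimate the lifetime of a virtual electron-positron pair allowed by the
# energy-time uncertainty relation: dt ~ hbar / (2 m_e c^2).
hbar = 1.055e-34        # reduced Planck constant, J*s
m_e = 9.109e-31         # electron mass, kg
c = 2.998e8             # speed of light, m/s

pair_energy = 2 * m_e * c**2            # rest energy "borrowed" for the pair
dt = hbar / pair_energy

print(f"pair rest energy ~ {pair_energy:.3e} J")
print(f"lifetime         ~ {dt:.1e} s")
```

The result is a few times 10⁻²² seconds, consistent with the round figure of about 10⁻²¹ seconds quoted above. Heavier virtual pairs live even more briefly.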
1938 – M. C. Escher Negative Space
Scientists are not the only ones who think about empty space. Artists, too, are deeply committed to a visual understanding of the world around us, and the use of negative space in art dates back virtually to the first cave paintings. However, artists and art historians have only talked explicitly in such terms since the 1930’s and 1940’s [6]. One of the best early examples of the interplay between positive and negative space was a print made by M. C. Escher in 1938 titled “Day and Night”.
1946 – Edward Purcell Modified Spontaneous Emission
In 1916 Einstein laid out the laws of photon emission and absorption using very simple arguments (his modus operandi) based on the principles of detailed balance. He discovered that light can be emitted either spontaneously or through stimulated emission (the basis of the laser) [7]. Once the nature of vacuum fluctuations was realized through the work of Dirac, spontaneous emission was understood more deeply as a form of stimulated emission caused by vacuum fluctuations. In the absence of vacuum fluctuations, spontaneous emission would be inhibited. Conversely, if vacuum fluctuations are enhanced, then spontaneous emission would be enhanced.
This effect was observed by Edward Purcell in 1946 through measurements of the emission times of an atom in an RF cavity [8]. When the atomic transition was resonant with the cavity, spontaneous emission was much faster. The Purcell enhancement factor is
F = 3Qλ³/(4π²V)
where Q is the “Q” (quality factor) of the cavity, λ is the wavelength of the transition in the medium, and V is the cavity volume. The physical basis of this effect is the modification of vacuum fluctuations by the cavity modes through interference effects. When the cavity modes interfere constructively, vacuum fluctuations are larger, and spontaneous emission is stimulated more quickly.
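To make the scaling concrete, here is a small sketch evaluating the textbook Purcell factor F = 3Qλ³/(4π²V) for illustrative cavity parameters (the Q and mode volume below are assumed round numbers, not taken from any particular experiment):

```python
import math

def purcell_factor(q, wavelength, volume):
    """Textbook Purcell enhancement F = 3 * Q * lam^3 / (4 * pi^2 * V),
    with wavelength taken in the medium."""
    return 3.0 * q * wavelength**3 / (4.0 * math.pi**2 * volume)

# Illustrative (assumed) numbers: an optical microcavity with Q = 1e4
# and a mode volume of one cubic wavelength.
lam = 1.55e-6          # transition wavelength, m
V = lam**3             # mode volume, m^3
F = purcell_factor(1e4, lam, V)
print(f"Purcell factor: {F:.0f}")   # prints 760 for these parameters
```

For a wavelength-scale cavity the enhancement is essentially set by Q, which is why high-Q microcavities show such dramatic emission speed-up.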
1948 – Hendrik Casimir Vacuum Force
Interference effects in a cavity affect the total energy of the system by excluding some modes, which become inaccessible to vacuum fluctuations. This lowers the energy inside the cavity relative to free space outside it, resulting in a net “pressure” acting on the cavity. If two parallel plates are placed in close proximity, this causes a force of attraction between them. The effect was predicted in 1948 by Hendrik Casimir [9], but it was not verified experimentally until 1997, by S. Lamoreaux at the University of Washington [10].
Two plates brought very close feel a pressure exerted by the higher vacuum energy density external to the cavity.
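For a sense of scale, the idealized Casimir pressure between perfectly conducting parallel plates, P = π²ħc/(240d⁴), can be evaluated at the separations Lamoreaux probed. This is a rough sketch of the ideal-plate formula only; real experiments must correct for finite conductivity, roughness, and geometry:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates at separation d (m)."""
    return math.pi**2 * hbar * c / (240.0 * d**4)

# Separations spanning the 0.6-6 micron range of the Lamoreaux experiment:
for d in (0.6e-6, 1.0e-6, 6.0e-6):
    print(f"d = {d*1e6:.1f} um -> P = {casimir_pressure(d):.2e} Pa")
```

At one micron the pressure is only about a millipascal, and it falls off as 1/d⁴, which is why the measurement took half a century to achieve.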
1949 – Shinichiro Tomonaga, Richard Feynman and Julian Schwinger QED
The physics of the vacuum in the years up to 1948 had been a hodge-podge of ad hoc theories that captured the qualitative aspects, and even some of the quantitative aspects of vacuum fluctuations, but a consistent theory was lacking until the work of Tomonaga in Japan, Feynman at Cornell and Schwinger at Harvard. Feynman and Schwinger both published their theory of quantum electrodynamics (QED) in 1949. They were actually scooped by Tomonaga, who had developed his theory earlier during WWII, but physics research in Japan had been cut off from the outside world. It was when Oppenheimer received a letter from Tomonaga in 1949 that the West became aware of his work. All three received the Nobel Prize for their work on QED in 1965. Precision tests of QED now make it one of the most accurately confirmed theories in physics.
Richard Feynman’s first “Feynman diagram”.
1964 – Peter Higgs and The Higgs
The Higgs particle, known as “The Higgs”, grew out of work published in 1964 by Peter Higgs, by Francois Englert and Robert Brout, and by Gerald Guralnik, Carl Hagen and Tom Kibble. Higgs’ name became associated with the theory because of a response letter he wrote to an objection made about the theory. The Higgs mechanism is a form of spontaneous symmetry breaking, in which a field in a high-symmetry configuration can lower its energy by distorting, arriving at a new minimum of the potential. This mechanism allows the bosons that carry force to acquire mass (something the earlier Yang-Mills theory could not do).
Spontaneous symmetry breaking is a ubiquitous phenomenon in physics. It occurs in the solid state when crystals lower their total energy by slightly distorting from a high symmetry to a low symmetry. It occurs in superconductors in the formation of Cooper pairs that carry supercurrents. And here it occurs in the Higgs field as the mechanism that imbues particles with mass.
Conceptual graph of a potential surface: the high-symmetry state has higher energy than states in which the field is distorted to lower symmetry. Image Credit
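The idea in the figure can be made quantitative with the standard “Mexican hat” toy potential V(φ) = −μ²φ² + λφ⁴. This is a one-dimensional sketch with assumed parameter values, not the full Higgs sector; it simply shows that the symmetric point φ = 0 sits higher than the broken-symmetry minimum:

```python
import math

MU2, LAM = 1.0, 0.25   # assumed toy parameters, arbitrary units

def V(phi):
    """Toy symmetry-breaking potential V = -mu^2 * phi^2 + lam * phi^4."""
    return -MU2 * phi**2 + LAM * phi**4

# Setting dV/dphi = 0 gives phi^2 = mu^2 / (2*lam) at the broken minimum.
phi_min = math.sqrt(MU2 / (2 * LAM))

print(f"symmetric point: V(0) = {V(0):.2f}")            # prints 0.00
print(f"broken minimum:  V({phi_min:.3f}) = {V(phi_min):.2f}")  # prints -1.00
```

The field “falls” from the symmetric hilltop into the lower ring of minima, and excitations around that new minimum behave like massive particles.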
The theory was mostly ignored for its first decade, but later became the core of theories of electroweak unification. The Large Hadron Collider (LHC) at Geneva was built to detect the boson, whose discovery was announced in 2012. Peter Higgs and Francois Englert were awarded the Nobel Prize in Physics in 2013, just one year after the discovery.
The Higgs field permeates all space, and distortions in this field around idealized massless point particles are observed as mass. In this way empty space becomes anything but.
1981 – Alan Guth Inflationary Big Bang
Problems arose in observational cosmology in the 1970s when it was understood that parts of the observable universe that should have been causally disconnected were in thermal equilibrium. This could only be possible if the universe had been much smaller near the very beginning. In January of 1981, Alan Guth, then at the Stanford Linear Accelerator Center (SLAC), realized that a rapid expansion from an initial quantum fluctuation could be achieved if an initial “false vacuum” existed in a positive energy density state (with negative vacuum pressure). Such a false vacuum could relax to the ordinary vacuum, causing a period of very rapid growth that Guth called “inflation”. Equilibrium would have been achieved prior to inflation, solving the observational problem. The inflationary model therefore posits a multiplicity of different types of “vacuum”, and once again, simple vacuum is not so simple.
Energy density as a function of a scalar variable. Quantum fluctuations create a “false vacuum” that can relax to the “normal vacuum” by expanding rapidly. Image Credit
1998 – Saul Perlmutter Dark Energy
Einstein didn’t make many mistakes, but in the early days of General Relativity he constructed a theoretical model of a “static” universe. A central parameter in Einstein’s model was something called the Cosmological Constant. By tuning it to balance gravitational collapse, he rendered the universe static (though unstable). But when Edwin Hubble showed that the universe was expanding, Einstein was proven incorrect. His Cosmological Constant was set to zero and came to be considered a rare blunder.
Fast forward to the late 1990s, when the Supernova Cosmology Project, directed by Saul Perlmutter, discovered that the expansion of the universe is accelerating. The simplest explanation was that Einstein had been right all along, or at least partially right: there is a non-zero Cosmological Constant. Not only is the universe not static, it is literally blowing up. The physical origin of the Cosmological Constant is believed to be a form of energy density associated with the space of the universe itself. This “extra” energy density, filling empty space, has been called “Dark Energy”.
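Why a vacuum energy density causes acceleration follows from the Friedmann acceleration equation, in which ä/a is proportional to −(ρ + 3p/c²): ordinary matter (p = 0) decelerates the expansion, while dark energy with equation of state p = −ρc² accelerates it. A minimal sketch of the sign of the effect, using assumed round values for today’s density parameters (Ωm ≈ 0.3, ΩΛ ≈ 0.7):

```python
# Sign of the cosmic acceleration from the Friedmann acceleration equation:
#   a_ddot / a  is proportional to  -(rho + 3p/c^2)
# Matter: p = 0.  Dark energy (cosmological constant): p = -rho * c^2.
omega_m, omega_lambda = 0.3, 0.7   # assumed round density parameters (today)

# Work in units where the critical density is 1 and c = 1:
rho_plus_3p = omega_m + omega_lambda + 3 * (-omega_lambda)
accel_sign = -rho_plus_3p   # proportional to a_ddot / a

verdict = "accelerating" if accel_sign > 0 else "decelerating"
print(f"rho + 3p = {rho_plus_3p:.1f} -> expansion is {verdict}")
```

With dark energy dominating, ρ + 3p goes negative and the expansion speeds up, which is exactly what the supernova surveys observed.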
The bottom line is that nothing, i.e., the vacuum, is far from nothing. It is filled with a froth of particles, and energy, and fields, and potentials, and broken symmetries, and negative pressures, and who knows what else. Modern physics has been much ado about this so-called nothing, almost more than about everything else.
[2] L. Peirce Williams in “Faraday, Michael.” Complete Dictionary of Scientific Biography, vol. 4, Charles Scribner’s Sons, 2008, pp. 527-540.
[3] A. Einstein, “On the electrodynamics of moving bodies,” Annalen Der Physik 17, 891-921 (1905).
[4] Dirac, P. A. M. (1928). “The Quantum Theory of the Electron”. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 117 (778): 610–624.
[5] Dirac, P. A. M. (1930). “A Theory of Electrons and Protons”. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 126 (801): 360–365.
[6] Nikolai M Kasak, Physical Art: Action of positive and negative space, (Rome, 1947/48) [2d part rev. in 1955 and 1956].
[7] A. Einstein, “Strahlungs-Emission und -Absorption nach der Quantentheorie,” Verh. Deutsch. Phys. Ges. 18, 318 (1916).
[8] Purcell, E. M. (1946-06-01). “Proceedings of the American Physical Society: Spontaneous Emission Probabilities at Radio Frequencies”. Physical Review. American Physical Society (APS). 69 (11–12): 681.
[9] Casimir, H. B. G. (1948). “On the attraction between two perfectly conducting plates”. Proc. Kon. Ned. Akad. Wet. 51: 793.
[10] Lamoreaux, S. K. (1997). “Demonstration of the Casimir Force in the 0.6 to 6 μm Range”. Physical Review Letters. 78 (1): 5–8.
Read more in Books by David Nolte at Oxford University Press