
A Story of Singlets and Triplets: Relativistic Biology

The tortoise and the hare.  The snail and the falcon.  The sloth and the cheetah.

Comparing the slowest animals to the fastest, the peregrine falcon wins in the air at 200 mph diving on a dove.  The cheetah wins on the ground at 75 mph running down dinner.  That’s fast!

Einstein’s theory of relativity says that fast things behave differently than slow things.  So how fast do things need to move to see these effects?

The measure is the ratio of the speed of the object to the speed of light, which is about 670 million miles per hour.  This ratio is called beta (β).

The cheetah has β = 1×10⁻⁷, and the peregrine falcon has β = 4×10⁻⁷—which are less than one part in a million.

And what about that snail?  At a speed of 0.003 mph it has β = 4×10⁻¹², just a few parts per trillion.

Yet relativistic physics is present in biology despite these slow speeds, and it can even help keep us alive.  How?

The Boon and Bane of Oxygen

All animal life on Earth needs oxygen to survive.  Oxygen is the energy storehouse that fuels mitochondria—the tiny batteries inside our cells that generate the energetic molecules that make us run.

Of all the elements, oxygen is second only to fluorine as the most chemically reactive.  But fluorine is too one-dimensional to help much with life.  It needs only one extra electron to complete its outer-shell octet, leaving nothing in reserve to attach to much else.  Oxygen, on the other hand, needs two electrons to complete its outer shell, making it an easy bridge between two other atoms, usually carbons or hydrogens, and there you have it—organic chemistry is off and running.

Yet the same chemical reactivity that makes oxygen central to life also makes it corrosive to life.  When oxygen is not doing its part keeping things alive, it is tearing things apart.

The problem is reactive oxygen species, or ROS.  These are activated forms of oxygen that damage biological molecules, including DNA, putting wear and tear on (aging) the cellular machinery and introducing mutations into the genetic codes.  And one of the ROS is an active form of simple diatomic oxygen O2 — the very air we breathe.

The Saga of the Singlet and the Triplet

Diatomic oxygen is deceptively simple: two oxygen atoms, each two electrons short of a full shell, coming together to share two of each other’s electrons in a double bond, satisfying the closed-shell octet of valence electrons for each atom.

Fig. 1 The Lewis diagram of the oxygen double bond.  Each oxygen shares two electrons with the other, filling the valence shell.

Bonds are based on energy levels, and the individual energy levels of the two separate oxygen atoms combine and shift as they form molecular orbitals (MOs).  These orbitals are occupied by the 6 + 6 = 12 electrons of the diatomic molecule.  The first 10 electrons fill up the lower MOs, and the last two go into the next-highest level.  Here, for interesting reasons associated with how electrons interact with each other in confined spaces, the last two electrons both have the same orientation of their spins.  The total electron spin of the full ground-state configuration of 12 electrons is S = 1.

The unreactive triplet ground state molecular orbital diagram of diatomic oxygen (dioxygen). The two last electron spins are unpaired.
Fig. 2 Molecular orbital diagram for diatomic oxygen.  The 2s and 2p levels of the separated oxygen hybridize into new orbitals that are filled with 6 + 6 = 12 electrons.  The last two electron states have the same spin with a total spin S = 1.  This is the unreactive triplet ground state.

When a many-electron system has a spin S = 1, quantum mechanics prescribes that it has 2S+1 spin projections, so that the ground state of diatomic oxygen has three possible spin projections—known as a triplet.  But this is just for the lowest energy configuration of O2.

It is also possible to put both electrons into the final levels with opposite spins, giving S = 0.  This is known as the singlet, and there are two different ways these spins can be anti-aligned, creating two excited states.  The first excited state is about 1 eV in energy above the ground state, and the second excited state is about 0.7 eV higher than the first. 

The spin excited states of diatomic oxygen showing the pairing of spins to produce reactive singlet oxygen.
Fig. 3 Ground and excited states of O2.  The two singlet excited states have paired electron spins with a total spin of S = 0.  The ground state has S = 1.  The transitions between ground and excited states are “spin forbidden”.

Now we come to the heart of our story.  It hinges on how reactive the different forms of O2 are, and how easily the singlet and the triplet can transform into each other.

Don’t Mix Your Spins

What happens if you mix hydrogen and oxygen in a 2:1 ratio in a plastic bag?

Nothing!  Unless you touch a match to it, and then it “pops” and creates water H2O.

Despite the thermodynamic instability of the hydrogen–oxygen mixture, it is kinetically stable because it takes an additional input of energy to make the reaction go.  This is because oxygen at room temperature is almost exclusively in its ground state, the triplet state.  Triplet oxygen accepts electrons from other molecules, or from atoms like hydrogen, but the organic molecules in its local environment are almost exclusively in S = 0 states because of their stability.  Accepting two electrons from such molecules would require spins to flip, which costs energy and creates an energy barrier.  This is known as the spin conservation rule of organic chemistry.  Therefore, the reaction with the triplet is unlikely to go unless you give it a hand with extra energy or a catalyst to get over the barrier.

However, this “spin blockade” that makes triplet (S = 1) oxygen unreactive (kinetically stable) is lifted in the case of singlet (S = 0) oxygen.  The singlet still needs to accept two electrons from other molecules, but it can take them up easily from the S = 0 organic molecules around it while conserving spin.  This makes singlet oxygen extremely reactive organically, which is why it is an ROS: singlet oxygen reacts with organic molecules, damaging them.

Despite its deleterious effects, singlet oxygen is produced as a side effect of lipids (fats) immersed in an oxygen environment. In a sense, fat slowly “burns”, creating reactive organic species that react further with lipids (primarily polyunsaturated fatty acids, or PUFAs), generating more ROS and creating a chain reaction. The chain reaction is terminated by the Russell mechanism, which generates singlet oxygen as a byproduct.

Singlet oxygen, even though it is an excited state, cannot convert directly back to the benign triplet ground state, for the same spin-conservation reasons as in organic chemistry. The singlet (S = 0) cannot emit a photon to get to its triplet (S = 1) ground state because photons cannot change the spin of the electrons in a transition. So once singlet oxygen is formed, it can stick around and react chemically, eating up an organic molecule. Therefore the oxygen environment, so necessary for our survival, is also slowly killing us with oxidative stress. In response, mechanisms evolved to de-excite singlet oxygen, using organic molecules like carotenoids that are excellent physical quenchers of the singlet state.

But let’s flip the problem and ask what it takes to selectively harness singlet oxygen production to act as a targeted therapy that kills cancer cells—Enter Einstein’s theory of special relativity.

Relativistic Origins of Spin-Orbit Coupling

When I was a sophomore at Cornell University in the late 1970s, I took the introductory class in electricity and magnetism. The text was an oddball thing from the Berkeley Physics Series authored by the Nobel Laureate Edward Purcell of Harvard.

Cover and front page of Purcell's E&M volume of the Berkeley Series.
Fig. 4 The cover and front page of my 1978 copy of Edward Purcell’s Berkeley Physics volume on Electricity and Magnetism.

(The Berkeley Series was a set of 5 volumes to accompany a 5-semester introduction to physics. It was the brainchild of Charles Kittel from Berkeley and Philip Morrison from Cornell who, in 1961, were responding to the Sputnik crisis and the need to improve the teaching of physics in the West.)

Purcell’s book had the quirky misfortune to use Gaussian units based on centimeters and grams (cgs) instead of meters and kilograms (MKS). Physicists tend to like cgs units, especially in the teaching of electromagnetism, because it places electric fields and magnetic fields on equal footing. Unfortunately, it creates a weird set of units like “statvolts” and “statcoulombs” that are a nightmare to calculate with.

Nonetheless, Purcell’s book was revolutionary as a second-semester intro physics book, in that it used Special Relativity to explain the transformations between electric and magnetic fields. For instance, starting with the static electric field of a stationary parallel-plate capacitor in one frame, Purcell showed how an observer moving relative to the capacitor detects the electric field as expected, but also detects a slight magnetic field. As the speed of the observer increases, the strength of the magnetic field increases, becoming comparable to the electric field in strength as the relative frame speed approaches the speed of light.

In this way, there is one frame in which there is no magnetic field at all, and another frame in which there is a strong magnetic field. The only difference is the point of view—yet the consequences are profound, especially when the quantum spin of an electron is involved.

Conversion of E fields to B fields through relativistically moving frames
Fig. 5 Purcell’s use of a parallel plate capacitor in his demonstration of the origin of magnetic fields from electric fields in relatively moving frames.

The spin of the electron is like a tiny magnet, and if the spin is in an external magnetic field, it feels a torque as well as an interaction energy. The torque makes it precess, and the interaction energy shifts its energy levels depending on the orientation of the spin to the field.

When an electron is in a quantum orbital around a stationary nucleus, attracted by the electric field of the nucleus, it would seem that there is no magnetic field for the spin to interact with, which indeed is true if the electron has no angular momentum. But if the electron orbital does have angular momentum with a non-zero expectation value for its velocity, then this moving electron does experience a magnetic field—the electron is moving relative to the electric field of the nucleus, and special relativity dictates that it experiences a magnetic field. The resulting magnetic field interacts with the magnetic moment of the electron and shifts its energy a tiny amount.

Spin-orbit coupling and spin-forbidden transitions. Spin-orbit mixes singlet and triplet spin states.
Fig. 6 Transitions to the singlet excited state are allowed, but transitions to the triplet state are spin-forbidden. The spin-orbit coupling mixes the spin states, giving a little triplet character to the singlet and vice versa.

This is called the spin-orbit interaction and leads to the fine structure of the electron energy levels in atoms and molecules. More importantly for our story of Singlets and Triplets, the slight shift in energy also mixes the spin states. Quantum-mechanical superposition mixes a little S = 0 character into the triplet excited state and a little S = 1 character into the singlet ground state, so the transition is no longer strictly spin-forbidden. The spin-orbit effect is relatively small in oxygen, contributing little to the quenching of singlet oxygen. But spin-orbit coupling in other molecules, especially optical dyes that absorb light, can be used to generate singlet oxygen as a potent way to kill cancer cells.

Spin-Orbit and Photodynamic Cancer Therapy

The physics of light propagation through living tissue is a fascinating topic. Light is easily transported through tissue by multiple scattering. This is why your whole hand glows red when you cover a flashlight at night. The photons bounce around but are not easily absorbed. This surprising translucence of tissue can be used to help treat cancer.

Photodynamic cancer therapy uses photosensitizer molecules, typically organic dyes, that absorb light, transforming the molecule from its singlet ground state to a singlet excited state (a spin-allowed transition). Although singlet-to-triplet conversion is spin-forbidden, the spin-orbit coupling slightly mixes the spin states, which allows a transformation known as intersystem crossing (ISC). The excited singlet crosses over (usually through a vibrational state associated with thermal energy) to an excited triplet state of the photosensitizer molecule. Triplet oxygen molecules, which are always prevalent in tissue, collide with the triplet photosensitizer—and they swap their energies in a spin-allowed transfer. The photosensitizer triplet returns to its singlet ground state, while the triplet oxygen converts to highly reactive singlet oxygen. The swap doesn’t change total spin, so it is allowed and fast, generating large amounts of reactive singlet oxygen.

Intersystem crossing (ISC) for singlet-to-triplet transition and generation of singlet oxygen
Fig. 7 Intersystem crossing diagram. A photosensitizer molecule in a singlet ground state absorbs a photon that creates a singlet excited state of the molecule. The spin-orbit mixing of spin states allows intersystem crossing (ISC) to generate a triplet excited state of the photosensitizer. This triplet can then exchange its energy with oxygen to create highly-reactive singlet oxygen.
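To make the sequence of steps concrete, here is a minimal rate-equation sketch of the cycle just described: absorption pumps the ground singlet S0 to the excited singlet S1, intersystem crossing feeds the triplet T1, and energy transfer to dissolved triplet oxygen returns the photosensitizer to S0 while producing singlet oxygen. All rate constants below are invented, order-of-magnitude placeholders for illustration only, not values for any particular dye.

```python
# Minimal rate-equation sketch of the photosensitizer cycle described above:
# S0 --absorption--> S1 --ISC--> T1 --energy transfer to triplet O2--> S0 + singlet O2.
# All rate constants are invented placeholders, not values for any real dye.
k_abs = 1.0e5   # absorption pumping under illumination, 1/s
k_f   = 1.0e8   # fluorescence S1 -> S0, 1/s
k_isc = 5.0e7   # intersystem crossing S1 -> T1, 1/s
k_et  = 1.0e6   # energy transfer T1 + 3O2 -> S0 + 1O2, 1/s (O2 concentration folded in)
k_dec = 3.0e5   # decay/quenching of singlet oxygen, 1/s

S0, S1, T1, singO2 = 1.0, 0.0, 0.0, 0.0   # populations in arbitrary units
dt, steps = 1.0e-9, 200_000               # simple forward-Euler integration

for _ in range(steps):
    dS0 = -k_abs * S0 + k_f * S1 + k_et * T1
    dS1 =  k_abs * S0 - (k_f + k_isc) * S1
    dT1 =  k_isc * S1 - k_et * T1
    dO2 =  k_et * T1 - k_dec * singO2
    S0, S1, T1, singO2 = (S0 + dS0 * dt, S1 + dS1 * dt,
                          T1 + dT1 * dt, singO2 + dO2 * dt)

print(f"quasi-steady singlet-oxygen generation rate ~ {k_et * T1:.2e} per second (a.u.)")
```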

In photodynamic therapy, the photosensitizer is taken up by the patient’s body, but the doctor only shines light around the tumor area, letting the light multiply-scatter through the tissue, exciting the photosensitizer and locally producing singlet oxygen that kills the local cancer cells. Other parts of the body remain in the dark and have none of the ill effects that are so common for conventional systemic cytotoxic anti-cancer drugs. This local treatment of the tumor using localized light can be much more benign to overall patient health while still delivering effective treatment to the tumor.

Photodynamic therapy has been approved by the FDA for some early stage cancers, and continuing research is expanding its applications. Because light transport through tissue is limited to about a centimeter, most of these applications are for “shallow” tumors that can be accessed by light through the skin or through internal surfaces (like the esophagus or colon).

Postscript: Relativistic Spin

I can’t leave this story of relativistic biology without going into a final historical detail from the early days of relativistic quantum theory. When Uhlenbeck and Goudsmit first proposed in 1925 that the electron has spin, there were two immediate objections. First, Lorentz showed that in a semiclassical model of the spinning electron, the surface of the electron would be moving at 300 times the speed of light. The solution to this problem came three years later in 1928 with the relativistic quantum theory of Paul Dirac, which took an entirely different view of quantum versus classical physics. In quantum theory, there is no “radius of the electron” as there was in semiclassical theory.

The second objection was more serious. The electron spin and the spin-orbit interaction in Bohr’s theory of the atom predicted a fine-structure splitting that was a factor of 2 larger than observed experimentally. In 1926 Llewellyn Thomas showed that a relativistically precessing spin is in a non-inertial frame, requiring a continuous set of transformations from one instantaneously inertial frame to another. These continuously shifting transformations introduced a factor of 1/2 into the spin precession, exactly matching the experimental values. This effect is known as Thomas precession. Interestingly, the fully relativistic theory of Dirac in 1928 automatically incorporated this precession within the theory, so once again Dirac’s equation superseded the old quantum theory.

The Light in Einstein’s Elevator

Gravity bends light!

Of all the audacious proposals made by Einstein, and there were many, this one takes the cake because it should be impossible.

There can be no force of gravity on light because light has no mass.  Without mass, there is no gravitational “interaction”.  We all know Newton’s Law of gravity … it was one of the first equations of physics we ever learned

F = GMm/r²

which shows the interaction between the masses M and m through their product.  For light, this is strictly zero. 

How, then, did Einstein conclude, in 1907, only two years after he proposed his theory of special relativity, that gravity bends light? If it were us, we might take Newton’s other famous equation and equate the two

F = ma

and guess that somehow the little mass m (though it equals zero) cancels out to give

a = GM/r²

so that light would fall in gravity with the same acceleration as anything else, massive or not. 

But this is not how Einstein arrived at his proposal, because this derivation is wrong!  To do it right, you have to think like an Einstein.

“My Happiest Thought”

Towards the end of 1907, Einstein was asked by Johannes Stark to contribute a review article on the state of the relativity theory to the Jahrbuch of Radioactivity and Electronics. There had been a flurry of activity in the field in the two years since Einstein had published his groundbreaking paper in Annalen der Physik in September of 1905 [1]. Einstein himself had written several additional papers on the topic, along with others, so Stark felt it was time to put things into perspective.

Photo of Einstein around 1905 during his Annus Mirabilis.
Fig. 1 Einstein around 1905.

Einstein was still working at the Patent Office in Bern, Switzerland, which must not have been too taxing, because it gave him plenty of time to think. It was while he was sitting in his armchair in his office in 1907 that he had what he later described as the happiest thought of his life. He had been struggling with the details of how to apply relativity theory to accelerating reference frames, a topic that is fraught with conceptual traps, when he had a flash of a simplifying idea:

“Then there occurred to me the ‘glücklichste Gedanke meines Lebens,’ the happiest thought of my life, in the following form. The gravitational field has only a relative existence in a way similar to the electric field generated by magnetoelectric induction. Because for an observer falling freely from the roof of a house there exists —at least in his immediate surroundings— no gravitational field. Indeed, if the observer drops some bodies then these remain relative to him in a state of rest or of uniform motion… The observer therefore has the right to interpret his state as ‘at rest.'”[2]

In other words, the freely falling observer believes he is in an inertial frame rather than an accelerating one, and by the principle of relativity, this means that all the laws of physics in the accelerating frame must be the same as for an inertial frame. Hence, his great insight was that there must be complete equivalence between a mechanically accelerating frame and a gravitational field. This is the very first conception of his Equivalence Principle.

Cover of the Jahrbuch for Radioactivity and Electronics from 1907.
Fig. 2 Front page of the 1907 volume of the Jahrbuch. The editor list reads like a “Who’s Who” of early modern physics.

Title page to Einstein's 1907 Jahrbuch review article
Fig. 3 Title page to Einstein’s 1907 Jahrbuch review article “On the Relativity Principle and its Consequences” [3]

After completing his review of the consequences of special relativity in his Jahrbuch article, Einstein took the opportunity to launch into his speculations on the role of the relativity principle in gravitation. He is almost apologetic at the start, saying that:

“This is not the place for a detailed discussion of this question.  But as it will occur to anybody who has been following the applications of the principle of relativity, I will not refrain from taking a stand on this question here.”

But he then launches into his first foray into general relativity with keen insights.

The beginning of the section where Einstein first discusses the effects of accelerating frames and effects of gravity
Fig. 4 The beginning of the section where Einstein first discusses the effects of accelerating frames and effects of gravity.

He states early in his exposition:

“… in the discussion that follows, we shall therefore assume the complete physical equivalence of a gravitational field and a corresponding accelerated reference system.”

Here is his equivalence principle. And using it, in 1907, he derives the effect of acceleration (and gravity) on ticking clocks, on the energy density of electromagnetic radiation (photons) in a gravitational potential, and on the deflection of light by gravity.

Over the next several years, Einstein was distracted by other things, such as obtaining his first university position, and his continuing work on the early quantum theory. But by 1910 he was ready to tackle the general theory of relativity once again, when he discovered that his equivalence principle was missing a key element: the effects of spatial curvature. This realization launched him on a 5-year program into the world of tensors and metric spaces that culminated in his completed general theory of relativity, published in November of 1915 [4].

The Observer in the Chest: There is no Elevator

Einstein was never a stone to gather moss. Shortly after delivering his triumphal exposition on the General Theory of Relativity, he wrote up a popular account of his Special and now General Theories to be published as a book in 1916, first in German [5] and then in English [6]. What passed for a “popular exposition” in 1916 is far from what is considered popular today. Einstein’s little book is full of equations that would be somewhat challenging even for specialists. But the book also showcases Einstein’s special talent to create simple analogies, like the falling observer, that can make difficult concepts of physics appear crystal clear.

In 1916, Einstein was not yet thinking in terms of an elevator. His mental image at this time, for a sequestered observer, was someone inside a spacious chest filled with measurement apparatus that the observer could use at will. This observer in his chest was either floating off in space far from any gravitating bodies, or the chest was being pulled by a rope hooked to the ceiling such that the chest accelerates constantly. Based on the measurements he makes, the observer cannot distinguish between a gravitational field and acceleration, and hence they are equivalent. A bit later in the book, Einstein describes what a ray of light would do in an accelerating frame, but he does not have his observer attempt any such measurement, even in principle, because the deflection of the ray of light from a linear path would be far too small to measure.

But Einstein does go on to say that any curvature of the path of the light ray requires that the speed of light changes with position. This is a shocking admission, because his fundamental postulate of relativity from 1905 was the invariance of the speed of light in all inertial frames. It was from this simple assertion that he was eventually able to derive E = mc². On the one hand he had been ready to posit the invariance of the speed of light; on the other, as soon as he understood the effects of gravity on light, Einstein did not hesitate to cast this postulate adrift.

Position-dependent speed of light in relativity.

Fig. 5 Einstein’s argument for the speed of light depending on position in a gravitational field.

(Einstein can be forgiven for taking so long to speak in terms of an elevator that could accelerate at a rate of one g, because it was not until 1946 that the rocket plane Bell X-1 achieved linear acceleration exceeding 1 g, and jet planes did not achieve 1 g linear acceleration until the F-15 Eagle in 1972.)

Aircraft with greater than 1:1 thrust to weight ratios
Fig. 6 Aircraft with greater than 1:1 thrust to weight ratios.

The Evolution of Physics: Enter Einstein’s Elevator

Years passed, and Einstein fled an increasingly autocratic and belligerent Germany for a position at Princeton’s Institute for Advanced Study. In 1938, at the instigation of his friend Leopold Infeld, the two decided to write a general-interest book on the new physics of relativity and quanta that had evolved so rapidly over the past 30 years.

Title page of "Evolution of Physics" 1938 written with his friend Leopold Infeld at Princeton's Institute for Advanced Study.
Fig. 7 Title page of “Evolution of Physics” 1938 written with his friend Leopold Infeld at Princeton’s Institute for Advanced Study.

Here, in this obscure book that no one remembers today, we find Einstein’s elevator for the first time, and the exposition talks very explicitly about a small window that lets in a light ray, and what the observer sees (in principle) for the path of the ray.

One of the only figures in the Einstein and Infeld book: The origin of "Einstein's Elevator"!
Fig. 8 One of the only figures in the Einstein and Infeld book: The origin of “Einstein’s Elevator”!

By the equivalence principle, the observer cannot tell whether they are far out in space, being accelerated at the rate g, or whether they are stationary on the surface of the Earth subject to a gravitational field. In the first instance of the accelerating elevator, a photon moving in a straight line through space would appear to deflect downward in the elevator, as shown in Fig. 9, because the elevator is accelerating upwards as the photon transits the elevator. However, by the equivalence principle, the same physics should occur in the gravitational field. Hence, gravity must bend light. Furthermore, light falls inside the elevator with an acceleration g, just as any other object would.

The accelerating elevator and what an observer inside sees (From "Galileo Unbound" (Oxford, 2018).
Fig. 9 The accelerating elevator and what an observer inside sees (From “Galileo Unbound” (Oxford, 2018). [7])

Light Deflection in the Equivalence Principle

A photon enters an elevator at right angles to its acceleration vector g.  Use the geodesic equation and the elevator (Equivalence Principle) metric [8]

to show that the trajectory is parabolic. (This is a classic HW problem from Introduction to Modern Dynamics.)

The geodesic equation with time as the dependent variable

This gives two coordinate equations

Note that x⁰ = ct and x¹ = ct are both large relative to the y-motion of the photon.  The metric component that is relevant here is

and the others are unity.  The geodesic becomes (assuming dy/dt = 0)

The Christoffel symbols are

which give

Therefore

or

where the photon falls with acceleration g, as anticipated.
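As a numerical check on this result, the sketch below integrates the null geodesic in a weak-field form of the Equivalence-Principle metric, ds² = −(1 + 2gy/c²)c²dt² + dx² + dy², keeping only the leading-order Christoffel symbols. The specific metric form and step sizes are my own assumptions for illustration (not the notation of the textbook cited above); the integration simply verifies that a horizontally launched photon drops by (1/2)gt² while crossing the elevator.

```python
import numpy as np

# Sketch: null geodesic in the weak-field Equivalence-Principle metric
# ds^2 = -(1 + 2 g y / c^2) c^2 dt^2 + dx^2 + dy^2 (assumed form),
# keeping only the leading-order Christoffel symbols Gamma^y_00 = Gamma^0_0y = g/c^2.
c = 3.0e8          # speed of light, m/s
g = 9.8            # elevator acceleration, m/s^2
L = 10.0           # horizontal width of the "elevator", m

# state = [x0, x, y, dx0/dlam, dx/dlam, dy/dlam]; the photon enters horizontally
state = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 0.0])

def deriv(s):
    x0, x, y, u0, ux, uy = s
    G = g / c**2                           # leading-order Christoffel symbol
    return np.array([u0, ux, uy,
                     -2.0 * G * u0 * uy,   # d^2 x0 / dlam^2
                     0.0,                  # d^2 x  / dlam^2  (no deflection in x)
                     -G * u0 * u0])        # d^2 y  / dlam^2  (free fall of light)

dlam = L / 10000.0
while state[1] < L:                        # march until the photon crosses the width L
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dlam * k1)
    state = state + dlam * k2              # midpoint (RK2) step

t = state[0] / c                           # elapsed coordinate time
print(f"numerical drop : {-state[2]:.3e} m")
print(f"(1/2) g t^2    : {0.5 * g * t**2:.3e} m")   # Newtonian free fall for comparison
```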

Light Deflection in the Schwarzschild Metric

Do the same problem of the light ray in Einstein’s Elevator, but now using the full Schwarzschild solution to the Einstein Field equations.

ds² = −(1 − 2GM/rc²) c²dt² + dr²/(1 − 2GM/rc²) + r²(dθ² + sin²θ dφ²)

Einstein’s elevator is the classic test of virtually all heuristic questions related to the deflection of light by gravity.  In the previous Example, the deflection was attributed to the Equivalence Principle, in which the observer in the elevator cannot discern whether they are in an accelerating rocket ship or standing stationary on Earth.  In that case, the time-like metric component is the sole cause of the free fall of light in gravity.  In the Schwarzschild metric, on the other hand, the curvature of the field near a spherical gravitating body also contributes.  In this case, the geodesic equation, assuming that dr/dt = 0 for the incoming photon, is

where, as before, the Christoffel symbol for the radial displacements are

Evaluating one of these

The other Christoffel symbol that contributes to the radial motion is

and the geodesic equation becomes

with

The radial acceleration of the light ray in the elevator is thus

The first term on the right is free fall in gravity, just as was obtained from the Equivalence Principle.  The second term is a higher-order correction caused by the curvature of spacetime.  The third term is the motion of the light ray relative to the curved ceiling of the elevator in this spherical geometry and hence is a kinematic (or geometric) artefact.  (It is interesting that the GR correction on the curved-ceiling term is of the same order as the free-fall term, so one would need to be very careful doing such an experiment … if it were at all measurable.)  Therefore, the second and third terms are curved-geometry effects, while the first term is the free fall of the light ray.


  

Post-Script: The Importance of Library Collections

I was amused to see the library card of the scanned Internet Archive version of Einstein’s Jahrbuch article, shown below. The volume was checked out in August of 1981 from the UC Berkeley Physics Library. It was checked out again 7 years later in September of 1988. These dates coincide with when I arrived at Berkeley to start grad school in physics, and when I graduated from Berkeley to start my post-doc position at Bell Labs. Hence this library card serves as the bookends to my time in Berkeley, a truly exhilarating place that was the top-ranked physics department at that time, with 7 active Nobel Prize winners on its faculty.

During my years at Berkeley, I scoured the stacks of the Physics Library looking for books and journals of historical importance, and was amazed to find the original volumes of Annalen der Physik from 1905 where Einstein published his famous works. This was the same library where, ten years before me, John Clauser was browsing the stacks and found the obscure paper by John Bell on his inequalities that led to Clauser’s experiment on entanglement that won him the Nobel Prize of 2022.

That library at UC Berkeley was recently closed, as was the Physics Library in my department at Purdue University (see my recent Blog), where I also scoured the stacks for rare gems. Some ancient books that I used to be able to check out on a whim, just to soak up their vintage ambience and to get a tactile feel for the real thing held in my hands, are now not even available through Interlibrary Loan. I may be able to get scans from Internet Archive online, but the palpable magic of the moment of discovery is lost.

References:

[1] Einstein, A. (1905). Zur Elektrodynamik bewegter Körper. Annalen der Physik, 17(10), 891–921.

[2] Pais, A (2005). Subtle is the Lord: The Science and Life of Albert Einstein (Oxford University Press). pg. 178

[3] Einstein, A. (1907). Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen. Jahrbuch der Radioaktivität und Elektronik, 4, 411–462.

[4] A. Einstein (1915), “On the general theory of relativity,” Sitzungsberichte Der Koniglich Preussischen Akademie Der Wissenschaften, pp. 778-786, Nov.

[5] Einstein, A. (1916). Über die spezielle und die allgemeine Relativitätstheorie (Gemeinverständlich). Braunschweig: Friedr. Vieweg & Sohn.

[6] Einstein, A. (1920). Relativity: The Special and the General Theory (A Popular Exposition) (R. W. Lawson, Trans.). London: Methuen & Co. Ltd.

[7] Nolte, D. D. (2018). Galileo Unbound. A Path Across Life, the Universe and Everything. (Oxford University Press)

[8] Nolte, D. D. (2019). Introduction to Modern Dynamics: Chaos, Networks, Space and Time (Oxford University Press).

Read more in Books by David Nolte at Oxford University Press.

Purge and Precipice: The Fall of American Science?

Let’s ask a really crazy question. As a pure intellectual exercise—not that it would ever happen—but just asking: What would it take to destroy American science? I know this is a silly question. After all, no one in their right mind would want to take down American science. It has been the guiding light in the world for the last 100 years, ushering in such technological marvels of modern life like transistors and the computer and lasers and solar panels and vaccines and immunotherapy and disease-resistant crops and such. So of course, American science is a National Treasure, more valuable than all the National Treasures in Washington, and no one would ever dream of attacking those.

But for the sake of argument, just to play Devil’s Advocate, what if someone with some power, someone who could make otherwise sensible people do his will, wanted to wash away the last 100 years of American leadership in Science? How would he do it?

The answer is obvious: Use science … and maybe even physics.

The laws of physics are really pretty simple: Cause and effect, action and reaction … those kinds of things. And modern physics is no longer about rocks thrown from cliffs, but is about the laws governing complex systems, like networks of people.

Can we really put equations to people? This was the grand vision of Isaac Asimov in his Foundation Trilogy. In that story, the number of people in a galaxy became so large that the behavior of the population as a whole could be explained by a physicist, Hari Seldon, using the laws of statistical mechanics. Asimov called it psychohistory.

It turns out we are not that far off today, and we don’t need a galaxy full of people to make it valid. But the name of the theory turns out to be a bit more prosaic than psychohistory: it’s called Network theory.

Network Theory

Network theory, at its core, is simply about nodes and links. It asks simple questions, like: What defines a community? What kind of synergy makes communities work? And when do things fall apart?

Science is a community.

In the United States, there are approximately a million scientists, 70% of whom work in industry, with 20% in academia and 10% in government (at least, prior to 2025). Despite the low fraction employed in academia, all scientists and engineers received their degrees from universities and colleges, and many received post-graduate training at those universities and at national labs like Los Alamos and the NIH labs in Washington. These are the backbone of the American scientific community; these are the hubs from which the vast network of scientists connects out across the full range of industrial and manufacturing activities that drive 70% of the GDP of the United States. The universities and colleges are also reservoirs of long-term science knowledge that can be tapped at a moment’s notice by industry when it pivots to new materials or new business models.

In network theory, hubs hold the key to the performance of the network. In technical terms, hubs have high degree, which means that a hub connects to a large fraction of the total network. This is why hubs are central to network health and efficiency. Hubs are also the main cause of the “Small World Effect”, which states that everyone on a network is only a few links away from anyone else. This is also known as “Six Degrees of Separation”, because even in vast networks that span the country, it only takes about 6 friends of friends of friends of friends of friends of friends before you connect to any given person. The world is small because you know someone who is a hub, and they know everyone else. This is a fundamental result of network theory, whether the network is of people, or servers, or computer chips.
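As a toy illustration of this small-world behavior (an assumption on my part, not the author’s simulation), the snippet below builds scale-free Barabási–Albert graphs of increasing size with the networkx library and prints the average separation between nodes, which stays at only a few links even as the network grows.

```python
import networkx as nx

# Toy illustration of the small-world effect on hub-dominated (scale-free) graphs.
# The Barabasi-Albert model is an assumed stand-in for a real social network.
for n in (1000, 4000):
    G = nx.barabasi_albert_graph(n, m=3, seed=7)   # each new node links to 3 existing nodes
    sep = nx.average_shortest_path_length(G)       # mean "degrees of separation"
    print(f"{n:5d} nodes -> average separation = {sep:.2f} links")
```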

Having established how important hubs are to network connectivity, it is clear that the disproportionate importance of hubs makes them a disproportionate target for network disruption. For instance, in the power grid, take down a large central switching station and you can take down the grid over vast portions of the country. The same is true for science and the science community. Take down a few of the key pins, and the whole network can collapse—a topic of percolation theory.

Percolation and Collapse

Percolation theory does what its name says—it tells when a path on a network is likely to “percolate” across it, like water percolating through coffee grounds. For a given number of nodes N, there need to be enough links so that most of the nodes belong to the largest connected cluster. Then most starting paths can percolate across the whole network. On the other hand, if enough links are broken, then the network breaks apart into a lot of disconnected clusters, and you cannot get from one to the others.

Percolation theory says a lot about the percolation transition that occurs at the percolation threshold—which describes how the likelihood of having a percolating path across a network rises and falls as the number of links in the network increases or decreases. It turns out that for large networks, this transition from percolating to non-percolating is abrupt. When there are just barely enough links to keep the network connected, then removing just one link can separate it into disconnected clusters. In other words, the network collapses.

Therefore, network collapse can be sudden and severe. It is even possible to be near the critical percolation condition and not know it. All can seem fine, with plenty of paths to choose from to get across the network—then lose just a few links—and suddenly the network collapses into a bunch of islands. This is sometimes known as a tipping point—also as a bifurcation or as a catastrophe. Tipping points, bifurcations and percolation transitions get a lot of attention in network theory, because they are sudden and large events that can occur with little forewarning.

So the big question for this blog is: What would it take to have the scientific network of the United States collapse?

Department of Governmental Exterminations (DOGE)

The head of DOGE is a charismatic fellow, and like the villain of Jane Austen’s Pride and Prejudice, he was initially a likable character. But he turned out to be an arbiter of chaos and a cad. No one would want to be him in the end. The same is true in our own Austenesque story of Purge and Precipice: As DOGE purges, we approach the precipice.

Falling off a cliff is easy, because if a network has hubs, and those hubs have a disproportionate importance to keeping the network together, then an excellent strategy to destroy the network would be to take out the most important hubs.

The hubs of the scientific network across the US are the universities and colleges and government labs. Attack those, even though they only hold 20% to 30% of the scientists in the country, and you can bring science in the US to a standstill by breaking the network apart into isolated islands. Alternatively, when talking about individuals in a network, the most important hubs are the scientists who are the repositories of the most knowledge—the elder statesmen of their fields—the ones you can get to take a buyout and retire.

Networks with strongly connected hubs are the most vulnerable to percolation collapse when the hubs are attacked specifically.

Science Network Evolving under Reduction in Force through Natural Attrition

Fig. 1 Healthy network evolving under a 15% reduction in force (RIF) through natural retirement and attrition.

This simulation looks at a reduction in force (RIF) of 15% and its effect on a healthy interaction network. It uses a scale-free network that evolves in time as individuals retire naturally or move to new jobs. When a node is removed from the net, it becomes a disconnected dot in the video. Other nodes that were “orphaned” by the retirement are reassigned to existing nodes. Links represent scientific interactions or lines of command. A few links randomly shift as interests change. Random retirements might hit a high-degree node (a hub), but the event is rare enough that the natural rearrangements of the links continue to keep the network connected and healthy as it adapts to the loss of key opinion leaders.
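The simulations shown in these figures are the author’s; as a rough, self-contained stand-in, the sketch below uses a Barabási–Albert scale-free graph from networkx and compares a 15% reduction in force applied by random attrition versus by targeted removal of the highest-degree hubs, using the fraction of nodes left in the largest connected cluster as a crude proxy for percolation probability. The network model, its size, and the absence of the link-rewiring step are all simplifying assumptions.

```python
import random
import networkx as nx

# Crude stand-in for the RIF simulations: compare random attrition with a
# targeted attack on hubs.  The Barabasi-Albert graph, its size, and the lack
# of link rewiring are simplifying assumptions, not the author's model.
N, RIF = 1000, 0.15
removals = int(N * RIF)

def largest_cluster_fraction(G):
    # fraction of the ORIGINAL N nodes still in the largest connected cluster
    return max(len(c) for c in nx.connected_components(G)) / N

def apply_rif(targeted):
    G = nx.barabasi_albert_graph(N, m=2, seed=1)
    for _ in range(removals):
        if targeted:   # attack: always remove the current highest-degree node
            node = max(G.nodes, key=G.degree)
        else:          # natural attrition: remove a node at random
            node = random.choice(list(G.nodes))
        G.remove_node(node)
    return largest_cluster_fraction(G)

random.seed(1)
print("random attrition ->", round(apply_rif(targeted=False), 3))
print("targeted on hubs ->", round(apply_rif(targeted=True), 3))
```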

Science Network under DOGE Attack

Fig. 2 An attack on the high-degree nodes (the hubs) of the network, leading to the same 15% RIF as Fig. 1. The network becomes fragmented and dysfunctional.

Universities and government laboratories are high-degree nodes that have a disproportionate importance to the Science Network. By targeting these nodes, the network rapidly disintegrates. The effect is too drastic for the rearrangement of some links to fix it.

The percolation probability of an interaction network, like the Science Network, is a fair measure of scientific productivity. The more a network is interconnected, the more ideas flow across the web, eliciting new ideas and discoveries that often lead to new products and growth in the national GDP. But a disrupted network has low productivity. The scientific productivity is plotted in Fig. 3 as a function of the reduction in force up to 15%. Natural attrition can attain this RIF with minimal impact on the productivity of the network measured through its percolation probability. However, targeted attacks on the most influential scientific hubs rapidly degrade the network, breaking it apart into lots of disconnected clusters. The free flow of ideas stops, leading to lost opportunities for new products and eventual erosion of the national GDP.

Fig. 3 Scientific productivity, measured by the percolation probability across the network, as a function of the reduction in force up to 15%. Natural attrition keeps most of the productivity high. Targeted attacks on the most influential science institutions decimate the Science Network.

It takes about 15 years for scientific discoveries to establish new products in the marketplace. Therefore, a collapse of American science over the next few years won’t be fully felt until around the year 2040. All the politicians in office today will be long gone by then (let’s hope!), so they will never get the blame. But our country will be poorer and weaker, and our lives will be poorer and sicker—the victims of posturing and grandstanding for no real benefit other than the fleeting joy of wrecking what was built over the past century. When I watch the glee of the Perp in Chief and his henchmen as they wreak their havoc, I am reminded of “griefers” in Minecraft.

The Upshot

One of the problems with being a physicist is that sometimes you see the train wreck coming.

I see a train wreck coming.

PostScript

It is important not to take these simulations too literally as if they were an accurate numerical model of the Science Network in the US. The point of doing physics is not to fit all the parameters—that’s for the engineers. The point of doing physics is to recognize the possibilities and to see the phenomena—as well as the dangers.

Take heed of the precipice. It is real. Are we about to go over it? It’s hard to tell. But should we even take the chance?

100 Years of Quantum Physics:  Pauli’s Exclusion Principle (1924)

One hundred years ago this month, in December 1924, Wolfgang Pauli submitted a paper to Zeitschrift für Physik that provided the final piece of the puzzle that connected Bohr’s model of the atom to the structure of the periodic table.  In the process, he introduced a new quantum number into physics that governs how matter as extreme as neutron stars, or as perfect as superfluid helium, organizes itself.

He was led to this crucial insight, not by his superior understanding of quantum physics, which he was grappling with as much as Bohr and Born and Sommerfeld were at that time, but through his superior understanding of relativistic physics that convinced him that the magnetism of atoms in magnetic fields could not be explained through the orbital motion of electrons alone.

Encyclopedia Article on Relativity

Bored with the topics he was being taught in high school in Vienna, Pauli was already reading Einstein on relativity and Emil Jordan on functional analysis before he arrived at the university in Munich to begin studying with Arnold Sommerfeld.  Pauli was still merely a student when Felix Klein approached Sommerfeld to write an article on relativity theory for his Encyclopedia of Mathematical Sciences.  Sommerfeld by that time was thoroughly impressed with Pauli’s command of the subject and suggested that he write the article.


Pauli’s encyclopedia article on relativity expanded to 250 pages and was published in Klein’s fifth volume in 1921 when Pauli was only 21 years old—just 5 years after Einstein had published his definitive work himself!  Pauli’s article is still considered today one of the clearest explanations of both special and general relativity.

Pauli’s approach established the methodical use of metric space concepts that is still used today when teaching introductory courses on the topic.  This contrasts with articles written only a few years earlier that seem archaic by comparison—even Einstein’s paper itself.  As I recently read through his article, I was struck by how similar it is to what I teach from my textbook on modern dynamics to my class at Purdue University for junior physics majors.

Fig. 1 Wolfgang Pauli [Image]

Anomalous Zeeman Effect

In 1922, Pauli completed his thesis on the properties of water molecules and began studying a phenomenon known as the anomalous Zeeman effect.  The Zeeman effect is the splitting of optical transitions in atoms under magnetic fields.  The electron orbital motion couples with the magnetic field through a semi-classical interaction between the magnetic moment of the orbital and the applied magnetic field, producing a contribution to the energy of the electron that is observed when it absorbs or emits light. 

The Bohr model of the atom had already concluded that the angular momentum of electron orbitals was quantized into integer units.  Furthermore, the Stern-Gerlach experiment of 1922 had shown that the projection of these angular momentum states onto the direction of the magnetic field was also quantized.  This was known at the time as “space quantization”.  Therefore, in the Zeeman effect, the quantized angular momentum created quantized energy interactions with the magnetic field, producing the splittings in the optical transitions.

Fig. 2 The magnetic Zeeman splitting of Rb-87 from the weak-field to the strong-field (Paschen-Back) effect

So far so good.  But then comes the problem with the anomalous Zeeman effect.

In the Bohr model, all angular momenta have integer values.  But in the anomalous Zeeman effect, the splittings could only be explained with half integers.  For instance, if total angular momentum were equal to one-half, then in a magnetic field it would produce a “doublet” with +1/2 and -1/2 space quantization.  An integer like L = 1 would produce a triplet with +1, 0, and -1 space quantization.  Although doublets of the anomalous Zeeman effect were often observed, half-integers were unheard of (so far) in the quantum numbers of early quantum physics.

But half integers were not the only problem with “2”s in the atoms and elements.  There was also the problem of the periodic table. It, too, seemed to be constructed out of “2”s, multiplying a sequence of the difference of squares.

The Difference of Squares

The difference of squares has a long history in physics stretching all the way back to Galileo Galilei, who performed experiments around 1605 on the physics of falling bodies.  He noted that the distance traveled in successive time intervals varied as the differences 1² − 0² = 1, then 2² − 1² = 3, then 3² − 2² = 5, then 4² − 3² = 7, and so on.  In other words, the distances traveled in each successive time interval varied as the odd integers.  Galileo, ever the astute student of physics, recognized that the distance traveled by an accelerating body in a time t varied as the square of time, t².  Today, after Newton, we know that this is simply the dependence of distance on the square of time for an accelerating body, s = (1/2)gt².
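As a quick check of Galileo’s pattern, the snippet below (a minimal sketch, using g = 9.8 m/s² just for concreteness) tabulates the distance covered in each successive one-second interval of free fall and shows that the intervals grow as the odd integers 1, 3, 5, 7.

```python
# Galileo's difference of squares: with s(t) = (1/2) g t^2, the distance covered
# in each successive unit time interval grows as the odd integers 1, 3, 5, 7, ...
g = 9.8                                   # m/s^2, just for concreteness
s = lambda t: 0.5 * g * t**2              # distance fallen after time t
for n in range(1, 5):
    interval = s(n) - s(n - 1)            # distance covered in the n-th second
    print(f"second {n}: {interval:5.1f} m  = {n*n - (n-1)*(n-1)} x {0.5*g:.1f} m")
```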

By early 1924 there was another law of the difference of squares.  But this time the physics was buried deep inside the new science of the elements, put on graphic display through the periodic table. 

The periodic table is constructed on the difference of squares.  First there is 2 for hydrogen and helium.  Then another 2 for lithium and beryllium, followed by 6 for B, C, N, O, F and Ne to make a total of 8.  After that there is another 8, plus 10 for the sequence of Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu and Zn, to make a total of 18.  The sequence 2-8-18 is 2·1² = 2, 2·2² = 8, 2·3² = 18, following the rule 2n².

Why the periodic table should be constructed out of the number 2 times the square of the principal quantum number n was a complete mystery.  Sommerfeld went so far as to call the number sequence of the periodic table a “cabalistic” rule. 

The Bohr Model for Many Electrons

It is easy to picture how confusing this all was to Bohr and Born and others at the time.  From Bohr’s theory of the hydrogen atom, it was clear that there were different energy levels associated with the principal quantum number n, and that this was related directly to angular momentum through the motion of the electrons in the Bohr orbitals. 

But as the periodic table is built up from H to He and then to Li and Be and B, adding in successive additional electrons, one of the simplest questions was why the electrons did not all reside in the lowest energy level.  And even if that question could not be answered, there was the question of why, after He, the elements Li and Be behaved differently than B, N, O and F, leading to the noble gas Ne.  From normal Zeeman spectroscopy as well as x-ray transitions, it was clear that the noble gases behaved as the core of succeeding elements, like He for Li and Be, and Ne for Na and Mg.

To grapple with all of this, Bohr had devised a “building up” rule for how electrons were “filling” the different energy levels as each new electron of the next element was considered.  The noble-gas core played a key role in this model, and the core was also assumed to be contributing to both the normal Zeeman effect as well as the anomalous Zeeman effect with its mysterious half-integer angular momenta.

But frankly, this core model was a mess, with ad hoc rules on how the additional electrons were filling the energy levels and how they were contributing to the total angular momentum.

This was the state of the problem when Pauli, with his exceptional understanding of special relativity, began to dig deep into the problem.  Since the Zeeman splittings were caused by the orbital motion of the electrons, the strongly bound electrons in high-Z atoms would be moving at speeds near the speed of light.  Pauli therefore calculated what the systematic effects would be on the Zeeman splittings as the Z of the atoms got larger and the relativistic effects got stronger.

He calculated this effect to high precision, and then waited for Landé to make the measurements.  When Landé finally got back to him, it was to say that there were absolutely no relativistic corrections for thallium (Z = 81).  The splitting remained simply fixed by the Bohr magneton value with no relativistic effects.

Pauli had no choice but to reject the existing core model of angular momentum and to ascribe the Zeeman effects to the outer valence electron.  But this was just the beginning.

Pauli’s Breakthrough

Fig. 5 Wolfgang Pauli [Image]

By November of 1924 Pauli had concluded, in a letter to Landé

“In a puzzling, non-mechanical way, the valence electron manages to run about in two states with the same k but with different angular momenta.”

And in December of 1924 he submitted his work on the relativistic effects (or lack thereof) to Zeitschrift für Physik,

“From this viewpoint the doublet structure of the alkali spectra as well as the failure of Larmor’s theorem arise through a specific, classically non-describable sort of Zweideutigkeit (two-foldness) of the quantum-theoretical properties of the valence electron.” (Pauli, 1925a, pg. 385)

Around this time, he read a paper by Edmund Stoner published in the Philosophical Magazine of London in October of 1924.  Stoner’s insight was a connection between the number of states observed in a magnetic field and the number of states filled in the successive positions of elements in the periodic table.  This insight led naturally to the 2-8-18 sequence for the table, although Stoner was still thinking in terms of the quantum numbers of the core model of the atoms.

This is when Pauli put 2 plus 2 together: he realized that the states of the atom could be indexed by a set of 4 quantum numbers: n, the principal quantum number; k₁, the angular momentum; m₁, the space-quantization number; and a new fourth quantum number m₂ that he introduced but that had, as yet, no mechanistic explanation.  With these four quantum numbers enumerated, he then made the major step:

It should be forbidden that more than one electron, having the same equivalent quantum numbers, can be in the same state.  When an electron takes on a set of values for the four quantum numbers, then that state is occupied.

This is the Exclusion Principle:  No two electrons can have the same set of quantum numbers.  Or equivalently, no electron state can be occupied by two electrons.

Fig. 6 Level filling for Krypton using the Pauli Exclusion Principle
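A minimal sketch of the counting behind this rule: enumerate every distinct set (n, l, m_l, m_s) allowed for a given n, with Pauli’s fourth quantum number taking two values, and the Exclusion Principle’s one-electron-per-set rule reproduces the 2n² shell capacities 2, 8, 18, 32 of the periodic table. (The l and m_l labels here are the modern quantum numbers rather than Pauli’s k₁ and m₁.)

```python
# Counting the distinct quantum-number sets (n, l, m_l, m_s) allowed for each n.
# With one electron per set (the Exclusion Principle), the shell capacity is 2n^2.
for n in range(1, 5):
    states = [(n, l, ml, ms)
              for l in range(n)              # l = 0 .. n-1
              for ml in range(-l, l + 1)     # m_l = -l .. +l
              for ms in (-0.5, +0.5)]        # the two-valued fourth quantum number
    print(f"n = {n}: {len(states):2d} states   (2n^2 = {2 * n * n})")
```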

Today, we know that Pauli’s Zweideutigkeit is electron spin, a concept first put forward in 1925 by the American physicist Ralph Kronig and later that year by George Uhlenbeck and Samuel Goudsmit.



And Pauli’s Exclusion Principle is a consequence of the antisymmetry of electron wavefunctions first described by Paul Dirac in 1926 after the introduction of wavefunctions into quantum theory by Erwin Schrödinger earlier that year.

Fig. 7 The periodic table today.

Timeline:

1845 – Faraday effect (rotation of light polarization in a magnetic field)

1896 – Zeeman effect (splitting of optical transition in a magnetic field)

1897 – Anomalous Zeeman effect (half-integer splittings)

1902 – Lorentz and Zeeman awarded Nobel prize (for electron theory)

1921 – Paschen-Back effect (strong-field Zeeman effect)

1922 – Stern-Gerlach (space quantization)

1924 – de Broglie matter waves

1924 – Bose statistics of photons

1924 – Stoner (conservation of number of states)

1924 – Pauli Exclusion Principle

References:

Stoner, E. C. (1924). Philosophical Magazine, 48(286), 719.

M. Jammer, The conceptual development of quantum mechanics (Los Angeles, Calif.: Tomash Publishers, Woodbury, N.Y. : American Institute of Physics, 1989).

M. Massimi, Pauli’s exclusion principle: The origin and validation of a scientific principle (Cambridge University Press, 2005).

Pauli, W. Über den Einfluß der Geschwindigkeitsabhängigkeit der Elektronenmasse auf den Zeemaneffekt. Z. Physik 31, 373–385 (1925). https://doi.org/10.1007/BF02980592

Pauli, W. (1925). “Über den Zusammenhang des Abschlusses der Elektronengruppen im Atom mit der Komplexstruktur der Spektren”. Zeitschrift für Physik. 31 (1): 765–783

Read more in Books by David Nolte at Oxford University Press

Edward Purcell:  From Radiation to Resonance

As the days of winter darkened in 1945, several young physicists huddled in the basement of Harvard’s Research Laboratory of Physics, nursing a high field magnet to keep it from overheating and dumping its field.  They were working with bootstrapped equipment—begged, borrowed or “stolen” from various labs across the Harvard campus.  The physicist leading the experiment, Edward Mills Purcell, didn’t even work at Harvard—he was still on the payroll of the Radiation Laboratory at MIT, winding down from its war effort on radar research for the military in WWII, so the Harvard experiment was being done on nights and weekends.

Just before Christmas, 1945, as college students were fleeing campus for the first holiday in years without war, the signal generator, borrowed from a psychology lab, launched an electromagnetic pulse into simple paraffin—and the pulse disappeared!  It had been absorbed by the nuclear spins of the copious number of hydrogen nuclei (protons) in the wax.

The experiment was simple, unfunded, bootstrapped—and it launched a new field of physics that ultimately led to magnetic resonance imaging (MRI) that is now the workhorse of 3D medical imaging.

This is the story, in Purcell’s own words, of how he came to the discovery of nuclear magnetic resonance in solids, for which he was awarded the Nobel Prize in Physics in 1952.

Early Days

Edward Mills Purcell (1912 – 1997) was born in a small town in Illinois, the son of a telephone businessman, and some of his earliest memories were of rummaging around in piles of telephone equipment—wires and transformers and capacitors. He especially liked the generators:

“You could always get plenty of the bell-ringing generators that were in the old telephones, which consisted of a series of horseshoe magnets making the stator field and an armature that was wound with what must have been a mile of number 39 wire or something like that… These made good shocking machines if nothing else.”

His science education in the small town was modest, mostly chemistry, but he had a physics teacher, a rare woman at that time, who was open to searching minds. When she told the students that you couldn’t pull yourself up using a single pulley, Purcell disagreed and got together with a friend:

“So we went into the barn after school and rigged this thing up with a seat and hooked the spring scales to the upgoing rope and then pulled on the downcoming rope.”

The experiment worked, of course, with the scale reading half the weight of the boy. When they rushed back to tell the physics teacher, she accepted their results immediately—demonstration trumped mere thought, and Purcell had just done his first physics experiment.

However, physics was not a profession in the early 1920’s.

“In the ’20s the idea of chemistry as a science was extremely well publicized and popular, so the young scientist of shall we say 1928 — you’d think of him as a chemist holding up his test tube and sighting through it or something…there was no idea of what it would mean to be a physicist.

The name Steinmetz was more familiar and exciting than the name Einstein, because Steinmetz was the famous electrical engineer at General Electric and was this hunchback with a cigar who was said to know the four-place logarithm table by heart.”

Purdue University and Prof. Lark-Horovitz

Purcell entered Purdue University in the Fall of 1929. The University had only 4500 students who paid $50 a year to attend. He chose a major in electrical engineering, because

“Being a physicist…I don’t remember considering that at that time as something you could be…you couldn’t major in physics. You see, Purdue had electrical, civil, mechanical and chemical engineering. It had something called the School of Science, and you could graduate, having majored in science.”

But he was drawn to physics. The Physics Department at Purdue was going through a Renaissance under the leadership of its new department head Prof. Lark-Horovitz

“His [Lark-Horovitz] coming to Purdue was really quite important for American physics in many ways…  It was he who subsequently over the years brought many important and productive European physicists to this country; they came to Purdue, passed through. And he began teaching; he began having graduate students and teaching really modern physics as of 1930, in his classes.”

Purcell attended Purdue during the early years of the depression when some students didn’t have enough money to find a home:

“People were also living down there in the cellar, sleeping on cots in the research rooms, because it was the Depression and some of the graduate students had nowhere else to live. I’d come in in the morning and find them shaving.”

Lark-Horovitz was a demanding department chair, but he was bringing the department out of the dark ages and into the modern research world.

“Lark-Horovitz ran the physics department on the European style: a pyramid with the professor at the top and everybody down below taking orders and doing what the professor thought ought to be done. This made working for him rather difficult. I was insulated by one layer from that because it was people like Yearian, for whom I was working, who had to deal with the Lark.”

Hubert Yearian had built a 20-kilovolt electron diffraction camera, a Debye-Scherrer transmission camera, just a few years after Davisson and Germer had performed the Nobel-prize winning experiment at Bell Labs that proved the wavelike nature of electrons. Purcell helped Yearian build his own diffraction system, and recalled:

“When I turned on the light in the dark room, I had Debye-Scherrer rings on it from electron diffraction — and that was only five years after electron diffraction had been discovered. So it really was right in the forefront. And as just an undergraduate, to be able to do that at that time was fantastic.”

Purcell graduated from Purdue in 1933 and, through contacts of Lark-Horovitz, he was able to spend a year in the physics department at Karlsruhe in Germany. He returned to the US in 1934 to enter graduate school in physics at Harvard, working under Kenneth Bainbridge. His thesis topic was a bit of a bust—a dusty old problem in classical electrostatics, far older than the electron diffraction he had worked on at Purdue. But it was enough to get him his degree in 1938, and he stayed on at Harvard as a faculty instructor until the war broke out.

Radiation Laboratory, MIT

In the fall of 1940 the Radiation Lab at MIT was launched and began vacuuming up the unattached physicists of the United States, and Purcell was one of them.  The Rad Lab also drew in some of the top physicists in the country, like Isidor Rabi from Columbia, to supervise the growing army of scientists committed to the war effort—even before the US was in the war.

“Our mission was to make a radar for a British night fighter using 10-centimeter magnetron that had been discovered at Birmingham.”

This research turned Purcell and his cohort into experts in radio-frequency electronics and measurement. He worked closely with Rabi (Nobel Prize 1944) and Norman Ramsey (Nobel Prize 1989) and Jerrold Zacharias, who were in the midst of measuring resonances in molecular beams. The roster at the Rad Lab read like a Who’s Who of physics at that time:

“And then there was the theoretical group, which was also under Rabi. Most of their theory was concerned with electromagnetic fields and signal to noise, things of that sort. George Uhlenbeck was in charge of it for quite a long time, and Bethe was in it for a while; Schwinger was in it; Frank Carlson; David Saxon, now president of the University of California; Goudsmit also.”

Nuclear Magnetic Resonance

The research by Rabi had established the physics of resonances in molecular beams, but there were serious doubts that such phenomena could exist in solids. This became one of the Holy Grails of physics, and only a few physicists across the country had the skill and understanding to attempt to observe it in the solid state.

Many of the physicists at the Rad Lab were wondering what they should do next, after the war was over.

“Came the end of the war and we were all thinking about what shall we do when we go back and start doing physics. In the course of knocking around with these people, I had learned enough about what they had done in molecular beams to begin thinking about what can we do in the way of resonance with what we’ve learned. And it was out of that kind of talk that I was struck with the idea for what turned into nuclear magnetic resonance.”

“Well, that’s how NMR started, with that idea which, as I say, I can trace back to all those indirect influences of talking with Rabi, Ramsey and Zacharias, thinking about what we should do next.

“We actually did the first NMR experiment here [Harvard], not at MIT. But I wasn’t officially back. In fact, I went around MIT trying to borrow a magnet from somebody, a big magnet, get access to a big magnet so we could try it there and I didn’t have any luck. So I came back and talked to Curry Street, and he invited us to use his big old cosmic ray magnet which was out in the shed. So I didn’t ask anybody else’s permission. I came back and got the shop to make us some new pole pieces, and we borrowed some stuff here and there. We borrowed our signal generator from the Psycho Acoustic Lab that Smitty Stevens had. I don’t know that it ever got back to him. And some of the apparatus was made in the Radiation Lab shops. Bob Pound got the cavity made down there. They didn’t have much to do — things were kind of closing up — and so we bootlegged a cavity down there. And we did the experiment right here on nights and week-ends.

This was in December, 1945.

“Our first experiment was done on paraffin, which I bought up the street at the First National store between here and our house. For paraffin we thought we might have to deal with a relaxation time as long as several hours, and we were prepared to detect it with a signal which was sufficiently weak so that we would not upset the spin temperature while applying the r-f field. And, in fact, in the final time when the experiment was successful, I had been over here all night … nursing the magnet generator along so as to keep the field on for many hours, that being in our view a possible prerequisite for seeing the resonances. Now, it turned out later that in paraffin the relaxation time is actually 10⁻⁴ seconds. So I had the magnet on exactly 10⁸ times longer than necessary!”

The experiment was completed just before Christmas, 1945.


E. M. Purcell, H. C. Torrey, and R. V. Pound, “RESONANCE ABSORPTION BY NUCLEAR MAGNETIC MOMENTS IN A SOLID,” Physical Review 69, 37-38 (1946).

“But the thing that we did not understand, and it gradually dawned on us later, was really the basic message in the paper that was part of Bloembergen’s thesis … came to be known as BPP (Bloembergen, Purcell and Pound). [This] was the important, dominant role of molecular motion in nuclear spin relaxation, and also its role in line narrowing. So that after that was cleared up, then one understood the physics of spin relaxation and understood why we were getting lines that were really very narrow.”

Diagram of the microwave cavity filled with paraffin.

This was the discovery of nuclear magnetic resonance (NMR) for which Purcell shared the 1952 Nobel Prize in physics with Felix Bloch.

David D. Nolte is the Edward M. Purcell Distinguished Professor of Physics and Astronomy, Purdue University. Sept. 25, 2024

References and Notes

• The quotes from EM Purcell are from the “Living Histories” interview in 1977 by the AIP.

• K. Lark-Horovitz, J. D. Howe, and E. M. Purcell, “A new method of making extremely thin films,” Review of Scientific Instruments 6, 401-403 (1935).

• E. M. Purcell, H. C. Torrey, and R. V. Pound, “RESONANCE ABSORPTION BY NUCLEAR MAGNETIC MOMENTS IN A SOLID,” Physical Review 69, 37-38 (1946).

• National Academy of Sciences Biographies: Edward Mills Purcell

Read more in Books by David Nolte at Oxford University Press

The Vital Virial of Rudolph Clausius: From Stat Mech to Quantum Mech

I often joke with my students in class that the reason I went into physics is because I have a bad memory.  In biology you need to memorize a thousand things, but in physics you only need to memorize 10 things … and you derive everything else!

Of course, the first question they ask me is “What are those 10 things?”.

That’s a hard question to answer, and every physics professor probably has a different set of 10 things.  Obviously, energy conservation would be first on the list, followed by other conservation laws for various types of momentum.  Inverse-square laws probably come next.  But then what?  What do you need to memorize to be most useful when you are working out physics problems on the back of an envelope, when your phone is dead, and you have no access to your laptop or books?

One of my favorites is the Virial Theorem because it rears its head over and over again, whether you are working on problems in statistical mechanics, orbital mechanics or quantum mechanics.

The Virial Theorem

The Virial Theorem makes a simple statement about the balance between kinetic energy and potential energy (in a conservative mechanical system).  It summarizes in a single form many different-looking special cases we learn about in physics.  For instance, everyone learns early in their first mechanics course that the average kinetic energy <T> of a mass on a spring is equal to the average potential energy <V>.  But this seems different than the problem of a circular orbit in gravitation or electrostatics where the average kinetic energy is equal to half the average potential energy, but with the opposite sign.

Yet there is a unity to these two—it is the Virial Theorem:

<T> = (n/2) <V>

for cases where the potential energy V has a power law dependence V ≈ rⁿ.  The harmonic oscillator has n = 2, leading to the well-known equality between average kinetic and potential energy as

<T> = <V>

The inverse square force law has a potential that varies with n = -1, leading to the flip in sign.  For instance, for a circular orbit in gravitation, it looks like

<T> = -1/2 <V> = GMm/(2a)

and in electrostatics it looks like

<T> = -1/2 <V> = e²/(8πε₀a)

where a is the radius of the orbit.
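
One corollary worth making explicit (a standard result, added here for clarity rather than taken from the original post): combining the two averages for the gravitational case gives the total energy of a bound circular orbit,

E = \langle T \rangle + \langle V \rangle = \tfrac{1}{2}\langle V \rangle = -\langle T \rangle = -\frac{GMm}{2a} < 0

which is why bound orbits always have negative total energy.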

Yet orbital mechanics is hardly the only place where the Virial Theorem pops up.  It began its life with statistical mechanics.

Rudolph Clausius and his Virial Theorem

The pantheon of physics is a somewhat exclusive club.  It lets in the likes of Galileo, Lagrange, Maxwell, Boltzmann, Einstein, Feynman and Hawking, but it excludes many worthy candidates, like Gilbert, Stevin, Maupertuis, du Chatelet, Arago, Clausius, Heaviside and Meitner, all of whom had an outsized influence on the history of physics but who often do not get their due.  Of this latter group, Rudolph Clausius stands above the others because he was an inventor of whole new worlds and whole new terminologies that permeate physics today.

Within the German Confederation dominated by Prussia in the mid 1800’s, Clausius was among the first wave of the “modern” physicists who emerged from new or reorganized German universities that integrated mathematics with practical topics.  Franz Neumann at Königsberg, Carl Friedrich Gauss and Wilhelm Weber at Göttingen, and Hermann von Helmholtz at Berlin were transforming physics from a science focused on pure mechanics and astronomy to one focused on materials and their associated phenomena, applying mathematics to these practical problems.

Clausius was educated at Berlin under Heinrich Gustav Magnus beginning in 1840, and he completed his doctorate at the University of Halle in 1847.  His doctoral thesis on light scattering in the atmosphere represented an early attempt at treating statistical fluctuations.  Though his initial approach was naïve, it helped orient Clausius to physics problems of statistical ensembles and especially to gases.  The sophistication of his physics matured rapidly and already in 1850 he published his famous paper Über die bewegende Kraft der Wärme, und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen (About the moving power of heat and the laws that can be derived from it for the theory of heat itself). 

Rudolph Clausius
Fig. 1 Rudolph Clausius.

This was the fundamental paper that overturned the archaic theory of caloric, which had assumed that heat was a form of conserved quantity.  Clausius proved that this was not true, and he introduced what are today called the first and second laws of thermodynamics.  This early paper was one in which he was still striving to simplify thermodynamics, and his second law was mostly a qualitative statement that heat flows from higher temperatures to lower.  He refined the second law four years later in 1854 with Über eine veranderte Form des zweiten Hauptsatzes der mechanischen Wärmetheorie (On a modified form of the second law of the mechanical theory of heat).  He gave his concept the name Entropy in 1865 from the Greek word τροπη (transformation or change) with a prefix similar to Energy. 

Clausius was one of the first to consider the kinetic theory of heat, in which heat was understood as the average kinetic energy of the atoms or molecules that comprised the gas.  He published his seminal work on the topic in 1857, expanding on earlier work by August Krönig.  Maxwell, in turn, expanded on Clausius in 1860 by introducing probability distributions.  By 1870, Clausius was fully immersed in the kinetic theory as he was searching for mechanical proofs of the second law of thermodynamics.  Along the way, he discovered a quantity based on action-reaction pairs of forces that was related to the kinetic energy.

At that time, kinetic energy was often called vis viva, meaning “living force”.  The Latin singular for force (vis) has the plural vires, so Clausius—always happy to coin new words—called the action-reaction pairs of forces the virial, and hence he proved the Virial Theorem.

The argument is relatively simple.  Consider the action of a single molecule of the gas subject to a force F that is applied reciprocally from another molecule.  Also, for simplicity consider only a single direction in the gas.  The change of the action over time is given by the derivative

d(xp)/dt = (dx/dt) p + x (dp/dt) = 2T + xF

The average over all action-reaction pairs is

<d(xp)/dt> = 2<T> + <xF>

but by the reciprocal nature of action-reaction pairs, the left-hand side balances exactly to zero, giving

<T> = -1/2 <xF>

This expression is expanded to include the other directions and to all N bodies to yield the Virial Theorem

<T> = -1/2 Σᵢ <Fᵢ · rᵢ>

where the sum is over all molecules in the gas, and Clausius called the term on the right the Virial.

An important special case is when the force law derives from a power law

V(r) = α rⁿ ,   F = -dV/dr = -n α rⁿ⁻¹ ,   so that   rF = -nV

Then the Virial Theorem becomes (again in just one dimension)

<T> = (n/2) <V>

This is often the most useful form of the theorem.  For a spring force, it leads to <T> = <V>.  For gravitational or electrostatic orbits it is <T> = -1/2<V>.
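
As a concrete numerical check of the n = -1 case, here is a minimal sketch (my own illustration, not code from the original post) that integrates a single Kepler orbit in Matlab and compares the time-averaged kinetic and potential energies; the initial conditions and units are arbitrary illustrative choices.

% Check the Virial Theorem <T> = -(1/2)<V> for a bounded Kepler orbit (n = -1)
GM = 1;                          % gravitational parameter (arbitrary units)
y0 = [1; 0; 0; 0.8];             % state [x; y; vx; vy], slightly elliptical orbit
rhs = @(t,y) [y(3); y(4); -GM*y(1)/norm(y(1:2))^3; -GM*y(2)/norm(y(1:2))^3];
opts = odeset('RelTol',1e-9,'AbsTol',1e-12);
[t,y] = ode45(rhs, linspace(0,200,20001), y0, opts);   % roughly 50 orbital periods

r = sqrt(y(:,1).^2 + y(:,2).^2);
T = 0.5*(y(:,3).^2 + y(:,4).^2);   % kinetic energy per unit mass
V = -GM./r;                        % potential energy per unit mass

fprintf('<T>       = %8.5f\n', trapz(t,T)/t(end));
fprintf('-(1/2)<V> = %8.5f\n', -0.5*trapz(t,V)/t(end));

Over many orbital periods the two printed averages agree to several decimal places.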

The Virial in Astrophysics

Clausius originally developed the Virial Theorem for the kinetic theory of gases, but it has applications that go far beyond.  It is already useful for simple orbital systems like masses interacting through central forces, and these can be scaled up to N-body systems like star clusters or galaxies.

Star clusters are groups of hundreds or thousands of stars that are gravitationally bound.  Such a cluster may begin in a highly non-equilibrium configuration, but the mutual interactions among the stars cause a relaxation to an equilibrium configuration of positions and velocities.  This process is known as virialization.  The time scale for virialization depends on the number of stars and on the initial configuration, such as whether there is a net angular momentum in the cluster.

A gravitational simulation of 700 stars is shown in Fig. 2. The stars are distributed uniformly with zero velocities. The cluster collapses under gravitational attraction, rebounds and approaches a steady state. The Virial Theorem applies at long times. The simulation assumed all motion was in the plane, and a regularization term was added to the gravitational potential to keep forces bounded.

Simulation of the virial theorem for a star cluster with kinetic and potential energy graphs
Fig. 2 A numerical example of the Virial Theorem for a star cluster of 700 stars beginning in a uniform initial state, collapsing under gravitational attraction, rebounding and then approaching a steady state. The kinetic energy and the potential energy of the system satisfy the Virial Theorem at long times.
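
For reference, the form of the theorem that applies to a relaxed, self-gravitating cluster, and the “virial mass” estimate that astronomers extract from it, are standard results (stated here for completeness; they are not part of the original figure):

2\langle T \rangle + \langle U \rangle = 0 \qquad\Rightarrow\qquad M_{\mathrm{vir}} \sim \frac{\sigma^{2} R}{G}

where σ is the velocity dispersion of the stars and R is a characteristic cluster radius.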

The Virial in Quantum Physics

Quantum theory has strong analogs in classical mechanics.  For instance, the quantum commutation relations have strong similarities to the classical Poisson brackets.  Similarly, the Virial of classical physics has a direct quantum analog.

Begin with the commutator between the Hamiltonian H and the action composed as the product of the position operator and the momentum operator, X·P

[H, X·P] = X[H, P] + [H, X]P

Expand the two commutators on the right, using [H, X] = -iħP/m and [H, P] = iħ dV/dX, to give

[H, X·P] = iħ ( X dV/dX - P²/m )

Now recognize that the commutator with the Hamiltonian is Ehrenfest’s Theorem on the time dependence of the operators

d(X·P)/dt = (i/ħ) [H, X·P]

which equals zero when the system becomes stationary or steady state.  All that remains is to take the expectation value of the equation (which can include many-body interactions as well)

0 = (i/ħ) <[H, X·P]> = 2<T> - <X dV/dX> ,   i.e.   2<T> = <X dV/dX>

which is the quantum form of the Virial Theorem, identical to the classical form when the quantum expectation value is replaced by the ensemble average.
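
A one-line worked example (added for illustration, not part of the original post): for the harmonic oscillator V = ½mω²X², the weighted derivative is X dV/dX = mω²X² = 2V, so the theorem gives equal sharing of the energy in every stationary state,

\langle T \rangle = \langle V \rangle = \tfrac{1}{2} E_n = \tfrac{1}{2}\left(n + \tfrac{1}{2}\right)\hbar\omega

in agreement with the classical result for a spring.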

For the hydrogen atom this is

<T> = -1/2 <V> = e²/(8πε₀ n² a_B)

for principal quantum number n and Bohr radius a_B.  The quantum energy levels of the hydrogen atom are

E_n = <T> + <V> = -e²/(8πε₀ n² a_B) = -13.6 eV / n²

By David D. Nolte, July 24, 2024

References

“Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen,” Annalen der Physik 79, 368–397, 500–524 (1850).

“Über eine veränderte Form des zweiten Hauptsatzes der mechanischen Wärmetheorie,” Annalen der Physik 93, 481–506 (1854).

“Ueber die Art der Bewegung, welche wir Wärme nennen,” Annalen der Physik 100, 497–507 (1857).

Clausius, RJE (1870). “On a Mechanical Theorem Applicable to Heat”. Philosophical Magazine. Series 4. 40 (265): 122–127.

Matlab Code

function [y0,KE,Upoten,TotE] = Nbody(N,L)   %500, 100, 0
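% --- Inputs and outputs (descriptive comments added; exact parameter values
% --- used for Fig. 2 are not stated in the post beyond N = 700) ---
%   N      : number of stars
%   L      : radius of the initial uniform disk of star positions
%   y0     : returned final state vector [X; Y; Vx; Vy] of length 4N
%   KE     : sum of squared velocities at the final frame (kinetic-energy measure)
%   Upoten : softened pairwise gravitational potential energy at the final frame
%   TotE   : KE + Upoten at the final frame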

A = -1;        % Grav factor
eps = 1;        % 0.1
K = 0.00001;    %0.000025

format compact

mov_flag = 1;
if mov_flag == 1
    moviename = 'DrawNMovie';
    aviobj = VideoWriter(moviename,'MPEG-4');
    aviobj.FrameRate = 10;
    open(aviobj);
end

hh = colormap(jet);
%hh = colormap(gray);
rie = randintexc(255,255);       % Use this for random colors
%rie = 1:64;                     % Use this for sequential colors
for loop = 1:255
    h(loop,:) = hh(rie(loop),:);
end
figure(1)
fh = gcf;
clf;
set(gcf,'Color','White')
axis off

thet = 2*pi*rand(1,N);
rho = L*sqrt(rand(1,N));
X0 = rho.*cos(thet);
Y0 = rho.*sin(thet);

Vx0 = 0*Y0/L;   %1.5 for 500   2.0 for 700
Vy0 = -0*X0/L;
% X0 = L*2*(rand(1,N)-0.5);
% Y0 = L*2*(rand(1,N)-0.5);
% Vx0 = 0.5*sign(Y0);
% Vy0 = -0.5*sign(X0);
% Vx0 = zeros(1,N);
% Vy0 = zeros(1,N);

for nloop = 1:N
    y0(nloop) = X0(nloop);
    y0(nloop+N) = Y0(nloop);
    y0(nloop+2*N) = Vx0(nloop);
    y0(nloop+3*N) = Vy0(nloop);
end

T = 300;  %500
xp = zeros(1,N); yp = zeros(1,N);

for tloop = 1:T
    tloop
    
    delt = 0.005;
    tspan = [0 loop*delt];
    opts = odeset('RelTol',1e-2,'AbsTol',1e-5);
    [t,y] = ode45(@f5,tspan,y0,opts);
    
    %%%%%%%%% Plot Final Positions
    
    [szt,szy] = size(y);
    
    % Set nodes
    ind = 0; xpold = xp; ypold = yp;
    for nloop = 1:N
        ind = ind+1;
        xp(ind) = y(szt,ind+N);
        yp(ind) = y(szt,ind);
    end
    delxp = xp - xpold;
    delyp = yp - ypold;
    maxdelx = max(abs(delxp));
    maxdely = max(abs(delyp));
    maxdel = max(maxdelx,maxdely);
    
    rngx = max(xp) - min(xp);
    rngy = max(yp) - min(yp);
    maxrng = max(abs(rngx),abs(rngy));
    
    difepmx = maxdel/maxrng;
    
    crad = 2.5;
    subplot(1,2,1)
    gca;
    cla;
    
    % Draw nodes
    for nloop = 1:N
        rn = rand*63+1;
        colorval = ceil(64*nloop/N);
        
        rectangle('Position',[xp(nloop)-crad,yp(nloop)-crad,2*crad,2*crad],...
            'Curvature',[1,1],...
            'LineWidth',0.1,'LineStyle','-','FaceColor',h(colorval,:))
        
    end
    
    [syy,sxy] = size(y);
    y0(:) = y(syy,:);
    
    rnv = (2.0 + 2*tloop/T)*L;    % 2.0   1.5
    
    axis equal
    axis([-rnv rnv -rnv rnv])
    box on
    drawnow
    pause(0.01)
    
    KE = sum(y0(2*N+1:4*N).^2);
    
    Upot = 0;
    for nloop = 1:N
        for mloop = nloop+1:N
            dx = y0(nloop)-y0(mloop);
            dy = y0(nloop+N) - y0(mloop+N);
            dist = sqrt(dx^2+dy^2+eps^2);
            Upot = Upot + A/dist;
        end
    end
    
    Upoten = Upot;
    
    TotE = Upoten + KE;
    
    if tloop == 1
        TotE0 = TotE;
    end

    Upotent(tloop) = Upoten;
    KEn(tloop) = KE;
    TotEn(tloop) = TotE;
    
    xx = 1:tloop;
    subplot(1,2,2)
    plot(xx,KEn,xx,Upotent,xx,TotEn,'LineWidth',3)
    legend('KE','Upoten','TotE')
    axis([0 T -26000 22000])     % 3000 -6000 for 500   6000 -8000 for 700
    
    
    fh = figure(1);
    
    if mov_flag == 1
        frame = getframe(fh);
        writeVideo(aviobj,frame);
    end
    
end

if mov_flag == 1
    close(aviobj);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    function yd = f5(t,y)
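        % f5: right-hand side of the equations of motion for the ode45 call
        % (descriptive comments added).  The state y = [X; Y; Vx; Vy] has length 4N.
        % Each star feels:
        %   - softened pairwise attraction 0.5*A*(pos-cpos)/dis^3 toward every other
        %     star (A = -1, with eps keeping the force bounded at close approach),
        %   - a small velocity-dependent damping term ~ 1/dis^4,
        %   - a weak confining restoring force -K*position.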
        
        for n1loop = 1:N
            
            posx = y(n1loop);
            posy = y(n1loop+N);
            momx = y(n1loop+2*N);
            momy = y(n1loop+3*N);
            
            tempcx = 0; tempcy = 0;
            
            for n2loop = 1:N
                if n2loop ~= n1loop
                    cposx = y(n2loop);
                    cposy = y(n2loop+N);
                    cmomx = y(n2loop+2*N);
                    cmomy = y(n2loop+3*N);
                    
                    dis = sqrt((cposy-posy)^2 + (cposx-posx)^2 + eps^2);
                    CFx = 0.5*A*(posx-cposx)/dis^3 - 5e-5*momx/dis^4;
                    CFy = 0.5*A*(posy-cposy)/dis^3 - 5e-5*momy/dis^4;
                    
                    tempcx = tempcx + CFx;
                    tempcy = tempcy + CFy;
                    
                end
            end
                        
            ypp(n1loop) = momx;
            ypp(n1loop+N) = momy;
            ypp(n1loop+2*N) = tempcx - K*posx;
            ypp(n1loop+3*N) = tempcy - K*posy;
        end
        
        yd=ypp'; 
     
    end     % end f5

end     % end Nbody

Read more in Books by David D. Nolte at Oxford University Press

100 Years of Quantum Physics: The Statistics of Satyendra Nath Bose (1924)

One hundred years ago, in July of 1924, a brilliant Indian physicist changed the way that scientists count.  Satyendra Nath Bose (1894 – 1974) mailed a letter to Albert Einstein enclosed with a manuscript containing a new derivation of Planck’s law of blackbody radiation.  Bose had used a radical approach that went beyond the classical statistics of Maxwell and Boltzmann by counting the different ways that photons can fill a volume of space.  His key insight was the indistinguishability of photons as quantum particles. 

Today, the indistinguishability of quantum particles is the foundational element of quantum statistics that governs how fundamental particles combine to make up all the matter of the universe.  At the time, neither Bose nor Einstein realized just how radical Bose’s approach was, until Einstein, using Bose’s idea, derived the behavior of material particles under conditions similar to black-body radiation, predicting a new state of condensed matter [1].  It would take scientists 70 years to finally demonstrate “Bose-Einstein” condensation in a laboratory in Boulder, Colorado in 1995.

Early Days of the Photon

As outlined in a previous blog (see Who Invented the Quantum? Einstein versus Planck), Max Planck was a reluctant revolutionary.  He was led, almost against his will, in 1900 to postulate a quantized interaction between electromagnetic radiation and the atoms in the walls of a black-body enclosure.  He could not break free from the hold of classical physics, assuming classical properties for the radiation and assigning the quantum only to the “interaction” with matter.  It was Einstein, five years later in 1905, who took the bold step of assigning quantum properties to the radiation field itself, inventing the idea of the “photon” (named years later by the American chemist Gilbert Lewis) as the first quantum particle. 

Despite the vast potential opened by Einstein’s theory of the photon, quantum physics languished for nearly 20 years from 1905 to 1924 as semiclassical approaches dominated the thinking of Niels Bohr in Copenhagen, and Max Born in Göttingen, and Arnold Sommerfeld in Munich, as they grappled with wave-particle duality. 

The existence of the photon, first doubted by almost everyone, was confirmed in 1915 by Robert Millikan’s careful measurement of the photoelectric effect.  But even then, skepticism remained until Arthur Compton demonstrated experimentally in 1923 that the scattering of photons by electrons could only be explained if photons carried discrete energy and momentum in precisely the way that Einstein’s theory required.

Despite the success of Einstein’s photon by 1923, derivations of the Planck law still used a purely wave-based approach to count the number of electromagnetic standing waves that a cavity could support.  Bose would change that by deriving the Planck law using purely quantum methods.

The Quantum Derivation by Bose

Satyendra Nath Bose was born in 1894 in Calcutta, the old British capital city of India, now Kolkata.  He excelled at his studies, especially in mathematics, and received a lecturer post at the University of Calcutta from 1916 to 1921, when he moved into a professorship position at the new University of Dhaka. 

One day, as he was preparing a class lecture on the derivation of Planck’s law, he became dissatisfied with the usual way it was presented in textbooks, based on standing waves in the cavity, and he flipped the problem.  Rather than deriving the number of standing wave modes in real space, he considered counting the number of ways a photon would fill up phase space.

Phase space is the natural dynamical space of Hamiltonian systems [2], such as collections of quantum particles like photons, in which the axes of the space are defined by the positions and momenta of the particles.  The differential volume of phase space dV_PS occupied by a single photon of momentum magnitude p in a cavity of volume V is given by

dV_PS = 4π p² dp V

Using Einstein’s formula for the relationship between momentum and frequency

p = hν / c

where h is Planck’s constant, yields

dV_PS = 4π (h³ ν² / c³) dν V

No quantum particle can have its position and momentum defined arbitrarily precisely because of Heisenberg’s uncertainty principle, requiring phase space volumes to be resolvable only to within a minimum irreducible volume element given by h³.  Therefore, the number of states in phase space occupied by the single photon is obtained by dividing dV_PS by h³ to yield

dN_s = (4π ν² / c³) dν V

which is half of the prefactor in the Planck law.  Several comments are now necessary. 

First, when Bose did this derivation, there was no Heisenberg Uncertainty relationship—that would come years later in 1927.  Bose was guided, instead, by the work of Bohr and Sommerfeld and Ehrenfest who emphasized the role played by the action principle in quantum systems.  Phase space dimensions are counted in units of action, and the quantized unit of action is given by Planck’s constant h, hence quantized volumes of action in phase space are given by h³.  By taking this step, Bose was anticipating Heisenberg by nearly three years.

Second, Bose knew that his phase space volume was half of the prefactor in Planck’s law.  But since he was counting states, he reasoned that this meant that each photon had two internal degrees of freedom.  A possibility he considered to account for this was that the photon might have a spin that could be aligned, or anti-aligned, with the momentum of the photon [3, 4].  How he thought of spin is hard to fathom, because the spin of the electron, proposed by Uhlenbeck and Goudsmit, was still more than a year away. 

But Bose was not finished.  The derivation, so far, is just how much phase space volume is accessible to a single photon.  The next step is to count the different ways that many photons can fill up phase space.  For this he used (bringing in the factor of 2 for spin)

where p_n is the probability that a volume of phase space contains n photons, and he used the usual conditions on energy and number

The probability for all the different permutations for how photons can occupy phase space is then given by

A third comment is now necessary:  By assuming this probability, Bose was discounting situations where the photons could be distinguished from one another.  This indistinguishability of quantum particles is absolutely fundamental to our understanding today of quantum statistics, but Bose was using it implicitly for the first time here. 

The final distribution of photons at a given temperature T is found by maximizing the entropy of the system

subject to the conditions of photon energy and number. Bose found the occupancy probabilities to be

with a coefficient B to be found next by using this in the expression for the geometric series

yielding

Also, from the total number of photons

And, from the total energy

Bose obtained, finally

which is Planck’s law.
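
In modern notation, the end point of this chain is the familiar spectral energy density (stated here in its standard form, since the original equations appeared as figures):

u(\nu)\,d\nu = \frac{8\pi h \nu^{3}}{c^{3}} \, \frac{1}{e^{h\nu/k_{B}T}-1}\, d\nu

with the mean occupancy of each mode given by the Bose factor 1/(e^{hν/k_BT} - 1).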

This derivation uses nothing but the counting of quanta in phase space.  There are no standing waves.  It is a purely quantum calculation—the first of its kind.

Enter Einstein

As usual with revolutionary approaches, Bose’s initial manuscript submitted to the British Philosophical Magazine was rejected.  But he was convinced that he had attained something significant, so he wrote his letter to Einstein containing his manuscript, asking that if Einstein found merit in the derivation, then perhaps he could have it translated into German and submitted to the Zeitschrift für Physik. (That Bose would approach Einstein with this request seems bold, but they had communicated some years before when Bose had translated Einstein’s theory of General Relativity into English.)

Indeed, Einstein recognized immediately what Bose had accomplished, and he translated the manuscript himself into German and submitted it to the Zeitschrift on July 2, 1924 [5].

During his translation, Einstein did not feel that Bose’s conjecture about photon spin was defensible, so he changed the wording to attribute the factor of 2 in the derivation to the two polarizations of light (a semiclassical concept)—Einstein actually backtracked a little from what Bose had intended as a fully quantum derivation. The existence of photon spin was confirmed by C. V. Raman in 1931 [6].

In late 1924, Einstein applied Bose’s concepts to an ideal gas of material atoms and predicted that at low temperatures the gas would condense into a new state of matter known today as a Bose-Einstein condensate [1]. Matter differs from photons because the conservation of atom number introduces a finite chemical potential to the problem of matter condensation that is not present in the Planck law.

Fig. 1 Experimental evidence for the Bose-Einstein condensate in an atomic vapor [7].

Paul Dirac, in 1945, enshrined the name of Bose by coining the phrase “Boson” to refer to a particle of integer spin, just as he coined “Fermion” after Enrico Fermi to refer to a particle of half-integer spin. All quantum statistics were encased by these two types of quantum particle until 1982, when Frank Wilczek coined the term “Anyon” to describe the quantum statistics of particles confined to two dimensions whose behaviors vary between those of a boson and of a fermion.

By David D. Nolte, June 26, 2024

References

[1] A. Einstein. “Quantentheorie des einatomigen idealen Gases”. Sitzungsberichte der Preussischen Akademie der Wissenschaften. 1: 3. (1925)

[2] D. D. Nolte, “The tangled tale of phase space,” Physics Today 63, 33-38 (2010).

[3] Partha Ghose, “The Story of Bose, Photon Spin and Indistinguishability” arXiv:2308.01909 [physics.hist-ph]

[4] Barry R. Masters, “Satyendra Nath Bose and Bose-Einstein Statistics“, Optics and Photonics News, April, pp. 41-47 (2013)

[5] S. N. Bose, “Plancks Gesetz und Lichtquantenhypothese”, Zeitschrift für Physik , 26 (1): 178–181 (1924)

[6] C. V. Raman and S. Bhagavantam, Ind. J. Phys. vol. 6, p. 353, (1931).

[7] Anderson, M. H.; Ensher, J. R.; Matthews, M. R.; Wieman, C. E.; Cornell, E. A. (14 July 1995). “Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor”. Science. 269 (5221): 198–201.


Read more in Books by David Nolte at Oxford University Press

The Ubiquitous George Uhlenbeck

There are sometimes individuals who seem always to find themselves at the focal points of their times.  The physicist George Uhlenbeck was one of these individuals, showing up at all the right times in all the right places at the dawn of modern physics in the 1920’s and 1930’s. He studied under Ehrenfest and Bohr and Born, and he was friends with Fermi and Oppenheimer and Oskar Klein.  He taught physics at the universities at Leiden, Michigan, Utrecht, Columbia, MIT and Rockefeller.  He was a wide-ranging theoretical physicist who worked on Brownian motion, early string theory, quantum tunneling, and the master equation.  Yet he is most famous for the very first thing he did as a graduate student—the discovery of the quantum spin of the electron.

Electron Spin

G. E. Uhlenbeck, and S. Goudsmit, “Spinning electrons and the structure of spectra,” Nature 117, 264-265 (1926).

George Uhlenbeck (1900 – 1988) was born in the Dutch East Indies, the son of a family with a long history in the Dutch military [1].  After the father retired to The Hague, George was expected to follow the family tradition into the military, but he stumbled onto a copy of H. Lorentz’ introductory physics textbook and was hooked.  Unfortunately, to attend university in the Netherlands at that time required knowledge of Greek and Latin, which he lacked, so he entered the Institute of Technology in Delft to study chemical engineering.  He found the courses dreary. 

Fortunately, he was only a few months into his first semester when the language requirement was dropped, and he immediately transferred to the University of Leiden to study physics.  He tried to read Boltzmann but found him opaque; he then read the famous encyclopedia article by the husband and wife team of Paul and Tatiana Ehrenfest on statistical mechanics (see my Physics Today article [2]), which became his lifelong focus.

After graduating, he continued into graduate school, taking classes from Ehrenfest, but lacking funds, he supported himself by teaching classes at a girls’ high school, until he heard of a job tutoring the son of the Dutch ambassador to Italy.  He was off to Rome for three years, where he met Enrico Fermi and took classes from Tullio Levi-Civita and Vito Volterra.

However, he nearly lost his way.  Surrounded by the rich cultural treasures of Rome, he became deeply interested in art and was seriously considering giving up physics and pursuing a degree in art history.  When Ehrenfest got wind of this change of heart, he recalled Uhlenbeck in 1925 to the Netherlands and shrewdly paired him up with another graduate student, Samuel Goudsmit, to work on a new idea proposed by Wolfgang Pauli a few months earlier on the exclusion principle.

Pauli had explained the filling of the energy levels of atoms by introducing a new quantum number that had two values.  Once an energy level was filled by two electrons, each carrying one of the two quantum numbers, this energy level “excluded” any further filling by other electrons. 

To Uhlenbeck, these two quantum numbers seemed as if they must arise from some internal degree of freedom, and in a flash of insight he imagined that it might be caused if the electron were spinning.  Since spin was a form of angular momentum, the spin degree of freedom would combine with orbital angular momentum to produce a composite angular momentum for the quantum levels of atoms.

The idea of electron spin was not immediately embraced by the broader community, and Bohr and Heisenberg and Pauli had their reservations.  Fortunately, they all were traveling together to attend the 50th anniversary of Lorentz’ doctoral examination and were met at the train station in Leiden by Ehrenfest and Einstein.  As usual, Einstein had grasped the essence of the new physics and explained how the moving electron feels an induced magnetic field which would act on the magnetic moment of the electron to produce spin-orbit coupling.  With that, Bohr was convinced.

Uhlenbeck and Goudsmit wrote up their theory in a short article in Nature, followed by a short note by Bohr.  A few months later, L. H. Thomas, while visiting Bohr in Copenhagen, explained the factor of two that appears in (what later came to be called) Thomas precession of the electron, cementing the theory of electron spin in the new quantum mechanics.

5-Dimensional Quantum Mechanics

P. Ehrenfest, and G. E. Uhlenbeck, “Graphical illustration of De Broglie’s phase waves in the five-dimensional world of O Klein,” Zeitschrift Fur Physik 39, 495-498 (1926).

Around this time, the Swedish physicist Oskar Klein visited Leiden after returning from three years at the University of Michigan where he had taken advantage of the isolation to develop a quantum theory of 5-dimensional spacetime.  This was one of the first steps towards a grand unification of the forces of nature since there was initial hope that gravity and electromagnetism might both be expressed in terms of the five-dimensional space.

An unusual feature of Klein’s 5-dimensional relativity theory was the compactness of the fifth dimension, in which it was “rolled up” into a kind of high-dimensional string with a tiny radius.  If the 4-dimensional theory of spacetime was sometimes hard to visualize, here was an even tougher problem.

Uhlenbeck and Ehrenfest met often with Klein during his stay in Leiden, discussing the geometry and consequences of the 5-dimensional theory.  Ehrenfest was always trying to get at the essence of physical phenomena in the simplest terms.  His famous refrain was “Was ist der Witz?” (What is the point?) [1].  These discussions led to a simple paper in Zeitschrift für Physik published later that year in 1926 by Ehrenfest and Uhlenbeck with the compelling title “Graphical Illustration of De Broglie’s Phase Waves in the Five-Dimensional World of O Klein”.  The paper provided the first visualization of the 5-dimensional spacetime with the compact dimension.  The string-like character of the spacetime was one of the first forays into modern day “string theory” whose dimensions have now expanded to 11 from 5.

During his visit, Klein also told Uhlenbeck about the relativistic Schrödinger equation that he was working on, which would later become the Klein-Gordon equation.  This was a near miss, because what the Klein-Gordon equation was missing was electron spin—which Uhlenbeck himself had introduced into quantum theory—but it would take a few more years before Dirac showed how to incorporate spin into the theory.

Brownian Motion

G. E. Uhlenbeck and L. S. Ornstein, “On the theory of the Brownian motion,” Physical Review 36, 0823-0841 (1930).

After spending time with Bohr in Copenhagen while finishing his PhD, Uhlenbeck visited Max Born at Göttingen where he met J. Robert Oppenheimer who was also visiting Born at that time.  When Uhlenbeck traveled to the United States in late summer of 1927 to take a position at the University of Michigan, he was met at the dock in New York by Oppenheimer.

Uhlenbeck was a professor of physics at Michigan for eight years from 1927 to 1935, and he instituted a series of Summer Schools [3] in theoretical physics that attracted international participants and introduced a new generation of American physicists to the rigors of theory that they previously had to go to Europe to find. 

In this way, Uhlenbeck was part of a great shift that occurred in the teaching of graduate-level physics of the 1930’s that brought European expertise to the United States.  Just a decade earlier, Oppenheimer had to go to Göttingen to find the kind of education that he needed for graduate studies in physics.  Oppenheimer brought the new methods back with him to Berkeley, where he established a strong theory department to match the strong experimental activities of E. O. Lawrence.  Now, European physicists too were coming to America, an exodus accelerated by the increasing anti-Semitism in Europe under the rise of fascism. 

During this time, one of Uhlenbeck’s collaborators was L. S. Ornstein, the director of the Physical Laboratory at the University of Utrecht and a founding member of the Dutch Physical Society.  Uhlenbeck and Ornstein were both interested in the physics of Brownian motion, but wished to establish the phenomenon on a more sound physical basis.  Einstein’s famous paper of 1905 on Brownian motion had made several Einstein-style simplifications that stripped the complicated theory to its bare essentials, but had lost some of the details in the process, such as the role of inertia at the microscale.

Uhlenbeck and Ornstein published a paper in 1930 that developed the stochastic theory of Brownian motion, including the effects of particle inertia. The stochastic differential equation (SDE) for velocity is

dv = -γ v dt + √Γ dw

where γ is the viscous drag rate, Γ is a fluctuation coefficient, and dw is a “Wiener process”. The Wiener differential dw has unusual properties such that

<dw> = 0   and   <dw²> = dt

Uhlenbeck and Ornstein solved this SDE to yield an average velocity

<v(t)> = v(0) e^(-γt)

which decays to zero at long times, and a variance

σ_v²(t) = (Γ/2γ) (1 - e^(-2γt))

that asymptotes to a finite value at long times. The fluctuation coefficient is thus given by

Γ = 2γ v₀²

for a process with characteristic speed v₀. An estimate for the fluctuation coefficient can be obtained by considering the force F on an object of size a

For instance, for intracellular transport [4], the fluctuation coefficient has a rough value of Γ = 2 Hz μm²/sec².
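
To make the result concrete, here is a minimal simulation sketch (my own illustration, not from the 1930 paper): an Euler-Maruyama integration of the velocity SDE, checking that the ensemble variance saturates at Γ/2γ.  All parameter values below are arbitrary illustrative choices.

% Euler-Maruyama simulation of dv = -gamma*v*dt + sqrt(Gamma)*dw
gamma = 1.0;             % drag rate [1/s]
Gamma = 2.0;             % fluctuation coefficient [um^2/s^3]
v0    = 2.0;             % initial speed [um/s]
dt    = 1e-3;            % time step [s]
Nt    = 5000;            % number of time steps (5 s total, >> 1/gamma)
Ntraj = 2000;            % number of independent trajectories

v = v0*ones(Ntraj,1);                    % all trajectories start at v0
for k = 1:Nt
    dw = sqrt(dt)*randn(Ntraj,1);        % Wiener increments with <dw^2> = dt
    v  = v - gamma*v*dt + sqrt(Gamma)*dw;
end

fprintf('simulated variance        = %6.3f\n', var(v));
fprintf('predicted Gamma/(2*gamma) = %6.3f\n', Gamma/(2*gamma));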

Quantum Tunneling

D. M. Dennison and G. E. Uhlenbeck, “The two-minima problem and the ammonia molecule,” Physical Review 41, 313-321 (1932).

By the early 1930’s, quantum tunnelling of the electron through classically forbidden regions of potential energy was well established, but electrons did not have a monopoly on quantum effects.  Entire atoms—electrons plus nucleus—also have quantum wave functions and can experience regions of classically forbidden potential.

Uhlenbeck, with David Dennison, a fellow physicist at Ann Arbor, Michigan, developed the first quantum theory of molecular tunneling for ammonia, NH3, whose configuration can tunnel between its two equivalent forms. Their use of the WKB approximation in the paper set the standard for subsequent WKB approaches that would play an important role in the calculation of nuclear decay rates.

Master Equation

A. Nordsieck, W. E. Lamb, and G. E. Uhlenbeck, “On the theory of cosmic-ray showers I. The Furry model and the fluctuation problem,” Physica 7, 344-360 (1940)

In 1935, Uhlenbeck left Michigan to take up the physics chair recently vacated by Kramers at Utrecht.  However, watching the rising Nazism in Europe, he decided to return to the United States, beginning as a visiting professor at Columbia University in New York in 1940.  During his visit, he worked with W. E. Lamb and A. Nordsieck on the problem of cosmic ray showers. 

Their publication on the topic included a rate equation that is encountered in a wide range of physical phenomena. They called it the “Master Equation” for ease of reference in later parts of the paper, but the phrase stuck, and the “Master Equation” is now a standard tool used by physicists when considering the balance among multiple transitions.

Uhlenbeck never returned to Europe, moving among Michigan, MIT, Princeton and finally settling at Rockefeller University in New York from where he retired in 1971.

By David D. Nolte, April 24, 2024

Selected Works by George Uhlenbeck:

G. E. Uhlenbeck, and S. Goudsmit, “Spinning electrons and the structure of spectra,” Nature 117, 264-265 (1926).

P. Ehrenfest, and G. E. Uhlenbeck, “On the connection of different methods of solution of the wave equation in multi dimensional spaces,” Proceedings of the Koninklijke Akademie Van Wetenschappen Te Amsterdam 29, 1280-1285 (1926).

P. Ehrenfest, and G. E. Uhlenbeck, “Graphical illustration of De Broglie’s phase waves in the five-dimensional world of O Klein,” Zeitschrift Fur Physik 39, 495-498 (1926).

G. E. Uhlenbeck, and L. S. Ornstein, “On the theory of the Brownian motion,” Physical Review 36, 0823-0841 (1930).

D. M. Dennison, and G. E. Uhlenbeck, “The two-minima problem and the ammonia molecule,” Physical Review 41, 313-321 (1932).

E. Fermi, and G. E. Uhlenbeck, “On the recombination of electrons and positrons,” Physical Review 44, 0510-0511 (1933).

A. Nordsieck, W. E. Lamb, and G. E. Uhlenbeck, “On the theory of cosmic-ray showers I. The Furry model and the fluctuation problem,” Physica 7, 344-360 (1940).

M. C. Wang, and G. E. Uhlenbeck, “On the Theory of the Brownian Motion-II,” Reviews of Modern Physics 17, 323-342 (1945).

G. E. Uhlenbeck, “50 Years of Spin – Personal Reminiscences,” Physics Today 29, 43-48 (1976).

Notes:

[1] George Eugene Uhlenbeck: A Biographical Memoire by George Ford (National Academy of Sciences, 2009). https://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/uhlenbeck-george.pdf

[2] D. D. Nolte, “The tangled tale of phase space,” Physics Today 63, 33-38 (2010).

[3] One of these was the famous 1948 Summer School session where Freeman Dyson met Julian Schwinger after spending days on a cross-country road trip with Richard Feynman. Schwinger and Feynman had developed two different approaches to quantum electrodynamics (QED), which Dyson subsequently reconciled when he took up his position later that year at Princeton’s Institute for Advanced Study, helping to launch the wave of QED that spread out over the theoretical physics community.

[4] D. D. Nolte, “Coherent light scattering from cellular dynamics in living tissues,” Reports on Progress in Physics 87 (2024).


Read more in Books by David Nolte at Oxford University Press

100 Years of Quantum Physics: de Broglie’s Wave (1924)

One hundred years ago this month, in Feb. 1924, a hereditary member of the French nobility, Louis Victor Pierre Raymond, the 7th Duc de Broglie, published a landmark paper in the Philosophical Magazine of London [1] that revolutionized the nascent quantum theory of the day.

Prior to de Broglie’s theory of quantum matter waves, quantum physics had been mired in ad hoc phenomenological prescriptions like Bohr’s theory of the hydrogen atom and Sommerfeld’s theory of adiabatic invariants.  After de Broglie, Erwin Schrödinger would turn the concept of matter waves into the theory of wave mechanics that we still practice today.

Fig. 1 The 1924 paper by de Broglie in the Philosophical Magazine.

The story of how de Broglie came to his seminal idea had an odd twist, based on an initial misconception that helped him get the right answer ahead of everyone else, for which he was rewarded with the Nobel Prize in Physics.

de Broglie’s Early Days

When Louis de Broglie was a student, his older brother Maurice (the 6th Duc de Broglie) was already a practicing physicist making important discoveries in x-ray physics.  Although Louis initially studied history in preparation for a career in law, and he graduated from the Sorbonne with a degree in history, his brother’s profession drew him like a magnet.  He also read Poincaré at this critical juncture in his career, and he was hooked.  He enrolled in the  Faculty of Sciences for his advanced degree, but World War I side-tracked him into the signal corps, where he was assigned to the wireless station on top of the Eiffel Tower.  He may have participated in the famous interception of a coded German transmission in 1918 that helped turn the tide of the war.

Beginning in 1919, Louis began assisting his brother in the well-equipped private laboratory that Maurice had outfitted in the de Broglie ancestral home.  At that time Maurice was performing x-ray spectroscopy of the inner quantum states of atoms, and he was struck by the duality of x-ray properties that made them behave like particles under some conditions and like waves in others.

Fig. 2 Maurice de Broglie in his private laboratory (Figure credit).
Fig. 3 Louis de Broglie (Figure credit)

Through his close work with his brother, Louis also came to subscribe to the wave-particle duality of x-rays and chose the topic for his PhD thesis—and hence the twist that launched de Broglie backwards towards his epic theory.

de Broglie’s Massive Photons

Today, we say that photons have energy and momentum although they are massless.  The momentum is a simple consequence of Einstein’s special relativity

E² = p²c² + m²c⁴

And if m = 0, then

E = pc

and momentum requires energy but not necessarily mass. 

But de Broglie started out backwards.  He was so convinced of the particle-like nature of the x-ray photons, that he first considered what would happen if the photons actually did have mass.  He constructed a massive photon and compared its proper frequency with a Lorentz-boosted frequency observed in a laboratory.  The frequency he set for the photon was like an internal clock, set by its rest-mass energy and by Bohr’s quantization condition

hν₀ = m₀c²

He then boosted it into the lab frame by time dilation

ν₁ = ν₀ √(1 - β²)

But the energy would be transformed according to

E = m₀c² / √(1 - β²)

with a corresponding frequency

ν = E/h = ν₀ / √(1 - β²)

which is in direct contradiction with Bohr’s quantization condition.  What is the resolution of this seeming paradox?

de Broglie’s Matter Wave

de Broglie realized that his “massive photon” must satisfy a condition relating the observed lab frequency to the transformed frequency—the two must remain in phase (his “harmony of phases”)—such that

ν₀ √(1 - β²) = ν (1 - β²)

This only made sense if his “massive photon” could be represented as a wave with a frequency

ν = ν₀ / √(1 - β²)

that propagated with a phase velocity given by c/β.  (Note that β < 1 so that the phase velocity is greater than the speed of light, which is allowed as long as it does not transmit any energy.)

To a modern reader, this all sounds alien, but only because this work in early 1924 represented his first pass at his theory.  As he worked on this thesis through 1924, finally defending it in November of that year, he refined his arguments, recognizing that when he combined his frequency with his phase velocity,

λ = (c/β) / ν = h / (γ m₀ v)

it yielded the wavelength for a matter wave to be

λ = h / p

where p was the relativistic mechanical momentum of a massive particle. 

Using this wavelength, he explained Bohr’s quantization condition as a simple standing wave of the matter wave.  In the light of this derivation, de Broglie wrote

We are then inclined to admit that any moving body may be accompanied by a wave and that it is impossible to disjoin motion of body and propagation of wave.

pg. 450, Philosophical Magazine of London (1924)
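
The standing-wave explanation of Bohr’s quantization mentioned above can be written in one line (a standard reconstruction, since the original equations appeared as figures): fitting an integer number of de Broglie wavelengths around a circular orbit quantizes the angular momentum,

n\lambda = 2\pi r \quad\Longrightarrow\quad n\,\frac{h}{p} = 2\pi r \quad\Longrightarrow\quad L = p\,r = n\hbar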

Here was the strongest statement yet of the wave-particle duality of quantum particles. de Broglie went even further and connected the ideas of waves and rays through the Hamilton-Jacobi formalism, an approach that Dirac would extend several years later, establishing the formal connection between Hamiltonian physics and wave mechanics.  Furthermore, de Broglie conceived of a “pilot wave” interpretation that removed some of Einstein’s discomfort with the random character of quantum measurement that ultimately led Einstein to battle Bohr in their famous debates, culminating in the iconic EPR paper that has become a cornerstone for modern quantum information science.  After the wave-like nature of particles was confirmed in the Davisson-Germer experiments, de Broglie received the Nobel Prize in Physics in 1929.

Fig. 4 A standing matter wave is a stationary state of constructive interference. This wavefunction is in the L = 5 quantum manifold of the hydrogen atom.

Louis de Broglie was clearly ahead of his times.  His success was partly due to his isolation from the dogma of the day.  He was able to think without the constraints of preconceived ideas.  But as soon as he became a regular participant in the theoretical discussions of his day, and bowed under the pressure from Copenhagen, his creativity essentially ceased. The subsequent development of quantum mechanics would be dominated by Heisenberg, Born, Pauli, Bohr and Schrödinger, beginning at the 1927 Solvay Congress held in Brussels. 

Fig. 5 The 1927 Solvay Congress.

By David D. Nolte, Feb. 14, 2024


[1] L. de Broglie, “A tentative theory of light quanta,” Philosophical Magazine 47, 446-458 (1924).

Read more in Books by David Nolte at Oxford University Press

Frontiers of Physics: The Year in Review (2023)

These days, the physics breakthroughs in the news that really catch the eye tend to be Astro-centric.  Partly, this is due to the new data coming from the James Webb Space Telescope, which is the flashiest and newest toy of the year in physics.  But also, this is part of a broader trend in physics that we see in the interest statements of physics students applying to graduate school.  With the Higgs business winding down for high energy physics, and solid state physics becoming more engineering, the frontiers of physics have pushed to the skies, where there seem to be endless surprises.

To be sure, quantum information physics (a hot topic) and AMO (atomic and molecular optics) are performing herculean feats in the laboratories.  But even there, Bose-Einstein condensates are simulating the early universe, and quantum computers are simulating worm holes—tipping their hat to astrophysics!

So here are my picks for the top physics breakthroughs of 2023. 

The Early Universe

The James Webb Space Telescope (JWST) has come through big on all of its promises!  They said it would revolutionize the astrophysics of the early universe, and they were right.  As of 2023, all astrophysics textbooks describing the early universe and the formation of galaxies are now obsolete, thanks to JWST. 

Foremost among the discoveries is how fast the universe took up its current form.  Galaxies condensed much earlier than expected, as did supermassive black holes.  Everything that we thought took billions of years seems to have happened in only about one-tenth of that time (incredibly fast on cosmic time scales).  The new JWST observations blow away the status quo on the early universe, and now the astrophysicists have to go back to the chalkboard. 

Fig. The JWST artist’s rendering. Image credit.

Gravitational Ripples

If LIGO's first detection of gravitational waves was the huge breakthrough of 2015, catching something so faint that it took a century to build an apparatus sensitive enough to detect it, then the newest observations of gravitational waves using galactic ripples present a whole new level of gravitational wave physics.

Fig. Ripples in spacetime. Image credit.

By using the exquisitely precise timing of distant pulsars, astrophysicists have been able to detect a din of gravitational waves washing back and forth across the universe.  These waves came from supermassive black hole mergers in the early universe.  As the waves stretch and compress the space between us and distant pulsars, the arrival times of pulsar pulses detected at the Earth vary by a tiny but measurable amount, heralding the passing of a gravitational wave.

This approach is a form of statistical optics, in contrast to the original direct detection, which was a form of interferometry.  These are complementary techniques in optics research, just as they will be complementary forms of gravitational wave astronomy.  Statistical optics (and fluctuation analysis) provides spectral density functions which can yield ensemble averages in the large-N limit.  This can answer questions about large ensembles that single-event interferometric detection cannot address.  Conversely, interferometric detection provides the details of individual events in ways that statistical optics cannot.  The two complementary techniques, moving forward, will provide a much clearer picture of gravitational wave physics and of the conditions in the universe that generate the waves.
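
To make the contrast concrete, here is a minimal and purely illustrative Python sketch (a toy noise model with made-up numbers, not an actual pulsar-timing-array pipeline) of the statistical-optics side: averaging the spectral density of simulated timing residuals over a hypothetical ensemble of pulsars brings out a red, gravitational-wave-like process at low frequencies that no single noisy residual series shows cleanly.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1.0 / (14 * 86400)      # toy cadence: one timing point every two weeks (Hz)
n_samples = 1024             # timing-residual samples per pulsar
n_pulsars = 50               # size of the hypothetical pulsar ensemble

def toy_residuals():
    """White measurement noise plus a random-walk stand-in for a red, GW-like background."""
    white = rng.normal(0.0, 1e-7, n_samples)            # white timing noise (seconds)
    red = np.cumsum(rng.normal(0.0, 3e-8, n_samples))   # red (low-frequency) process
    return white + red

# Statistical-optics style: ensemble-average the power spectral density (large-N limit)
psds = []
for _ in range(n_pulsars):
    freqs, pxx = welch(toy_residuals(), fs=fs, nperseg=256)
    psds.append(pxx)
mean_psd = np.mean(psds, axis=0)

# The averaged PSD rises steeply toward low frequencies when the red process is present,
# the ensemble-level signature that single-event detection cannot provide.
print(mean_psd[1] / mean_psd[-1])   # ratio of lowest to highest frequency bin, >> 1 here
```

The design point is that the ensemble average of many noisy spectra converges on the underlying spectral density, which is exactly the large-N statistics that pulsar timing arrays exploit.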

Phosphorus on Enceladus

Planetary science is the close cousin to the more distant field of cosmology, but being close to home also makes it more immediate.  The search for life outside the Earth stands as one of the greatest scientific quests of our day.  We are almost certainly not alone in the universe, and life may be as close as Enceladus, the icy moon of Saturn. 

Scientists have been studying data from the Cassini spacecraft that observed Saturn close-up for over a decade from 2004 to 2017.  Enceladus has a subsurface liquid ocean that generates plumes of tiny ice crystals that erupt like geysers from fissures in the solid surface.  The ocean remains liquid because of internal tidal heating caused by the large gravitational forces of Saturn. 

Fig. The Cassini Spacecraft. Image credit.

The Cassini spacecraft flew through the plumes and analyzed their content using its Cosmic Dust Analyzer.  While the ice crystals from Enceladus were already known to contain organic compounds, the science team discovered that they also contain phosphorus.  This is the least abundant element within the molecules of life, but it is absolutely essential, providing the backbone chemistry of DNA as well as being a constituent of ATP and of the phospholipids of cell membranes. 

With this discovery, all the essential building blocks of life are known to exist on Enceladus, along with a liquid ocean that is likely to be in chemical contact with rocky minerals on the ocean floor, possibly providing the kind of environment that could promote the emergence of life on a planet other than Earth.

Simulating the Expanding Universe in a Bose-Einstein Condensate

Putting the universe under a microscope in a laboratory may have seemed a foolish dream, until a group at the University of Heidelberg did just that. It isn’t possible to make a real universe in the laboratory, but by adjusting the properties of an ultra-cold collection of atoms known as a Bose-Einstein condensate, the research group was able to create a type of local space whose internal metric has a curvature, like curved space-time. Furthermore, by controlling the inter-atomic interactions of the condensate with a magnetic field, they could cause the condensate to expand or contract, mimicking different scenarios for the evolution of our own universe. By adjusting the type of expansion that occurs, the scientists could create hypotheses about the geometry of the universe and test them experimentally, something that could never be done in our own universe. This could lead to new insights into the behavior of the early universe and the formation of its large-scale structure.
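
As a rough cartoon of the knob being turned (hypothetical numbers, not the Heidelberg group's actual procedure or parameters), the interaction strength g set by the magnetic field fixes the condensate's Bogoliubov speed of sound, c_s = sqrt(g·n/m), which plays the role of the speed of light in the effective acoustic metric; ramping the interactions down lowers c_s, which in this analog picture mimics an expanding spacetime for the phonons.

```python
import numpy as np

# Cartoon only: arbitrary units, illustrative ramp, not the experiment's values.
n_density = 1.0          # condensate density (held fixed in this toy)
mass = 1.0               # atomic mass
g0 = 1.0                 # initial interaction strength (tuned via the magnetic field)

t = np.linspace(0.0, 5.0, 200)
g_t = g0 * np.exp(-t)                      # hypothetical exponential ramp-down of interactions
c_s = np.sqrt(g_t * n_density / mass)      # Bogoliubov speed of sound of the condensate

# A falling sound speed acts like an expanding spacetime in the acoustic-metric analogy;
# the precise mapping to a scale factor depends on dimensionality and convention.
print(c_s[0], c_s[-1])
```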

Fig. Expansion of the Universe. Image credit.

Quark Entanglement

This is the only breakthrough I picked that is not related to astrophysics (although even this effect may have played a role in the very early universe).

Entanglement is one of the hottest topics in physics today (although the idea is 89 years old) because of the crucial role it plays in quantum information physics.  The topic was recognized with the 2022 Nobel Prize in Physics, awarded to John Clauser, Alain Aspect, and Anton Zeilinger.

Direct observations of entanglement have mostly been restricted to optics (where entangled photons are easily created and detected), to molecular and atomic physics, and to the solid state.

But entanglement eluded high-energy physics (which is quantum matter personified) until 2023, when the ATLAS Collaboration at the LHC (Large Hadron Collider) in Geneva posted a manuscript on arXiv reporting the first observation of entanglement between a pair of top quarks, inferred from their decay products.

Fig. Thresholds for entanglement detection in decays from top quarks. Image credit.

Quarks interact so strongly (literally, through the strong force) that entangled quarks experience very rapid decoherence, and entanglement effects virtually disappear in their decay products.  Top quarks, however, decay so rapidly that their entanglement properties can be transferred to their decay products, producing measurable effects in the downstream detection.  This is what the ATLAS team detected.
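
For a feel of how such a measurement is quantified, here is a purely illustrative Python sketch (made-up numbers, not ATLAS data or code). In the dilepton channel the analysis uses the observable D = −3⟨cos φ⟩, where φ is the angle between the two charged leptons from the top decays; a value of D below −1/3 signals spin entanglement of the top-antitop pair.

```python
import numpy as np

# Illustrative sketch only (toy numbers, not ATLAS data or code).
# The lepton angular distribution follows
#   (1/sigma) dsigma/dcos(phi) = (1/2) * (1 - D * cos(phi)),
# and the entanglement marker is D = -3 * <cos(phi)>, with D < -1/3 signaling entanglement.
rng = np.random.default_rng(1)

D_true = -0.5                      # hypothetical "true" value, chosen below the -1/3 threshold
u = rng.uniform(size=100_000)

# Inverse-CDF sampling of cos(phi) from the distribution above
cos_phi = (1.0 - np.sqrt((1.0 + D_true) ** 2 - 4.0 * D_true * u)) / D_true

D_measured = -3.0 * np.mean(cos_phi)
print(f"D = {D_measured:.3f}, entangled (D < -1/3): {D_measured < -1.0 / 3.0}")
```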

While this discovery won’t make quantum computers any better, it does open up a new perspective on high-energy particle interactions, and the entanglement of quarks may even have shaped the properties of the primordial soup during the Big Bang.