100 Years of Quantum Physics:  Pauli’s Exclusion Principle (1924)

One hundred years ago this month, in December 1924, Wolfgang Pauli submitted a paper to Zeitschrift für Physik that provided the final piece of the puzzle that connected Bohr’s model of the atom to the structure of the periodic table.  In the process, he introduced a new quantum number into physics that governs how matter as extreme as neutron stars, or as perfect as superfluid helium, organizes itself.

He was led to this crucial insight not by a superior understanding of quantum physics, which he was grappling with as much as Bohr, Born, and Sommerfeld were at the time, but by his superior understanding of relativistic physics, which convinced him that the magnetism of atoms in magnetic fields could not be explained by the orbital motion of electrons alone.

Encyclopedia Article on Relativity

Bored with the topics he was being taught in high school in Vienna, Pauli was already reading Einstein on relativity and Emil Jordan on functional analysis before he arrived at the university in Munich to begin studying with Arnold Sommerfeld.  Pauli was still merely a student when Felix Klein approached Sommerfeld to write an article on relativity theory for his Encyclopedia of Mathematical Sciences.  Sommerfeld by that time was thoroughly impressed with Pauli’s command of the subject and suggested that he write the article.


Pauli’s encyclopedia article on relativity expanded to 250 pages and was published in Klein’s fifth volume in 1921 when Pauli was only 21 years old—just 5 years after Einstein had published his definitive work himself!  Pauli’s article is still considered today one of the clearest explanations of both special and general relativity.

Pauli’s approach established the methodical use of metric space concepts that is still used today when teaching introductory courses on the topic.  This contrasts with articles written only a few years earlier that seem archaic by comparison—even Einstein’s paper itself.  As I recently read through his article, I was struck by how similar it is to what I teach from my textbook on modern dynamics to my class at Purdue University for junior physics majors.

Fig. 1 Wolfgang Pauli

Anomalous Zeeman Effect

In 1922, Pauli completed his thesis on the properties of water molecules and began studying a phenomenon known as the anomalous Zeeman effect.  The Zeeman effect is the splitting of optical transitions in atoms under magnetic fields.  The electron orbital motion couples with the magnetic field through a semi-classical interaction between the magnetic moment of the orbital and the applied magnetic field, producing a contribution to the energy of the electron that is observed when it absorbs or emits light. 

The Bohr model of the atom had already concluded that the angular momentum of electron orbitals was quantized into integer units.  Furthermore, the Stern-Gerlach experiment of 1922 had shown that the projection of these angular momentum states onto the direction of the magnetic field was also quantized.  This was known at the time as “space quantization”.  Therefore, in the Zeeman effect, the quantized angular momentum created quantized energy interactions with the magnetic field, producing the splittings in the optical transitions.
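
In modern notation (not the notation of the 1920s papers), the normal Zeeman interaction can be summarized as

$$ \Delta E = \mu_{B} B\, m_{l}, \qquad m_{l} = -l, \dots, +l, \qquad \mu_{B} = \frac{e\hbar}{2m_{e}} $$

so each quantized projection of the orbital angular momentum shifts the optical transition energy by an integer multiple of the Bohr-magneton energy μ_B B.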

Fig. 2 The magnetic Zeeman splitting of Rb-87 from the weak-field to the strong-field (Paschen-Back) effect

So far so good.  But then comes the problem with the anomalous Zeeman effect.

In the Bohr model, all angular momenta have integer values.  But in the anomalous Zeeman effect, the splittings could only be explained with half integers.  For instance, if total angular momentum were equal to one-half, then in a magnetic field it would produce a “doublet” with +1/2 and -1/2 space quantization.  An integer like L = 1 would produce a triplet with +1, 0, and -1 space quantization.  Although doublets of the anomalous Zeeman effect were often observed, half-integers were unheard of (so far) in the quantum numbers of early quantum physics.
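
Stated compactly in today's notation, the splitting of a level follows the empirical rule that Landé had codified,

$$ \Delta E = g_{J}\,\mu_{B} B\, m_{j}, \qquad m_{j} = -j, \dots, +j $$

so a doublet requires j = 1/2 with m_j = ±1/2, while an integer angular momentum such as L = 1 gives the triplet m = +1, 0, −1.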

But half integers were not the only problem with “2”s in the atoms and elements.  There was also the problem of the periodic table. It, too, seemed to be constructed out of “2”s, multiplying a sequence of the difference of squares.

The Difference of Squares

The difference of squares has a long history in physics stretching all the way back to Galileo Galilei, who performed experiments around 1605 on the physics of falling bodies.  He noted that the distance traveled in successive time intervals varied as the difference 1² − 0² = 1, then 2² − 1² = 3, then 3² − 2² = 5, then 4² − 3² = 7, and so on.  In other words, the distances traveled in each successive time interval varied as the odd integers.  Galileo, ever the astute student of physics, recognized that the distance traveled by an accelerating body in a time t varied as the square of time, t².  Today, after Newton, we know that this is simply the dependence of distance for an accelerating body on the square of time, s = (1/2)gt².
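
In modern notation, Galileo's observation follows in one line from the square-law dependence: the distance covered in the n-th equal time interval Δt is

$$ s(n\Delta t) - s\big((n-1)\Delta t\big) = \tfrac{1}{2}g\,\Delta t^{2}\left[n^{2}-(n-1)^{2}\right] = \tfrac{1}{2}g\,\Delta t^{2}\,(2n-1) $$

which runs through the odd integers 1, 3, 5, 7, … as n increases.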

By early 1924 there was another law of the difference of squares.  But this time the physics was buried deep inside the new science of the elements, put on graphic display through the periodic table. 

The periodic table is constructed on the difference of squares.  First there is 2 for hydrogen and helium.  Then another 2 for lithium and beryllium, followed by 6 for B, C, N, O, F and Ne to make a total of 8.  After that there is another 8 plus 10 for the sequence of Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu and Zn to make a total of 18.  The sequence of 2-8-18 is 2·1² = 2, 2·2² = 8, 2·3² = 18 for the sequence 2n².

Why the periodic table should be constructed out of the number 2 times the square of the principal quantum number n was a complete mystery.  Sommerfeld went so far as to call the number sequence of the periodic table a “cabalistic” rule. 

The Bohr Model for Many Electrons

It is easy to picture how confusing this all was to Bohr and Born and others at the time.  From Bohr’s theory of the hydrogen atom, it was clear that there were different energy levels associated with the principal quantum number n, and that this was related directly to angular momentum through the motion of the electrons in the Bohr orbitals. 

But as the periodic table is built up from H to He and then to Li and Be and B, adding in successive additional electrons, one of the simplest questions was why all the electrons did not simply reside in the lowest energy level.  But even if that question could not be answered, there was the question of why, after He, the elements Li and Be behaved differently than B, C, N, O and F, leading to the noble gas Ne.  From normal Zeeman spectroscopy as well as x-ray transitions, it was clear that the noble gases behaved as the core of succeeding elements, like He for Li and Be, and Ne for Na and Mg.

To grapple with all of this, Bohr had devised a “building up” rule for how electrons were “filling” the different energy levels as each new electron of the next element was considered.  The noble-gas core played a key role in this model, and the core was also assumed to be contributing to both the normal Zeeman effect as well as the anomalous Zeeman effect with its mysterious half-integer angular momenta.

But frankly, this core model was a mess, with ad hoc rules on how the additional electrons were filling the energy levels and how they were contributing to the total angular momentum.

This was the state of the problem when Pauli, with his exceptional understanding of special relativity, began to dig deep into the problem.  Since the Zeeman splittings were caused by the orbital motion of the electrons, the strongly bound electrons in high-Z atoms would be moving at speeds near the speed of light.  Pauli therefore calculated what the systematic effects would be on the Zeeman splittings as the Z of the atoms got larger and the relativistic effects got stronger.

He calculated this effect to high precision, and then waited for Landé to make the measurements.  When Landé finally got back to him, it was to say that there were absolutely no relativistic corrections for thallium (Z = 81).  The splitting remained simply fixed by the Bohr magneton value with no relativistic effects.

Pauli had no choice but to reject the existing core model of angular momentum and to ascribe the Zeeman effects to the outer valence electron.  But this was just the beginning.

Pauli’s Breakthrough

Fig. 5 Wolfgang Pauli

By November of 1924 Pauli had concluded, in a letter to Landé:

“In a puzzling, non-mechanical way, the valence electron manages to run about in two states with the same k but with different angular momenta.”

And in December of 1924 he submitted his work on the relativistic effects (or lack thereof) to Zeitschrift für Physik,

“From this viewpoint the doublet structure of the alkali spectra as well as the failure of Larmor’s theorem arise through a specific, classically non-describable sort of Zweideutigkeit (two-foldness) of the quantum-theoretical properties of the valence electron.” (Pauli, 1925a, pg. 385)

Around this time, he read a paper by Edmund Stoner in the Philosophical Magazine of London published in October of 1924.  Stoner’s insight was a connection between the number of states observed in a magnetic field and the number of states filled in the successive positions of elements in the periodic table.  Stoner’s insight led naturally to the 2-8-18 sequence for the table, although he was still thinking in terms of the quantum numbers of the core model of the atoms.

This is when Pauli put 2 plus 2 together: He realized that the states of the atom could be indexed by a set of 4 quantum numbers: n (the principal quantum number), k₁ (the angular momentum), m₁ (the space quantization number), and a new fourth quantum number m₂ that he introduced but that had, as yet, no mechanistic explanation.  With these four quantum numbers enumerated, he then made the major step:

It should be forbidden that more than one electron, having the same equivalent quantum numbers, can be in the same state.  When an electron takes on a set of values for the four quantum numbers, then that state is occupied.

This is the Exclusion Principle:  No two electrons can have the same set of quantum numbers.  Or equivalently, no electron state can be occupied by two electrons.
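
In modern notation (with the quantum numbers n, l, m_l, m_s standing in for Pauli's n, k₁, m₁, m₂), the exclusion principle immediately yields Sommerfeld's "cabalistic" shell capacities:

$$ N(n) = \sum_{l=0}^{n-1} 2\,(2l+1) = 2n^{2}, \qquad N(1)=2, \quad N(2)=8, \quad N(3)=18 $$

Each orbital state (n, l, m_l) can hold at most two electrons, one for each value of the new two-valued quantum number.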

Fig. 6 Level filling for Krypton using the Pauli Exclusion Principle

Today, we know that Pauli’s Zweideutigkeit is electron spin, a concept first put forward in 1925 by the American physicist Ralph Kronig and later that year by George Uhlenbeck and Samuel Goudsmit.



And Pauli’s Exclusion Principle is a consequence of the antisymmetry of electron wavefunctions first described by Paul Dirac in 1926 after the introduction of wavefunctions into quantum theory by Erwin Schrödinger earlier that year.

Fig. 7 The periodic table today.

Timeline:

1845 – Faraday effect (rotation of light polarization in a magnetic field)

1896 – Zeeman effect (splitting of optical transition in a magnetic field)

1897 – Anomalous Zeeman effect (half-integer splittings)

1902 – Lorentz and Zeeman awarded Nobel prize (for electron theory)

1921 – Paschen-Back effect (strong-field Zeeman effect)

1922 – Stern-Gerlach (space quantization)

1924 – de Broglie matter waves

1924 – Bose statistics of photons

1924 – Stoner (conservation of number of states)

1924 – Pauli Exclusion Principle

References:

E. C. Stoner, Philosophical Magazine 48 (286), 719 (October 1924).

M. Jammer, The conceptual development of quantum mechanics (Los Angeles, Calif.: Tomash Publishers, Woodbury, N.Y. : American Institute of Physics, 1989).

M. Massimi, Pauli’s exclusion principle: The origin and validation of a scientific principle (Cambridge University Press, 2005).

Pauli, W. Über den Einfluß der Geschwindigkeitsabhängigkeit der Elektronenmasse auf den Zeemaneffekt. Z. Physik 31, 373–385 (1925). https://doi.org/10.1007/BF02980592

Pauli, W. (1925). “Über den Zusammenhang des Abschlusses der Elektronengruppen im Atom mit der Komplexstruktur der Spektren”. Zeitschrift für Physik. 31 (1): 765–783

Read more in Books by David Nolte at Oxford University Press

The Vital Virial of Rudolph Clausius: From Stat Mech to Quantum Mech

I often joke with my students in class that the reason I went into physics is because I have a bad memory.  In biology you need to memorize a thousand things, but in physics you only need to memorize 10 things … and you derive everything else!

Of course, the first question they ask me is “What are those 10 things?”.

That’s a hard question to answer, and every physics professor probably has a different set of 10 things.  Obviously, energy conservation would be first on the list, followed by other conservation laws for various types of momentum.  Inverse-square laws probably come next.  But then what?  What do you need to memorize to be most useful when you are working out physics problems on the back of an envelope, when your phone is dead, and you have no access to your laptop or books?

One of my favorites is the Virial Theorem because it rears its head over and over again, whether you are working on problems in statistical mechanics, orbital mechanics or quantum mechanics.

The Virial Theorem

The Virial Theorem makes a simple statement about the balance between kinetic energy and potential energy (in a conservative mechanical system).  It summarizes in a single form many different-looking special cases we learn about in physics.  For instance, everyone learns early in their first mechanics course that the average kinetic energy <T> of a mass on a spring is equal to the average potential energy <V>.  But this seems different than the problem of a circular orbit in gravitation or electrostatics where the average kinetic energy is equal to half the average potential energy, but with the opposite sign.

Yet there is a unity to these two—it is the Virial Theorem:

$$ 2\langle T\rangle = n\langle V\rangle $$

for cases where the potential energy V has a power-law dependence V ∝ rⁿ.  The harmonic oscillator has n = 2, leading to the well-known equality between average kinetic and potential energy,

$$ \langle T\rangle = \langle V\rangle $$

The inverse-square force law has a potential that varies with n = -1, leading to the flip in sign.  For instance, for a circular orbit in gravitation it looks like

$$ \langle T\rangle = -\tfrac{1}{2}\langle V\rangle = \frac{GMm}{2a} $$

and in electrostatics it looks like

$$ \langle T\rangle = -\tfrac{1}{2}\langle V\rangle = \frac{e^{2}}{8\pi\epsilon_{0}\,a} $$

where a is the radius of the orbit.

Yet orbital mechanics is hardly the only place where the Virial Theorem pops up.  It began its life with statistical mechanics.

Rudolph Clausius and his Virial Theorem

The pantheon of physics is a somewhat exclusive club.  It lets in the likes of Galileo, Lagrange, Maxwell, Boltzmann, Einstein, Feynman and Hawking, but it excludes many worthy candidates, like Gilbert, Stevin, Maupertuis, du Châtelet, Arago, Clausius, Heaviside and Meitner, all of whom had an outsized influence on the history of physics but who often do not get their due.  Of this latter group, Rudolph Clausius stands above the others because he was an inventor of whole new worlds and whole new terminologies that permeate physics today.

Within the German Confederation dominated by Prussia in the mid-1800s, Clausius was among the first wave of the “modern” physicists who emerged from new or reorganized German universities that integrated mathematics with practical topics.  Franz Neumann at Königsberg, Carl Friedrich Gauss and Wilhelm Weber at Göttingen, and Hermann von Helmholtz at Berlin were transforming physics from a science focused on pure mechanics and astronomy to one focused on materials and their associated phenomena, applying mathematics to these practical problems.

Clausius was educated at Berlin under Heinrich Gustav Magnus beginning in 1840, and he completed his doctorate at the University of Halle in 1847.  His doctoral thesis on light scattering in the atmosphere represented an early attempt at treating statistical fluctuations.  Though his initial approach was naïve, it helped orient Clausius to physics problems of statistical ensembles and especially to gases.  The sophistication of his physics matured rapidly and already in 1850 he published his famous paper Über die bewegende Kraft der Wärme, und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen (About the moving power of heat and the laws that can be derived from it for the theory of heat itself). 

Fig. 1 Rudolph Clausius.

This was the fundamental paper that overturned the archaic theory of caloric, which had assumed that heat was a conserved quantity.  Clausius proved that this was not true, and he introduced what are today called the first and second laws of thermodynamics.  This early paper was one in which he was still striving to simplify thermodynamics, and his second law was mostly a qualitative statement that heat flows from higher temperatures to lower.  He refined the second law four years later in 1854 with Über eine veränderte Form des zweiten Hauptsatzes der mechanischen Wärmetheorie (On a modified form of the second law of the mechanical theory of heat).  He gave his concept the name Entropy in 1865 from the Greek word τροπή (transformation or change) with a prefix similar to Energy.

Clausius was one of the first to consider the kinetic theory of heat, in which heat is understood as the average kinetic energy of the atoms or molecules that comprise the gas.  He published his seminal work on the topic in 1857, expanding on earlier work by August Krönig.  Maxwell, in turn, expanded on Clausius in 1860 by introducing probability distributions.  By 1870, Clausius was fully immersed in the kinetic theory as he searched for mechanical proofs of the second law of thermodynamics.  Along the way, he discovered a quantity based on action-reaction pairs of forces that was related to the kinetic energy.

At that time, kinetic energy was often called vis viva, meaning “living force”.  The Latin singular for force (vis) has the plural vires, so Clausius—always happy to coin new words—called his quantity built from the action-reaction pairs of forces the virial, and the theorem he proved about it became the Virial Theorem.

The argument is relatively simple.  Consider the action of a single molecule of the gas subject to a force F that is applied reciprocally from another molecule.  Also, for simplicity, consider only a single direction in the gas.  The change of the action over time is given by the derivative

$$ \frac{d}{dt}(xp) = \dot{x}p + x\dot{p} = m\dot{x}^{2} + xF = 2T + xF $$

The average over all action-reaction pairs is

$$ \left\langle \frac{d}{dt}(xp)\right\rangle = 2\langle T\rangle + \langle xF\rangle $$

but by the reciprocal nature of action-reaction pairs, the left-hand side balances exactly to zero, giving

$$ 2\langle T\rangle = -\langle xF\rangle $$

This expression is expanded to include the other directions and all N bodies to yield the Virial Theorem

$$ 2\langle T\rangle = -\left\langle \sum_{i=1}^{N} \vec{F}_{i}\cdot\vec{r}_{i} \right\rangle $$

where the sum is over all molecules in the gas, and Clausius called the term on the right the Virial.

An important special case is when the force law derives from a power law

$$ V = \alpha\, x^{n}, \qquad F = -\frac{dV}{dx} = -n\,\alpha\, x^{n-1} $$

Then the Virial Theorem becomes (again in just one dimension)

$$ 2\langle T\rangle = -\langle xF\rangle = n\langle V\rangle $$

This is often the most useful form of the theorem.  For a spring force it leads to <T> = <V>.  For gravitational or electrostatic orbits it is <T> = -1/2 <V>.
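
As a quick check of the n = −1 case (a standard textbook exercise rather than anything in Clausius's paper), the centripetal condition for a circular gravitational orbit of radius r gives

$$ \frac{mv^{2}}{r} = \frac{GMm}{r^{2}} \quad\Rightarrow\quad T = \tfrac{1}{2}mv^{2} = \frac{GMm}{2r} = -\tfrac{1}{2}V $$

in agreement with the virial result <T> = -1/2 <V>.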

The Virial in Astrophysics

Clausius originally developed the Virial Theorem for the kinetic theory of gases, but it has applications that go far beyond.  It is already useful for simple orbital systems like masses interacting through central forces, and these can be scaled up to N-body systems like star clusters or galaxies.

Star clusters are groups of hundreds or thousands of stars that are gravitationally bound.  Such a cluster may begin in a highly non-equilibrium configuration, but the mutual interactions among the stars cause a relaxation to an equilibrium configuration of positions and velocities.  This process is known as virialization.  The time scale for virialization depends on the number of stars and on the initial configuration, such as whether there is a net angular momentum in the cluster.

A gravitational simulation of 700 stars is shown in Fig. 2. The stars are distributed uniformly with zero velocities. The cluster collapses under gravitational attraction, rebounds and approaches a steady state. The Virial Theorem applies at long times. The simulation assumed all motion was in the plane, and a regularization term was added to the gravitational potential to keep forces bounded.
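
For reference, the "regularization" referred to here is a Plummer-type softening of the pair potential (this form is read off the Matlab listing at the end of this post, with the constants folded into the factor A in the code):

$$ U_{ij} = -\frac{G\,m_{i} m_{j}}{\sqrt{\,r_{ij}^{2} + \epsilon^{2}\,}} $$

which keeps the force finite when two stars pass arbitrarily close to one another, while leaving the long-range behavior, and hence the virial balance, essentially unchanged.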

Fig. 2 A numerical example of the Virial Theorem for a star cluster of 700 stars beginning in a uniform initial state, collapsing under gravitational attraction, rebounding and then approaching a steady state. The kinetic energy and the potential energy of the system satisfy the Virial Theorem at long times.

The Virial in Quantum Physics

Quantum theory holds strong analogs to classical mechanics.  For instance, the quantum commutation relations have strong similarities to Poisson Brackets.  Similarly, the Virial in classical physics has a direct quantum analog.

Begin with the commutator between the Hamiltonian H and the action composed as the product of the position operator and the momentum operator, X_nP_n (summed over the index n),

$$ [H, X_{n}P_{n}] = [H, X_{n}]\,P_{n} + X_{n}\,[H, P_{n}] $$

Expand the two commutators on the right to give

$$ [H, X_{n}] = -\frac{i\hbar}{m}\,P_{n}, \qquad [H, P_{n}] = i\hbar\,\frac{\partial V}{\partial X_{n}} $$

Now recognize that the commutator with the Hamiltonian is Ehrenfest’s Theorem on the time dependence of the operators

$$ \frac{d}{dt}\langle X_{n}P_{n}\rangle = \frac{i}{\hbar}\,\langle [H, X_{n}P_{n}]\rangle $$

which equals zero when the system becomes stationary or steady state.  All that remains is to take the expectation value of the equation (which can include many-body interactions as well)

$$ 2\langle T\rangle = \left\langle \sum_{n} X_{n}\,\frac{\partial V}{\partial X_{n}} \right\rangle $$

which is the quantum form of the Virial Theorem, identical to the classical form when the expectation value is replaced by the ensemble average.

For the hydrogen atom this is

$$ \langle T\rangle = -\tfrac{1}{2}\langle V\rangle = \frac{e^{2}}{8\pi\epsilon_{0}\,n^{2}a_{B}} $$

for principal quantum number n and Bohr radius a_B.  The quantum energy levels of the hydrogen atom are then

$$ E_{n} = \langle T\rangle + \langle V\rangle = -\frac{e^{2}}{8\pi\epsilon_{0}\,a_{B}\,n^{2}} = -\frac{13.6\ \mathrm{eV}}{n^{2}} $$

By David D. Nolte, July 24, 2024

References

“Über die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen,” in Annalen der Physik, 79 (1850), 368–397, 500–524.

Über eine veränderte Form des zweiten Hauptsatzes der mechanischen Wärmetheorie, Annalen der Physik, 93 (1854), 481–506.

Über die Art der Bewegung, welche wir Wärme nennen, Annalen der Physik, 100 (1857), 497–507.

Clausius, RJE (1870). “On a Mechanical Theorem Applicable to Heat”. Philosophical Magazine. Series 4. 40 (265): 122–127.

Matlab Code

function [y0,KE,Upoten,TotE] = Nbody(N,L)   % N stars in a disk of radius L (e.g. N = 700 as in Fig. 2)

A = -1;         % gravitational strength (negative for attraction)
eps = 1;        % softening length regularizing the potential at close approach (alternative: 0.1)
K = 0.00001;    % weak central spring constant confining the cluster (alternative: 0.000025)

format compact

mov_flag = 1;
if mov_flag == 1
    moviename = 'DrawNMovie';
    aviobj = VideoWriter(moviename,'MPEG-4');
    aviobj.FrameRate = 10;
    open(aviobj);
end

hh = jet(256);                   % color lookup table (use gray(256) for grayscale)
rie = randperm(255);             % random permutation of color indices for random star colors
%rie = 1:255;                    % use this instead for sequential colors
for loop = 1:255
    h(loop,:) = hh(rie(loop),:); % shuffled 255-entry color table
end
figure(1)
fh = gcf;
clf;
set(gcf,'Color','White')
axis off

thet = 2*pi*rand(1,N);
rho = L*sqrt(rand(1,N));
X0 = rho.*cos(thet);
Y0 = rho.*sin(thet);

Vx0 = 0*Y0/L;   %1.5 for 500   2.0 for 700
Vy0 = -0*X0/L;
% X0 = L*2*(rand(1,N)-0.5);
% Y0 = L*2*(rand(1,N)-0.5);
% Vx0 = 0.5*sign(Y0);
% Vy0 = -0.5*sign(X0);
% Vx0 = zeros(1,N);
% Vy0 = zeros(1,N);

for nloop = 1:N
    y0(nloop) = X0(nloop);
    y0(nloop+N) = Y0(nloop);
    y0(nloop+2*N) = Vx0(nloop);
    y0(nloop+3*N) = Vy0(nloop);
end

T = 300;  %500
xp = zeros(1,N); yp = zeros(1,N);

for tloop = 1:T
    tloop
    
    delt = 0.005;
    tspan = [0 loop*delt];     % integration window per frame ('loop' retains its final value, 255, from the color loop above)
    opts = odeset('RelTol',1e-2,'AbsTol',1e-5);
    [t,y] = ode45(@f5,tspan,y0,opts);
    
    %%%%%%%%% Plot Final Positions
    
    [szt,szy] = size(y);
    
    % Set nodes
    ind = 0; xpold = xp; ypold = yp;
    for nloop = 1:N
        ind = ind+1;
        xp(ind) = y(szt,ind+N);
        yp(ind) = y(szt,ind);
    end
    delxp = xp - xpold;
    delyp = yp - ypold;
    maxdelx = max(abs(delxp));
    maxdely = max(abs(delyp));
    maxdel = max(maxdelx,maxdely);
    
    rngx = max(xp) - min(xp);
    rngy = max(yp) - min(yp);
    maxrng = max(abs(rngx),abs(rngy));
    
    difepmx = maxdel/maxrng;
    
    crad = 2.5;
    subplot(1,2,1)
    gca;
    cla;
    
    % Draw nodes
    for nloop = 1:N
        rn = rand*63+1;
        colorval = ceil(64*nloop/N);
        
        rectangle('Position',[xp(nloop)-crad,yp(nloop)-crad,2*crad,2*crad],...
            'Curvature',[1,1],...
            'LineWidth',0.1,'LineStyle','-','FaceColor',h(colorval,:))
        
    end
    
    [syy,sxy] = size(y);
    y0(:) = y(syy,:);
    
    rnv = (2.0 + 2*tloop/T)*L;    % 2.0   1.5
    
    axis equal
    axis([-rnv rnv -rnv rnv])
    box on
    drawnow
    pause(0.01)
    
    KE = sum(y0(2*N+1:4*N).^2);   % kinetic-energy measure: sum of squared velocity components (unit masses)
    
    Upot = 0;
    for nloop = 1:N
        for mloop = nloop+1:N
            dx = y0(nloop)-y0(mloop);
            dy = y0(nloop+N) - y0(mloop+N);
            dist = sqrt(dx^2+dy^2+eps^2);
            Upot = Upot + A/dist;
        end
    end
    
    Upoten = Upot;
    
    TotE = Upoten + KE;
    
    if tloop == 1
        TotE0 = TotE;
    end

    Upotent(tloop) = Upoten;
    KEn(tloop) = KE;
    TotEn(tloop) = TotE;
    
    xx = 1:tloop;
    subplot(1,2,2)
    plot(xx,KEn,xx,Upotent,xx,TotEn,'LineWidth',3)
    legend('KE','Upoten','TotE')
    axis([0 T -26000 22000])     % 3000 -6000 for 500   6000 -8000 for 700
    
    
    fh = figure(1);
    
    if mov_flag == 1
        frame = getframe(fh);
        writeVideo(aviobj,frame);
    end
    
end

if mov_flag == 1
    close(aviobj);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    function yd = f5(t,y)
        
        for n1loop = 1:N
            
            posx = y(n1loop);
            posy = y(n1loop+N);
            momx = y(n1loop+2*N);
            momy = y(n1loop+3*N);
            
            tempcx = 0; tempcy = 0;
            
            for n2loop = 1:N
                if n2loop ~= n1loop
                    cposx = y(n2loop);
                    cposy = y(n2loop+N);
                    cmomx = y(n2loop+2*N);
                    cmomy = y(n2loop+3*N);
                    
                    dis = sqrt((cposy-posy)^2 + (cposx-posx)^2 + eps^2);
                    CFx = 0.5*A*(posx-cposx)/dis^3 - 5e-5*momx/dis^4;
                    CFy = 0.5*A*(posy-cposy)/dis^3 - 5e-5*momy/dis^4;
                    
                    tempcx = tempcx + CFx;
                    tempcy = tempcy + CFy;
                    
                end
            end
                        
            ypp(n1loop) = momx;
            ypp(n1loop+N) = momy;
            ypp(n1loop+2*N) = tempcx - K*posx;
            ypp(n1loop+3*N) = tempcy - K*posy;
        end
        
        yd=ypp'; 
     
    end     % end f5

end     % end Nbody

Read more in Books by David D. Nolte at Oxford University Press

100 Years of Quantum Physics: The Statistics of Satyendra Nath Bose (1924)

One hundred years ago, in July of 1924, a brilliant Indian physicist changed the way that scientists count.  Satyendra Nath Bose (1894 – 1974) mailed a letter to Albert Einstein enclosed with a manuscript containing a new derivation of Planck’s law of blackbody radiation.  Bose had used a radical approach that went beyond the classical statistics of Maxwell and Boltzmann by counting the different ways that photons can fill a volume of space.  His key insight was the indistinguishability of photons as quantum particles. 

Today, the indistinguishability of quantum particles is the foundational element of quantum statistics that governs how fundamental particles combine to make up all the matter of the universe.  At the time, neither Bose nor Einstein realized just how radical Bose's approach was, until Einstein, using Bose’s idea, derived the behavior of material particles under conditions similar to black-body radiation, predicting a new state of condensed matter [1].  It would take scientists 70 years to finally demonstrate “Bose-Einstein” condensation in a laboratory in Boulder, Colorado in 1995.

Early Days of the Photon

As outlined in a previous blog (see Who Invented the Quantum? Einstein versus Planck), Max Planck was a reluctant revolutionary.  He was led, almost against his will, in 1900 to postulate a quantized interaction between electromagnetic radiation and the atoms in the walls of a black-body enclosure.  He could not break free from the hold of classical physics, assuming classical properties for the radiation and assigning the quantum only to the “interaction” with matter.  It was Einstein, five years later in 1905, who took the bold step of assigning quantum properties to the radiation field itself, inventing the idea of the “photon” (named years later by the American chemist Gilbert Lewis) as the first quantum particle. 

Despite the vast potential opened by Einstein’s theory of the photon, quantum physics languished for nearly 20 years from 1905 to 1924 as semiclassical approaches dominated the thinking of Niels Bohr in Copenhagen, Max Born in Göttingen, and Arnold Sommerfeld in Munich as they grappled with wave-particle duality.

The existence of the photon, first doubted by almost everyone, was confirmed in 1915 by Robert Millikan’s careful measurement of the photoelectric effect.  But even then, skepticism remained until Arthur Compton demonstrated experimentally in 1923 that the scattering of photons by electrons could only be explained if photons carried discrete energy and momentum in precisely the way that Einstein’s theory required.

Despite the success of Einstein’s photon by 1923, derivations of the Planck law still used a purely wave-based approach to count the number of electromagnetic standing waves that a cavity could support.  Bose would change that by deriving the Planck law using purely quantum methods.

The Quantum Derivation by Bose

Satyendra Nath Bose was born in 1894 in Calcutta, the old British capital city of India, now Kolkata.  He excelled at his studies, especially in mathematics, and received a lecturer post at the University of Calcutta from 1916 to 1921, when he moved into a professorship position at the new University of Dhaka. 

One day, as he was preparing a class lecture on the derivation of Planck’s law, he became dissatisfied with the usual way it was presented in textbooks, based on standing waves in the cavity, and he flipped the problem.  Rather than deriving the number of standing wave modes in real space, he considered counting the number of ways a photon would fill up phase space.

Phase space is the natural dynamical space of Hamiltonian systems [2], such as collections of quantum particles like photons, in which the axes of the space are defined by the positions and momenta of the particles.  The differential volume of phase space dV_PS occupied by a single photon of momentum magnitude p inside a cavity of volume V is given by

$$ dV_{PS} = V\,4\pi p^{2}\,dp $$

Using Einstein’s formula for the relationship between momentum and frequency

$$ p = \frac{h\nu}{c} $$

where h is Planck’s constant, yields

$$ dV_{PS} = \frac{4\pi V h^{3}\nu^{2}}{c^{3}}\,d\nu $$

No quantum particle can have its position and momentum defined arbitrarily precisely because of Heisenberg’s uncertainty principle, requiring phase space volumes to be resolvable only to within a minimum irreducible volume element given by h³.  Therefore, the number of states in phase space occupied by the single photon is obtained by dividing dV_PS by h³ to yield

$$ dN = \frac{dV_{PS}}{h^{3}} = \frac{4\pi V \nu^{2}}{c^{3}}\,d\nu $$

which is half of the prefactor in the Planck law.  Several comments are now necessary. 

First, when Bose did this derivation, there was no Heisenberg Uncertainty relationship—that would come years later in 1927.  Bose was guided, instead, by the work of Bohr and Sommerfeld and Ehrenfest who emphasized the role played by the action principle in quantum systems.  Phase space dimensions are counted in units of action, and the quantized unit of action is given by Planck’s constant h, hence quantized volumes of action in phase space are given by h³.  By taking this step, Bose was anticipating Heisenberg by nearly three years.

Second, Bose knew that his phase space volume was half of the prefactor in Planck’s law.  But since he was counting states, he reasoned that this meant that each photon had two internal degrees of freedom.  A possibility he considered to account for this was that the photon might have a spin that could be aligned, or anti-aligned, with the momentum of the photon [3, 4].  How he thought of spin is hard to fathom, because the spin of the electron, proposed by Uhlenbeck and Goudsmit, was still two years away. 

But Bose was not finished.  The derivation, so far, is just how much phase space volume is accessible to a single photon.  The next step is to count the different ways that many photons can fill up phase space.  For this he used (bringing in the factor of 2 for spin)

where p_n is the probability that a volume of phase space contains n photons, plus he used the usual conditions on energy and number

The probability for all the different permutations for how photons can occupy phase space is then given by

A third comment is now necessary:  By assuming this probability, Bose was discounting situations where the photons could be distinguished from one another.  This indistinguishability of quantum particles is absolutely fundamental to our understanding today of quantum statistics, but Bose was using it implicitly for the first time here. 

The final distribution of photons at a given temperature T is found by maximizing the entropy of the system,

$$ S = k_{B}\ln W $$

with W the number of arrangements counted above, subject to the conditions of photon energy and number. Bose found the occupancy probabilities to be

$$ p_{n} = B\,e^{-n h\nu / k_{B}T} $$

with a coefficient B to be found next by using this in the expression for the geometric series

$$ \sum_{n=0}^{\infty} p_{n} = B \sum_{n=0}^{\infty} e^{-n h\nu / k_{B}T} = \frac{B}{1 - e^{-h\nu / k_{B}T}} = 1 $$

yielding

$$ B = 1 - e^{-h\nu / k_{B}T}, \qquad \langle n\rangle = \sum_{n} n\,p_{n} = \frac{1}{e^{h\nu/k_{B}T} - 1} $$

Also, from the total number of photons in the frequency interval dν (the number of states times the mean occupancy)

$$ N(\nu)\,d\nu = \frac{8\pi V \nu^{2}}{c^{3}}\,\langle n\rangle\,d\nu $$

And, from the total energy

$$ E(\nu)\,d\nu = h\nu\,N(\nu)\,d\nu $$

Bose obtained, finally

$$ u(\nu)\,d\nu = \frac{E(\nu)}{V}\,d\nu = \frac{8\pi h \nu^{3}}{c^{3}}\,\frac{d\nu}{e^{h\nu/k_{B}T} - 1} $$

which is Planck’s law.

This derivation uses nothing but the counting of quanta in phase space.  There are no standing waves.  It is a purely quantum calculation—the first of its kind.

Enter Einstein

As usual with revolutionary approaches, Bose’s initial manuscript submitted to the British Philosophical Magazine was rejected.  But he was convinced that he had attained something significant, so he wrote his letter to Einstein containing his manuscript, asking that if Einstein found merit in the derivation, then perhaps he could have it translated into German and submitted to the Zeitschrift für Physik. (That Bose would approach Einstein with this request seems bold, but they had communicated some years before when Bose had translated Einstein’s theory of General Relativity into English.)

Indeed, Einstein recognized immediately what Bose had accomplished, and he translated the manuscript himself into German and submitted it to the Zeitschrift on July 2, 1924 [5].

During his translation, Einstein did not feel that Bose’s conjecture about photon spin was defensible, so he changed the wording to attribute the factor of 2 in the derivation to the two polarizations of light (a semiclassical concept), so Einstein actually backtracked a little from what Bose originally intended as a fully quantum derivation. The existence of photon spin was confirmed by C. V. Raman in 1931 [6].

In late 1924, Einstein applied Bose’s concepts to an ideal gas of material atoms and predicted that at low temperatures the gas would condense into a new state of matter known today as a Bose-Einstein condensate [1]. Matter differs from photons because the conservation of atom number introduces a finite chemical potential to the problem of matter condensation that is not present in the Planck law.
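
Written in modern notation, the same counting applied to a gas of conserved atoms gives the Bose-Einstein distribution with a chemical potential μ,

$$ \langle n(\varepsilon)\rangle = \frac{1}{e^{(\varepsilon-\mu)/k_{B}T} - 1} $$

which reduces to the photon (Planck) case when μ = 0; it is the constraint of fixed particle number, enforced through μ, that drives the condensation Einstein predicted.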

Fig. 1 Experimental evidence for the Bose-Einstein condensate in an atomic vapor [7].

Paul Dirac, in 1945, enshrined the name of Bose by coining the phrase “Boson” to refer to a particle of integer spin, just as he coined “Fermion” after Enrico Fermi to refer to a particle of half-integer spin. All quantum statistics were encased by these two types of quantum particle until 1982, when Frank Wilczek coined the term “Anyon” to describe the quantum statistics of particles confined to two dimensions whose behaviors vary between those of a boson and of a fermion.

By David D. Nolte, June 26, 2024

References

[1] A. Einstein. “Quantentheorie des einatomigen idealen Gases”. Sitzungsberichte der Preussischen Akademie der Wissenschaften. 1: 3. (1925)

[2] D. D. Nolte, “The tangled tale of phase space,” Physics Today 63, 33-38 (2010).

[3] Partha Ghose, “The Story of Bose, Photon Spin and Indistinguishability” arXiv:2308.01909 [physics.hist-ph]

[4] Barry R. Masters, “Satyendra Nath Bose and Bose-Einstein Statistics“, Optics and Photonics News, April, pp. 41-47 (2013)

[5] S. N. Bose, “Plancks Gesetz und Lichtquantenhypothese”, Zeitschrift für Physik , 26 (1): 178–181 (1924)

[6] C. V. Raman and S. Bhagavantam, Ind. J. Phys. vol. 6, p. 353, (1931).

[7] Anderson, M. H.; Ensher, J. R.; Matthews, M. R.; Wieman, C. E.; Cornell, E. A. (14 July 1995). “Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor”. Science. 269 (5221): 198–201.


Read more in Books by David Nolte at Oxford University Press

100 Years of Quantum Physics: de Broglie’s Wave (1924)

One hundred years ago this month, in Feb. 1924, a hereditary member of the French nobility, Louis Victor Pierre Raymond, the 7th Duc de Broglie, published a landmark paper in the Philosophical Magazine of London [1] that revolutionized the nascent quantum theory of the day.

Prior to de Broglie’s theory of quantum matter waves, quantum physics had been mired in ad hoc phenomenological prescriptions like Bohr’s theory of the hydrogen atom and Sommerfeld’s theory of adiabatic invariants.  After de Broglie, Erwin Schrödinger would turn the concept of matter waves into the theory of wave mechanics that we still practice today.

Fig. 1 The 1924 paper by de Broglie in the Philosophical Magazine.

The story of how de Broglie came to his seminal idea had an odd twist, based on an initial misconception that helped him get the right answer ahead of everyone else, for which he was rewarded with the Nobel Prize in Physics.

de Broglie’s Early Days

When Louis de Broglie was a student, his older brother Maurice (the 6th Duc de Broglie) was already a practicing physicist making important discoveries in x-ray physics.  Although Louis initially studied history in preparation for a career in law, and he graduated from the Sorbonne with a degree in history, his brother’s profession drew him like a magnet.  He also read Poincaré at this critical juncture in his career, and he was hooked.  He enrolled in the  Faculty of Sciences for his advanced degree, but World War I side-tracked him into the signal corps, where he was assigned to the wireless station on top of the Eiffel Tower.  He may have participated in the famous interception of a coded German transmission in 1918 that helped turn the tide of the war.

Beginning in 1919, Louis began assisting his brother in the well-equipped private laboratory that Maurice had outfitted in the de Broglie ancestral home.  At that time Maurice was performing x-ray spectroscopy of the inner quantum states of atoms, and he was struck by the duality of x-ray properties that made them behave like particles under some conditions and like waves in others.

Fig. 2 Maurice de Broglie in his private laboratory (Figure credit).
Fig. 3 Louis de Broglie (Figure credit)

Through his close work with his brother, Louis also came to subscribe to the wave-particle duality of x-rays and chose it as the topic for his PhD thesis—and hence the twist that launched de Broglie backwards towards his epic theory.

de Broglie’s Massive Photons

Today, we say that photons have energy and momentum although they are massless.  The momentum is a simple consequence of Einstein’s special relativity

$$ E^{2} = p^{2}c^{2} + m^{2}c^{4} $$

And if m = 0, then

$$ E = pc \qquad\Rightarrow\qquad p = \frac{E}{c} = \frac{h\nu}{c} $$

and momentum requires energy but not necessarily mass. 

But de Broglie started out backwards.  He was so convinced of the particle-like nature of the x-ray photons that he first considered what would happen if the photons actually did have mass.  He constructed a massive photon and compared its proper frequency with a Lorentz-boosted frequency observed in a laboratory.  The frequency he set for the photon was like an internal clock, set by its rest-mass energy and by Bohr’s quantization condition

$$ h\nu_{0} = m_{0}c^{2} $$

He then boosted it into the lab frame by time dilation, under which the internal clock runs slow,

$$ \nu_{1} = \nu_{0}\sqrt{1 - \beta^{2}} $$

But the energy would be transformed according to

$$ E = \frac{m_{0}c^{2}}{\sqrt{1 - \beta^{2}}} $$

with a corresponding frequency

$$ \nu = \frac{E}{h} = \frac{\nu_{0}}{\sqrt{1 - \beta^{2}}} $$

which is in direct contradiction with Bohr’s quantization condition.  What is the resolution of this seeming paradox?

de Broglie’s Matter Wave

de Broglie realized that his “massive photon” must satisfy a condition relating the observed lab frequency to the transformed frequency, such that the phase of the internal clock stays in step with the phase of an associated wave evaluated at the position of the particle,

$$ \nu_{1}\,t = \nu\left(t - \frac{\beta x}{c}\right)\Bigg|_{x = \beta c t} $$

This only made sense if his “massive photon” could be represented as a wave with a frequency

$$ \nu = \frac{\nu_{0}}{\sqrt{1 - \beta^{2}}} $$

that propagated with a phase velocity given by c/β.  (Note that β < 1, so that the phase velocity is greater than the speed of light, which is allowed as long as it does not transmit any energy.)
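
A useful complement (standard textbook material rather than part of the February 1924 paper) is that while the phase velocity of the matter wave exceeds c, its group velocity is exactly the particle velocity:

$$ v_{p} = \frac{\omega}{k} = \frac{c}{\beta} = \frac{c^{2}}{v}, \qquad v_{g} = \frac{d\omega}{dk} = v, \qquad v_{p}\,v_{g} = c^{2} $$

so no energy or signal travels faster than light.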

To a modern reader, this all sounds alien, but only because this work in early 1924 represented his first pass at his theory.  As he worked on his thesis through 1924, finally defending it in November of that year, he refined his arguments, recognizing that when he combined his frequency with his phase velocity,

$$ \lambda = \frac{v_{p}}{\nu} = \frac{c/\beta}{\nu_{0}/\sqrt{1-\beta^{2}}} = \frac{h\sqrt{1-\beta^{2}}}{m_{0}c\,\beta} $$

it yielded the wavelength for a matter wave to be

$$ \lambda = \frac{h}{\gamma m_{0} v} = \frac{h}{p} $$

where p was the relativistic mechanical momentum of a massive particle. 
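
For a sense of scale (an illustrative calculation, not one from de Broglie's paper): an electron accelerated through 54 V—the energy later used in the Davisson-Germer experiment—has

$$ \lambda = \frac{h}{\sqrt{2 m_{e} E}} = \frac{6.63\times10^{-34}\ \mathrm{J\,s}}{\sqrt{2\,(9.11\times10^{-31}\ \mathrm{kg})(54\times 1.60\times10^{-19}\ \mathrm{J})}} \approx 1.7\times10^{-10}\ \mathrm{m} $$

comparable to the atomic spacing in a crystal, which is why electron diffraction from a nickel crystal could reveal the wave.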

Using this wavelength, he explained Bohr’s quantization condition as a simple standing wave of the matter wave.  In the light of this derivation, de Broglie wrote

We are then inclined to admit that any moving body may be accompanied by a wave and that it is impossible to disjoin motion of body and propagation of wave.

pg. 450, Philosophical Magazine of London (1924)
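
The standing-wave argument mentioned above can be written in a single line (modern notation): fitting an integer number of wavelengths around a circular orbit of radius r,

$$ n\lambda = 2\pi r \quad\text{with}\quad \lambda = \frac{h}{mv} \quad\Rightarrow\quad mvr = n\hbar $$

which is precisely Bohr's quantization condition for the orbital angular momentum.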

Here was the strongest statement yet of the wave-particle duality of quantum particles. de Broglie went even further and connected the ideas of waves and rays through the Hamilton-Jacobi formalism, an approach that Dirac would extend several years later, establishing the formal connection between Hamiltonian physics and wave mechanics.  Furthermore, de Broglie conceived of a “pilot wave” interpretation that removed some of Einstein’s discomfort with the random character of quantum measurement that ultimately led Einstein to battle Bohr in their famous debates, culminating in the iconic EPR paper that has become a cornerstone for modern quantum information science.  After the wave-like nature of particles was confirmed in the Davisson-Germer experiments, de Broglie received the Nobel Prize in Physics in 1929.

Fig. 4 A standing matter wave is a stationary state of constructive interference. This wavefunction is in the L = 5 quantum manifold of the hydrogen atom.

Louis de Broglie was clearly ahead of his times.  His success was partly due to his isolation from the dogma of the day.  He was able to think without the constraints of preconceived ideas.  But as soon as he became a regular participant in the theoretical discussions of his day, and bowed under the pressure from Copenhagen, his creativity essentially ceased. The subsequent development of quantum mechanics would be dominated by Heisenberg, Born, Pauli, Bohr and Schrödinger, beginning at the 1927 Solvay Congress held in Brussels. 

Fig. 5 The 1927 Solvay Congress.

By David D. Nolte, Feb. 14, 2024


[1] L. de Broglie, “A tentative theory of light quanta,” Philosophical Magazine 47, 446-458 (1924).

Read more in Books by David Nolte at Oxford University Press

Book Preview: Interference and the Story of Optical Interferometry

Interference: The History of Optical Interferometry and the Scientists who Tamed Light is available at Oxford University Press and at Amazon and Barnes & Noble.

The synopses of the first chapters can be found in my previous blog. Here are previews of the final chapters.

Chapter 6. Across the Universe: Exoplanets, Black Holes and Gravitational Waves

Stellar interferometry is opening new vistas of astronomy, exploring the wildest occupants of our universe, from colliding black holes half-way across the universe (LIGO) to images of neighboring black holes (EHT) to exoplanets near Earth that may harbor life.

Image of the supermassive black hole in M87 from Event Horizon Telescope.

Across the Universe: Gravitational Waves, Black Holes and the Search for Exoplanets describes the latest discoveries of interferometry in astronomy, including the use of nulling interferometry in the Very Large Telescope Interferometer (VLTI) to detect exoplanets orbiting distant stars.  The much larger Event Horizon Telescope (EHT) used long-baseline interferometry and the closure phase advanced by Roger Jennison to make the first image of a black hole.  The Laser Interferometer Gravitational-Wave Observatory (LIGO) represented a several-decade-long drive to detect gravitational waves, first predicted by Albert Einstein a hundred years earlier.

Chapter 7. Two Faces of Microscopy: Diffraction and Interference

From the astronomically large dimensions of outer space to the microscopically small dimensions of inner space, optical interference pushes the resolution limits of imaging.

Ernst Abbe. Image Credit.

Two Faces of Microscopy: Diffraction and Interference describes the development of microscopic principles starting with Joseph Fraunhofer and the principle of diffraction gratings that was later perfected by Henry Rowland for high-resolution spectroscopy.  The company of Carl Zeiss advanced microscope technology after enlisting the help of Ernst Abbe who formed a new theory of image formation based on light interference.  These ideas were extended by Fritz Zernike in the development of phase-contrast microscopy.  The ultimate resolution of microscopes, defined by Abbe and known as the Abbe resolution limit, turned out not to be a fundamental limit, but was surpassed by super-resolution microscopy using concepts of interference microscopy and structured illumination.

Chapter 8. Holographic Dreams of Princess Leia: Crossing Beams

The coherence of laser light is like a brilliant jewel that sparkles in the darkness, illuminating life, probing science and projecting holograms in virtual worlds.

Ted Maiman

Holographic Dreams of Princess Leia: Crossing Beams presents the history of holography, beginning with the original ideas of Dennis Gabor who invented optical holography as a means to improve the resolution of electron microscopes.  Holography became mainstream after the demonstrations by Emmett Leith and Juris Upatnieks using lasers that were first demonstrated by Ted Maiman at Hughes Research Lab after suggestions by Charles Townes on the operating principles of the optical maser.  Dynamic holography takes place in crystals that exhibit the photorefractive effect that are useful for adaptive interferometry.  Holographic display technology is under development, using ideas of holography merged with light-field displays that were first developed by Gabriel Lippmann.

Chapter 9. Photon Interference: The Foundations of Quantum Communication and Computing

What is the image of one photon interfering? Better yet, what is the image of two photons interfering? The answer to this crucial question laid the foundation for quantum communication.

Leonard Mandel. Image Credit.

Photon Interference: The Foundations of Quantum Communication moves the story of interferometry into the quantum realm, beginning with the Einstein-Podolsky-Rosen paradox and the principle of quantum entanglement that was refined by David Bohm who tried to banish uncertainty from quantum theory.  John Bell and John Clauser pushed the limits of what can be known from quantum measurement as Clauser tested Bell’s inequalities, confirming the fundamental nonlocal character of quantum systems.  Leonard Mandel pushed quantum interference into the single-photon regime, discovering two-photon interference fringes that illustrated deep concepts of quantum coherence.  Quantum communication began with quantum cryptography and developed into quantum teleportation that can provide the data bus of future quantum computers.

Chapter 10. The Quantum Advantage: Interferometric Computing

There is almost no technical advantage better than having exponential resources at hand. The exponential resources of quantum interference provide that advantage to quantum computing which is poised to usher in a new era of quantum information science and technology.

David Deutsch.

The Quantum Advantage: Interferometric Computing describes the development of quantum algorithms and quantum computing beginning with the first quantum algorithm invented by David Deutsch as a side effect of his attempt to prove the many-worlds interpretation of quantum theory.  Peter Shor found a quantum algorithm that could factor the product of primes and that threatened all secure communications in the world.  Once the usefulness of quantum algorithms was recognized, quantum computing hardware ideas developed rapidly into quantum circuits supported by quantum logic gates.  The limitation of optical interactions, which hampered the development of controlled quantum gates, led to the proposal of linear optical quantum computing and boson sampling in a complex cascade of single-photon interferometers that has been used to demonstrate quantum supremacy, also known as quantum computational advantage, using photonic integrated circuits.


From Oxford Press: Interference

Stories about the trials and toils of the scientists and engineers who tamed light and used it to probe the universe.

A Short History of Quantum Entanglement

Physics poses many apparent paradoxes—the twin and ladder paradoxes of relativity theory, Olbers’ paradox of the bright night sky, Loschmidt’s paradox of irreversible statistical fluctuations—but these are resolved by a deeper look at the underlying assumptions: the twin paradox is resolved by considering shifts in reference frames, the ladder paradox is resolved by the loss of simultaneity, Olbers’ paradox is resolved by a finite age to the universe, and Loschmidt’s paradox is resolved by fluctuation theorems.  In each case, no physical principle is violated, and each paradox is fully explained.

However, there is at least one “true” paradox in physics that defies consistent explanation—quantum entanglement.  Quantum entanglement was first described by Einstein with colleagues Podolsky and Rosen in the famous EPR paper of 1935 as an argument against the completeness of quantum mechanics, and it was given its name by Schrödinger the same year in the paper where he introduced his “cat” as a burlesque consequence of entanglement. 

Here is a short history of quantum entanglement [1], from its beginnings in 1935 to the recent 2022 Nobel prize in Physics awarded to John Clauser, Alain Aspect and Anton Zeilinger.

The EPR Papers of 1935

Einstein can be considered as the father of quantum mechanics, even over Planck, because of his 1905 derivation of the existence of the photon as a discrete carrier of a quantum of energy (see Einstein versus Planck).  Even so, as Heisenberg and Bohr advanced quantum mechanics in the mid 1920’s, emphasizing the underlying non-deterministic outcomes of measurements, and in particular the notion of instantaneous wavefunction collapse, they pushed the theory in directions that Einstein found increasingly disturbing and unacceptable. 

This feature is an excerpt from an upcoming book, Interference: The History of Optical Interferometry and the Scientists Who Tamed Light (Oxford University Press, July 2023), by David D. Nolte.

At the invitation-only Solvay Congresses of 1927 and 1930, where all the top physicists met to debate the latest advances, Einstein and Bohr began a running debate that was epic in the history of physics as the two top minds went head-to-head while onlookers watched in awe.  Ultimately, Einstein was on the losing end.  Although he was convinced that something was missing in quantum theory, he could not counter all of Bohr’s rejoinders, even as Einstein’s assaults became ever more sophisticated, and he left the field of battle beaten but not convinced.  Several years later he launched his last and ultimate salvo.

Fig. 1 Niels Bohr and Albert Einstein

At the Institute for Advanced Study in Princeton, New Jersey, in the 1930’s Einstein was working with Nathan Rosen and Boris Podolsky when he envisioned a fundamental paradox in quantum theory that occurred when two widely-separated quantum particles were required to share specific physical properties because of simple conservation theorems like energy and momentum.  Even Bohr and Heisenberg could not deny the principle of conservation of energy and momentum, and Einstein devised a two-particle system for which these conservation principles led to an apparent violation of Heisenberg’s own uncertainty principle.  He left the details to his colleagues, with Podolsky writing up the main arguments.  They published the paper in the Physical Review in March of 1935 with the title “Can Quantum-Mechanical Description of Physical Reality be Considered Complete?” [2].  Because of the three names on the paper (Einstein, Podolsky, Rosen), it became known as the EPR paper, and the paradox they presented became known as the EPR paradox.

When Bohr read the paper, he was initially stumped and aghast.  He felt that EPR had shaken the very foundations of the quantum theory that he and his institute had fought so hard to establish.  He also suspected that EPR had made a mistake in their arguments, and he halted all work at his institute in Copenhagen until they could construct a definitive answer.  A few months later, Bohr published a paper in the Physical Review in July of 1935, using the identical title that EPR had used, in which he refuted the EPR paradox [3].  There is not a single equation or figure in the paper, but he used his “awful incantation terminology” to maximum effect, showing that one of the EPR assumptions on the assessment of uncertainties to position and momentum was in error, and he was right.

Einstein was disgusted.  He had hoped that this ultimate argument against the completeness of quantum mechanics would stand the test of time, but Bohr had shot it down within mere months.  Einstein was particularly disappointed with Podolsky, because Podolsky had tried too hard to make the argument specific to position and momentum, leaving a loophole for Bohr to wiggle through, where Einstein had wanted the argument to rest on deeper and more general principles. 

Despite Bohr’s victory, Einstein had been correct in his initial formulation of the EPR paradox that showed quantum mechanics did not jibe with common notions of reality.  He and Schrödinger exchanged letters commiserating with each other and encouraging each other in their counter beliefs against Bohr and Heisenberg.  In November of 1935, Schrödinger published a broad, mostly philosophical, paper in Naturwissenschaften [4] in which he amplified the EPR paradox with the use of an absurd—what he called burlesque—consequence of wavefunction collapse that became known as Schrödinger’s Cat.  He also gave the central property of the EPR paradox its name: entanglement.

Ironically, both Einstein’s entanglement paradox and Schrödinger’s Cat, which were formulated originally to be arguments against the validity of quantum theory, have become established quantum tools.  Today, entangled particles are the core workhorses of quantum information systems, and physicists are building larger and larger versions of Schrödinger’s Cat that may eventually merge with the physics of the macroscopic world.

Bohm and Aharonov Tackle EPR

The physicist David Bohm was a rare political exile from the United States.  He was born in the heart of Pennsylvania in the town of Wilkes-Barre, attended Penn State and then the University of California at Berkeley, where he joined Robert Oppenheimer’s research group.  While there, he became deeply involved in the fight for unions and socialism, activities for which he was called before the House Un-American Activities Committee.  He invoked his Fifth Amendment rights, for which he was arrested.  Although he was later acquitted, Princeton University fired him from his faculty position, and, fearing another arrest, he fled to Brazil, where his US passport was confiscated by American authorities.  He had become a physicist without a country. 

Fig. 2 David Bohm

Despite his personal trials, Bohm remained scientifically productive.  He published his influential textbook on quantum mechanics in the midst of his hearings, and after a particularly stimulating discussion with Einstein shortly before he fled the US, he developed and published an alternative version of quantum theory in 1952 that was fully deterministic—removing Einstein’s “God playing dice”—by creating a hidden-variable theory [5].

Hidden-variable theories of quantum mechanics seek to remove the randomness of quantum measurement by assuming that some deeper element of quantum phenomena—a hidden variable—explains each outcome.  But it is also assumed that these hidden variables are not directly accessible to experiment.  In this sense, the quantum theory of Bohr and Heisenberg was “correct” but not “complete”, because there were things that the theory could not predict or explain.

Bohm’s hidden-variable theory, based on a quantum potential, was able to reproduce all the known results of standard quantum theory without invoking the random experimental outcomes that Einstein abhorred.  However, it retained one crucial feature that kept it from sweeping away the EPR paradox—it was nonlocal.

Nonlocality lies at the heart of quantum theory.  In its simplest form, the nonlocal nature of quantum phenomena says that quantum states span spacetime with space-like separations, meaning that parts of the wavefunction are non-causally connected to other parts of the wavefunction.  Because Einstein was fundamentally committed to causality, the nonlocality of quantum theory was what he found most objectionable, and Bohm’s elegant hidden-variable theory, which removed Einstein’s dreaded randomness, could not remove that last objection of non-causality.

After working in Brazil for several years, Bohm moved to the Technion in Israel, where he began a fruitful collaboration with Yakir Aharonov.  In addition to proposing the Aharonov-Bohm effect, in 1957 they reformulated the EPR paradox, replacing Podolsky’s version based on continuous values of position and momentum with a much simpler model based on Stern-Gerlach measurements of spins, and extending it to the case of positronium decay into two photons with correlated polarizations.  Bohm and Aharonov reassessed experimental results on positronium decay obtained by Madame Wu at Columbia University in 1950 and found them in full agreement with standard quantum theory.

John Bell’s Inequalities

John Stuart Bell had an unusual start for a physicist.  His family was too poor to give him an education appropriate to his skills, so he enrolled in vocational school, where he took practical classes that included bricklaying.  Working later as a technician in a university lab, he caught the attention of his professors, who sponsored him to attend the university.  With a degree in physics, he began working as an accelerator designer, where he again caught the attention of his supervisors, who sponsored him to attend graduate school.  He graduated with a PhD and joined CERN as a card-carrying physicist with all the rights and privileges that entailed.

Fig. 3 John Bell

During his university days, he had been fascinated by the EPR paradox, and he continued thinking about the fundamentals of quantum theory.  On a sabbatical to the Stanford accelerator in 1963 he began putting mathematics to the EPR paradox to see whether any local hidden-variable theory could be compatible with quantum mechanics.  His analysis was fully general, so that it could rule out as-yet-unthought-of hidden-variable theories.  The result of this work was a set of inequalities that must be obeyed by any local hidden-variable theory.  Then he made a simple check using the known results of quantum measurement and showed that his inequalities are violated by quantum systems.  This ruled out the possibility of any local hidden-variable theory (but not Bohm’s nonlocal hidden-variable theory).  Bell published his analysis in 1964 [6] in an obscure journal that almost no one read…except for a curious graduate student at Columbia University who began digging into the fundamental underpinnings of quantum theory against his supervisor’s advice.

Fig. 4 Polarization measurements on entangled photons violate Bell’s inequality.

John Clauser’s Tenacious Pursuit

As a graduate student in astrophysics at Columbia University, John Clauser was supposed to be doing astrophysics.  Instead, he spent his time musing over the fundamentals of quantum theory.  In 1967 Clauser stumbled across Bell’s paper while he was in the library.  The paper caught his imagination, but he also recognized that the inequalities were not experimentally testable, because they required measurements that depended directly on hidden variables, which are not accessible.  He began thinking of ways to construct similar inequalities that could be put to an experimental test, and he wrote about his ideas to Bell, who responded with encouragement.  Clauser wrote up his ideas in an abstract for an upcoming meeting of the American Physical Society, where one of the abstract reviewers was Abner Shimony of Boston University.  Clauser was surprised weeks later when he received a telephone call from Shimony.  Shimony and his graduate student Michael Horne had been thinking along similar lines, and Shimony proposed to Clauser that they join forces.  They met in Boston, where they were joined by Richard Holt, a graduate student at Harvard who was working on experimental tests of quantum mechanics.  Collectively, they devised a new type of Bell inequality that could be put to experimental test [7].  The result has become known as the CHSH Bell inequality (after Clauser, Horne, Shimony and Holt).
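
To get a feel for the numbers involved (this is an illustrative sketch, not the CHSH derivation itself), the quantum prediction can be checked directly: for polarization-entangled photons the correlation between polarizer settings a and b is E(a,b) = cos 2(a−b), and the CHSH combination of four settings reaches 2√2 ≈ 2.83, above the local hidden-variable bound of 2.

```python
import numpy as np

def E(a, b):
    # Quantum correlation for polarization-entangled photons (Bell state),
    # with polarizer angles a, b in radians.
    return np.cos(2 * (a - b))

# Polarizer settings that maximize the quantum CHSH value
a1, a2 = 0.0, np.pi / 4            # Alice: 0 and 45 degrees
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob: 22.5 and 67.5 degrees

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"CHSH value S = {S:.3f}")   # ~2.828, violating the classical bound |S| <= 2
```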

Fig. 5 John Clauser

When Clauser took a post-doc position in Berkeley, he began searching for a way to do the experiments to test the CHSH inequality, even though Holt had a head start at Harvard.  Clauser enlisted the help of Charles Townes, who convinced one of the Berkeley faculty to loan Clauser his graduate student, Stuart Freedman, to help.  Clauser and Freedman performed the experiments, using a two-photon optical cascade in calcium atoms, and found a violation of the CHSH inequality by 5 standard deviations, publishing their result in 1972 [8].

Fig. 6 CHSH inequality violated by entangled photons.

Alain Aspect’s Non-locality

Just as Clauser’s life was changed when he stumbled on Bell’s obscure paper in 1967, the paper had the same effect on the life of French physicist Alain Aspect who stumbled on it in 1975.  Like Clauser, he also sought out Bell for his opinion, meeting with him in Geneva, and Aspect similarly received Bell’s encouragement, this time with the hope to build upon Clauser’s work. 

Fig. 7 Alain Aspect

In some respects, the conceptual breakthrough achieved by Clauser had been the CHSH inequality that could be tested experimentally.  The subsequent Clauser-Freedman experiments were not a conclusion but just the beginning, opening the door to deeper tests.  For instance, in the Clauser-Freedman experiments, the polarizers were static, and the detectors were not widely separated, which allowed the measurements to be time-like separated in spacetime.  Therefore, the fundamental non-local nature of quantum physics had not been tested.

Aspect began a thorough and systematic program, which would take him nearly a decade to complete, to test the CHSH inequality under conditions of non-locality.  He began with a much brighter source of photons produced using laser excitation of calcium atoms.  This allowed him to perform the experiment in hundreds of seconds instead of the hundreds of hours that Clauser had needed.  With such a high data rate, Aspect was able to verify violation of the Bell inequality to 10 standard deviations, published in 1981 [9].

However, the real goal was to change the orientations of the polarizers while the photons were in flight to widely separated detectors [10].  This experiment would allow the detection to be space-like separated in spacetime.  The experiments were performed using fast-switching acousto-optic modulators, and the Bell inequality was violated to 5 standard deviations [11].  This was the most stringent test yet performed and the first to fully demonstrate the non-local nature of quantum physics.

Anton Zeilinger: Master of Entanglement

If there is one physicist today whose work encompasses the broadest range of entangled phenomena, it would be the Austrian physicist Anton Zeilinger.  He began his career in neutron interferometry, but when he was bitten by the entanglement bug in 1976, he switched to quantum photonics because of the superior control that optics offers over sources, receivers, and all the optical manipulations in between.

Fig. 8 Anton Zeilinger

Working with Daniel Greenberger and Michael Horne, Zeilinger took the essential next step past the Bohm two-particle entanglement to consider a 3-particle entangled state that had surprising properties.  While the violation of locality by the two-particle entanglement was observed through the statistical properties of many measurements, the new 3-particle entanglement could show violations in single measurements, further strengthening the arguments for quantum non-locality.  This new state is called the GHZ state (after Greenberger, Horne and Zeilinger) [12].

As the Zeilinger group in Vienna was working towards experimental demonstrations of the GHZ state, Charles Bennett of IBM proposed the possibility for quantum teleportation, using entanglement as a core quantum information resource [13].   Zeilinger realized that his experimental set-up could perform an experimental demonstration of the effect, and in a rapid re-tooling of the experimental apparatus [14], the Zeilinger group was the first to demonstrate quantum teleportation that satisfied the conditions of the Bennett teleportation proposal [15].  An Italian-UK collaboration also made an early demonstration of a related form of teleportation in a paper that was submitted first, but published after Zeilinger’s, due to delays in review [16].  But teleportation was just one of a widening array of quantum applications for entanglement that was pursued by the Zeilinger group over the succeeding 30 years [17], including entanglement swapping, quantum repeaters, and entanglement-based quantum cryptography. Perhaps most striking, he has worked on projects at astronomical observatories that entangle photons coming from cosmic sources.

By David D. Nolte Nov. 26, 2022


Read more about the history of quantum entanglement in Interference (New From Oxford University Press, 2023)

A popular account of the trials and toils of the scientists and engineers who tamed light and used it to probe the universe.


Video Lectures

Physics Colloquium on the Backstory of the 2022 Nobel Prize in Physics


Timeline

1935 – Einstein EPR

1935 – Bohr EPR

1935 – Schrödinger: Entanglement and Cat

1950 – Madame Wu positronium decay

1952 – David Bohm and Non-local hidden variables

1957 – Bohm and Aharonov version of EPR

1963 – Bell’s inequalities

1967 – Clauser reads Bell’s paper

1967 – Commins experiment with Calcium

1969 – CHSH inequality: measurable with detection inefficiencies

1972 – Clauser and Freedman experiment

1975 – Aspect reads Bell’s paper

1976 – Zeilinger reads Bell’s paper

1981 – Aspect two-photon generation source

1982 – Aspect time variable analyzers

1988 – Parametric down-conversion of EPR pairs (Shih and Alley, Ou and Mandel)

1989 – GHZ state proposed

1993 – Bennett quantum teleportation proposal

1995 – High-intensity down-conversion source of EPR pairs (Kwiat and Zeilinger)

1997 – Zeilinger quantum teleportation experiment

1999 – Observation of the GHZ state


Bibliography

[1] See the full details in: David D. Nolte, Interference: A History of Interferometry and the Scientists Who Tamed Light (Oxford University Press, July 2023)

[2] A. Einstein, B. Podolsky, N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Physical Review 47, 0777-0780 (1935).

[3] N. Bohr, Can quantum-mechanical description of physical reality be considered complete? Physical Review 48, 696-702 (1935).

[4] E. Schrödinger, Die gegenwärtige Situation in der Quantenmechanik. Die Naturwissenschaften 23, 807-12; 823-28; 844-49 (1935).

[5] D. Bohm, A suggested interpretation of the quantum theory in terms of hidden variables .1. Physical Review 85, 166-179 (1952); D. Bohm, A suggested interpretation of the quantum theory in terms of hidden variables .2. Physical Review 85, 180-193 (1952).

[6] J. Bell, On the Einstein-Podolsky-Rosen paradox. Physics 1, 195 (1964).

[7] J. F. Clauser, M. A. Horne, A. Shimony, R. A. Holt, Proposed experiment to test local hidden-variable theories. Physical Review Letters 23, 880-884 (1969).

[8] S. J. Freedman, J. F. Clauser, Experimental test of local hidden-variable theories. Physical Review Letters 28, 938-941 (1972).

[9] A. Aspect, P. Grangier, G. Roger, Experimental tests of realistic local theories via Bell’s theorem. Physical Review Letters 47, 460-463 (1981).

[10] Alain Aspect, Bell’s Theorem: The Naive View of an Experimentalist. (2004), hal-00001079.

[11] A. Aspect, J. Dalibard, G. Roger, Experimental test of Bell inequalities using time-varying analyzers. Physical Review Letters 49, 1804-1807 (1982).

[12] D. M. Greenberger, M. A. Horne, A. Zeilinger, in 1988 Fall Workshop on Bell’s Theorem, Quantum Theory and Conceptions of the Universe. (George Mason Univ, Fairfax, Va, 1988), vol. 37, pp. 69-72.

[13] C. H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, W. K. Wootters, Teleporting an unknown quantum state via dual classical and einstein-podolsky-rosen channels. Physical Review Letters 70, 1895-1899 (1993).

[14]  J. Gea-Banacloche, Optical realizations of quantum teleportation, in Progress in Optics, Vol 46, E. Wolf, Ed. (2004), vol. 46, pp. 311-353.

[15] D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, A. Zeilinger, Experimental quantum teleportation. Nature 390, 575-579 (1997).

[16] D. Boschi, S. Branca, F. De Martini, L. Hardy, S. Popescu, Experimental realization of teleporting an unknown pure quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett. 80, 1121-1125 (1998).

[17]  A. Zeilinger, Light for the quantum. Entangled photons and their applications: a very personal perspective. Physica Scripta 92, 1-33 (2017).

A Short History of Quantum Tunneling

Quantum physics is often called “weird” because it does things that are not allowed in classical physics and hence is viewed as non-intuitive or strange.  Perhaps the two “weirdest” aspects of quantum physics are quantum entanglement and quantum tunneling.  Entanglement allows a particle state to extend across wide expanses of space, while tunneling allows a particle to have negative kinetic energy.  Neither of these effects has a classical analog.

Quantum entanglement arose out of the Bohr-Einstein debates at the Solvay Conferences in the 1920’s and 30’s, and it was the subject of a recent Nobel Prize in Physics (2022).  The quantum tunneling story is just as old, but it was recognized much earlier by the Nobel Prize in 1973, when it was awarded to Brian Josephson, Ivar Giaever and Leo Esaki—each of whom was a graduate student when they discovered their respective effects and two of whom got their big idea while attending a lecture class.

Always go to class, you never know what you might miss, and the payoff is sometimes BIG

Ivar Giaever

Of the two effects, tunneling is the more common and the more useful in modern electronic devices (although entanglement is coming up fast with the advent of quantum information science). Here is a short history of quantum tunneling, told through a series of publications that advanced theory and experiments.

Double-Well Potential: Friedrich Hund (1927)

The first analysis of quantum tunneling was performed by Friedrich Hund (1896 – 1997), a German physicist who studied early in his career with Born in Göttingen and Bohr in Copenhagen.  He published a series of papers in 1927 in Zeitschrift für Physik [1] that solved the newly-proposed Schrödinger equation for the case of the double well potential.  He was particularly interested in the formation of symmetric and anti-symmetric states of the double well that contributed to the binding energy of atoms in molecules.  He derived the first tunneling-frequency expression for a quantum superposition of the symmetric and anti-symmetric states
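
In modern notation the result has the schematic form (the exact numerical factor in the exponent depends on the details of the barrier)

f \sim \nu \, e^{-V/h\nu}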

where f is the coherent oscillation frequency, V is the height of the potential and hν is the quantum energy of the isolated states when the atoms are far apart.  The exponential dependence on the potential height V made the tunnel effect extremely sensitive to the details of the tunnel barrier.

Fig. 1 Friedrich Hund

Electron Emission: Lothar Nordheim and Ralph Fowler (1927 – 1928)

The first to consider quantum tunneling from a bound state to a continuum state was Lothar Nordheim (1899 – 1985), a German physicist who studied under David Hilbert and Max Born at Göttingen and worked with John von Neumann and Eugene Wigner and later with Hans Bethe. In 1927 he solved the problem of a particle in a well that is separated from continuum states by a thin finite barrier [2]. Using the new Schrödinger theory, he found transmission coefficients that were finite valued, caused by quantum tunneling of the particle through the barrier. Nordheim’s square potential wells and barriers are now, literally, textbook examples that every student of quantum mechanics solves. (For a quantum simulation of wavefunction tunneling through a square barrier, see the companion Quantum Tunneling YouTube video.) Nordheim later escaped the growing nationalism and anti-Semitism in Germany in the mid 1930’s to become a visiting professor of physics at Purdue University in the United States, moving to a permanent position at Duke University.
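
As a rough illustration of the textbook problem that grew out of Nordheim’s work (this is the standard rectangular-barrier result, not his original formulation), the transmission coefficient for a particle of energy E < V0 incident on a barrier of height V0 and width a is T = [1 + V0² sinh²(κa)/(4E(V0−E))]⁻¹ with κ = √(2m(V0−E))/ħ, and it can be evaluated in a few lines:

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg (electron mass)
eV   = 1.602176634e-19   # J

def transmission(E, V0, a, m=m_e):
    """Transmission coefficient for a particle of energy E (< V0) through
    a rectangular barrier of height V0 and width a (textbook result)."""
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a)**2) / (4 * E * (V0 - E)))

# Example: a 1 eV electron hitting a 2 eV barrier that is 0.5 nm wide
print(transmission(1.0 * eV, 2.0 * eV, 0.5e-9))   # ~0.02
```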

Fig. 2 Nordheim square tunnel barrier and Fowler-Nordheim triangular tunnel barrier for electron tunneling from bound states into the continuum.

One of the giants of mathematical physics in the UK from the 1920s through the 1930’s was Ralph Fowler (1889 – 1944). Three of his doctoral students went on to win Nobel Prizes (Chandrasekhar, Dirac and Mott) and others came close (Bhabha, Hartree, Lennard-Jones). In 1928 Fowler worked with Nordheim on a more realistic version of Nordheim’s surface electron tunneling that could explain the field emission of electrons from metals under strong electric fields. The electric field modified Nordheim’s square potential barrier into a triangular barrier (which they treated using WKB theory) to obtain the tunneling rate [3]. This type of tunnel effect is now known as Fowler-Nordheim tunneling.
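
In its modern form (a standard result rather than the notation of the 1928 paper), the Fowler-Nordheim current density has, apart from slowly varying prefactors, the characteristic dependence

J \propto F^{2} \exp\!\left(-\frac{b\,\phi^{3/2}}{F}\right)

where F is the applied electric field, φ is the work function of the metal, and b is a constant; the exponential factor comes from tunneling through the field-tilted triangular barrier.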

Nuclear Alpha Decay: George Gamow (1928)

George Gamow (1904 – 1968) is one of the icons of mid-twentieth-century physics. He was a substantial physicist who also had a solid sense of humor that allowed him to achieve a level of cultural popularity shared by a few of the larger-than-life physicists of his time, like Richard Feynman and Stephen Hawking. His popular books included One Two Three … Infinity as well as a favorite series of books under the rubric of Mr. Tompkins (Mr. Tompkins in Wonderland and Mr. Tompkins Explores the Atom, among others). He also wrote a history of the early years of quantum theory (Thirty Years that Shook Physics).

In 1928 Gamow was in Göttingen (the Mecca of early quantum theory) with Max Born when he realized that the radioactive decay of Uranium by alpha decay might be explained by quantum tunneling. It was known that nucleons were bound together by some unknown force in what would be an effective binding potential, but that charged alpha particles would also feel a strong electrostatic repulsive potential from a nucleus. Gamow combined these two potentials to create a potential landscape that was qualitatively similar to Nordheim’s original system of 1927, but with a potential barrier that was neither square nor triangular (like the Fowler-Nordheim situation).

Fig. 3 George Gamow

Gamow was able to make an accurate approximation that allowed him to express the decay rate in terms of an exponential term
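
In modern notation (Gaussian units), the exponential factor, now known as the Gamow factor, takes the form

\Gamma \propto \exp\!\left(-\frac{2\pi Z Z_{\alpha} e^{2}}{\hbar v}\right)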

where Zα is the atomic charge of the alpha particle, Z is the nuclear charge of the Uranium decay product and v is the speed of the alpha particle detected in external measurements [4].

The very next day after Gamow submitted his paper, Ronald Gurney and Edward Condon of Princeton University submitted a paper [5] that solved the same problem using virtually the same approach … except missing Gamow’s surprisingly concise analytic expression for the decay rate.

Molecular Tunneling: George Uhlenbeck (1932)

Because tunneling rates fall off exponentially with the square root of the mass of the tunneling particle, electrons are far more likely to tunnel through potential barriers than atoms. However, hydrogen is the lightest atom and is therefore the most amenable to tunneling.

The first example of atom tunneling is associated with hydrogen in the ammonia molecule NH3. The molecule has a pyramidal structure with the nitrogen hovering above the plane defined by the three hydrogens. However, an equivalent configuration has the nitrogen hanging below the hydrogen plane. The energies of these two configurations are the same, but the nitrogen must tunnel from one side of the hydrogen plane to the other through a barrier. The presence of light-weight hydrogens that can “move out of the way” for the nitrogen makes this barrier very small (infrared energies). Tunneling of the nitrogen through this barrier splits the vibrational levels into doublets, and the splitting of the lowest level corresponds to a wavelength of about 1.2 cm, which is in the microwave. This tunnel splitting was the first microwave transition observed in spectroscopy and is used in ammonia masers.

Fig. 4 Nitrogen inversion in the ammonia molecule is achieved by excitation to a vibrational excited state followed by tunneling through the barrier, proposed by George Uhlenbeck in 1932.

One of the earliest papers [6] written on the tunneling of nitrogen in ammonia was published by David Dennison and George Uhlenbeck in 1932. George Uhlenbeck (1900 – 1988) was a Dutch-American theoretical physicist. He played a critical role, with Samuel Goudsmit, in establishing the spin of the electron in 1925. Both Uhlenbeck and Goudsmit were close associates of Paul Ehrenfest at Leiden in the Netherlands. Uhlenbeck is also famous for the Ornstein-Uhlenbeck process, which is a generalization of Einstein’s theory of Brownian motion that can treat active transport, such as intracellular transport in living cells.

Solid-State Electron Tunneling: Leo Esaki (1957)

Although the tunneling of electrons in molecular bonds and in field emission from metals had been established early in the century, the direct use of electron tunneling in solid-state devices remained elusive until Leo Esaki (1925 – ) observed electron tunneling in heavily doped germanium and silicon semiconductors. Esaki joined an early precursor of Sony electronics in 1956 and was supported to obtain a PhD from the University of Tokyo. In 1957 he was working with heavily-doped p-n junction diodes and discovered a phenomenon known as negative differential resistance, where the current through an electronic device actually decreases as the voltage increases.

Because the junction thickness was only about 100 atoms, or about 10 nanometers, he suspected and then proved that the electronic current was tunneling quantum mechanically through the junction. The negative differential resistance was caused by a decrease in available states to the tunneling current as the voltage increased.

Fig. 5 Esaki tunnel diode with heavily doped p- and n-type semiconductors. At small voltages, electrons and holes tunnel through the semiconductor bandgap across a junction that is only about 10 nm wide. At higher voltages, the electrons and holes have no accessible states to tunnel into, producing negative differential resistance, where the current decreases with increasing voltage.

Esaki tunnel diodes were the fastest semiconductor devices of the time, and the negative differential resistance of the diode in an external circuit produced high-frequency oscillations. They were used in high-frequency communication systems. They were also radiation hard and hence ideal for the early communications satellites. Esaki was awarded the 1973 Nobel Prize in Physics jointly with Ivar Giaever and Brian Josephson.

Superconducting Tunneling: Ivar Giaever (1960)

Ivar Giaever (1929 – ) is a Norwegian-American physicist who had just joined the GE research lab in Schenectady, New York, in 1958 when he read about Esaki’s tunneling experiments. He was enrolled at that time as a graduate student in physics at Rensselaer Polytechnic Institute (RPI), where he was taking a course in solid state physics and learning about superconductivity. Superconductivity is carried by pairs of electrons, known as Cooper pairs, that spontaneously bind together with a binding energy that produces an “energy gap” in the electron energies of the metal, but no one had ever found a way to measure it directly. The Esaki experiment made him immediately think of the equivalent experiment in which electrons might tunnel into a superconductor (through a thin oxide layer) and yield a measurement of the energy gap. The idea actually came to him during the class lecture.

The experiments used a junction between aluminum and lead (Al—Al2O3—Pb). At first, the temperature of the system was adjusted so that Al remained a normal metal and Pb was superconducting, and Giaever observed a tunnel current with a threshold related to the gap in Pb. Then the temperature was lowered so that both Al and Pb were superconducting, and a peak in the tunnel current appeared at the voltage associated with the difference in the energy gaps (predicted by Harrison and Bardeen).

Fig. 6 Diagram from Giaever “The Discovery of Superconducting Tunneling” at https://conferences.illinois.edu/bcs50/pdf/giaever.pdf

The Josephson Effect: Brian Josephson (1962)

In Giaever’s experiments, the external circuits had been designed to pick up “ordinary” tunnel currents in which individual electrons tunneled through the oxide rather than the Cooper pairs themselves. However, in 1962, Brian Josephson (1940 – ), a physics graduate student at Cambridge, was sitting in a lecture (just like Giaever) on solid state physics given by Phil Anderson (who was on sabbatical there from Bell Labs). During the lecture, he had the idea to calculate whether it was possible for the Cooper pairs themselves to tunnel through the oxide barrier. Building on theoretical work by Leo Falicov, who was at the University of Chicago and later at Berkeley (years later I was lucky to have Leo as my PhD thesis advisor at Berkeley), Josephson found the surprising result that even when the voltage was zero, there would be a supercurrent that tunneled through the junction (now known as the DC Josephson Effect). Furthermore, once a voltage was applied, the supercurrent would oscillate (now known as the AC Josephson Effect). These were strange and non-intuitive results, so he showed Anderson his calculations to see what he thought. By this time Anderson had already been extremely impressed by Josephson (who would often come to the board after one of Anderson’s lectures to show where he had made a mistake). Anderson checked over the theory and agreed with Josephson’s conclusions. Bolstered by this reception, Josephson submitted the theoretical prediction for publication [9].

As soon as Anderson returned to Bell Labs after his sabbatical, he connected with John Rowell, who was making tunnel junction experiments, and they revised the external circuit configuration to be most sensitive to the tunneling supercurrent, which they observed in short order and submitted for publication. Since then, the Josephson Effect has become a standard element of ultra-sensitive magnetometers, measurement standards for charge and voltage, and far-infrared detectors, and has been used to construct rudimentary qubits and quantum computers.

By David D. Nolte: Nov. 6, 2022


YouTube Video

YouTube Video of Quantum Tunneling Systems


References:

[1] F. Hund, Z. Phys. 40, 742 (1927). F. Hund, Z. Phys. 43, 805 (1927).

[2] L. Nordheim, Z. Phys. 46, 833 (1928).

[3] R. H. Fowler, L. Nordheim, Proc. R. Soc. London, Ser. A 119, 173 (1928).

[4] G. Gamow, Z. Phys. 51, 204 (1928).

[5] R. W. Gurney, E. U. Condon, Nature 122, 439 (1928). R. W. Gurney, E. U. Condon, Phys. Rev. 33, 127 (1929).

[6] D. M. Dennison, G. E. Uhlenbeck, The two-minima problem and the ammonia molecule. Physical Review 41, 313-321 (1932).

[7] L. Esaki, New Phenomenon in Narrow Germanium p-n Junctions, Phys. Rev. 109, 603-604 (1958); L. Esaki, Long Journey into Tunneling, Proc. IEEE 62, 825 (1974).

[8] I. Giaever, Energy Gap in Superconductors Measured by Electron Tunneling, Phys. Rev. Letters, 5, 147-148 (1960); I. Giaever, Electron tunneling and superconductivity, Science, 183, 1253 (1974)

[9] B. D. Josephson, Phys. Lett. 1, 251 (1962); B.D. Josephson, The discovery of tunneling supercurrent, Science, 184, 527 (1974).

[10] P. W. Anderson, J. M. Rowell, Phys. Rev. Lett. 10, 230 (1963); Philip W. Anderson, How Josephson discovered his effect, Physics Today 23, 11, 23 (1970)

[11] Eugen Merzbacher, The Early History of Quantum Tunneling, Physics Today 55, 8, 44 (2002)

[12] Razavy, Mohsen. Quantum Theory Of Tunneling, World Scientific Publishing Company, 2003.



Interference (New from Oxford University Press, 2023)

Read the stories of the scientists and engineers who tamed light and used it to probe the universe.

Available from Amazon.

Available from Oxford U Press

Available from Barnes & Noble

Is There a Quantum Trajectory? The Phase-Space Perspective

At the dawn of quantum theory, Heisenberg, Schrödinger, Bohr and Pauli were embroiled in a dispute over whether trajectories of particles, defined by their positions over time, could exist. The argument against trajectories was based on an apparent paradox: To draw a “line” depicting a trajectory of a particle along a path implies that there is a momentum vector that carries the particle along that path. But a line is a one-dimensional curve through space, and since at any point in time the particle’s position is perfectly localized, then by Heisenberg’s uncertainty principle, it can have no definable momentum to carry it along.

My previous blog shows the way out of this paradox, by assembling wavepackets that are spread in both space and momentum, explicitly obeying the uncertainty principle. This is nothing new to anyone who has taken a quantum course. But the surprising thing is that in some potentials, like a harmonic potential, the wavepacket travels without broadening, just like classical particles on a trajectory. A dramatic demonstration of this can be seen in this YouTube video. But other potentials “break up” the wavepacket, especially potentials that display classical chaos. Because phase space is one of the best tools for studying classical chaos, especially Hamiltonian chaos, it can be enlisted to dig deeper into the question of the quantum trajectory—not just about the existence of a quantum trajectory, but why quantum systems retain a shadow of their classical counterparts.

Phase Space

Phase space is the state space of Hamiltonian systems. Concepts of phase space were first developed by Boltzmann as he worked on the problem of statistical mechanics. Phase space was later codified by Gibbs for statistical mechanics and by Poincaré for orbital mechanics, and it was finally given its name by Paul and Tatiana Ehrenfest (a husband-wife team) in correspondence with the German physicist Paul Hertz (See Chapter 6, “The Tangled Tale of Phase Space”, in Galileo Unbound by D. D. Nolte (Oxford, 2018)).

The stretched-out phase-space functions … are very similar to the stochastic layer that forms in separatrix chaos in classical systems.

The idea of phase space is very simple for classical systems: it is just a plot of the momentum of a particle as a function of its position. For a given initial condition, the trajectory of a particle through its natural configuration space (for instance our 3D world) is traced out as a path through phase space. Because there is one momentum variable per degree of freedom, the dimensionality of phase space for a particle in 3D is 6D, which is difficult to visualize. But for a one-dimensional dynamical system, like a simple harmonic oscillator (SHO) oscillating in a line, the phase space is just two-dimensional, which is easy to see. The phase-space trajectories of an SHO are simply ellipses, and if the momentum axis is scaled appropriately, the trajectories are circles. The particle trajectory in phase space can be animated just like a trajectory through configuration space as the position and momentum (x(t), p(t)) change in time. For the SHO, the point follows the path of a circle going clockwise.

Fig. 1 Phase space of the simple harmonic oscillator. The “orbits” have constant energy.

A more interesting phase space is for the simple pendulum, shown in Fig. 2. There are two types of orbits: open and closed. The closed orbits near the origin are like those of a SHO. The open orbits are when the pendulum is spinning around. The dividing line between the open and closed orbits is called a separatrix. Where the separatrix intersects itself is a saddle point. This saddle point is the most important part of the phase space portrait: it is where chaos emerges when perturbations are added.

Fig. 2 Phase space for a simple pendulum. For small amplitudes the orbits are closed like those of a SHO. For large amplitudes the orbits become open as the pendulum spins about its axis. (Reproduced from Introduction to Modern Dynamics, 2nd Ed., pg. )
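
These portraits are easy to generate numerically. A minimal sketch (assuming a pendulum with unit mass, length, and gravitational acceleration, so that H = p²/2 − cos θ) reproduces the closed orbits, the open orbits, and the separatrix of Fig. 2 as contours of constant energy:

```python
import numpy as np
import matplotlib.pyplot as plt

# Pendulum Hamiltonian H(theta, p) = p^2/2 - cos(theta)  (m = l = g = 1)
theta = np.linspace(-2 * np.pi, 2 * np.pi, 400)
p = np.linspace(-3, 3, 400)
TH, P = np.meshgrid(theta, p)
H = P**2 / 2 - np.cos(TH)

plt.contour(TH, P, H, levels=np.linspace(-0.9, 3, 15), colors='gray')
plt.contour(TH, P, H, levels=[1.0], colors='red')   # the separatrix (E = +1)
plt.xlabel('angle')
plt.ylabel('angular momentum')
plt.title('Phase space of the simple pendulum')
plt.show()
```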

One route to classical chaos is through what is known as “separatrix chaos”. It is easy to see why saddle points (also known as hyperbolic points) are the source of chaos: as the system trajectory approaches the saddle, it has two options of which directions to go. Any additional degree of freedom in the system (like a harmonic drive) can make the system go one way on one approach, and the other way on another approach, mixing up the trajectories. An example of the stochastic layer of separatrix chaos is shown in Fig. 3 for a damped driven pendulum. The chaotic behavior that originates at the saddle point extends out along the entire separatrix.

Fig. 3 The stochastic layer of separatrix chaos for a damped driven pendulum. (Reproduced from Introduction to Modern Dynamics, 2nd Ed., pg. )

The main question about whether or not there is a quantum trajectory depends on how quantum packets behave as they approach a saddle point in phase space. Since packets are spread out, it would be reasonable to assume that parts of the packet will go one way, and parts of the packet will go another. But first, one has to ask: Is a phase-space description of quantum systems even possible?

Quantum Phase Space: The Wigner Distribution Function

Phase-space portraits are arguably the most powerful tool in the toolbox of classical dynamics, and one would like to retain its uses for quantum systems. However, there is that pesky paradox about quantum trajectories that cannot admit the existence of one-dimensional curves through such a phase space. Furthermore, there is no direct way of taking a wavefunction and simply “finding” its position or momentum to plot points on such a quantum phase space.

The answer was found in 1932 by Eugene Wigner (1902 – 1995), a Hungarian physicist working at Princeton. He realized that it was impossible to construct a quantum probability distribution in phase space that had positive values everywhere. This is a problem, because negative probabilities have no direct interpretation. But Wigner showed that if one relaxed the requirements a bit, so that expectation values computed over some distribution function (that had positive and negative values) gave correct answers that matched experiments, then this distribution function would “stand in” for an actual probability distribution.

The distribution function that Wigner found is called the Wigner distribution function. Given a wavefunction ψ(x), the Wigner distribution is defined as
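
W(x,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty}\psi^{*}(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy

(written here in one of several equivalent normalization conventions).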

Fig. 4 Wigner distribution function in (x, p) phase space.

The Wigner distribution function is essentially the Fourier transform of the two-point autocorrelation of the wavefunction. The pure position dependence of the wavefunction is converted into a spread-out position-momentum function in phase space. For a Gaussian wavefunction ψ(x) with a finite width in space, the W-function in phase space is a two-dimensional Gaussian with finite widths in both space and momentum. In fact, the Δx-Δp product of the W-function is precisely the uncertainty product of the Heisenberg uncertainty relation.
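
As a sketch of how such a distribution can be computed in practice (a brute-force evaluation of the defining integral on a grid, with illustrative parameters, not an optimized method), the following builds the Wigner function of a Gaussian wavepacket; for a single Gaussian the result is a positive 2D Gaussian in (x, p), while a superposition of two such packets would add the interference fringes discussed below:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-6, 6, 121)     # position grid
p = np.linspace(-4, 4, 121)     # momentum grid
y = np.linspace(-6, 6, 241)     # integration variable
dy = y[1] - y[0]

# Gaussian wavepacket with mean position x0 and mean momentum p0 (illustrative values)
x0, p0, sigma = 1.0, 1.5, 1.0
def psi(xq):
    return np.exp(-(xq - x0)**2 / (2 * sigma**2) + 1j * p0 * xq / hbar) / (np.pi * sigma**2)**0.25

# Brute-force W(x,p) = (1/(pi*hbar)) * Integral of psi*(x+y) psi(x-y) exp(2ipy/hbar) dy
W = np.zeros((len(p), len(x)))
for i, pv in enumerate(p):
    phase = np.exp(2j * pv * y / hbar)
    for j, xv in enumerate(x):
        W[i, j] = np.real(np.sum(np.conj(psi(xv + y)) * psi(xv - y) * phase) * dy) / (np.pi * hbar)

dx, dp = x[1] - x[0], p[1] - p[0]
print(W.max(), W.sum() * dx * dp)   # peak ~ 1/(pi*hbar) ~ 0.32, total integral ~ 1
```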

The question of the quantum trajectory from the phase-space perspective becomes whether a Wigner function behaves like a localized “packet” that evolves in phase space in a way analogous to a classical particle, and whether classical chaos is reflected in the behavior of quantum systems.

The Harmonic Oscillator

The quantum harmonic oscillator is a rare and special case among quantum potentials, because the energy spacings between all successive states are all the same. This makes it possible for a Gaussian wavefunction, which is a superposition of the eigenstates of the harmonic oscillator, to propagate through the potential without broadening. To see an example of this, watch the first example in this YouTube video for a Schrödinger cat state in a two-dimensional harmonic potential. For this very special potential, the Wigner distribution behaves just like a (broadened) particle on an orbit in phase space, executing nice circular orbits.

A comparison of the classical phase-space portrait versus the quantum phase-space portrait is shown in Fig. 5. Where the classical particle is a point on an orbit, the quantum particle is spread out, obeying the Δx-Δp Heisenberg product, but following the same orbit as the classical particle.

Fig. 5 Classical versus quantum phase-space portraits for a harmonic oscillator. For a classical particle, the trajectory is a point executing an orbit. For a quantum particle, the trajectory is a Wigner distribution that follows the same orbit as the classical particle.

However, a significant new feature appears in the Wigner representation in phase space when there is a coherent superposition of two states, known as a “cat” state, after Schrödinger’s cat. This new feature has no classical analog. It is the coherent interference pattern that appears at the zero-point of the harmonic oscillator for the Schrödinger cat state. There is no such thing as “classical” coherence, so this feature is absent in classical phase space portraits.

Two examples of Wigner distributions are shown in Fig. 6 for a statistical (incoherent) mixture of packets and a coherent superposition of packets. The quantum coherence signature is present in the coherent case but not in the statistical mixture. The coherence in the Wigner distribution represents “off-diagonal” terms in the density matrix that lead to interference effects in quantum systems. Quantum computing algorithms depend critically on such coherences, which tend to decay rapidly in real-world physical systems (a process known as decoherence), and it is possible to make statements about decoherence by watching the zero-point interference.

Fig. 6 Quantum phase-space portraits of double wave packets. On the left, the wave packets have no coherence, being a statistical mixture. On the right is the case for a coherent superposition, or “cat state” for two wave packets in a one-dimensional harmonic oscillator.

Whereas Gaussian wave packets in the quantum harmonic potential behave nearly like classical systems, and their phase-space portraits are almost identical to the classical phase-space view (except for the quantum coherence), most quantum potentials cause wave packets to disperse. And when saddle points are present in the classical case, then we are back to the question about how quantum packets behave as they approach a saddle point in phase space.

Quantum Pendulum and Separatrix Chaos

One of the simplest anharmonic oscillators is the simple pendulum. In the classical case, the period diverges if the pendulum gets very close to going vertical. A similar thing happens in the quantum case, but because the motion is strongly anharmonic, an initial wave packet tends to spread dramatically as the parts of the wavefunction farther from vertical pull away from the parts that are more nearly vertical. Fig. 7 is a snapshot taken about an eighth of a period after the wave packet was launched. The packet has already stretched out along the separatrix. A double-cat-state was used, so there is a second packet that has coherent interference with the first. To see a movie of the time evolution of the wave packet and the orbit in quantum phase space, see the YouTube video.

Fig. 7 Wavefunction of a quantum pendulum released near vertical. The phase-space portrait is very similar to the classical case, except that the phase-space distribution is stretched out along the separatrix. The initial state for the phase-space portrait was a cat state.

The simple pendulum does have a saddle point, but it is degenerate because the angle is defined modulo 2π. A simple potential that has a non-degenerate saddle point is a double-well potential.

Quantum Double-Well and Separatrix Chaos

The symmetric double-well potential has a saddle point at the mid-point between the two well minima. A wave packet approaching the saddle will split into two packets that follow the individual separatrixes emerging from the saddle point (the unstable manifolds). This effect is seen most dramatically in the middle pane of Fig. 8. For the full video of the quantum phase-space evolution, see this YouTube video. The stretched-out distribution in phase space is highly analogous to the separatrix chaos seen for the classical system.

Fig. 8 Phase-space portraits of the Wigner distribution for a wavepacket in a double-well potential. The packet approaches the central saddle point, where the probability density splits along the unstable manifolds.

Conclusion

A common statement often made about quantum chaos is that quantum systems tend to suppress chaos, only exhibiting chaos for special types of orbits that produce quantum scars. However, from the phase-space perspective, the opposite may be true. The stretched-out Wigner distribution functions, for critical wave packets that interact with a saddle point, are very similar to the stochastic layer that forms in separatrix chaos in classical systems. In this sense, the phase-space description brings out the similarity between classical chaos and quantum chaos.

By David D. Nolte Sept. 25, 2022


YouTube Video


For more on the history of quantum trajectories, see Galileo Unbound from Oxford Press:


References

1. T. Curtright, D. Fairlie, C. Zachos, A Concise Treatise on Quantum Mechanics in Phase Space.  (World Scientific, New Jersey, 2014).

2. J. R. Nagel, A Review and Application of the Finite-Difference Time-Domain Algorithm Applied to the Schrödinger Equation, ACES Journal, Vol. 24, NO. 1, pp. 1-8 (2009)

Is There a Quantum Trajectory?

Heisenberg’s uncertainty principle is a law of physics – it cannot be violated under any circumstances, no matter how much we may want it to yield or how hard we try to bend it.  Heisenberg, as he developed his ideas after his lone epiphany like a monk on the isolated island of Helgoland off the north coast of Germany in 1925, became a bit of a zealot, like a religious convert, convinced that all we can say about reality is a measurement outcome.  In his view, there was no independent existence of an electron other than what emerged from a measuring apparatus.  Reality, to Heisenberg, was just a list of numbers in a spread sheet—matrix elements.  He took this line of reasoning so far that he stated without exception that there could be no such thing as a trajectory in a quantum system.  When the great battle commenced between Heisenberg’s matrix mechanics against Schrödinger’s wave mechanics, Heisenberg was relentless, denying any reality to Schrödinger’s wavefunction other than as a calculation tool.  He was so strident that even Bohr, who was on Heisenberg’s side in the argument, advised Heisenberg to relent [1].  Eventually a compromise was struck, as Heisenberg’s uncertainty principle allowed Schrödinger’s wave functions to exist within limits—his uncertainty limits.

Disaster in the Poconos

Yet the idea of an actual trajectory of a quantum particle remained a type of heresy within the close quantum circles.  Years later in 1948, when a young Richard Feynman took the stage at a conference in the Poconos, he almost sabotaged his career in front of Bohr and Dirac—two of the giants who had invented quantum mechanics—by having the audacity to talk about particle trajectories in spacetime diagrams.

Feynman was making his first presentation of a new approach to quantum mechanics that he had developed based on path integrals. The challenge was that his method relied on space-time graphs in which “unphysical” things were allowed to occur.  In fact, unphysical things were required to occur, as part of the sum over many histories of his path integrals.  For instance, a key element in the approach was allowing electrons to travel backwards in time as positrons, or a process in which the electron and positron annihilate into a single photon, and then the photon decays back into an electron-positron pair—a process that is not allowed by mass and energy conservation.  But this is a possible history that must be added to Feynman’s sum.

It all looked like nonsense to the audience, and the talk quickly derailed.  Dirac pestered him with questions that he tried to deflect, but Dirac persisted like a raven.  A question was raised about the Pauli exclusion principle, about whether an orbital could have three electrons instead of the required two, and Feynman said that it could—all histories were possible and had to be summed over—an answer that dismayed the audience.  Finally, as Feynman was drawing another of his space-time graphs showing electrons as lines, Bohr rose to his feet and asked derisively whether Feynman had forgotten Heisenberg’s uncertainty principle that made it impossible to even talk about an electron trajectory.

It was hopeless.  The audience gave up and so did Feynman as the talk just fizzled out.  It was a disaster.  What had been meant to be Feynman’s crowning achievement and his entry to the highest levels of theoretical physics, had been a terrible embarrassment.  He slunk home to Cornell where he sank into one of his depressions.  At the close of the Pocono conference, Oppenheimer, the reigning king of physics, former head of the successful Manhattan Project and newly selected to head the prestigious Institute for Advanced Study at Princeton, had been thoroughly disappointed by Feynman.

But what Bohr and Dirac and Oppenheimer had failed to understand was that as long as the duration of the unphysical processes was shorter than ħ divided by the energy differences involved, they literally obeyed Heisenberg’s uncertainty principle.  Furthermore, Feynman’s trajectories—what became his famous “Feynman Diagrams”—were meant to be merely cartoons—a shorthand way to keep track of lots of different contributions to a scattering process.  The quantum processes certainly took place in space and time, conceptually like a trajectory, but only so far as the time durations, energy differences, locations and momentum changes were all within the bounds of the uncertainty principle.  Feynman had invented a bold new tool for quantum field theory, able to supply deep results quickly.  But no one at the Poconos could see it.

Fig. 1 The first Feynman diagram.

Coherent States

When Feynman had failed so miserably at the Pocono conference, he had taken the stage after Julian Schwinger, who had dazzled everyone with his perfectly scripted presentation of quantum field theory—the competing theory to Feynman’s.  Schwinger emerged the clear winner of the contest.  At that time, Roy Glauber (1925 – 2018) was a young physicist just taking his PhD from Schwinger at Harvard, and he later received a post-doc position at Princeton’s Institute for Advanced Study, where he became part of a miniature revolution in quantum field theory that revolved around—not Schwinger’s difficult mathematics—but Feynman’s diagrammatic method.  So Feynman won in the end.  Glauber then went on to Caltech, where he filled in for Feynman’s lectures when Feynman was off in Brazil playing the bongos.  Glauber eventually returned to Harvard, where he was already thinking about the quantum aspects of photons in 1956 when news of the photon correlations in the Hanbury Brown-Twiss (HBT) experiment was published.  Three years later, when the laser was invented, he began developing a theory of photon correlations in laser light that he suspected would be fundamentally different from those in natural chaotic light.

Because of his background in quantum field theory, and especially quantum electrodynamics, it was fairly easy for him to couch the quantum optical properties of coherent light in terms of Dirac’s creation and annihilation operators of the electromagnetic field. Glauber developed a “coherent state” operator that was a minimum-uncertainty state of the quantized electromagnetic field, related to the minimum-uncertainty wave functions derived initially by Schrödinger in the late 1920’s. The coherent state represents a laser operating well above the lasing threshold and behaves as “the most classical” wavepacket that can be constructed.  Glauber was awarded the Nobel Prize in Physics in 2005 for his work on such “Glauber states” in quantum optics.

Fig. 2 Roy Glauber

Quantum Trajectories

Glauber’s coherent states are built up from the natural modes of a harmonic oscillator.  Therefore, it should come as no surprise that these coherent-state wavefunctions in a harmonic potential behave just like classical particles with well-defined trajectories. The quadratic potential matches the quadratic argument of the Gaussian wavepacket, and the pulses propagate within the potential without broadening, as in Fig. 3, which shows a snapshot of two wavepackets propagating in a two-dimensional harmonic potential. This is a somewhat radical situation, because most wavepackets in most potentials (or even in free space) broaden as they propagate. The quadratic potential is a special case that is generally not representative of how quantum systems behave.

Fig. 3 Harmonic potential in 2D and two examples of pairs of pulses propagating without broadening. The wavepackets in the center are oscillating in line, and the wavepackets on the right are orbiting the center of the potential in opposite directions. (Movies of the quantum trajectories can be viewed at Physics Unbound.)

To illustrate this special status of the quadratic potential, the wavepackets can be launched in a potential with a quartic perturbation. The quartic potential is anharmonic—the frequency of oscillation depends on the amplitude of oscillation, unlike the harmonic oscillator, where amplitude and frequency are independent. The quartic potential is integrable, like the harmonic oscillator, and there is no avenue for chaos in the classical analog. Nonetheless, wavepackets broaden as they propagate in the quartic potential, eventually spreading out into a ring in configuration space, as in Fig. 4.

Fig. 4 Potential with a quartic correction. The initial Gaussian pulses spread into a “ring” orbiting the center of the potential.

An integrable potential has as many conserved quantities of the motion as there are degrees of freedom. Because the quartic potential is integrable, the quantum wavefunction may spread, but it remains highly regular, as in the “ring” that eventually forms over time. However, integrable potentials are the exception rather than the rule. Most potentials lead to nonintegrable motion that opens the door to chaos.

A classic (and classical) potential that exhibits chaos in a two-dimensional configuration space is the famous Henon-Heiles potential. It has a four-dimensional phase space, which admits classical chaos. The potential has a three-fold symmetry, which is one reason it is non-integrable, since a particle must “decide” which way to go when it approaches a saddle point. In the quantum regime, wavepackets face the same decision, leading to a breakup of the wavepacket on top of a general broadening. This allows the wavefunction eventually to distribute across the entire configuration space, as in Fig. 5.
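
For reference (in its standard form, with the anharmonic strength written here as λ), the potential is

V(x,y) = \frac{1}{2}\left(x^{2}+y^{2}\right) + \lambda\left(x^{2}y - \frac{y^{3}}{3}\right)

whose cubic terms break integrability and create the saddle points of the three-fold symmetric potential.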

Fig. 5 The Henon-Heiles two-dimensional potential supports Hamiltonian chaos in the classical regime. In the quantum regime, the wavefunction spreads to eventually fill the accessible configuration space (for constant energy).

Youtube Video

Movies of quantum trajectories can be viewed at my Youtube Channel, Physics Unbound. The answer to the question “Is there a quantum trajectory?” can be seen visually as the movies run—they do exist in a very clear sense under special conditions, especially coherent states in a harmonic oscillator. And the concept of a quantum trajectory also carries over from a classical trajectory in cases when the classical motion is integrable, even in cases when the wavefunction spreads over time. However, for classical systems that display chaotic motion, wavefunctions that begin as coherent states break up into chaotic wavefunctions that fill the accessible configuration space for a given energy. The character of quantum evolution of coherent states—the most classical of quantum wavefunctions—in these cases reflects the underlying character of chaotic motion in the classical analogs. This process can be seen directly watching the movies as a wavepacket approaches a saddle point in the potential and is split. Successive splits of the multiple wavepackets as they interact with the saddle points is what eventually distributes the full wavefunction into its chaotic form.

Therefore, the idea of a “quantum trajectory”, so thoroughly dismissed by Heisenberg, remains a phenomenological guide that can help give insight into the behavior of quantum systems—both integrable and chaotic.

As a side note, the laws of quantum physics obey time-reversal symmetry just as the classical equations do. In the third movie of “A Quantum Ballet“, wavefunctions in a double-well potential are tracked in time as they start from coherent states that break up into chaotic wavefunctions. It is like watching entropy in action as an ordered state devolves into a disordered state. But at the half-way point of the movie, the imaginary part of the wavefunction has its sign flipped, and the dynamics continue. But now the wavefunctions move from disorder into an ordered state, seemingly going against the second law of thermodynamics. Flipping the sign of the imaginary part of the wavefunction at just one instant in time plays the role of a time-reversal operation, and there is no violation of the second law.

By David D. Nolte, Sept. 4, 2022


YouTube Video

YouTube Video of Quantum Trajectories


For more on the history of quantum trajectories, see Galileo Unbound from Oxford Press:


References

[1] See Chapter 8 , On the Quantum Footpath, in Galileo Unbound, D. D. Nolte (Oxford University Press, 2018)

[2] J. R. Nagel, A Review and Application of the Finite-Difference Time-Domain Algorithm Applied to the Schrödinger Equation, ACES Journal, Vol. 24, NO. 1, pp. 1-8 (2009)

Quantum Chaos and the Cheshire Cat

Alice’s disturbing adventures in Wonderland tumbled upon her like a string of accidents as she wandered a world of chaos.  Rules were never what they seemed and shifted whenever they wanted.  She even met a cat who grinned ear-to-ear and could disappear entirely, or almost entirely, leaving only its grin.

The vanishing Cheshire Cat reminds us of another famous cat—Arnold’s Cat—that introduced the ideas of stretching and folding of phase-space volumes in non-integrable Hamiltonian systems.  But when Arnold’s Cat becomes a Quantum Cat, a central question remains: What happens to the chaotic behavior of the classical system … does it survive the transition to quantum mechanics?  The answer is surprisingly like the grin of the Cheshire Cat—the cat vanishes, but the grin remains.  In the quantum world of the Cheshire Cat, the grin of the classical cat remains even after the rest of the cat vanished. 
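
The classical Arnold cat map behind this picture is simple to state: it stretches and folds the unit square through the map (x, y) → (2x + y, x + y) mod 1. A minimal numerical sketch shows how quickly an initially compact blob of points is smeared across the square, which is the classical stretching and folding that the quantum version softens:

```python
import numpy as np

def cat_map(points, n_steps=1):
    """Apply Arnold's cat map (x, y) -> (2x + y, x + y) mod 1 to an array of points."""
    pts = np.array(points, dtype=float)
    for _ in range(n_steps):
        x, y = pts[:, 0], pts[:, 1]
        pts = np.column_stack(((2 * x + y) % 1.0, (x + y) % 1.0))
    return pts

# A compact cloud of points (the "cat") near the center of the unit square
rng = np.random.default_rng(0)
cloud = 0.5 + 0.05 * rng.standard_normal((2000, 2))

for n in (0, 1, 3, 6):
    print(n, cat_map(cloud, n).std(axis=0))
    # the cloud stretches along the unstable direction, wraps around the square,
    # and its spread approaches that of a uniform distribution (~0.29)
```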

The Cheshire Cat fades away, leaving only its grin, like a fine filament, as classical chaos fades into quantum, leaving behind a quantum scar.

The Quantum Mechanics of Classically Chaotic Systems

The simplest Hamiltonian systems are integrable—they have as many constants of the motion as degrees of freedom.  This holds for quantum systems as well as for classical ones.  There is also a strong correspondence between classical and quantum systems in the integrable cases—literally the Correspondence Principle—which states that quantum systems at high quantum number approach classical behavior.  Even at low quantum numbers, classical resonances are mirrored by quantum eigenfrequencies that can show highly regular spectra.

But integrable systems are rare—surprisingly rare.  Almost no real-world Hamiltonian system is integrable, because the real world warps the ideal.  No spring can displace indefinitely, and no potential is perfectly quadratic.  There are always real-world non-idealities that destroy one constant of the motion or another, opening the door to chaos.

When classical Hamiltonian systems become chaotic, they don’t do it suddenly.  Almost all transitions to chaos in Hamiltonian systems are gradual.  One of the best examples of this is KAM theory (Kolmogorov-Arnold-Moser), which starts with invariant action integrals that generate invariant tori in phase space.  As nonintegrable perturbations increase, the tori break up slowly into island chains of stability as chaos infiltrates the separatrices—first as thin filaments of chaos surrounding the islands—then growing in width to take up more and more of phase space.  Even when chaos is fully developed, small islands of stability can remain—the remnants of stable orbits of the unperturbed system.
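The breakup of tori into island chains is easy to visualize numerically. The short sketch below uses the Chirikov standard map, a kicked-rotor model that is not discussed in this post but that serves as the textbook stand-in for a weakly perturbed integrable system; the kick strength K plays the role of the nonintegrable perturbation.

```python
import numpy as np
import matplotlib.pyplot as plt

def standard_map(theta, p, K, n_steps):
    """Iterate the Chirikov standard map: p' = p + K sin(theta), theta' = theta + p'."""
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        traj[i] = theta, p
    return traj

# Small K: invariant tori survive; larger K: island chains and chaotic separatrix layers appear.
for K in (0.5, 1.2):
    plt.figure(figsize=(5, 5))
    for theta0 in np.linspace(0, 2 * np.pi, 20, endpoint=False):
        for p0 in np.linspace(0, 2 * np.pi, 10, endpoint=False):
            pts = standard_map(theta0, p0, K, 300)
            plt.plot(pts[:, 0], pts[:, 1], ',')
    plt.title(f"Standard map, K = {K}")
    plt.xlabel("theta")
    plt.ylabel("p")
plt.show()
```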

When the classical becomes quantum, chaos softens.  Quantum wave functions don’t like to be confined—they spread and they tunnel.  The separatrix of classical chaos—that barrier between regions of phase space—cannot constrain the exponential tails of wave functions.  And the origin of chaos itself—the homoclinic point of the separatrix—gets washed out.  Then the regular orbits of the classical system reassert themselves, and they appear, like the vestige of the Cheshire Cat, as a grin.

The Quantum Circus

The empty stadium is a surprisingly rich dynamical system that has unexpected structure in both the classical and the quantum domain.  Its importance in classical dynamics comes from the fact that its periodic orbits are unstable and its non-periodic orbits are ergodic (filling all of the available space, given enough time).  The stadium itself is empty so that particles (classical or quantum) are free to propagate between reflections from the perfectly reflecting walls of the stadium.  The ergodicity comes from the fact that the stadium—like a classic Roman chariot-race stadium, also known as a circus—is not a circle, but has a straight stretch between two half circles.  This simple modification takes the stable orbits of the circle into the unstable orbits of the stadium.

A single classical orbit in a stadium is shown in Fig. 1. This is an ergodic orbit that is non-periodic and eventually would fill the entire stadium space. There are other orbits that are nearly periodic, such as one that bounces back and forth vertically between the linear portions, but even this orbit will eventually wander into the circular part of the stadium and then become ergodic. The big quantum-classical question is: what happens to these classical orbits when the stadium is shrunk to the nanoscale?

Fig. 1 A classical trajectory in a stadium. It will eventually visit every point, a property known as ergodicity.
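A trajectory like the one in Fig. 1 can be generated with a few lines of code. The sketch below is not the code used for the figure; it assumes a stadium with straight half-length L and cap radius R, unit speed, and approximate specular reflection applied at small fixed time steps.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stadium: straight walls at y = +/-R for |x| <= L, semicircular caps of radius R centered at (+/-L, 0)
L, R = 1.0, 1.0
dt = 0.002

pos = np.array([0.1, 0.0])                      # start slightly off-center to avoid a periodic orbit
vel = np.array([np.cos(0.7), np.sin(0.7)])      # unit speed, arbitrary launch angle

path = []
for _ in range(100_000):
    pos = pos + vel * dt
    if abs(pos[0]) <= L:
        # straight sections: reflect off the top and bottom walls
        if abs(pos[1]) > R:
            vel[1] = -vel[1]
            pos[1] = np.clip(pos[1], -R, R)
    else:
        # circular caps: reflect the velocity about the outward radial normal
        center = np.array([np.sign(pos[0]) * L, 0.0])
        r = pos - center
        if np.linalg.norm(r) > R:
            n = r / np.linalg.norm(r)
            vel = vel - 2 * np.dot(vel, n) * n
            pos = center + n * R                # push the point back onto the wall
    path.append(pos.copy())

path = np.array(path)
plt.plot(path[:, 0], path[:, 1], lw=0.2)
plt.gca().set_aspect('equal')
plt.show()
```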

Simulating an evolving quantum wavefunction in free space is surprisingly simple. Given a beginning quantum wavefunction A(x, y, t0) on a square grid of spacing Δx, a discrete update equation for the free-particle Schrödinger equation (in units where ħ = m = 1) is

$$ A^{\,n+1}_{j,k} = A^{\,n}_{j,k} + \frac{i\,\Delta t}{2\,\Delta x^{2}}\left( A^{\,n}_{j+1,k} + A^{\,n}_{j-1,k} + A^{\,n}_{j,k+1} + A^{\,n}_{j,k-1} - 4\,A^{\,n}_{j,k} \right), $$

where in practice the real and imaginary parts of A are advanced on staggered time steps to keep the explicit scheme stable.
Perfect reflection from the boundaries of the stadium is incorporated by imposing a boundary condition that sends the wavefunction to zero at the walls. Simple!
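Below is a minimal sketch of such a simulation in Python. It is not the code used for the figures in this post; it assumes natural units (ħ = m = 1), a stadium with half-length L and cap radius R, a Gaussian initial wavepacket, and the staggered real/imaginary update mentioned above. The running sum of |ψ|² gives the kind of long-time average shown in Figs. 3 and 4.

```python
import numpy as np

# Minimal 2D "quantum stadium" sketch in natural units (hbar = m = 1).
L, R = 1.0, 1.0                        # assumed stadium geometry: straight half-length and cap radius
x = np.linspace(-(L + R), L + R, 401)  # square grid, dx = dy = 0.01
y = np.linspace(-R, R, 201)
X, Y = np.meshgrid(x, y, indexing='ij')
dx = x[1] - x[0]
dt = 0.1 * dx**2                       # small time step keeps the staggered scheme stable

# Hard-wall mask: True inside the stadium (a rectangle plus two semicircular caps)
inside = ((np.abs(X) <= L) & (np.abs(Y) <= R)) | ((np.abs(X) - L)**2 + Y**2 <= R**2)

def lap(f):
    """Five-point finite-difference Laplacian (zero outside the interior points)."""
    out = np.zeros_like(f)
    out[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
                       - 4.0 * f[1:-1, 1:-1]) / dx**2
    return out

# Initial Gaussian wavepacket (coherent-state-like) with momentum along x
psi0 = np.exp(-((X - 0.3)**2 + Y**2) / 0.05) * np.exp(1j * 20.0 * X)
psi0[~inside] = 0.0
Re, Im = psi0.real.copy(), psi0.imag.copy()

prob_sum = np.zeros_like(Re)
for step in range(10_000):
    Re += -0.5 * dt * lap(Im)          # dRe/dt = -(1/2) Laplacian(Im)
    Im +=  0.5 * dt * lap(Re)          # dIm/dt = +(1/2) Laplacian(Re), using the updated Re
    Re[~inside] = 0.0                  # hard walls: wavefunction sent to zero at the boundary
    Im[~inside] = 0.0
    prob_sum += Re**2 + Im**2          # accumulate the long-time average of |psi|^2 (cf. Figs. 3-4)
```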

A snapshot of a wavefunction evolving in the stadium is shown in Fig. 2. To see a movie of the time evolution, see my YouTube episode.

Fig. 2 Snapshot of a quantum wavefunction in the stadium. (From YouTube)

The time average of the wavefunction after a long time has passed is shown in Fig. 3. Other than the horizontal nodal line down the center of the stadium, there is little discernible structure or symmetry. This is also true for the mean squared wavefunction shown in Fig. 4, although there is some structure that may be emerging in the semi-circular regions.

Fig. 3 Time-average wavefunction after a long time.
Fig. 4 Time-average of the squared wavefunction after a long time.

On the other hand, for special initial conditions that have a lot of symmetry, something remarkable happens. Fig. 5 shows several mean-squared results for special initial conditions. There is definite structure in these cases, which were given the somewhat ugly name “quantum scars” in the 1980s by Eric Heller, who was one of the first to study this phenomenon [1].

Fig. 5 Quantum scars reflect periodic (but unstable) orbits of the classical system. Quantum effects tend to quench chaos and favor regular motion.

One can superpose highly-symmetric classical trajectories onto the figures, as shown in the bottom row. All of these classical orbits go through a high-symmetry point, such as the center of the stadium (in the left image) or the focal point of the circular mirrors (in the other two images). The astonishing conclusion of this exercise is that the highly-symmetric periodic classical orbits remain behind as quantum scars—like the Cheshire Cat’s grin—when the system is in the quantum realm. The classical orbits that produce quantum scars have the important property of being periodic but unstable: a slight perturbation from the symmetric trajectory causes it to eventually become ergodic (chaotic). These scars are regions of enhanced probability density, what might be termed “quantum trajectories”, but they do not show strong interference patterns.

It is important to make the distinction that it is also possible to construct special wavefunctions that are strictly periodic, such as a wave bouncing perfectly vertically between the straight portions. This leads to large-scale interference patterns that are not the same as the quantum scars.

Quantum Chaos versus Laser Speckle

In addition to the bouncing-wave cases that do not strictly produce quantum scars, there is another “neutral” phenomenon that produces interference patterns that look a lot like scars, but are simply the random addition of lots of plane waves with the same wavelength [2]. A snapshot in time of one of these superpositions is shown in Fig. 6. To see how the waves add together, see the YouTube channel episode.

Fig. 6 The sum of 100 randomly oriented plane waves of constant wavelength. (A snapshot from YouTube.)
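A pattern like Fig. 6 is easy to generate. The sketch below is my own, with an assumed common wavenumber k and 100 random propagation directions and phases; it sums monochromatic plane waves and plots the resulting intensity.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
k = 40.0                                    # common wavenumber (assumed value)
n_waves = 100

x = np.linspace(-1, 1, 500)
X, Y = np.meshgrid(x, x)

# Sum of plane waves with random propagation directions and random phases
field = np.zeros_like(X, dtype=complex)
for theta, phase in zip(rng.uniform(0, 2 * np.pi, n_waves),
                        rng.uniform(0, 2 * np.pi, n_waves)):
    field += np.exp(1j * (k * (X * np.cos(theta) + Y * np.sin(theta)) + phase))

plt.imshow(np.abs(field)**2, extent=[-1, 1, -1, 1])
plt.title("Speckle from 100 random plane waves")
plt.colorbar(label="intensity")
plt.show()
```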

By David D. Nolte, Aug. 14, 2022


YouTube Video


Read more about the history of chaos theory in Galileo Unbound from Oxford Press:


References

[1] E. J. Heller, Bound-state eigenfunctions of classically chaotic Hamiltonian systems: Scars of periodic orbits, Physical Review Letters 53, 1515 (1984)

[2] M. C. Gutzwiller, Chaos in Classical and Quantum Mechanics (Springer-Verlag, New York, 1990)