Quantum Seeing without Looking? More Weirdness from the Quantum Realm

Quantum sensors have amazing powers.  They can detect the presence of an obstacle without ever interacting with it.  For instance, consider a bomb coated with a light-sensitive layer that detonates if it absorbs just a single photon.  Put this bomb inside a quantum sensor system and shoot photons at it.  Remarkably, using the weirdness of quantum mechanics, it is possible to design the system in such a way that you can detect the presence of the bomb without ever setting it off.  How can photons see the bomb without illuminating it?  The answer is a bizarre side effect of quantum physics in which the quantum wavefunction, rather than its pesky collapse at the moment of measurement, is recognized as the root of reality.

The ability for a quantum system to see an object with light, without exposing it, is uniquely a quantum phenomenon that has no classical analog.

All Paths Lead to Feynman

When Richard Feynman was working on his PhD under John Archibald Wheeler at Princeton in the early 1940s, he came across an obscure paper written by Paul Dirac in 1933 that connected quantum physics with classical Lagrangian physics.  Dirac had recognized that the phase of a quantum wavefunction was analogous to the classical quantity called the “Action” that arises from Lagrangian physics.  Building on this concept, Feynman constructed a new interpretation of quantum physics, known as the “many histories” interpretation, that occupies the middle ground between Schrödinger’s wave mechanics and Heisenberg’s matrix mechanics.  One of the striking consequences of the many-histories approach is the emergence of the principle of least action, a classical concept, in the interpretation of quantum phenomena.  In this approach, Feynman considered ALL possible histories for the propagation of a quantum particle from one point to another, assigned each history a phase factor determined by the action, and then summed over all of these histories.

One of the simplest consequences of the sum over histories is a quantum interpretation of Snell’s law of refraction in optics.  When summing over all possible trajectories of a photon from a point above to a point below an interface, there is a subset of paths for which the action integral varies very little from one path in the subset to another.  The consequence is that the phases of all these paths add constructively, producing a large amplitude in the quantum wavefunction along the centroid of these trajectories.  Conversely, for paths far away from this subset, the action integral takes on many values and the phases tend to interfere destructively, canceling the wavefunction along these other paths.  Therefore, the most likely path of the photon between the two points is the path of maximum constructive interference and hence the path of stationary action.  It is simple to show that this path is none other than the classical path determined by Snell’s law and equivalently by Fermat’s principle of least time.  With the many-histories approach, we can add the principle of least (or stationary) action to the list of explanations of Snell’s law.  This argument holds as well for an electron (with mass and a de Broglie wavelength) as it does for a photon, so this is not just a coincidence specific to optics but a fundamental part of quantum physics.
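This stationary-phase argument is easy to check numerically.  The sketch below (in Python, with assumed source and detector positions and assumed refractive indices) finds the crossing point on the interface where the optical path length, which plays the role of the action, is stationary, and verifies that it satisfies Snell's law:

```python
import numpy as np

# Source above the interface (y = 0), detector below; the photon crosses at x.
# All coordinates and indices are assumed values for illustration.
n1, n2 = 1.0, 1.5
src = (0.0, 1.0)      # (x, y) of the source, y > 0
det = (1.0, -1.0)     # (x, y) of the detector, y < 0

def optical_path(x):
    """Optical path length (proportional to the accumulated phase) via crossing point x."""
    L1 = np.hypot(x - src[0], src[1])    # geometric path in medium 1
    L2 = np.hypot(det[0] - x, det[1])    # geometric path in medium 2
    return n1 * L1 + n2 * L2

# Find the stationary (here, minimum) path by dense sampling.
xs = np.linspace(-1.0, 2.0, 200001)
x_star = xs[np.argmin(optical_path(xs))]

# Check Snell's law at the stationary point: n1*sin(theta1) = n2*sin(theta2)
sin1 = (x_star - src[0]) / np.hypot(x_star - src[0], src[1])
sin2 = (det[0] - x_star) / np.hypot(det[0] - x_star, det[1])
print(n1 * sin1, n2 * sin2)   # nearly equal: the stationary path obeys Snell's law
```

The same grid search with `n1 == n2` recovers the straight-line path, which is the Fermat least-time limit.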

A more subtle consequence of the sum-over-histories view of quantum phenomena is Young’s double-slit experiment for electrons, shown at the top of Fig. 1.  The experiment consists of a source that emits a single electron at a time; each electron passes through a double-slit mask to impinge on an electron detection screen.  The wavefunction for a single electron extends continuously throughout the full spatial extent of the apparatus, passing through both slits.  When the two paths intersect at the screen, the difference in the quantum phases of the two paths causes the combined wavefunction to have regions of total constructive interference and other regions of total destructive interference.  The probability of detecting an electron is proportional to the squared amplitude of the wavefunction, producing a pattern of bright stripes separated by darkness.  At positions of destructive interference, no electrons are detected when both slits are open.  However, if an opaque plate blocks the upper slit, then the interference pattern disappears, and electrons can be detected at those previously dark locations.  Therefore, the presence of the object can be deduced by the detection of electrons at locations that should be dark.

Fig. 1  Demonstration of the sum over histories in a double-slit experiment for electrons. In the upper frame, the electron interference pattern on the phosphorescent screen produces bright and dark stripes.  No electrons hit the screen in a dark stripe.  When the upper slit is blocked (bottom frame), the interference pattern disappears, and an electron can arrive at the location that had previously been dark.
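The contrast between the two frames of Fig. 1 can be sketched with a simple two-path phase sum (Python, with arbitrary assumed dimensions): at a dark fringe the two-slit amplitudes cancel exactly, while blocking the upper slit leaves a finite single-slit amplitude at the very same screen position.

```python
import numpy as np

# Two-slit interference on a screen; all lengths are in arbitrary assumed units.
wavelength = 1.0
k = 2 * np.pi / wavelength
d = 20.0                            # slit separation
L = 1000.0                          # distance from mask to screen
x = np.linspace(-100, 100, 4001)    # positions along the screen

# Path lengths from each slit to each screen point
r_upper = np.sqrt(L**2 + (x - d / 2)**2)
r_lower = np.sqrt(L**2 + (x + d / 2)**2)

# Detection probability ~ |sum of path amplitudes|^2
both_open = np.abs(np.exp(1j * k * r_upper) + np.exp(1j * k * r_lower))**2
upper_blocked = np.abs(np.exp(1j * k * r_lower))**2   # only the lower slit contributes

# At a dark fringe, both_open ~ 0 while upper_blocked stays finite.
dark = np.argmin(both_open)
print(both_open[dark], upper_blocked[dark])
```

An electron arriving at screen position `x[dark]` is therefore a herald of the plate: with both slits open, nothing should ever land there.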

Consider now when the opaque plate is an electron-sensitive detector.  In this case, a single electron emitted by the source can be detected at the screen or at the plate.  If it is detected at the screen, it can appear at the location of a dark fringe, heralding the presence of the opaque plate.  Yet the quantum conundrum is that when the electron arrives at a dark fringe, it must be detected there as a whole; it cannot be detected at the electron-sensitive plate too.  So how does the electron sense the presence of the detector without exposing it, without setting it off?

In Feynman’s view, the electron does set off the detector as one possible history.  And that history interferes with the other possible history in which the electron arrives at the screen.  While that interpretation may seem weird, mathematically it is a simple statement that the plate blocks the wavefunction from passing through the upper slit, so the wavefunction in front of the screen, resulting from all possible paths, has no interference fringes (other than possible diffraction from the lower slit).  From this point of view, the wavefunction samples all of space, including the opaque plate, and the eventual absorption of the electron one place or another has no effect on the wavefunction.  In this sense, it is the wavefunction, prior to any detection event, that samples reality.  If the single electron happens to show up at a dark fringe at the screen, the plate, through its effects on the total wavefunction, has been detected without interacting with the electron.

This phenomenon is known as an interaction-free measurement, but there are definitely some issues of semantics here.  Just because the plate doesn’t absorb an electron doesn’t mean that the plate plays no role.  The plate certainly blocks the wavefunction from passing through the upper slit.  This might be called an “interaction”, but that term is better reserved for when the electron is actually absorbed, while the role of the plate in shaping the wavefunction is better described as one of the possible histories.

Quantum Seeing in the Dark

Although Feynman was thinking hard (and clearly) about these issues as he presented his famous Lectures on Physics at Caltech from 1961 to 1963, the specific possibility of interaction-free measurement dates to 1993, when Avshalom C. Elitzur and Lev Vaidman at Tel Aviv University suggested a simple Michelson interferometer configuration that could detect an object half of the time without interacting with it [1].  They were the first to press this point home by thinking of a light-sensitive bomb.  There is no mistaking when a bomb goes off, so it gives a vivid demonstration of the interaction-free measurement.

The Michelson interferometer for interaction-free measurement is shown in Fig. 2.  This configuration uses a half-silvered beamsplitter to split the possible photon paths.  When photons hit the beamsplitter, they either continue traveling to the right, or are deflected upwards.  After reflecting off the mirrors, the photons again encounter the beamsplitter, where, in each case, they continue undeflected or are reflected.  The result is that two paths combine at the beamsplitter to travel to the detector, while two other paths combine to travel back along the direction of the incident beam. 

Fig. 2 A quantum-seeing in the dark (QSD) detector with a photo-sensitive bomb. A single photon is sent into the interferometer at a time. If the bomb is NOT present, destructive interference at the detector guarantees that the photon is not detected. However, if the bomb IS present, it destroys the destructive interference and the photon can arrive at the detector. That photon heralds the presence of the bomb without setting it off. (Reprinted from Mind @ Light Speed)

The paths of the light beams can be adjusted so that the beams that combine to travel to the detector experience perfect destructive interference.  In this situation, the detector never detects light, and all the light returns back along the direction of the incident beam.  Quantum mechanically, when only a single photon is present in the interferometer at a time, we would say that the quantum wavefunction of the photon interferes destructively along the path to the detector and constructively along the path opposite to the incident beam.  In short, when both paths are unobstructed, the detector makes no detections.

Now place the light-sensitive bomb in the upper path.  Because this path is no longer available to the photon wavefunction, the destructive interference of the wavefunction along the detector path is removed.  Now when a single photon is sent into the interferometer, three possible things can happen.  One, the photon is reflected by the beamsplitter and detonates the bomb.  Two, the photon is transmitted by the beamsplitter, reflects off the right mirror, and is transmitted again by the beamsplitter to travel back down the incident path without being detected by the detector.  Three, the photon is transmitted by the beamsplitter, reflects off the right mirror, and is reflected off the beamsplitter to be detected by the detector.

In this third case, the photon is detected AND the bomb does NOT go off, which succeeds at quantum seeing in the dark.  The odds are much better than for Young’s experiment.  If the bomb is present, it will detonate at most 50% of the time.  The other 50% of the time, you will either detect a photon (signifying the presence of the bomb) or you will not detect a photon (giving an ambiguous answer and requiring you to perform the experiment again).  When you perform the experiment again, you again have a 50% chance of detonating the bomb and a 25% chance of detecting it without detonation, but again a 25% chance of not detecting it, and so forth.  All in all, every time you send in a photon, you have one chance in four of seeing the bomb without detonating it.  These are much better odds than for the Young’s apparatus, where only the detection of a particle at a forbidden location would signify the presence of the bomb.

It is possible to increase your odds above one chance in four by decreasing the reflectivity of the beamsplitter.  In practice, this is easy to do simply by depositing less and less aluminum on the surface of the glass plate.  When the reflectivity gets very low, let us say at the level of 1%, then most of the time the photon just travels back along the direction it came from, and you have an ambiguous result.  On the other hand, when the photon does not return, detonation and detection are nearly equally probable.  This means that, though you may send in many photons, your odds of eventually seeing the bomb without detonating it approach 50%, a factor of two better than for the half-silvered beamsplitter.  A version of this experiment was performed by Paul Kwiat in 1995 as a postdoc at Innsbruck with Anton Zeilinger.  It was Kwiat who coined the phrase “quantum seeing in the dark” as a catchier version of “interaction-free measurement” [2].
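These odds, for the bomb-present case, follow from simple bookkeeping of the beamsplitter probabilities.  Here is a minimal sketch in Python; the function names are my own, and the retry logic assumes you simply resend a photon after every ambiguous result:

```python
# Per-photon outcome probabilities in the bomb-present interferometer,
# as a function of the beamsplitter reflectivity R.

def outcomes(R):
    detonate = R                   # photon reflected into the bomb arm
    detect = (1 - R) * R           # transmitted, then reflected to the detector
    ambiguous = (1 - R) ** 2       # transmitted both times: photon returns, try again
    return detonate, detect, ambiguous

def eventual_success(R):
    """Probability of eventually detecting the bomb without detonating it,
    resending a photon after every ambiguous result (geometric series over retries)."""
    detonate, detect, ambiguous = outcomes(R)
    return detect / (1 - ambiguous)

print(outcomes(0.5))             # (0.5, 0.25, 0.25), as in the text
print(eventual_success(0.01))    # ~0.4975: approaches 1/2 as R -> 0
```

The limit `eventual_success(R) = (1 - R) / (2 - R)` makes the low-reflectivity advantage explicit: as R shrinks, a resolved trial becomes an even coin flip between detection and detonation.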

A 50% chance of detecting the bomb without setting it off sounds amazing, until you consider that there is a 50% chance that it will go off and kill you.  Then those odds don’t look so good.  But optical phenomena never fail to surprise, and they never let you down.  A crucial set of missing elements in the simple Michelson experiment was polarization control using polarizing beamsplitters and polarization rotators.  These are common elements in many optical systems, and when they are added to the Michelson quantum sensor, they can give almost a 100% chance of detecting the bomb without setting it off, using the quantum Zeno effect.

The Quantum Zeno Effect

Photons carry polarization as their prime quantum number, with two possible orientations.  These can be defined in different ways, but the two possible polarizations are always orthogonal to each other.  For instance, these polarization pairs can be vertical (V) and horizontal (H), or they can be right circular and left circular.  One of the principles of quantum state evolution is that a quantum wavefunction can be maintained in a specific state, even if it has a natural tendency to drift out of that state, by repeatedly making a quantum measurement that seeks to detect deviations from that state.  In practice, the polarization of a photon can be maintained by repeatedly passing it through a polarizing beamsplitter with the polarization direction parallel to the original polarization of the photon.  If the photon polarization direction deviates by a small angle, then a detector on the side port of the polarizing beamsplitter will fire with a probability equal to the square of the sine of the deviation.  If the deviation angle is very small, say Δθ, then the probability of measuring the deviation is proportional to (Δθ)², which is an even smaller number.  Furthermore, the probability that the photon will transmit through the polarizing beamsplitter is equal to 1 − (Δθ)², which is nearly 100%.
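To put rough numbers on this, the sketch below (Python, with an assumed N = 50 passes) computes the chance that a photon, rotated by Δθ = (π/2)/N on each pass, survives N consecutive polarization measurements in the H state:

```python
import numpy as np

N = 50                        # assumed number of passes
dtheta = (np.pi / 2) / N      # small rotation per pass

# Each measurement transmits the photon with probability cos^2(dtheta) ~ 1 - (dtheta)^2,
# so surviving all N measurements has probability cos^(2N)(dtheta).
p_survive = np.cos(dtheta) ** (2 * N)
p_fire = 1 - p_survive        # total chance that some measurement fires

print(p_survive)   # ~0.95: the photon is held in the H state
print(p_fire)      # ~0.05, roughly pi^2/(4N): vanishes as N grows
```

The key scaling is that N measurements of a per-pass probability (Δθ)² give a total loss of roughly N(Δθ)² = π²/(4N), which can be made as small as you like by taking more, gentler passes.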

This is what happens in Fig. 3 when the photo-sensitive bomb IS present. A single H-polarized photon is injected through a switchable mirror into the interferometer on the right. In the path of the photon is a polarization rotator that rotates the polarization by a small angle Δθ. There is nearly a 100% chance that the photon will transmit through the polarizing beamsplitter with perfect H-polarization, reflect from the mirror, and return through the polarizing beamsplitter, again with perfect H-polarization, to pass through the polarization rotator to the switchable mirror, where it reflects, gains another increment to its polarization angle (which is still small), transmits through the beamsplitter, and so on. At each pass, the photon polarization is repeatedly “measured” to be horizontal. After a number of passes N = π/(2Δθ), the photon is switched out of the interferometer and is transmitted through the external polarizing beamsplitter, where it is detected at the H-photon detector.

Now consider what happens when the bomb IS NOT present. This time, even though there is a high amplitude for the transmitted photon, there is that small Δθ amplitude for reflection out the V port. This small V-amplitude, when it reflects from the mirror, recombines with the H-amplitude at the polarizing beamsplitter to produce the same tilted polarization that the photon started with, sending it back in the direction from which it came. (In this situation, the detector on the “dark” port of the internal beamsplitter never sees the photon because of destructive interference along this path.) The photon polarization is then rotated once more by the polarization rotator, and so on. Now, after a number of passes N = π/(2Δθ), the photon has acquired a V polarization and is switched out of the interferometer. At the external polarizing beamsplitter it is reflected out of the V-port, where it is detected at the V-photon detector.

Fig. 3  Quantum Zeno effect for interaction-free measurement.  If the bomb is present, the H-photon detector detects the output photon without setting it off.  The switchable mirror ejects the photon after it makes π/(2Δθ) round trips in the polarizing interferometer.

The two end results of this thought experiment are absolutely distinct, giving a clear answer to the question whether the bomb is present or not. If the bomb IS present, the H-detector fires. If the bomb IS NOT present, then the V-detector fires. Through all of this, the chance to set off the bomb is almost zero. Therefore, this quantum Zeno interaction-free measurement detects the bomb with nearly 100% efficiency with almost no chance of setting it off. This is the amazing consequence of quantum physics. The wavefunction is affected by the presence of the bomb, altering the interference effects that allow the polarization to rotate. But the likelihood of a photon being detected by the bomb is very low.
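The two end results can be sketched with Jones vectors.  The toy simulation below (Python; N = 50 passes is an assumed value, and the bomb is idealized as a perfect absorber of the V amplitude on every pass) reproduces the two distinct outcomes:

```python
import numpy as np

N = 50                         # assumed number of passes
dtheta = (np.pi / 2) / N       # rotation per pass, so N = pi/(2*dtheta)

def rot(a):
    """Jones matrix of a polarization rotator by angle a."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def run(bomb_present):
    psi = np.array([1.0, 0.0])       # (H, V) amplitudes, start H-polarized
    for _ in range(N):
        psi = rot(dtheta) @ psi      # each pass rotates the polarization slightly
        if bomb_present:
            psi[1] = 0.0             # the bomb arm removes the V amplitude each pass
    p_H, p_V = np.abs(psi) ** 2
    return p_H, p_V                  # note: 1 - p_H - p_V = chance the bomb fired

print(run(True))    # (~0.95, 0.0): H-detector fires; bomb rarely set off
print(run(False))   # (~0.0, ~1.0): polarization rotated fully to V
```

The bomb's only role is to truncate the V amplitude each round trip, which is exactly the repeated weak measurement that freezes the polarization at H.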

On a side note: Although ultrafast switchable mirrors do exist, the experiment was much easier to perform by creating a helix in the optical path through the system so that there is only a finite number of bounces of the photon inside the cavity. See Ref. [2] for details.

In conclusion, the ability for a quantum system to see an object with light, without exposing it, is uniquely a quantum phenomenon that has no classical analog.  No E&M wave description can explain this effect.


Further Reading

I first wrote about quantum seeing in the dark in my 2001 book on the future of optical physics and technology: Nolte, D. D. (2001). Mind at Light Speed: A New Kind of Intelligence. (New York, Free Press)

More on the story of Feynman and Wheeler and what they were trying to accomplish is told in Chapter 8 of Galileo Unbound on the physics and history of dynamics: Nolte, D. D. (2018). Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press).

Paul Kwiat introduced the world to interaction-free measurements in this illuminating Scientific American article: Kwiat, P., H. Weinfurter and A. Zeilinger (1996). “Quantum seeing in the dark – Quantum optics demonstrates the existence of interaction-free measurements: the detection of objects without light (or anything else) ever hitting them.” Scientific American 275(5): 72-78.


References

[1] Elitzur, A. C. and L. Vaidman (1993). “Quantum-mechanical interaction-free measurements.” Foundations of Physics 23(7): 987-997.

[2] Kwiat, P., H. Weinfurter, T. Herzog, A. Zeilinger and M. A. Kasevich (1995). “Interaction-free measurement.” Physical Review Letters 74(24): 4763-4766.

Freeman Dyson’s Quantum Odyssey

In the fall semester of 1947, a brilliant young British mathematician arrived at Cornell University to begin a yearlong fellowship paid for by the British Commonwealth.  Freeman Dyson (1923 –) had received an undergraduate degree in mathematics from Cambridge University and was considered to be one of their brightest graduates.  With strong recommendations, he arrived to work with Hans Bethe on quantum electrodynamics.  He made rapid progress on a relativistic model of the Lamb shift, inadvertently intimidating many of his fellow graduate students with his mathematical prowess.  On the other hand, someone who intimidated him was Richard Feynman.


Freeman Dyson at Princeton in 1972.

I think, like most science/geek types, my first introduction to the unfettered mind of Freeman Dyson was through the science fiction novel Ringworld by Larry Niven. The Dyson ring, or Dyson sphere, was conceived by Dyson when he was thinking about the ultimate fate of civilizations and their increasing need for energy. The greatest source of energy on a stellar scale is of course a star, and Dyson envisioned an advanced civilization capturing all that emitted stellar energy by building a solar collector with a radius the size of a planetary orbit. He published the paper “Search for Artificial Stellar Sources of Infra-Red Radiation” in the prestigious magazine Science in 1960. The practicality of such a scheme has to be seriously questioned, but it is a classic example of how easily he thinks outside the box, taking simple principles and extrapolating them to extreme consequences until the box looks like a speck of dust. I got a first-hand chance to see his way of thinking when he gave a physics colloquium at Cornell University in 1980 when I was an undergraduate there. Hans Bethe still had his office at that time in the Newman laboratory. I remember walking by and looking into his office, catching a glance of him editing a paper at his desk. The topic of Dyson’s talk was the fate of life in the long-term evolution of the universe. His arguments were so simple they could not be refuted, yet the consequences for the way life would need to evolve in extreme time were unimaginable … it was a bizarre and mind-blowing experience for me as an undergrad … and an example of the strange worlds that can be imagined through simple physics principles.

Initially, as Dyson settled into his life at Cornell under Bethe, he considered Feynman to be a bit of a buffoon and slacker, but he started to notice that Feynman could calculate QED problems in a few lines that took him pages.  Dyson paid closer attention to Feynman, eventually spending more of his time with him than with Bethe, and realized that Feynman had invented an entirely new way of calculating quantum effects that used cartoons as a form of bookkeeping to reduce the complexity of many calculations.  Dyson still did not fully understand how Feynman was doing it, but he knew that Feynman’s approach was giving all the right answers.  Around that time, he also began to read about Schwinger’s field-theory approach to QED, following Schwinger’s approach as far as he could, but always coming away with the feeling that it was too complicated and required too much math, even for him!

Road Trip Across America

That summer, Dyson had time to explore America for the first time because Bethe had gone on an extended trip to Europe.  It turned out that Feynman was driving his car to New Mexico to patch things up with an old flame from his Los Alamos days, so Dyson was happy to tag along.  For days, as they drove across the US, they talked about life and physics and QED.  Dyson had Feynman all to himself and began to see daylight in Feynman’s approach, and to understand that it might be consistent with Schwinger’s and Tomonaga’s field theory approach.  After leaving Feynman in New Mexico, he travelled to the University of Michigan where Schwinger gave a short course on QED, and he was able to dig deeper, talking with him frequently between lectures. 

At the end of the summer, it had been arranged that he would spend the second year of his fellowship at the Institute for Advanced Study in Princeton where Oppenheimer was the new head.  As a final lark before beginning that new phase of his studies he spent a week at Berkeley.  The visit there was uneventful, and he did not find the same kind of open camaraderie that he had found with Bethe in the Newman Laboratory at Cornell, but it left him time to think.  And the more he thought about Schwinger and Feynman, the more convinced he became that the two were equivalent.  On the long bus ride back east from Berkeley, as he half dozed and half looked out the window, he had an epiphany.  He saw all at once how to draw the map from one to the other.  What was more, he realized that many of Feynman’s techniques were much simpler than Schwinger’s, which would significantly simplify lengthy calculations.  By the time he arrived in Chicago, he was ready to write it all down, and by the time he arrived in Princeton, he was ready to publish.  It took him only a few weeks to do it, working with an intensity that he had never experienced before.  When he was done, he sent the paper off to the Physical Review [1].

Dyson knew that he had achieved something significant even though he was essentially just a second-year graduate student, at least from the point of view of the American post-graduate system.  Cambridge was a little different, and Dyson’s degree there was more than the standard American bachelor’s degree.  Nonetheless, he was now under the auspices of the Institute for Advanced Study, where Einstein had his office, and he had sent off an unsupervised manuscript for publication without any imprimatur from the powers that be.  The specific power that mattered most was Oppenheimer, who arrived a few days after Dyson had submitted his manuscript.  When he greeted Oppenheimer, he was excited and pleased to hand him a copy.  Oppenheimer, on the other hand, was neither excited nor pleased to receive it.  Oppenheimer had formed a particularly bad opinion of Feynman’s form of QED at the conference held in the Poconos (to read about Feynman’s disaster at the Poconos conference, see my blog) half a year earlier and did not think that this brash young grad student could save it.  Dyson, on his part, was taken aback.  No one who has ever met Dyson would ever call him brash, but in this case he fought for a higher cause, writing a bold memo to Oppenheimer—that terrifying giant of a personality—outlining the importance of the Feynman theory.

Battle for the Heart of Quantum Field Theory 

Oppenheimer decided to give Dyson a chance, and arranged for a series of seminars where Dyson could present the story to the assembled theory group at the Institute, but Dyson could make little headway.  Every time he began to make progress, Oppenheimer would bring it crashing to a halt with scathing questions and criticisms.  This went on for weeks, until Bethe visited from Cornell.  Bethe by then was working with the Feynman formalism himself.  As Bethe lectured in front of Oppenheimer, he seeded his talk with statements such as “surely they had all seen this from Dyson”, and Dyson took the opportunity to pipe up that he had not been allowed to get that far.  After Bethe left, Oppenheimer relented, arranging for Dyson to give three seminars in one week.  The seminars each went on for hours, but finally Dyson got to the end of it.  The audience shuffled out of the seminar room with no energy left for discussions or arguments.  Later that day, Dyson found a note in his box from Oppenheimer saying “Nolo contendere”—Dyson had won!

With that victory under his belt, Dyson was in a position to communicate the new methods to a small army of postdocs at the Institute, supervising their progress on many outstanding problems in quantum electrodynamics that had resisted calculation using the complicated Schwinger-Tomonaga theory.  Feynman, by this time, had finally published two substantial papers on his approach [2], which added to the foundation that Dyson was building at Princeton.  Although Feynman continued to work for a year or two on QED problems, the center of gravity for these problems shifted solidly to the Institute for Advanced Study and to Dyson.  The army of postdocs that Dyson supervised helped establish the use of Feynman diagrams in QED, calculating ever higher-order corrections to electromagnetic interactions.  These same postdocs were among the first batch of wartime-trained theorists to move into faculty positions across the US, bringing the method of Feynman diagrams with them and adding to the rapid dissemination of Feynman diagrams into many aspects of theoretical physics that extend far beyond QED [3].

As a graduate student at Berkeley in the 1980s I ran across a very simple-looking equation called “the Dyson equation” in our graduate textbook on relativistic quantum mechanics by Sakurai. The Dyson equation is the extraordinarily simple expression of an infinite series of Feynman diagrams that describes how an electron interacts with itself through the emission of virtual photons that link to virtual electron-positron pairs. This process leads to the propagator Green’s function for the electron and is the starting point for including the simple electron in more complex particle interactions.
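In modern notation the resummation is compact.  With $G_0$ the free-electron propagator, $G$ the full propagator, and $\Sigma$ the self-energy that collects the irreducible diagrams, the infinite series folds into a single equation:

```latex
G = G_0 + G_0\,\Sigma\,G_0 + G_0\,\Sigma\,G_0\,\Sigma\,G_0 + \cdots
  = G_0 + G_0\,\Sigma\,G
```

Solving formally gives $G = \left(G_0^{-1} - \Sigma\right)^{-1}$, which is why one short equation captures the whole infinite ladder of diagrams.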

The Dyson equation for the single-electron Green’s function represented as an infinite series of Feynman diagrams.

I had no feel for the use of the Dyson equation, barely limping through relativistic quantum mechanics, until a few years later when I was working at Lawrence Berkeley Lab with Mirek Hamera, a visiting scientist from Warsaw, Poland, who introduced me to the Haldane-Anderson model, which applied to a project I was working on for my PhD. Using the theory, with Dyson’s equation at its heart, we were able to show that tightly bound electrons on transition-metal impurities in semiconductors act as internal reference levels that allowed us to measure internal properties of semiconductors that had never been accessible before. A few years later, I used Dyson’s equation again when I was working on small precipitates of arsenic in the semiconductor GaAs, using the theory to describe an accordion-like ladder of electron states that can occur within the semiconductor bandgap when a nano-sphere takes on multiple charges [4].

The Coulomb ladder of deep energy states of a nano-sphere in GaAs calculated using self-energy principles first studied by Dyson.

I last saw Dyson when he gave the Hubert James Memorial Lecture at Purdue University in 1996. The title of his talk was “How the Dinosaurs Might Have Been Saved: Detection and Deflection of Earth-Impacting Bodies”. As always, his talk was wild and wide ranging, using the simplest possible physics to derive the most dire consequences of our continued existence on this planet.


[1] Dyson, F. J. (1949). “The radiation theories of Tomonaga, Schwinger, and Feynman.” Physical Review 75(3): 486-502.

[2] Feynman, R. P. (1949). “The theory of positrons.” Physical Review 76(6): 749-759.  Feynman, R. P. (1949). “Space-time approach to quantum electrodynamics.” Physical Review 76(6): 769-789.

[3] Kaiser, D., K. Ito and K. Hall (2004). “Spreading the tools of theory: Feynman diagrams in the USA, Japan, and the Soviet Union.” Social Studies of Science 34(6): 879-922.

[4] Nolte, D. D. (1998). “Mesoscopic point-like defects in semiconductors.” Physical Review B 58(12): 7994.

Feynman and the Dawn of QED

In the years immediately following the Japanese surrender at the end of WWII, before the horror and paranoia of global nuclear war had time to sink into the psyche of the nation, atomic scientists were the rock stars of their times.  Not only had they helped end the war with a decisive stroke, they were also the geniuses who were going to lead the US and the World into a bright new future of possibilities.  To help kick off the new era, the powers in Washington proposed to hold a US meeting modeled on the European Solvay Congresses.  The invitees would be a select group of the leading atomic physicists: invitation only!  The conference was held at the Rams Head Inn on Shelter Island, at the far end of Long Island, New York in June of 1947.  The two dozen scientists arrived in a motorcade with police escort and national press coverage.  Richard Feynman was one of the select invitees, although he had done little fundamental work beyond his doctoral thesis with Wheeler.  This would be his first real chance to expound on his path integral formulation of quantum mechanics.  It was also his first conference where he was with all the big guns.  Oppenheimer and Bethe were there as well as Wheeler and Kramers, von Neumann and Pauling.  It was an august crowd and auspicious occasion.

Shelter Island and the Foundations of Quantum Mechanics

The topic that had been selected for the conference was the Foundations of Quantum Mechanics, which at that time meant quantum electrodynamics, known as QED, a theory that was at the forefront of theoretical physics but mired in theoretical difficulties.  Specifically, it was waist-deep in infinities that cropped up in calculations that went beyond the lowest order.  The theorists could do back-of-the-envelope calculations with ease and arrive quickly at rough numbers that closely matched experiment, but as soon as they tried to be more accurate, results diverged, mainly because of the self-energy of the electron, the very problem that Wheeler and Feynman had started on at the beginning of Feynman’s doctoral studies [1].  As long as experiments had only limited resolution, the calculations were often good enough.  But at the Shelter Island conference, Willis Lamb, a theorist-turned-experimentalist from Columbia University, announced the highest-resolution spectroscopy of atomic hydrogen ever attained, and there was a deep surprise in the experimental results.

An obvious photo-op at Shelter Island with, left to right: Willis Lamb, Abraham Pais, John Wheeler (holding paper), Richard P. Feynman (holding pen), Herman Feshbach and Julian Schwinger.

Hydrogen, of course, is the simplest of all atoms.  This was the atom that launched Bohr’s model, inspired Heisenberg’s matrix mechanics and proved Schrödinger’s wave mechanics.  Deviations from the classical Bohr levels, measured experimentally, were the testing grounds for Dirac’s relativistic quantum theory, which had enjoyed unparalleled success until Lamb’s presentation at Shelter Island.  Lamb showed there was an exceedingly small energy splitting, about 200 parts in a billion, corresponding to a wavelength of 28 cm in the microwave region of the electromagnetic spectrum.  This splitting was neither predicted nor describable by the formerly successful relativistic Dirac theory of the electron.
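As a quick sanity check on the numbers quoted above, the splitting converts directly to the microwave wavelength Lamb measured. This is a sketch that assumes the modern value of roughly 1057 MHz for the hydrogen 2S–2P splitting, a figure not given in the text:

```python
# Hedged check: converting the hydrogen 2S-2P (Lamb shift) splitting to a
# wavelength. The ~1057 MHz frequency is the modern accepted value, assumed
# here; the text quotes only the resulting ~28 cm wavelength.
c = 2.998e8            # speed of light in m/s
nu = 1057e6            # Lamb-shift frequency in Hz (assumed modern value)
wavelength = c / nu    # lambda = c / nu, in meters

print(round(wavelength * 100, 1))  # wavelength in cm, close to the quoted 28 cm
```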

The audience was abuzz with excitement.  Here was a very accurate measurement that stood ready for the theorists to test their theories on.  In the discussions, Oppenheimer guessed that the splitting was likely caused by electromagnetic interactions related to the self-energy of the electron.  Victor Weisskopf of MIT and Julian Schwinger of Harvard suggested that, although the total energy calculated for each level might be infinite, the difference in energy ΔE should be finite.  After all, in spectroscopy it is only the energy difference that is measured experimentally; absolute energies are not accessible directly to experiment.  The trick was how to subtract one infinity from another in a consistent way to get a finite answer.  Many of the discussions in the hallways, as well as many of the presentations, revolved around this question.  For instance, Kramers suggested that there should be two masses in the electron theory: one the observed electron mass seen in experiments, and the second a type of internal or bare mass of the electron to be used in perturbation calculations.

On the train ride upstate after the Shelter Island Conference, Hans Bethe took out his pen and a sheaf of paper and started scribbling down ideas about how to use mass renormalization, subtracting infinity from infinity in a precise and consistent way, to get finite answers in the QED calculations.  He made surprising progress, and by the time the train pulled into the station at Schenectady he had achieved a finite calculation in reasonable agreement with Lamb’s measured shift.  Oppenheimer had been right that the Lamb shift was electromagnetic in origin, and the suggestion by Weisskopf and Schwinger that the energy difference would be finite was indeed the correct approach.  Bethe was thrilled with his own progress and quickly wrote up a draft paper, sending copies to Oppenheimer and Weisskopf [2].  Oppenheimer’s reply was gracious, but Weisskopf initially bristled because he too had tried the calculations after the conference, and had failed where Bethe succeeded.  Both, however, pointed out to Bethe that his calculation was non-relativistic, and that a relativistic calculation was still needed.

When Bethe returned to Cornell, he told Feynman about the success of his calculations but that a relativistic version was still missing. Feynman told him on the spot that he knew how to do it and that he would have it the next day. Feynman’s optimism was based on the new approach to relativistic quantum electrodynamics that he had been developing with the aid of his newly invented “Feynman diagrams”. Despite his optimism, he hit a snag that evening as he tried to calculate the self-energy of the electron. When he met with Bethe the next day, they both tried to reconcile the calculations with Feynman’s new approach, but they failed to find a path through the calculations that made sense. Somewhat miffed, because he knew that his approach should work, Feynman got down to work in a way that he had usually avoided (he had always liked finding the “easy” path through tough problems). Over several intense months, he began to see how it all would work out.

At the same time that Feynman was making progress on his work, word arrived at Cornell of progress being made by Julian Schwinger at Harvard.  Schwinger was a mathematical prodigy like Feynman, and like Feynman he had grown up in New York City, but they came from very different neighborhoods and had very different styles.  Schwinger was a formalist who pursued everything with precision and mathematical rigor.  He lectured calmly without notes in flawless presentations.  Feynman, on the other hand, did his physics by feel.  He made intuitive guesses and checked afterwards if they were right, testing ideas through trial and error.  His lectures ranged widely, with great energy, without structure, following wherever the ideas might lead.  This difference in approach and style between Schwinger and Feynman would have embarrassing consequences at the upcoming sequel to the Shelter Island conference, to be held in late March 1948 at a resort in the Pocono Mountains of Pennsylvania.

The Conference in the Poconos

The Pocono conference was poised to be for the theorists Schwinger and Feynman what Shelter Island had been for the experimentalists Rabi and Lamb: a chance to drop bombshells.  There was a palpable buzz leading up to the conference, with advance word coming from Schwinger about his successful calculation of the g-factor of the electron and the Lamb shift.  In addition to the attendees who had been at Shelter Island, the Pocono conference was attended by Bohr and Dirac, two of the giants who had invented quantum mechanics.  Schwinger presented first.  He had developed a rigorous mathematical method to remove the infinities from QED, enabling him to make detailed calculations of the QED corrections—a significant achievement—but the method was terribly complicated and tedious.  His presentation went on for many hours in his carefully crafted style, without notes, delivered like a speech.  Even so, the audience grew restless, and whenever Schwinger tried to justify his work on physical grounds, Bohr would speak up and arguments among the attendees would ensue, after which Schwinger would say that all would become clear at the end.  Finally, he came to the end, where only Fermi and Bethe had followed him.  The rest of the audience was in a daze.

Feynman was nervous.  It had seemed to him that Schwinger’s talk had gone badly, despite Schwinger’s careful preparation.  Furthermore, the audience was spent and not in a mood to hear anything challenging.  Bethe suggested that if Feynman stuck to the math instead of the physics, the audience might not interrupt so much.  So Feynman restructured his talk in the short break before he was to begin.  Unfortunately, Feynman’s strength was physical intuition, and although he was no slouch at math, he was guided by visualization and by trial and error.  Many of the steps in his method worked (he knew this because they gave the correct answers and because he could “feel” they were correct), but he did not have all the mathematical justifications.  What he did have was a completely new way of thinking about quantum electromagnetic interactions and a new way of making calculations that was far simpler and faster than Schwinger’s.  The challenge was that he relied on space-time graphs in which “unphysical” things were allowed to occur, and in fact were required to occur, as part of the sum over many histories of his path integrals.  For instance, a key element in the approach was allowing electrons to travel backwards in time as positrons.  Likewise, a process in which an electron and positron annihilate into a single photon, which then decays back into an electron-positron pair, is forbidden by energy and momentum conservation, but it is a possible history that must be added to the sum.  As long as the time between the photon’s emission and decay is short enough to satisfy Heisenberg’s uncertainty principle, there is no violation of physics.

Feynman’s first published “Feynman diagram” in the Physical Review (1949) [3].  (Photograph reprinted from Galileo Unbound, D. Nolte, Oxford University Press, 2018.)

None of this was familiar to the audience, and the talk quickly derailed.  Dirac pestered him with questions that he tried to deflect, but Dirac persisted like a raven pecking at dead meat.  A question was raised about the Pauli exclusion principle, about whether an orbital could have three electrons instead of the allowed two, and Feynman said that it could (all histories were possible and had to be summed over), an answer that dismayed the audience.  Finally, as Feynman was drawing another of his space-time graphs showing electrons as lines, Bohr rose to his feet and asked whether Feynman had forgotten Heisenberg’s uncertainty principle, which made it impossible even to talk about an electron trajectory.  It was hopeless.  Bohr had not understood that the diagrams were a shorthand notation, not to be taken literally.  The audience gave up, and so did Feynman.  The talk just fizzled out.  It was a disaster.

At the close of the Pocono conference, Schwinger was the hero, and his version of QED appeared to be the right approach [4].  Oppenheimer, the reigning king of physics, former head of the successful Manhattan Project and newly selected to head the prestigious Institute for Advanced Study at Princeton, had been thoroughly impressed by Schwinger and thoroughly disappointed by Feynman.  When Oppenheimer returned to Princeton, a letter was waiting for him from a colleague he knew in Japan by the name of Sin-Itiro Tomonaga [5].  In the letter, Tomonaga described work he had completed, unbeknownst to anyone in the US or Europe, on a renormalized QED.  His results and approach were similar to Schwinger’s but had been accomplished independently, in the virtual vacuum that surrounded Japan after the end of the war.  His results cemented the Schwinger-Tomonaga approach to QED, further elevating it above the oddball Feynman scratchings.  Oppenheimer immediately circulated the news of Tomonaga’s success to all the attendees of the Pocono conference.  It appeared that Feynman was destined to be a footnote, but the prevailing winds were about to change.  Retreating to Cornell, Feynman found in defeat the motivation to establish his simplified yet powerful version of quantum electrodynamics.  He published his approach in 1948, a method that surpassed Schwinger’s and Tomonaga’s in conceptual clarity and ease of calculation.  This work would catapult Feynman to the pinnacles of fame, making him, after Einstein, the physicist whose name was most recognizable to the man in the street in the latter half of the twentieth century (helped by a series of books that mythologized his exploits [6]).



[1] See Chapter 8, “On the Quantum Footpath,” in D. Nolte, Galileo Unbound (Oxford University Press, 2018).

[2] Schweber, S. S., QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga (Princeton University Press, 1994).

[3] Feynman, R. P., “Space-Time Approach to Quantum Electrodynamics,” Physical Review 76(6): 769-789 (1949).

[4] Schwinger, J., “On Quantum-Electrodynamics and the Magnetic Moment of the Electron,” Physical Review 73(4): 416-417 (1948).

[5] Tomonaga, S., “On Infinite Field Reactions in Quantum Field Theory,” Physical Review 74(2): 224-225 (1948).

[6] Feynman, R. P., Surely You’re Joking, Mr. Feynman!: Adventures of a Curious Character, with Ralph Leighton, ed. Edward Hutchings (W. W. Norton, 1985).

A Wealth of Motions: Six Generations in the History of the Physics of Motion


Since Galileo launched his trajectory, there have been six broad generations that have traced the continuing development of concepts of motion. These are: 1) Universal Motion; 2) Phase Space; 3) Space-Time; 4) Geometric Dynamics; 5) Quantum Coherence; and 6) Complex Systems. These six generations were not all sequential, many evolving in parallel over the centuries, borrowing from each other, and there surely are other ways one could divide up the story of dynamics. But these six generations capture the grand concepts and the crucial paradigm shifts that are Galileo’s legacy, taking us from Galileo’s trajectory to the broad expanses across which physicists practice physics today.

Universal Motion emerged as a new concept when Isaac Newton proposed his theory of universal gravitation, by which the force that causes apples to drop from trees is the same force that keeps the Moon in motion around the Earth, and the Earth in motion around the Sun. This was a bold step because even in Newton’s day, some still believed that celestial objects obeyed different laws. For instance, it was only through the work of Edmond Halley, a contemporary and friend of Newton’s, that comets were understood to travel in elliptical orbits obeying the same laws as the planets. Universal Motion included ideas of momentum from the start, while concepts of energy and potential, which fill out this first generation, took nearly a century to develop in the hands of many others, like Leibniz and Euler and the Bernoullis. This first generation was concluded by the masterwork of the Italian-French mathematician Joseph-Louis Lagrange, who also planted the seed of the second generation.

The second generation, culminating in the powerful and useful concept of Phase Space, also took more than a century to mature. It began when Lagrange divorced dynamics from geometry, establishing generalized coordinates as surrogates for directions in space. Ironically, by discarding geometry, Lagrange laid the foundation for generalized spaces, because generalized coordinates could be anything, coming in any units and in any number, each coordinate having its companion velocity, doubling the dimension for every degree of freedom. The Austrian physicist Ludwig Boltzmann expanded the number of dimensions to the scale of Avogadro’s number of particles, and he discovered the conservation of phase-space volume, an invariant of phase space that stays the same even as 10²³ atoms (on the order of Avogadro’s number) in an ideal gas follow their random trajectories. The idea of phase space set the stage for statistical mechanics and for a new probabilistic viewpoint of mechanics that would extend into chaotic motions.

The French mathematician Henri Poincaré got a glimpse of chaotic motion in 1890 as he rushed to correct an embarrassing mistake in his manuscript that had just won a major international prize. The mistake was mathematical, but the consequences were profoundly physical, beginning the long road to a theory of chaos that simmered, without boiling, for nearly seventy years until computers became common lab equipment. Edward Lorenz of MIT, working on models of the atmosphere in the early 1960s, used one of the earliest scientific computers to expose the beauty and the complexity of chaotic systems. He discovered that his computer simulations were exponentially sensitive to the initial conditions, and the joke became that a butterfly flapping its wings in China could cause hurricanes in the Atlantic. In his computer simulations, Lorenz discovered what today is known as the Lorenz butterfly, an example of something called a “strange attractor”. But the term chaos is a bit of a misnomer, because chaos theory is primarily about finding what things are shared in common, or are invariant, among seemingly random-acting systems.

The third generation in concepts of motion, Space-Time, is indelibly linked with Einstein’s special theory of relativity, but Einstein was not its originator. Space-time was the brainchild of the gifted but short-lived Prussian mathematician Hermann Minkowski, who had been attracted from Königsberg to the mathematical powerhouse at the University of Göttingen, Germany around the turn of the 20th century by David Hilbert. Minkowski was an expert in invariant theory, and when Einstein published his special theory of relativity in 1905 to explain the Lorentz transformations, Minkowski recognized a subtle structure buried inside the theory. This structure was related to Riemann’s metric theory of geometry, but it had the radical feature that time appeared as one of the geometric dimensions. This was a drastic departure from all former theories of motion, which had always separated space and time: trajectories had been points in space that traced out a continuous curve as a function of time. But in Minkowski’s mind, trajectories were invariant curves, and although their mathematical representation changed with changing point of view (relative motion of observers), the trajectories existed in a separate unchanging reality, not mere functions of time, but eternal. He called these trajectories world lines. They were static structures in a geometry that is today called Minkowski space. Einstein at first was highly antagonistic to this new view, but he relented, and later he so completely adopted space-time in his general theory that today Minkowski is almost forgotten, his echo heard softly in expressions of the Minkowski metric that is the background to Einstein’s warped geometry that bends light and captures errant spacecraft.

The fourth generation in the development of concepts of motion, Geometric Dynamics, began when an ambitious French physicist with delusions of grandeur, the historically ambiguous Pierre Louis Maupertuis, returned from a scientific boondoggle to Lapland, where he measured the flatness of the Earth in defense of Newtonian physics over Cartesian. Skyrocketed to fame by the success of the expedition, he began his second act by proposing the Principle of Least Action, a principle by which all motion seeks to be most efficient by taking a geometric path that minimizes a physical quantity called action. In this principle, Maupertuis saw both a universal law that could explain all of physical motion and a path for himself to gain eternal fame in the company of Galileo and Newton. Unfortunately, his high hopes were dashed through personal conceit and nasty intrigue, and most physicists today don’t even recognize his name. But the idea of least action struck a deep chord that reverberates throughout physics. It is the first and fundamental example of a minimum principle, of which there are many. For instance, minimum potential energy identifies points of system equilibrium, and paths of minimum distance are geodesics. In dynamics, making stationary the time integral of the difference between kinetic and potential energies identifies the dynamical paths of trajectories, and minimization of distance through space-time warped by mass and energy density identifies the paths of falling objects.

Maupertuis’ fundamentally important idea was picked up by Euler and Lagrange, and later expanded through the language of differential geometry. This was the language of Bernhard Riemann, a gifted and shy German mathematician whose mathematical language was adopted by physicists to describe motion as a geodesic, the shortest path like a great-circle route on the Earth, in an abstract dynamical space defined by kinetic energy and potentials. In this view, it is the geometry of the abstract dynamical space that imposes Galileo’s simple parabolic form on freely falling objects. Einstein took this viewpoint farther than any before him, showing how mass and energy warped space and how free objects near gravitating bodies move along geodesic curves defined by the shape of space. This brought trajectories to a new level of abstraction, as space itself became the cause of motion. Prior to general relativity, motion occurred in space. Afterwards, motion was caused by space. In this sense, gravity is not a force, but is like a path down which everything falls.

The fifth generation of concepts of motion, Quantum Coherence, increased the abstraction yet again in the comprehension of trajectories, ushering in difficult concepts like wave-particle duality and quantum interference. Quantum interference underlies many of the counter-intuitive properties of quantum systems, including the possibility for a quantum system to be in two or more states at the same time, and for quantum computers to crack codes long thought unbreakable. But this new perspective came with a cost, introducing fundamental uncertainties that are locked in a battle of trade-offs: as one measurement becomes more certain, others become more uncertain.

Einstein distrusted Heisenberg’s uncertainty principle; it was not that he disagreed with its veracity, but he felt it was more a statement of ignorance than of fundamental unknowability. In support of Einstein, Schrödinger devised a thought experiment, meant as a reductio ad absurdum, in which a cat is placed in a box with a vial of poison that is broken if a quantum particle decays. The cruel fate of Schrödinger’s cat, who might or might not be poisoned, hinges on whether or not someone opens the lid and looks inside. Once the box is opened, there is one world in which the cat is alive and another world in which the cat is dead. These two worlds spring into existence when the box is opened—a bizarre state of affairs from the point of view of a pragmatist. This is where Richard Feynman jumped into the fray and redefined the idea of a trajectory in a radically new way, showing that a quantum trajectory is not a single path, like Galileo’s parabola, but the combined effect of the quantum particle taking all possible paths simultaneously. Feynman established this new view of quantum trajectories in his doctoral dissertation under the direction of John Archibald Wheeler at Princeton. By adapting Maupertuis’ Principle of Least Action to quantum mechanics, Feynman showed how every particle takes every possible path—simultaneously—every path interfering in such a way that only the path with the most constructive interference is observed. In the quantum view, the deterministic trajectory of the cannonball evaporates into a cloud of probable trajectories.
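Feynman’s sum over histories can be caricatured numerically. The toy model below is an illustrative sketch of the stationary-phase idea, not anything from the text: it fixes a free particle’s endpoints, labels each “history” by a single intermediate position, and sums the phase factors exp(iS/ħ). Histories near the classical straight-line path add coherently, while those far from it oscillate wildly and cancel:

```python
import cmath

# Toy sum over histories for a free particle of mass m = 1 going from
# x = 0 at t = 0 to x = 1 at t = T. Each "history" is two straight
# segments through an intermediate position x at t = T/2, contributing
# a phase exp(i * S(x) / hbar). A small hbar is chosen to exaggerate
# the cancellation of histories far from the classical path (x = 0.5).
m, T, hbar = 1.0, 1.0, 0.02

def action(x):
    # Kinetic action of the broken path 0 -> x -> 1 (free particle).
    v1 = (x - 0.0) / (T / 2)
    v2 = (1.0 - x) / (T / 2)
    return 0.5 * m * v1**2 * (T / 2) + 0.5 * m * v2**2 * (T / 2)

def window_amplitude(lo, hi, n=2000):
    # |sum of phase factors| over a window of intermediate positions.
    dx = (hi - lo) / n
    total = sum(cmath.exp(1j * action(lo + (k + 0.5) * dx) / hbar)
                for k in range(n)) * dx
    return abs(total)

near = window_amplitude(0.4, 0.6)   # histories near the classical path
far = window_amplitude(1.4, 1.6)    # histories far from the classical path

print(near, far)   # the near window dominates: constructive interference
```

The contribution from the window around the classical path comes out more than an order of magnitude larger than an equal-width window far from it, which is the stationary-phase mechanism by which the classical trajectory emerges from the quantum sum.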

In our current complex times, the sixth generation in the evolution of concepts of motion explores Complex Systems. Lorenz’s butterfly has more to it than butterflies, because Life is the greatest complex system of our experience and our existence. We are the end result of a cascade of self-organizing events that began half a billion years after Earth coalesced out of the nebula, leading to the emergence of consciousness only about 100,000 years ago, a fact that lets us sit here now and wonder about it all. That we are conscious is perhaps no accident. Ever since the first amino acids coagulated in a muddy pool, we have been marching steadily uphill, toward a high mountain peak in a fitness landscape. Every advantage a species gained over its environment and over its competitors exerted a kind of pressure on all the other species in the ecosystem that drove them to gain their own advantage.

The modern field of evolutionary dynamics spans a wide range of scales across a wide range of abstractions. It treats genes and mutations on DNA in much the same way it treats the slow drift of languages and the emergence of new dialects. It treats games and social interactions the same way it does the evolution of cancer. Evolutionary dynamics is the direct descendant of the chaos theory that turned butterflies into hurricanes, but the topics it treats are special to us as an evolved species, and as potential victims of disease. The theory has evolved its own visualizations, such as the branches in the tree of life and the high mountain tops in fitness landscapes separated by deep valleys. Evolutionary dynamics draws, in a fundamental way, on dynamic processes in high dimensions, without which it would be impossible to explain how something as complex as human beings could have arisen from random mutations.

These six generations in the development of dynamics are not likely to stop, and future generations may arise as physicists pursue the eternal quest for the truth behind the structure of reality.