Einstein’s theory of gravity came from a simple happy thought that occurred to him as he imagined an unfortunate worker falling from a roof, losing hold of his hammer, only to find both the hammer and himself floating motionless relative to each other as if gravity had ceased to exist. With this one thought, Einstein realized that the falling (i.e. accelerating) reference frame was in fact an inertial frame, and hence all the tricks that he had learned and invented to deal with inertial relativistic frames could apply just as well to accelerating frames in gravitational fields.
Armed with this new perspective, one of Einstein’s earliest discoveries was that gravity must bend light paths. The phenomenon is fundamentally post-Newtonian, because there can be no Newtonian force of gravity on a massless photon, yet Einstein’s argument for why gravity must bend light is so simple as to seem self-evident. The prediction was confirmed by Arthur Eddington during the solar eclipse of 1919, launching Einstein to world-wide fame. It is also demonstrated by the beautiful gravitational lensing phenomenon of Einstein arcs: the distorted images of bright distant light sources caused by an intervening massive object, like a galaxy or galaxy cluster, that bends the light rays. A number of these arcs are seen in images of the Abell cluster of galaxies in Fig. 1.
Gravitational lensing (and microlensing) has become a major tool of discovery in astrophysics, applied to the study of quasars, dark matter and even the search for exoplanets. However, as soon as Einstein conceived of gravitational lensing, in 1912, he abandoned the idea as too small and too unlikely ever to be useful, much as he abandoned the idea of gravitational waves in 1915 as being too small ever to detect. It was only through the persistence of an amateur Czech scientist twenty years later that Einstein reluctantly agreed to publish his calculations on gravitational lensing.
The History of Gravitational Lensing
In 1912, only a few years after his “happy thought”, and fully three years before he published his definitive work on General Relativity, Einstein derived how light would be affected by a massive object, causing light from a distant source to be deflected like a lens. The historian of physics Jürgen Renn discovered these derivations in Einstein’s notebooks in 1996, while at the Max Planck Institute for the History of Science in Berlin. However, Einstein had also calculated the magnitude of the effect, dismissed it as too small, and never published it.
Years later, in 1936, Einstein received a visit from Rudi Mandl, a Czech electrical engineer and amateur scientist who had obtained a small stipend from the Czech government to visit Einstein at the Institute for Advanced Study in Princeton. Mandl had conceived of the possibility of gravitational lensing and wished to bring it to Einstein’s attention, thinking that the master would certainly know what to do with the idea. Einstein was obliging, redoing his calculations of 1912 and obtaining once again the results that made him believe the effect would be too small to be seen. However, Mandl was persistent and pressed Einstein to publish the results, which he did. In his submission letter to the editor of Science, Einstein stated, “Let me also thank you for your cooperation with the little publication, which Mister Mandl squeezed out of me. It is of little value, but it makes the poor guy happy”. Einstein’s pessimism was based on his thinking that isolated stars would be the only sources of the gravitational lens (he did not “believe” in black holes), but in 1937 Fritz Zwicky at Caltech (a gadfly genius) suggested that the newly discovered phenomenon of “galaxy clusters” might provide the massive gravity required to produce the effect, although a distant source would need to be extremely bright to be visible.
Potential sources were discovered in the 1960s, when radio telescopes revealed quasi-stellar objects (known as quasars) that are extremely bright and extremely far away. Quasars also appear in the visible range, and in 1979 astronomers using the telescope at the Kitt Peak Observatory in Arizona discovered a twin quasar: two quasars very close together that shared identical spectral fingerprints. The astronomers realized that it could be a twin image of a single quasar caused by gravitational lensing, which they published as a likely explanation. Although the finding was originally controversial, the twin image was later confirmed, and many additional examples of gravitational lensing have since been discovered.
The Optics of Gravity and Light
Gravitational lenses are terrible optical instruments. A good imaging lens has two chief properties: 1) It produces increasing delay on a wavefront as the radial distance from the optic axis decreases; and 2) it deflects rays with increasing deflection angle as the radial distance of a ray increases away from the optic axis (the center of the lens). Both properties are part of the same effect: the conversion, by a lens, of an incident plane wave into a converging spherical wave. A third property of a good lens ensures minimal aberrations of the converging wave: a quadratic dependence of wavefront delay on radial distance from the optic axis. For instance, a parabolic lens produces a diffraction-limited focal spot.
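These two properties of an ideal thin lens can be written compactly in standard paraxial form (a sketch for reference, with f the focal length and r the radial distance from the optic axis):

```latex
% Quadratic wavefront delay (maximal on axis) and linear ray deflection
W(r) \;=\; W_0 - \frac{r^2}{2f},
\qquad
\theta(r) \;=\; \frac{r}{f}
```

The delay W(r) is greatest on the axis and falls off quadratically, while the deflection angle grows linearly with r, which together convert a plane wave into a converging spherical wave.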
Now consider the optical effects of gravity around a black hole. One of Einstein’s chief discoveries during his early investigations into the effects of gravity on light is the analogy of warped space-time as having an effective refractive index. Light propagates through space affected by gravity as if there were a refractive index associated with the gravitational potential. In a previous blog on the optics of gravity, I showed the simple derivation of the refractive effects of gravity on light based on the Schwarzschild metric applied to a null geodesic of a light ray. The effective refractive index near a black hole is
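(The expression itself did not survive the transfer to this page. As a sketch consistent with the divergence at the Schwarzschild radius described below, and not necessarily the exact form from the earlier post, it can be written:)

```latex
n(r) \;=\; \frac{1}{\,1 - R_S/r\,},
\qquad
R_S = \frac{2GM}{c^2}
```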
This effective refractive index diverges at the Schwarzschild radius of the black hole. It produces the maximum delay, not on the optic axis as for a good lens, but at the finite distance RS. Furthermore, the maximum deflection also occurs at RS, and the deflection decreases with increasing radial distance. Both of these properties of gravitational lensing are opposite to the properties of a good lens. For this reason, the phrase “gravitational lensing” is a bit of a misnomer. Gravitating bodies certainly deflect light rays, but the resulting optical behavior is far from that of an imaging lens.
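As a numerical illustration of that last point, here is a short sketch using the standard weak-field deflection formula α = 2R_S/b (equivalently 4GM/c²b; this formula is assumed here rather than quoted from the post). The deflection falls off with impact parameter b, the opposite of the linear increase r/f of a good lens:

```python
# Deflection by a point mass: the standard weak-field result alpha = 2*Rs/b.
# Note that deflection *decreases* with impact parameter b -- the opposite
# of an ideal lens, whose deflection r/f *increases* away from the axis.

def schwarzschild_radius(M, G=6.674e-11, c=2.998e8):
    """Schwarzschild radius in meters for a mass M in kilograms."""
    return 2*G*M/c**2

def deflection(b, Rs):
    """Weak-field deflection angle (radians) at impact parameter b >> Rs."""
    return 2*Rs/b

Rs_sun = schwarzschild_radius(1.989e30)   # ~2.95 km for the Sun
R_sun = 6.96e8                            # solar radius in meters
alpha = deflection(R_sun, Rs_sun)         # grazing ray at the solar limb
print(Rs_sun, alpha)   # ~8.5e-6 rad ~ 1.75 arcsec, Eddington's 1919 result
```

A ray twice as far from the Sun is deflected only half as much, which is why the phrase “gravitational lensing” is a misnomer for imaging purposes.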
The path of a ray from a distant quasar, through the thin gravitational lens of a galaxy, and intersecting the location of the Earth, is shown in Fig. 2. The location of the quasar is a distance R from the “optic axis”. The un-deflected angular position is θ0, and with the intervening galaxy the image appears at the angular position θ. The angular magnification is therefore M = θ/θ0.
The deflection angles are related through
where b is the “impact parameter”
These two equations are solved to give an expression that relates the unmagnified angle θ0 to the magnified angle θ as
where θE is the angular size of the Einstein ring when the source is on the optic axis. The quadratic equation has two solutions, giving two images of the distant quasar. This is the origin of the “double image” that led to the first discovery of gravitational lensing in 1979.
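Since the equations themselves did not survive the transfer to this page, here is the standard point-lens form they reduce to (a reconstruction in conventional notation, taking L1 as the Earth-lens distance and L2 as the lens-source distance from Fig. 2):

```latex
\theta_0 \;=\; \theta - \frac{\theta_E^2}{\theta},
\qquad
\theta_{\pm} \;=\; \tfrac{1}{2}\!\left(\theta_0 \pm \sqrt{\theta_0^2 + 4\theta_E^2}\right),
\qquad
\theta_E \;=\; \sqrt{\frac{2 R_S L_2}{L_1 (L_1 + L_2)}}
```

The two roots θ± are the two images; when θ0 = 0 they merge into the ring at ±θE.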
When the distant quasar is on the optic axis, then θ0 = 0 and the deflection of the rays produces, not a double image, but an Einstein ring with an angular size of θE. For typical lensing objects, the angular size of the Einstein ring is tens of microradians. As the distance R of the source from the axis decreases, the angular magnification diverges as
But this divergence is more a statement of the bad lens behavior than of actual image size. Because the gravitational lens is inverted (with greater deflection closer to the optic axis) compared to an ideal thin lens, it produces a virtual image ring that is closer than the original object, as in Fig. 3.
The location of the virtual image behind the gravitational lens (when the quasar is on the optic axis) is obtained from
If the quasar is much further from the lens than the Earth, then the image location is zi = -L1/2, or behind the lens by half the distance from the Earth to the lens. The longitudinal magnification is then
Note that while the transverse (angular) magnification diverges as the object approaches the optic axis, the longitudinal magnification remains finite but always greater than unity.
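The two-image behavior above can be checked numerically. This is a small sketch based on the quadratic point-lens equation θ² − θ0·θ − θE² = 0 described earlier; the specific angle values below are made up for illustration:

```python
import numpy as np

def image_angles(theta0, thetaE):
    """Two roots of theta**2 - theta0*theta - thetaE**2 = 0:
    the two image angles of a point source behind a point lens."""
    disc = np.sqrt(theta0**2 + 4*thetaE**2)
    return 0.5*(theta0 + disc), 0.5*(theta0 - disc)

thetaE = 20e-6                        # Einstein-ring angle: tens of microradians
tp, tm = image_angles(5e-6, thetaE)   # source slightly off axis: a double image
tp0, tm0 = image_angles(0.0, thetaE)  # source on axis: images merge into a
                                      # ring at +/- thetaE
print(tp, tm, tp0, tm0)
```

One image appears on each side of the lens, and on axis the two roots collapse to ±θE, the Einstein ring.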
The Caustic Curves of Einstein Rings
Because gravitational lenses have such severe aberration relative to an ideal lens, and because the angles are so small, an alternate approach to understanding the optics of gravity is through the theory of light caustics. In a previous blog on the optics of caustics I described how sets of deflected rays of light become enclosed in envelopes that produce regions of high and low intensity. These envelopes are called caustics. Gravitational light deflection also causes caustics.
In addition to envelopes, it is also possible to trace the time delays caused by gravity on wavefronts. In the regions of the caustic envelopes, these wavefronts can fold back onto themselves so that different parts of the image arrive at different times coming from different directions.
An example of gravitational caustics is shown in Fig. 4. Rays are incident vertically on a gravitational thin lens, which deflects the rays so that they overlap in the region below the lens. The red curves are selected wavefronts at three successively later times. The gravitational potential causes a time delay on the propagating front, with greater delays in regions of stronger gravitational potential. The envelope function that is tangent to the rays is called the caustic, here shown as the dense blue mesh. In this case there is a cusp in the caustic near z = -1 below the lens. The wavefronts become multiple-valued past the cusp.
The intensity of the distant object past the lens is concentrated near the caustic envelope. The intensity of the caustic at z = -6 is shown in Fig. 5. The ring structure is the cross-sectional spatial intensity at the fixed observation plane, but the transform to an angular image is one-to-one, so the caustic intensity distribution is also similar to the view of the Einstein ring from a position at z = -6 on the optic axis.
The gravitational potential is a function of the mass distribution in the gravitational lens. A different example, with a flatter distribution of mass near the optic axis, is shown in Fig. 6. There are multiple caustics in this case, with multi-valued wavefronts. Because caustics are sensitive to the mass distribution in the gravitational lens, astronomical observations of gravitational caustics can be used to back out the mass distribution, including dark matter or even distant exoplanets.
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 30 19:47:31 2021
@author: David Nolte
Introduction to Modern Dynamics, 2nd edition (Oxford University Press, 2019)
"""
import numpy as np
from matplotlib import pyplot as plt

def refindex(x):
    # effective refractive index profile of the thin gravitational lens
    n = n0/(1 + abs(x)**expon)**(1/expon)
    return n

delt = 0.001
Ly = 10
Lx = 5
n0 = 1
expon = 2   # adjust this from 1 to 10
delx = 0.01

rng = int(Lx/delx)
x = delx*np.linspace(-rng, rng, 2*rng + 1)
n = refindex(x)
dndx = np.diff(n)/np.diff(x)

plt.figure(1)
lines = plt.plot(x, n)
lines2 = plt.plot(x[:-1], dndx)

plt.figure(2)
Nloop = 160
xd = np.zeros((Nloop, 3))
yd = np.zeros((Nloop, 3))
for loop in range(Nloop):
    xp = -Lx + 2*Lx*(loop/Nloop)

    # incident ray above the lens plane
    plt.plot([xp, xp], [2, 0], 'b', linewidth=0.25)

    # deflection angle from the index gradient (central difference)
    thet = (refindex(xp + delt) - refindex(xp - delt))/(2*delt)

    # deflected ray below the lens plane
    xb = xp + np.tan(thet)*Ly
    plt.plot([xp, xb], [0, -Ly], 'b', linewidth=0.25)

    # three successive wavefronts delayed by the gravitational potential
    for sloop in range(3):
        delay = refindex(xp) - n0
        dis = 0.75*(sloop + 1)**2 - delay
        xd[loop, sloop] = xp + np.sin(thet)*dis
        yd[loop, sloop] = -dis*np.cos(thet)

for sloop in range(3):
    plt.plot(xd[:, sloop], yd[:, sloop], 'r', linewidth=0.5)

plt.show()
J. Renn, T. Sauer and J. Stachel, “The Origin of Gravitational Lensing: A Postscript to Einstein’s 1936 Science Paper,” Science 275, 184 (1997)
A. Einstein, “Lens-Like Action of a Star by the Deviation of Light in the Gravitational Field,” Science 84, 506 (1936)
J. Wambsganss, “Gravitational lensing as a powerful astrophysical tool: Multiple quasars, giant arcs and extrasolar planets,” Annalen der Physik 15, 43-59 (2006) (an excellent review article on the topic)
Snorkeling above a shallow reef on a clear sunny day transports you to an otherworldly galaxy of spectacular deep colors and light reverberating off of the rippled surface. Playing across the underwater floor of the reef is a fabulous light show of bright filaments entwining and fluttering, creating random mesh networks of light and dark. These same patterns appear on the bottom of swimming pools in summer and in deep fountains in parks.
Something similar happens when a bare overhead light reflects from the sides of a circular glass of water. The pattern no longer moves, but a dazzling filament splays across the bottom of the glass with a sharp bright cusp at the center. These bright filaments of light have an age-old name, caustics, meaning burning, as in burning with light. The study of caustics goes back to Archimedes of Syracuse and his apocryphal burning mirrors that are supposed to have torched the invading triremes of the Roman navy in 212 BC.
Caustics in optics are concentrations of light rays that form bright filaments, often with cusp singularities. Mathematically, they are envelope curves that are tangent to a set of lines. Cata-caustics are caustics caused by light reflecting from curved surfaces. Dia-caustics are caustics caused by light refracting from transparent curved materials.
From Leonardo to Huygens
Even after Archimedes, burning mirrors remained an interest for a broad range of scientists, artists and engineers. Leonardo Da Vinci took an interest around 1503 – 1506 when he drew reflected caustics from a circular mirror in his many notebooks.
In the decades after Newton and Leibniz invented the calculus, a small cadre of mathematicians strove to apply the new method to understand aspects of the physical world. At a time when Newton had left the calculus behind to follow more arcane pursuits, Leibniz, Jakob and Johann Bernoulli, Guillaume de l’Hôpital, Émilie du Châtelet and Walter von Tschirnhaus were pushing notation reform (mainly following Leibniz) to make the calculus easier to learn and use, as well as finding new applications, of which there were many.
Ehrenfried Walter von Tschirnhaus (1651 – 1708) was a German mathematician and physician and a lifelong friend of Leibniz, whom he met in Paris in 1675. He was one of only five mathematicians to provide a solution to Johann Bernoulli’s brachistochrone problem. One of the recurring interests of von Tschirnhaus, which he revisited throughout his career, was burning glasses and mirrors. A burning glass is a high-quality magnifying lens that brings the focus of the sun to a fine point to burn or anneal various items. Burning glasses were used to heat small items for manufacture or for experimentation; for instance, Priestley and Lavoisier routinely used burning glasses in their chemistry experiments. Low optical aberrations were required for the lenses to bring the light to the finest possible focus, so the study of optical focusing was an important topic both academically and practically. Tschirnhaus had his own laboratory to build and test burning mirrors, and he became aware of the cata-caustic patterns of light reflected from a circular mirror or glass surface. Given his parallel interest in the developing calculus methods, he published a paper in Acta Eruditorum in 1682 that constructed the envelope function created by the cata-caustics of a circle. However, Tschirnhaus did not produce the analytic function–that was provided by Johann Bernoulli ten years later, in 1692.
Johann Bernoulli had a stormy career and a problematic personality, but he was brilliant even among the bountiful Bernoulli clan. Using methods of tangents, he found the analytic solution of the caustic of the circle. He did this by stating the general equation for all reflected rays and then finding when their y values are independent of changing angle; in other words, by using the principle of stationarity, which would later become a potent tool in the hands of Lagrange as he developed Lagrangian physics.
The equation for the reflected ray, expressing y as a function of x for a given angle α in Fig. 5, is
The condition of the caustic envelope requires the change in y with respect to the angle α to vanish while treating x as a constant. This is a partial derivative, and Johann Bernoulli was making an early use of this method in 1692 to ensure the stationarity of y with respect to the changing angle. The partial derivative is
This is solved to give
Plugging this into the equation at the top equation above yields
These last two expressions for x and y in terms of the angle α are a parametric representation of the caustic. Combining them gives the solution to the caustic of the circle
The square root provides the characteristic cusp at the center of the caustic.
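Bernoulli’s own expressions were lost in the transfer to this page, but for reference, the cata-caustic of a circle of radius a illuminated by parallel rays is, in modern parametric form, the nephroid (a standard result, not Bernoulli’s original parametrization):

```latex
x(\alpha) \;=\; \frac{a}{4}\left(3\cos\alpha - \cos 3\alpha\right),
\qquad
y(\alpha) \;=\; \frac{a}{4}\left(3\sin\alpha - \sin 3\alpha\right)
```

The cusp sits at a distance a/2 from the center, which is the bright point seen at the bottom of the glass.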
Python Code: raycaustic.py
There are lots of options here. Try them all … then add your own!
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 16 16:44:42 2021
D. D. Nolte, Optical Interferometry for Biology and Medicine (Springer, 2011)
"""
import numpy as np
from matplotlib import pyplot as plt

# model_case 1 = cosine
# model_case 2 = circle
# model_case 3 = square root
# model_case 4 = inverse power law
# model_case 5 = ellipse
# model_case 6 = secant
# model_case 7 = parabola
# model_case 8 = Cauchy
model_case = int(input('Input Model Case (1-8)'))

if model_case == 1:
    model_title = 'cosine'
    xleft = -np.pi
    xright = np.pi
    ybottom = -1
    ytop = 1.2
elif model_case == 2:
    model_title = 'circle'
    xleft = -1
    xright = 1
    ybottom = -1
    ytop = .2
elif model_case == 3:
    model_title = 'square-root'
    xleft = 0
    xright = 4
    ybottom = -2
    ytop = 2
elif model_case == 4:
    model_title = 'Inverse Power Law'
    xleft = 1e-6
    xright = 4
    ybottom = 0
    ytop = 4
elif model_case == 5:
    model_title = 'ellipse'
    a = 0.5
    b = 2
    xleft = -b
    xright = b
    ybottom = -a
    ytop = 0.5*b**2/a
elif model_case == 6:
    model_title = 'secant'
    xleft = -np.pi/2
    xright = np.pi/2
    ybottom = 0.5
    ytop = 4
elif model_case == 7:
    model_title = 'Parabola'
    xleft = -2
    xright = 2
    ybottom = 0
    ytop = 4
elif model_case == 8:
    model_title = 'Cauchy'
    xleft = 0
    xright = 4
    ybottom = 0
    ytop = 4

def feval(x):
    # the reflecting surface profile for the chosen model
    if model_case == 1:
        y = -np.cos(x)
    elif model_case == 2:
        y = -np.sqrt(1 - x**2)
    elif model_case == 3:
        y = -np.sqrt(x)
    elif model_case == 4:
        y = x**(-0.75)
    elif model_case == 5:
        y = -a*np.sqrt(1 - x**2/b**2)
    elif model_case == 6:
        y = 1.0/np.cos(x)
    elif model_case == 7:
        y = 0.5*x**2
    elif model_case == 8:
        y = 1/(1 + x**2)
    return y

plt.figure(1)
plt.title(model_title)
xx = np.arange(xleft, xright, 0.01)
yy = feval(xx)
lines = plt.plot(xx, yy)

delx = 0.001
N = 75
for i in range(1, N):   # interior points only, so x +/- delx/2 stays in domain
    x = xleft + (xright - xleft)*i/N

    # surface height and slope (central difference)
    val = feval(x)
    valp = feval(x + delx/2)
    valm = feval(x - delx/2)
    deriv = (valp - valm)/delx
    phi = np.arctan(deriv)

    # slope of the reflected ray: a vertical ray rotated by twice the surface angle
    slope = np.tan(np.pi/2 + 2*phi)

    # terminate the reflected ray at the top or bottom of the frame
    if np.abs(deriv) < 1:
        xf = (ytop - val + slope*x)/slope
        yf = ytop
    else:
        xf = (ybottom - val + slope*x)/slope
        yf = ybottom

    plt.plot([x, x], [ytop, val], linewidth=0.5)    # incident vertical ray
    plt.plot([x, xf], [val, yf], linewidth=0.5)     # reflected ray

plt.show()
The Dia-caustics of Swimming Pools
A caustic is understood mathematically as the envelope function of multiple rays that converge in the Fourier domain (angular deflection measured at far distances). These are points of mathematical stationarity, in which the ray density is invariant to first order in deviations in the refracting surface. The rays themselves are the trajectories of the Eikonal Equation as rays of light thread their way through complicated optical systems.
The basic geometry is shown in Fig. 7 for a ray incident on a nonplanar surface emerging into a less-dense medium. From Snell’s law we have the relation for light entering a dense medium, like light into water,
where n is the relative index (ratio), and the small-angle approximation has been made. The incident angle θ1 is simply related to the slope of the interface dh/dx as
where the small-angle approximation is used again. The angular deflection relative to the optic axis is then
which is the transverse derivative of the optical path difference through the sample.
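The three relations lost in the transfer can be restated in consistent small-angle form (a reconstruction from the surrounding text, with n the relative index and h(x) the surface height):

```latex
\theta_1 \;\approx\; n\,\theta_2,
\qquad
\theta_1 \;\approx\; \frac{dh}{dx},
\qquad
\Delta\theta \;=\; \theta_1 - \theta_2 \;\approx\; \left(1 - \frac{1}{n}\right)\frac{dh}{dx}
```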
In two dimensions, the optical path difference can be replaced with a general potential
and the two orthogonal angular deflections (measured in the far field on a Fourier plane) are
These angles describe the deflection of the rays across the sample surface. They are also the right-hand side of the Eikonal Equation, the equation governing ray trajectories through optical systems.
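For reference, the eikonal ray equation mentioned here has the standard form (with s the arc length along the ray and n the refractive index):

```latex
\frac{d}{ds}\!\left( n\,\frac{d\mathbf{r}}{ds} \right) \;=\; \nabla n
```

In the paraxial limit its right-hand side reduces to the transverse gradient of the potential, which is what the angular deflections above express.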
Caustics are lines of stationarity, meaning that the density of rays is independent of first-order changes in the refracting sample. The condition of stationarity is defined by the Jacobian of the transformation from (x,y) to (θx, θy) with
where the second expression is the Hessian determinant of the refractive power of the uneven surface. When this condition is satisfied, the envelope function bounding groups of collected rays is stationary to perturbations in the inhomogeneous sample.
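Restating the two lost expressions in one line (a reconstruction in the notation of the surrounding text, consistent with the Hessian determinant computed in the caustic.py program below):

```latex
\theta_x = \frac{\partial \varphi}{\partial x},
\quad
\theta_y = \frac{\partial \varphi}{\partial y},
\qquad
J \;=\; \det\!\frac{\partial(\theta_x,\theta_y)}{\partial(x,y)}
\;=\; \varphi_{xx}\,\varphi_{yy} - \varphi_{xy}^{2} \;=\; 0
```

The ray density varies as 1/J, so the caustics lie along the zeros of the Hessian determinant.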
An example of dia-caustic formation from a random surface is shown in Fig. 8, generated by the Python program caustic.py. The Jacobian density (center) outlines regions in which the ray density is independent of small changes in the surface. They are the positions of the zeros of the Hessian determinant, the regions of zero curvature of the surface or potential function. These high-intensity regions spread out and are intercepted at some distance by a surface, like the bottom of a swimming pool, where the concentrated rays create bright filaments. As the wavelets on the surface of the swimming pool move, the caustic filaments on the bottom of the pool dance about.
Optical caustics also occur in the gravitational lensing of distant quasars by galaxy clusters in the formation of Einstein rings and arcs seen by deep field telescopes, as described in my following blog post.
Python Code: caustic.py
This Python code was used to generate the caustic patterns in Fig. 8. You can change the surface roughness by changing the divisors in the last two arguments passed to speckle2. The distance to the bottom of the swimming pool can be changed through the parameter d.
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 16 19:50:54 2021
D. D. Nolte, Optical Interferometry for Biology and Medicine (Springer, 2011)
"""
import numpy as np
from matplotlib import pyplot as plt
from numpy import random as rnd
from scipy import signal as signal

def gauss2(sy, sx, wy, wx):
    # 2D Gaussian kernel of widths (wy, wx) on an sy-by-sx grid
    x = np.arange(-sx/2, sx/2, 1)
    y = np.arange(-sy/2, sy/2, 1)
    y = y[..., None]
    ex = np.ones(shape=(sy, 1))
    x2 = np.kron(ex, x**2/(2*wx**2))
    ey = np.ones(shape=(1, sx))
    y2 = np.kron(y**2/(2*wy**2), ey)
    rad2 = x2 + y2
    A = np.exp(-rad2)
    return A

def speckle2(sy, sx, wy, wx):
    # smoothed random phase screen: a model of the rippled water surface
    Btemp = 2*np.pi*rnd.rand(sy, sx)
    B = np.exp(complex(0, 1)*Btemp)
    C = gauss2(sy, sx, wy, wx)
    Atemp = signal.fftconvolve(B, C, 'same')   # FFT convolution for speed
    Intens = np.mean(np.mean(np.abs(Atemp)**2))
    D = np.real(Atemp/np.sqrt(Intens))
    Dphs = np.arctan2(np.imag(D), np.real(D))
    return D, Dphs

N = 256
Sp, Sphs = speckle2(N, N, N/16, N/16)   # change the divisors to change roughness
plt.matshow(Sp, 2, cmap='seismic')   # hsv, seismic, bwr

# gradients and Hessian of the surface
fx, fy = np.gradient(Sp)
fxx, fxy = np.gradient(fx)
fyx, fyy = np.gradient(fy)

# Jacobian (Hessian determinant); caustics occur near its zeros
eps = 1e-7
J = fxx*fyy - fxy*fyx
D = np.abs(1/(J + eps))
plt.matshow(D, 3, cmap='gray')

# propagate rays a distance d to the observation plane and bin the hits
d = N/2   # distance to the bottom of the swimming pool
E = np.zeros(shape=(N, N))
for yloop in range(0, N-1):
    for xloop in range(0, N-1):
        indx = int(N/2 + (d*(fx[yloop, xloop]) + (xloop - N/2)/2))
        indy = int(N/2 + (d*(fy[yloop, xloop]) + (yloop - N/2)/2))
        if ((indx > 0) and (indx < N)) and ((indy > 0) and (indy < N)):
            E[indy, indx] = E[indy, indx] + 1

plt.matshow(E, 4, cmap='gray')   # the caustic pattern at the pool bottom
plt.show()
The quantum of light—the photon—is a little over 100 years old. It was born in 1905 when Einstein merged Planck’s blackbody quantum hypothesis with statistical mechanics and concluded that light itself must be quantized. No one believed him! Fast forward to today, and the photon is a workhorse of modern quantum technology. Quantum encryption and communication are performed almost exclusively with photons, and many prototype quantum computers are optics based. Quantum optics also underpins atomic and molecular optics (AMO), which is one of the hottest and most rapidly advancing frontiers of physics today.
Only after the availability of “quantum” light sources … could photon numbers be manipulated at will, launching the modern era of quantum optics.
This blog tells the story of the early days of the photon and of quantum optics. It begins with Einstein in 1905 and ends with the demonstration of photon anti-bunching in 1977, the first fundamentally quantum optical phenomenon ever observed, more than seventy years later. Across that stretch of time, the photon went from a nascent idea in Einstein’s fertile brain to the most thoroughly investigated quantum particle in the realm of physics.
The Photon: Albert Einstein (1905)
When Planck presented his quantum hypothesis in 1900 to the German Physical Society, his model of black body radiation retained all its classical properties but one—the quantized interaction of light with matter. He did not think yet in terms of quanta, only in terms of steps in a continuous interaction.
The quantum break came from Einstein when he published his 1905 paper proposing the existence of the photon—an actual quantum of light that carried with it energy and momentum. His reasoning was simple and iron-clad, resting on Planck’s own blackbody relation combined with statistical mechanics. He was led inexorably to the existence of the photon. Unfortunately, almost no one believed him (see my blog on Einstein and Planck).
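For reference, the energy and momentum carried by the photon are the standard Planck–Einstein relations:

```latex
E \;=\; h\nu \;=\; \hbar\omega,
\qquad
p \;=\; \frac{h}{\lambda} \;=\; \hbar k
```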
This was before wave-particle duality entered quantum thinking, so the notion that light—so clearly a wave phenomenon—could be a particle was unthinkable. It had taken half of the 19th century to rid physics of Newton’s corpuscles and emissionist theories of light, so to bring them back at the beginning of the 20th century seemed like a great blunder. However, Einstein persisted.
In 1909 he published a paper on the fluctuation properties of light in which he proposed that the fluctuations observed in light intensity had two contributions: one from the discreteness of the photons (what we call “shot noise” today) and one from the fluctuations in the wave properties. Einstein was proposing that both particle and wave properties contributed to intensity fluctuations, exhibiting simultaneous particle-like and wave-like properties. This was one of the first expressions of wave-particle duality in modern physics.
In 1916 and 1917 Einstein took another bold step and proposed the existence of stimulated emission. Once again, his arguments were based on simple physics—this time the principle of detailed balance—and he was led to the audacious conclusion that one photon can stimulate the emission of another. This would become the basis of the laser forty-five years later.
While Einstein was confident in the reality of the photon, others sincerely doubted its existence. Robert Millikan (1868 – 1953) decided to put Einstein’s theory of photoelectron emission to the most stringent test ever performed. In 1915 he painstakingly acquired the definitive dataset with the goal of refuting Einstein’s hypothesis, only to confirm it in spectacular fashion. Partly based on Millikan’s confirmation of Einstein’s theory of the photon, Einstein was awarded the Nobel Prize in Physics in 1921.
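The relation Millikan tested was Einstein’s photoelectric equation, which predicts a linear dependence of the stopping energy on the light frequency (standard form, with W the work function of the metal):

```latex
eV_{\mathrm{stop}} \;=\; h\nu - W
```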
From that point onward, the physical existence of the photon was accepted and was incorporated routinely into other physical theories. Compton used the energy and the momentum of the photon in 1922 to predict and measure Compton scattering of x-rays off of electrons. The photon was given its modern name by Gilbert Lewis in 1926.
Single-Photon Interference: Geoffrey Taylor (1909)
If a light beam is made up of a group of individual light quanta, then in the limit of very dim light, there should be just one photon passing through an optical system at a time. Therefore, to do optical experiments on single photons, one just needs to reach the ultimate dim limit. As simple and clear as this argument sounds, it has problems that were only sorted out after the Hanbury Brown and Twiss experiments in the 1950s and the controversy they launched (see below). However, in 1909, this thinking seemed like a clear approach to looking for deviations in optical processes in the single-photon limit.
In 1909, Geoffrey Ingram Taylor (1886 – 1975) was an undergraduate student at Cambridge University, where he performed a low-intensity Young’s double-slit experiment (encouraged by J. J. Thomson). At that time the idea of Einstein’s photon was only four years old, and Bohr’s theory of the hydrogen atom was still a year away. But Thomson believed that if photons were real, then their existence could possibly show up as deviations in experiments involving single photons. Young’s double-slit experiment is the classic demonstration of the classical wave nature of light, so performing it under conditions when (on average) only a single photon was in transit between a light source and a photographic plate seemed like the best place to look.
The experiment was performed by finding an optimum exposure of photographic plates in a double-slit experiment, then reducing the flux while increasing the exposure time, until the single-photon limit was achieved while retaining the same net exposure of the photographic plate. Under the lowest intensity, when only a single photon was in transit at a time (on average), Taylor performed the exposure for three months. To his disappointment, when he developed the film, there was no significant difference between high intensity and low intensity interference fringes. If photons existed, then their quantized nature was not showing up in the low-intensity interference experiment.
The reason that there is no single-photon-limit deviation in the behavior of the Young double-slit experiment is because Young’s experiment only measures first-order coherence properties. The average over many single-photon detection events is described equally well either by classical waves or by quantum mechanics. Quantized effects in the Young experiment could only appear in fluctuations in the arrivals of photons, but in Taylor’s day there was no way to detect the arrival of single photons.
Quantum Theory of Radiation: Paul Dirac (1927)
After Paul Dirac (1902 – 1984) was awarded his doctorate from Cambridge in 1926, he received a stipend that sent him to work with Niels Bohr (1885 – 1962) in Copenhagen. His attention focused on the electromagnetic field and how it interacted with the quantized states of atoms. Although the electromagnetic field was the classical field of light, it was also the quantum field of Einstein’s photon, and he wondered how the quantized harmonic oscillators of the electromagnetic field could be generated by quantum wavefunctions acting as operators. He decided that, to generate a photon, the wavefunction must operate on a state that had no photons—the ground state of the electromagnetic field known as the vacuum state.
Dirac put these thoughts into their appropriate mathematical form and began work on two manuscripts. The first manuscript contained the theoretical details of the non-commuting electromagnetic field operators. He called the process of generating photons out of the vacuum “second quantization”. In second quantization, the classical field of electromagnetism is converted to an operator that generates quanta of the associated quantum field out of the vacuum (and also annihilates photons back into the vacuum). The creation operators can be applied again and again to build up an N-photon state containing N photons that obey Bose-Einstein statistics, as required by their integer spin and in agreement with Planck’s blackbody radiation.
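In modern notation (the standard second-quantization relations, rather than Dirac’s original typography), the creation operator builds the N-photon state out of the vacuum:

```latex
\hat a^{\dagger}\,|n\rangle \;=\; \sqrt{n+1}\;|n+1\rangle,
\qquad
|N\rangle \;=\; \frac{(\hat a^{\dagger})^{N}}{\sqrt{N!}}\,|0\rangle
```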
Dirac then showed how an interaction of the quantized electromagnetic field with quantized energy levels involved the annihilation and creation of photons as they promoted electrons to higher atomic energy levels, or demoted them through stimulated emission. Very significantly, Dirac’s new theory explained the spontaneous emission of light from an excited electron level as a direct physical process that creates a photon carrying away the energy as the electron falls to a lower energy level. Spontaneous emission had been explained first by Einstein more than ten years earlier when he derived the famous A and B coefficients, but the physical mechanism for these processes was inferred rather than derived. Dirac, in late 1926, had produced the first direct theory of photon exchange with matter.
Einstein-Podolsky-Rosen (EPR) and Bohr (1935)
The famous dialog between Einstein and Bohr at the Solvay Conferences culminated in the “EPR” paradox of 1935, when Einstein published (together with B. Podolsky and N. Rosen) a paper that contained a particularly simple and cunning thought experiment. In this paper, not only was quantum mechanics under attack, but so was the concept of reality itself, as reflected in the paper’s title “Can Quantum Mechanical Description of Physical Reality Be Considered Complete?”.
Einstein considered an experiment on two quantum particles that had become “entangled” (meaning they interacted) at some time in the past, and then had flown off in opposite directions. By the time their properties are measured, the two particles are widely separated. Two observers each make measurements of certain properties of the particles. For instance, the first observer could choose to measure either the position or the momentum of one particle. The other observer likewise can choose to make either measurement on the second particle. Each measurement is made with perfect accuracy. The two observers then travel back to meet and compare their measurements. When the two experimentalists compare their data, they find perfect agreement in their values every time that they had chosen (unbeknownst to each other) to make the same measurement. This agreement occurred either when they both chose to measure position or both chose to measure momentum.
It would seem that the state of the particle prior to the second measurement was completely defined by the results of the first measurement. In other words, the state of the second particle is set into a definite state (using quantum-mechanical jargon, the state is said to “collapse”) the instant that the first measurement is made. This implies that there is instantaneous action at a distance, violating everything that Einstein believed about reality (and violating the law that nothing can travel faster than the speed of light). He therefore had no choice but to consider this conclusion of instantaneous action to be false. Therefore quantum mechanics could not be a complete theory of physical reality; some deeper theory, yet undiscovered, was needed to resolve the paradox.
Bohr, on the other hand, did not hold “reality” so sacred. In his rebuttal to the EPR paper, which he published six months later under the identical title , he rejected Einstein’s criterion for reality. He had no problem with the two observers making the same measurements and finding identical answers. Although one measurement may affect the conditions of the second despite their great distance, no information could be transmitted by this dual measurement process, and hence there was no violation of causality. Bohr’s mind-boggling viewpoint was that reality was nonlocal, meaning that in the quantum world the measurement at one location does influence what is measured somewhere else, even at great distance. Einstein, on the other hand, could not accept a nonlocal reality.
The Intensity Interferometer: Hanbury Brown and Twiss (1956)
Optical physics was surprisingly dormant from the 1930’s through the 1940’s. Most of the research during this time was either on physical optics, like lenses and imaging systems, or on spectroscopy, which was more interested in the physical properties of the materials than in light itself. This hiatus from the photon was about to change dramatically, driven not by physicists but by astronomers.
The development of radar technology during World War II enabled the new field of radio astronomy both with high-tech receivers and with a large cohort of scientists and engineers trained in radio technology. In the late 1940’s and early 1950’s radio astronomy was starting to work with long baselines to better resolve radio sources in the sky using interferometry. The first attempts used coherent references between two separated receivers to provide a common mixing signal to perform field-based detection. However, the stability of the reference was limiting, especially for longer baselines.
In 1950, a doctoral student in the radio astronomy department of the University of Manchester, R. Hanbury Brown, was given the task to design baselines that could work at longer distances to resolve smaller radio sources. After struggling with the technical difficulties of providing a coherent “local” oscillator for distant receivers, Hanbury Brown had a sudden epiphany one evening. Instead of trying to reference the field of one receiver to the field of another, what if, instead, one were to reference the intensity of one receiver to the intensity of the other, specifically correlating the noise on the intensity? To measure intensity requires no local oscillator or reference field. The size of an astronomical source would then show up in how well the intensity fluctuations correlated with each other as the distance between the receivers was changed. He did a back-of-the-envelope calculation that gave him hope that his idea might work, but he needed more rigorous proof if he was to ask for money to try out his idea. He tracked down Richard Twiss at a defense research lab, and the two worked out the theory of intensity correlations for long-baseline radio interferometry. Using facilities at the famous Jodrell Bank Radio Observatory at Manchester, they demonstrated the principle of their intensity interferometer and measured the angular size of Cygnus A and Cassiopeia A, two of the strongest radio sources in the Northern sky.
One of the surprising side benefits of the intensity interferometer over field-based interferometry was insensitivity to environmental phase fluctuations. For radio astronomy the biggest source of phase fluctuations was the ionosphere, and the new intensity interferometer was immune to its fluctuations. Phase fluctuations had also been the limiting factor for the Michelson stellar interferometer, restricting its use to only about half a dozen stars, so Hanbury Brown and Twiss decided to revisit visible stellar interferometry using their new concept of intensity interferometry.
To illustrate the principle for visible wavelengths, Hanbury Brown and Twiss performed a laboratory experiment to correlate intensity fluctuations in two receivers illuminated by a common source through a beam splitter. The intensity correlations were detected and measured as a function of path length change, illustrating an excess correlation in noise for short path lengths that decayed as the path length increased. They published their results in Nature in 1956, and the paper immediately ignited a firestorm of protest from physicists.
In the 1950’s, many physicists had embraced the discrete properties of the photon and had developed a misleading mental picture of photons as individual and indivisible particles that could only go one way or another from a beam splitter, but not both. Therefore, the argument went, if the photon in an attenuated beam was detected in one detector at the output of a beam splitter, then it cannot be detected at the other. This would produce an anticorrelation in coincidence counts at the two detectors. However, the Hanbury Brown Twiss (HBT) data showed a correlation from the two detectors. This launched an intense controversy in which some of those who accepted the results called for a radical new theory of the photon, while most others dismissed the HBT results as due to systematics in the light source. The heart of this controversy was quickly understood by the Nobel laureate E. M. Purcell. He correctly pointed out that photons are bosons and are indistinguishable discrete particles and hence are likely to “bunch” together, according to quantum statistics, even under low light conditions. Therefore, attenuated “chaotic” light would indeed show photodetector correlations: even if the average photon number was less than a single photon at a time, the photons would still bunch.
The bunching of photons in light is a second-order effect that moves beyond the first-order interference effects of Young’s double slit, but even here the quantum nature of light is not required. A semiclassical theory of light emission from a spectral line with a natural bandwidth also predicts intensity correlations, and the correlations are precisely what would be observed for photon bunching. Therefore, even the second-order HBT results, when performed with natural light sources, do not distinguish between classical and quantum effects in the experimental results. But this reliance on natural light sources was about to change fundamentally with the invention of the laser.
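As a minimal numerical sketch (my illustration, not part of the original account), the semiclassical picture can be checked by modeling chaotic light as a complex Gaussian field, whose intensity then fluctuates exponentially, and comparing its normalized zero-delay intensity correlation with that of an ideal constant-intensity laser beam:

```python
import random

random.seed(1)
N = 200_000

# Chaotic (thermal) light: the field is a complex Gaussian variable,
# so the intensity I = |E|^2 is exponentially distributed.
chaotic = [random.gauss(0, 1)**2 + random.gauss(0, 1)**2 for _ in range(N)]

# Ideal laser light far above threshold: constant intensity.
coherent = [1.0] * N

def g2_zero(I):
    """Normalized zero-delay intensity correlation <I^2>/<I>^2."""
    mean = sum(I) / len(I)
    mean_sq = sum(x * x for x in I) / len(I)
    return mean_sq / mean**2

print(g2_zero(chaotic))   # ~2: the HBT excess correlation (photon bunching)
print(g2_zero(coherent))  # 1: no excess correlation
```

The factor of two for chaotic light is exactly the excess correlation Hanbury Brown and Twiss measured, and it emerges here from purely classical intensity statistics.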
Invention of the Laser: Ted Maiman (1960)
One of the great scientific breakthroughs of the 20th century was the nearly simultaneous yet independent realization by several researchers around 1951 (by Charles H. Townes of Columbia University, by Joseph Weber of the University of Maryland, and by Alexander M. Prokhorov and Nikolai G. Basov at the Lebedev Institute in Moscow) that clever techniques and novel apparatus could be used to produce collections of atoms that had more electrons in excited states than in ground states. Such a situation is called a population inversion. If this situation could be attained, then according to Einstein’s 1917 theory of photon emission, a single photon would stimulate the emission of a second identical photon, these two would in turn stimulate two more electrons to emit, giving a total of four photons, and so on. Clearly this process turns a single photon into a host of photons, all with identical energy and phase.
Charles Townes and his research group were the first to succeed in 1953 in producing a device based on ammonia molecules that could work as an intense source of coherent photons. The initial device did not amplify visible light, but amplified microwave photons that had wavelengths of about 3 centimeters. They called the process microwave amplification by stimulated emission of radiation, hence the acronym “MASER”. Despite the significant breakthrough that this invention represented, the devices were very expensive and difficult to operate. The maser did not revolutionize technology, and some even quipped that the acronym stood for “Means of Acquiring Support for Expensive Research”. The maser did, however, launch a new field of study, called quantum electronics, that was the direct descendant of Einstein’s 1917 paper. Most importantly, the existence and development of the maser became the starting point for a device that could do the same thing for light.
The race to develop an optical maser (later to be called laser, for light amplification by stimulated emission of radiation) was intense. Many groups actively pursued this holy grail of quantum electronics. Most believed that it was possible, which made its invention merely a matter of time and effort. This race was won by Theodore H. Maiman at Hughes Research Laboratory in Malibu, California in 1960. He used a ruby crystal that was excited into a population inversion by an intense flash tube (like a flash bulb) that had originally been invented for flash photography. His approach was amazingly simple: blast the ruby with a high-intensity pulse of light and see what comes out, which explains why he was the first. Most other groups had been pursuing much more difficult routes because they believed that laser action would be difficult to achieve.
Perhaps the most important aspect of Maiman’s discovery was that it demonstrated that laser action was actually much simpler than people anticipated, and that laser action is a fairly common phenomenon. His discovery was quickly repeated by other groups, and then additional laser media were discovered such as helium-neon gas mixtures, argon gas, carbon dioxide gas, garnet lasers and others. Within several years, over a dozen different material and gas systems were made to lase, opening up wide new areas of research and development that continue unabated to this day. It also called for new theories of optical coherence to explain how coherent laser light interacted with matter.
Coherent States: Glauber (1963)
The HBT experiment had been performed with attenuated chaotic light that had residual coherence caused by the finite linewidth of the filtered light source. The theory of intensity correlations for this type of light was developed in the 1950’s by Emil Wolf and Leonard Mandel using a semiclassical theory in which the statistical properties of the light were based on classical electromagnetics without a direct need for quantized photons. The HBT results were fully consistent with this semiclassical theory. However, after the invention of the laser, new “coherent” light sources became available that required a fundamentally quantum depiction.
Roy Glauber was a theoretical physicist who received his PhD working with Julian Schwinger at Harvard. He spent several years as a post-doc at Princeton’s Institute for Advanced Study starting in 1949 at the time when quantum field theory was being developed by Schwinger, Feynman and Dyson. While Feynman was off in Brazil for a year learning to play the bongo drums, Glauber filled in for his lectures at Caltech. He returned to Harvard in 1952 as an assistant professor. He was already thinking about the quantum aspects of photons in 1956 when news of the photon correlations in the HBT experiment was published, and when the laser was invented a few years later, he began developing a theory of photon correlations in laser light that he suspected would be fundamentally different from those in natural chaotic light.
Because of his background in quantum field theory, and especially quantum electrodynamics, it was a fairly easy task for him to couch the quantum optical properties of coherent light in terms of Dirac’s creation and annihilation operators of the electromagnetic field. Related to the minimum-uncertainty wave functions derived initially by Schrödinger in the late 1920’s, Glauber developed a “coherent state” operator that generates a minimum-uncertainty state of the quantized electromagnetic field. This coherent state represents a laser operating well above the lasing threshold and predicts that the excess HBT correlations would vanish. Glauber was awarded the Nobel Prize in Physics in 2005 for his work on such “Glauber” states in quantum optics.
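In standard modern notation (an assumption of this sketch, not a quote from Glauber’s paper), the coherent state and its key properties can be written:

```latex
\left|\alpha\right\rangle = e^{-|\alpha|^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\left|n\right\rangle,
\qquad
\hat{a}\left|\alpha\right\rangle = \alpha\left|\alpha\right\rangle,
\qquad
g^{(2)}(0)=\frac{\langle \hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\hat{a}\rangle}{\langle \hat{a}^{\dagger}\hat{a}\rangle^{2}}=1 .
```

Because the coherent state is an eigenstate of the annihilation operator, detecting a photon leaves the state unchanged, and the normalized intensity correlation is exactly one: no bunching, in contrast to the value of two for chaotic light.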
Single-Photon Optics: Kimble and Mandel (1977)
Beyond introducing coherent states, Glauber’s new theoretical approach, and parallel work by George Sudarshan around the same time, provided a new formalism for exploring quantum optical properties in which fundamentally quantum processes could be explored that could not be predicted using only semiclassical theory. For instance, one could envision producing photon states in which the photon arrivals at a detector could display the kind of anti-bunching that had originally been assumed (in error) by the critics of the HBT experiment. A truly one-photon state, also known as a Fock state or a number state, would be the extreme limit in which the quantum field possessed a single quantum that could be directed at a beam splitter and would emerge either from one side or the other with complete anti-correlation. However, generating such a state in the laboratory remained a challenge.
In 1975, Carmichael and Walls predicted that resonance fluorescence could produce quantized fields that had lower correlations than coherent states. In 1977, H. J. Kimble, M. Dagenais and L. Mandel demonstrated, for the first time, photon antibunching between two photodetectors at the two ports of a beam splitter. They used a beam of sodium atoms pumped by a dye laser.
This first demonstration of photon antibunching represents a major milestone in the history of quantum optics. Taylor’s first-order experiments in 1909 showed no difference between classical electromagnetic waves and a flux of photons. Similarly the second-order HBT experiment of 1956 using chaotic light could be explained equally well using classical or quantum approaches to explain the observed photon correlations. Even laser light (when the laser is operated far above threshold) produced classic “classical” wave effects with only the shot noise demonstrating the discreteness of photon arrivals. Only after the availability of “quantum” light sources, beginning with the work of Kimble and Mandel, could photon numbers be manipulated at will, launching the modern era of quantum optics. Later experiments by them and others have continually improved the control of photon states.
1900 – Planck (1901). “Law of energy distribution in normal spectra.” Annalen Der Physik 4(3): 553-563.
1905 – A. Einstein (1905). “Generation and conversion of light with regard to a heuristic point of view.” Annalen Der Physik 17(6): 132-148.
1909 – A. Einstein (1909). “On the current state of radiation problems.” Physikalische Zeitschrift 10: 185-193.
1909 – G. I. Taylor: Proc. Cam. Phil. Soc. Math. Phys. Sci. 15, 114 (1909). Single-photon double-slit experiment.
1915 – Millikan, R. A. (1916). “A direct photoelectric determination of Planck’s ‘h’.” Physical Review 7(3): 355-388. Photoelectric effect.
1916 – Einstein, A. (1916). “Strahlungs-Emission und -Absorption nach der Quantentheorie.” Verh. Deutsch. Phys. Ges. 18: 318. Einstein predicts stimulated emission.
1923 – Compton, Arthur H. (May 1923). “A Quantum Theory of the Scattering of X-Rays by Light Elements.” Physical Review 21(5): 483-502.
1926 – Lewis, G. N. (1926). “The conservation of photons.” Nature 118: 874-875. Gilbert Lewis names the “photon”.
1927 – Dirac, P. A. M. (1927). “The quantum theory of the emission and absorption of radiation.” Proceedings of the Royal Society of London Series A 114(767): 243-265.
1932 – E. P. Wigner: Phys. Rev. 40, 749 (1932)
1935 – A. Einstein, B. Podolsky, N. Rosen: Phys. Rev. 47, 777 (1935). EPR paradox.
1935 – N. Bohr: Phys. Rev. 48, 696 (1935). Bohr’s response to the EPR paradox.
Einstein, A. (1916). “Strahlungs-Emission und -Absorption nach der Quantentheorie.” Verh. Deutsch. Phys. Ges. 18: 318; Einstein, A. (1917). “Quantum theory of radiation.” Physikalische Zeitschrift 18: 121-128.
Brown, R. H. and R. Q. Twiss (1956). “Correlation Between Photons in 2 Coherent Beams of Light.” Nature 177(4497): 27-29; Brown, R. H. and R. Q. Twiss (1956). “Test of a new type of stellar interferometer on Sirius.” Nature 178(4541): 1046-1048.
Glauber, R. J. (1963). “Photon Correlations.” Physical Review Letters 10(3): 84.
Sudarshan, E. C. G. (1963). “Equivalence of semiclassical and quantum mechanical descriptions of statistical light beams.” Physical Review Letters 10(7): 277; Mehta, C. L. and E. C. Sudarshan (1965). “Relation between quantum and semiclassical description of optical coherence.” Physical Review 138(1B): B274.
Quantum sensors have amazing powers. They can detect the presence of an obstacle without ever interacting with it. For instance, consider a bomb that is coated with a light-sensitive layer that sets off the bomb if it absorbs just a single photon. Then put this bomb inside a quantum sensor system and shoot photons at it. Remarkably, using the weirdness of quantum mechanics, it is possible to design the system in such a way that you can detect the presence of the bomb using photons without ever setting it off. How can photons see the bomb without illuminating it? The answer is a bizarre side effect of quantum physics in which the quantum wavefunction, rather than the wavefunction collapse at the moment of measurement, is recognized as the root of reality.
The ability for a quantum system to see an object with light, without exposing it, is uniquely a quantum phenomenon that has no classical analog.
All Paths Lead to Feynman
When Richard Feynman was working on his PhD under John Archibald Wheeler at Princeton in the early 1940’s he came across an obscure paper written by Paul Dirac in 1933 that connected quantum physics with classical Lagrangian physics. Dirac had recognized that the phase of a quantum wavefunction was analogous to the classical quantity called the “Action” that arises from Lagrangian physics. Building on this concept, Feynman constructed a new interpretation of quantum physics, known as the “many histories” interpretation, that occupies the middle ground between Schrödinger’s wave mechanics and Heisenberg’s matrix mechanics. One of the striking consequences of the many histories approach is the emergence of the principle of least action—a classical concept—into interpretations of quantum phenomena. In this approach, Feynman considered ALL possible histories for the propagation of a quantum particle from one point to another, he tabulated the quantum action in the phase factor, and he then summed all of these histories.
One of the simplest consequences of the sum over histories is a quantum interpretation of Snell’s law of refraction in optics. When summing over all possible trajectories of a photon from a point above to a point below an interface, there is a subset of paths for which the action integral varies very little from one path in the subset to another. The consequence of this is that the phases of all these paths add constructively, producing a large amplitude to the quantum wavefunction along the centroid of these trajectories. Conversely, for paths far away from this subset, the action integral takes on many values and the phases tend to interfere destructively, canceling the wavefunction along these other paths. Therefore, the most likely path of the photon between the two points is the path of maximum constructive interference and hence the path of stationary action. It is simple to show that this path is none other than the classical path determined by Snell’s Law and equivalently by Fermat’s principle of least time. With the many histories approach, we can add the principle of least (or stationary) action to the list of explanations of Snell’s Law. This argument holds as well for an electron (with mass and a de Broglie wavelength) as it does for a photon, so this is not just a coincidence specific to optics but is a fundamental part of quantum physics.
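The stationary-action argument is easy to verify numerically. The sketch below (the geometry and refractive indices are my illustrative assumptions) scans the single-crossing photon paths between two fixed points, finds the path of stationary optical path length, and checks that it satisfies Snell’s law:

```python
import math

# Geometry: source at (0, h1) above the interface (y = 0),
# detector at (d, -h2) below; the photon crosses the interface at (x, 0).
n1, n2 = 1.0, 1.5          # refractive indices (illustrative values)
h1, h2, d = 1.0, 1.0, 1.0

def optical_path(x):
    """Optical path length, proportional to the phase of that history."""
    return n1 * math.hypot(x, h1) + n2 * math.hypot(d - x, h2)

# The dominant (stationary-phase) path minimizes the optical path length.
xs = [i * d / 100000 for i in range(100001)]
x_star = min(xs, key=optical_path)

# Check Snell's law at the stationary point: n1 sin(theta1) = n2 sin(theta2)
sin1 = x_star / math.hypot(x_star, h1)
sin2 = (d - x_star) / math.hypot(d - x_star, h2)
print(n1 * sin1, n2 * sin2)  # the two sides agree to high precision
```

The derivative of the optical path with respect to the crossing point is exactly n1 sin θ1 − n2 sin θ2, so the stationary path is precisely the Snell’s-law path.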
A more subtle consequence of the sum over histories view of quantum phenomena is Young’s double slit experiment for electrons, shown at the top of Fig. 1. The experiment consists of a source that emits only a single electron at a time that passes through a double-slit mask to impinge on an electron detection screen. The wavefunction for a single electron extends continuously throughout the full spatial extent of the apparatus, passing through both slits. When the two paths intersect at the screen, the difference in the quantum phases of the two paths causes the combined wavefunction to have regions of total constructive interference and other regions of total destructive interference. The probability of detecting an electron is proportional to the squared amplitude of the wavefunction, producing a pattern of bright stripes separated by darkness. At positions of destructive interference, no electrons are detected when both slits are open. However, if an opaque plate blocks the upper slit, then the interference pattern disappears, and electrons can be detected at those previously dark locations. Therefore, the presence of the object can be deduced by the detection of electrons at locations that should be dark.
Fig. 1 Demonstration of the sum over histories in a double-slit experiment for electrons. In the upper frame, the electron interference pattern on the phosphorescent screen produces bright and dark stripes. No electrons hit the screen in a dark stripe. When the upper slit is blocked (bottom frame), the interference pattern disappears, and an electron can arrive at the location that had previously been dark.
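A short calculation along these lines (an illustrative far-field sketch with made-up dimensions, not the actual apparatus) shows how a dark fringe with both slits open becomes bright when one slit is blocked:

```python
import cmath, math

wavelength = 1.0
k = 2 * math.pi / wavelength
slit_sep = 10.0      # distance between the two slits
L = 1000.0           # distance from the slits to the screen (far field)

def amplitude(y, slits):
    """Sum the phase factors of the histories through the open slits."""
    total = 0
    for s in slits:
        r = math.hypot(L, y - s)
        total += cmath.exp(1j * k * r) / r
    return total

both = [-slit_sep / 2, slit_sep / 2]
# First dark fringe: path difference = lambda/2  ->  y ~ lambda * L / (2 d)
y_dark = wavelength * L / (2 * slit_sep)

I_both = abs(amplitude(y_dark, both))**2   # both slits open
I_one  = abs(amplitude(y_dark, [both[1]]))**2  # upper slit blocked
print(I_both, I_one)  # I_both is vanishingly small; I_one is clearly nonzero
```

An electron arriving at y_dark is therefore essentially impossible with both slits open, but becomes possible the moment the plate removes one of the two interfering histories.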
Consider now when the opaque plate is an electron-sensitive detector. In this case, a single electron emitted by the source can be detected at the screen or at the plate. If it is detected at the screen, it can appear at the location of a dark fringe, heralding the presence of the opaque plate. Yet the quantum conundrum is that when the electron arrives at a dark fringe, it must be detected there as a whole; it cannot be detected at the electron-sensitive plate too. So how does the electron sense the presence of the detector without exposing it, without setting it off?
In Feynman’s view, the electron does set off the detector as one possible history. And that history interferes with the other possible history in which the electron arrives at the screen. While that interpretation may seem weird, mathematically it is a simple statement that the plate blocks the wavefunction from passing through the upper slit, so the wavefunction in front of the screen, resulting from all possible paths, has no interference fringes (other than possible diffraction from the lower slit). From this point of view, the wavefunction samples all of space, including the opaque plate, and the eventual absorption of the electron one place or another has no effect on the wavefunction. In this sense, it is the wavefunction, prior to any detection event, that samples reality. If the single electron happens to show up at a dark fringe at the screen, the plate, through its effects on the total wavefunction, has been detected without interacting with the electron.
This phenomenon is known as an interaction-free measurement, but there are definitely some semantics issues here. Just because the plate doesn’t absorb an electron, it doesn’t mean that the plate plays no role. The plate certainly blocks the wavefunction from passing through the upper slit. This might be called an “interaction”, but that phrase is better reserved for when the electron is actually absorbed, while the role of the plate in shaping the wavefunction is better described as one of the possible histories.
Quantum Seeing in the Dark
Although Feynman was thinking hard (and clearly) about these issues as he presented his famous lectures in physics at Caltech from 1961 to 1963, the specific possibility of interaction-free measurement dates to 1993, when Avshalom C. Elitzur and Lev Vaidman at Tel Aviv University suggested a simple Michelson interferometer configuration that could detect an object half of the time without interacting with it. They were the first to press this point home by thinking of a light-sensitive bomb. There is no mistaking when a bomb goes off, so it tends to give an exaggerated demonstration of the interaction-free measurement.
The Michelson interferometer for interaction-free measurement is shown in Fig. 2. This configuration uses a half-silvered beamsplitter to split the possible photon paths. When photons hit the beamsplitter, they either continue traveling to the right, or are deflected upwards. After reflecting off the mirrors, the photons again encounter the beamsplitter, where, in each case, they continue undeflected or are reflected. The result is that two paths combine at the beamsplitter to travel to the detector, while two other paths combine to travel back along the direction of the incident beam.
Fig. 2 A quantum-seeing in the dark (QSD) detector with a photo-sensitive bomb. A single photon is sent into the interferometer at a time. If the bomb is NOT present, destructive interference at the detector guarantees that the photon is not detected. However, if the bomb IS present, it destroys the destructive interference and the photon can arrive at the detector. That photon heralds the presence of the bomb without setting it off. (Reprinted from Mind @ Light Speed)
The paths of the light beams can be adjusted so that the beams that combine to travel to the detector experience perfect destructive interference. In this situation, the detector never detects light, and all the light returns back along the direction of the incident beam. Quantum mechanically, when only a single photon is present in the interferometer at a time, we would say that the quantum wavefunction of the photon interferes destructively along the path to the detector, and constructively along the path opposite to the incident beam, and the detector would detect no photons. It is clear that the unobstructed path of both beams results in the detector making no detections.
Now place the light sensitive bomb in the upper path. Because this path is no longer available to the photon wavefunction, the destructive interference of the wavefunction along the detector path is removed. Now when a single photon is sent into the interferometer, three possible things can happen. One, the photon is reflected by the beamsplitter and detonates the bomb. Two, the photon is transmitted by the beamsplitter, reflects off the right mirror, and is transmitted again by the beamsplitter to travel back down the incident path without being detected by the detector. Three, the photon is transmitted by the beamsplitter, reflects off the right mirror, and is reflected off the beamsplitter to be detected by the detector.
In this third case, the photon is detected AND the bomb does NOT go off, which succeeds at quantum seeing in the dark. The odds are much better than for Young’s experiment. If the bomb is present, it will detonate a maximum of 50% of the time. The other 50%, you will either detect a photon (signifying the presence of the bomb), or else you will not detect a photon (giving an ambiguous answer and requiring you to perform the experiment again). When you perform the experiment again, you again have a 50% chance of detonating the bomb, and a 25% chance of detecting it without it detonating, but again a 25% chance of not detecting it, and so forth. All in all, every time you send in a photon, you have one chance in four of seeing the bomb without detonating it. These are much better odds than for Young’s apparatus, where only exact detection of the photon at a forbidden location would signify the presence of the bomb.
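The arithmetic of repeated trials can be summarized in a few lines (a sketch of the reasoning above, assuming an ideal 50/50 beamsplitter and retrying after every ambiguous return):

```python
# Per-photon outcomes for a 50/50 beamsplitter with the bomb present:
p_boom   = 0.5    # photon takes the bomb arm and detonates it
p_detect = 0.25   # photon reaches the dark-port detector: bomb seen safely
p_retry  = 0.25   # photon returns along the input: ambiguous, try again

# Retrying on every ambiguous outcome gives a geometric series,
# so the eventual probabilities are the per-trial odds rescaled:
P_detect_total = p_detect / (1 - p_retry)   # = 1/3
P_boom_total   = p_boom / (1 - p_retry)     # = 2/3
print(P_detect_total, P_boom_total)
```

So while each photon gives a one-in-four chance of a safe detection, repeating until the ambiguity is resolved yields a safe detection one time in three overall.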
It is possible to increase your odds above one chance in four by decreasing the reflectivity of the beamsplitter. In practice, this is easy to do simply by depositing less and less aluminum on the surface of the glass plate. When the reflectivity gets very low, let us say at the level of 1%, then most of the time the photon just travels back along the direction it came and you have an ambiguous result. On the other hand, when the photon does not return, there is an equal probability of detonation and detection. This means that, though you may send in many photons, your odds for eventually seeing the bomb without detonating it are nearly 50%, twice the odds for the half-silvered beamsplitter. A version of this experiment was performed by Paul Kwiat in 1995 as a postdoc at Innsbruck with Anton Zeilinger. It was Kwiat who coined the phrase “quantum seeing in the dark” as a catchier version of “interaction-free measurement”.
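Generalizing to a beamsplitter of reflectivity R (again an idealized sketch, with the photon entering the bomb arm on reflection), the eventual success probability works out to (1 − R)/(2 − R), which recovers one-in-three for the half-silvered case and approaches one-half as R shrinks:

```python
def eventual_detection(R):
    """Probability of eventually seeing the bomb without detonating it,
    for beamsplitter reflectivity R, retrying on every ambiguous return."""
    p_detect = (1 - R) * R        # transmit, reflect off mirror, then reflect to detector
    p_retry  = (1 - R) ** 2       # transmit both times: photon returns, try again
    return p_detect / (1 - p_retry)   # simplifies to (1 - R)/(2 - R)

for R in (0.5, 0.1, 0.01, 0.001):
    print(R, eventual_detection(R))
# R = 0.5 gives 1/3; as R -> 0 the odds approach 1/2
```

At low reflectivity each photon is almost always ambiguous, but conditioned on a decisive outcome, detonation and safe detection are equally likely, which is where the 50% limit comes from.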
A 50% chance of detecting the bomb without setting it off sounds amazing, until you realize that there is a 50% chance that it will go off and kill you. Then those odds don’t look so good. But optical phenomena never fail to surprise, and they never let you down. A crucial set of missing elements in the simple Michelson experiment was polarization control using polarizing beamsplitters and polarization rotators. These are common elements in many optical systems, and when they are added to the Michelson quantum sensor, they can give almost a 100% chance of detecting the bomb without setting it off, using the quantum Zeno effect.
The Quantum Zeno Effect
Photons carry polarization as their prime quantum number, with two possible orientations. These can be defined in different ways, but the two possible polarizations are orthogonal to each other. For instance, these polarization pairs can be vertical (V) and horizontal (H), or they can be right circular and left circular. One of the principles of quantum state evolution is that a quantum wavefunction can be maintained in a specific state, even if it has a natural tendency to drift out of that state, by repeatedly making a quantum measurement that seeks to detect deviations from that state. In practice, the polarization of a photon can be maintained by repeatedly passing it through a polarizing beamsplitter with the polarization direction parallel to the original polarization of the photon. If the photon polarization deviates by a small angle, then a detector on the side port of the polarizing beamsplitter will fire with a probability equal to the square of the sine of the deviation. If the deviation angle is very small, say Δθ, then the probability of measuring the deviation is proportional to (Δθ)², which is an even smaller number. Furthermore, the probability that the photon will transmit through the polarizing beamsplitter is equal to 1 − (Δθ)², which is nearly 100%.
This is what happens in Fig. 3 when the photo-sensitive bomb IS present. A single H-polarized photon is injected through a switchable mirror into the interferometer on the right. In the path of the photon is a polarization rotator that rotates the polarization by a small angle Δθ. There is nearly a 100% chance that the photon will transmit through the polarizing beamsplitter with perfect H-polarization, reflect from the mirror, and return through the polarizing beamsplitter, again with perfect H-polarization, to pass through the polarization rotator to the switchable mirror, where it reflects, gains another small increment to its polarization angle, and transmits through the beamsplitter once more, and so on. At each pass, the photon polarization is repeatedly “measured” to be horizontal. After a number of passes N = π/(2Δθ), the photon is switched out of the interferometer and is transmitted through the external polarizing beamsplitter, where it is detected at the H-photon detector.
Now consider what happens when the bomb IS NOT present. This time, even though there is a high amplitude for the transmitted photon, there is a small amplitude, of order Δθ, for reflection out the V port. This small V-amplitude, when it reflects from the mirror, recombines with the H-amplitude at the polarizing beamsplitter to reconstruct the same tilted polarization that it started with, sending the photon back in the direction from which it came. (In this situation, the detector on the “dark” port of the internal beamsplitter never sees the photon because of destructive interference along this path.) The photon polarization is then rotated once more by the polarization rotator, and the cycle repeats. Now, after a number of passes N = π/(2Δθ), the photon has acquired a V polarization and is switched out of the interferometer. At the external polarizing beamsplitter it is reflected out of the V-port, where it is detected at the V-photon detector.
Fig. 3 Quantum Zeno effect for interaction-free measurement. If the bomb is present, the H-photon detector detects the output photon without setting it off. The switchable mirror ejects the photon after it makes π/(2Δθ) round trips in the polarizing interferometer.
The two end results of this thought experiment are absolutely distinct, giving a clear answer to the question whether the bomb is present or not. If the bomb IS present, the H-detector fires. If the bomb IS NOT present, then the V-detector fires. Through all of this, the chance to set off the bomb is almost zero. Therefore, this quantum Zeno interaction-free measurement detects the bomb with nearly 100% efficiency with almost no chance of setting it off. This is the amazing consequence of quantum physics. The wavefunction is affected by the presence of the bomb, altering the interference effects that allow the polarization to rotate. But the likelihood of a photon being detected by the bomb is very low.
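These probabilities can be verified with a few lines of arithmetic. The sketch below is my own bookkeeping, not the published experiment: when the bomb is present, each of the N = π/(2Δθ) passes independently projects the photon back onto H with probability cos²Δθ; when the bomb is absent, the small rotations add coherently until the photon exits the V port.

```python
import math

def zeno_probabilities(dtheta):
    """Outcome probabilities for the quantum Zeno bomb test (a sketch).

    Bomb present: each pass projects the photon onto H with probability
    cos^2(dtheta); otherwise the bomb absorbs it.
    Bomb absent: the rotations add coherently to ~ pi/2 (V polarization).
    """
    N = round(math.pi / (2.0 * dtheta))        # number of round trips
    p_exit_H = math.cos(dtheta) ** (2 * N)     # photon survives all passes
    p_boom = 1.0 - p_exit_H                    # absorbed on some pass
    total_rotation = N * dtheta                # ~ pi/2: exits the V port
    return p_exit_H, p_boom, total_rotation

for dtheta in (0.1, 0.01, 0.001):
    p_exit_H, p_boom, rot = zeno_probabilities(dtheta)
    print(dtheta, p_exit_H, p_boom)
# As dtheta shrinks, p_boom -> 0 while the bomb is still unambiguously
# flagged by the H-detector firing: nearly 100% interaction-free detection.
```

The detonation probability scales as (π/2)Δθ, so it can be made as small as desired by taking more, gentler passes.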
On a side note: Although ultrafast switchable mirrors do exist, the experiment was much easier to perform by creating a helix in the optical path through the system so that there is only a finite number of bounces of the photon inside the cavity. See Ref.  for details.
In conclusion, the ability for a quantum system to see an object with light, without exposing it, is uniquely a quantum phenomenon that has no classical analog. No E&M wave description can explain this effect.
I first wrote about quantum seeing in the dark in my 2001 book on the future of optical physics and technology: Nolte, D. D. (2001). Mind at Light Speed: A new kind of intelligence. (New York, Free Press)
More on the story of Feynman and Wheeler and what they were trying to accomplish is told in Chapter 8 of Galileo Unbound on the physics and history of dynamics: Nolte, D. D. (2018). Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press).
Paul Kwiat introduced the world to interaction-free measurements in 1995 in this illuminating Scientific American article: Kwiat, P., H. Weinfurter and A. Zeilinger (1996). “Quantum seeing in the dark – Quantum optics demonstrates the existence of interaction-free measurements: the detection of objects without light-or anything else-ever hitting them.” Scientific American 275(5): 72-78.
Elitzur, A. C. and L. Vaidman (1993). “Quantum-mechanical interaction-free measurements.” Foundations of Physics 23(7): 987-997.
Kwiat, P., H. Weinfurter, T. Herzog, A. Zeilinger and M. A. Kasevich (1995). “Interaction-free measurement.” Physical Review Letters 74(24): 4763-4766.
Imagine if you could use the physics of coherent light to record a 3D hologram of a cancer tumor and use it to select the best therapy for the cancer patient.
This week in Scientific Reports, a Nature Research publication, we demonstrate the first step towards that goal.
In a collaboration between Purdue University and the Northwestern University School of Medicine, we performed Doppler spectroscopy of intracellular dynamics of human epithelial ovarian cancer tumor biopsies and observed how they responded to selected anti-cancer drugs. Distinctly different Doppler spectra were observed for patients who went into remission versus those who failed to achieve cancer remission. This is the first clinical pilot trial of the technology, known as Biodynamic Imaging (BDI), published in human cancer research.
BDI may, in the future, make it possible to select the most effective therapies for individual cancer patients, realizing the long-sought dream of personalized cancer care.
The Purdue University Office of Technology Transfer has licensed the BDI patent portfolio to Animated Dynamics, Inc., located in Indianapolis, IN, which is working to commercialize the technology and translate it to the cancer clinic. Currently, less than 40% of all cancer patients respond favorably to their chemotherapy. Using BDI technology, our hope is to improve rates of remission in select cancer settings.
Christian Andreas Doppler (1803 – 1853) was born in Salzburg, Austria, to a longstanding family of stonemasons. As a second son, he was expected to help his older brother run the business, so his father had him tested in his 18th year for his suitability for a career in business. The examiner Simon Stampfer (1790 – 1864), an Austrian mathematician and inventor teaching at the Lyceum in Salzburg, discovered that Doppler had a gift for mathematics and was better suited for a scientific career. Stampfer’s enthusiasm convinced Doppler’s father to enroll him in the Polytechnik Institute in Vienna (founded only a few years earlier, in 1815), where he took classes in mathematics, mechanics and physics from 1822 to 1825. Doppler excelled in his courses, but was dissatisfied with the narrowness of the education, yearning for more breadth and depth in his studies and for more significance in his positions, feelings he would struggle with for his entire short life. He left Vienna, returning to the Lyceum in Salzburg to round out his education with philosophy, languages and poetry. Unfortunately, this four-year detour away from technical studies impeded his ability to gain a permanent technical position, so he began a temporary assistantship with a mathematics professor at Vienna. As he approached his 30th birthday this term expired without prospects. He was about to emigrate to America when he finally received an offer to teach at a secondary school in Prague.
To read about the attack by Joseph Petzval on Doppler’s effect and the effect it had on Doppler, see my feature article “The Fall and Rise of the Doppler Effect“ in Physics Today, 73(3) 30, March (2020).
Doppler in Prague
Prague gave Doppler new life. He was a professor with a position that allowed him to marry the daughter of a silver- and goldsmith from Salzburg. He began to publish scholarly papers, and in 1837 was appointed supplementary professor of Higher Mathematics and Geometry at the Prague Technical Institute, promoted to full professor in 1841. It was here that he met the unusual genius Bernard Bolzano (1781 – 1848), recently returned from political exile in the countryside. Bolzano was a philosopher and mathematician who developed rigorous concepts of mathematical limits and is famous today for his part in the Bolzano-Weierstrass theorem in functional analysis, but he had been too liberal and too outspoken for the conservative Austrian regime and had been dismissed from the University in Prague in 1819. He was forbidden to publish his work in Austrian journals, which is one reason why much of Bolzano’s groundbreaking work in functional analysis remained unknown during his lifetime. However, he participated in the Bohemian Society for Science from a distance, recognizing the inventive tendencies in the newcomer Doppler and supporting him for membership in the Bohemian Society. When Bolzano was allowed to return in 1842 to the Polytechnic Institute in Prague, he and Doppler became close friends as kindred spirits.
Prague, Czech Republic
On May 25, 1842, Bolzano presided as chairman over a meeting of the Bohemian Society for Science on the day that Doppler read a landmark paper on the color of stars to a meagre assembly of only five regular members of the Society. The turn-out was so small that the meeting may have been held in the robing room of the Society rather than in the meeting hall itself. Leading up to this famous moment, Doppler’s interests were peripatetic, ranging widely over mathematical and physical topics, but he had lately become fascinated by astronomy and by the phenomenon of stellar aberration. Stellar aberration was discovered by James Bradley in 1729 and explained as the result of the Earth’s yearly motion around the Sun, causing the apparent location of a distant star to change slightly depending on the direction of the Earth’s motion. Bradley explained this in terms of the finite speed of light and was able to estimate it to within several percent. As Doppler studied Bradley aberration, he wondered how the relative motion of the Earth would affect the color of the star. By making a simple analogy of a ship traveling with, or against, a series of ocean waves, he concluded that the frequency of impact of the peaks and troughs of waves on the ship was no different than the arrival of peaks and troughs of the light waves impinging on the eye. Because perceived color was related to the frequency of excitation in the eye, he concluded that the color of light would be slightly shifted to the blue if approaching, and to the red if receding from, the light source.
Doppler wave fronts from a source emitting spherical waves moving with speeds β relative to the speed of the wave in the medium.
Doppler calculated the magnitude of the effect by taking a simple ratio of the speed of the observer relative to the speed of light. What he found was that the speed of the Earth, though sufficient to cause the detectable aberration in the position of stars, was insufficient to produce a noticeable change in color. However, his interest in astronomy had made him familiar with binary stars where the relative motion of the light source might be high enough to cause color shifts. In fact, in the star catalogs there were examples of binary stars that had complementary red and blue colors. Therefore, the title of his paper, published in the Proceedings of the Royal Bohemian Society of Sciences a few months after he read it to the society, was “On the Coloured Light of the Double Stars and Certain Other Stars of the Heavens: Attempt at a General Theory which Incorporates Bradley’s Theorem of Aberration as an Integral Part”.
Title page of Doppler’s 1842 paper introducing the Doppler Effect.
Doppler’s analogy was correct, but like all analogies not founded on physical law, it differed in detail from the true nature of the phenomenon. By 1842 the transverse character of light waves had been thoroughly proven through the work of Fresnel and Arago several decades earlier, yet Doppler held onto the old-fashioned notion that light was composed of longitudinal waves. Bolzano, fully versed in the transverse nature of light, kindly published a commentary shortly afterwards  showing how the transverse effect for light, and a longitudinal effect for sound, were both supported by Doppler’s idea. Yet Doppler also did not know that speeds in visual binaries were too small to produce noticeable color effects to the unaided eye. Finally, (and perhaps the greatest flaw in his argument on the color of stars) a continuous spectrum that extends from the visible into the infrared and ultraviolet would not change color because all the frequencies would shift together preserving the flat (white) spectrum.
The simple algebraic derivation of the Doppler Effect in the 1842 publication.
Doppler’s twelve years in Prague were intense. He was consumed by his Society responsibilities and by an extremely heavy teaching load that included personal exams of hundreds of students. The only time he could be creative was during the night while his wife and children slept. Overworked and running on too little rest, his health already frail with the onset of tuberculosis, Doppler collapsed, and he was unable to continue at the Polytechnic. In 1847 he transferred to the School of Mines and Forestry in Schemnitz (modern Banská Štiavnica in Slovakia) with more pay and less work. Yet the revolutions of 1848 swept across Europe, with student uprisings, barricades in the streets, and Hungarian liberation armies occupying the cities and universities, giving him no peace. Providentially, his former mentor Stampfer retired from the Polytechnic in Vienna, and Doppler was called to fill the vacancy.
Although Doppler was named the Director of Austria’s first Institute of Physics and was elected to the National Academy, he ran afoul of one of the other Academy members, Joseph Petzval (1807 – 1891), who persecuted Doppler and his effect. To read a detailed description of the attack by Petzval on Doppler’s effect and the effect it had on Doppler, see my feature article “The Fall and Rise of the Doppler Effect” in Physics Today, March issue (2020).
It is difficult today to appreciate just how deeply engrained the reality of the luminiferous ether was in the psyche of the 19th century physicist. The last of the classical physicists were reluctant even to adopt Maxwell’s electromagnetic theory for the explanation of optical phenomena, and as physicists inevitably were compelled to do so, some of their colleagues looked on with dismay and disappointment. This was the situation for Woldemar Voigt (1850 – 1919) at the University of Göttingen, who was appointed as one of the first professors of physics there in 1883, to be succeeded in later years by Peter Debye and Max Born. Voigt received his doctorate at the University of Königsberg under Franz Neumann, exploring the elastic properties of rock salt, and at Göttingen he spent a quarter century pursuing experimental and theoretical research into crystalline properties. Voigt’s research, with students like Paul Drude, laid the foundation for the modern field of solid state physics. His textbook Lehrbuch der Kristallphysik published in 1910 remained influential well into the 20th century because it adopted mathematical symmetry as a guiding principle of physics. It was in the context of his studies of crystal elasticity that he introduced the word “tensor” into the language of physics.
At the January 1887 meeting of the Royal Society of Science at Göttingen, three months before Michelson and Morley began their reality-altering experiments at the Case Western Reserve University in Cleveland, Ohio, Voigt submitted a paper deriving the longitudinal optical Doppler effect in an incompressible medium. He was responding to results published in 1886 by Michelson and Morley on their measurements of the Fresnel drag coefficient, which was the precursor to their later results on the absolute motion of the Earth through the ether.
Fresnel drag is the effect of light propagating through a medium that is in motion. The French physicist Francois Arago (1786 – 1853) in 1810 had attempted to observe the effects of corpuscles of light emitted from stars propagating with different speeds through the ether as the Earth spun on its axis and traveled around the sun. He succeeded only in observing ordinary stellar aberration. The absence of the effects of motion through the ether motivated Augustin-Jean Fresnel (1788 – 1827) to apply his newly-developed wave theory of light to explain the null results. In 1818 Fresnel derived an expression for the dragging of light by a moving medium that explained the absence of effects in Arago’s observations. For light propagating through a medium of refractive index n that is moving at a speed v, the resultant velocity of light is

u = c/n + v (1 − 1/n²)
where the last term in parenthesis is the Fresnel drag coefficient. The Fresnel drag effect supported the idea of the ether by explaining why its effects could not be observed—a kind of Catch-22—but it also applied to light moving through a moving dielectric medium. In 1851, Fizeau used an interferometer to measure the Fresnel drag coefficient for light moving through moving water, arriving at conclusions that directly confirmed the Fresnel drag effect. The positive experiments of Fizeau, as well as the phenomenon of stellar aberration, would be extremely influential on the thoughts of Einstein as he developed his approach to special relativity in 1905. They were also extremely influential to Michelson, Morley and Voigt.
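Fresnel's drag coefficient can be checked against Einstein's later relativistic velocity addition, of which it is the first-order limit. A minimal numerical sketch, in units where c = 1; the values of n and v below are illustrative (roughly water, and a medium speed of about 30 km/s):

```python
# Compare Fresnel's partial drag with exact relativistic velocity addition.
c = 1.0

def fresnel_drag(n, v):
    # u = c/n + v*(1 - 1/n^2): Fresnel's dragged light speed
    return c / n + v * (1.0 - 1.0 / n**2)

def relativistic_sum(n, v):
    # exact relativistic addition of the speeds c/n and v
    return (c / n + v) / (1.0 + (c / n) * v / c**2)

n, v = 1.33, 1e-4
print(fresnel_drag(n, v))
print(relativistic_sum(n, v))
# The two agree to first order in v (difference of order v^2), while both
# differ from the undragged speed c/n by the full drag term v*(1 - 1/n^2).
```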
In his paper on the absence of the Fresnel drag effect in the first Michelson-Morley experiment, Voigt pointed out that an equation of the form

∇²u − (1/c²) ∂²u/∂t² = 0
is invariant under the transformation

x′ = x − vt,  y′ = y/γ,  z′ = z/γ,  t′ = t − vx/c²

where γ = 1/√(1 − v²/c²).
From our modern vantage point, we immediately recognize (to within a scale factor) the Lorentz transformation of relativity theory. The first equation is common Galilean relativity, but the last equation was something new, introducing a position-dependent time for an observer moving at a speed v comparable to the speed of light. Using these equations, Voigt was the first to derive the longitudinal (conventional) Doppler effect from relativistic effects.
Voigt’s derivation of the longitudinal Doppler effect used a classical approach that is still used today in Modern Physics textbooks to derive the Doppler effect. The argument proceeds by considering a moving source that emits a continuous wave in the direction of motion. Because the wave propagates at a finite speed, the moving source chases the leading edge of the wave front, catching up by a small amount by the time a single cycle of the wave has been emitted. The resulting compressed oscillation represents a blue shift of the emitted light. By using his transformations, Voigt arrived at the first relativistic expression for the shift in light frequency. At low speeds, Voigt’s derivation reverted to Doppler’s original expression.
A few months after Voigt delivered his paper, Michelson and Morley announced the results of their interferometric measurements of the motion of the Earth through the ether—with their null results. In retrospect, the Michelson-Morley experiment is viewed as one of the monumental assaults on the old classical physics, helping to launch the relativity revolution. However, in its own day, it was little more than just another null result on the ether. It did incite FitzGerald and Lorentz to suggest that the length of the arms of the interferometer contracted in the direction of motion, leading to the eventual emergence of the full Lorentz transformations by 1904—seventeen years after the Michelson results.
In 1904 Einstein, working in relative isolation at the Swiss patent office, was surprisingly unaware of the latest advances in the physics of the ether. He did not know about Voigt’s derivation of the relativistic Doppler effect (1887), just as he had not heard of Lorentz’s final version of relativistic coordinate transformations (1904). His thinking about relativistic effects focused much farther into the past, to Bradley’s stellar aberration (1729) and Fizeau’s experiment of light propagating through moving water (1851). Einstein proceeded from simple principles, unencumbered by the mental baggage of the day, and delivered his beautifully minimalist theory of special relativity in his famous paper of 1905, “On the Electrodynamics of Moving Bodies”, independently deriving the Lorentz coordinate transformations.
One of Einstein’s talents in theoretical physics was to predict new phenomena as a way to provide direct confirmation of a new theory. This was how he later famously predicted the deflection of light by the Sun and the gravitational frequency shift of light. In 1905 he used his new theory of special relativity to predict observable consequences that included a general treatment of the relativistic Doppler effect. This included the effects of time dilation in addition to the longitudinal effect of the source chasing the wave. Time dilation produced a correction to Doppler’s original expression for the longitudinal effect that became significant at speeds approaching the speed of light. More significantly, it predicted a transverse Doppler effect for a source moving along a line perpendicular to the line of sight to an observer. This effect had not been predicted either by Doppler or by Voigt. The equation for the general Doppler effect for any observation angle θ is

f = f₀ √(1 − β²) / (1 − β cos θ)

where β = v/c.
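The general relativistic Doppler effect is easy to explore numerically. A short sketch, assuming the standard convention f = f₀√(1 − β²)/(1 − β cos θ) with θ the angle between the source velocity and the line of sight in the observer's frame:

```python
import math

def doppler(f0, beta, theta):
    """Observed frequency for a source moving at beta = v/c; theta is the
    angle between the source velocity and the line of sight (observer frame,
    assumed convention)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return f0 / (gamma * (1.0 - beta * math.cos(theta)))

f0, beta = 1.0, 0.1
print(doppler(f0, beta, 0.0))           # longitudinal, approaching: blue shift
print(doppler(f0, beta, math.pi / 2))   # transverse: pure time dilation (red)
print(doppler(f0, beta, math.pi))       # longitudinal, receding: red shift
```

At θ = 90 degrees the longitudinal term vanishes and only the time-dilation factor 1/γ survives, which is exactly the transverse Doppler effect that Ives and Stilwell later measured.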
Just as Doppler had been motivated by Bradley’s aberration of starlight when he conceived of his original principle for the longitudinal Doppler effect, Einstein combined the general Doppler effect with his results for the relativistic addition of velocities (also in his 1905 Annalen paper) as the conclusive treatment of stellar aberration nearly 200 years after Bradley first observed the effect.
Despite the generally positive reception of Einstein’s theory of special relativity, some of its consequences were anathema to many physicists at the time. A key stumbling block was the question whether relativistic effects, like moving clocks running slowly, were only apparent or actually real, and Einstein had to fight to convince others of their reality. When Johannes Stark (1874 – 1957) observed Doppler line shifts in ion beams called “canal rays” in 1906 (Stark received the 1919 Nobel prize in part for this discovery), Einstein promptly published a paper suggesting how the canal rays could be used in a transverse geometry to directly detect time dilation through the transverse Doppler effect. Thirty years passed before the experiment was performed with sufficient accuracy by Herbert Ives and G. R. Stilwell in 1938 to measure the transverse Doppler effect. Ironically, even at this late date, Ives and Stilwell were convinced that their experiment had disproved Einstein’s time dilation by supporting Lorentz’s contraction theory of the electron. The Ives-Stilwell experiment was the first direct test of time dilation, followed in 1940 by muon lifetime measurements.
Bradley, J. (1729). “Account of a new discovered Motion of the Fix’d Stars.” Phil. Trans. 35: 637–660.
 C. A. DOPPLER, “Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels (About the coloured light of the binary stars and some other stars of the heavens),” Proceedings of the Royal Bohemian Society of Sciences, vol. V, no. 2, pp. 465–482, (Reissued 1903) (1842).
B. Bolzano, “Ein Paar Bemerkungen über die neue Theorie in Herrn Professor Ch. Doppler’s Schrift ‘Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels’,” Pogg. Ann. der Physik und Chemie, vol. 60, p. 83, 1843; B. Bolzano, “Christian Doppler’s neueste Leistungen auf dem Gebiet der physikalischen Apparatenlehre, Akustik, Optik und optischen Astronomie,” Pogg. Ann. der Physik und Chemie, vol. 72, pp. 530-555, 1847.
W. Voigt, “Über das Doppler’sche Princip,” Göttinger Nachrichten, vol. 7, pp. 41–51, (1887). The common use of c to express the speed of light came later from Voigt’s student Paul Drude.
 A. Einstein, “On the electrodynamics of moving bodies,” Annalen Der Physik, vol. 17, pp. 891-921, 1905.
 J. Stark, W. Hermann, and S. Kinoshita, “The Doppler effect in the spectrum of mercury,” Annalen Der Physik, vol. 21, pp. 462-469, Nov 1906.
A. Einstein, “Über die Möglichkeit einer neuen Prüfung des Relativitätsprinzips,” Annalen der Physik, vol. 328, pp. 197–198, 1907.
 H. E. Ives and G. R. Stilwell, “An experimental study of the rate of a moving atomic clock,” Journal of the Optical Society of America, vol. 28, p. 215, 1938.
 B. Rossi and D. B. Hall, “Variation of the Rate of Decay of Mesotrons with Momentum,” Physical Review, vol. 59, pp. 223–228, 1941.
The bending of light rays as they enter a transparent medium—what today is called Snell’s Law—has had a long history of independent discoveries and radically different approaches. The general problem of refraction was known to the Greeks in the first century AD, and it was later discussed by the Arabic scholar Alhazen. Ibn Sahl in Baghdad in 984 AD was the first to put an accurate equation to the phenomenon. Thomas Harriott in England discussed the problem with Johannes Kepler in 1602, unaware of the work by Ibn Sahl. Willebrord Snellius (1580–1626) in the Netherlands derived the equation for refraction in 1621, but did not publish it, though it was known to Christiaan Huygens (1629 – 1695). René Descartes (1596 – 1650), unaware of Snellius’ work, derived the law in his Dioptrics, using his newly-invented coordinate geometry. Christiaan Huygens, in his Traité de la Lumière in 1678, derived the law yet again, this time using his principle of secondary waves, though he acknowledged the prior work of Snellius, permanently cementing the shortened name “Snell” to the law of refraction. Snell’s Law is a special case of the Eikonal Equation and is useful to calculate the caustic envelope of rays transmitted through undulating surfaces.
Through this history and beyond, there have been many approaches to deriving Snell’s Law. Some used ideas of momentum, while others used principles of waves. Today, there are roughly five different ways to derive Snell’s law. These are:
1) Huygens’ Principle,
2) Fermat’s Principle,
3) Wavefront Continuity,
4) Plane-wave Boundary Conditions, and
5) Photon Momentum Conservation.
The approaches differ in detail, but they fall into two rough categories: the first two fall under minimization or extremum principles, and the last three fall under continuity or conservation principles.
Snell’s Law: Huygens’ Principle
Huygens’ principle, presented in his Traité de la Lumière of 1678, states that every point on a wavefront serves as the source of a spherical secondary wave. This was one of the first wave principles ever proposed for light (Robert Hooke had suggested that light had wavelike character based on his observations of colors in thin films), yet it remains amazingly powerful even today. It can be used not only to derive Snell’s law but also properties of light scattering and diffraction. Huygens’ principle is a form of minimization principle: it finds the direction of propagation (for a spherically expanding wavefront from a point where a ray strikes a surface) that yields a minimum angle (tangent to the surface) relative to a second source. Finding the tangent to the spherical surface is a minimization problem and yields Snell’s Law.
The use of Huygens’ principle for the derivation of Snell’s Law is shown in Fig. 1. Two parallel incoming rays strike a surface a distance d apart. The first point emits a secondary spherical wave into the second medium. The wavefront propagates at a speed v₂ relative to the speed v₁ in the first medium. In the diagram, each propagation distance divided by the separation d equals the sine of the corresponding angle:

sin θ₁ = v₁t/d,  sin θ₂ = v₂t/d

Solving each for d and equating the two expressions gives

sin θ₁ / v₁ = sin θ₂ / v₂

The speed depends on the refractive index as

v₁ = c/n₁,  v₂ = c/n₂

which leads to Snell’s Law:

n₁ sin θ₁ = n₂ sin θ₂
Snell’s Law: Fermat’s Principle
Fermat’s principle of least time is a direct minimization problem that finds the least time it takes light to propagate from one point to another. One of the central questions about Fermat’s principle is: why does it work? Why is the path of least time the path light needs to take? I’ll answer that question after we do the derivation. The configuration of the problem is shown in Fig. 2.
Consider a source point A and a destination point B. Light travels in a straight line in each medium, deflecting at the point x on the figure. The speed in medium 1 is c/n1, and the speed in medium 2 is c/n2. What position x provides the minimum time?
The distances from A to x, and from x to B, are, respectively (writing h₁ for the height of A above the interface, h₂ for the depth of B below it, and L for their horizontal separation):

d₁ = √(h₁² + x²),  d₂ = √(h₂² + (L − x)²)

The total time is

T = (n₁/c) √(h₁² + x²) + (n₂/c) √(h₂² + (L − x)²)

Minimize this expression by taking the derivative of the time with respect to the position x and setting the result to zero:

dT/dx = (n₁/c) x/√(h₁² + x²) − (n₂/c) (L − x)/√(h₂² + (L − x)²) = 0

Recognizing that x/d₁ = sin θ₁ and (L − x)/d₂ = sin θ₂ (angles measured from the normal) yields Snell’s Law

n₁ sin θ₁ = n₂ sin θ₂
Fermat’s principle of least time can be explained in terms of wave interference. If we think of all paths being taken by propagating waves, then those waves that take paths that differ only a little from the optimum path still interfere constructively. This is the principle of stationarity and is related to the principle of least action. The time minimizes a quadratic expression that deviates from the minimum only in second order (shown in the right part of Fig. 2). Therefore, all “nearby” paths interfere constructively, while paths that are farther away begin to interfere destructively. Therefore, the path of least time is also the path of stationary time and hence stationary optical path length and hence the path of maximum constructive interference. This is the actual path taken by the wave—and the light.
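Fermat's construction can also be verified numerically: minimize the travel time over the crossing point x and check that the minimizing path satisfies Snell's law. The geometry below (the positions of A and B, and a simple ternary search for the minimum) is an illustrative choice of mine, not taken from the figure:

```python
import math

def travel_time(x, n1, n2, c=1.0):
    # A = (0, 1) in medium 1, B = (1, -1) in medium 2, interface at y = 0
    d1 = math.hypot(x, 1.0)          # A to the crossing point (x, 0)
    d2 = math.hypot(1.0 - x, 1.0)    # crossing point to B
    return n1 * d1 / c + n2 * d2 / c

def minimize(n1, n2):
    # ternary search over the unimodal (convex) travel time on [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(200):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if travel_time(m1, n1, n2) < travel_time(m2, n1, n2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

n1, n2 = 1.0, 1.5
x = minimize(n1, n2)
sin1 = x / math.hypot(x, 1.0)                  # sine of angle of incidence
sin2 = (1.0 - x) / math.hypot(1.0 - x, 1.0)    # sine of angle of refraction
print(n1 * sin1, n2 * sin2)   # equal at the minimum: n1 sin(t1) = n2 sin(t2)
```

The path of least time bends toward the normal in the denser medium, exactly as the stationarity argument predicts.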
Snell’s Law: Wavefront Continuity
When a wave passes across an interface between two transparent media, the phase of the wave remains continuous. This continuity of phase provides a way to derive Snell’s Law. Consider Fig. 3. A plane wave with wavelength λ₁ is incident from medium 1 on an interface with medium 2, in which the wavelength is λ₂. The wavefronts remain continuous, but they are “kinked” at the interface.

The waves in medium 1 and medium 2 share the segment of the interface between successive wavefronts. This distance is

d = λ₁/sin θ₁ = λ₂/sin θ₂

The wavelengths in the two media are related to the refractive index through

λ₁ = λ₀/n₁,  λ₂ = λ₀/n₂

where λ₀ is the free-space wavelength. Plugging these into the first expression yields

λ₀/(n₁ sin θ₁) = λ₀/(n₂ sin θ₂)

which relates the denominators through Snell’s Law

n₁ sin θ₁ = n₂ sin θ₂
Snell’s Law: Plane-Wave Boundary Condition
Maxwell’s four equations in integral form can each be applied to the planar interface between two refractive media.
All four boundary conditions can be written in the form

A_i exp[i(k_i·r − ωt)] + A_r exp[i(k_r·r − ωt)] = A_t exp[i(k_t·r − ωt)]

evaluated on the interface, where i, r and t stand for incident, reflected and transmitted. The only way this condition can be true for all possible values of the fields is if the phases of the wave terms are all the same (phase-matching), namely

k_i·r = k_r·r = k_t·r   (on the interface)

which in turn guarantees that the transverse projection of the k-vector is continuous across the interface

k_i,∥ = k_r,∥ = k_t,∥

and the transverse components (projections) are

n1 k0 sin θ_i = n1 k0 sin θ_r = n2 k0 sin θ_t

where the last line states both Snell’s law of refraction (n1 sin θ_i = n2 sin θ_t) and the law of reflection (θ_i = θ_r). Therefore, the general wave boundary condition leads immediately to Snell’s Law.
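A short numerical check (with an assumed wavelength and assumed indices, chosen only for illustration) confirms that the refraction angle given by Snell’s Law is exactly the one that keeps the transverse k-vector continuous across the interface.

```python
# Sketch: continuity of the transverse k-vector across an interface is the
# same statement as Snell's Law. Angles are measured from the surface normal,
# and k = n*k0 in each medium.
import numpy as np

k0 = 2*np.pi/500e-9      # free-space wavenumber for an assumed 500 nm wave
n1, n2 = 1.0, 1.5
theta1 = np.radians(30)

# Transverse (along-interface) component of the incident k-vector
kx_incident = n1*k0*np.sin(theta1)

# Snell's Law gives the refracted angle; its transverse k must match
theta2 = np.arcsin(n1*np.sin(theta1)/n2)
kx_transmitted = n2*k0*np.sin(theta2)

print(kx_incident, kx_transmitted)   # identical: transverse k is conserved
```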
Snell’s Law: Momentum Conservation
Going from Maxwell’s equations for classical fields to photons keeps the same mathematical form for the transverse components of the k-vectors, but interprets them in a different manner. Where before there was a requirement for phase-matching the classical waves at the interface, in the photon picture the transverse k-vector becomes the transverse momentum through de Broglie’s relation

p = ħk

Therefore, continuity of the transverse k-vector is interpreted as conservation of the transverse momentum of the photon across the interface. In the figure, the second medium is denser, with a larger refractive index n2 > n1. Hence the momentum of the photon in the second medium is larger, while the transverse momentum projection stays the same. This simple interpretation gives the same mathematical form as the previous derivation using classical boundary conditions, namely

ħ n1 k0 sin θ_i = ħ n1 k0 sin θ_r = ħ n2 k0 sin θ_t

which is again Snell’s law and the law of reflection.
Snell’s Law has an eerie habit of springing from almost any statement that can be made about a dielectric interface. It yields the path of least time, tracks the path of maximum constructive interference, follows the Huygens construction of tangent wavelets, connects continuous wavefronts across the interface, conserves transverse momentum, and guarantees phase matching. These all sound very different, yet all lead to the same simple law of Snellius and Ibn Sahl.
The physics of a path of light passing a gravitating body is one of the hardest concepts to understand in General Relativity, but it is also one of the easiest. It is hard because there can be no force of gravity on light, even though the path of a photon bends as it passes a gravitating body. It is easy because the photon is following the simplest possible path—a geodesic, the path of force-free motion.
This blog picks up where my last blog left off, which defined the geodesic equation and presented the Schwarzschild metric. With those two equations in hand, we could simply solve for the null geodesics (a null geodesic is the path of a light beam through a manifold). But there turns out to be a simpler approach that Einstein came up with himself (he never did like doing things the hard way). He just had to sacrifice the fundamental postulate that he had used to explain everything about Special Relativity.
Throwing Special Relativity Under the Bus
The fundamental postulate of Special Relativity states that the speed of light is the same for all observers. Einstein posed this postulate, then used it to derive some of the most astonishing consequences of Special Relativity—like E = mc². This postulate is at the rock core of his theory of relativity and can be viewed as one of the simplest “truths” of our reality—or at least of our spacetime.
Yet as soon as Einstein began thinking about how to extend Special Relativity to a more general situation, he realized almost immediately that he would have to throw this postulate out. While the speed of light measured locally is always equal to c, the apparent speed of light observed by a distant observer (far from the gravitating body) is modified by gravitational time dilation and length contraction. This means that the apparent speed of light, as observed at a distance, varies as a function of position. From this simple conclusion Einstein derived a first estimate of the deflection of light by the Sun, though his initial estimate was off by a factor of 2. (The full story of Einstein’s derivation of the deflection of light by the Sun and its confirmation by Eddington is in Chapter 7 of Galileo Unbound (Oxford University Press, 2018).)
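As a quick check of that factor of 2, the two deflection estimates can be evaluated for light grazing the Sun. This is a sketch using standard physical constants, not a calculation from the original post: the 1911 estimate gives 2GM/(c²b), while the full 1915 theory gives 4GM/(c²b).

```python
# Sketch of Einstein's factor of 2: deflection of starlight grazing the Sun.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter at the limb)

early = 2*G*M_sun/(c**2*R_sun)   # Einstein's 1911 half-value estimate
full = 4*G*M_sun/(c**2*R_sun)    # the 1915 value confirmed by Eddington

arcsec = math.degrees(full)*3600
print(arcsec)   # about 1.75 arcseconds
```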
The “Optics” of Gravity
The invariant element for a light path moving radially in the Schwarzschild geometry is

ds² = 0 = (1 − R_S/r) c²dt² − dr²/(1 − R_S/r)

The apparent speed of light is then

c(r) = dr/dt = c (1 − R_S/r)

where c(r) is always less than c when observed from flat space far from the gravitating body. The “refractive index” of space is defined, as for any optical material, as the ratio of the constant speed divided by the observed speed

n(r) = c / c(r)
Because the Schwarzschild metric has the property

c(r) = c (1 − R_S/r)

the effective refractive index of warped space-time is

n(r) = 1 / (1 − R_S/r)

with a divergence at the Schwarzschild radius.
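A minimal numerical sketch of this index profile (the choice of units with R_S = 1 is arbitrary) shows both limits: the index approaches 1 far from the mass and diverges as r approaches the Schwarzschild radius.

```python
# Sketch: the effective refractive index of warped space-time,
# n(r) = 1/(1 - R_s/r), evaluated at a few radii outside the horizon.
import numpy as np

Rs = 1.0                                   # Schwarzschild radius, arbitrary units
r = np.array([1.001, 1.5, 3.0, 10.0, 1000.0])
n = 1/(1 - Rs/r)
print(n)   # huge near r = Rs, essentially 1 at large r
```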
The refractive index of warped space-time in the limit of weak gravity can be used in the ray equation (also known as the Eikonal equation, described in an earlier blog)

d/ds( n dx_a/ds ) = ∂n/∂x_a

where the gradient of the refractive index of space is

∂n/∂x_a = −n² R_S x_a / r³

The ray equation is then a four-variable flow

dx/ds = p_x/n,   dy/ds = p_y/n,   dp_x/ds = ∂n/∂x,   dp_y/ds = ∂n/∂y
These equations represent a 4-dimensional flow for a light ray confined to a plane. The trajectory of any light path is found by using an ODE solver subject to the initial conditions for the direction of the light ray. This is simple for us to do today with Python or Matlab, but it was also something that could be done long before the advent of computers by early theorists of relativity like Max von Laue (1879 – 1960).
The Relativity of Max von Laue
In the Fall of 1905 in Berlin, a young German physicist by the name of Max Laue was sitting in the physics colloquium at the University, listening to another Max, his doctoral supervisor Max Planck, deliver a seminar on Einstein’s new theory of relativity. Laue was struck by the simplicity of the theory, so simple as to seem simplistic and hence hard to believe, but the beauty of the theory stuck with him, and he began to think through the consequences for experiments like the Fizeau experiment on partial ether drag.
In 1851, Armand Hippolyte Louis Fizeau (1819 – 1896) built one of the world’s first optical interferometers and used it to measure the speed of light inside moving fluids. At that time the speed of light was believed to be a property of the luminiferous ether, and there were several opposing theories on how light would travel inside moving matter. One theory would have the ether fully stationary, unaffected by moving matter, so that the speed of light would be unaffected by motion. An opposite theory would have the ether fully entrained by matter, so that the speed of light in moving matter would be a simple sum of speeds. A middle theory considered that only part of the ether was dragged along with the moving matter. This was Fresnel’s partial ether drag hypothesis, which he had arrived at to explain why his friend François Arago had not observed any contribution to stellar aberration from the motion of the Earth through the ether. When Fizeau performed his experiment, the results agreed closely with Fresnel’s drag coefficient, which seemed to settle the matter. Yet when Michelson and Morley performed their experiments of 1887, there was no evidence for partial drag.
Even after Einstein’s exposition of relativity in 1905, the disagreement between the Michelson-Morley results and Fizeau’s results was not fully reconciled until Laue showed in 1907 that the velocity addition theorem of relativity gives complete agreement with the Fizeau experiment. The velocity observed in the lab frame is found using the velocity addition theorem of special relativity. For the Fizeau experiment, water with a refractive index n is moving with a speed v, and hence the speed in the lab frame is

u = (c/n + v) / (1 + v/(nc)) ≈ c/n + v(1 − 1/n²)

The difference in the speed of light between the stationary and the moving water is then

Δu = u − c/n ≈ v(1 − 1/n²)

where the last term is precisely the Fresnel drag coefficient. This was one of the first definitive “proofs” of the validity of Einstein’s theory of relativity, and it made Laue one of relativity’s staunchest proponents. Spurred on by his success with the Fresnel drag coefficient, Laue wrote the first monograph on relativity theory, publishing it in 1910.
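Laue’s resolution can be verified with a short numerical sketch; the water index and flow speed below are assumed round numbers, not values from Fizeau’s original apparatus.

```python
# Sketch of Laue's 1907 result: relativistic velocity addition reproduces
# the Fresnel partial-drag prediction to first order in v/c.
n = 1.333            # refractive index of water (assumed)
c = 2.998e8          # speed of light, m/s
v = 7.0              # flow speed, m/s (assumed, typical laboratory scale)

u_exact = (c/n + v)/(1 + v/(n*c))      # relativistic velocity addition
u_fresnel = c/n + v*(1 - 1/n**2)       # Fresnel partial-drag prediction

print(u_exact - u_fresnel)   # difference is of order v**2/c: utterly negligible
```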
A Nobel Prize for Crystal X-ray Diffraction
In 1909 Laue became a Privatdozent under Arnold Sommerfeld (1868 – 1951) at the University of Munich. In the Spring of 1912 he was walking in the Englischer Garten on the northern edge of the city, talking with Paul Ewald (1888 – 1985), who was finishing his doctorate with Sommerfeld studying the structure of crystals. Ewald was considering the interaction of optical wavelengths with the periodic lattice when it struck Laue that x-rays would have the kind of short wavelengths that would allow the crystal to act as a diffraction grating, producing multiple diffraction orders. Within a few weeks of that discussion, two of Sommerfeld’s students (Friedrich and Knipping) used an x-ray source and photographic film to look for the predicted diffraction spots from a copper sulfate crystal. When the film was developed, it showed a constellation of dark spots for each of the diffraction orders of the x-rays scattered from the multiple periodicities of the crystal lattice. Two years later, in 1914, Laue was awarded the Nobel Prize in Physics for the discovery. That same year his father was elevated to the hereditary nobility of the Prussian empire, and Max Laue became Max von Laue.
Von Laue was not one to take risks, and he remained conservative in many of his interests. He was immensely respected and played important roles in the administration of German science, but his scientific contributions after receiving the Nobel Prize were only modest. Yet as the Nazis came to power in the early 1930s, he was one of the few physicists to stand up and resist the Nazi takeover of German physics. He was especially disturbed by the plight of the Jewish physicists. In 1933 he was invited to give the keynote address at the conference of the German Physical Society in Würzburg, where he spoke out against the Nazi rejection of relativity as they branded it “Jewish science”. In his speech he likened Einstein, the target of much of the propaganda, to Galileo. He said, “No matter how great the repression, the representative of science can stand erect in the triumphant certainty that is expressed in the simple phrase: And yet it moves.” Von Laue believed that truth would hold out in the face of the proscription against relativity theory by the Nazi regime. The quote “And yet it moves” is supposed to have been muttered by Galileo just after his abjuration before the Inquisition, referring to the Earth moving around the Sun. Although the quote is famous, it is believed to be a myth.
In an odd side-note of history, von Laue sent his gold Nobel prize medal to Denmark for its safe keeping with Niels Bohr so that it would not be paraded about by the Nazi regime. Yet when the Nazis invaded Denmark, to avoid having the medals fall into the hands of the Nazis, the medal was dissolved in aqua regia by a member of Bohr’s team, George de Hevesy. The gold completely dissolved into an orange liquid that was stored in a beaker high on a shelf through the war. When Denmark was finally freed, the dissolved gold was precipitated out and a new medal was struck by the Nobel committee and re-presented to von Laue in a ceremony in 1951.
The Orbits of Light Rays
Von Laue’s interests stayed close to the properties of light and electromagnetic radiation ever since he was introduced to the field studying with Woldemar Voigt at Göttingen in 1899. This interest included the theory of relativity, and only a few years after Einstein published his theory of General Relativity and Gravitation, von Laue added to his earlier textbook on relativity by writing a second volume on the general theory. The new volume was published in 1920 and included the theory of the deflection of light by gravity.
One of the very few illustrations in his second volume is of light coming into interaction with a supermassive gravitational field characterized by a Schwarzschild radius. (No one at the time called it a “black hole”, nor even mentioned Schwarzschild. That terminology came much later.) He shows in the drawing how light, if incident at just the right impact parameter, would actually loop around the object. This is the first time such a diagram appeared in print, showing the trajectory of light so strongly affected by gravity.
Python Code: gravlens.py
# -*- coding: utf-8 -*-
"""
gravlens.py
Created on Tue May 28 11:50:24 2019
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019)
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

A = 10        # Schwarzschild radius in plot units
eps = 1e-6    # guard against division by zero at the origin

def refindex(x, y):
    # Effective refractive index of warped space-time, n = 1/(1 - A/r),
    # and its gradient, with an approximate correction factor for the Eikonal
    rp0 = np.sqrt(x**2 + y**2)
    n = 1/(1 - A/(rp0 + eps))
    fac = np.abs(1 - 9*(A/(rp0 + eps))**2/8)   # approx correction to Eikonal
    nx = -fac*n**2*A*x/(rp0 + eps)**3
    ny = -fac*n**2*A*y/(rp0 + eps)**3
    return [n, nx, ny]

def flow_deriv(x_y_z_w, tspan):
    # Four-variable ray flow: state is [x, y, px, py]
    x, y, z, w = x_y_z_w
    n, nx, ny = refindex(x, y)
    yp = np.zeros(shape=(4,))
    yp[0] = z/n     # dx/ds = px/n
    yp[1] = w/n     # dy/ds = py/n
    yp[2] = nx      # dpx/ds = dn/dx
    yp[3] = ny      # dpy/ds = dn/dy
    return yp

def create_circle():
    # Draw the black hole as a disk at the Schwarzschild radius
    circle = plt.Circle((0, 0), radius=A, color='black')
    plt.gca().add_patch(circle)
    return circle

plt.figure(1)

# Fan of rays incident from the left at a range of impact parameters
for loop in range(-5, 30):
    xstart = -100
    ystart = -2.245 + 4*loop
    n, nx, ny = refindex(xstart, ystart)
    y0 = [xstart, ystart, n, 0]          # launch the ray in the +x direction
    tspan = np.linspace(1, 400, 2000)
    y = integrate.odeint(flow_deriv, y0, tspan)
    xx = y[1:2000, 0]
    yy = y[1:2000, 1]
    lines = plt.plot(xx, yy)

c = create_circle()
axes = plt.gca()
axes.set_aspect('equal')

# Now set up a circular photon orbit
xstart = 0
ystart = 15                              # photon sphere at 1.5 A
n, nx, ny = refindex(xstart, ystart)
y0 = [xstart, ystart, n, 0]
tspan = np.linspace(1, 94, 1000)
y = integrate.odeint(flow_deriv, y0, tspan)
xx = y[1:1000, 0]
yy = y[1:1000, 1]
lines = plt.plot(xx, yy)
plt.setp(lines, linewidth=2, color='black')
plt.show()
One of the most striking effects of gravity on photon trajectories is the possibility for a photon to orbit a black hole in a circular orbit. This is shown in Fig. 3 as the black circular ring for a photon at a radius equal to 1.5 times the Schwarzschild radius. This radius defines what is known as the photon sphere. However, the orbit is not stable. Slight deviations will send the photon spiraling outward or inward.
The Eikonal approximation does not strictly hold under strong gravity, but the Eikonal equations with the effective refractive index of space still yield semi-quantitative behavior. In the Python code, a correction factor is used to match the theory to the circular photon orbits, while still agreeing with trajectories far from the black hole. The results of the calculation are shown in Fig. 3. For large impact parameters, the rays are deflected through a finite angle. At a critical impact parameter, near 3 times the Schwarzschild radius, the ray loops around the black hole. For smaller impact parameters, the rays are captured by the black hole.
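For reference, the exact general-relativistic values that the correction factor is tuned to reproduce can be computed directly. This is a sketch added for context; the unit choice (a Schwarzschild radius of 10, matching the plot units used in gravlens.py) is an assumption.

```python
# Sketch: exact Schwarzschild values for photon trajectories.
# Photon sphere radius: r_ph = 1.5 * R_s (unstable circular orbit).
# Critical impact parameter for capture: b_c = (3*sqrt(3)/2) * R_s ~ 2.6 * R_s.
import math

Rs = 10.0                            # Schwarzschild radius in plot units (assumed)
r_photon = 1.5*Rs                    # radius of the photon sphere
b_critical = (3*math.sqrt(3)/2)*Rs   # capture threshold for incoming rays

print(r_photon, b_critical)          # 15.0 and about 25.98
```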
Photons pile up around the black hole at the photon sphere. The first image ever of the photon sphere of a black hole was made earlier this year (announced April 10, 2019). The image shows the shadow of the supermassive black hole in the center of Messier 87 (M87), an elliptical galaxy 55 million light-years from Earth. This black hole is 6.5 billion times the mass of the Sun. Imaging the photon sphere required eight ground-based radio telescopes placed around the globe, operating together to form a single telescope with an effective aperture the size of our planet. The resolution of such a large telescope would allow one to image a half-dollar coin on the surface of the Moon, although this telescope operates in the radio frequency range rather than the optical.
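The resolution claim can be sanity-checked with the diffraction limit θ ≈ 1.22 λ/D. The numbers below (the EHT observing wavelength, Earth’s diameter, the coin size, and the lunar distance) are round-figure assumptions, not values from the original announcement.

```python
# Sketch: an Earth-sized aperture at millimeter wavelengths resolves roughly
# the angle a half-dollar coin subtends at the distance of the Moon.
wavelength = 1.3e-3      # m, assumed EHT observing wavelength
D = 1.27e7               # m, diameter of the Earth
theta_eht = 1.22*wavelength/D     # diffraction limit of the aperture

coin = 0.0306            # m, diameter of a half-dollar coin
moon = 3.84e8            # m, mean distance to the Moon
theta_coin = coin/moon

print(theta_eht, theta_coin)   # both are on the order of 1e-10 radians
```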
Nature loves the path of steepest descent. Place a ball on a smooth curved surface and release it, and it will instantaneously accelerate in the direction of steepest descent. Shoot a laser beam at an oblique angle onto a piece of glass to hit a target inside, and the path taken by the beam is the one that reaches the target in the least time. Diffract a stream of electrons from the surface of a crystal, and quantum detection events are greatest at the positions where the troughs and peaks of the de Broglie waves converge the most. The first example is Newton’s second law. The second example is Fermat’s principle and Snell’s Law. The third example is Feynman’s path-integral formulation of quantum mechanics. They all share a minimization principle—the principle of least action—the idea that the path of a dynamical system is the one that minimizes a property known as “action”.
The principle of least action, first proposed by the French physicist Maupertuis through mechanical analogy, became a principle of Lagrangian mechanics in the hands of Lagrange, but was still restricted to mechanical systems of particles. The principle was generalized forty years later by Hamilton, who began by considering the propagation of light waves, and ended by transforming mechanics into a study of pure geometry divorced from forces and inertia. Optics played a key role in the development of mechanics, and mechanics returned the favor by giving optics the Eikonal Equation. The Eikonal Equation is the “F = ma” of ray optics. Its solutions describe the paths of light rays through complicated media.
Anyone who has taken a course in optics knows that Étienne-Louis Malus (1775-1812) discovered the polarization of light, but little else is taught about this French mathematician, who was one of the savants Napoleon took along with him when he invaded Egypt in 1798. After experiencing numerous horrors of war and plague, Malus returned to France damaged but wiser. He discovered the polarization of light in the Fall of 1808 as he was playing with crystals of Iceland spar at sunset and happened to view the last rays of the sun reflected from the windows of the Luxembourg palace. Iceland spar produces double images in natural light because it is birefringent. Malus discovered that he could extinguish one of the double images of the Luxembourg windows by rotating the crystal a certain way, demonstrating that light is polarized by reflection. The degree to which light is extinguished as a function of the angle of the polarizing crystal is known as Malus’ Law.
Malus had picked up an interest in the general properties of light and imaging during lulls in his ordeal in Egypt. He was an emissionist, following his compatriot Laplace, rather than an undulationist following Thomas Young. It is ironic that the French scientists staunchly supported Newton on the nature of light while the British scientist Thomas Young was trying to upend Newtonian optics. Almost all physicists at that time were emissionists, only a few years after Young’s double-slit experiment of 1804, and few serious scientists accepted Young’s theory of the wave nature of light until Fresnel and Arago supplied the rigorous theory and experimental proofs much later, in 1819.
As a prelude to his later discovery of polarization, Malus had earlier proven a theorem about the trajectories that particles of light take through an optical system. One of the key questions about the particles of light in an optical system was how they formed images. The physics of light particles moving through lenses was too complex to treat at that time, but reflection was relatively easy, based on the simple law of reflection. Malus proved mathematically that, after reflection from a curved mirror, a set of rays perpendicular to an initial nonplanar surface remains perpendicular to a later surface after reflection (a property closely related to the conservation of optical étendue). This is known as Malus’ Theorem. He thought it held true only after a single reflection, but later mathematicians proved that it remains true after an arbitrary number of reflections, even in cases when the rays intersect to form an optical effect known as a caustic. The mathematics of caustics would catch the interest of an Irish mathematician and physicist who helped launch a new field of mathematical physics.
Hamilton’s Characteristic Function
William Rowan Hamilton (1805 – 1865) was a child prodigy who taught himself thirteen languages by the time he was thirteen years old (with the help of his linguist uncle), but mathematics became his primary focus at Trinity College Dublin. His mathematical prowess was so great that he was made the Astronomer Royal of Ireland while still an undergraduate student. He also became fascinated by the theory of envelopes of curves and in particular by the mathematics of caustic curves in optics.
In 1823, at the age of 18, he wrote a paper titled Caustics that was read to the Royal Irish Academy. In this paper, Hamilton gave an exceedingly simple proof of Malus’ Theorem, but that was perhaps the simplest part of the paper. Other aspects were mathematically obscure, and reviewers requested further additions and refinements before publication. Over the next four years, as Hamilton expanded this work on optics, he developed a new theory of optics, the first part of which was published as Theory of Systems of Rays in 1827, with two following supplements completed by 1833 but never published.
Hamilton’s most important contribution to optical theory (and eventually to mechanics) he called his characteristic function. By applying the principle of Fermat’s least time, which he called his principle of stationary action, he sought to find a single unique function that characterized every path through an optical system. By first proving Malus’ Theorem and then applying the theorem to any system of rays using the principle of stationary action, he was able to construct two partial differential equations whose solution, if it could be found, defined every ray through the optical system. This result was completely general and could be extended to include curved rays passing through inhomogeneous media. Because it mapped input rays to output rays, it was the most general characterization of any defined optical system. The characteristic function defined surfaces of constant action whose normal vectors were the rays of the optical system. Today these surfaces of constant action are called the Eikonal function (but how it got its name is the next chapter of this story). Using his characteristic function, Hamilton predicted a phenomenon known as conical refraction in 1832, which was subsequently observed, launching him to a level of fame unusual for a mathematician.
Once Hamilton had established his principle of stationary action for curved light rays, it was an easy step to extend it to mechanical systems of particles with curved trajectories. This step produced his most famous work, On a General Method in Dynamics, published in two parts in 1834 and 1835, in which he developed what became known as Hamiltonian dynamics. As his mechanical work was extended by others, including Jacobi, Darboux and Poincaré, Hamilton’s work on optics was overshadowed, overlooked and eventually lost. It was rediscovered when Schrödinger, in his famous paper of 1926, invoked Hamilton’s optical work as a direct example of the wave-particle duality of quantum mechanics. Yet in the interim, a German mathematician tackled the same optical problems that Hamilton had seventy years earlier, and gave the Eikonal Equation its name.
The German mathematician Heinrich Bruns (1848-1919) was engaged chiefly with the measurement of the Earth, or geodesy. He was a professor of mathematics in Berlin and later in Leipzig. One claim to fame is that one of his graduate students was Felix Hausdorff, who would go on to much greater fame in the fields of set theory and measure theory (the Hausdorff dimension was a precursor of the fractal dimension). Possibly motivated by his studies with Hausdorff on the refraction of light by the atmosphere, Bruns became interested in Malus’ Theorem for the same reasons and with the same goals as Hamilton, yet he was unaware of Hamilton’s work in optics.
The mathematical process of creating “images”, in the sense of a mathematical mapping, made Bruns think of the Greek word εἰκών (eikōn), which literally means “icon” or “image”, and he published a small book in 1895 with the title Das Eikonal, in which he derived a general equation for the path of rays through an optical system. His approach was heavily geometrical and is not easily recognized as an equation arising from variational principles. It rediscovered most of the results of Hamilton’s Theory of Systems of Rays and was thus not groundbreaking in the sense of new discovery. But it did reintroduce the world to the problem of systems of rays, and his name Eikonal for the equations of the ray paths stuck and was used with increasing frequency in subsequent years. Arnold Sommerfeld (1868 – 1951) was one of the early proponents of the Eikonal equation and recognized its connection with action principles in mechanics. He discussed the Eikonal equation in a 1911 optics paper with Runge, and in 1916 he used action principles to extend Bohr’s model of the hydrogen atom. While the Eikonal approach was not used often, it became popular in the 1960s when computational optics made numerical solutions possible.
Lagrangian Dynamics of Light Rays
In physical optics, one of the most important properties of a ray passing through an optical system is known as the optical path length (OPL). The OPL is the central quantity that is used in problems of interferometry, and it is the central property that appears in Fermat’s principle that leads to Snell’s Law. The OPL played an important role in the history of the calculus when Johann Bernoulli in 1697 used it to derive the path taken by a light ray as an analogy of a brachistochrone curve – the curve of least time taken by a particle between two points.
The OPL between two points in a refractive medium is the sum of the piecewise product of the refractive index n with infinitesimal elements of the path length ds. In integral form, this is expressed as

OPL = ∫ n(x_a) ds = ∫ n(x_a) √(ẋ_b ẋ_b) ds

where the “dot” is a derivative with respect to s. The optical Lagrangian is recognized as

L = n(x_a) √(ẋ_b ẋ_b)
The Lagrangian is inserted into the Euler equations to yield (after some algebra, see Introduction to Modern Dynamics pg. 336)

d/ds( n dx_a/ds ) = ∂n/∂x_a
This is a second-order ordinary differential equation in the variables x_a that define the ray path through the system. It is literally a “trajectory” of the ray, and the Eikonal equation becomes the F = ma of ray optics.
In a paraxial system (in which the rays never make large angles relative to the optic axis) it is common to select the position z as the single parameter that defines the curve of the ray path, so that the trajectory is parameterized as

x = x(z),   y = y(z)

where the derivatives are with respect to z, and the effective Lagrangian is recognized as

L = n(x, y, z) √(1 + x′² + y′²)
The Hamiltonian formulation is derived from the Lagrangian by defining an optical Hamiltonian as the Legendre transform of the Lagrangian. To start, the Lagrangian is expressed in terms of the generalized coordinates and momenta. The generalized optical momenta are defined as

p_a = ∂L/∂ẋ_a = n dx_a/ds
This relationship leads to an alternative expression for the Eikonal equation (also known as the scalar Eikonal equation) expressed as

|∇S|² = n²

where S(x,y,z) = const. defines the eikonal function. The momentum vectors are perpendicular to the surfaces of constant S, which are recognized as the wavefronts of a propagating wave.
The Lagrangian can be restated as a function of the generalized momenta, and the Legendre transform that takes the Lagrangian into the Hamiltonian is

H = p_a ẋ_a − L

The trajectory of the rays is the solution to Hamilton’s equations of motion applied to this Hamiltonian.
If the optical rays are restricted to the x-y plane, then Hamilton’s equations of motion can be expressed relative to the path length ds, and the momenta are p_a = n dx_a/ds. The ray equations are then (simply expressing the two second-order Eikonal equations as four first-order equations)

ẋ = p_x/n,   ẏ = p_y/n,   ṗ_x = ∂n/∂x,   ṗ_y = ∂n/∂y

where the dot is a derivative with respect to the path element ds.
As an example, consider a radial refractive index profile in the x-y plane

n(r) = 1 + e^(−r²/2σ²)

where r is the radius on the x-y plane. Putting this refractive index profile into the Eikonal equations creates a two-dimensional orbit in the x-y plane, with solutions that describe the paths of light rays through complicated media, including the phenomenon of gravitational lensing described in my following blog post.
Python Code: raysimple.py
The following Python code solves for individual trajectories.
# -*- coding: utf-8 -*-
"""
raysimple.py
Created on Tue May 28 11:50:24 2019
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019)
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from matplotlib import cm

# selection 1 = Gaussian
# selection 2 = Donut
selection = 1

def refindex(x, y):
    # Refractive index profile n(x,y) and its gradient (nx, ny)
    if selection == 1:
        sig = 10
        n = 1 + np.exp(-(x**2 + y**2)/2/sig**2)
        nx = (-2*x/2/sig**2)*np.exp(-(x**2 + y**2)/2/sig**2)
        ny = (-2*y/2/sig**2)*np.exp(-(x**2 + y**2)/2/sig**2)
    elif selection == 2:
        sig = 10
        r2 = x**2 + y**2
        r1 = np.sqrt(r2)
        expon = np.exp(-r2/2/sig**2)
        n = 1 + 0.3*r1*expon
        nx = 0.3*r1*(-2*x/2/sig**2)*expon + 0.3*expon*2*x/r1
        ny = 0.3*r1*(-2*y/2/sig**2)*expon + 0.3*expon*2*y/r1
    return n, nx, ny

def flow_deriv(x_y_z_w, tspan):
    # Four-variable ray flow: state is [x, y, px, py]
    x, y, z, w = x_y_z_w
    n, nx, ny = refindex(x, y)
    yp = np.zeros(shape=(4,))
    yp[0] = z/n     # dx/ds = px/n
    yp[1] = w/n     # dy/ds = py/n
    yp[2] = nx      # dpx/ds = dn/dx
    yp[3] = ny      # dpy/ds = dn/dy
    return yp

# Map the refractive index profile on a grid for the background contour plot
V = np.zeros(shape=(100, 100))
for xloop in range(100):
    xx = -20 + 40*xloop/100
    for yloop in range(100):
        yy = -20 + 40*yloop/100
        n, nx, ny = refindex(xx, yy)
        V[yloop, xloop] = n

fig = plt.figure(1)
contr = plt.contourf(V, 100, cmap=cm.coolwarm, vmin=1, vmax=3)
fig.colorbar(contr, shrink=0.5, aspect=5)

# Launch a single ray and integrate the four-variable flow
v1 = 0.707                  # Change this initial condition
v2 = np.sqrt(1 - v1**2)
y0 = [12, v1, 0, v2]        # Change these initial conditions
tspan = np.linspace(1, 1700, 1700)
y = integrate.odeint(flow_deriv, y0, tspan)
lines = plt.plot(y[1:1550, 0], y[1:1550, 1])
plt.show()
An excellent textbook on geometric optics from Hamilton’s point of view is K. B. Wolf, Geometric Optics in Phase Space (Springer, 2004). Another is H. A. Buchdahl, An Introduction to Hamiltonian Optics (Dover, 1992).
A rather older textbook on geometrical optics is by J. L. Synge, Geometrical Optics: An Introduction to Hamilton’s Method (Cambridge University Press, 1962) showing the derivation of the ray equations in the final chapter using variational methods. Synge takes a dim view of Bruns’ term “Eikonal” since Hamilton got there first and Bruns was unaware of it.
A book that makes an especially strong case for the Optical-Mechanical analogy of Fermat’s principle, connecting the trajectories of mechanics to the paths of optical rays is Daryl Holm, Geometric Mechanics: Part I Dynamics and Symmetry (Imperial College Press 2008).
Hamilton, W. R. “On a general method in dynamics I.” Mathematical Papers I, 103-161: 247-308 (1834); Hamilton, W. R. “On a general method in dynamics II.” Mathematical Papers I, 103-161: 95-144 (1835)
Schrödinger, E. “Quantification of the eigen-value problem.” Annalen der Physik 79(6): 489-527 (1926)
There is a very real possibility that quantum computing is, and always will be, a technology of the future. Yet if it is ever to be the technology of the now, then it needs two things: practical high-performance implementation and a killer app. Both of these will require technological breakthroughs. Whether this will be enough to make quantum computing real (commercializable) was the topic of a special symposium at the Conference on Lasers and Electro-Optics (CLEO) held in San Jose the week of May 6, 2019.
Quantum computing is stuck in a sort of limbo between hype and hope, pitched with incredible (unbelievable?) claims, yet supported by tantalizing laboratory demonstrations.
The symposium had panelists from many top groups working in quantum information science, including Jerry Chow (IBM), Mikhail Lukin (Harvard), Jelena Vuckovic (Stanford), Birgitta Whaley (Berkeley) and Jungsang Kim (IonQ). The moderator Ben Eggleton (U Sydney) posed the question to the panel: “Will Quantum Computing Actually Work?”. My Blog for this week is a report, in part, of what they said, and also what was happening in the hallways and the scientific sessions at CLEO. My personal view after listening and watching this past week is that the future of quantum computers is optics.
It is either ironic or obvious that the central figure behind quantum computing is Albert Einstein. It is obvious because Einstein provided the fundamental tools of quantum computing by creating both quanta and entanglement (the two key elements of any quantum computer). It is ironic because Einstein turned his back on quantum mechanics, and he “invented” entanglement precisely to argue that quantum mechanics was an incomplete science.
The actual quantum revolution did not begin with Max Planck in 1900, as so many Modern Physics textbooks attest, but with Einstein in 1905. This was his “miracle year,” when he published five seminal papers, each of which solved one of the greatest outstanding problems in the physics of the time. In one of those papers he used simple arguments based on statistics, combined with the properties of light emission, to propose, and in fact to prove, that light is composed of quanta of energy (later to be named “photons” by Gilbert Lewis in 1926). Although Planck’s theory of blackbody radiation contained quanta implicitly through the discrete actions of his oscillators in the walls of the cavity, Planck vigorously rejected the idea that light itself came in quanta. He even apologized for Einstein, while proposing him for membership in the Berlin Academy, saying that Einstein should be admitted despite his grave error of believing in light quanta. When Millikan set out in 1914 to prove experimentally that Einstein was wrong about photons by performing exquisite experiments on the photoelectric effect, he ended up proving that Einstein was right after all, which brought Einstein the Nobel Prize in 1921.
In the early 1930’s, after a series of intense and public debates with Bohr over the meaning of quantum mechanics, Einstein had had enough of the “Copenhagen Interpretation” of quantum mechanics. In league with Schrödinger, who deeply disliked Heisenberg’s version of quantum mechanics, the pair proposed two of the most iconic problems of quantum mechanics. Schrödinger launched, as a parody meant to ridicule, his eponymous “Schrödinger’s Cat,” and Einstein launched what has become known as “entanglement.” Each was intended to show the absurdity of quantum mechanics and drive a nail into its coffin, but each has been embraced so thoroughly by physicists that Schrödinger and Einstein are given the praise and glory for inventing these touchstones of quantum science. Schrödinger’s cat and entanglement both lie at the heart of the problems and the promise of quantum computers.
Between Hype and Hope
Quantum computing is stuck in a sort of limbo between hype and hope, pitched with incredible (unbelievable?) claims, yet supported by tantalizing laboratory demonstrations. In the midst of the current revival in quantum computing interest (the first wave of interest in quantum computing was in the 1990’s, see “Mind at Light Speed“), the US Congress has passed a House resolution to fund quantum computing efforts in the United States with a commitment of $1B. This comes on the heels of commercial efforts in quantum computing by big players like IBM, Microsoft and Google, and is also partially in response to China’s substantial financial commitment to quantum information science. These acts, and the infusion of cash, will supercharge efforts on quantum computing. But this comes with a real danger of creating a bubble. If there is too much hype, and if the expensive efforts under-deliver, then the bubble will burst, setting quantum computing back by decades. This has happened before, as in the telecom and fiber-optics bubble of Y2K that burst in 2001; the optics industry is still recovering from that crash nearly 20 years later. The quantum computing community will need to be very careful in managing expectations, while also making real strides on some very difficult and long-range problems.
This was part of what the discussion at the CLEO symposium centered on. Despite the charge by Eggleton to “be real” and avoid the hype, there was plenty of hype going around on the panel and plenty of optimism, tempered by caution. I admit that there is reason for cautious optimism. Jerry Chow showed IBM’s very real quantum computer (with a very small number of qubits) that can be accessed through the cloud by anyone. They even built a user interface to allow users to write their own quantum programs. Jungsang Kim of IonQ was equally optimistic, showing off their trapped-atom quantum computer with dozens of trapped ions acting as individual qubits. Admittedly, Chow and Kim have vested interests in their own products, but the technology is certainly impressive. One of the sharpest critics, Mikhail Lukin of Harvard, was surprisingly also one of the most optimistic. He made clear that talk of scalable quantum computers in the near future is nonsense. Yet he is part of a Harvard-MIT collaboration that has constructed a world-record 51-qubit array of trapped atoms. Although it cannot be used for quantum computing, it was used to simulate a complex many-body physics problem, and it found an answer that could not be calculated or predicted using conventional computers.
The panel did come to a general consensus about quantum computing that highlights the specific challenges that the field will face as it is called upon to deliver on its hyperbole. They each echoed an idea known as the “supremacy plot,” which is a two-axis graph of the number of qubits versus the number of operations (also called circuit depth). The graph has one region that is not interesting, one region that is downright laughable (at the moment), and one final area of great hope. The region of no interest lies in the range of large numbers of qubits but low numbers of operations, or large numbers of operations on a small number of qubits. Each of these extremes can easily be calculated on conventional computers and hence is of no practical interest. The region that is laughable is the area of large numbers of qubits and large numbers of operations. No one suggested that this area can be accessed in even the next 10 years. The region that everyone is eager to reach is the region of “quantum supremacy”. This consists of quantum computers that have enough qubits and enough operations that they cannot be simulated by classical computers. When asked where this region is, the panel consensus was that it would require more than 50 qubits and more than hundreds or thousands of operations. What makes this so exciting is that there are real technologies that are now approaching this region–and they are based on light.
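As a toy illustration of how the supremacy plot partitions the plane, the three regions can be sketched in a few lines of code. The thresholds here are my own illustrative numbers, loosely taken from the panel’s consensus (more than 50 qubits and hundreds of operations), not hard figures from anyone on stage:

```python
def supremacy_region(n_qubits: int, depth: int) -> str:
    """Classify a (qubits, circuit-depth) point on the 'supremacy plot'.

    Thresholds are illustrative only: roughly 50+ qubits AND a few
    hundred operations before classical simulation becomes infeasible,
    and an (arbitrary) far corner standing in for the 'laughable' region.
    """
    if n_qubits <= 50 or depth <= 300:
        # Either axis small: easily simulated classically, no interest.
        return "classically simulable"
    if n_qubits > 10_000 and depth > 100_000:
        # Both axes enormous: out of reach for the foreseeable future.
        return "far beyond near-term hardware"
    # Both axes past threshold, but not absurdly so: the region of hope.
    return "quantum supremacy"
```

The point of the exercise is that the interesting frontier is a band, not a corner: just past the classical-simulation limit on both axes at once.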
Chris Monroe’s Perfect Qubits
The second plenary session at CLEO featured the recent Nobel laureates Art Ashkin, Donna Strickland and Gerard Mourou, who won the 2018 Nobel prize in physics for laser applications. (Donna Strickland is only the third woman to win the Nobel prize in physics.) The warm-up band for these headliners was Chris Monroe, founder of the start-up company IonQ out of the University of Maryland. Monroe outlined the general layout of their quantum computer, which is based on trapped atoms that he called “perfect qubits”. Each trapped atom is literally an atomic clock, with all the exactness of timing that atomic clocks are known for. The quantum properties of these atoms are as perfect as is needed for any quantum computation, and the limits on the performance of the current IonQ system are entirely set by the classical controls that trap and manipulate the atoms. This is where the efforts of their rapidly growing R&D team are focused.
If trapped atoms are the perfect qubits, then the perfect quantum communication channel is the photon. The photon in vacuum is the quintessential messenger, propagating forever and interacting with nothing. This is why experimental cosmologists can see the photons originating from the Big Bang nearly 14 billion years ago (actually from about 380,000 years after the Big Bang, when the Universe became transparent). In a quantum computer based on trapped atoms as the gates, photons become the perfect wires.
On the quantum supremacy chart, Monroe plotted the two main quantum computing technologies: solid state (based mainly on superconductors but also some semiconductor technology) and trapped atoms. The challenges to solid state quantum computers come with the scale-up to the range of 50 qubits or more that will be needed to cross the frontier into quantum supremacy. The inhomogeneous nature of solid state fabrication, as perfected as it is for the transistor, is a central problem for a solid state solution to quantum computing. Furthermore, by scaling up the number of solid state qubits, it is extremely difficult to simultaneously increase the circuit depth. In fact, circuit depth is likely to decrease (initially) as the number of qubits rises because of the two-dimensional interconnect problem that is well known to circuit designers. Trapped atoms, on the other hand, have the advantages of the perfection of atomic clocks that can be globally interconnected through perfect photon channels, and scaling up the number of qubits can go together with increased circuit depth–at least in the view of Monroe, who admittedly has a vested interest. But he was speaking before an audience of several thousand highly-trained and highly-critical optics specialists, and no scientist in front of such an audience will make a claim that cannot be supported (although the reality is always in the caveats).
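The two-dimensional interconnect problem can be made concrete with a toy model (my own sketch, not Monroe’s). If qubits sit on a square grid and gates act only on nearest neighbors, then a two-qubit gate between distant qubits must first be preceded by a chain of SWAP operations, and the worst-case chain grows like the grid diameter, roughly the square root of the qubit count:

```python
import math

def swap_overhead(n_qubits: int, qubit_a: int, qubit_b: int) -> int:
    """Toy model: SWAPs needed to bring two qubits adjacent on a square
    nearest-neighbor grid (qubits indexed in row-major order).

    Assumes a sqrt(n) x sqrt(n) layout; a real router does better, but
    the worst case still scales with the grid diameter ~ sqrt(n).
    """
    side = math.isqrt(n_qubits)          # grid side length (square n assumed)
    ra, ca = divmod(qubit_a, side)       # row/column of each qubit
    rb, cb = divmod(qubit_b, side)
    manhattan = abs(ra - rb) + abs(ca - cb)
    return max(manhattan - 1, 0)         # adjacent qubits need no SWAPs

# Worst-case routing overhead (opposite corners) for growing grids:
for n in (16, 64, 256):
    print(n, swap_overhead(n, 0, n - 1))  # prints 5, 13, 29 swaps
```

Every SWAP is an operation spent on routing rather than computation, which is why naively adding qubits to a planar chip can actually shrink the useful circuit depth.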
The Future of Quantum Computing is Optics
The state of the art in the photonic control of light now rivals the sophistication of the electronic control of electrons in circuits. Each is driven by big-world applications: electronics by the consumer-electronics and computer market, and photonics by the telecom industry. Having a technology attached to a major world-wide market is a guarantee that progress is made relatively quickly, with the advantages of economy of scale. The commercial driver is profits, and the driver for funding agencies (who support quantum computing) is their mandate to foster competitive national economies that create jobs and improve standards of living.
The yearly CLEO conference is one of the top conferences in laser science in the world, drawing in thousands of laser scientists who are working on photonic control. Integrated optics is one of the current hot topics. It brings many of the resources of the electronics industry to bear on photonics. Solid state optics is mostly concerned with quantum properties of matter and its interaction with photons, and this year’s CLEO conference hosted many focused sessions on quantum sensors, quantum control, quantum information and quantum communication.
The level of external control of quantum systems is increasing at a spectacular rate. Sitting in the audience at CLEO, you get the sense that you are looking at the embryonic stages of vast new technologies that will be enlisted in the near future for quantum computing. The challenge is that there are so many variants that it is hard to know which of these nascent technologies will win and change the world. But the key to technological progress is diversity (as it is for society), because it is the interplay and cross-fertilization among the diverse technologies that drives each forward, and even technologies that recede still contribute to the advances of the winning technology.
The expert panel at CLEO on the future of quantum computing punctuated their moments of hype with moments of realism as they called for new technologies to solve some of the current barriers to quantum computers. Walking out of the panel discussion that night, and walking into one of the CLEO technical sessions the next day, you could almost connect the dots. The enabling technologies being requested by the panel are literally being built by the audience.
In the end, the panel had a surprisingly prosaic argument in favor of the current push to build a working quantum computer. It is an echo of the movie Field of Dreams, with the famous line “If you build it, they will come”. That was the plea made by Lukin, who argued that by putting quantum computers into the hands of users, the killer app that will drive the future economics of quantum computers is likely to emerge. You don’t really know what to do with a quantum computer until you have one.
Given the “perfect qubits” of trapped atoms, and the “perfect photons” of the communication channels, combined with the dizzying assortment of quantum control technologies being invented and highlighted at CLEO, it is easy to believe that the first large-scale quantum computers will be based on light.