Alice’s disturbing adventures in Wonderland tumbled upon her like a string of accidents as she wandered a world of chaos. Rules were never what they seemed and shifted whenever they wanted. She even met a cat who grinned ear-to-ear and could disappear entirely, or almost entirely, leaving only its grin.
The vanishing Cheshire Cat reminds us of another famous cat—Arnold’s Cat—that introduced the ideas of stretching and folding of phase-space volumes in non-integrable Hamiltonian systems. But when Arnold’s Cat becomes a Quantum Cat, a central question remains: What happens to the chaotic behavior of the classical system … does it survive the transition to quantum mechanics? The answer is surprisingly like the grin of the Cheshire Cat—the cat vanishes, but the grin remains. In the quantum world, the grin of the classical cat remains even after the rest of the cat has vanished.
The Cheshire Cat fades away, leaving only its grin, like a fine filament, as classical chaos fades into quantum, leaving behind a quantum scar.
The Quantum Mechanics of Classically Chaotic Systems
The simplest Hamiltonian systems are integrable—they have as many constants of the motion as degrees of freedom. This holds for quantum systems as well as for classical. There is also a strong correspondence between classical and quantum systems in the integrable cases—the Correspondence Principle—which states that quantum systems at high quantum number approach classical behavior. Even at low quantum numbers, classical resonances are mirrored by quantum eigenfrequencies that can show highly regular spectra.
But integrable systems are rare—surprisingly rare. Almost no real-world Hamiltonian system is integrable, because the real world warps the ideal. No spring can displace indefinitely, and no potential is perfectly quadratic. There are always real-world non-idealities that destroy one constant of the motion or another, opening the door to chaos.
When classical Hamiltonian systems become chaotic, they don’t do it suddenly. Almost all transitions to chaos in Hamiltonian systems are gradual. One of the best examples of this is KAM theory, which starts with invariant action integrals that generate invariant tori in phase space. As nonintegrable perturbations increase, the tori break up slowly into island chains of stability as chaos infiltrates the separatrices—first as thin filaments of chaos surrounding the islands—then growing in width to take up more and more of phase space. Even when chaos is fully developed, small islands of stability can remain—the remnants of stable orbits of the unperturbed system.
When the classical becomes quantum, chaos softens. Quantum wave functions don’t like to be confined—they spread and they tunnel. The separatrix of classical chaos—that barrier between regions of phase space—cannot constrain the exponential tails of wave functions. And the origin of chaos itself—the homoclinic point of the separatrix—gets washed out. Then the regular orbits of the classical system reassert themselves, and they appear, like the vestige of the Cheshire Cat, as a grin.
The Quantum Circus
The empty stadium is a surprisingly rich dynamical system that has unexpected structure in both the classical and the quantum domain. Its importance in classical dynamics comes from the fact that its periodic orbits are unstable and its non-periodic orbits are ergodic (filling all available space if given long enough). The stadium itself is empty so that particles (classical or quantum) are free to propagate between reflections from the perfectly-reflecting walls of the stadium. The ergodicity comes from the fact that the stadium—like a classic Roman chariot-race stadium, also known as a circus—is not a circle, but has a straight stretch between two half circles. This simple modification takes the stable orbits of the circle into the unstable orbits of the stadium.
A single classical orbit in a stadium is shown in Fig. 1. This is an ergodic orbit that is non-periodic and eventually would fill the entire stadium space. There are other orbits that are nearly periodic, such as one that bounces back and forth vertically between the linear portions, but even this orbit will eventually wander into the circular part of the stadium and then become ergodic. The big quantum-classical question is what happens to these classical orbits when the stadium is shrunk to the nanoscale?
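An orbit like the one in Fig. 1 can be generated with a short simulation. The sketch below is my own crude time-stepping version (parameter values, names, and the step size are arbitrary choices; an exact billiard code would instead solve each wall intersection analytically), but it is enough to watch a single orbit wander ergodically through the stadium.

```python
import numpy as np

# Stadium: straight walls at y = +/-R for |x| <= L/2, plus semicircular
# end caps of radius R centered at (+/-L/2, 0).
R, L, dt = 1.0, 2.0, 2e-3

def step(pos, vel):
    """Advance one small time step, reflecting specularly when leaving the stadium."""
    x, y = pos
    if abs(x) <= L / 2:
        if abs(y) >= R and vel[1] * np.sign(y) > 0:      # straight wall, moving outward
            vel = np.array([vel[0], -vel[1]])
    else:
        n = np.array([x - np.sign(x) * L / 2, y])        # radial direction from cap center
        r = np.linalg.norm(n)
        if r >= R and np.dot(vel, n) > 0:                # at the end cap, moving outward
            n = n / r
            vel = vel - 2 * np.dot(vel, n) * n           # specular reflection
    return pos + vel * dt, vel

pos, vel = np.array([0.1, 0.0]), np.array([np.cos(0.7), np.sin(0.7)])
traj = [pos]
for _ in range(50000):
    pos, vel = step(pos, vel)
    traj.append(pos)
traj = np.array(traj)
```

Plotting `traj` shows the orbit gradually covering the stadium, in contrast to the closed caustic-bounded orbits of a circular billiard.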
Simulating an evolving quantum wavefunction in free space is surprisingly simple. Given a beginning quantum wavefunction A(x, y, t0), each discrete time step advances the field according to the free-particle Schrödinger equation, iħ ∂A/∂t = −(ħ²/2m) ∇²A, discretized on the grid.
Perfect reflection from the boundaries of the stadium is incorporated by imposing a boundary condition that sets the wavefunction to zero at the walls. Simple!
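A minimal numerical sketch of this scheme (grid sizes, packet parameters, and the particular staggered leapfrog update are my own choices here, in units ħ = m = 1): split the wavefunction into real and imaginary parts, update them alternately, and zero the field outside the stadium at every step to impose the perfectly reflecting walls.

```python
import numpy as np

# Stadium mask on a grid (units hbar = m = 1; sizes are illustrative)
dx, dt = 0.05, 5e-4
R, L = 1.0, 2.0
x = (np.arange(120) - 60) * dx
y = (np.arange(60) - 30) * dx
X, Y = np.meshgrid(x, y, indexing="ij")
mask = ((np.abs(X) <= L / 2) & (np.abs(Y) <= R)) | \
       ((np.abs(X) > L / 2) & ((np.abs(X) - L / 2) ** 2 + Y ** 2 <= R ** 2))

def lap(f):
    """Five-point discrete Laplacian with a zero boundary ring."""
    out = np.zeros_like(f)
    out[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:]
                       + f[1:-1, :-2] - 4 * f[1:-1, 1:-1]) / dx ** 2
    return out

# Initial Gaussian packet with momentum kx = 15, clipped to the stadium
A = np.exp(-((X - 0.3) ** 2 + Y ** 2) / 0.05) * np.exp(1j * 15 * X) * mask

# Staggered real/imaginary leapfrog for i dA/dt = -(1/2) Laplacian(A)
Re, Im = A.real.copy(), A.imag.copy()
for _ in range(200):
    Re += dt * (-0.5 * lap(Im))        # dRe/dt = H Im
    Re[~mask] = 0.0                    # perfectly reflecting wall: A = 0 outside
    Im -= dt * (-0.5 * lap(Re))        # dIm/dt = -H Re
    Im[~mask] = 0.0
A = Re + 1j * Im
```

The staggered update keeps the evolution stable as long as the time step is small compared with the inverse of the largest grid kinetic energy.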
A snapshot of a wavefunction evolving in the stadium is shown in Fig. 2. To see a movie of the time evolution, see my YouTube episode.
The time average of the wavefunction after a long time has passed is shown in Fig. 3. Other than the horizontal nodal line down the center of the stadium, there is little discernible structure or symmetry. This is also true for the mean squared wavefunction shown in Fig. 4, although there is some structure that may be emerging in the semi-circular regions.
On the other hand, for special initial conditions that have a lot of symmetry, something remarkable happens. Fig. 5 shows several mean-squared results for special initial conditions. There is definite structure in these cases, which were given the somewhat ugly name “quantum scars” in the 1980s by Eric Heller, who was one of the first to study this phenomenon.
One can superpose highly-symmetric classical trajectories onto the figures, as shown in the bottom row. All of these classical orbits pass through a high-symmetry point, such as the center of the stadium (left image) or the focal point of the circular mirrors (other two images). The astonishing conclusion of this exercise is that the highly-symmetric periodic classical orbits remain behind as quantum scars—like the Cheshire Cat’s grin—when the system is in the quantum realm. The classical orbits that produce quantum scars have the important property of being periodic but unstable: a slight perturbation from the symmetric trajectory causes the orbit to eventually become ergodic (chaotic). These scars are regions of enhanced probability density, what might be termed “quantum trajectories”, but they do not show strong interference patterns.
It is important to make the distinction that it is also possible to construct special wavefunctions that are strictly periodic, such as a wave bouncing perfectly vertically between the straight portions. This leads to large-scale interference patterns that are not the same as the quantum scars.
Quantum Chaos versus Laser Speckle
In addition to the bouncing-wave cases that do not strictly produce quantum scars, there is another “neutral” phenomenon that produces interference patterns that look a lot like scars, but are simply the random addition of many plane waves with the same wavelength. A snapshot in time of one of these superpositions is shown in Fig. 6. To see how the waves add together, see the YouTube channel episode.
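A sketch of this construction (the wave count, wavenumber, and grid below are arbitrary choices of mine): sum many plane waves with a common wavenumber but random directions and phases, then look at the intensity. The result has the unit contrast characteristic of fully developed laser speckle.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 200, 40.0                          # number of waves; common wavenumber
x = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, x)

field = np.zeros_like(X, dtype=complex)
for _ in range(N):
    theta = rng.uniform(0, 2 * np.pi)     # random propagation direction
    phase = rng.uniform(0, 2 * np.pi)     # random phase
    field += np.exp(1j * (k * (X * np.cos(theta) + Y * np.sin(theta)) + phase))

intensity = np.abs(field) ** 2            # the speckle pattern
contrast = intensity.std() / intensity.mean()   # ~1 for fully developed speckle
```

Rendering `intensity` as an image reproduces the filamentary patterns that can be mistaken for quantum scars.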
Machine learning is characterized, more than by any other aspect, by the high dimensionality of the data spaces it seeks to find patterns in. Hence, one of the principal functions of machine learning is to reduce the dimensionality of the data to lower dimensions—a process known as dimensionality reduction.
There are two driving reasons to reduce the dimensionality of data:
First, typical dimensionalities faced by machine learning problems can be in the hundreds or thousands. Trying to visualize such high dimensions may sound mind expanding, but it really just means that a data problem may have hundreds or thousands of different data entries for a single event. Many, or even most, of those entries may not be independent, while many others may be pure noise, or at least not relevant to the pattern. Deep learning dimensionality reduction seeks to find the dependencies—many of them nonlinear and non-single-valued (non-invertible)—and to reject the noise channels.
Second, the geometry of high dimension is highly unintuitive. Many of the things we take for granted in our pitifully low dimension of 3 (or 4 if you include time) just don’t hold in high dimensions. For instance, in very high dimensions almost all random vectors in a hyperspace are orthogonal, and almost all random unit vectors in the hyperspace are equidistant. Even the topology of landscapes in high dimensions is unintuitive—there are far more mountain ridges than mountain peaks—with profound consequences for dynamical processes such as random walks (see my Blog on a Random Walk in 10 Dimensions). In fact, we owe our evolutionary existence to this effect! Therefore, deep dimensionality reduction is a way to bring complex data down to a dimensionality where our intuition can be applied to “explain” the data.
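These high-dimensional claims are easy to check numerically (the dimension and sample count below are arbitrary choices): random unit vectors in a thousand dimensions turn out to be nearly orthogonal, and nearly equidistant, to one another.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 100
V = rng.normal(size=(n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)     # n random unit vectors in R^d

iu = np.triu_indices(n, k=1)
cosines = (V @ V.T)[iu]                           # pairwise cos(angle)
dists = np.linalg.norm(V[iu[0]] - V[iu[1]], axis=1)

# Typical |cos| ~ 1/sqrt(d): almost every pair is nearly orthogonal,
# and almost every pair sits near the same distance sqrt(2).
```

Increasing `d` tightens both concentrations, which is exactly the unintuitive behavior described above.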
But what is a dimension? And can you find the “right” dimensionality when performing dimensionality reduction? Once again, our intuition struggles with these questions, as first discovered by a nineteenth-century German mathematician whose mind-expanding explorations of the essence of different types of infinity shattered the very concept of dimension.
Georg Cantor (1845 – 1918) was born in Russia, and the family moved to Germany while Cantor was still young. In 1863, he enrolled at the University of Berlin where he attended lectures by Weierstrass and Kronecker. He received his doctorate in 1867 and his Habilitation in 1869, moving into a faculty position at the University of Halle and remaining there for the rest of his career. Cantor published a paper early in 1872 on the question of whether the representation of an arbitrary function by a Fourier series is unique. He had found that even though the series might converge to a function almost everywhere, there surprisingly could still be an infinite number of points where the convergence failed. Originally, Cantor was interested in the behavior of functions at these points, but his interest soon shifted to the properties of the points themselves, which became his life’s work as he developed set theory and transfinite mathematics.
In 1878, in a letter to his friend Richard Dedekind, Cantor showed that there was a one-to-one correspondence between the real numbers and the points in any n-dimensional space. He was so surprised by his own result that he wrote to Dedekind “I see it, but I don’t believe it.” Previously, the ideas of a dimension, moving in a succession from one (a line) to two (a plane) to three (a volume) had been absolute. However, with his newly-discovered mapping, the solid concepts of dimension and dimensionality began to dissolve. This was just the beginning of a long history of altered concepts of dimension (see my Blog on the History of Fractals).
Mapping Two Dimensions to One
Cantor devised a simple mapping that is at once obvious and subtle. To take a specific example of mapping a plane to a line, one can consider the two coordinate values (x, y), both with a closed domain on [0,1]. Each can be expressed as a decimal fraction given by

x = 0.x1 x2 x3 x4 …
y = 0.y1 y2 y3 y4 …
These two values can be mapped to a single value through a simple “ping-pong” between the decimal digits as

t = 0.x1 y1 x2 y2 x3 y3 …
If x and y are irrational, then this presents a simple mapping of a pair of numbers (two-dimensional coordinates) to a single number. In this way, the special significance of two dimensions relative to one dimension is lost. In terms of set theory nomenclature, the cardinality of the two-dimensional R2 is the same as that of the one-dimensional R.
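At finite precision, the ping-pong map is only a few lines of code (the function name and digit count are my own; the true mathematical map of course acts on the full infinite decimal expansions):

```python
def interleave(x, y, ndigits=8):
    """Map (x, y) in [0, 1) to a single t in [0, 1) by alternating decimal digits."""
    xd = f"{x:.{ndigits}f}".split(".")[1]     # digits of x
    yd = f"{y:.{ndigits}f}".split(".")[1]     # digits of y
    return float("0." + "".join(a + b for a, b in zip(xd, yd)))
```

For example, `interleave(0.1234, 0.5678, ndigits=4)` gives 0.15263748, folding two coordinates into one number.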
Nonetheless, intuition about dimension still has its merits, and a subtle aspect of this mapping is that it contains discontinuities. These discontinuities are countably infinite, but they do disrupt any smooth transformation from 2D to 1D, which is where the concepts of intrinsic dimension are hidden. The topological dimension of the plane is clearly equal to 2, and that of the line is clearly equal to 1, determined by the dimensionality of the cuts required to separate the sets: a set of topological dimension D is separated by a cut of dimension D − 1. The area is separated by a D = 1 line, while the line is separated by a D = 0 point.
While the discontinuities help preserve the notions of dimension, and they are countably infinite (with the cardinality of the rational numbers), they pertain merely to the representation of numbers by decimals. As an example, the number a = 14159/10^5 has two equivalent decimal representations,

a = 0.141590000… = 0.141589999…
trailing either infinitely many 0’s or infinitely many 9’s. Despite the fact that these representations point to the same number, when one of them is used in the pair of the bijection of Fig. 1, it produces two distinctly different numbers t in R. Fortunately, there is a way to literally sweep this under the rug. Any time one retrieves a number that has repeating 0’s or 9’s, simply sweep it to the right by dividing by a power of 10, moving it into the region of trailing zeros of a different number, call it b, as in Fig. 2. The shifted version of a will not overlap with the number b, because b also ends in repeating 0’s or 9’s and so is swept farther to the right, and so on to infinity. None of these numbers overlap, each is distinct, and the mapping becomes what is known as a bijection with one-to-one correspondence.
When the peripatetic mathematician Giuseppe Peano learned of Cantor’s result for the mapping of 2D to 1D, he sought to demonstrate the correspondence geometrically, and he constructed a continuous curve that filled space, publishing the method in Sur une courbe, qui remplit toute une aire plane in 1890. The construction of Peano’s curve proceeds by taking a square and dividing it into 9 equal sub-squares. Lines connect the centers of each of the sub-squares. Then each sub-square is divided again into 9 sub-squares whose centers are all connected by lines. At this stage, the original pattern, repeated 9 times, is connected together by 8 links, forming a single curve. This process is repeated infinitely many times, resulting in a curve that passes through every point of the original square. In this way, a line is made to fill a plane. Where Cantor had proven abstractly that the cardinality of the real numbers was the same as the points in n-dimensional space, Peano created a specific example.
This was followed quickly by another construction, invented by David Hilbert in 1891, that divided the square into four instead of nine, simplifying the construction, but also showing that such constructions were easily generated. The space-filling curves of Peano and Hilbert have the extreme property that a one-dimensional curve approaches every point in a two-dimensional space. These curves have a topological dimension of 1 and a fractal dimension of 2.
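The finite iterates of Hilbert’s curve are easy to generate with the classic bit-twiddling index map (the function below follows the standard d2xy construction; variable names are conventional choices, not from the original papers):

```python
def d2xy(order, d):
    """Return the (x, y) cell visited at step d of the Hilbert curve
    filling a 2**order by 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Listing `d2xy(order, d)` for d = 0 … 4**order − 1 gives a path that visits every grid cell exactly once, with consecutive cells adjacent; as the order grows, the path approaches every point of the square.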
A One-Dimensional Deep Discrete Encoder for MNIST
When performing dimensionality reduction in deep learning it is tempting to think that there is an underlying geometry to the data—as if they reside on some submanifold of the original high-dimensional space. While this can be the case (for which dimensionality reduction is called deep metric learning), more often there is no such submanifold. For instance, when there is a highly conditional nature to the different data channels, in which some measurements are conditional on the values of other measurements, then there is no simple contiguous space that supports the data.
It is also tempting to think that a deep learning problem has some intrinsic number of degrees of freedom which should be matched by the selected latent dimensionality for the dimensionality reduction. For instance, if there are roughly five degrees of freedom buried within a complicated data set, then it is tempting to believe that the appropriate latent dimensionality also should be chosen to be five. But this also is missing the mark.
Take, for example, the famous MNIST data set of hand-written digits from 0 to 9. Each digit example is contained in a 28-by-28 two-dimensional array that is typically flattened to a 784-element linear vector that locates that single digit example within a 784-dimensional space. The goal is to reduce the dimensionality down to a manageable number—but what should the resulting latent dimensionality be? How many degrees of freedom are involved in writing digits? Furthermore, given that there are ten classes of digits, should the chosen dimensionality of the latent space be related to the number of classes?
To illustrate that the dimensionality of the latent space has little to do with degrees of freedom or the number of classes, let’s build a simple discrete encoder that encodes the MNIST data set to the one-dimensional line—following Cantor.
The deep network of the encoder can have a simple structure that terminates with a single neuron that has a piece-wise linear output. The objective function (or loss function) measures the squared distances of the outputs of the single neuron, after transformation by the network, to the associated values 0 to 9. And that is it! Train the network by minimizing the loss function.
The results of the linear encoder are shown in Fig. 6 (the transverse directions are only for ease of visualization…the classification is along the long axis). The different dots are specific digits, and the colors indicate the digit value. There is a clear trend from 0 through 9, although with this linear encoder there is substantial overlap among the point clouds.
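The linear version of this encoder fits in a few lines (random arrays stand in for MNIST below, so the numbers are illustrative only): a single linear neuron w·x + b whose output is pushed toward the integer class value by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.random((n, 784))                     # stand-in for flattened 28x28 images
labels = rng.integers(0, 10, n)              # digit classes 0..9

Ad = np.hstack([X, np.ones((n, 1))])         # design matrix with a bias column
w, *_ = np.linalg.lstsq(Ad, labels.astype(float), rcond=None)
latent = Ad @ w                              # one-dimensional latent coordinate
```

With real MNIST images in `X`, plotting `latent` colored by label reproduces the banded structure described above; replacing the single linear layer with a deep network separates the bands further.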
Despite appearances, this one-dimensional discrete line encoder is NOT a form of regression. There is no such thing as 1.5 as the average of a 1-digit and a 2-digit. And 5 is not the average of a 4-digit and a 6-digit. The digits are images that have no intrinsic numerical value. Therefore, this one-dimensional encoder is highly discontinuous, and intuitive ideas about intrinsic dimension for continuous data remain secure.
The purpose of this Blog (apart from introducing Cantor and his ideas on dimension theory) was to highlight a key question of dimensionality reduction in representation theory: What is the intrinsic dimensionality of a dataset? The answer, in the case of discrete classes, is that there is no intrinsic dimensionality. Having 1 degree of freedom or 10 degrees of freedom, i.e. latent dimensionalities of 1 or 10, is mostly irrelevant. In ideal cases, one is just as good as the other.
On the other hand, for real-world data with its inevitable variability and finite variance, there can be reasons to choose one latent dimensionality over another. In fact, the “best” choice of dimensionality is one less than the number of classes. For instance, in the case of MNIST with its 10 classes, that is a 9-dimensional latent space. The reason this is “best” has to do with the geometry of high-dimensional simplexes, which will be the topic of a future Blog.
By David D. Nolte, June 20, 2022
G. Peano, “Sur une courbe, qui remplit toute une aire plane,” Mathematische Annalen, vol. 36, pp. 157–160 (1890).
D. Hilbert, “Über die stetige Abbildung einer Linie auf ein Flächenstück,” Mathematische Annalen, vol. 38, pp. 459–460 (1891).
Harmonic oscillators are one of the fundamental elements of physical theory. They arise so often in so many different contexts that they can be viewed as a central paradigm that spans all aspects of physics. Some famous physicists have been quoted as saying that the entire universe is composed of simple harmonic oscillators (SHO).
Despite the physicist’s love affair with it, the SHO is pathological! First, it has infinite frequency degeneracy, which makes it vulnerable to the slightest perturbation tipping it into chaos, in contrast to non-harmonic cyclic dynamics that actually protect us from the chaos of the cosmos (see my Blog on Chaos in the Solar System). Second, the SHO is nowhere to be found in the classical world. Linear oscillators are purely harmonic, with a frequency that is independent of amplitude—but no such thing exists! All oscillators must be limited, or they could take on infinite amplitude and infinite speed, which is nonsense. Even the simplest of simple harmonic oscillators would be limited by nothing other than the speed of light. Relativistic effects would modify the linearity, especially through time dilation effects, rendering the harmonic oscillator anharmonic.
Despite the physicist’s love affair with it, the SHO is pathological!
Therefore, for students of physics as well as practitioners, it is important to break the shackles imposed by the SHO and embrace the anharmonic harmonic oscillator as the foundation of physics. Here is a brief survey of several famous anharmonic oscillators in the history of physics, followed by the mathematical analysis of the relativistic anharmonic linear-spring oscillator.
Anharmonic oscillators have a long and venerable history, with many varieties. Many of these have become central models in systems as varied as neural networks, synchronization, grandfather clocks, mechanical vibrations, business cycles, ecosystem populations and more.
Already by the mid-1600s Christiaan Huygens (1629 – 1695) knew that the pendulum becomes slower when it has larger amplitudes. The pendulum was one of the best candidates for constructing an accurate clock, needed for astronomical observations and for the determination of longitude at sea. Galileo (1564 – 1642) had devised the plans for a rudimentary pendulum clock that his son attempted to construct, but the first practical pendulum clock was invented and patented by Huygens in 1657. However, Huygens’ modified verge escapement required his pendulum to swing with large amplitudes, which brought it into the regime of anharmonicity. The equations of the simple pendulum are truly simple, but the presence of the sin θ term makes it the simplest anharmonic oscillator.
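Huygens’ observation is quantitative. The exact pendulum period for amplitude θ0 (in units where the small-amplitude period is 2π) is T = 4K(sin(θ0/2)), with K the complete elliptic integral of the first kind, which can be computed with nothing more than the arithmetic-geometric mean (function names below are my own):

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def pendulum_period(theta0):
    """Exact pendulum period for amplitude theta0, in units sqrt(length/g) = 1."""
    k = math.sin(theta0 / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))  # complete elliptic integral K(k)
    return 4 * K
```

`pendulum_period(0.001)` is essentially 2π, while a 90-degree swing already runs about 18% slow, which is why Huygens’ large-amplitude escapement was a problem for timekeeping.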
Therefore, Huygens searched for the mathematical form of a tautochrone curve for the pendulum (a curve that is traversed in equal times independently of amplitude), and in the process he invented the involutes and evolutes of a curve—precursors of the calculus. The answer to the tautochrone question is a cycloid (see my Blog on Huygens’ Tautochrone Curve).
Hermann von Helmholtz
Hermann von Helmholtz (1821 – 1894) was possibly the greatest German physicist of his generation—an Einstein before Einstein—although he began as a medical doctor. His study of muscle metabolism, drawing on the early thermodynamic work of Carnot, Clapeyron and Joule, led him to explore and to express the conservation of energy in its clearest form. Because he postulated that all forms of physical processes—electricity, magnetism, heat, light and mechanics—contributed to the interconversion of energy, he sought to explore them all, bringing his research into the mainstream of physics. His laboratory in Berlin became world famous, attracting early American physicists such as Henry Rowland (founder and first president of the American Physical Society) and Albert Michelson (first American Nobel prize winner).
Even the simplest of simple harmonic oscillators would be limited by nothing other than the speed of light.
Helmholtz also pursued a deep interest in the physics of sensory perception such as sound. This research led to his invention of the Helmholtz oscillator, a highly anharmonic relaxation oscillator in which a tuning fork was placed near an electromagnet that was powered by a mercury switch attached to the fork. As the tuning fork vibrated, the mercury came in and out of contact with the switch, turning the magnet on and off, which fed back on the tuning fork, and so on, enabling the device, once started, to continue oscillating without interruption. This device is called a tuning-fork resonator, and it became the basis of the first doorbell buzzers. (These are not to be confused with Helmholtz resonances, which are formed when blowing across the open neck of a beer bottle.)
John William Strutt, Lord Rayleigh (1842 – 1919), like Helmholtz, was a generalist with a strong interest in the physics of sound. He was inspired by Helmholtz’s oscillator to treat general nonlinear anharmonic oscillators mathematically, considering the effects of anharmonic terms added to the harmonic oscillator equation. In a paper published in the Philosophical Magazine in 1883 under the title On Maintained Vibrations, he introduced an equation to describe self-oscillation by adding an extra term to a simple harmonic oscillator. The extra term depended on the cube of the velocity, representing a balance between the gain of energy from a steady force and natural dissipation by friction. Rayleigh suggested that this equation applied to a wide range of self-oscillating systems, such as violin strings, clarinet reeds, finger glasses, flutes, organ pipes, among others (see my Blog on Rayleigh’s Harp).
The first systematic study of quadratic and cubic deviations from the harmonic potential was performed by the German engineer Georg Duffing (1861 – 1944) under the conditions of a harmonic drive. The Duffing equation incorporates inertia, damping, the linear spring and nonlinear deviations,

ẍ + δ ẋ + α x + β x³ = F cos(ωt)
Duffing confirmed his theoretical predictions with careful experiments and established the lowest-order corrections to ideal masses on springs. His work was rediscovered in the 1960s after Lorenz helped launch numerical chaos studies. Duffing’s driven potential becomes especially interesting when α is negative and β is positive, creating a double-well potential. The driven double-well is a classic chaotic system (see my Blog on Duffing’s Oscillator).
Balthasar van der Pol
Autonomous oscillators are one of the building blocks of complex systems, providing the fundamental elements for biological oscillators, neural networks, business cycles, population dynamics, viral epidemics, and even the rings of Saturn. The most famous autonomous oscillator (after the pendulum clock) is named for a Dutch physicist, Balthasar van der Pol (1889 – 1959), who discovered the laws that govern how electrons oscillate in vacuum tubes, but the dynamical system that he developed has expanded to become the new paradigm of cyclic dynamical systems to replace the SHO (see my Blog on Grandfather Clocks).
Turning from this general survey, let’s find out what happens when special relativity is added to the simplest SHO.
Relativistic Linear-Spring Oscillator
The theory of the relativistic one-dimensional linear-spring oscillator starts from a relativistic Lagrangian of a free particle (with no potential) yielding the generalized relativistic momentum

p = ∂L/∂v = γ m v
The Lagrangian that accomplishes this is

L = −mc²/γ = −mc² √(1 − v²/c²)
where the invariant 4-velocity is

u^a = dx^a/dτ = γ (c, v)
When the particle is in a potential, the Lagrangian becomes

L = −mc²/γ − U(x)
The action integral that is minimized is

S = ∫ L dt
and the Lagrangian for integration of the action integral over proper time is

L_τ = γ L = −mc² − γ U(x)
The relativistic modification in the potential energy term of the Lagrangian is not in the spring constant, but rather is purely a time dilation effect. This is captured by the relativistic Lagrangian

L = (m/2) ẋ_a ẋ^a − γ U(x),  with γ = ṫ = dt/dτ
where the dot is with respect to proper time τ. The classical potential energy term in the Lagrangian is multiplied by the relativistic factor γ, which is position dependent because of the non-constant speed of the oscillator mass. The Euler-Lagrange equations are

d/dτ (∂L/∂ẋ_a) − ∂L/∂x_a = 0
where the subscripts in the variables are a = 0, 1 for the time and space dimensions, respectively. The derivative of the time component of the 4-vector is

dx^0/dτ = c dt/dτ = γ c
From the derivative of the Lagrangian with respect to the time-component velocity, the following conserved result is derived,

mc² (dt/dτ) + U(x) = E

where E is the constant total relativistic energy. Therefore,

dt/dτ = (E − U(x)) / (mc²)

which provides an expression for the derivative of the coordinate time with respect to the proper time. For the linear spring, U(x) = k x²/2, and the position-dependent γ(x) factor is then

γ(x) = dt/dτ = (E − k x²/2) / (mc²)
The Euler-Lagrange equation with a = 1 is

d/dτ (m ẋ) = −(dt/dτ) dU/dx = −γ(x) k x
providing the flow equations for the (an)harmonic oscillator with respect to proper time

ẋ = p/m,    ṗ = −k x (E − k x²/2) / (mc²)
This flow represents a harmonic oscillator modified by the γ(x) factor, due to time dilation, multiplying the spring force term. Therefore, at relativistic speeds, the oscillator is no longer harmonic even though the spring constant remains truly a constant. The term in parentheses effectively softens the spring for larger displacement, and hence the frequency of oscillation becomes smaller.
The state-space diagram of the anharmonic oscillator is shown in Fig. 3 with respect to proper time (the time read on a clock co-moving with the oscillator mass). At low energy, the oscillator is harmonic with a natural period of the SHO. As the maximum speed exceeds β = 0.8, the period becomes longer and the trajectory less sinusoidal. The position and speed for β = 0.9999 is shown in Fig. 4. The mass travels near the speed of light as it passes the origin, producing significant time dilation at that instant. The average time dilation through a single cycle is about a factor of three, despite the large instantaneous γ = 70 when the mass passes the origin.
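The period lengthening is easy to check with a few lines of numerics. The sketch below works in coordinate (lab) time rather than the proper time of the figures, in units m = k = c = 1, where the exact equations are dx/dt = p/γ with γ = √(1 + p²) and dp/dt = −x (the spring force itself stays linear); the function names and step sizes are my own choices.

```python
import math

def flow(x, p):
    """Relativistic mass on a linear spring, units m = k = c = 1, coordinate time."""
    gamma = math.sqrt(1.0 + p * p)
    return p / gamma, -x            # dx/dt = v = p/gamma,  dp/dt = -k x

def period(x0, dt=1e-3, tmax=60.0):
    """Oscillation period from successive downward zero crossings of x."""
    x, p, t = x0, 0.0, 0.0
    crossings = []
    while t < tmax:
        k1x, k1p = flow(x, p)                                  # classical RK4 step
        k2x, k2p = flow(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p)
        k3x, k3p = flow(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p)
        k4x, k4p = flow(x + dt * k3x, p + dt * k3p)
        xn = x + (dt / 6) * (k1x + 2 * k2x + 2 * k3x + k4x)
        pn = p + (dt / 6) * (k1p + 2 * k2p + 2 * k3p + k4p)
        if x > 0 >= xn:
            crossings.append(t)
            if len(crossings) == 2:
                return crossings[1] - crossings[0]
        x, p, t = xn, pn, t + dt
    return None
```

Small amplitudes return the harmonic value 2π, while large amplitudes, whose peak speeds approach the speed of light, give much longer lab-frame periods.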
By David D. Nolte, May 29, 2022
W. Moreau, R. Easther, and R. Neutze, “Relativistic (an)harmonic oscillator,” American Journal of Physics, vol. 62, no. 6, pp. 531–535 (1994).
In one interpretation of quantum physics, when you snap your fingers, the trajectory you are riding through reality fragments into a cascade of alternative universes—one for each possible quantum outcome among all the different quantum states composing the molecules of your fingers.
This is the Many-Worlds Interpretation (MWI) of quantum physics first proposed rigorously by Hugh Everett in his doctoral thesis in 1957 under the supervision of John Wheeler at Princeton University. Everett had been drawn to this interpretation when he found inconsistencies between quantum physics and gravitation—topics which were supposed to have been his actual thesis topic. But his side-trip into quantum philosophy turned out to be a one-way trip. The reception of his theory was so hostile, no less than from Copenhagen and Bohr himself, that Everett left physics and spent a career at the Pentagon.
Resurrecting MWI in the Name of Quantum Information
Fast forward by 20 years, after Wheeler had left Princeton for the University of Texas at Austin, and once again a young physicist was struggling to reconcile quantum physics with gravity. Once again the many worlds interpretation of quantum physics seemed the only sane way out of the dilemma, and once again a side-trip became a life-long obsession.
David Deutsch, visiting Wheeler in the early 1980’s, became convinced that the many worlds interpretation of quantum physics held the key to paradoxes in the theory of quantum information (For the full story of Wheeler, Everett and Deutsch, see Ref ). He was so convinced, that he began a quest to find a physical system that operated on more information than could be present in one universe at a time. If such a physical system existed, it would be because streams of information from more than one universe were coming together and combining in a way that allowed one of the universes to “borrow” the information from the other.
It took only a year or two before Deutsch found what he was looking for—a simple quantum algorithm that yielded twice as much information as would be possible if there were no parallel universes. This is the now-famous Deutsch algorithm—the first quantum algorithm. At the heart of the Deutsch algorithm is a simple quantum interference. The algorithm did nothing useful—but it convinced Deutsch that two universes were interfering coherently in the measurement process, giving that extra bit of information that should not have been there otherwise. A few years later, the Deutsch-Jozsa algorithm expanded the argument to interfere an exponentially larger number of information streams from an exponentially larger number of universes to create a result that was exponentially larger than any classical computer could produce. This marked the beginning of the quest for the quantum computer that is running red-hot today.
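The Deutsch algorithm is small enough to simulate directly with state vectors (a plain linear-algebra sketch; the qubit ordering and names below are my own conventions). One oracle call decides whether a one-bit function f is constant or balanced, a question that classically requires two evaluations:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate
I2 = np.eye(2)

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced with one oracle use."""
    # Oracle U_f |x, y> = |x, y XOR f(x)>, basis index = 2*x + y
    Uf = np.zeros((4, 4))
    for xbit in (0, 1):
        for ybit in (0, 1):
            Uf[2 * xbit + (ybit ^ f(xbit)), 2 * xbit + ybit] = 1.0
    state = np.kron([1.0, 0.0], [0.0, 1.0])               # start in |0>|1>
    state = np.kron(H, H) @ state                          # Hadamards on both qubits
    state = Uf @ state                                     # the single oracle call
    state = np.kron(H, I2) @ state                         # Hadamard on the first qubit
    p0 = state[0] ** 2 + state[1] ** 2                     # P(first qubit reads 0)
    return "constant" if p0 > 0.5 else "balanced"
```

`deutsch(lambda x: 0)` and `deutsch(lambda x: 1)` return "constant", while the identity and NOT functions return "balanced"; the answer emerges from the interference of the two evaluation paths.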
Deutsch’s “proof” of the many-worlds interpretation of quantum mechanics is not a mathematical proof but a philosophical one. It holds no sway over how physicists do the math to make their predictions. The Copenhagen interpretation, with its “spooky” instantaneous wavefunction collapse, works just fine for predicting the outcome of quantum algorithms and the exponential advantage of quantum computing. Therefore, the story of David Deutsch and the MWI may seem like a chimera—except for one fact—it inspired him to create the first quantum algorithm, launching what may be the next phase of the information revolution of modern society. Inspiration is important in science, because it lets scientists create things that had been impossible before.
But if quantum interference is the heart of quantum computing, then there is one physical system that has the ultimate simplicity that may yet inspire future generations of physicists to invent future impossible things—the quantum beam splitter. Nothing in the study of quantum interference can be simpler than a sliver of dielectric material sending single photons one way or another. Yet the outcome of this simple system challenges the mind and reminds us of why Everett and Deutsch embraced the MWI in the first place.
The Classical Beam Splitter
The so-called “beam splitter” is actually a misnomer. Its name implies that it takes a light beam and splits it into two, as if there is only one input. But every “beam splitter” has two inputs, which is clear by looking at the classical 50/50 beam splitter. The actual action of the optical element is the combination of beams into superpositions in each of the outputs. It is only when one of the input fields is zero, a special case, that the optical element acts as a beam splitter. In general, it is a beam combiner.
Given two input fields, the output fields are superpositions of the inputs
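In one common phase convention, with a factor of i on each reflected field, these superpositions read:

```latex
E_c = \frac{1}{\sqrt{2}}\left( E_a + i E_b \right), \qquad
E_d = \frac{1}{\sqrt{2}}\left( i E_a + E_b \right)
```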
The square-root of two factor ensures that energy is conserved, because optical fluence is the square of the fields. This relation is expressed more succinctly as a matrix input-output relation
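With the reflected fields carrying a factor of i, the matrix relation is:

```latex
\begin{pmatrix} E_c \\ E_d \end{pmatrix}
= \frac{1}{\sqrt{2}}
\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}
\begin{pmatrix} E_a \\ E_b \end{pmatrix}
```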
The phase factors in these equations ensure that the matrix is unitary
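For the symmetric 50/50 matrix with i on reflection, unitarity follows directly:

```latex
U^\dagger U = \frac{1}{2}
\begin{pmatrix} 1 & -i \\ -i & 1 \end{pmatrix}
\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
```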
reflecting energy conservation.
The Quantum Beam Splitter
A quantum beam splitter is just a classical beam splitter operating at the level of individual photons. Rather than describing single photons entering or leaving the beam splitter, it is more practical to describe the properties of the fields through single-photon quantum operators
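In one common phase convention (a factor of i on reflection), the operator version of the input-output relation is:

```latex
\begin{pmatrix} \hat{c} \\ \hat{d} \end{pmatrix}
= \frac{1}{\sqrt{2}}
\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}
\begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix}
```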
where the unitary matrix is the same as the classical case, but with fields replaced by the famous “a” operators. The photon operators operate on single photon modes. For instance, the two one-photon input cases are
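For a single photon entering port a or port b, these are:

```latex
|1,0\rangle = \hat{a}^\dagger\, |0\rangle_a |0\rangle_b, \qquad
|0,1\rangle = \hat{b}^\dagger\, |0\rangle_a |0\rangle_b
```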
where the creation operators operate on the vacuum state in each of the input modes.
The fundamental combinational properties of the beam splitter are even more evident in the quantum case, because there is no such thing as a single input to a quantum beam splitter. Even if no photons are directed into one of the input ports, that port still receives a “vacuum” input, and this vacuum input contributes to the fluctuations observed in the outputs.
The input-output relations for the quantum beam splitter are
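Written for the creation operators, with i on reflection, each input mode maps to a superposition of the output modes:

```latex
\hat{a}^\dagger = \frac{1}{\sqrt{2}}\left( \hat{c}^\dagger + i\, \hat{d}^\dagger \right), \qquad
\hat{b}^\dagger = \frac{1}{\sqrt{2}}\left( i\, \hat{c}^\dagger + \hat{d}^\dagger \right)
```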
The beam splitter operating on a one-photon input converts the input-mode creation operator into a superposition of out-mode creation operators that generates
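With the reflected mode carrying a factor of i, a single photon in port a produces:

```latex
\hat{a}^\dagger |0,0\rangle \;\rightarrow\;
\frac{1}{\sqrt{2}}\left( \hat{c}^\dagger + i\, \hat{d}^\dagger \right)|0,0\rangle
= \frac{1}{\sqrt{2}}\left( |1,0\rangle + i\, |0,1\rangle \right)
```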
The resulting output is entangled: either the single photon exits one port, or it exits the other. In the many worlds interpretation, the photon exits from one port in one universe, and it exits from the other port in a different universe. On the other hand, in the Copenhagen interpretation, the two output ports of the beam splitter are perfectly anti-correlated.
The Hong-Ou-Mandel (HOM) Interferometer
When more than one photon is incident on a beam splitter, the fascinating effects of quantum interference come into play, creating unexpected outputs for simple inputs. For instance, the simplest example is a two photon input where a single photon is present in each input port of the beam splitter. The input state is represented with single creation operators operating on each vacuum state of each input port
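The two-photon input state is:

```latex
|1,1\rangle = \hat{a}^\dagger\, \hat{b}^\dagger\, |0\rangle_a |0\rangle_b
```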
creating a single photon in each of the input ports. The beam splitter operates on this input state by converting the input-mode creation operators into output-mode creation operators to give
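In the i-on-reflection phase convention, the three steps are:

```latex
\begin{aligned}
\hat{a}^\dagger \hat{b}^\dagger |0,0\rangle
&\;\rightarrow\; \frac{1}{2}\left( \hat{c}^\dagger + i\,\hat{d}^\dagger \right)\left( i\,\hat{c}^\dagger + \hat{d}^\dagger \right)|0,0\rangle \\
&= \frac{1}{2}\left( i\,\hat{c}^{\dagger 2} + \hat{c}^\dagger \hat{d}^\dagger - \hat{c}^\dagger \hat{d}^\dagger + i\,\hat{d}^{\dagger 2} \right)|0,0\rangle \\
&= \frac{i}{\sqrt{2}}\left( |2,0\rangle + |0,2\rangle \right)
\end{aligned}
```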
The important step in this process is the middle line of the equations: There is perfect destructive interference between the two single-photon operations. Therefore, both photons always exit the beam splitter from the same port—never split. Furthermore, the output is an entangled two-photon state, once more splitting universes.
The two-photon interference experiment was performed in 1987 by Chung Ki Hong and Jeff Ou, students of Leonard Mandel at the Institute of Optics at the University of Rochester, and this two-photon operation of the beam splitter is now called the HOM interferometer. The HOM interferometer has become a centerpiece for optical and photonic implementations of quantum information processing and quantum computers.
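A few lines of Python can verify the HOM cancellation numerically. This is a sketch that expands the transformed creation-operator product symbolically; the i-on-reflection phase convention is an assumption here:

```python
import numpy as np
from math import factorial, sqrt

# Beam-splitter map on creation operators, i-on-reflection convention:
#   a† -> (c† + i d†)/√2 ,   b† -> (i c† + d†)/√2
# Represent an operator polynomial as {(j, k): coeff} for (c†)^j (d†)^k.
a_dag = {(1, 0): 1 / np.sqrt(2), (0, 1): 1j / np.sqrt(2)}
b_dag = {(1, 0): 1j / np.sqrt(2), (0, 1): 1 / np.sqrt(2)}

def multiply(p, q):
    """Product of two polynomials in the commuting operators c†, d†."""
    out = {}
    for (j1, k1), c1 in p.items():
        for (j2, k2), c2 in q.items():
            key = (j1 + j2, k1 + k2)
            out[key] = out.get(key, 0) + c1 * c2
    return out

# Input |1,1> = a† b† |0,0>, transformed through the beam splitter:
poly = multiply(a_dag, b_dag)

# Fock-state amplitudes: (c†)^j (d†)^k |0,0> = sqrt(j! k!) |j,k>
amps = {jk: c * sqrt(factorial(jk[0]) * factorial(jk[1]))
        for jk, c in poly.items()}

print(amps)  # the |1,1> amplitude cancels: photons bunch into |2,0> or |0,2>
```

Running this shows the |1,1⟩ coefficient canceling identically, while |2,0⟩ and |0,2⟩ each carry amplitude i/√2, exactly the bunching described above.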
N-Photons on a Beam Splitter
Of course, any number of photons can be input into a beam splitter. For example, take the N-photon input state
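With all N photons entering port a, the input state is:

```latex
|N,0\rangle = \frac{1}{\sqrt{N!}} \left( \hat{a}^\dagger \right)^{N} |0\rangle_a |0\rangle_b
```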
The beam splitter acting on this state produces
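With each input photon mapped to a superposition of the output modes (i on reflection), this is:

```latex
|N,0\rangle \;\rightarrow\; \frac{1}{\sqrt{2^{N}\, N!}} \left( \hat{c}^\dagger + i\, \hat{d}^\dagger \right)^{N} |0,0\rangle
```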
The quantity on the right hand side can be re-expressed using the binomial theorem
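The binomial expansion of the operator power is:

```latex
\left( \hat{c}^\dagger + i\, \hat{d}^\dagger \right)^{N}
= \sum_{k=0}^{N} \binom{N}{k}\, i^{k} \left( \hat{c}^\dagger \right)^{N-k} \left( \hat{d}^\dagger \right)^{k}
```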
where the permutations are defined by the binomial coefficient
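The binomial coefficient counts the permutations:

```latex
\binom{N}{k} = \frac{N!}{k!\,(N-k)!}
```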
The output state is given by
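Using the Fock-state normalization (ĉ†)ᴺ⁻ᵏ(d̂†)ᵏ|0,0⟩ = √((N−k)! k!) |N−k, k⟩, the output state becomes:

```latex
|\psi_{\mathrm{out}}\rangle = \frac{1}{\sqrt{2^{N}}} \sum_{k=0}^{N} i^{k} \sqrt{\binom{N}{k}}\; |N-k,\, k\rangle
```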
which is a “super” entangled state composed of N + 1 multi-photon states, involving N + 1 different universes.
Coherent States on a Quantum Beam Splitter
Surprisingly, there is a multi-photon input state that generates a non-entangled output—as if the input states were simply classical fields. These are the so-called coherent states, introduced by Glauber and Sudarshan [5, 6]. Coherent states can be described as superpositions of multi-photon states, but when a beam splitter operates on these superpositions, the outputs are simply 50/50 mixtures of the states. For instance, if the input coherent states are denoted by α and β, then the output states after the beam splitter are
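In the same phase convention, the coherent-state amplitudes simply mix like classical fields:

```latex
|\alpha\rangle_a\, |\beta\rangle_b \;\rightarrow\;
\left| \tfrac{\alpha + i\beta}{\sqrt{2}} \right\rangle_c
\left| \tfrac{i\alpha + \beta}{\sqrt{2}} \right\rangle_d
```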
This output is factorized and hence is NOT entangled. This is one of the many reasons why coherent states in quantum optics are considered the “most classical” of quantum states. In this case, a quantum beam splitter operates on the inputs just as if they were classical fields.
D. Deutsch, “Quantum theory, the Church-Turing principle and the universal quantum computer,” Proceedings of the Royal Society of London Series A: Mathematical, Physical and Engineering Sciences, vol. 400, no. 1818, pp. 97-117, (1985)
D. Deutsch and R. Jozsa, “Rapid solution of problems by quantum computation,” Proceedings of the Royal Society of London Series A: Mathematical, Physical and Engineering Sciences, vol. 439, no. 1907, pp. 553-558, Dec (1992)
C. K. Hong, Z. Y. Ou, and L. Mandel, “Measurement of subpicosecond time intervals between two photons by interference,” Physical Review Letters, vol. 59, no. 18, pp. 2044-2046, Nov (1987)
R. J. Glauber, “Photon correlations,” Physical Review Letters, vol. 10, no. 3, p. 84, (1963)
E. C. G. Sudarshan, “Equivalence of semiclassical and quantum mechanical descriptions of statistical light beams,” Physical Review Letters, vol. 10, no. 7, p. 277, (1963); C. L. Mehta and E. C. G. Sudarshan, “Relation between quantum and semiclassical description of optical coherence,” Physical Review, vol. 138, no. 1B, p. B274, (1965)
The mysteries of human intelligence are among the final frontiers of science. Despite our pride in what science has achieved across the past century, we have stalled when it comes to understanding intelligence or emulating it. The best we have done so far is through machine learning — harnessing the computational power of computers to begin to mimic the mind, attempting to answer the essential question:
How do we get machines to Know what we Know?
In modern machine learning, the answer is algorithmic.
In post-modern machine learning, the answer is manifestation.
The algorithms of modern machine learning are cause and effect, rules to follow, producing only what the programmer can imagine. But post-modern machine learning has blown past explicit algorithms to embrace deep networks. Deep networks today are defined by neural networks with thousands, or tens of thousands, or even hundreds of thousands, of neurons arrayed in multiple layers of dense connections. The interactivity of so many crossing streams of information defies direct deconstruction of what the networks are doing — they are post-modern. Their outputs manifest themselves, self-assembling into simplified structures and patterns and dependencies that are otherwise buried unseen in complicated data.
Deep learning emerged as recently as 2006 and has opened wide new avenues of artificial intelligence that move beyond human capabilities for some tasks. But deep learning also has pitfalls, some of which are inherited from the legacy approaches of traditional machine learning, and some of which are inherent in the massively high-dimensional spaces in which deep learning takes place. Nonetheless, deep learning has revolutionized many aspects of science, and there is reason for optimism that the revolution will continue. Fifty years from now, looking back, we may recognize this as the fifth derivative of the industrial revolution (Phase I: Steam. Phase II: Electricity. Phase III: Automation. Phase IV: Information. Phase V: Intelligence).
Conventional machine learning, as we know it today, has had many names. It began with Multivariate Analysis of mathematical population dynamics around the turn of the last century, pioneered by Francis Galton (1874), Karl Pearson (1901), Charles Spearman (1904) and Ronald Fisher (1922) among others.
The first on-line computers during World War II were developed to quickly calculate the trajectories of enemy aircraft for gunnery control, introducing the idea of feedback control of machines. This was named Cybernetics by Norbert Wiener, who had participated in the development of automated control of antiaircraft guns.
A decade later, during the Cold War, it became necessary to find hidden objects in large numbers of photographs. The embryonic electronic digital computers of the day were far too slow with far too little memory to do the task, so the Navy contracted with the Cornell Aeronautical Laboratory in Cheektowaga, New York, a suburb of Buffalo, to create an analog computer capable of real-time image analysis. This led to the invention of the Perceptron by Frank Rosenblatt as the first neural network-inspired computer, building on ideas of neural logic developed by Warren McCulloch and Walter Pitts.
Several decades passed with fits and starts as neural networks remained too simple to accomplish anything profound. Then in 1986, David Rumelhart and Ronald Williams at UC San Diego with Geoff Hinton at Carnegie-Mellon discovered a way to train multiple layers of neurons, in a process called error backpropagation. This publication opened the floodgates of Connectionism — also known as Parallel Distributed Processing. The late 80s and much of the 90s saw an expansion of interest in neural networks, until the increasing size of the networks ran into limits caused by the processing speed and capacity of conventional computers towards the end of the decade. By this time it had become clear that the most interesting computations required many layers of many neurons, and the number of neurons expanded into the thousands. But it was difficult to train such systems, which had tens of thousands of adjustable parameters, and research in neural networks once again went into a lull.
The beginnings of deep learning came with two breakthroughs. The first was by Yann LeCun at Bell Labs in 1998, who developed, with Léon Bottou, Yoshua Bengio and Patrick Haffner, a convolutional neural network with seven layers of neurons that classified hand-written digits. The second came from Geoff Hinton in 2006, by then at the University of Toronto, who discovered a fast learning algorithm for training deep layers of neurons. By the mid 2010s, research on neural networks was hotter than ever, propelled in part by several very public successes, such as DeepMind’s machine that beat the best player in the world at Go in 2017, self-driving cars, personal assistants like Siri and Alexa, and YouTube recommendations.
The Challenges of Deep Learning
Deep learning today is characterized by neural network architectures composed of many layers of many neurons. The nature of deep learning brings with it two main challenges: 1) efficient training of the neural weights, and 2) generalization of trained networks to perform accurately on previously unseen data inputs.
Solutions to the first challenge, efficient training, are what allowed the deep revolution in the first place—the result of a combination of increasing computer power with improvements in numerical optimization. This included faster personal computers that allowed nonspecialists to work with deep network programming environments like Matlab’s Deep Learning toolbox and Python’s TensorFlow.
Solutions to the second challenge, generalization, rely heavily on a process known as “regularization”. The term “regularization” has a slippery definition, an obscure history, and an awkward sound to it. Regularization is the noun form of the verb “to regularize” or “to make regular”. Originally, regularization was used to keep certain inverse algorithms from blowing up, like inverse convolutions, also known as deconvolution. Direct convolution is a simple mathematical procedure that “blurs” ideal data based on the natural response of a measuring system. However, if one has experimental data, one might want to deconvolve the system response from the data to recover the ideal data. But this procedure is numerically unstable and can “blow up”, often because of the divide-by-zero problem. Regularization was a simple technique that kept denominators from going to zero.
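As a sketch of this idea (the signal and blur kernel below are hypothetical, chosen for illustration), a Tikhonov-style regularized deconvolution keeps the inverse filter finite where the transfer function goes to zero:

```python
import numpy as np

n = 256
x = np.linspace(0, 1, n, endpoint=False)

# "Ideal" data: two sharp peaks
ideal = np.exp(-((x - 0.3) / 0.05)**2) + 0.5 * np.exp(-((x - 0.7) / 0.03)**2)

# System response: a narrow Gaussian blur kernel, centered on index 0
# for circular convolution via the FFT
kernel = np.exp(-((x - 0.5) / 0.02)**2)
kernel /= kernel.sum()
kernel = np.roll(kernel, n // 2)

H = np.fft.fft(kernel)                    # transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(ideal) * H))

# Naive deconvolution F/H blows up where |H| -> 0 (divide-by-zero problem).
# Regularization adds a small constant to the denominator instead:
eps = 1e-6
G = np.conj(H) / (np.abs(H)**2 + eps)     # regularized inverse filter
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * G))

err_blurred = np.mean((blurred - ideal)**2)
err_recovered = np.mean((recovered - ideal)**2)
print(err_blurred, err_recovered)
```

The regularized filter behaves like 1/H wherever |H|² dominates eps, and smoothly rolls off to zero where it doesn't, which is exactly the "keep the denominator away from zero" trick described above.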
Regularization became a common method for inverse problems, which are notoriously difficult to solve because of the many-to-one mapping that can occur in measurement systems. There can be a number of possible causes that produce a single response. Regularization was a way of winnowing out “unphysical” solutions so that the physical inverse solution remained.
During the same time, regularization became a common tool used by quantum field theorists to prevent certain calculated quantities from diverging to infinity. The solution was again to keep denominators from going to zero by setting physical cut-off lengths on the calculations. These cut-offs were initially ad hoc, but the development of renormalization group theory by Kenneth Wilson at Cornell (Nobel Prize in 1982) provided a systematic approach to solving the infinities of quantum field theory.
With the advent of neural networks, having hundreds to thousands to millions of adjustable parameters, regularization became the catch-all term for fighting the problem of over-fitting. Over-fitting occurs when there are so many adjustable parameters that any training data can be fit, and the neural network becomes just a look-up table. Look-up tables are the ultimate hash code, but they have no predictive capability. If a slightly different set of data are fed into the network, the output can be anything. In over-fitting there is no generalization: the network simply learns the idiosyncrasies of the training data without “learning” the deeper trends or patterns that would allow it to generalize to handle different inputs.
Over the past decades a wide collection of techniques have been developed to reduce over-fitting of neural networks. These techniques include early stopping, k-fold holdout, drop-out, L1 and L2 weight-constraint regularization, as well as physics-based constraints. The goal of all of these techniques is to keep neural nets from creating look-up tables and instead learning the deep codependencies that exist within complicated data.
Table II. Regularization Techniques in Machine Learning
By judicious application of these techniques, combined with appropriate choices of network design, deep networks can solve amazingly complex problems and generalize too (to some degree). As the field moves forward, we may expect additional regularization tricks to improve generalization, and design principles will emerge so that the networks no longer need to be constructed by trial and error.
The Potential of Deep Learning
In conventional machine learning, one of the most critical first steps performed on a dataset has been feature extraction. This step is complicated and difficult, especially when the signal is buried either in noise or in confounding factors (also known as distractors). The analysis is often highly sensitive to the choice of features, and the selected features may not even be the right ones, leading to bad generalization. In deep learning, feature extraction disappears into the net itself. Optimizing the neural weights subject to appropriate constraints forces the network to find where the relevant information lies and what to do with it.
The key to finding the right information was not just having many neurons, but having many layers, which is where the potential of deep learning emerges. It is as if each successive layer is learning a more abstract or more generalized form of the information than the last. This hierarchical layering is most evident in the construction of convolutional deep networks, where the layers are receiving a telescoping succession of information fields from lower levels. Geoff Hinton’s Deep Belief Network, which launched the deep revolution in 2006, worked explicitly with this hierarchy in mind through direct design of the network architecture. Since then, network architecture has become more generalized, with less up-front design while relying on the increasingly sophisticated optimization techniques of training to set the neural weights. For instance, a simplified instance of a deep network is shown in Fig. 4 with three hidden layers of neurons.
The mathematical structure of a deep network is surprisingly simple. The equations for the network in Fig. 4, that convert an input xa to an output ye, are
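With hidden-layer activations written here as u, v, and w (names chosen for illustration, since the labels in Fig. 4 are not reproduced), the three-hidden-layer cascade takes the form:

```latex
\begin{aligned}
u^{b} &= \varphi\!\left( A^{b}_{\ a}\, x^{a} \right) \\
v^{c} &= \varphi\!\left( B^{c}_{\ b}\, u^{b} \right) \\
w^{d} &= \varphi\!\left( C^{d}_{\ c}\, v^{c} \right) \\
y^{e} &= \varphi\!\left( D^{e}_{\ d}\, w^{d} \right)
\end{aligned}
```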
These equations use index notation to denote vectors (single superscript) and matrices (double indexes). The repeated index (one up and one down) denotes an implicit “Einstein” summation. The function φ(.) is known as the activation function, which is nonlinear. One of the simplest activation functions to use and analyze, and the current favorite, is the ReLU (rectified linear unit). Note that these equations represent a simple neural cascade, as the output of one layer becomes the input for the next.
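A minimal sketch of such a cascade in Python with NumPy; the layer widths are chosen to match the parameter counts quoted for the toy network of Fig. 4, and the weights are random rather than trained:

```python
import numpy as np

def relu(z):
    """ReLU activation: phi(z) = max(z, 0)."""
    return np.maximum(z, 0.0)

# Toy network: 5 inputs, three hidden layers of 6 neurons, 5 outputs.
# Weight matrices have 6*5 + 6*6 + 6*6 + 5*6 = 132 entries; one bias per
# neuron (6 + 6 + 6 + 5 = 23) brings the total to 155 adjustable parameters.
sizes = [5, 6, 6, 6, 5]
rng = np.random.default_rng(42)
weights = [rng.normal(scale=0.5, size=(m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(scale=0.1, size=m) for m in sizes[1:]]

def forward(x):
    """Cascade the input through each layer: x -> phi(W x + b)."""
    for W, b in zip(weights, biases):
        x = relu(W @ x + b)
    return x

y = forward(np.ones(5))
print(y)
```

Each layer's output feeds the next layer's input, exactly the neural cascade described by the index-notation equations.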
The training of all the matrix elements assumes a surprisingly simple optimization function, known as an objective function or a loss function, that can look like
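A typical form, with a mean-squared-error term plus an L2 penalty weighted by a hyperparameter λ:

```latex
L = \sum_{e} \left( y^{e} - y_{0}^{e} \right)^{2}
\;+\; \lambda \sum_{\mathrm{weights}} \left( W^{b}_{\ a} \right)^{2}
```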
where the first term is the mean squared error of the network output ye relative to the desired output y0 for the training set, and the second term, known as a regularization term (see section above) is a quadratic form that keeps the weights from blowing up. This loss function is minimized over the set of adjustable matrix weights.
The network in Fig. 4 is just a toy, with only 5 inputs and 5 outputs and only 23 neurons. But it has 30+36+36+30+23 = 155 adjustable parameters. This may seem like overkill, but it is nothing compared to neural networks with thousands of neurons per layer and tens of layers. That massive overkill is exactly the power of deep learning — as well as its pitfall.
The Pitfalls of Deep Learning
Despite the impressive advances in deep learning, serious pitfalls remain for practitioners. One of the most challenging problems in deep learning is the saddle-point problem. A saddle-point in an objective function is like a mountain pass through the mountains: at the top of the pass it slopes downward in two opposite directions into the valleys but slopes upward in the two orthogonal directions to mountain peaks. A saddle point is an unstable equilibrium where a slight push this way or that can lead the traveller to two very different valleys separated by high mountain ridges. In our familiar three-dimensional space, saddle points are relatively rare and landscapes are dominated by valleys and mountaintops. But this intuition about landscapes fails in high dimensions.
Landscapes in high dimensions are dominated by neutral ridges that span the domain of the landscape. This key understanding about high-dimensional space actually came from the theory of evolutionary dynamics for the evolution of species. In the early days of evolutionary dynamics, there was a serious challenge to understand how genetic mutations could allow such diverse speciation to occur. If the fitness of a species were viewed as a fitness landscape, and if a highly-adapted species were viewed as a mountain peak in this landscape, then genetic mutations would need to drive the population state point into “valleys of death” that would need to be crossed to arrive at a neighboring fitness peak. It would seem that genetic mutations would likely kill off the species in the valleys before they could rise to the next equilibrium.
However, the geometry of high dimensions does not follow this simple low-dimensional intuition. As more dimensions become available, landscapes have more and more ridges of relatively constant height that span the full space (See my recent blog on random walks in 10-dimensions and my short YouTube video). For a species to move from one fitness peak to another fitness peak in a fitness landscape (in ultra-high-dimensional mutation space), all that is needed is for a genetic mutation to step the species off of the fitness peak onto a neutral ridge where many mutations can keep the species on that ridge as it moves ever farther away from the last fitness peak. Eventually, the neutral ridge brings the species near a new fitness peak where it can climb to the top, creating a new stable species. The point is that most genetic mutations are neutral — they do not impact the survivability of an individual. This is the neutral theory of evolution proposed by Motoo Kimura (1924 – 1994). As these mutations accumulate, the offspring can get genetically far from the progenitor. And when a new fitness peak comes near, many of the previously neutral mutations can come together and become a positive contribution to fitness as the species climbs the new fitness peak.
The neutral network of genetic mutation was a paradigm shift in the field of evolutionary dynamics, and it also taught everyone how different random walks in high-dimensional spaces are from random walks in 3D. But although neutral networks solved the evolution problem, they become a two-edged sword in machine learning. On the positive side, fitness peaks are just like the minima of objective functions, and the ability for partial solutions to perform random walks along neutral ridges in the objective-function space allows optimal solutions to be found across a broad range of the configuration space of the neural weights. However, on the negative side, ridges are loci of unstable equilibrium. Hence there are always multiple directions that a solution state can go to minimize the objective function. Each successive run of a deep-network neural weight optimizer can find equivalent optimal solutions — but they each can be radically different. There is no hope of averaging the weights of an ensemble of networks to arrive at an “average” deep network. The averaging would simply drive all weights to zero. Certainly, the predictions of an ensemble of equivalently trained networks can be averaged—but this does not illuminate what is happening “under the hood” of the machine, which is where our own “understanding” of what the network is doing would come from.
Post-Modern Machine Learning
Post-modernism is admittedly kind of a joke — it works so hard to pull down every aspect of human endeavor that it falls victim to its own methods. The convoluted arguments made by its proponents sound like ultimate slacker talk — circuitous logic circling itself in an endless loop of denial.
But post-modernism does have its merits. It surfs on the moving crest of what passes as modernism, as modernism passes onward to its next phase. The philosophy of post-modernism moves beyond rationality in favor of a subjectivism in which cause and effect are blurred. For instance, in post-modern semiotic theory, a text or a picture is no longer an objective element of reality, but fragments into multiple semiotic versions, each one different for each different reader or watcher — a spectrum of collaborative efforts between each consumer and the artist. The reader brings with them a unique set of life experiences that interact with the text to create an entirely new experience in each reader’s mind.
Deep learning is post-modern in the sense that deterministic algorithms have disappeared. Instead of a traceable path of sequential operations, neural nets scramble information into massively-parallel strings of partial information that cross and interact nonlinearly with other massively-parallel strings. It is difficult to impossible to trace any definable part of the information from input to output. The output simply manifests some aspect of the data that was hidden from human view.
But the Age of Artificial Intelligence is not here yet. The vast multiplicity of saddle ridges in high dimensions is one of the drivers for one of the biggest pitfalls of deep learning — the need for massive amounts of training data. Because there are so many adjustable parameters in a neural network, and hence so many dimensions, a tremendous amount of training data is required to train a network to convergence. This aspect of deep learning stands in strong contrast to human children who can be shown a single picture of a bicycle next to a tricycle, and then they can achieve almost perfect classification accuracy when shown any number of photographs of different bicycles and tricycles. Humans can generalize with an amazingly small amount of data, while deep networks often need thousands of examples. This example alone points to the marked difference between human intelligence and the current state of deep learning. There is still a long road ahead.
By David D. Nolte, April 18, 2022
F. Rosenblatt, “The perceptron: A probabilistic model for information storage and organization in the brain,” Psychological Review, vol. 65, no. 6, pp. 386-408, (1958)
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533-536, Oct (1986)
 LeCun, Yann; Léon Bottou; Yoshua Bengio; Patrick Haffner (1998). “Gradient-based learning applied to document recognition”. Proceedings of the IEEE. 86 (11): 2278–2324.
 G. E. Hinton, S. Osindero, and Y. W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527-1554, Jul (2006)
M. Kimura, The Neutral Theory of Molecular Evolution. Cambridge University Press, 1983.
The ability to travel to the stars has been one of mankind’s deepest desires. Ever since we learned that we are just one world in a vast universe of limitless worlds, we have yearned to visit some of those others. Yet nature has thrown up an almost insurmountable barrier to that desire–the speed of light. Only by traveling at or near the speed of light may we venture to far-off worlds, and even then, decades or centuries will pass during the voyage. The vast distances of space keep all the worlds isolated–possibly for the better.
Yet the closest worlds are not so far away that they will always remain out of reach. The very limit of the speed of light provides ways of getting there within human lifetimes. The non-intuitive effects of special relativity come to our rescue, and we may yet travel to the closest exoplanet we know of.
Proxima Centauri b
The closest potentially habitable Earth-like exoplanet is Proxima Centauri b, orbiting the red dwarf star Proxima Centauri about 4.2 lightyears from Earth. The planet has a short orbital period of only about 11 Earth days, but the dimness of the red dwarf puts the planet in what may be a habitable zone where water is in liquid form. Its discovery, made using the Doppler method, was announced on August 24, 2016 by the European Southern Observatory in the Atacama Desert of Chile. The Alpha Centauri system is a three-star system, and even before the discovery of the planet, this nearest star system to Earth was the inspiration for the sci-fi trilogy that begins with the Hugo Award-winning The Three-Body Problem by Chinese author Liu Cixin, originally published in 2008.
It may seem like a coincidence that the closest Earth-like planet to Earth is in the closest star system to Earth, but it says something about how common such exoplanets may be in our galaxy.
There are already plans to send centimeter-sized spacecraft to Alpha Centauri. One such project that has received a lot of press is Breakthrough Starshot, a project of the Breakthrough Initiatives. Breakthrough Starshot would send around 1000 centimeter-sized camera-carrying laser-fitted spacecraft with 5-meter-diameter solar sails propelled by a large array of high-power lasers. The reason for sending so many of these tiny spacecraft is the collisions expected with interstellar dust during the voyage. It is possible that only a few dozen of the craft will finally make it to Alpha Centauri intact.
As these spacecraft fly by the Alpha Centauri system, possibly within one hundred million miles of Proxima Centauri b, their tiny high-resolution digital cameras will take pictures of the planet’s surface with enough resolution to see surface features. The on-board lasers will then transmit the pictures back to Earth. The travel time to the planet is expected to be 20 or 30 years, plus the four years for the laser signal to make it back to Earth. Therefore, it would take a quarter century or more after launch to find out whether Proxima Centauri b is habitable. The biggest question is whether it has an atmosphere. The red dwarf it orbits sends out catastrophic electromagnetic bursts that could strip the planet of its atmosphere, preventing any chance for life to evolve or even to be sustained there if introduced.
There are multiple projects under consideration for travel to the Alpha Centauri system. Even NASA has a tentative mission plan called the 2069 Mission (the 100-year anniversary of the Moon landing). This would entail a single spacecraft with a much larger solar sail than the small Starshot units. Some of the mission plans propose star-drive technology, such as nuclear propulsion systems, rather than light sails. Some of these designs could sustain a 1-g acceleration throughout the entire mission. It is intriguing to do the math on what such a mission could look like, in terms of travel time. Could we get an unmanned probe to Alpha Centauri in a matter of years? Let’s find out.
Special Relativity of Acceleration
The most surprising aspect of deriving the properties of relativistic acceleration using special relativity is that it works at all. We were all taught as young physicists that special relativity deals with inertial frames in constant motion. So the idea of frames that are accelerating might first seem to be outside the scope of special relativity. But one of Einstein’s key insights, as he sought to extend special relativity towards a more general theory, was that one can define a series of instantaneously inertial co-moving frames relative to an accelerating body. In other words, at any instant in time, the accelerating frame has an inertial co-moving frame. Once this is defined, one can construct invariants, just as in usual special relativity. And these invariants unlock the full mathematical structure of accelerating objects within the scope of special relativity.
For instance, the four-velocity and the four-acceleration in a co-moving frame for an object accelerating at g are given by

u′^μ = (c, 0, 0, 0)        a′^μ = (0, g, 0, 0)
The object is momentarily stationary in the co-moving frame, which is why the four-velocity has only the zeroth component, and the four-acceleration has simply g for its first component.
Armed with these four-vectors, one constructs the invariants

u^μ u_μ = c²        a^μ a_μ = −g²        u^μ a_μ = 0
This last equation is solved for the specific co-moving frame as
But the invariant is more general, allowing the expression
From these, putting them all together, one obtains the general differential equations for the change in velocity as a set of coupled equations

du⁰/dτ′ = (g/c) u¹        du¹/dτ′ = (g/c) u⁰
The solution to these equations is

u⁰ = c cosh(gτ′/c)        u¹ = c sinh(gτ′/c)

or, integrating once more,

t = (c/g) sinh(gτ′/c)        x = (c²/g)[cosh(gτ′/c) − 1]        v = c tanh(gτ′/c)
where the unprimed frame is the lab frame (or Earth frame), and the primed frame is the frame of the accelerating object, for instance a starship heading towards Alpha Centauri. These equations allow one to calculate distances, times and speeds as seen in the Earth frame as well as the distances, times and speeds as seen in the starship frame. If the starship is accelerating at some acceleration g’ other than g, then the results are obtained simply by replacing g by g’ in the equations.
It turns out that the acceleration due to gravity on our home planet provides a very convenient (but purely coincidental) correspondence

g ≈ 1.03 light-years/year² ≈ 1 ly/yr²
With a similarly convenient expression

c²/g ≈ 0.97 light-years ≈ 1 ly
These considerably simplify the math for a starship accelerating at g.
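These numerical coincidences are easy to verify. Here is a quick unit check (in Python, while the program below is in Matlab) of one Earth gravity expressed in light-years and years:

```python
# Unit check: one Earth gravity in units natural to interstellar travel
g  = 9.81       # m/s^2
c  = 2.998e8    # m/s
yr = 3.156e7    # seconds in a year
ly = 9.461e15   # meters in a light-year

g_natural = g * yr / c       # acceleration in units of c per year: ~1.03
c_over_g  = c / (g * yr)     # time scale c/g in years:             ~0.97
c2_over_g = c**2 / g / ly    # length scale c^2/g in light-years:   ~0.97

print(g_natural, c_over_g, c2_over_g)
```

To the few-percent level, then, g ≈ 1 in these units, which is why the Matlab program below simply sets g0 = 1.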
Let’s now consider a starship accelerating at g for the first half of the flight to Alpha Centauri, turning around and decelerating at g for the second half of the flight, so that the starship comes to a stop at its destination. The equations for the times to the half-way point, at distance d/2 with d = 4.37 light-years, are

τ′_half = (c/g) acosh( g(d/2)/c² + 1 ) = 1.83 yr        t_half = (c/g) sinh(gτ′_half/c) = 3.0 yr
This means that at the midpoint 1.83 years have elapsed on the starship, and about 3 years have elapsed on Earth. The total time to get to Alpha Centauri (and come to a stop) is then simply

T_starship = 2 τ′_half = 3.7 yr        T_Earth = 2 t_half ≈ 6.0 yr
It is interesting to look at the speed at the midpoint. This is obtained by

v = c tanh(gτ′_half/c)
which is solved to give

v ≈ 0.95 c
This amazing result shows that the starship is traveling at 95% of the speed of light at the midpoint when accelerating at the modest value of g for about 3 years. Of course, the engineering challenges for providing such an acceleration for such a long time are currently prohibitive … but who knows? There is a lot of time ahead of us for technology to advance to such a point in the next century or so.
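These numbers can be checked with a few lines of Python (a sketch of the same computation the Matlab program below performs over a range of accelerations), using units of light-years and years with g ≈ 1:

```python
import math

g = 1.0    # one Earth gravity, in light-years/year^2 (approximately)
d = 4.37   # distance to Alpha Centauri in light-years

tau_half = math.acosh(g*d/2 + 1)/g   # ship proper time to the midpoint
t_half   = math.sinh(g*tau_half)/g   # Earth-frame time to the midpoint
v_mid    = math.tanh(g*tau_half)     # midpoint speed in units of c

print(tau_half, t_half, v_mid)       # ~1.83 yr, ~3.0 yr, ~0.95 c
print(2*tau_half, 2*t_half)          # full one-way trip: ~3.7 ship-years, ~6.0 Earth-years
```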
g0 = 1;      % one Earth gravity in units of light-years/year^2
L = 4.37;    % distance to Alpha Centauri in light-years
for loop = 1:100
    g = 0.1*loop*g0;
    taup = (1/g)*acosh(g*L/2 + 1);   % ship proper time to the midpoint
    tearth = (1/g)*sinh(g*taup);     % Earth-frame time to the midpoint
    tauspacecraft(loop) = 2*taup;
    tlab(loop) = 2*tearth;
    acc(loop) = g;
end
figure
plot(acc,tauspacecraft,acc,tlab)
xlabel('Acceleration (g)')
ylabel('Trip time (years)')
legend('Space Craft','Earth Frame','FontSize',18)
set(gcf,'Color','White');
H = gca;
H.LineWidth = 2;
H.FontSize = 18;
To Centauri and Beyond
Once we get unmanned probes to Alpha Centauri, it opens the door to star systems beyond. The next closest are Barnard’s Star at 6 light-years away, Luhman 16 at 6.5 light-years, WISE 0855-0714 at 7.4 light-years, and Wolf 359 at 7.9 light-years. Several of these are known to have orbiting exoplanets. Ross 128 at 11 light-years and Luyten’s Star at 12.2 light-years have known Earth-like planets. There are about 40 known Earth-like planets within 40 light-years of Earth, and likely there are more we haven’t found yet. It is almost inconceivable that none of these would have some kind of life. Finding life beyond our solar system would be a monumental milestone in the history of science. Perhaps that day will come within this century.
An old joke goes that Democracy is a terrible form of government … except it’s better than all the others!
Our world today is faced with conflict between democracy and dictatorship. On one side is the free world, where leaders are chosen by some form of representation of large numbers of citizens, and sometimes even by a majority. On the other side is authoritarianism, where a select few are chosen by a select few to govern everyone else.
[I]t has been said that democracy is the worst form of Government except all those other forms that have been tried from time to time; but there is the broad feeling in our country that the people should rule, and that public opinion expressed by all constitutional means, should shape, guide, and control the actions of Ministers who are their servants and not their masters.
Winston Churchill (1947)
An argument in favor of democracy is freedom of choice for the largest segment of the population, plus the ability to remove leaders who fail to provide for the perceived welfare of the most citizens. This makes democracy adaptive, shifting with the times. It also makes leaders accountable for their actions and crimes. An argument in favor of authoritarianism is the myth of the benevolent dictator–someone who knows what’s best for the people even if the people don’t know it themselves.
But dictators are rarely benevolent, and as they become saturated with power, they are corrupted. The criminal massacres of Ukrainian civilians perpetrated by Putin are among the strongest recent arguments against authoritarianism. A single man decides, on a whim, the life and death of thousands or maybe more. The invasion of Ukraine is so egregious and unwarranted that we wonder how the Russian people can put up with their isolated and manic leader. Yet by some measures more than 60% of the people in Russia approve of the war.
How can the free world see the invasion as the atrocity it is, while Russia’s majority sees it as a just war? The answer is a surprising result of population dynamics known as the replicator-mutator equation. The challenge for us here in the free world is to learn how to game the replicator-mutator equation to break up the monopoly of popular opinion and make Putin pay for his arrogance. This blog explains how “mass hysteria” can arise from forces within a complex environment, and how to construct a possible antidote.
There are several simple models of population dynamics that try to explain the rise and fall of the number of individuals that belong to varying cohorts within the population. These models incorporate aspects of relative benefit of one group over another, plus the chance to change sides–defection. The dynamics under these conditions can be highly nonlinear and highly non-intuitive. One of the simplest of these models is known as the replicator-mutator model where replication follows the fitness of the cohort, and where individuals can defect to a “more fit” cohort.
The basic dynamics of the model are

dx_a/dt = W_ab x_b − φ x_a

where x_a is the fraction of the population that is in cohort a, W_ab is a transition probability, and φ is the average fitness of the full population. The transition matrix is given by

W_ab = f_b Q_ba
where f_b is the fitness of cohort b and Q_ba is a stochastic matrix that allows for defection of an individual from one cohort to another. The fitness of a cohort is given by

f_b = p_bc x_c
where p_bc is the pay-off matrix for the relative benefit of one cohort at the expense of another. Finally, the average fitness is

φ = f_b x_b
The Einstein implicit summation convention is assumed in all of these equations, and the metric space in which the dynamics are embedded is “flat” so that there is no essential difference between superscripts and subscripts. There is also a conservation law that the sum over all population fractions equals unity.
In the language of population dynamics, this model has frequency-dependent fitness, with defection and pay-off, in a zero-sum game.
One of the simplest questions to answer with this model is how so many people can come to believe one thing. This is known as “opinion uniformity”.
Uniformity versus Diversity
This replicator-mutator model explains the property of opinion uniformity, as well as the opposite extreme of opinion diversity. The starting point for both is the pay-off matrix p_bc, which is assumed to be unity on the diagonal for b = c and equal to a constant factor a for b ≠ c. This pay-off is symmetric, so that all opinions are equally “believable”. The stochastic defection matrix is close to unity on the diagonal and has random terms on the off-diagonal that are proportional to a constant ε. The defection matrix allows a person from one cohort to defect to the belief system of another cohort if they believe that the new cohort has more merit. Cohorts with greater merit (fitness) gain more members over time, while cohorts with lower merit lose members over time.
Note that the fitness increases with the number of members in the cohort. This is the bandwagon effect. A belief system is perceived to have more merit if there are more people who believe it. This clearly creates a positive feedback that would cause this cohort to grow. Even though all the cohorts have this same positive feedback, the zero-sum rule only allows one of the cohorts to grow to its maximum extent, taking members away from all the other cohorts. This is illustrated in Fig. 1. One belief system wins, taking almost the full population with it.
What allows the winner to take all is the positive feedback where the fitness of the cohort increases with the number of members, combined with the ability for that cohort to take members from other cohorts through the defection matrix.
However, all of the cohorts are trying the same thing, and the pay-off matrix is fully symmetric and equal for all cohorts, so no cohort is intrinsically “better” than another. This property opens the door to a strong alternative to opinion uniformity. In fact, as more members are allowed to defect, it creates a trend counter to winner-take-all, helping to equalize the cohorts. A bifurcation is passed in which winner-take-all converts discontinuously to a super-symmetric situation in which all opinions are held by equal numbers of people. This is illustrated in Fig. 2 for a slightly higher defection rate ε = 0.03. The parameters are identical to those in Fig. 1, but the higher defection rate stabilizes the super-symmetric state of maximum diversity.
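The bifurcation can be reproduced in a stripped-down sketch. The Python below simplifies the Matlab program listed at the end of this section: only N = 5 cohorts, a uniform defection matrix Q_ba = ε off the diagonal instead of a random one, and a symmetric pay-off with unity on the diagonal and a = 0.5 off it. Euler integration of the replicator-mutator equation then shows winner-take-all at low defection and the super-symmetric diversity state at high defection:

```python
import random

def replicator_mutator(eps, a=0.5, N=5, steps=40000, dt=0.05, seed=1):
    """Euler-integrate dx_a/dt = f_b Q_ba x_b - phi x_a with uniform
    defection: Q_ba = eps off the diagonal, 1-(N-1)*eps on it."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(N)]
    s = sum(x)
    x = [xi/s for xi in x]                         # fractions sum to unity
    for _ in range(steps):
        f = [a + (1 - a)*x[b] for b in range(N)]   # f_b = p_bc x_c
        phi = sum(f[b]*x[b] for b in range(N))     # average fitness
        # with uniform Q the flow reduces to (1-N*eps) f_a x_a + eps*phi - phi*x_a
        x = [x[b] + dt*((1 - N*eps)*f[b]*x[b] + eps*phi - phi*x[b])
             for b in range(N)]
    return x

print(max(replicator_mutator(0.005)))   # low defection: one cohort dominates
print(max(replicator_mutator(0.05)))    # high defection: all cohorts near 1/N
```

With these parameters the critical defection rate falls between the two values, so the first call converges toward a near-monopoly and the second toward equal fractions of 1/N = 0.2.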
These two extreme results of the replicator-mutator equation, which switch suddenly from one to the other depending on the defection rate, may seem to produce solutions neither of which is ideal for a healthy democracy. On the one hand, in the uniform case where the winning opinion is monolithic, everyone is a carbon copy of everyone else, which is a form of cultural death (lack of diversity). On the other hand, one might argue that maximum opinion diversity is just as concerning, because no one can agree on anything. If all opinions are equivalent, then everyone in the society believes something different and there is no common ground. But in the diversity case, at least there is no state-level control of the population. In the case of opinion uniformity, the winning opinion can be manipulated by propaganda.
The Propaganda Machine
A government can “seed” the belief networks with propaganda that favors the fitness of what it wants its citizens to hear. Because of the positive feedback, any slight advantage of one opinion over others can allow that opinion to gain large numbers through the bandwagon effect. Of course, even stronger control that stifles dissent, for instance by shutting down the free press, makes it that much more likely that the state-controlled story is believed. This may be one reason why more than 60% (as of the writing of this blog) support Putin’s war, despite the obvious lies being told. This is illustrated in Fig. 3 by boosting the payoff between two similar lies that the government wants its people to believe. These rise to take about 60% of the population. Members of the cohort are brain-washed, not by the government alone, but by all their neighbors who are parroting the same thing.
Breaking the Monopoly of Thought
How do we fight back? Not just against the Kremlin’s propaganda, but also against QAnon, Trump’s Big Lie, and the pernicious fallacy of nationalism? The answer is simple: diversity of thought! The silver bullet in the replicator-mutator model is the defection matrix. The existence of a bifurcation means that a relatively small increase in the amount of diverse opinion, and in the freedom to swap opinions, can lead to a major qualitative collapse of monolithic thought, even when it is supported by government propaganda, as shown in Fig. 4. More people may still believe the state-supported propaganda than any other single belief, but it is no longer a majority.
The above models were all very homogeneous. It is more realistic that people are connected through small-world networks. In this case, there is much more diversity, as shown in Fig. 5, although the defection rate needs to be much higher to prevent a monolithic opinion from dominating. The state-supported propaganda is buried in the resulting mix of diverse ideas. Therefore, to counteract state control, people must feel free to hop about in their choice of beliefs and have access to other beliefs.
This is a bit paradoxical. On the one hand, the connectivity of the internet has fostered the rise of conspiracy theories and other odd-ball ideas. But sustained access to multiple sources of information is the best defense against all that crazy stuff winning out. In other words, not only do we have to put up with the lunatic fringe if we are to have full diversity of thought, but we need to encourage everyone to feel free to “shop around” for different ideas, even if some of them are crazy. Our free society shouldn’t be cancelling people who have divergent opinions, because that sets us down the path to authoritarianism. As a recent ad in the New York Times said, “Cancel culture cancels culture.” Unfortunately, authoritarianism is on the rise around the world, and the US almost suffered that fate on Jan. 6, 2021. Furthermore, with Xi aligning with Putin and giving him the green light on Ukraine–cynically on the eve of the Olympic Games (of peace)–the new world order will revolve around that axis for decades to come, if the world survives that long. Diversity and freedom may be the only antidote.
By David D. Nolte, March 24, 2022
Matlab Program: Repmut.m
N = 63;
p = 0.5;
mutype = 1; % 0 = Hamming 1 = rand
pay = 1; % 0 = Hamming 1 = 1/sqrt(N)
ep = 0.5; % average mutation rate: 0.1 to 0.01 typical (0.4835)
%%%%% Set original population
x0temp = rand(1,N); % Initial population
sx = sum(x0temp);
y0 = x0temp/sx;
Pop0 = sum(y0);
%%%%% Set Adjacency
%node = makeglobal(N);
%node = makeER(N,0.25); % 0.5 0.25
%node = makeSF(N,6); % 12 6
node = makeSW(N,7,0.125); % 15,0.5 7,0.5
[Adj,degree,Lap] = adjacency(node);
%%%%%% Set Hamming distance
for yloop = 1:N
    for xloop = 1:N
        H(yloop,xloop) = hamming(yloop-1,xloop-1);
    end
end
%%%%%%% Set Mutation matrix
if mutype == 0
    Qtemp = 1./(1+H/ep);     % Mutation matrix on Hamming distance
    Qsum = sum(Qtemp,2);
    mnQsum = mean(Qsum);
    % Normalize mutation among species
    for yloop = 1:N
        for xloop = 1:N
            Q(yloop,xloop) = Qtemp(yloop,xloop)/Qsum(xloop);
        end
    end
elseif mutype == 1
    S = stochasticmatrix(N);
    Stemp = S - diag(diag(S));
    Qtemp = ep*Stemp;
    sm = sum(Qtemp,2)';
    Q = Qtemp + diag(ones(1,N) - sm);
end
%%%%%%% Set payoff matrix
if pay == 1
    payoff = zeros(N,N);
    for yloop = 1:N
        payoff(yloop,yloop) = 1;
        for xloop = yloop + 1:N
            payoff(yloop,xloop) = p;
            payoff(xloop,yloop) = p;
        end
    end
    payoff(1,N) = 1; % Propaganda
    payoff(N,1) = 1;
elseif pay == 0
    payoff = zerodiag(exp(-1*H));
end
% Run time evolution
tspan = [0 4000];
[t,x] = ode45(@quasispec,tspan,y0);
[sz,dum] = size(t);
Popend = sum(x(sz,:))
for loop = 1:N
    fit(loop) = sum(payoff(:,loop)'.*x(sz,:));
end
phistar = sum(fit.*x(sz,:)) % final average fitness
xend = x(sz,:)
sortxend = sort(xend,'descend');
coher = sum(sortxend(1:2))
h = colormap(lines);
function yd = quasispec(~,y)
    for floop = 1:N
        f(floop) = sum(payoff(:,floop).*y);
    end
    Connect = Adj + eye(N);
    % Transition matrix
    for yyloop = 1:N
        for xxloop = 1:N
            W(yyloop,xxloop) = f(yyloop)*(Connect(yyloop,xxloop)*Q(yyloop,xxloop));
        end
    end
    phi = sum(f'.*y); % Average fitness of population
    yd = W*y - phi*y;
end % end quasispec
M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life. Cambridge, Mass.: Harvard University Press, 2006.
There are many known super-Jupiters that orbit their stars—they are detected through a slight Doppler wobble they induce in their stars. But what would become of a rocky planet also orbiting those stars as it feels the tug of both the star and the super planet?
This is not of immediate concern for us, because our solar system has had its current configuration of planets for over 4 billion years. But there can be wandering interstellar planets or brown dwarfs that could visit our solar system, like ‘Oumuamua did in 2017, but much bigger and able to scramble the planetary orbits. Such hypothesized astronomical objects have been given the name “Nemesis”, and it is worth considering what living in an altered solar system might be like.
What would happen to Earth if Jupiter were 50 times bigger? Could we survive?
The Three-Body Problem
The Sun-Earth-Jupiter configuration is a three-body problem that has a long and interesting history, playing a key role in several aspects of modern dynamics. There is no general analytical solution to the three-body problem. To find the behavior of three mutually interacting bodies requires numerical solution. However, there are subsets of the three-body problem that do yield to partial analytical approaches. One of these is called the restricted three-body problem. It consists of two massive bodies plus a third (nearly) massless body that all move in a plane. This restricted problem was first tackled by Euler and later by Poincaré, who discovered the existence of chaos in its solutions.
The geometry of the restricted three-body problem is shown in Fig. 1. In this problem, take mass m1 = mS to be the Sun’s mass, m2 = mJ to be Jupiter’s mass, and the third (small) mass is the Earth.
The equation of motion for the Earth is
and the parameter ξ characterizes the strength of the perturbation of the Earth’s orbit around the Sun. The parameters for the Jupiter-Sun system are
for the 11.86 year journey of Jupiter around the Sun. Eq. (1) is a four-dimensional non-autonomous flow
The solutions of an Earth orbit are shown in Fig. 2. The natural Earth-Sun-Jupiter system has a mass ratio mJ/mS = 0.001 for Jupiter relative to the Sun. Even in this case, Jupiter causes perturbations of the Earth’s orbit by about one percent. If the mass of Jupiter increases, the perturbations grow larger, until around ξ = 0.06 the perturbations become severe and the orbit grows unstable. The Earth gains energy from the momentum of the Sun-Jupiter system and can reach escape velocity. The simulation for a mass ratio of 0.07 shows the Earth ejected from the Solar System.
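A minimal version of this kind of simulation is sketched below in Python (the blog’s own figures come from a Matlab program not listed here). It integrates the planar problem with velocity Verlet, holding the Sun fixed at the origin (a simplification that drops the Sun’s motion about the barycenter) while a Jupiter-mass perturber is forced along its circular 5.2-AU, 11.86-year orbit. For the true mass ratio of 0.001, the Earth’s orbital radius stays within about a percent of 1 AU:

```python
import math

def earth_orbit_extremes(mass_ratio, years=5.0, dt=0.001):
    """Velocity-Verlet integration of the Earth under the Sun (fixed at the
    origin) plus a perturber of the given mass ratio on a circular Jupiter
    orbit. Units: AU, years, GM_sun = 4*pi^2."""
    GM = 4*math.pi**2
    aJ, TJ = 5.2, 11.86                  # Jupiter's orbit radius and period
    wJ = 2*math.pi/TJ
    def accel(x, y, t):
        xJ, yJ = aJ*math.cos(wJ*t), aJ*math.sin(wJ*t)
        r3 = (x*x + y*y)**1.5
        dx, dy = x - xJ, y - yJ
        d3 = (dx*dx + dy*dy)**1.5
        return (-GM*x/r3 - GM*mass_ratio*dx/d3,
                -GM*y/r3 - GM*mass_ratio*dy/d3)
    x, y = 1.0, 0.0                      # Earth starts at 1 AU
    vx, vy = 0.0, 2*math.pi              # circular-orbit speed at 1 AU
    ax, ay = accel(x, y, 0.0)
    t = 0.0
    rmin = rmax = 1.0
    for _ in range(int(years/dt)):
        x += vx*dt + 0.5*ax*dt*dt
        y += vy*dt + 0.5*ay*dt*dt
        t += dt
        ax2, ay2 = accel(x, y, t)
        vx += 0.5*(ax + ax2)*dt
        vy += 0.5*(ay + ay2)*dt
        ax, ay = ax2, ay2
        r = math.hypot(x, y)
        rmin, rmax = min(rmin, r), max(rmax, r)
    return rmin, rmax

print(earth_orbit_extremes(0.001))   # stays close to 1 AU
```

Raising the mass ratio toward the values quoted above destabilizes the orbit, though the outcome then depends sensitively on the initial conditions.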
The chances for ejection depend on the initial conditions of these simulations, but generally the danger becomes severe when Jupiter is about 50 times more massive than it currently is. Otherwise the Earth remains safe from ejection. However, if the Earth is to keep its climate intact, then Jupiter should not be any larger than about 5 times its current size. At the other extreme, for a planet 70 times more massive than Jupiter, the Earth may not be ejected at once, but it can take a wild ride through the solar system. A simulation for a 70x Jupiter is shown in Fig. 3. In this case, the Earth is captured for a while as a “moon” of Jupiter in a very tight orbit around the super planet as it orbits the Sun, before it is set free again to orbit the Sun in highly elliptical orbits. Because of the premise of the restricted three-body problem, the Earth has no effect on the orbit of Jupiter.
If Nemesis were to swing by and scramble the solar system, then Jupiter might move closer to the Earth. More ominously, the period of Jupiter’s orbit could come into resonance with the Earth’s period. This occurs when the ratio of orbital periods is a ratio of small integers. Resonance can amplify small perturbations, so perhaps Jupiter would become a danger to Earth. However, the forces exerted by Jupiter on the Earth change the Earth’s orbit, and hence its period, preventing a strict resonance from persisting, so the Earth is not ejected from the solar system even for initially rational period ratios or larger planet masses. This is related to the famous KAM theory of resonances by Kolmogorov, Arnold and Moser, which tends to protect the Earth from the chaos of the solar system. More often than not in these scenarios, the Earth is either captured by the super Jupiter, or it is thrown into a large orbit that is still bound to the Sun. Some examples are given in the following figures.
Life on a planet in a solar system with two large bodies has been envisioned in dramatic detail in the science fiction novel “Three-Body Problem” by Liu Cixin about the Trisolarians of the closest known exoplanet to Earth–Proxima Centauri b.
“What is a coconut worth to a cast-away on a deserted island?”
In the midst of the cast-away’s misfortune and hunger and exertion and food lies an answer that looks familiar to any physicist who speaks the words
“Assume a Lagrangian …”
It is the same process that determines how a bead slides along a bent wire in gravity or a skier navigates a ski hill. The answer: find the balance of economic forces subject to constraints.
Here is the history and the physics behind one of the simplest economic systems that can be conceived: Robinson Crusoe spending his time collecting coconuts!
Robinson Crusoe in Economic History
Daniel Defoe published “The Life and Strange Surprizing Adventures of Robinson Crusoe” in 1719, about a man who is shipwrecked on a deserted island and survives there for 28 years before being rescued. It was written in the first person, as if the author had actually lived through those experiences, and it was based on a real-life adventure story. It is one of the first examples of realistic fiction, and it helped establish the genre of the English novel.
Marginalism in economic theory is the demarcation between classical economics and modern economics. The key principle of marginalism is the principle of “diminishing returns”, whereby the value of something decreases as an individual holds more of it. This principle makes the relevant functions convex, which helps to guarantee that there are equilibrium points in the economy. Economic equilibrium is a key concept and goal because it provides stability to economic systems.
One-Product Is a Dull Diet
The Robinson Crusoe economy is one of the simplest economic models that captures the trade-off between labor and production on one side, and leisure and consumption on the other. The model has a single laborer for whom there are 24 × 7 = 168 hours in the week. Some of these hours must be spent finding food, let’s say coconuts, while the other hours are for leisure and rest. The production of coconuts follows a production curve
that is a function of labor L. There are diminishing returns in the finding of coconuts for a given labor, making the production curve of coconuts convex. The amount of rest is
and there is a reciprocal production curve q(R), with fewer coconuts produced for more time spent resting. In this model it is assumed that all coconuts that are produced are consumed. This is known as market clearing, when no surplus is built up.
The production curve presents a continuous trade-off between consumption and leisure, but at first look there is no obvious way to decide how much to work and how much to rest. A lazy person might be willing to go a little hungry if they can have more rest, while a busy person might want to use all waking hours to find coconuts. The production curve represents something known as a Pareto frontier. It is a continuous trade-off between two qualities. Another example of a Pareto frontier is car engine efficiency versus cost. Some consumers may care more about the up-front cost of the car than the cost of gas, while other consumers may value fuel efficiency and be willing to pay higher costs to get it.
Continuous trade-offs always present a bit of a problem for planning. It is often not clear what the best trade-off should be. This problem is solved by introducing another concept into this little economy–the concept of “Utility”.
The utility function was introduced by the physicist Daniel Bernoulli, one of the many bountiful Bernoullis of Basel, in 1738. The utility function is a measure of how much benefit or utility a person or an enterprise gains by holding varying amounts of goods or labor. The essential problem in economic exchange is to maximize one’s utility function subject to whatever constraints are active. The utility function for Robinson Crusoe is
This function is obviously a maximum at maximum leisure (R = 1) and lots of coconuts (q = 1), but this combination is not allowed, because it lies off the production curve q(R). Therefore the question becomes: where on the production curve can he maximize the trade-off between coconuts and leisure?
Fig. 1 shows the dynamical space for Robinson Crusoe’s economy. The space is two dimensional with axes for coconuts q and rest R. Isoclines of the utility function are shown as contours known as “indifference” curves, because the utility is constant along these curves and hence Robinson Crusoe is indifferent to his position on it. The indifference curves are cut by the production curve q(R). The equilibrium problem is to maximize utility subject to the production curve.
When looking at dynamics under constraints, Lagrange multipliers are the best tool. Furthermore, we can impart dynamics into the model with temporal adjustments in q and R that respond to economic forces.
The Lagrangian Economy
The approach to the Lagrangian economy is identical to the Lagrangian approach in classical physics. The equation of constraint is
All the dynamics take place on the production curve. The initial condition starts on the curve, and the state point moves along the curve until it reaches a maximum and settles into equilibrium. The dynamics is therefore one-dimensional, the link between q and R being the production curve.
The Lagrangian in this simple economy is given by the utility function augmented by the equation of constraint, such that
where the term on the right-hand-side is a drag force with the relaxation rate γ.
The first term on the left is the momentum of the system. In economic dynamics, this is usually negligible, similar to dynamics in living systems at low Reynolds number, in which all objects move instantaneously at their terminal velocity in response to forces. The equations of motion are therefore
The Lagrange multiplier can be solved from the first equation as
and the last equation converts q-dot to R-dot to yield the single equation
which is a one-dimensional flow
where all q’s are expressed as R’s through the equation of constraint. The speed vanishes at the fixed point—the economic equilibrium—when
This is the point of Pareto efficient allocation. Any initial condition on the production curve will relax to this point with a rate given by γ. These trajectories are shown in Fig. 2. From the point of view of Robinson Crusoe, if he is working harder than he needs, then he will slack off. But if there aren’t enough coconuts to make him happy, he will work harder.
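Since the blog’s specific utility and production curves appear only in the figures, here is a Python sketch with hypothetical stand-ins: U(q, R) = qR (maximum in the upper right corner) and a production curve q(R) = √(1 − R) with diminishing returns. Dropping the negligible momentum term leaves overdamped relaxation of R along the production curve, which settles at the analytic equilibrium R = 2/3:

```python
import math

def crusoe_equilibrium(gamma=0.5, dt=0.01, steps=5000, R0=0.1):
    """Overdamped relaxation dR/dt = gamma * dU/dR along the constraint,
    with hypothetical U(q,R) = q*R and q(R) = sqrt(1 - R)."""
    R = R0
    for _ in range(steps):
        # dU/dR for U(R) = R*sqrt(1-R) is (2 - 3R) / (2*sqrt(1-R))
        dUdR = (2 - 3*R)/(2*math.sqrt(1 - R))
        R += dt*gamma*dUdR
    return R, math.sqrt(1 - R)

print(crusoe_equilibrium())   # R -> 2/3, q -> sqrt(1/3)
```

Starting from too much work (small R), Crusoe slacks off until the marginal utility of extra rest exactly balances the marginal utility of the coconuts forgone.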
The production curve is like a curved wire, the amount of production q is like the bead sliding on the wire. The utility function plays the role of a potential function, and the gradients of the utility function play the role of forces. Then this simple economic model is just like ordinary classical physics of point masses responding to forces constrained to lie on certain lines or surfaces. From this viewpoint, physics and economics are literally the same.
To make this problem specific, consider a utility function given by
that has a maximum in the upper right corner, and a production curve given by
that has diminishing returns. Then, the condition of equilibrium can be solved using
With the (fairly obvious) answer
By David D. Nolte, Feb. 10, 2022
For More Reading
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. Oxford: Oxford University Press (2019).
Fritz Söllner, “The Use (and Abuse) of Robinson Crusoe in Neoclassical Economics,” History of Political Economy 48 (1): 35–64 (2016).
If you are a fan of the Doppler effect, then time trials at the Indy 500 Speedway will floor you. Even if you have experienced the fall in pitch of a passing train whistle while stopped in your car at a railroad crossing, or heard the falling whine of a jet passing overhead, I can guarantee that you have never heard anything like an Indy car passing you by at 225 miles an hour.
Indy 500 Time Trials and the Doppler Effect
The Indy 500 time trials are the best way to experience the effect, rather than race day, when there is so much crowd noise and the overlapping sounds of all the cars. During the week before the race, the cars go out on the track, one by one, in time trials to decide the starting order of the pack on race day. Fans are allowed to wander around the entire complex, so you can get right up to the fence at track level on the straight-away. The cars go by only thirty feet away, coming almost straight at you as they approach and straight away from you as they leave. The whine of the car as it approaches is about 43% higher than when it is standing still, and it drops to about 23% lower than the standing frequency—a ratio approaching a factor of two. And they go past so fast that it is almost a step function, going from a steady high note to a steady low note in less than a second. That is the Doppler effect!
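Those percentages follow from the classical moving-source Doppler formula f = f0/(1 ∓ v/vs). A quick Python check for a car at 225 mph (the exact percentages depend on the speed of sound assumed, i.e. on air temperature):

```python
def doppler_moving_source(v_source, f0=1.0, v_sound=343.0):
    """Frequency heard from a source moving straight toward (first value)
    and straight away from (second value) a stationary listener."""
    beta = v_source / v_sound
    return f0/(1 - beta), f0/(1 + beta)

mph = 0.44704                              # meters per second per mph
up, down = doppler_moving_source(225*mph)
print(up, down, up/down)   # ~1.41, ~0.77: a ratio approaching a factor of two
```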
But as obvious as the acoustic Doppler effect is to us today, it was far from obvious when it was proposed in 1842 by Christian Doppler at a time when trains, the fastest mode of transport at the time, ran at 20 miles per hour or less. In fact, Doppler’s theory generated so much controversy that the Academy of Sciences of Vienna held a trial in 1853 to decide its merit—and Doppler lost! For the surprising story of Doppler and the fate of his discovery, see my Physics Today article.
From that fraught beginning, the effect has grown so much in importance that today it is a daily part of our lives. From Doppler weather radar, to speed traps on the highway, to ultrasound images of babies—Doppler is everywhere.
Development of the Doppler-Fizeau Effect
When Doppler proposed the shift in color of the light from stars in 1842, depending on their motion towards or away from us, he may have been inspired by his walk to work every morning, watching the ripples on the surface of the Vltava River in Prague as the water slipped past the bridge piers. The drawings in his early papers look reminiscent of the patterns you see with compressed ripples on the upstream side of a pier and stretched-out ripples on the downstream side. Taking this principle to the night sky, Doppler envisioned that binary stars in which one companion was blue and the other red owed their colors to their relative motion. He could not have known at the time that typical binary star speeds were far too small to cause this effect, but his principle was far more general, applying to all wave phenomena.
Six years later, in 1848, the French physicist Armand Hippolyte Fizeau, soon to be famous for making the first direct measurement of the speed of light, proposed the same principle, unaware of Doppler’s publications in German. As Fizeau was preparing his famous measurement, he originally worked with a spinning mirror (he would ultimately use a toothed wheel instead) and was thinking about what effect the moving mirror might have on the reflected light. He considered the effect of star motion on starlight, just as Doppler had, but realized that it was more likely that the speed of the star would shift the locations of the spectral lines rather than change the overall color. This is in fact the correct argument, because a Doppler shift of the black-body spectrum of a white or yellow star shifts a bit of the infrared into the visible red portion while shifting a bit of the ultraviolet out of the visible, so that the overall color of the star remains the same, but the Fraunhofer lines shift in the process. Because of the independent development of the phenomenon by both Doppler and Fizeau, and because Fizeau was a bit clearer about the consequences, the effect is more accurately called the Doppler-Fizeau effect, and in France sometimes simply the Fizeau effect. Here in the US, we tend to forget the contributions of Fizeau, and it is all Doppler.
Doppler and Exoplanet Discovery
It is fitting that many of today’s applications of the Doppler effect are in astronomy. His original idea about binary star colors was wrong, but his idea that relative motion changes frequencies was right, and it has become one of the most powerful spectroscopic techniques in astronomy today. One of its most important recent applications was the discovery of extrasolar planets orbiting distant stars.
When a large planet like Jupiter orbits a star, the center of mass of the two-body system remains fixed, but the star and the planet each orbit that common point. This makes the star appear to wobble, first moving towards our viewpoint on Earth, then moving away. Because of this relative motion, the starlight appears alternately blueshifted and redshifted by the Doppler effect with a set periodicity. This was observed by Mayor and Queloz in 1995 for the star 51 Pegasi, the first detection of an exoplanet orbiting a Sun-like star. The duo won the Nobel Prize in 2019 for the discovery.
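The size of the stellar wobble can be estimated from Newtonian two-body dynamics. As a minimal sketch, assuming a circular, edge-on orbit with illustrative numbers roughly matching 51 Pegasi b (a period of 4.23 days, a minimum planet mass of about 0.47 Jupiter masses, and a stellar mass of about 1.06 solar masses):

```python
import math

# Reflex ("wobble") velocity of a star due to an orbiting planet.
# Illustrative values, roughly those of 51 Pegasi b (assumptions, not from the text).
G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                          # speed of light, m/s
M_sun, M_jup = 1.989e30, 1.898e27    # solar and Jovian masses, kg

P = 4.23 * 86400.0        # orbital period, s
m_p = 0.47 * M_jup        # planet (minimum) mass, kg; sin(i) = 1 assumed
M_star = 1.06 * M_sun     # stellar mass, kg

# Velocity semi-amplitude for a circular orbit with m_p << M_star:
#   K = (2*pi*G / P)^(1/3) * m_p / M_star^(2/3)
K = (2.0 * math.pi * G / P) ** (1.0 / 3.0) * m_p / M_star ** (2.0 / 3.0)

# Fractional Doppler shift of a spectral line: dlambda/lambda = K/c
shift = K / c

print(f"stellar wobble K = {K:.1f} m/s")
print(f"fractional line shift = {shift:.2e}")
```

The wobble comes out to a few tens of meters per second, shifting spectral lines by only a few parts in ten million, which is why such precise spectrographs were needed for the detection.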
Doppler and Vera Rubin’s Galaxy Velocity Curves
In the late 1960s and early 1970s, Vera Rubin at the Carnegie Institution of Washington used newly developed spectrographs and the Doppler effect to study the speeds of ionized hydrogen gas surrounding massive stars in individual galaxies. From simple Newtonian dynamics, the speed of stars as a function of distance from the galactic center should increase with distance out to roughly the average radius of the galaxy’s luminous matter, and then decrease at larger distances. This trend of speed as a function of radius is called a rotation curve. As Rubin constructed rotation curves for many galaxies, the increase of speed with radius at small radii emerged as a clear trend, but the stars farther out in the galaxies were all moving far too fast. In fact, they were moving so fast that they exceeded the escape velocity implied by the visible matter and should have flown off into space long ago. This disturbing pattern repeated consistently in one rotation curve after another for many galaxies.
A simple fix to the problem of the rotation curves is to assume that every galaxy contains significant mass that is not observable either as luminous matter or as interstellar dust. In other words, there is unobserved matter, dark matter, in all galaxies that keeps their stars gravitationally bound. Estimates of the amount of dark matter needed to fix the velocity curves call for about five times as much dark matter as observable matter. In short, roughly 80% of the mass of a galaxy is not normal matter. It is neither a perturbation nor an artifact, but something fundamental and large. The discovery of the rotation curve anomaly by Rubin using the Doppler effect stands as one of the strongest pieces of evidence for the existence of dark matter.
There is so much dark matter in the Universe that it must have a major effect on the overall curvature of space-time according to Einstein’s field equations. One of the best probes of the large-scale structure of the Universe is the afterglow of the Big Bang, known as the cosmic microwave background (CMB).
Doppler and the Big Bang
The Big Bang was astronomically hot, but as the Universe expanded it cooled. About 380,000 years after the Big Bang, the Universe had cooled sufficiently that the electron-proton plasma filling space condensed into neutral hydrogen. A plasma is charged and opaque to photons, while hydrogen is neutral and transparent. Therefore, when the hydrogen formed, the thermal photons suddenly flew free and have continued unimpeded ever since, cooling as space expanded. Today this thermal glow has reached about three degrees above absolute zero. Photons in thermal equilibrium at this low temperature have an average wavelength of a few millimeters, corresponding to microwave frequencies, which is why the afterglow of the Big Bang got its name: the Cosmic Microwave Background (CMB).
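The millimeter wavelength follows directly from Wien’s displacement law, which relates the peak wavelength of a black body to its temperature. A quick check with today’s measured CMB temperature of about 2.725 K:

```python
# Wien's displacement law: lambda_peak = b / T
b = 2.898e-3          # Wien displacement constant, m*K
c = 2.998e8           # speed of light, m/s

T_cmb = 2.725         # CMB temperature today, K
lam = b / T_cmb       # peak wavelength, m
freq = c / lam        # corresponding frequency, Hz

print(f"peak wavelength = {lam*1e3:.2f} mm")   # about 1 mm
print(f"frequency       = {freq/1e9:.0f} GHz") # microwave band
```

The peak lands at roughly one millimeter, squarely in the microwave band, as the name Cosmic Microwave Background suggests.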
Not surprisingly, the CMB has no preferred reference frame, because every point in space is expanding relative to every other point; space itself is expanding. Yet soon after the CMB was discovered by Arno Penzias and Robert Wilson (for which they were awarded the Nobel Prize in Physics in 1978), an anisotropy with dipole symmetry was found in the background, caused by the Doppler effect as the Solar System moves at 368±2 km/sec relative to the rest frame of the CMB. Our direction is towards galactic longitude 263.85° and latitude 48.25°, a bit southwest of Virgo. Interestingly, the Local Group of about 100 galaxies, of which the Milky Way and Andromeda are the largest members, is moving at 627±22 km/sec in the direction of galactic longitude 276° and latitude 30°. Therefore, it seems we are a bit slack in our speed compared to the rest of the Local Group. This is partly because we are being pulled towards Andromeda in roughly the opposite direction, and partly because of the speed of the Solar System within our own Galaxy.
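To first order in v/c, motion through the CMB rest frame makes the sky appear slightly hotter in the direction of motion and slightly cooler behind, T(θ) ≈ T₀(1 + (v/c) cos θ). Plugging in the quoted speed of 368 km/sec:

```python
import math

T0 = 2.725            # CMB temperature, K
c = 2.998e8           # speed of light, m/s
v = 368e3             # Solar System speed relative to the CMB, m/s

# Dipole amplitude: dT = T0 * v/c
dT = T0 * v / c

# Sky temperature ahead of and behind our direction of motion
T_hot = T0 * (1 + (v / c) * math.cos(0.0))
T_cold = T0 * (1 + (v / c) * math.cos(math.pi))

print(f"dipole amplitude = {dT*1e3:.2f} mK")
```

The dipole amplitude comes out to a few millikelvin, about a thousandth of the background temperature itself.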
Aside from the dipole anisotropy, the CMB is amazingly uniform when viewed from any direction in space, but not perfectly uniform. At the level of 0.005 percent, there are variations in temperature depending on the location on the sky. These fluctuations in background temperature are called the CMB anisotropy, and they help constrain current models of the Universe. For instance, the average angular size of the fluctuations is related to the overall curvature of the Universe. This is because, in the early Universe, not all parts were in causal contact with each other, which set an original spatial size for the thermal discrepancies. As the Universe continued to expand, these regional variations expanded with it, and the sizes observed today appear larger or smaller depending on how the Universe is curved. Therefore, to measure the energy density of the Universe, and hence to find its curvature, required measurements of the CMB temperature accurate to better than a part in 10,000.
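A quick bit of arithmetic shows why such precision is needed: fluctuations at the 0.005 percent level on a 2.725 K background amount to only about a hundred microkelvin.

```python
T0 = 2.725                 # CMB temperature, K
frac = 0.005 / 100.0       # 0.005 percent, as a fraction (5e-5)

# Absolute size of the temperature fluctuations
dT = frac * T0             # K

print(f"fluctuation amplitude = {dT*1e6:.0f} microkelvin")
```

That is indeed better than a part in 10,000, and far smaller than the millikelvin-scale dipole, which must be subtracted before the intrinsic anisotropy can be seen.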
In addition, some parts of the early Universe had greater mass density than others, causing gravitational infall of matter towards those regions. Through the Doppler effect, light emitted (or scattered) by matter moving towards these regions contributes to the anisotropy, producing what are known as “Doppler peaks” in the spatial frequency spectrum of the CMB anisotropy.
The examples discussed in this blog (exoplanet discovery, galaxy rotation curves, and the cosmic microwave background) are just a small sampling of the many ways the Doppler effect is used in astronomy. But clearly, the Doppler effect has played a key role in unraveling the long history of the Universe.
By David D. Nolte, Jan. 23, 2022
C. A. Doppler, “Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels (On the coloured light of the binary stars and some other stars of the heavens),” Proceedings of the Royal Bohemian Society of Sciences, vol. V, no. 2, pp. 465–482 (1842; reissued 1903).
H. Fizeau, “Acoustique et optique,” presented at the Société Philomathique de Paris, Paris (1848).
M. Mayor and D. Queloz, “A Jupiter-mass companion to a solar-type star,” Nature, vol. 378, no. 6555, pp. 355–359 (1995).
V. Rubin and W. K. Ford, Jr., “Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions,” The Astrophysical Journal, vol. 159, p. 379 (1970).