Top 10 Topics of Modern Dynamics

“Modern physics” in the undergraduate physics curriculum has been monopolized, on the one hand, by quantum mechanics, nuclear physics, particle physics and astrophysics. “Classical mechanics”, on the other hand, has been monopolized by Lagrangians and Hamiltonians.  While these are all admittedly interesting, the topics of modern dynamics that occupy the time and attention of most physics-degree holders, as they work in high-tech start-ups, established technology companies, or on Wall Street, are not to be found.  These are the topics of nonlinear dynamics, chaos theory, complex networks, finance, evolutionary dynamics and neural networks, among others.


There is a growing awareness that the undergraduate physics curriculum needs to be reinvigorated to make a physics degree relevant to the modern workplace.  To that end, I am listing my top 10 topics of modern dynamics that can form the foundation of a revamped upper-division (junior level) mechanics course.  Virtually all of these topics were once reserved for graduate-student-level courses, but all can be introduced to undergraduates in simple and intuitive ways without the need for advanced math.

1) Phase Space

The key change in perspective for modern dynamics that differentiates it from classical dynamics is the emphasis on the set of all possible trajectories that fill a “space” rather than on single trajectories defined by given initial conditions.  Rather than study the motion of one rock thrown from a cliff top, modern dynamics studies an infinity of rocks thrown from every possible point and with every possible velocity.  The space that contains this infinity of trajectories is known as phase space (or more generally state space).  The equation of motion in state space becomes the dynamical flow, replacing Newton’s second law as the central mathematical structure of physics.  Modern dynamics studies the properties of phase space rather than the properties of single trajectories, and draws rigorous conclusions about whole classes of possible motions.  This emphasis on classes of behavior is more general, more universal and more powerful, while also providing a fundamental “visual language” with which to describe the complex physics of complex systems.
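As a minimal sketch of this change in perspective (assuming only NumPy, SciPy and Matplotlib; the damped pendulum and all parameter values here are illustrative choices, not unique ones), the script below integrates a whole grid of initial conditions rather than a single trajectory and plots the resulting flow in the phase plane:

import numpy as np
from scipy.integrate import solve_ivp
from matplotlib import pyplot as plt

def flow(t, state, gamma=0.2):
    # dynamical flow of a damped pendulum in the (theta, omega) phase plane
    theta, omega = state
    return [omega, -np.sin(theta) - gamma*omega]

# an ensemble of initial conditions instead of a single rock from a cliff top
for theta0 in np.linspace(-np.pi, np.pi, 9):
    for omega0 in np.linspace(-2.0, 2.0, 5):
        sol = solve_ivp(flow, [0, 20], [theta0, omega0], max_step=0.05)
        plt.plot(sol.y[0], sol.y[1], lw=0.5)

plt.xlabel('theta')
plt.ylabel('omega')
plt.title('Phase-space flow of a damped pendulum')
plt.show()

Setting gamma to zero turns this dissipative flow into an area-preserving Hamiltonian flow, connecting directly to the discussion of invariants below.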

2) Metric Space

The Cartesian coordinate plane that we were all taught in high school tends to dominate our thinking, biasing us towards linear flat geometries.  Yet most dynamics do not take place in such simple Cartesian spaces.  As a case in point, virtually every real-world dynamics problem has constraints that confine the motion to a surface.  Furthermore, the number of degrees of freedom of a dynamical system usually exceeds our common 3-space, expanding to hundreds or even thousands of dimensions.  The surfaces of constraint are hypersurfaces of high dimension (known as manifolds) and are almost certainly not flat hyperplanes. This daunting prospect of high-dimensional warped spaces can be surprisingly simplified through Bernhard Riemann’s concept of the metric space.  Understanding the geometry of a metric space can be as simple as applying Pythagoras’ Theorem to sets of coordinates.  For instance, the metric tensor can be taught and used without requiring students to know anything of tensor calculus.  At the same time, it provides a useful tool for understanding dynamical patterns in phase space as well as orbits around black holes.
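The core idea can be stated in one line: the metric tensor generalizes Pythagoras’ Theorem to curved coordinates.  For a simple two-dimensional example, the surface of a sphere of radius R,

$$ ds^2 = g_{ab}\,dx^a dx^b \quad\longrightarrow\quad ds^2 = R^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right) $$

so the familiar sum of squares simply acquires coordinate-dependent coefficients $g_{ab}$.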

3) Invariants

Introductory physics classes emphasize the conservation of energy, linear momentum and angular momentum as if they were isolated special cases.  Yet there is a grand structure that yields a universal set of conservation laws: integrable Hamiltonian systems.  An integrable system is one for which there are as many invariants of motion as there are degrees of freedom.  Amazingly, these conservation laws can all be captured by a single procedure known as (canonical) transformation to action-angle coordinates.  When expressed in action-angle form, these Hamiltonians take on extremely simple expressions.  They are also the starting point for the study of perturbations when small nonintegrable terms are added to the Hamiltonian.  As the perturbations grow, this provides one doorway to the emergence of chaos.
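In action-angle coordinates, an integrable Hamiltonian depends only on the actions:

$$ H = H(J_1,\dots,J_n), \qquad \dot{J}_k = -\frac{\partial H}{\partial \theta_k} = 0, \qquad \dot{\theta}_k = \frac{\partial H}{\partial J_k} = \omega_k(J) $$

Every action is an invariant, and every angle advances at a constant rate, which is about as simple as dynamics can get.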

4) Chaos theory

“Chaos theory” is the more popular title for what is generally called “nonlinear dynamics”.  Nonlinear dynamics takes place in state space when the dynamical flow equations have terms that are algebraic products of variables.  One important distinction between chaos theory and nonlinear dynamics is the unpredictability that can emerge in the dynamics when the number of variables is three or higher.  The equations, and the resulting dynamics, are still deterministic, but the trajectories show extreme sensitivity to initial conditions (SIC).  In addition, the dynamical trajectories can relax onto a submanifold of the original state space known as a strange attractor, which is typically a fractal structure.
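Sensitivity to initial conditions is easy to demonstrate numerically.  As a sketch (the standard Lorenz system with its conventional parameters is used here as the example), the script below integrates two trajectories whose initial conditions differ by one part in a billion and prints their growing separation:

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, v, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # the classic three-variable Lorenz flow
    x, y, z = v
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

t_eval = np.linspace(0, 25, 2501)
a = solve_ivp(lorenz, [0, 25], [1.0, 1.0, 1.0], t_eval=t_eval, max_step=0.01)
b = solve_ivp(lorenz, [0, 25], [1.0 + 1e-9, 1.0, 1.0], t_eval=t_eval, max_step=0.01)

# distance between the two trajectories at selected times
sep = np.linalg.norm(a.y - b.y, axis=0)
for t in [0, 5, 10, 15, 20, 25]:
    print(f"t = {t:4.1f}   separation = {sep[t_eval.searchsorted(t)]:.3e}")

The separation grows roughly exponentially until it saturates at the size of the attractor, which is the practical meaning of deterministic unpredictability.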

5) Synchronization

One of the central paradigms of nonlinear dynamics is the autonomous oscillator.  Unlike the harmonic oscillator that eventually decays due to friction, autonomous oscillators are steady-state oscillators that convert steady energy input into oscillatory behavior.  A prime example is the pendulum clock that converts the steady weight of a hanging mass into a sustained oscillation.  When two autonomous oscillators (that normally oscillate at slightly different frequencies) are coupled weakly together, they can synchronize to the same frequency.  This effect was discovered by Christiaan Huygens when he observed two pendulum clocks hanging next to each other on a wall synchronize the swings of their pendula.  Synchronization is a central paradigm in modern dynamics for several reasons.  First, it demonstrates the emergence of order when a collective behavior emerges from a collection of individual systems (this phenomenon of emergence is one of the fundamental principles of complex system science).  Second, synchronized systems include such critical systems as the beating heart and the thinking brain.  Third, synchronization becomes a useful tool to explore coupled systems that have a large number of linked subsystems, as in networks of nodes.
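For two weakly coupled phase oscillators (a minimal Kuramoto-type sketch; the sinusoidal coupling is the standard textbook choice), the phases obey

$$ \dot{\theta}_1 = \omega_1 + K\sin(\theta_2 - \theta_1), \qquad \dot{\theta}_2 = \omega_2 + K\sin(\theta_1 - \theta_2) $$

so the phase difference $\phi = \theta_1 - \theta_2$ satisfies $\dot{\phi} = \Delta\omega - 2K\sin\phi$, which has a stable fixed point (frequency locking) whenever $|\Delta\omega| \le 2K$.  Weak coupling suffices, provided the detuning is small, which is exactly what Huygens observed.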

6) Network Dynamics

Networks have become one of the driving forces of our modern interconnected society.  The structure of networks, the dynamics of nodes in networks, and the dynamic growth of networks are all coming into focus as we live our lives in multiple interconnected webs.  Dynamics on networks include problems like diffusion and the spread of infection and connect with topics of percolation theory and critical phenomena.  Nonlinear dynamics on networks provide key opportunities and examples to study complex interacting systems.
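As one standard example, diffusion on a network is governed by the graph Laplacian $L = D - A$ (degree matrix minus adjacency matrix):

$$ \dot{x}_i = -\beta \sum_j L_{ij}\, x_j $$

The eigenvalues of $L$ set the relaxation rates of patterns spread across the nodes, so the spectrum of the network plays the role that wavevectors play in ordinary diffusion.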

7) Neural Networks

Perhaps the most enigmatic network is the network of neurons in the brain.  The emergence of intelligence and of sentience is one of the greatest scientific questions.  At a much simpler level, the nonlinear dynamics of small numbers of neurons display the properties of autonomous oscillators and synchronization, while larger sets of neurons become interconnected into dynamic networks.  The dynamics of neurons and of neural networks is a key topic in modern dynamics.  Not only can the physics of the networks be studied, but neural networks become tools for studying other complex systems.
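One widely used minimal model of a neuron as an autonomous oscillator is the FitzHugh–Nagumo system (the parameters $a$, $b$, $\epsilon$ and the input current $I$ here are generic placeholders, not unique choices):

$$ \dot{v} = v - \frac{v^3}{3} - w + I, \qquad \dot{w} = \epsilon\,(v + a - b\,w) $$

For a steady input current the neuron settles onto a stable limit cycle of repetitive spiking, and coupling many such units produces the synchronization phenomena described above.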

8) Evolutionary Dynamics

The emergence of life and the evolution of species stands as another of the greatest scientific questions of our day.  Although this topic traditionally is studied by the biological sciences (and mathematical biology), physics has a surprising amount to say on the topic.  The dynamics of evolution can be captured in the same types of nonlinear flows that live in state space.  For instance, population dynamics can be described as a large ensemble of interacting individuals that are born, flourish and die dependent on their environment and on their complicated interactions with other members in their ecosystem.  These types of problems have state spaces of extremely high dimension far beyond what we can visualize.  Yet the emergence of structure and of patterns from the complex dynamics helps to reduce the complexity, as do conceptual metaphors like evolutionary fitness landscapes.
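A canonical example is the replicator equation, in which the fraction $x_i$ of species (or strategy) $i$ grows according to its fitness relative to the population average:

$$ \dot{x}_i = x_i\left( f_i(\mathbf{x}) - \sum_j x_j f_j(\mathbf{x}) \right) $$

This is a nonlinear flow on a state space (the simplex of population fractions), exactly like the other flows in this list.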

9) Economic Dynamics

A non-negligible fraction of both undergraduate and graduate physics degree holders end up on Wall Street or in related industries.  This is partly because physicists are numerically fluent while also possessing sound intuition.  Therefore, economic dynamics is a potentially valuable addition to the modern dynamics curriculum and easily expressed using the concepts of dynamical flows and state space.  Both microeconomics (business competition, business cycles) and macroeconomics (investment and savings, liquidity and money, inflation, unemployment) can be described and analyzed using mathematical flows that are the central toolkit of modern dynamics.
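As one hedged illustration (the classic Walrasian price-adjustment picture, stated schematically), the price of a good rises when demand exceeds supply and falls when supply exceeds demand,

$$ \dot{p} = \kappa\left[ D(p) - S(p) \right] $$

with equilibrium at the price where the two curves cross and stability set by their slopes, which is precisely the fixed-point language of state space.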

10) Relativity

Special relativity is a common topic in the current upper-division physics curriculum, while general relativity is viewed as too difficult to expose undergraduates to.  This is mostly an artificial division, because Einstein’s “happiest thought” occurred when he realized that an observer in free fall is in a force-free (inertial) frame.  The equivalence principle, which states that a frame in uniform acceleration is indistinguishable from a stationary frame in a uniform gravitational field, opens a wide door that connects special relativity to general relativity.  In an undergraduate course on modern dynamics, the metric tensor (described above) is introduced in simple terms, providing the foundation to develop Minkowski spacetime, and the next natural extension is to warped spacetime—all at the simple level of linear algebra combined with partial differentiation.  General relativity ties in many of the principles of the modern dynamics curriculum (dynamical flows, state space, metric space, invariants, nonlinear dynamics), and the students can simulate orbits around black holes with ease.  I have been teaching General Relativity to undergraduates for over ten years now, and it is a highlight of the course.
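For instance, orbits around a non-rotating black hole need nothing more than the Schwarzschild effective potential for a test mass with specific angular momentum $\ell$,

$$ V_{\mathrm{eff}}(r) = -\frac{GM}{r} + \frac{\ell^2}{2r^2} - \frac{GM\ell^2}{c^2 r^3} $$

The final term, absent in Newtonian gravity, produces the precessing and plunging orbits that students can integrate numerically.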

Introduction to Modern Dynamics

For further reading and more details, these top 10 topics of modern dynamics are defined and explored in the undergraduate physics textbook “Introduction to Modern Dynamics: Chaos, Networks, Space and Time” published by Oxford University Press (2015).  This textbook is designed for use in a two-semester junior-level mechanics course.  It introduces the topics of modern dynamics, while still presenting traditional materials that the students need for their physics GREs.


Dark Matter Mysteries

There is more to the Universe than meets the eye—way more. Over the past quarter century, it has become clear that all the points of light in the night sky, the stars, the Milky Way, the nebulae, all the distant galaxies, when added up with the nonluminous dust, constitute only a small fraction of the total energy density of the Universe. In fact, “normal” matter, like the stuff of which we are made—star dust—contributes only 4% to everything that is. The rest is something else, something different, something that doesn’t show up in the most sophisticated laboratory experiments, not even the Large Hadron Collider [1]. It is unmeasurable on terrestrial scales, and even at the scale of our furthest probe—the Voyager I spacecraft that left our solar system several years ago—there have been no indications of deviations from Newton’s law of gravity. To the highest precision we can achieve, it is invisible and non-interacting on any scale smaller than our stellar neighborhood. Perhaps it can never be detected in any direct sense. If so, then how do we know it is there? The answer comes from galactic trajectories. The motions in and of galaxies have been, and continue to be, the principal laboratory for the investigation of cosmic questions about the dark matter of the universe.

Today, the nature of Dark Matter is one of the greatest mysteries in physics, and the search for direct detection of Dark Matter is one of physics’ greatest pursuits.


Island Universes

The nature of the Milky Way was a mystery through most of human history. To the ancient Greeks it was the milky circle (γαλαξίας κύκλος, galaxias kyklos) and to the Romans it was literally the milky way (via lactea). Aristotle, in his Meteorologica, briefly suggested that the Milky Way might be composed of a large number of distant stars, but then rejected that idea in favor of a wisp, exhaled like breath on a cold morning, from the stars. The Milky Way is unmistakable on a clear dark night to anyone who looks up, far away from city lights. It was a constant companion through most of human history, like the constant stars, until electric lights extinguished it from much of the world in the past hundred years. Geoffrey Chaucer, in his Hous of Fame (1380) proclaimed “See yonder, lo, the Galaxyë Which men clepeth the Milky Wey, For hit is whyt.” (See yonder, lo, the galaxy which men call the Milky Way, for it is white.)


Hubble image of one of the galaxies in the Coma Cluster of galaxies that Fritz Zwicky used to announce that the universe contained a vast amount of dark matter.

Aristotle was fated, again, to be corrected by Galileo. Using his telescope in 1610, Galileo was the first to resolve a vast field of individual faint stars in the Milky Way. This led Immanuel Kant, in 1755, to propose that the Milky Way Galaxy was a rotating disk of stars held together by Newtonian gravity like the disk of the solar system, but much larger. He went on to suggest that the faint nebulae might be other far distant galaxies, which he called “island universes”. The first direct evidence that nebulae were distant galaxies came in 1917 with the observation of a supernova in the Andromeda Galaxy by Heber Curtis. Based on the brightness of the supernova, he estimated that the Andromeda Galaxy was over a million light years away, but uncertainty in the distance measurement kept the door open for the possibility that it was still part of the Milky Way, and hence the possibility that the Milky Way was the Universe.

The question of the nature of the nebulae hinged on the problem of measuring distances across vast amounts of space. By line of sight, there is no yardstick to tell how far away something is, so other methods must be used. Stellar parallax, for instance, can gauge the distance to nearby stars by measuring slight changes in the apparent positions of the stars as the Earth changes its position around the Sun through the year. This effect was used successfully for the first time in 1838 by Friedrich Bessel, and by the year 2000 more than a hundred thousand stars had their distances measured using stellar parallax. Recent advances in satellite observatories have extended the reach of stellar parallax to a distance of about 10,000 light years from the Sun, but this is still only a tenth of the diameter of the Milky Way. To measure distances to the far side of our own galaxy, or beyond, requires something else.

Because of Henrietta Leavitt

In 1908 Henrietta Leavitt, working at the Harvard Observatory as one of the famous female “computers”, discovered that stars whose luminosities oscillate with a steady periodicity, stars known as Cepheid variables, have a relationship between the period of oscillation and the average luminosity of the star [2]. By measuring the distance to nearby Cepheid variables using stellar parallax, the absolute brightness of the Cepheids could be calibrated, and the Cepheids could then be used as “standard candles”. This meant that by observing the period of oscillation and the brightness of a distant Cepheid, the distance to the star could be calculated. Edwin Hubble (1889 – 1953), working at the Mount Wilson Observatory in Pasadena, CA, observed Cepheid variables in several of the brightest nebulae in the night sky. In 1925 he announced his observation of individual Cepheid variables in Andromeda and calculated that Andromeda was more than a million light years away, more than 10 Milky Way diameters (the actual number is about 25 Milky Way diameters). This meant that Andromeda was a separate galaxy and that the Universe was made of more than just our local cluster of stars. Once this door was opened, the known Universe expanded quickly up to a hundred Milky Way diameters as Hubble measured the distances to scores of our neighboring galaxies in the Virgo galaxy cluster. However, it was more than just our knowledge of the universe that was expanding.

Armed with measurements of galactic distances, Hubble was in a unique position to relate those distances to the speeds of the galaxies by combining his distance measurements with spectroscopic observations of the light spectra made by other astronomers. These galaxy emission spectra could be used to measure the Doppler effect on the light emitted by the stars of the galaxy. The Doppler effect, first proposed by Christian Doppler (1803 – 1853) in 1843, causes the wavelength of emitted light to be shifted to the red for objects receding from an observer, and shifted to the blue for objects approaching an observer. The amount of spectral shift is directly proportional to the object’s speed. Doppler’s original proposal was to use this effect to measure the speed of binary stars, which is indeed performed routinely today by astronomers for just this purpose, but in Doppler’s day spectroscopy was not precise enough to accomplish this. However, by the time Hubble was making his measurements, optical spectroscopy had become a precision science, and the Doppler shift of the galaxies could be measured with great accuracy. In 1929 Hubble announced the discovery of a proportional relationship between the distance to the galaxies and their Doppler shift. What he found was that the galaxies [3] are receding from us with speeds proportional to their distance [4]. Hubble himself made no claims at that time about what these data meant from a cosmological point of view, but others quickly noted that this Hubble effect could be explained if the universe were expanding.
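This linear relation is now written as Hubble’s law,

$$ v = H_0\, d $$

where $H_0$ is the Hubble constant. Hubble’s own 1929 value was roughly 500 km/s/Mpc, several times larger than today’s accepted value of about 70 km/s/Mpc because his distance calibration was off, but the proportionality itself was the discovery.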

Einstein’s Mistake

The state of the universe had been in doubt ever since Heber Curtis observed the supernova in the Andromeda galaxy in 1917. Einstein published a paper that same year in which he sought to resolve a problem that had appeared in the solution to his field equations. It appeared that the universe should either be expanding or contracting. Because the night sky literally was the firmament, it went against the mentality of the times to think of the universe as something intrinsically unstable, so Einstein fixed it with an extra term in his field equations, adding something called the cosmological constant, denoted by the Greek lambda (Λ). This extra term put the universe into a static equilibrium, and Einstein could rest easy with his firm trust in the firmament. However, a few years later, in 1922, the Russian physicist and mathematician Alexander Friedmann (1888 – 1925) published a paper that showed that Einstein’s static equilibrium was actually unstable, meaning that small perturbations away from the current energy density would either grow or shrink. This same result was found independently by the Belgian astronomer Georges Lemaître in 1927, who suggested that not only was the universe  expanding, but that it had originated in a singular event (now known as the Big Bang). Einstein was dismissive of Lemaître’s proposal and even quipped “Your calculations are correct, but your physics is atrocious.” [5] But after Hubble published his observation on the red shifts of galaxies in 1929, Lemaître pointed out that the redshifts would be explained by an expanding universe. Although Hubble himself never fully adopted this point of view, Einstein immediately saw it for what it was—a clear and simple explanation for a basic physical phenomenon that he had foolishly overlooked. Einstein retracted his cosmological constant in embarrassment and gave his support to Lemaître’s expanding universe. Nonetheless, Einstein’s physical intuition was never too far from the mark, and the cosmological constant has been resurrected in recent years in the form of Dark Energy. However, something else, both remarkable and disturbing, reared its head in the intervening years—Dark Matter.

Fritz Zwicky: Gadfly Genius

It is difficult to write about important advances in astronomy and astrophysics of the 20th century without tripping over Fritz Zwicky. As the gadfly genius that he was, he had a tendency to shoot close to the mark, or at least some of his many crazy ideas tended to be right. He was also in the right place at the right time, at the Mt. Wilson Observatory near Caltech with regular access to the world’s largest telescope. Shortly after Hubble proved that the nebulae were other galaxies and used Doppler shifts to measure their speeds, Zwicky (with Walter Baade) began a study of as many galactic speeds and distances as they could. He was able to construct a three-dimensional map of the galaxies in the relatively nearby Coma galaxy cluster, together with their velocities. He then deduced that the galaxies in this isolated cluster were gravitationally bound to each other, performing a whirling dance in each other’s thrall, like stars in globular star clusters in our Milky Way. But there was a serious problem.

Star clusters display average speeds and average gravitational potentials that are nicely balanced, a result predicted from a theorem of mechanics that was named the Virial Theorem by Rudolf Clausius in 1870. The Virial Theorem states that the average kinetic energy of a system of many bodies is directly related to the average potential energy of the system. By applying the Virial Theorem to the galaxies of the Coma cluster, Zwicky found that the dynamics of the galaxies were badly out of balance. The galaxies were moving far too fast relative to the gravitational potential—so fast, in fact, that they should have flown off away from each other and not been bound at all. To reconcile this discrepancy of the galactic speeds with the obvious fact that the galaxies were gravitationally bound, Zwicky postulated that there was unobserved matter present in the cluster that supplied the missing gravitational potential. The amount of missing potential was very large, and Zwicky’s calculations predicted that there was 400 times as much invisible matter, which he called “dark matter”, as visible. With his usual flair for the dramatic, Zwicky announced his findings to the world in 1933, but the world shrugged—after all, it was just Zwicky.
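In its simplest form the Virial Theorem for a self-gravitating system reads

$$ 2\langle T \rangle + \langle U \rangle = 0 \quad\Rightarrow\quad M \approx \frac{\sigma^2 R}{G} $$

so a measured velocity dispersion $\sigma$ and cluster radius $R$ yield the total gravitating mass $M$ (up to a geometry-dependent factor of order unity). It was this dynamical mass that came out hundreds of times larger than the luminous mass.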

Nonetheless, Zwicky’s and Baade’s observations of the structure of the Coma cluster, and the calculations using the Virial Theorem, were verified by other astronomers. Something was clearly happening in the Coma cluster, but other scientists and astronomers did not have the courage or vision to make the bold assessment that Zwicky had. The problem of the Coma cluster, and a growing number of additional galaxy clusters that have been studied during the succeeding years, was to remain a thorn in the side of gravitational theory through half a century, and indeed remains a thorn to the present day. It is an important clue to a big question about the nature of gravity, which is arguably the least understood of the four forces of nature.

Vera Rubin: Galaxy Rotation Curves

Galactic clusters are among the largest coherent structures in the observable universe, and there are many questions about their origin and dynamics. Smaller gravitationally bound structures that can be handled more easily are individual galaxies themselves. If something important was missing in the dynamics of galactic clusters, perhaps the dynamics of the stars in individual galaxies could help shed light on the problem. In the late 1960s and early 1970s Vera Rubin at the Carnegie Institution of Washington used newly developed spectrographs to study the speeds of stars in individual galaxies. From simple Newtonian dynamics it is well understood that the speed of stars as a function of distance from the galactic center should increase with increasing distance up to the average radius of the galaxy, and then should decrease at larger distances. This trend in speed as a function of radius is called a rotation curve. As Rubin constructed the rotation curves for many galaxies, the increase of speed with increasing radius at small radii emerged as a clear trend, but the stars farther out in the galaxies were all moving far too fast. In fact, they were moving so fast that they exceeded the escape velocity and should have flown off into space long ago. This disturbing pattern was repeated consistently in one rotation curve after another.
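The Newtonian expectation is easy to state: a star at radius $r$ circling the mass $M(r)$ interior to its orbit moves at

$$ v(r) = \sqrt{\frac{G\,M(r)}{r}} $$

so beyond the luminous edge of a galaxy, where $M(r)$ should stop growing, the speeds should fall off as $v \propto 1/\sqrt{r}$. Instead, Rubin’s curves stayed flat, implying $M(r) \propto r$ far beyond the visible stars.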

A simple fix to the problem of the rotation curves is to assume that there is significant mass present in every galaxy that is not observable either as luminous matter or as interstellar dust. In other words, there is unobserved matter, dark matter, in all galaxies that keeps all their stars gravitationally bound. Estimates of the amount of dark matter needed to fix the rotation curves give about five times as much dark matter as observable matter. This is not the same factor of 400 that Zwicky had estimated for the Coma cluster, but it is still a surprisingly large number. In short, 80% of the mass of a galaxy is not normal. It is neither a perturbation nor an artifact, but something fundamental and large. In fact, there is so much dark matter in the Universe that it must have a major effect on the overall curvature of space-time according to Einstein’s field equations. One of the best probes of the large-scale structure of the Universe is the afterglow of the Big Bang, known as the cosmic microwave background (CMB).

The Big Bang

The Big Bang was incredibly hot, but as the Universe expanded, its temperature cooled. About 379,000 years after the Big Bang, the Universe cooled sufficiently that the electron-nucleon plasma that filled space at that time condensed primarily into hydrogen. Plasma is charged and hence is opaque to photons.  Hydrogen, on the other hand, is neutral and transparent. Therefore, when the hydrogen condensed, the thermal photons suddenly flew free, unimpeded, and have continued unimpeded, continuing to cool, until today the thermal glow has reached about three degrees above absolute zero. Photons in thermal equilibrium with this low temperature have an average wavelength of a few millimeters corresponding to microwave frequencies, which is why the afterglow of the Big Bang got its CMB name.
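The arithmetic of the cooling is simple: the photon temperature falls inversely with the cosmic scale factor, so

$$ T_0 = \frac{T_{\mathrm{dec}}}{1 + z_{\mathrm{dec}}} \approx \frac{3000\ \mathrm{K}}{1100} \approx 2.7\ \mathrm{K} $$

where decoupling occurred at about 3000 K and a redshift near 1100, which is why the afterglow appears today at microwave wavelengths.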

The CMB is amazingly uniform when viewed from any direction in space, but it is not perfectly uniform. At the level of 0.005 percent, there are variations in the temperature depending on the location on the sky. These fluctuations in background temperature are called the CMB anisotropy, and they play an important role in the interpretation of current models of the Universe. For instance, the average angular size of the fluctuations is related to the overall curvature of the Universe. This is because in the early Universe not all parts of it were in communication with each other, because of the finite age of the Universe and the finite speed of light. This set an original spatial size for the thermal discrepancies. As the Universe continued to expand, the size of these regional variations expanded with it, and the sizes observed today appear larger or smaller depending on how the universe is curved. Therefore, to measure the energy density of the Universe, and hence to find its curvature, required measurements of the CMB temperature that were accurate to better than a part in 10,000.


Andrew Lange and Paul Richards: The Lambda and the Omega

In graduate school at Berkeley in 1982, my first graduate research assistantship was in the group of Paul Richards, one of the world leaders in observational cosmology. One of his senior graduate students at the time, Andrew Lange, was sharp and charismatic and leading an ambitious project to measure the cosmic background radiation on an experiment borne by a Japanese sounding rocket. My job was to create a set of far-infrared dichroic beamsplitters for the spectrometer. A few days before launch, a technician noticed that the explosive bolts on the rocket nose-cone had expired. When fired, these would open the cone and expose the instrument at high altitude to the CMB. The old bolts were duly replaced with fresh ones. On launch day, the instrument and the sounding rocket worked perfectly, but the explosive bolts failed to fire, and the spectrometer made excellent measurements of the inside of the nose cone all the way up and all the way down until it sank into the Pacific Ocean. I left Paul’s cosmology group for a more promising career in solid state physics under the direction of Eugene Haller and Leo Falicov, but Paul and Andrew went on to great fame with high-altitude balloon-borne experiments that flew at 40,000 feet, above most of the atmosphere, to measure the CMB anisotropy.

By the late nineties, Andrew was established as a professor at Cal Tech. He was co-leading an experiment called BOOMerANG that flew a high-altitude balloon around Antarctica, while Paul was leading an experiment called MAXIMA that flew a balloon from Palestine, Texas. The two experiments had originally been coordinated together, but operational differences turned the former professor/student team into competitors to see who would be the first to measure the shape of the Universe through the CMB anisotropy.  BOOMerANG flew in 1997 and again in 1998, followed by MAXIMA that flew in 1998 and again in 1999. In early 2000, Andrew and the BOOMerANG team announced that the Universe was flat, confirmed quickly by an announcement by MAXIMA [BoomerMax]. This means that the energy density of the Universe is exactly critical, and there is precisely enough gravity to balance the expansion of the Universe. This parameter is known as Omega (Ω).  What was perhaps more important than this discovery was the announcement by Paul’s MAXIMA team that the amount of “normal” baryonic matter in the Universe made up only about 4% of the critical density. This is a shockingly small number, but agreed with predictions from Big Bang nucleosynthesis. When combined with independent measurements of Dark Energy known as Lambda (Λ), it also meant that about 25% of the energy density of the Universe is made up of Dark Matter—about five times more than ordinary matter. Zwicky’s Dark Matter announcement of 1933, virtually ignored by everyone, had been 75 years ahead of its time [6].

Dark Matter Pursuits

Today, the nature of Dark Matter is one of the greatest mysteries in physics, and the search for direct detection of Dark Matter is one of physics’ greatest pursuits. The indirect evidence for Dark Matter is incontestable—the CMB anisotropy, matter filaments in the early Universe, the speeds of galaxies in bound clusters, rotation curves of stars in galaxies, gravitational lensing—all of these agree and confirm that most of the gravitational mass of the Universe is Dark. But what is it? The leading idea today is that it consists of weakly interacting particles, called cold dark matter (CDM). The dark matter particles pass right through you without ever disturbing a single electron. This is unlike unseen cosmic rays that are also passing through your body at the rate of several per second, leaving ionized trails like bullet holes through your flesh. Dark matter passes undisturbed through the entire Earth. This is not entirely unbelievable, because neutrinos, which are part of “normal” matter, also mostly pass through the Earth without interaction. Admittedly, the physics of neutrinos is not completely understood, but if ordinary matter can interact so weakly, then dark matter is just more extreme and perhaps not so strange. Of course, this makes detection of dark matter a big challenge. If a particle exists that won’t interact with anything, then how would you ever measure it? There are a lot of clever physicists with good ideas how to do it, but none of the ideas are easy, and none have worked yet.

[1] As of the writing of this chapter, Dark Matter has not been observed in particle form, but only through gravitational effects at large (galactic) scales.

[2] Leavitt, Henrietta S. “1777 Variables in the Magellanic Clouds”. Annals of Harvard College Observatory. LX(IV) (1908) 87-110

[3] Excluding the local group of galaxies that include Andromeda and Triangulum that are gravitationally influenced by the Milky Way.

[4] Hubble, Edwin (1929). “A relation between distance and radial velocity among extra-galactic nebulae”. PNAS 15 (3): 168–173.

[5] Deprit, A. (1984). “Monsignor Georges Lemaître”. In A. Barger (ed). The Big Bang and Georges Lemaître. Reidel. p. 370.

[6] I was amazed to read in Science magazine in 2004 or 2005, in a section called “Nobel Watch”, that Andrew Lange was a candidate for the Nobel Prize for his work on BOOMerANG.  Around that same time I invited Paul Richards to Purdue to give our weekly physics colloquium.  There was definitely a buzz going around that the BOOMerANG and MAXIMA collaborations were being talked about in Nobel circles.  The next year, the Nobel Prize of 2006 was indeed awarded for work on the Cosmic Microwave Background, but to Mather and Smoot for their earlier work on the COBE satellite.

How to Weave a Tapestry from Hamiltonian Chaos

While virtually everyone recognizes the famous Lorenz “Butterfly”, the strange attractor  that is one of the central icons of chaos theory, in my opinion Hamiltonian chaos generates far more interesting patterns. This is because Hamiltonians conserve phase-space volume, stretching and folding small volumes of initial conditions as they evolve in time, until they span large sections of phase space. Hamiltonian chaos is usually displayed as multi-color Poincaré sections (also known as first-return maps) that are created when a set of single trajectories, each represented by a single color, pierce the Poincaré plane over and over again.

The archetype of all Hamiltonian systems is the harmonic oscillator.


A Hamiltonian tapestry generated from the Web Map for K = 0.616 and q = 4.

Periodically-Kicked Hamiltonian

The classic Hamiltonian system, perhaps the archetype of all Hamiltonian systems, is the harmonic oscillator.  The physics of the harmonic oscillator is taught in the most elementary courses, because every stable system in the world is approximated, to lowest order, as a harmonic oscillator.  As the simplest dynamical system, one would think that it holds no surprises.  But surprisingly, it can create the most beautiful tapestries of color when pulsed periodically and mapped onto the Poincaré plane.

The Hamiltonian of the periodically kicked harmonic oscillator is converted into the Web Map, represented as an iterative mapping as

$$
\begin{aligned}
x_{n+1} &= \left( x_n + K\sin y_n \right)\cos\alpha + y_n \sin\alpha \\
y_{n+1} &= -\left( x_n + K\sin y_n \right)\sin\alpha + y_n \cos\alpha
\end{aligned}
$$

There can be resonance between the sequence of kicks and the natural oscillator frequency such that α = 2π/q. At these resonances, intricate web patterns emerge. The Web Map produces a web of stochastic layers when plotted on an extended phase plane. The symmetry of the web is controlled by the integer q, and the stochastic layer width is controlled by the perturbation strength K.


A tapestry for q = 6.

Web Map Python Program

Iterated maps are easy to implement in code.  Here is a simple Python code to generate maps of different types.  You can play with the coupling constant K and the periodicity q.  For small K, the tapestries are mostly regular.  But as the coupling K increases, stochastic layers emerge.  When q is a small even number, tapestries with regular symmetry are generated.  However, when q is a small odd integer, the tapestries turn into quasi-crystals.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Web map: the periodically kicked harmonic oscillator as an iterated map.
@author: nolte
"""

import numpy as np
from matplotlib import pyplot as plt

plt.close('all')

phi = (1 + np.sqrt(5))/2   # golden mean
K = 1 - phi                # kick strength; try (0.618, 4), (0.618, 5), (0.618, 7), (1.2, 4)
q = 4                      # web symmetry; try 4, 5, 6, 7
alpha = 2*np.pi/q          # resonance between the kicks and the oscillator frequency

np.random.seed(2)
plt.figure(1)
for eloop in range(1000):  # ensemble of random initial conditions

    xlast = 50*np.random.random()
    ylast = 50*np.random.random()

    xnew = np.zeros(shape=(300,))
    ynew = np.zeros(shape=(300,))

    for loop in range(300):  # iterate the web map

        xnew[loop] = (xlast + K*np.sin(ylast))*np.cos(alpha) + ylast*np.sin(alpha)
        ynew[loop] = -(xlast + K*np.sin(ylast))*np.sin(alpha) + ylast*np.cos(alpha)

        xlast = xnew[loop]
        ylast = ynew[loop]

    plt.plot(xnew, ynew, 'o', ms=1)  # one color per trajectory

plt.xlim(-60, 60)
plt.ylim(-60, 60)
plt.title('WebMap')
plt.savefig('WebMap')


References and Further Reading

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time (Oxford, 2015)

G. M. Zaslavsky,  Hamiltonian chaos and fractional dynamics. (Oxford, 2005)


Wave-Particle Duality and Hamilton’s Physics

Wave-particle duality was one of the greatest early challenges to quantum physics, partially clarified by Bohr’s Principle of Complementarity, but never easily grasped even today.  Yet long before Einstein proposed the indivisible quantum  of light (later to be called the photon by the chemist Gilbert Lewis), wave-particle duality was firmly embedded in the foundations of the classical physics of mechanics.

Light led the way to mechanics more than once in the history of physics.


Willebrord Snel van Royen

The Dutch physicist Willebrord Snel van Royen in 1621 derived an accurate mathematical description of the refraction of beams of light at a material interface in terms of sine functions, but he did not publish.  Fifteen years later, as Descartes was looking for an example to illustrate his new method of analytic geometry, he discovered the same law, unaware of Snel’s prior work.  In France the law is known as the Law of Descartes.  In the Netherlands (and much of the rest of the world) it is known as Snell’s Law.  Descartes based his derivation on a corpuscular picture of light, long before Newton made corpuscles famous.  The brilliant Fermat likewise adopted corpuscles when he developed his principle of least time to explain the law of Descartes in 1662.  Yet Fermat was forced to assume that the corpuscles traveled slower in the denser material, even though it was generally accepted that light should travel faster in denser media, just as sound did.  Seventy-five years later, Maupertuis continued the tradition when he developed his principle of least action and applied it to light corpuscles traveling faster through denser media, just as Descartes had prescribed.


The wave view of Snell’s Law (on the left). The source resides in the medium with higher speed. As the wave fronts impinge on the interface to a medium with lower speed, the wave fronts in the slower medium flatten out, causing the ray perpendicular to the wave fronts to tilt downwards. The particle view of Snell’s Law (on the right). The momentum of the particle in the second medium is larger than in the first, but the transverse components of the momentum (the x-components) are conserved, causing a tilt downwards of the particle’s direction as it crosses the interface. [i]
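In formulas, the two pictures make opposite predictions for the speeds.  The wave construction gives Snell’s law in the form

$$ \frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2} $$

(light slower in the denser medium), while the conserved transverse momentum of the corpuscle gives

$$ p_1 \sin\theta_1 = p_2 \sin\theta_2 $$

(light faster in the denser medium).  Both reproduce the observed ratio of sines, which is why the question stayed open until the speed of light in water was measured directly in the mid-nineteenth century.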

Maupertuis’ paper applying the principle of least action to the law of Descartes was a critical juncture in the development of dynamics.  His assumption of faster speeds in denser material was wrong, but he got the right answer because of the way he defined action for light.  Encouraged by the success of his (incorrect) theory, Maupertuis extended the principle of least action to mechanical systems, and this time used the right theory to get the right answers.  Despite Maupertuis’ misguided aspirations to become a physicist of equal stature to Newton, he was no mathematician, and he welcomed (and somewhat appropriated) the contributions of Leonhard Euler on the topic, who established the mathematical foundations for the principle of least action.  This work, in turn, attracted the attention of the Italian mathematician Lagrange, who developed a general new approach (Lagrangian mechanics) to mechanical systems that included the principle of least action as a direct consequence of his equations of motion.  This was the first time that light led the way to classical mechanics.  A hundred years after Maupertuis, it was time again for light to lead the way to a deeper mechanics known as Hamiltonian mechanics.

Young Hamilton

William Rowan Hamilton (1805—1865) was a prodigy as a boy who knew parts of thirteen languages by the time he was thirteen years old. These were Greek, Latin, Hebrew, Syriac, Persian, Arabic, Sanskrit, Hindoostanee, Malay, French, Italian, Spanish, and German. In 1823 he entered Trinity College of Dublin University to study science. In his second and third years, he won the University’s top prizes for Greek and for mathematical physics, a run which may have extended to his fourth year—but he was offered the position of Andrews Professor of Astronomy at Dublin and Royal Astronomer of Ireland—not to be turned down at the early age of 21.


Title of Hamilton’s first paper on his characteristic function as a new method that applied his theory from optics to the theory of mechanics, including Lagrangian mechanics as a special case.

His research into mathematical physics  concentrated on the theory of rays of light. Augustin-Jean Fresnel (1788—1827) had recently passed away, leaving behind a wave theory of light that provided a starting point for many effects in optical science, but which lacked broader generality. Hamilton developed a rigorous mathematical framework that could be applied to optical phenomena of the most general nature. This led to his theory of the Characteristic Function, based on principles of the variational calculus of Euler and Lagrange, that predicted the refraction of rays of light, like trajectories, as they passed through different media or across boundaries. In 1832 Hamilton predicted a phenomenon called conical refraction, which would cause a single ray of light entering a biaxial crystal to refract into a luminous cone.

Mathematical physics of that day typically followed experimental science. There were so many observed phenomena in so many fields that demanded explanation, that the general task of the mathematical physicist was to explain phenomena using basic principles followed by mathematical analysis. It was rare for the process to work the other way, for a theorist to predict a phenomenon never before observed. Today we take this as very normal. Einstein’s fame was primed by his prediction of the bending of light by gravity—but only after the observation of the effect by Eddington four years later was Einstein thrust onto the world stage. The same thing happened to Hamilton when his friend Humphrey Lloyd observed conical refraction, just as Hamilton had predicted. After that, Hamilton was revered as one of the most ingenious scientists of his day.

Following the success of conical refraction, Hamilton turned from optics to pursue a striking correspondence he had noted in his Characteristic Function that applied to mechanical trajectories as well as it did to rays of light. In 1834 and 1835 he published two papers On a General Method in Dynamics (I and II) [ii], in which he reworked the theory of Lagrange by beginning with the principle of varying action, which is now known as Hamilton’s Principle. Hamilton’s Principle is related to Maupertuis’ principle of least action, but it is a more rigorous and more general route to the Euler-Lagrange equations.  Hamilton’s Principal Function allowed the trajectories of particles to be calculated in complicated situations that were challenging for a direct solution by Lagrange’s equations.
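In modern notation, Hamilton’s Principle and the canonical equations that grew out of these two papers read

$$ \delta \int_{t_1}^{t_2} L(q,\dot{q})\, dt = 0, \qquad \dot{q}_k = \frac{\partial H}{\partial p_k}, \qquad \dot{p}_k = -\frac{\partial H}{\partial q_k} $$

where the Hamiltonian $H(q,p)$ is the Legendre transform of the Lagrangian $L(q,\dot{q})$.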

The importance that these two papers had on the future development of physics would not be clear until 1842 when Carl Gustav Jacob Jacobi helped to interpret them and augment them, turning them into a methodology for solving dynamical problems. Today, the Hamiltonian approach to dynamics is central to all of physics, and thousands of physicists around the world mention his name every day, possibly more often than they mention Einstein’s.

[i] Reprinted from D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford, 2018)

[ii] W. R. Hamilton, “On a general method in dynamics I,” Phil. Trans. Roy. Soc., pp. 247-308, 1834; W. R. Hamilton, “On a general method in dynamics II,” Phil. Trans. Roy. Soc., pp. 95-144, 1835.

2018 Nobel Prize in Laser Physics

When I arrived at Bell Labs in 1988 on a postdoctoral appointment to work with Alastair Glass in the Department of Optical Materials, the office I shared with Don Olsen was next door to the mysterious office of Art Ashkin.  Art was a legend in the corridors of a place of many legends.  Bell Labs in the late 80’s, even after the famous divestiture of AT&T into the Baby Bells, was a place of mythic proportions.  At the Holmdel site in New Jersey, the home of the laser physics branch of Bell Labs, the lunch table was a who’s who of laser science: Chuck Shank, Daniel Chemla, Wayne Knox, Linn Mollenauer.  A new idea would be floated at lunchtime, and the resulting Phys Rev Letter would be submitted within the month…that was the speed of research at Bell Labs.  If you needed expertise, or hit a snag in an experiment, the world’s expert on almost anything was just down a hallway to help solve it.

Bell Labs in the late 80’s, even after the famous divestiture of AT&T into the Baby Bells, was a place of mythic proportions.

One of the key differences I noted about Bell Labs at that time, one that set it apart from any other research organization I have experienced, whether at national labs like Lawrence Berkeley Laboratory or at universities, was the genuine awe in people’s voices as they spoke about the work of their colleagues.  This was the tone as people talked about Steven Chu, recently departed from Bell Labs for Stanford, and especially Art Ashkin.

Art Ashkin had been at Bell Labs for nearly 40 years when I arrived.  He was a man of many talents, delving into topics as diverse as the photorefractive effect (which I had been hired to pursue in new directions), nonlinear optics in fibers (one of the chief interests of Holmdel in those days of exponential growth of fiber telecom) and second harmonic generation.  But his main scientific impact had been in the field of optical trapping.

Optical trapping uses focused laser fields to generate minute forces on minute targets.  If multiple lasers are directed in opposing directions, a small optical trap is formed.  Applied to atoms, this was the basis of the trapping and cooling pursued by Chu; applied to small particles like individual biological cells, the trapping phenomenon became known as “optical tweezers”, because by moving the laser beams, the small targets could be moved about just as if they were being held by small tweezers.
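The physics behind the tweezers can be stated in one line.  For a particle small compared with the wavelength, with polarizability $\alpha$, the induced dipole is pulled toward high intensity,

$$ \mathbf{F}_{\mathrm{grad}} = \tfrac{1}{2}\,\alpha\,\nabla\langle E^2 \rangle $$

so the focus of a laser beam acts as a three-dimensional potential well for atoms, beads and cells.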

In the late 80’s Steven Chu was on the rise as one of the leaders in the field of optical physics, receiving many prestigious awards for his applications of optical traps, while many felt that Art was being passed over.  This feeling intensified when Chu received the Nobel Prize in 1997 for optical trapping (shared with Cohen-Tannoudji and Phillips) but Art did not.  Several Nobel Prizes in laser physics later, and most felt that Art’s chances were over … until this morning, Oct. 2, 2018, when it was announced that Art, now age 96, was finally receiving the Nobel Prize.

Around the same time that Art and Steve were developing optical traps at Bell Labs using optical gradients to generate forces on atoms and particles, Gerard Mourou and Donna Strickland in the optics department at the University of Rochester invented chirped pulse amplification (CPA), a method that stretches an ultrashort optical pulse, amplifies it safely, and recompresses it to extreme intensity.  Together with Kerr-lens modelocking, in which the optical nonlinearity of the laser crystal traps the focused beam inside the laser cavity and causes stable pulsing, this made lasers like the Ti:Sapphire routine sources of pulses with ultrafast durations around 100 femtoseconds and extremely stable repetition rates.  These pulse trains were the time-domain equivalent of optical combs in the frequency domain (for which Hall and Hänsch received the Nobel Prize in Physics in 2005).  Before these advances, it took great skill with very nasty dye lasers to get femtosecond pulses in a laboratory.  But by the early 90’s, anyone who wanted femtosecond pulses could get them easily just by buying a femtosecond modelocked laser kit from Mourou’s company, Clark-MXR.  These types of lasers moved into ophthalmology and laser eye surgery, becoming some of the most common and most valuable commercial lasers.

Donna Strickland and Gerard Mourou shared the 2018 Nobel Prize with Art Ashkin, their method for generating high-intensity ultrashort pulses of light complementing his trapping of material particles by light gradients.

Galileo Unbound

In June of 1633 Galileo was found guilty of heresy and sentenced to house arrest for what remained of his life. He was a renaissance Prometheus, bound for giving knowledge to humanity. With little to do, and allowed few visitors, he at last had the uninterrupted time to finish his life’s labor. When Two New Sciences was published in 1638, it contained the seeds of the science of motion that would mature into a grand and abstract vision that permeates all science today. In this way, Galileo was unbound, not by Hercules, but by his own hand as he penned the introduction to his work:

. . . what I consider more important, there have been opened up to this vast and most excellent science, of which my work is merely the beginning, ways and means by which other minds more acute than mine will explore its remote corners.

            Galileo Galilei (1638) Two New Sciences


Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press) publishes today (Sept. 26, 2018). It explores the continuous thread from Galileo’s discovery of the parabolic trajectory to modern dynamics and complex systems. It is a history of expanding dimension and increasing abstraction, until today we speak of entangled quantum particles moving among many worlds, and we envision our lives as trajectories through spaces of thousands of dimensions. Remarkably, common themes persist that predict the evolution of species as readily as the orbits of planets. Galileo laid the foundation upon which Newton built a theory of dynamics that could capture the trajectory of the moon through space using the same physics that controlled the flight of a cannon ball. Late in the nineteenth century, concepts of motion expanded into multiple dimensions, and in the 20th century geometry became the cause of motion rather than the result when Einstein envisioned the fabric of space-time warped by mass and energy, causing light rays to bend past the Sun. Possibly more radical was Feynman’s dilemma of quantum particles taking all paths at once—setting the stage for the modern fields of quantum field theory and quantum computing. Yet as concepts of motion have evolved, one thing has remained constant—the need to track ever more complex changes and to capture their essence—to find patterns in the chaos as we try to predict and control our world. Today’s ideas of motion go far beyond the parabolic trajectory, but even Galileo might recognize the common thread that winds through all these motions, drawing them together into a unified view that gives us the power to see, at least a little, through the mists shrouding the future.


To read more: Galileo Unbound: A Path Across Life, the Universe and Everything by David D. Nolte (Oxford University Press, Sept. 26, 2018). Available at Amazon.com.


Huygens’ Tautochrone

In February of 1662, Pierre de Fermat wrote a paper Synthesis ad refractiones that explained Descartes-Snell’s Law of light refraction by finding the least time it took for light to travel between two points. This famous approach is now known as Fermat’s principle, and it motivated other searches for minimum principles. A few years earlier, in 1656, Christiaan Huygens had invented the pendulum clock [1], and he began a ten-year study of the physics of the pendulum. He was well aware that the pendulum clock does not keep exact time—as the pendulum swings wider, the period of oscillation slows down. He began to search for a path of the pendulum mass that would keep the period the same (and make pendulum clocks more accurate), and he discovered a trajectory along which the mass arrives at the lowest point in the same time no matter where on the curve it is released. That such a curve could exist was truly remarkable, and it promised to make highly accurate timepieces.

It made minimization problems a familiar part of physics—they became part of the mindset, leading ultimately to the principle of least action.

This curve is known as a tautochrone (literally: same or equal time) and Huygens provided a geometric proof in his Horologium Oscillatorium sive de motu pendulorum (1673) that the curve was a cycloid. A cycloid is the curve traced by a point on the rim of a circular wheel as the wheel rolls without slipping along a straight line. Huygens invented such a pendulum in which the mass executed a cycloid curve. It was a mass on a flexible yet inelastic string that partially wrapped itself around a solid bumper on each half swing. In principle, whether the pendulum swung gently, or through large displacements, the time would be the same. Unfortunately, friction along the contact of the string with the bumper prevented the pendulum from achieving this goal, and the tautochronic pendulum did not catch on.
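In modern terms the result is easy to state.  Using the standard parametrization of the cycloid by the rolling angle $\varphi$ and the generating-circle radius $a$,

$$ x = a(\varphi - \sin\varphi), \qquad y = a(1 - \cos\varphi) $$

a bead sliding on this curve executes simple harmonic motion in the arc length, so the period $T = 4\pi\sqrt{a/g}$ is independent of amplitude, which is precisely the tautochrone property.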


Fig. 1 Huygens’ isochronous pendulum.  The time it takes the pendulum bob to follow the cycloid arc is independent of the pendulum’s amplitude, unlike the circular arc, for which the pendulum slows down at larger excursions.

The solution of the tautochrone curve of equal time led naturally to a search for the curve of least time, known as the brachistochrone curve, for a particle subject to gravity, like a bead sliding on a frictionless wire between two points. Johann Bernoulli published a challenge to find the brachistochrone in 1696 in the scientific journal Acta Eruditorum that had been founded in 1682 by Leibniz in Germany in collaboration with Otto Mencke. Leibniz envisioned the journal to be a place where new ideas in the natural sciences and mathematics could be published and disseminated rapidly, and it included letters and commentaries, acting as a communication hub to help establish a community of scholars across Europe. In reality, it was the continental response to the Philosophical Transactions of the Royal Society in England.  Naturally, the Acta and the Transactions would later take partisan sides in the priority dispute between Leibniz and Newton over the development of the calculus.

When Bernoulli published his brachistochrone challenge in the June issue of 1696, it was read immediately by the leading mathematicians of the day, many of whom took up the challenge and replied. The problem was solved and published in the May 1697 issue of the Acta by no fewer than five correspondents, including Johann Bernoulli, Jakob Bernoulli (Johann’s brother), Isaac Newton, Gottfried Leibniz and Ehrenfried Walther von Tschirnhaus. Their approaches varied, but all found the same solution. Johann and Jakob each considered the problem as the path of a light beam in a medium whose speed varied with depth. Just as in the tautochrone, the solution was a cycloid. The path of fastest time always started with a vertical plunge that allowed the fastest acceleration, and the point of greatest depth always coincided with the point of greatest horizontal speed.
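Johann’s optical analogy can be captured in two lines (a sketch of the standard argument): a bead falling a height $y$ has speed $v = \sqrt{2gy}$, and Fermat’s principle applied to a medium with that speed profile requires

$$ \frac{\sin\theta}{v} = \frac{\sin\theta}{\sqrt{2gy}} = \mathrm{constant} $$

along the path, and this differential condition is satisfied exactly by a cycloid.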

The brachistochrone problem led to the invention of the variational calculus, with first steps by Jakob Bernoulli and later more rigorous approaches by Euler.  However, its real importance is that it made minimization problems a familiar part of physics—they became part of the mindset, leading ultimately to the principle of least action.

[1] Galileo conceived of a pendulum clock in 1641, and his son Vincenzo started construction, but it was never finished.  Huygens applied for and received a patent in 1657 for a practical escapement for pendulum clocks that is still used today.