Physics in the Age of Contagion: The Bifurcation of COVID-19

We are at War! That may sound like a cliche, but more people in the United States may die over the next year from COVID-19 than US soldiers have died in all the wars ever fought in US history. It is a war against an invasion by an alien species that has no remorse and gives no quarter. In this war, one of our gravest enemies, beyond the virus, is misinformation. The Internet floods our attention with half-baked half-truths. There may even be foreign powers that see this time of crisis as an opportunity to sow fear through disinformation to divide the country.

Because of the bifurcation physics of the SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

At such times, physicists may be tapped to help the war effort. This is because physicists have unique skill sets that help us see through the distractions of details to get to the essence of the problem. Our solutions are often back-of-the-envelope, but that is their strength. We can see zeroth-order results stripped bare of all the obfuscating minutia.

One way physicists can help in this war is to shed light on how infections percolate through a population and to provide estimates of the numbers involved. Perhaps most importantly, we can highlight what actions ordinary citizens can take that best guard against the worst-case scenarios of the pandemic. The zeroth-order solutions may not say anything new that the experts don’t already know, but they may help spread the word of why such simple actions as shelter-in-place may save millions of lives.

The SIR Model of Infection

One of the simplest models for infection is the so-called SIR model that stands for Susceptible-Infected-Removed. This model is an averaged model (or a mean-field model) that disregards the fundamental network structure of human interactions and considers only averages. The dynamical flow equations are very simple

dI/dt = β<k>·S·I − μ·I

dS/dt = −β<k>·S·I

where I is the infected fraction of the population, and S is the susceptible fraction of the population. The coefficient μ is the rate at which patients recover or die, <k> is the average number of “links” to others, and β is the infection probability per link per day. The total population fraction is given by the constraint

S + I + R = 1

where R is the removed population, most of whom will have recovered, but some fraction of whom will have passed away. The number of deaths is

D = m·Rinf·P

where m is the mortality rate, P is the total population, and Rinf is the long-term removed fraction of the population after the infection has run its course.

The nullclines, the curves along which the time derivatives vanish, are

I = 0, S = 0, and S = μ/(β<k>)

Where the first nullcline intersects the third nullcline is the only fixed point of this simple model

(S*, I*) = (μ/(β<k>), 0)

The phase space of the SIR flow is shown in Fig. 1 plotted as the infected fraction as a function of the susceptible fraction. The diagonal is the set of initial conditions where R = 0. Each initial condition on the diagonal produces a dynamical trajectory. The dashed trajectory that starts at (1,0) is the trajectory for a new disease infecting a fully susceptible population. The trajectories terminate on the I = 0 axis at long times when the infection dies out. In this model, there is always a fraction of the population who never get the disease, not through unusual immunity, but through sheer luck.

Fig. 1 Phase space of the SIR model. The single fixed point has “marginal” stability, but leads to a finite fraction of the population who never are infected. The dashed trajectory is the trajectory of the infection starting with a single case. (Adapted from “Introduction to Modern Dynamics” (Oxford University Press, 2019))

The key to understanding the scale of the pandemic is the susceptible fraction at the fixed point S*. For the parameters chosen to plot Fig. 1, the value of S* is 1/4, or β<k> = 4μ. It is the high value of the infection rate β<k> relative to the decay rate of the infection μ that allows a large fraction of the population to become infected. As the infection rate gets smaller, the fixed point S* moves towards unity on the horizontal axis, and less of the population is infected.
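Both features, the finite fraction that escapes infection and the role of S*, can be checked by direct numerical integration. The sketch below is illustrative only: the rates are chosen so that β<k> = 4μ, matching the ratio used for Fig. 1, not fitted to any disease.

```python
import numpy as np
from scipy import integrate

def flow_deriv(x_y, t, mu, betap):
    I, S = x_y
    # SIR flow: the removed fraction is R = 1 - I - S
    return [betap*S*I - mu*I, -betap*S*I]

mu, betap = 0.1, 0.4          # beta*<k> = 4*mu, so S* = mu/(beta*<k>) = 1/4
t = np.linspace(0, 500, 5000)
I, S = integrate.odeint(flow_deriv, [0.001, 0.999], t, args=(mu, betap)).T

print('final susceptible fraction =', S[-1])   # positive: some never get the disease
```

The final susceptible fraction comes out below S* but strictly greater than zero, which is the "sheer luck" population of Fig. 1.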

As soon as S* exceeds unity, for the condition

β<k> < μ

then the infection cannot grow exponentially and will decay away without infecting an appreciable fraction of the population. This condition represents a bifurcation in the infection dynamics. It means that if the infection rate can be reduced below the recovery rate, then the pandemic fades away. (It is important to point out that the R0 of a network model (the number of people each infected person infects) is analogous to the inverse of S*. When R0 > 1 then the infection spreads, just as when S* < 1, and vice versa.)
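In code, the bifurcation condition is a one-line check. The β values below are placeholders chosen on either side of the threshold, not COVID-19 estimates:

```python
def epidemic_grows(beta, k, mu):
    # Bifurcation condition: S* = mu/(beta*<k>) < 1, equivalently R0 = beta*<k>/mu > 1
    S_star = mu / (beta * k)
    return S_star < 1

mu = 1/14                              # recovery rate for a 14-day illness
print(epidemic_grows(0.01, 50, mu))    # beta*<k> = 0.5  > mu: epidemic spreads
print(epidemic_grows(0.001, 50, mu))   # beta*<k> = 0.05 < mu: infection dies out
```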

This bifurcation condition makes the strategy for fighting the pandemic clear. The parameter μ is fixed by the virus and cannot be altered. But the infection probability per day per social link, β, can be reduced by clean hygiene:

  • Don’t shake hands
  • Wash your hands often and thoroughly
  • Don’t touch your face
  • Cover your cough or sneeze in your elbow
  • Wear disposable gloves
  • Wipe down touched surfaces with disinfectants

And the number of contacts per person, <k>, can be reduced by social distancing:

  • No large gatherings
  • Stand away from others
  • Shelter-in-place
  • Self quarantine

The big question is: can the infection rate be reduced below the recovery rate through the actions of clean hygiene and social distancing? If there is a chance that it can, then literally millions of lives can be saved. So let’s take a look at COVID-19.

The COVID-19 Pandemic

To get a handle on modeling the COVID-19 pandemic using the (very simplistic) SIR model, one key parameter is the average number of people you are connected to, represented by <k>. These are not necessarily the people in your social network, but also include people who may touch a surface you touched earlier, or who touched a surface you later touch yourself. It also includes anyone in your proximity who has coughed or sneezed in the past few minutes. The number of people in your network is a topic of keen current interest, but is surprisingly hard to pin down. For the sake of this model, I will take <k> = 50 as a nominal value. This is probably too small, but it is compensated by the probability of infection, given by a factor r, and by the number of days that an individual is infectious.

The spread is helped when infectious people go about their normal lives infecting others. But if a fraction of the population self quarantines, especially after they “may” have been exposed, then the effective number of infectious days per person, dinf, can be decreased. A rough equation that captures this is

dinf = fnq·dill + (1 − fnq)·exp(−dq/dill)·dill

where fnq is the fraction of the population that does NOT self quarantine, dill is the mean number of days a person is ill (and infectious), and dq is the number of days quarantined. This number of infectious days goes into the parameter β

β = r·dinf

where r = 0.0002 infections per link per day², which is a very rough estimate of the coefficient for COVID-19.

It is clear why shelter-in-place can be so effective, especially if the number of days quarantined is equal to the number of days a person is ill. The infection could literally die out if enough people self quarantine by pushing the critical value S* above the bifurcation threshold. However, it is much more likely that large fractions of people will continue to move about. A simulation of the “wave” that passes through the US is shown in Fig. 2 (see the Python code in the section below for parameters). In this example, 60% of the population does NOT self quarantine. The wave peaks approximately 150 days after the establishment of community spread.

Fig. 2 Population dynamics for the US spread of COVID-19. The fraction that is infected represents a “wave” that passes through a community. In this simulation fnq = 60%. The total US dead after the wave has passed is roughly 2 Million in this simulation.
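The quarantine formulas above can be combined in a few lines to locate the threshold. This sketch uses the nominal values from the text (r = 0.0002, <k> = 50, 14-day illness and quarantine) and scans three quarantine fractions:

```python
import numpy as np

r, k, dill, dq = 0.0002, 50, 14, 14
mu = 1/dill                    # recovery rate

results = {}
for fnq in (1.0, 0.6, 0.0):    # fraction NOT self-quarantining
    # effective infectious days per person, reduced by quarantine
    dinf = fnq*dill + (1 - fnq)*np.exp(-dq/dill)*dill
    results[fnq] = mu/(r*k*dinf)   # S* = mu/(beta*<k>); > 1 means the epidemic dies out
    print(fnq, round(results[fnq], 2))
```

With these numbers, only the full shelter-in-place case (fnq = 0) pushes S* above unity (S* ≈ 1.39); the fnq = 0.6 case of Fig. 2 stays below threshold and produces the wave.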

In addition to shelter-in-place, social distancing can have a strong effect on the disease spread. Fig. 3 shows the number of US deaths as a function of the fraction of the population who do NOT self-quarantine for a series of average connections <k>. The bifurcation effect is clear in this graph. For instance, if <k> = 50 is a nominal value, then if 85% of the population would shelter-in-place for 14 days, then the disease would fall below threshold and only a small number of deaths would occur. But if that connection number can be dropped even to <k> = 40, then only 60% would need to shelter-in-place to avoid the pandemic. By contrast, if 80% of the people don’t self-quarantine, and if <k> = 40, then there could be 2 Million deaths in the US by the time the disease has run its course.

Because of the bifurcation physics of this SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

Fig. 3 Bifurcation plot of the number of US deaths as a function of the fraction of the population who do NOT shelter-in-place for different average links per person. At 20 links per person, the contagion could be contained. However, at 60 links per person, nearly 90% of the population would need to quarantine for at least 14 days to stop the spread.

There has been a lot said about “flattening the curve”, which is shown in Fig. 4. There are two ways that flattening the curve saves overall lives: 1) it keeps the numbers below the threshold capacity of hospitals; and 2) it decreases the total number infected and hence decreases the total dead. When the number of critical patients exceeds hospital capacity, the mortality rate increases. This is being seen in Italy where the hospitals have been overwhelmed and the mortality rate has risen from a baseline of 1% or 2% to as large as 8%. Flattening the curve is achieved by sheltering in place, personal hygiene and other forms of social distancing. The figure shows a family of curves for different fractions of the total population who shelter in place for 14 days. If more than 70% of the population shelters in place for 14 days, then the curve not only flattens … it disappears!

Fig. 4 Flattening the curve for a range of fractions of the population that shelters in place for 14 days. (See Python code for parameters.)

Python Code:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat March 21 2020

@author: nolte

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt


print(' ')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y,tspan,mu,betap):
        x, y = x_y          # x = infected fraction I, y = susceptible fraction S
        return [-mu*x + betap*x*y, -betap*x*y]
    x0 = [del1, del2]
    # Solve for the trajectories
    t = np.linspace(0, int(tlim), int(250*tlim))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

    return t, x_t

r = 0.0002    # 0.0002
k = 50        # connections  50
dill = 14     # days ill 14
dpq = 14      # days shelter in place 14
fnq = 0.6     # fraction NOT sheltering in place
mr0 = 0.01    # mortality rate
mr1 = 0.03     # extra mortality rate if exceeding hospital capacity
P = 330       # population of US in Millions
HC = 0.003    # hospital capacity

dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill

betap = r*k*dinf
mu = 1/dill

print('beta = ',betap)
print('dinf = ',dinf)
print('beta/mu = ',betap/mu)
del1 = .001         # infected
del2 = 1-del1       # susceptible

tlim = np.log(P*1e6/del1)/betap + 50/betap

param = (mu, betap)    # flow parameters

t, y = solve_flow(param)
I = y[:,0]
S = y[:,1]
R = 1 - I - S

lines = plt.semilogy(t,I,t,S,t,R)
plt.setp(lines, linewidth=0.5)
plt.ylabel('Fraction of Population')
plt.title('Population Dynamics for COVID-19 in US')

mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)  # excess mortality only once critical cases exceed hospital capacity
Dead = mr*P*R[-1]
print('US Dead (Millions) = ',Dead)

plt.figure()
D = np.zeros(shape=(100,))
x = np.zeros(shape=(100,))
for kloop in range(0,5):
    for floop in range(0,100):
        fnq = floop/100
        dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill
        k = 20 + kloop*10
        betap = r*k*dinf
        tlim = np.log(P*1e6/del1)/betap + 50/betap

        param = (mu, betap)    # flow parameters

        t, y = solve_flow(param)       
        I = y[:,0]
        S = y[:,1]
        R = 1 - I - S
        mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)

        D[floop] = mr*P*R[R.size-1]
        x[floop] = fnq
    lines2 = plt.plot(x,D)
    plt.setp(lines2, linewidth=0.5)

plt.ylabel('US Million Deaths')
plt.xlabel('Fraction NOT Quarantining')
plt.title('Quarantine and Distancing')        

plt.figure()
for floop in range(0,8):
    fq = floop/10.0
    dinf = (1-fq)*dill + fq*np.exp(-dpq/dill)*dill
    k = 50
    betap = r*k*dinf
    tlim = np.log(P*1e6/del1)/betap + 50/betap

    param = (mu, betap)    # flow parameters

    t, y = solve_flow(param)       
    I = y[:,0]
    S = y[:,1]
    R = 1 - I - S
    lines2 = plt.plot(t,I*P)
    plt.setp(lines2, linewidth=0.5)

plt.ylabel('US Millions Infected')
plt.title('Flattening the Curve')       

You can run this Python code yourself and explore the effects of changing the parameters. For instance, the mortality rate is modeled to increase when the number of critical patients exceeds the number of hospital beds. This coefficient is not well known and hence can be explored numerically. Neither the infection rate r nor the average number of connections per person is well known. The effect of longer quarantines can also be tested relative to the fraction who do not quarantine at all. Because of the bifurcation physics of the disease model, large changes in dynamics can occur for small changes in parameters when the dynamics are near the bifurcation threshold.

Caveats and Disclaimers

This SIR model of COVID-19 is an extremely rough tool that should not be taken too literally. It can be used to explore ideas about the general effect of days quarantined, or changes in the number of social contacts, but should not be confused with the professional models used by epidemiologists. In particular, this mean-field SIR model completely ignores the discrete network character of person-to-person spread. It also homogenizes the entire country, where it is blatantly obvious that the dynamics inside New York City are very different from the dynamics in rural Indiana. And the elimination of the epidemic, so that it would not come back, would require strict compliance for people to be tested (assuming there are enough test kits) and infected individuals to be isolated after the wave has passed.

Second Edition of Introduction to Modern Dynamics (Chaos, Networks, Space and Time)

The second edition of Introduction to Modern Dynamics: Chaos, Networks, Space and Time is available from Oxford University Press and Amazon.

Most physics majors will use modern dynamics in their careers: nonlinearity, chaos, network theory, econophysics, game theory, neural nets, geodesic geometry, among many others.

The first edition of Introduction to Modern Dynamics (IMD) was an upper-division junior-level mechanics textbook at the level of Thornton and Marion (Classical Dynamics of Particles and Systems) and Taylor (Classical Mechanics).  IMD helped lead an emerging trend in physics education to update the undergraduate physics curriculum.  Conventional junior-level mechanics courses emphasized Lagrangian and Hamiltonian physics, but notably missing from the classic subjects are modern dynamics topics that most physics majors will use in their careers: nonlinearity, chaos, network theory, econophysics, game theory, neural nets, geodesic geometry, among many others.  These are the topics at the forefront of physics that drive high-tech businesses and start-ups, which is where more than half of all physicists work. IMD introduced these modern topics to junior-level physics majors in an accessible form that allowed them to master the fundamentals to prepare them for the modern world.

The second edition (IMD2) continues that trend by expanding the chapters to include additional material and topics.  It rearranges several of the introductory chapters for improved logical flow and expands them to include key conventional topics that were missing in the first edition (e.g., Lagrange undetermined multipliers and expanded examples of Lagrangian applications).  It is also an opportunity to correct several typographical errors and other errata that students have identified over the past several years.  The second edition also has expanded homework problems.

The goal of IMD2 is to strengthen the sections on conventional topics (that students need to master to take their GREs) to make IMD2 attractive as a mainstream physics textbook for broader adoption at the junior level, while continuing the program of updating the topics and approaches that are relevant for the roles that physicists play in the 21st century.


New Features in Second Edition:

Second Edition Chapters and Sections

Part 1 Geometric Mechanics

• Expanded development of Lagrangian dynamics

• Lagrange multipliers

• More examples of applications

• Connection to statistical mechanics through the virial theorem

• Greater emphasis on action-angle variables

• The key role of adiabatic invariants

Part 1 Geometric Mechanics

Chapter 1 Physics and Geometry

1.1 State space and dynamical flows

1.2 Coordinate representations

1.3 Coordinate transformation

1.4 Uniformly rotating frames

1.5 Rigid-body motion

Chapter 2 Lagrangian Mechanics

2.1 Calculus of variations

2.2 Lagrangian applications

2.3 Lagrange’s undetermined multipliers

2.4 Conservation laws

2.5 Central force motion

2.6 Virial Theorem

Chapter 3 Hamiltonian Dynamics and Phase Space

3.1 The Hamiltonian function

3.2 Phase space

3.3 Integrable systems and action–angle variables

3.4 Adiabatic invariants

Part 2 Nonlinear Dynamics

• New section on non-autonomous dynamics

• Entire new chapter devoted to Hamiltonian mechanics

• Added importance to Chirikov standard map

• The important KAM theory of “constrained chaos” and solar system stability

• Degeneracy in Hamiltonian chaos

• A short overview of quantum chaos

• Rational resonances and the relation to KAM theory

• Synchronized chaos

Part 2 Nonlinear Dynamics

Chapter 4 Nonlinear Dynamics and Chaos

4.1 One-variable dynamical systems

4.2 Two-variable dynamical systems

4.3 Limit cycles

4.4 Discrete iterative maps

4.5 Three-dimensional state space and chaos

4.6 Non-autonomous (driven) flows

4.7 Fractals and strange attractors

Chapter 5 Hamiltonian Chaos

5.1 Perturbed Hamiltonian systems

5.2 Nonintegrable Hamiltonian systems

5.3 The Chirikov Standard Map

5.4 KAM Theory

5.5 Degeneracy and the web map

5.6 Quantum chaos

Chapter 6 Coupled Oscillators and Synchronization

6.1 Coupled linear oscillators

6.2 Simple models of synchronization

6.3 Rational resonances

6.4 External synchronization

6.5 Synchronization of Chaos

Part 3 Complex Systems

• New emphasis on diffusion on networks

• Epidemic growth on networks

• A new section of game theory in the context of evolutionary dynamics

• A new section on general equilibrium theory in economics

Part 3 Complex Systems

Chapter 7 Network Dynamics

7.1 Network structures

7.2 Random network topologies

7.3 Synchronization on networks

7.4 Diffusion on networks

7.5 Epidemics on networks

Chapter 8 Evolutionary Dynamics

8.1 Population dynamics

8.2 Virus infection and immune deficiency

8.3 Replicator Dynamics

8.4 Quasi-species

8.5 Game theory and evolutionary stable solutions

Chapter 9 Neurodynamics and Neural Networks

9.1 Neuron structure and function

9.2 Neuron dynamics

9.3 Network nodes: artificial neurons

9.4 Neural network architectures

9.5 Hopfield neural network

9.6 Content-addressable (associative) memory

Chapter 10 Economic Dynamics

10.1 Microeconomics and equilibrium

10.2 Macroeconomics

10.3 Business cycles

10.4 Random walks and stock prices (optional)

Part 4 Relativity and Space–Time

• Relativistic trajectories

• Gravitational waves

Part 4 Relativity and Space–Time

Chapter 11 Metric Spaces and Geodesic Motion

11.1 Manifolds and metric tensors

11.2 Derivative of a tensor

11.3 Geodesic curves in configuration space

11.4 Geodesic motion

Chapter 12 Relativistic Dynamics

12.1 The special theory

12.2 Lorentz transformations

12.3 Metric structure of Minkowski space

12.4 Relativistic trajectories

12.5 Relativistic dynamics

12.6 Linearly accelerating frames (relativistic)

Chapter 13 The General Theory of Relativity and Gravitation

13.1 Riemann curvature tensor

13.2 The Newtonian correspondence

13.3 Einstein’s field equations

13.4 Schwarzschild space–time

13.5 Kinematic consequences of gravity

13.6 The deflection of light by gravity

13.7 The precession of Mercury’s perihelion

13.8 Orbits near a black hole

13.9 Gravitational waves

Synopsis of 2nd Ed. Chapters

Chapter 1. Physics and Geometry (Sample Chapter)

This chapter has been rearranged relative to the 1st edition to provide a more logical flow of the overarching concepts of geometric mechanics that guide the subsequent chapters.  The central role of coordinate transformations is strengthened, as is the material on rigid-body motion with expanded examples.

Chapter 2. Lagrangian Mechanics (Sample Chapter)

Much of the structure and material is retained from the 1st edition while adding two important sections.  The section on applications of Lagrangian mechanics adds many direct examples of the use of Lagrange’s equations of motion.  An additional new section covers the important topic of Lagrange’s undetermined multipliers.

Chapter 3. Hamiltonian Dynamics and Phase Space (Sample Chapter)

The importance of Hamiltonian systems and dynamics merits a stand-alone chapter.  The topics from the 1st edition are expanded in this new chapter, including a new section on adiabatic invariants that plays an important role in the development of quantum theory.  Some topics are de-emphasized from the 1st edition, such as general canonical transformations and the symplectic structure of phase space, although the specific transformation to action-angle coordinates is retained and amplified.

Chapter 4. Nonlinear Dynamics and Chaos

The first part of this chapter is retained from the 1st edition with numerous minor corrections and updates of figures.  The second half of the corresponding chapter in the IMD 1st edition, treating Hamiltonian chaos, will be expanded into the new Chapter 5.

Chapter 5. Hamiltonian Chaos

This new stand-alone chapter expands on the last half of Chapter 3 of the IMD 1st edition.  The physical character of Hamiltonian chaos is so substantially distinct from dissipative chaos that it deserves its own chapter.  It is also a central topic of interest for complex systems that are either conservative or that have integral invariants, such as our N-body solar system that played such an important role in the history of chaos theory beginning with Poincaré.  The new chapter highlights Poincaré’s homoclinic tangle, illustrated by the Chirikov Standard Map.  The Standard Map is an excellent introduction to KAM theory, which is one of the crowning achievements of the theory of dynamical systems by Kolmogorov, Arnold and Moser, connecting to deeper aspects of synchronization and rational resonances that drive the structure of systems as diverse as the rotation of the Moon and the rings of Saturn.  This is also a perfect lead-in to the next chapter on synchronization.  An optional section at the end of this chapter briefly discusses quantum chaos to show how Hamiltonian chaos can be extended into the quantum regime.

Chapter 6. Synchronization

This is an updated version of the IMD 1st ed. chapter.  It has a reduced initial section on coupled linear oscillators, retaining the key ideas about linear eigenmodes but removing some irrelevant details in the 1st edition.  A new section is added that defines and emphasizes the importance of quasi-periodicity.  A new section on the synchronization of chaotic oscillators is added.

Chapter 7. Network Dynamics

This chapter rearranges the structure of the chapter from the 1st edition, moving synchronization on networks earlier to connect from the previous chapter.  The section on diffusion and epidemics is moved to the back of the chapter and expanded in the 2nd edition into two separate sections on these topics, adding new material on discrete matrix approaches to continuous dynamics.

Chapter 8. Neurodynamics and Neural Networks

This chapter is retained from the 1st edition with numerous minor corrections and updates of figures.

Chapter 9. Evolutionary Dynamics

Two new sections are added to this chapter.  A section on game theory and evolutionary stable solutions introduces core concepts of evolutionary dynamics that merge well with the other topics of the chapter such as the pay-off matrix and replicator dynamics.  A new section on nearly neutral networks introduces new types of behavior that occur in high-dimensional spaces which are counterintuitive but important for understanding evolutionary drift.

Chapter 10.  Economic Dynamics

This chapter will be significantly updated relative to the 1st edition.  Most of the sections will be rewritten with improved examples and figures.  Three new sections will be added.  The 1st edition section on consumer market competition will be split into two new sections describing the Cournot duopoly and Pareto optimality in one section, and Walras’ Law and general equilibrium theory in another section.  The concept of the Pareto frontier in economics is becoming an important part of biophysical approaches to population dynamics.  In addition, new trends in economics are drawing from general equilibrium theory, first introduced by Walras in the nineteenth century, but now merging with modern ideas of fixed points and stable and unstable manifolds.  A third new section is added on econophysics, highlighting the distinctions that contrast economic dynamics (phase space dynamical approaches to economics) from the emerging field of econophysics (statistical mechanics approaches to economics).

Chapter 11. Metric Spaces and Geodesic Motion

 This chapter is retained from the 1st edition with several minor corrections and updates of figures.

Chapter 12. Relativistic Dynamics

This chapter is retained from the 1st edition with minor corrections and updates of figures.  More examples will be added, such as invariant mass reconstruction.  The connection between relativistic acceleration and Einstein’s equivalence principle will be strengthened.

Chapter 13. The General Theory of Relativity and Gravitation

This chapter is retained from the 1st edition with minor corrections and updates of figures.  A new section will derive the properties of gravitational waves, given the spectacular success of LIGO and the new field of gravitational astronomy.

Homework Problems:

All chapters will have expanded and updated homework problems.  Many of the homework problems from the 1st edition will remain, but the number of problems at the end of each chapter will be nearly doubled, while removing some of the less interesting or problematic problems.


D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019)

The Physics of Modern Dynamics (with Python Programs)

It is surprising how much of modern dynamics boils down to an extremely simple formula

dx/dt = f(x)

This innocuous-looking equation carries such riddles, such surprises, such unintuitive behavior that it can become the object of study for life.  This equation is called a vector flow equation, and it can be used to capture the essential physics of economies, neurons, ecosystems, networks, and even orbits of photons around black holes.  This equation is to modern dynamics what F = ma was to classical mechanics.  It is the starting point for understanding complex systems.

The Magic of Phase Space

The apparent simplicity of the “flow equation” masks the complexity it contains.  It is a vector equation because each “dimension” is a variable of a complex system.  Many systems of interest may have only a few variables, but ecosystems and economies and social networks may have hundreds or thousands of variables.  Expressed in component format, the flow equation is

dx^a/dt = f^a(x^1, x^2, …, x^n)

where the superscript spans the number of variables.  But even this masks all that can happen with such an equation. Each of the functions fa can be entirely different from each other, and can be any type of function, whether polynomial, rational, algebraic, transcendental or composite, although they must be single-valued.  They are generally nonlinear, and the limitless ways that functions can be nonlinear is where the richness of the flow equation comes from.

The vector flow equation is an ordinary differential equation (ODE) that can be solved for specific trajectories as initial value problems.  A single set of initial conditions defines a unique trajectory.  For instance, the trajectory for a 4-dimensional example is described as the column vector

x(t) = [x^1(t), x^2(t), x^3(t), x^4(t)]^T

which is the single-parameter position vector to a point in phase space, also called state space.  The point sweeps through successive configurations as a function of its single parameter—time.  This trajectory is also called an orbit.  In classical mechanics, the focus has tended to be on the behavior of specific orbits that arise from a specific set of initial conditions.  This is the classic “rock thrown from a cliff” problem of introductory physics courses.  However, in modern dynamics, the focus shifts away from individual trajectories to encompass the set of all possible trajectories.
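A minimal sketch of such an initial value problem: the damped linear oscillator chosen here is purely illustrative, but it shows how one initial condition picks out one orbit through state space:

```python
from scipy.integrate import solve_ivp

def f(t, x):
    # two-variable flow x' = f(x): a weakly damped linear oscillator
    return [x[1], -x[0] - 0.2*x[1]]

orbit = solve_ivp(f, (0, 50), [1.0, 0.0], max_step=0.1)
x1, x2 = orbit.y
print(x1[-1], x2[-1])   # the orbit spirals in toward the fixed point at the origin
```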

Why is Modern Dynamics part of Physics?

If finding the solutions to the “x-dot equals f” vector flow equation is all there is to do, then this would just be a math problem—the solution of ODE’s.  There are plenty of gems for mathematicians to look for, and there is an entire field of study in mathematics called “dynamical systems”, but this would not be “physics”.  Physics as a profession is separate and distinct from mathematics, although the two are sometimes confused.  Physics uses mathematics as its language and as its toolbox, but physics is not mathematics.  Physics is done best when it is done qualitatively—this means with scribbles done on napkins in restaurants or on the back of envelopes while waiting in line. Physics is about recognizing relationships and patterns. Physics is about identifying the limits to scaling properties where the physics changes when scales change. Physics is about the mapping of the simplest possible mathematics onto behavior in the physical world, and recognizing when the simplest possible mathematics is a universal that applies broadly to diverse systems that seem different, but that share the same underlying principles.

So, granted solving ODE’s is not physics, there is still a tremendous amount of good physics that can be done by solving ODE’s. ODE solvers become the modern physicist’s experimental workbench, providing data output from numerical experiments that can test the dependence on parameters in ways that real-world experiments might not be able to access. Physical intuition can be built based on such simulations as the engaged physicist begins to “understand” how the system behaves, able to explain what will happen as the values of parameters are changed.

In the following sections, three examples of modern dynamics are introduced with a preliminary study, including Python code. These examples are: galactic dynamics, synchronized networks and ecosystems. Despite their very different natures, their descriptions using dynamical flows share features in common and illustrate the beauty and depth of behavior that can be explored with simple equations.

Galactic Dynamics

One example of the power and beauty of the vector flow equation, and its set of all solutions in phase space, is the Hénon-Heiles model of the motion of a star within a galaxy.  Of course, this is a terribly complicated problem that involves tens of billions of stars, but if you average over the gravitational potential of all the other stars, and throw in a couple of conservation laws, the resulting potential can look surprisingly simple.  The motion in the plane of this galactic potential takes two configuration coordinates (x, y) with two associated momenta (px, py) for a total of four dimensions.  The flow equations in four-dimensional phase space are simply

Fig. 1 The 4-dimensional phase space flow equations of a star in a galaxy. The terms in light blue are a simple two-dimensional harmonic oscillator. The terms in magenta are the nonlinear contributions from the stars in the galaxy.

where the terms in the light blue box describe a two-dimensional simple harmonic oscillator (SHO), which is a linear oscillator, modified by the terms in the magenta box that represent the nonlinear galactic potential.  The orbits of this Hamiltonian system are chaotic, and because there is no dissipation in the model, a single orbit will continue forever within certain ranges of phase space governed by energy conservation, but never quite repeating.
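Since the equation graphic of Fig. 1 may not reproduce here, the flow equations can be written out explicitly (term by term consistent with the model_case 1 equations in the Python code below):

```latex
\dot{x} = p_x, \qquad \dot{y} = p_y, \qquad
\dot{p}_x = -x - \varepsilon\,(2xy), \qquad
\dot{p}_y = -y - \varepsilon\,(x^2 - y^2)
```

These derive from the Hamiltonian evaluated as the potential in the code, \(H = \tfrac{1}{2}(p_x^2 + p_y^2) + \tfrac{1}{2}(x^2 + y^2) + \varepsilon\,(x^2 y - \tfrac{1}{3}y^3)\), with the first terms giving the two-dimensional SHO and the terms in ε the galactic perturbation.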

Fig. 2 Two-dimensional Poincaré section of sets of trajectories in four-dimensional phase space for the Hénon-Heiles galactic dynamics model. The perturbation parameter is ε = 0.3411 and the energy is E = 1.

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Apr 18 06:03:32 2018

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from matplotlib import cm


# model_case 1 = Heiles
# model_case 2 = Crescent
print(' ')
print('Case: 1 = Heiles')
print('Case: 2 = Crescent')
model_case = int(input('Enter the Model Case (1-2)'))

if model_case == 1:
    E = 1             # Heiles: E = 1, epsE = 0.3411
    epsE = 0.3411
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -x - epsE*(2*x*y)
        d = -y - epsE*(x**2 - y**2)
        return [a, b, c, d]
else:
    E = .1            # Crescent: E = 0.1, epsE = 1
    epsE = 1
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -(epsE*(y - 2*x**2)*(-4*x) + x)
        d = -(y - epsE*2*x**2)
        return [a, b, c, d]
prms = np.sqrt(E)
pmax = np.sqrt(2*E)    
# Potential Function
if model_case == 1:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(x**2*y - 0.33333*y**3)
else:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(2*x**4 - 2*x**2*y)

fig = plt.figure(1)
contr = plt.contourf(V,100, cmap=cm.coolwarm, vmin = 0, vmax = 10)
fig.colorbar(contr, shrink=0.5, aspect=5)    
fig = plt.figure(2)    # new figure for the trajectories and Poincaré section

repnum = 250
mulnum = 64/repnum

for reploop  in range(repnum):
    px1 = 2*(np.random.random()-0.499)*pmax
    py1 = np.sign(np.random.random()-0.499)*np.real(np.sqrt(2*(E-px1**2/2)))
    xp1 = 0
    yp1 = 0
    x_y_z_w0 = [xp1, yp1, px1, py1]
    tspan = np.linspace(1,1000,10000)
    x_t = integrate.odeint(flow_deriv, x_y_z_w0, tspan)
    siztmp = np.shape(x_t)
    siz = siztmp[0]

    if reploop % 50 == 0:
        lines = plt.plot(x_t[:,0],x_t[:,1])
        plt.setp(lines, linewidth=0.5)

    y1 = x_t[:,0]
    y2 = x_t[:,1]
    y3 = x_t[:,2]
    y4 = x_t[:,3]
    py = np.zeros(shape=(2*repnum,))
    yvar = np.zeros(shape=(2*repnum,))
    cnt = -1
    last = y1[1]
    for loop in range(2,siz):
        if (last < 0) and (y1[loop] > 0):
            cnt = cnt+1
            del1 = -y1[loop-1]/(y1[loop] - y1[loop-1])
            py[cnt] = y4[loop-1] + del1*(y4[loop]-y4[loop-1])
            yvar[cnt] = y2[loop-1] + del1*(y2[loop]-y2[loop-1])
        last = y1[loop]          # update every step so each upward crossing is caught
    lines = plt.plot(yvar[0:cnt+1],py[0:cnt+1],'o',ms=1)   # plot only the filled entries

plt.show()

Networks, Synchronization and Emergence

A central paradigm of nonlinear science is the emergence of patterns and organized behavior from seemingly random interactions among underlying constituents.  Emergent phenomena are among the most awe-inspiring topics in science.  Crystals are emergent, forming slowly from solutions of reagents.  Life is emergent, arising out of the chaotic soup of organic molecules on Earth (or on some distant planet).  Intelligence is emergent, and so is consciousness, arising from the interactions among billions of neurons.  Ecosystems are emergent, based on competition and symbiosis among species.  Economies are emergent, based on the transfer of goods and money spanning scales from the local bodega to the global economy.

One of the common underlying properties of emergence is the existence of networks of interactions.  Networks and network science are topics of great current interest, driven by the rise of the World Wide Web and social networks.  But networks are ubiquitous and have long been the topic of research into complex and nonlinear systems.  Networks provide a scaffold for understanding many emergent systems, allowing one to think of isolated elements, like molecules or neurons, that interact with many others, like the neighbors in a crystal or distant synaptic connections.

From the point of view of modern dynamics, the state of a node can be a variable or a “dimension” and the interactions among links define the functions of the vector flow equation.  Emergence is then something that “emerges” from the dynamical flow as many elements interact through complex networks to produce simple or emergent patterns.

Synchronization is a form of emergence that happens when lots of independent oscillators, each vibrating at their own personal frequency, are coupled together to push and pull on each other, entraining all the individual frequencies into one common global oscillation of the entire system.  Synchronization plays an important role in the solar system, explaining why the Moon always shows one face to the Earth, why Saturn’s rings have gaps, and why asteroids are mainly kept away from colliding with the Earth.  Synchronization plays an even more important function in biology where it coordinates the beating of the heart and the functioning of the brain.

One of the most dramatic examples of synchronization is the Kuramoto synchronization phase transition. This occurs when a large set of individual oscillators with differing natural frequencies interact with each other through a weak nonlinear coupling.  For small coupling, all the individual nodes oscillate at their own frequency.  But as the coupling increases, there is a sudden coalescence of all the frequencies into a single common frequency.  This mechanical phase transition, called the Kuramoto transition, has many of the properties of a thermodynamic phase transition, including a solution that utilizes mean field theory.

Fig. 3 The Kuramoto model for the nonlinear coupling of N simple phase oscillators. The term in light blue is the simple phase oscillator. The term in magenta is the global nonlinear coupling that connects each oscillator to every other.
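In equation form, the model of Fig. 3 is the standard Kuramoto system of N globally coupled phase oscillators (the same sinusoidal coupling that appears in the code below):

```latex
\dot{\theta}_k = \omega_k + \frac{g}{N}\sum_{j=1}^{N} \sin\!\left(\theta_j - \theta_k\right),
\qquad k = 1, \dots, N
```

The first term (light blue in the figure) is the free phase oscillator running at its natural frequency ωk, and the sum (magenta) is the all-to-all nonlinear coupling that entrains the phases.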

The simulation of 20 Poincaré phase oscillators with global coupling is shown in Fig. 4 as a function of increasing coupling coefficient g. The original individual frequencies are spread randomly. The oscillators with similar frequencies are the first to synchronize, forming small clumps that then synchronize with other clumps of oscillators, until all oscillators are entrained to a single compromise frequency. The Kuramoto phase transition is not sharp in this case because the value N = 20 is too small. If the simulation is run with 200 oscillators, there is a sudden transition from unsynchronized to synchronized oscillation at a threshold value of g.
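The qualitative behavior can be reproduced with a minimal, self-contained sketch (an illustration, not the author's networkx program below; the oscillator count, frequency spread, and Euler time step are assumptions chosen for speed), using the Kuramoto order parameter r = |⟨e^(iθ)⟩| to measure entrainment:

```python
import numpy as np

def kuramoto_order(g, N=50, steps=4000, dt=0.01, seed=1):
    """Euler-integrate dtheta_k/dt = omega_k + (g/N) sum_j sin(theta_j - theta_k)
    and return the order parameter r = |<exp(i theta)>| averaged over late times."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.2, N)          # spread of natural frequencies
    theta = rng.uniform(0.0, 2*np.pi, N)     # random initial phases
    rs = []
    for step in range(steps):
        # all-to-all coupling: entry [k] sums sin(theta_j - theta_k) over j
        coupling = (g/N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
        if step > steps // 2:                # discard the transient
            rs.append(np.abs(np.exp(1j*theta).mean()))
    return float(np.mean(rs))

r_weak = kuramoto_order(0.0)     # no coupling: r stays near the 1/sqrt(N) noise floor
r_strong = kuramoto_order(1.0)   # coupling well above threshold: phases lock, r near 1
print(r_weak, r_strong)
```

Sweeping g between these two extremes traces out the transition of Fig. 4; with only tens of oscillators the transition is smeared, just as described above.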

Fig. 4 The Kuramoto model for 20 Poincaré oscillators showing the frequencies as a function of the coupling coefficient.

The Kuramoto phase transition is one of the most important fundamental examples of modern dynamics because it illustrates many facets of nonlinear dynamics in a very simple way. It highlights the importance of nonlinearity, the simplification of phase oscillators, the use of mean field theory, the underlying structure of the network, and the example of a mechanical analog to a thermodynamic phase transition. It also has analytical solutions because of its simplicity, while still capturing the intrinsic complexity of nonlinear systems.
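The analytical solution mentioned here comes from mean-field theory. Writing n(ω) for the distribution of natural frequencies, the standard textbook results (quoted here for reference, not derived in this post) are

```latex
r e^{i\psi} = \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j},
\qquad
g_c = \frac{2}{\pi\, n(0)}
```

where the order parameter r jumps from its incoherent value near zero toward unity as the coupling g passes the critical value gc set by the peak of the frequency distribution.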

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat May 11 08:56:41 2019

@author: nolte

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019)
"""


import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import networkx as nx
try:
    from UserFunction import linfit          # author's least-squares helper
except ImportError:
    def linfit(x, y):                        # fallback: slope and intercept from numpy
        m, b = np.polyfit(x, y, 1)
        return m, b
import time

tstart = time.time()


Nfac = 25   # 25
N = 50      # 50
width = 0.2

# model_case 1 = complete graph (Kuramoto transition)
# model_case 2 = Erdos-Renyi
model_case = int(input('Input Model Case (1-2)'))
if model_case == 1:
    facoef = 0.2
    nodecouple = nx.complete_graph(N)
elif model_case == 2:
    facoef = 5
    nodecouple = nx.erdos_renyi_graph(N,0.1)

# function: omegout, yout = coupleN(G)
def coupleN(G):

    # function: yd = flow_deriv(y,t0)
    def flow_deriv(y,t0):
        yd = np.zeros(shape=(N,))
        for omloop in range(N):
            temp = omega[omloop]
            linksz = G.nodes[omloop]['numlink']       # G.nodes replaces the deprecated G.node
            for cloop in range(linksz):
                cindex = G.nodes[omloop]['link'][cloop]
                g = G.nodes[omloop]['coupling'][cloop]
                temp = temp + g*np.sin(y[cindex]-y[omloop])
            yd[omloop] = temp
        return yd
    # end of function flow_deriv(y,t0)

    mnomega = 1.0
    for nodeloop in range(N):
        omega[nodeloop] = G.nodes[nodeloop]['element']
    x_y_z = omega    
    # Settle-down Solve for the trajectories
    tsettle = 100
    t = np.linspace(0, tsettle, tsettle)
    x_t = integrate.odeint(flow_deriv, x_y_z, t)
    x0 = x_t[tsettle-1,0:N]
    t = np.linspace(1,1000,1000)
    y = integrate.odeint(flow_deriv, x0, t)
    siztmp = np.shape(y)
    sy = siztmp[0]
    # Fit the frequency
    m = np.zeros(shape = (N,))
    w = np.zeros(shape = (N,))
    mtmp = np.zeros(shape=(4,))
    btmp = np.zeros(shape=(4,))
    for omloop in range(N):
        if np.remainder(sy,4) == 0:
            mtmp[0],btmp[0] = linfit(t[0:sy//2],y[0:sy//2,omloop])
            mtmp[1],btmp[1] = linfit(t[sy//2+1:sy],y[sy//2+1:sy,omloop])
            mtmp[2],btmp[2] = linfit(t[sy//4+1:3*sy//4],y[sy//4+1:3*sy//4,omloop])
            mtmp[3],btmp[3] = linfit(t,y[:,omloop])
        else:
            sytmp = int(4*np.floor(sy/4))       # truncate to a multiple of 4 for the slices
            mtmp[0],btmp[0] = linfit(t[0:sytmp//2],y[0:sytmp//2,omloop])
            mtmp[1],btmp[1] = linfit(t[sytmp//2+1:sytmp],y[sytmp//2+1:sytmp,omloop])
            mtmp[2],btmp[2] = linfit(t[sytmp//4+1:3*sytmp//4],y[sytmp//4+1:3*sytmp//4,omloop])
            mtmp[3],btmp[3] = linfit(t[0:sytmp],y[0:sytmp,omloop])

        #m[omloop] = np.median(mtmp)
        m[omloop] = np.mean(mtmp)
        w[omloop] = mnomega + m[omloop]
    omegout = m
    yout = y
    return omegout, yout
    # end of function: omegout, yout = coupleN(G)

Nlink = N*(N-1)//2      
omega = np.zeros(shape=(N,))
omegatemp = width*(np.random.rand(N)-1)
meanomega = np.mean(omegatemp)
omega = omegatemp - meanomega
sto = np.std(omega)

lnk = np.zeros(shape = (N,), dtype=int)
for loop in range(N):
    nodecouple.nodes[loop]['element'] = omega[loop]
    nodecouple.nodes[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.nodes[loop]['numlink'] = np.size(list(nx.neighbors(nodecouple,loop)))
    lnk[loop] = np.size(list(nx.neighbors(nodecouple,loop)))

avgdegree = np.mean(lnk)
mnomega = 1

facval = np.zeros(shape = (Nfac,))
yy = np.zeros(shape=(Nfac,N))
xx = np.zeros(shape=(Nfac,))
for facloop in range(Nfac):

    fac = facoef*(16*facloop/(Nfac))*(1/(N-1))*sto/mnomega
    for nodeloop in range(N):
        nodecouple.nodes[nodeloop]['coupling'] = np.zeros(shape=(lnk[nodeloop],))
        for linkloop in range (lnk[nodeloop]):
            nodecouple.nodes[nodeloop]['coupling'][linkloop] = fac

    facval[facloop] = fac*avgdegree
    omegout, yout = coupleN(nodecouple)                           # Here is the subfunction call for the flow

    for omloop in range(N):
        yy[facloop,omloop] = omegout[omloop]

    xx[facloop] = facval[facloop]

lines = plt.plot(xx,yy)
plt.setp(lines, linewidth=0.5)

elapsed_time = time.time() - tstart
print('elapsed time = ',format(elapsed_time,'.2f'),'secs')

The Web of Life

Ecosystems are among the most complex systems on Earth.  The complex interactions among hundreds or thousands of species may lead to steady homeostasis in some cases, to growth and collapse in other cases, and to oscillations or chaos in yet others.  But the definition of species can be broad and abstract, referring to businesses and markets in economic ecosystems, or to cliques and acquaintances in social ecosystems, among many other examples.  These systems are governed by the laws of evolutionary dynamics that include fitness and survival as well as adaptation.

The dimensionality of the dynamical spaces for these systems extends to hundreds or thousands of dimensions—far too complex to visualize when thinking in four dimensions is already challenging.  Yet there are shared principles and common behaviors that emerge even here.  Many of these can be illustrated in a simple three-dimensional system that is represented by a triangular simplex that can be easily visualized, and then generalized back to ultra-high dimensions once they are understood.

A simplex is a closed (N-1)-dimensional geometric figure that describes a zero-sum game (game theory is an integral part of evolutionary dynamics) among N competing species.  For instance, a two-simplex is a triangle that captures the dynamics among three species.  Each vertex of the triangle represents the situation when the entire ecosystem is composed of a single species.  Anywhere inside the triangle represents the situation when all three species are present and interacting.

A classic model of interacting species is the replicator equation. It allows for a fitness-based proliferation and for trade-offs among the individual species. The replicator dynamics equations are shown in Fig. 5.

Fig. 5 Replicator dynamics has a surprisingly simple form, but with surprisingly complicated behavior. The key elements are the fitness and the payoff matrix. The fitness relates to how likely the species will survive. The payoff matrix describes how one species gains at the loss of another (although symbiotic relationships also occur).
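Written out (matching the fitness and average-fitness computations in the code below), the replicator equations of Fig. 5 are

```latex
\dot{x}_i = x_i\left(f_i - \bar{\phi}\right), \qquad
f_i = \sum_{j} A_{ij}\, x_j, \qquad
\bar{\phi} = \sum_{i} f_i\, x_i
```

where xi is the population fraction of species i, fi is its fitness, Aij is the payoff matrix, and φ̄ is the average fitness, whose subtraction keeps the populations on the simplex Σi xi = 1.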

The population dynamics on the 2D simplex are shown in Fig. 6 for several different payoff matrices. The matrix values are shown in color and help interpret the trajectories. For instance, the simplex on the upper right shows a fixed-point center. This reflects the antisymmetric character of the payoff matrix around the diagonal. The stable spiral on the lower left has a nearly antisymmetric payoff matrix, but with unequal off-diagonal magnitudes. The other two cases show central saddle points with stable fixed points on the boundary. A very large variety of behaviors is possible for this very simple system. The Python program is shown below.
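As a structural check, a minimal Euler-step sketch (using a hypothetical rock-paper-scissors payoff matrix for concreteness, not one of the random matrices generated by the program below) confirms that the replicator flow stays on the simplex:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i*(f_i - phi), with f = A x and phi = x.f"""
    f = A @ x              # fitness of each species
    phi = x @ f            # average fitness of the population
    return x + dt * x * (f - phi)

# rock-paper-scissors payoff: antisymmetric with zero diagonal (model_case 1 style)
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

x = np.array([0.5, 0.3, 0.2])      # initial population fractions
for _ in range(1000):
    x = replicator_step(x, A)

print(x, x.sum())   # the three fractions cycle but always sum to one
```

For an antisymmetric payoff the average fitness vanishes identically (x·Ax = 0), so the total population fraction is conserved exactly and the trajectory orbits the interior fixed point at (1/3, 1/3, 1/3), the "center" case of Fig. 6.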

Fig. 6 Payoff matrix and population simplex for four random cases: Upper left is an unstable saddle. Upper right is a center. Lower left is a stable spiral. Lower right is a marginal case.

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu May  9 16:23:30 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt


def tripartite(x,y,z):

    sm = x + y + z
    xp = x/sm
    yp = y/sm
    f = np.sqrt(3)/2
    y0 = f*xp
    x0 = -0.5*xp - yp + 1;
    lines = plt.plot(x0,y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1],[0, 0],'k',linewidth=1)
    plt.plot([0, 0.5],[0, f],'k',linewidth=1)
    plt.plot([1, 0.5],[0, f],'k',linewidth=1)

def solve_flow(y,tspan):
    def flow_deriv(y, t0):
    #"""Compute the time-derivative ."""
        f = np.zeros(shape=(N,))
        for iloop in range(N):
            ftemp = 0
            for jloop in range(N):
                ftemp = ftemp + A[iloop,jloop]*y[jloop]
            f[iloop] = ftemp
        phitemp = phi0          # Can adjust this from 0 to 1 to stabilize (but Nth population is no longer independent)
        for loop in range(N):
            phitemp = phitemp + f[loop]*y[loop]
        phi = phitemp
        yd = np.zeros(shape=(N,))
        for loop in range(N-1):
            yd[loop] = y[loop]*(f[loop] - phi);
        if np.abs(phi0) < 0.01:             # average fitness maintained at zero
            yd[N-1] = y[N-1]*(f[N-1]-phi);
        else:                                     # non-zero average fitness
            ydtemp = 0
            for loop in range(N-1):
                ydtemp = ydtemp - yd[loop]
            yd[N-1] = ydtemp
        return yd

    # Solve for the trajectories
    t = np.linspace(0, tspan, 701)
    x_t = integrate.odeint(flow_deriv,y,t)
    return t, x_t

# model_case 1 = zero diagonal
# model_case 2 = zero trace
# model_case 3 = asymmetric (zero trace)
print(' ')
print('Case: 1 = antisymm zero diagonal')
print('Case: 2 = antisymm zero trace')
print('Case: 3 = random')
model_case = int(input('Enter the Model Case (1-3)'))

N = 3
asymm = 3      # 1 = zero diag (replicator eqn)   2 = zero trace (autocatylitic model)  3 = random (but zero trace)
phi0 = 0.001            # average fitness (positive number) damps oscillations
T = 100;

if model_case == 1:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
    A = Atemp

if model_case == 2:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
        Atemp[yloop,yloop] = 2*(0.5 - np.random.random(1))
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N
if model_case == 3:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N

im = plt.matshow(A, 3, cmap='seismic')  # hsv, seismic, bwr
cbar = im.figure.colorbar(im)

M = 20
delt = 1/M
ep = 0.01;

tempx = np.zeros(shape = (3,))
for xloop in range(M):
    tempx[0] = delt*(xloop)+ep;
    for yloop in range(M-xloop):
        tempx[1] = delt*yloop+ep
        tempx[2] = 1 - tempx[0] - tempx[1]
        x0 = tempx/np.sum(tempx);          # initial populations
        tspan = 70
        t, x_t = solve_flow(x0,tspan)
        y1 = x_t[:,0]
        y2 = x_t[:,1]
        y3 = x_t[:,2]
        plt.figure(2)
        lines = plt.plot(t,y1,t,y2,t,y3)
        plt.setp(lines, linewidth=0.5)
        plt.ylabel('Population')
        plt.figure(4)
        tripartite(y1,y2,y3)      # replot the trajectory on the triangular simplex

plt.show()

Topics in Modern Dynamics

These three examples are just the tip of the iceberg. The topics in modern dynamics are almost numberless. Any system that changes in time is a potential object of study in modern dynamics, from chaotic orbits and synchronizing networks to evolving ecosystems.


D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019) (The physics and the derivations of the equations for the examples in this blog can be found here.)

D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018) (The historical origins of the examples in this blog can be found here.)

Physics and the Zen of Motorcycle Maintenance

When I arrived at Berkeley in 1981 to start graduate school in physics, the single action I took that secured my future as a physicist—more than spending scores of sleepless nights studying quantum mechanics by Schiff or electromagnetism by Jackson—was buying a motorcycle!  Why motorcycle maintenance should be the Tao of Physics was beyond me at the time—but Zen is transcendent.


The Quantum Sadistics

In my first semester of grad school I made two close friends, Keith Swenson and Kent Owen, as we stayed up all night working on impossible problem sets and hand-grading a thousand midterms for an introductory physics class that we were TAs for.  The camaraderie was made tighter when Keith and Kent bought motorcycles, and I quickly followed suit, buying my first wheels—a 1972 Suzuki GT550.  It was an old bike, but in good shape and ready to ride, so the three of us began touring around the San Francisco Bay Area together on weekend rides.  We went out to Mt. Tam, or up to Vallejo, or around the North and South Bay.  Kent thought this was a very cool way for physics grads to spend their time, and he came up with a name for our gang—the “Quantum Sadistics”!  He even made a logo for our “colors”: an eye shedding a teardrop shaped like the dagger of a quantum raising operator.

At the end of the first year, Keith left the program, not sure he was the right material for a physics degree, and moved to San Diego to head up the software arm of a start-up company that he had founder’s shares in.  Kent and I continued at Berkeley, but soon got too busy to keep up the weekend rides.  My Suzuki was my only set of wheels, so I tooled around with it, keeping it running when it really didn’t want to go any further.  I had to pull its head and dive deep into it to adjust the rockers.  It stayed together enough for a trip all the way down Highway 1 to San Diego to visit Keith and back, and a trip all the way up Highway 1 to Seattle to visit my grandparents and back, having ridden the full length of the Pacific Coast from Tijuana to Vancouver.  Motorcycle maintenance was always part of the process.

Andrew Lange

After a few semesters as a TA for the large lecture courses in physics, it was time to try something real, and I noticed a job opening posted on a bulletin board.  It was for a temporary research position in Prof. Paul Richards’ group.  I had TA-ed for him once, but knew nothing of his research, and the interview wasn’t even with him, but with a graduate student named Andrew Lange.  I met with Andrew in a ground-floor lab on the south side of Birge Hall.  He was soft-spoken and congenial, with round architect glasses and fine sandy hair, and had about him a hint of something exotic.  He was encouraging in his reactions to my answers.  Then he asked if I had a motorcycle.  I wasn’t sure if he already knew, or whether it was a test of some kind, so I said that I did.  “Do you work on it?”, he asked.  I remember my response.  “Not really,” I said.  In my mind I was no mechanic.  Adjusting the overhead rockers was nothing too difficult.  It wasn’t like I had pulled the pistons.

“It’s important to work on your motorcycle.”

For some reason, he didn’t seem to like my answer.  He probed further.  “Do you change the tires or the oil?”.  I admitted that I did, and on further questioning, he slowly dragged out my story of pulling the head and adjusting the cams.  He seemed to relax, like he had gotten to the bottom of something.  He then gave me some advice, focusing on me with a strange intensity and stressing very carefully, “It’s important to work on your motorcycle.”

I got the job and joined Paul Richards’ research group.  It was a heady time.  Andrew was designing a rocket-borne far-infrared spectrometer that would launch on a sounding rocket from Nagoya, Japan.  The spectrometer was to make the most detailed measurements ever of the cosmic microwave background (CMB) radiation during a five-minute free fall at the edge of space, before plunging into the Pacific Ocean.  But the spectrometer was missing a set of key optical elements known as far-infrared dichroic beam splitters.  Without these beam splitters, the spectrometer was just a small chunk of machined aluminum.  It became my job to create these beam splitters.  The problem was that no one knew how to do it.  So with Andrew’s help, I scanned the literature, and we settled on a design related to results from the Ulrich group in Germany.

Our spectral range was different from previous cases, so I created a new methodology using small mylar sheets, patterned with photolithography, evaporating thin films of aluminum on both sides of the mylar.  My first photomasks were made using an amazingly archaic technology known as rubylith that had been used in the 70’s to fabricate low-level integrated circuits.  Andrew showed me how to cut the fine strips of red plastic tape at a large scale that was then photo-reduced for contact printing.  I modeled the beam splitters with equivalent circuits to predict the bandpass spectra, and learned about Kramers-Kronig transforms to explain an additional phase shift that appeared in the interferometric tests of the devices.  These were among the first metamaterials ever created (although this was before that word existed), with an engineered magnetic response for millimeter waves.  I fabricated the devices in the silicon fab on the top floor of the electrical engineering building on the Berkeley campus.  It was one of the first university-based VLSI fabs in the country, with high-class clean rooms and us in bunny suits.  But I was doing everything but silicon, modifying all their carefully controlled processes in the photolithography bay.  I made and characterized a full set of 5 of these high-tech beam splitters—right before I was ejected from the lab and banned.  My processes were incompatible with the VLSI activities of the rest of the students.  Fortunately, I had completed the devices, with a little extra material to spare.

I rode my motorcycle with Andrew and his friends around the Bay Area and up to Napa and the wine country.  One memorable weekend Paul had all his grad students come up to his property in Mendocino County to log trees.  Of course, we rode up on our bikes.  Paul’s land was high on a coastal mountain next to the small winery owned by Charles Kittel (the famous Kittel of “Solid State Physics”).  The weekend was rustic.  The long-abandoned hippie-shack on the property was uninhabitable so we roughed it.  After two days of hauling and stacking logs, I took a long way home riding along dark roads under tall redwoods.

Andrew moved his operation to Nagoya University, Japan, six months before the launch date.  The spectrometer checked out perfectly.  As launch day approached, it was mounted into the nose cone of the sounding rocket, continuing to pass all calibration tests.  On the day of launch, we held our breath back in Berkeley.  There was a 12-hour time difference; then we received the report.  The launch was textbook perfect, but at the critical moment when the explosive nose-cone bolts were supposed to blow, they failed.  The cone stayed firmly in place, and the spectrometer telemetered back perfect measurements of the inside of the rocket all the way down until it crashed into the Pacific, and the last 9 months of my life sank into the depths of the Mariana Trench.  I read the writing on the thin aluminum wall, and the following week I was interviewing for a new job up at Lawrence Berkeley Laboratory, the DOE national lab high on the hill overlooking the Berkeley campus.

Eugene Haller

The instrument I used in Paul Richards’ lab to characterize my state-of-the-art dichroic beam splitters was a far-infrared Fourier-transform spectrometer that Paul had built using a section of 1-foot-diameter glass sewer pipe.  Bob McMurray, a graduate student working with Prof. Eugene Haller on the hill, was a routine user of this makeshift spectrometer, and I had been looking over Bob’s shoulder at the interesting data he was taking on shallow defect centers in semiconductors.  The work sounded fascinating, and as Andrew’s Japanese sounding rocket settled deeper into the ocean floor, I arranged to meet with Eugene Haller in his office at LBL.

I was always clueless about interviews.  I never thought about them ahead of time, and never knew what I needed to say.  On the other hand, I always had a clear idea of what I wanted to accomplish.  I think this gave me a certain solid confidence that may have come through.  So I had no idea what Eugene was getting at as we began the discussion.  He asked me some questions about my project with Paul, which I am sure I answered with lots of details about Kramers-Kronig and the like.  Then came the question strangely reminiscent of when I first met Andrew Lange:  Did I work on my car?  Actually, I didn’t have a car, I had a motorcycle, and said so.  Well then, did I work on my motorcycle?  He had that same strange intensity that Andrew had when he asked me roughly the same question.  He looked like a prosecuting attorney waiting for the suspect to incriminate himself.  Once again, I described pulling the head and adjusting the rockers and cams.

Eugene leaned back in his chair and relaxed.  He began talking in the future tense about the project I would be working on.  It was a new project for the new Center for Advanced Materials at LBL, for which he was the new director.  The science revolved around semiconductors and especially a promising new material known as GaAs.  He never actually said I had the job … all of a sudden it just seemed to be assumed.  When the interview was over, he simply asked me to give him an answer in a few days if I would come up and join his group.

I didn’t know it at the time, but Eugene had a beautiful vintage Talbot roadster that was his baby.  One of his loves was working on his car.  He was a real motorhead and knew everything about the mechanics.  He was also an avid short-wave radio enthusiast and knew as much about vacuum tubes as he did about transistors.  Working on cars (or motorcycles) was a guaranteed ticket into his group.  At a recent gathering of his former students and colleagues for his memorial, similar stories circulated about that question: Did you work on your car?  The answer to this one question mattered more than any answer you gave about physics.

I joined Eugene Haller’s research group at LBL in March of 1984 and received my PhD on topics of semiconductor physics in 1988.  My association with his group opened the door to a post-doc position at AT&T Bell Labs and then to a faculty position at Purdue University where I currently work on the physics of oncology in medicine and have launched two biotech companies—all triggered by the simple purchase of a motorcycle.

Andrew Lange’s career was particularly stellar.  He joined the faculty of Caltech, and I was amazed to read in Science magazine in 2004 or 2005, in a section called “Nobel Watch”, that he was a candidate for the Nobel Prize for his work on BOOMERanG, which had launched and monitored a high-altitude balloon as it circled the South Pole taking unprecedented data on the CMB that constrained the amount of dark matter in the universe.  Around that same time I invited Paul Richards to Purdue to give our weekly physics colloquium and talk about his own work on MAXIMA.  There was definitely a buzz going around that the BOOMERanG and MAXIMA collaborations were being talked about in Nobel circles.  The next year, the 2006 Nobel Prize was indeed awarded for work on the cosmic microwave background, but to Mather and Smoot for their earlier work on the COBE satellite.

Then, in January 2010, I was shocked to read in the New York Times that Andrew, that vibrant, sharp-eyed, brilliant physicist, had been found lifeless in a hotel room, dead from asphyxiation.  The police ruled it a suicide.  Apparently few had known of his lifelong struggle with depression, and it had finally overwhelmed him.  Perhaps he had sold his motorcycle by then.  But I wonder whether, if he had pulled out his wrenches and gotten to work on its engine, he might have been enveloped by the zen of motorcycle maintenance, and the crisis would have passed him by.  As Andrew had told me so many years ago, and as I wish I could have reminded him, “It’s important to work on your motorcycle.”

2018 Nobel Prize in Laser Physics

When I arrived at Bell Labs in 1988 on a postdoctoral appointment to work with Alastair Glass in the Department of Optical Materials, the office I shared with Don Olsen was next door to the mysterious office of Art Ashkin.  Art was a legend in the corridors of a place of many legends.  Bell Labs in the late ’80s, even after the famous divestiture of AT&T into the Baby Bells, was a place of mythic proportions.  At the Holmdel site in New Jersey, home of the laser physics branch of Bell Labs, the lunch table was a who’s who of laser science:  Chuck Shank, Daniel Chemla, Wayne Knox, Linn Mollenauer.  A new idea would be floated at lunchtime, and the resulting Phys Rev Letter would be submitted within the month … that was the speed of research at Bell Labs.  If you needed expertise, or hit a snag in an experiment, the world’s expert on almost anything was just down a hallway, ready to help solve it.

One of the key differences I have noted about the Bell Labs at that time, that set it apart from any other research organization I have experienced, whether at national labs like Lawrence Berkeley Laboratory, or at universities, was the genuine awe in people’s voices as they spoke about the work of their colleagues.  This was the tone as people talked about Steven Chu, recently departed from Bell Labs for Stanford, and especially Art Ashkin.

Art Ashkin had been at Bell Labs for nearly 40 years when I arrived.  He was a man of many talents, delving into topics as diverse as the photorefractive effect (which I had been hired to pursue in new directions), nonlinear optics in fibers (one of the chief interests of Holmdel in those days of exponential growth of fiber telecom) and second harmonic generation.  But his main scientific impact had been in the field of optical trapping.

Optical trapping uses focused laser fields to exert minute forces on minute targets.  If multiple lasers are directed in opposing directions, a small optical trap is formed.  Applied to atoms, this is what Chu used for atom trapping and cooling; applied to small particles, such as individual biological cells, the trapping phenomenon became known as “optical tweezers”, because by moving the laser beams the small targets could be moved about just as if they were being held by tiny tweezers.
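For a particle much smaller than the optical wavelength, the physics can be captured in a back-of-the-envelope Rayleigh-regime sketch (not the full Mie calculation): the laser field induces a dipole in the particle, and that dipole is pulled up the intensity gradient toward the focus,

```latex
% Gradient (dipole) force on a small polarizable particle in a focused beam:
%   \alpha        -- particle polarizability
%   I(\mathbf{r}) -- local optical intensity
\mathbf{F}_{\mathrm{grad}}
  \;=\; \tfrac{1}{2}\,\alpha\,\nabla \langle E^{2} \rangle
  \;\propto\; \alpha\,\nabla I(\mathbf{r})
```

Because the force points toward the intensity maximum (for positive polarizability), a tightly focused beam, or a pair of opposed beams, forms a stable trap at the focus; this is the essence of optical tweezers.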

In the late ’80s, Steven Chu was on the rise as one of the leaders in the field of optical physics, receiving many prestigious awards for his applications of optical traps, while many felt that Art was being passed over.  This feeling intensified when Chu received the Nobel Prize in 1997 for optical trapping (shared with Cohen-Tannoudji and Phillips) but Art did not.  Several Nobel Prizes in laser physics later, most felt that Art’s chances were over … until this morning, Oct. 2, 2018, when it was announced that Art, now age 96, was finally receiving the Nobel Prize.

Around the same time that Art and Steve were developing optical traps at Bell Labs, using optical gradients to generate forces on atoms and particles, Gerard Mourou and Donna Strickland in the optics department at the University of Rochester invented chirped pulse amplification (CPA), the technique for amplifying ultrashort laser pulses to extreme intensities.  In a closely related effect, optical gradients in nonlinear crystals can trap a focused beam of light inside a laser cavity, causing a stable pulsing effect called Kerr-lens modelocking.  The optical pulses in lasers like the Ti:Sapphire laser had ultrafast durations around 100 femtoseconds with extremely stable repetition rates.  These pulse trains were the time-domain equivalent of optical combs in the frequency domain (for which Hall and Hänsch received the Nobel Prize in Physics in 2005).  Before Kerr-lens modelocking, it took great skill with very nasty dye lasers to get femtosecond pulses in a laboratory.  But by the early ’90s, anyone who wanted femtosecond pulses could get them easily just by buying a femtosecond modelocked laser kit from Mourou’s company, Clark-MXR.  These types of lasers moved into ophthalmology and laser eye surgery, becoming one of the most common and most valuable commercial lasers.

Donna Strickland and Gerard Mourou shared the 2018 Nobel Prize with Art Ashkin, their method of generating intense ultrashort pulses of light complementing Art’s trapping of material particles by light gradients.

Galileo Unbound

Book Outline Topics

  • Chapter 1: Flight of the Swallows
    • Introduction to motion and trajectories
  • Chapter 2: A New Scientist
    • Galileo’s Biography
  • Chapter 3: Galileo’s Trajectory
    • His study of the science of motion
    • Publication of Two New Sciences
  • Chapter 4: On the Shoulders of Giants
    • Newton’s Principia
    • The Principle of Least Action: Maupertuis, Euler, and Voltaire
    • Lagrange and his new dynamics
  • Chapter 5: Geometry on my Mind
    • Differential geometry of Gauss and Riemann
    • Vector spaces from Grassmann to Hilbert
    • Fractals: Cantor, Weierstrass, Hausdorff
  • Chapter 6: The Tangled Tale of Phase Space
    • Liouville and Jacobi
    • Entropy and Chaos: Clausius, Boltzmann and Poincaré
    • Phase Space: Gibbs and Ehrenfest
  • Chapter 7: The Lens of Gravity
    • Einstein and the warping of light
    • Black Holes: Schwarzschild’s radius
    • Oppenheimer versus Wheeler
    • The Golden Age of General Relativity
  • Chapter 8: On the Quantum Footpath
    • Heisenberg’s matrix mechanics
    • Schrödinger’s wave mechanics
    • Bohr’s complementarity
    • Einstein and entanglement
    • Feynman and the path-integral formulation of quantum
  • Chapter 9: From Butterflies to Hurricanes
    • KAM theory of stability of the solar system
    • Steven Smale’s horseshoe
    • Lorenz’ butterfly: strange attractor
    • Feigenbaum and chaos
  • Chapter 10: Darwin in the Clockworks
    • Charles Darwin and the origin of species
    • Fibonacci’s bees
    • Economic dynamics
    • Mendel and the landscapes of life
    • Evolutionary dynamics
    • Linus Pauling’s molecular clock and Dawkins meme
  • Chapter 11: The Measure of Life
    • Huygens, von Helmholtz and Rayleigh oscillators
    • Neurodynamics
    • Euler and the Seven Bridges of Königsberg
    • Network theory: Strogatz and Barabási

In June of 1633 Galileo was found guilty of heresy and sentenced to house arrest for what remained of his life. He was a Renaissance Prometheus, bound for giving knowledge to humanity. With little to do, and allowed few visitors, he at last had the uninterrupted time to finish his life’s labor. When Two New Sciences was published in 1638, it contained the seeds of the science of motion that would mature into a grand and abstract vision that permeates all science today. In this way, Galileo was unbound, not by Hercules, but by his own hand as he penned the introduction to his work:

. . . what I consider more important, there have been opened up to this vast and most excellent science, of which my work is merely the beginning, ways and means by which other minds more acute than mine will explore its remote corners.

            Galileo Galilei (1638) Two New Sciences

Galileo Unbound (Oxford University Press, 2018) explores the continuous thread from Galileo’s discovery of the parabolic trajectory to modern dynamics and complex systems. It is a history of expanding dimension and increasing abstraction, until today we speak of entangled quantum particles moving among many worlds, and we envision our lives as trajectories through spaces of thousands of dimensions. Remarkably, common themes persist that predict the evolution of species as readily as the orbits of planets. Galileo laid the foundation upon which Newton built a theory of dynamics that could capture the trajectory of the moon through space using the same physics that controlled the flight of a cannon ball. Late in the nineteenth century, concepts of motion expanded into multiple dimensions, and in the twentieth century geometry became the cause of motion rather than the result when Einstein envisioned the fabric of space-time warped by mass and energy, causing light rays to bend past the Sun. Possibly more radical was Feynman’s dilemma of quantum particles taking all paths at once—setting the stage for the modern fields of quantum field theory and quantum computing. Yet as concepts of motion have evolved, one thing has remained constant—the need to track ever more complex changes and to capture their essence—to find patterns in the chaos as we try to predict and control our world. Today’s ideas of motion go far beyond the parabolic trajectory, but even Galileo might recognize the common thread that winds through all these motions, drawing them together into a unified view that gives us the power to see, at least a little, through the mists shrouding the future.