Physics in the Age of Contagion. Part 2: The Second Wave of COVID-19

Since my last Blog on the bifurcation physics of COVID-19, most of the US has approached the crest of “the wave”, with the crest arriving sooner in hot spots like New York City and a few weeks later in rural areas like Lafayette, Indiana where I live. As of the posting of this Blog, most of the US is in lock-down with only a few hold-out states. Fortunately, this was sufficient to avoid the worst case scenarios of my last Blog, but we are still facing severe challenges.

There is good news! The second wave can be managed and minimized if we don’t come out of lock-down too soon.

One fall-out of the (absolutely necessary) lock-down is the serious damage done to the economy, which is now in its greatest contraction since the Great Depression. The longer the lock-down lasts, the deeper the damage and the longer the recovery. The single most important question at this point in time, as we approach the crest, is: when can we emerge from lock-down? This is a critical question. If we emerge too early, then the pandemic will re-kindle into a second wave that could exceed the first. But if we emerge later than necessary, then the economy may take a decade to fully recover. We need a Goldilocks solution: not too early and not too late. How do we assess that?

The Value of Qualitative Physics

In my previous Blog I laid out a very simple model called the Susceptible-Infected-Removed (SIR) model and provided a Python program whose parameters can be tweaked to explore the qualitative behavior of the model, answering questions like: What is the effect of longer or shorter quarantine periods? What role does social distancing play in saving lives? What happens if only a small fraction of the population pays attention and practices social distancing?

It is necessary to wait at least several weeks after the crest has passed before releasing the lock-down, to avoid the second wave.

It is important to note that none of the parameters in that SIR model are reliable and no attempt was made to fit the parameters to the actual data. To expert epidemiological modelers, this simplistic model is less than useless and potentially dangerous if wrong conclusions are arrived at and disseminated on the internet.

But here is the point: The actual numbers are less important than the functional dependences. What matters is how the solution changes as a parameter is changed. The Python programs allow non-experts to gain an intuitive understanding of the qualitative physics of the pandemic. For instance, it is valuable to gain a feeling of how sensitive the pandemic is to small changes in parameters. This is especially important because of the bifurcation physics of COVID-19 where very small changes can cause very different trajectories of the population dynamics.

In the spirit of the value of qualitative physics, I am extending here that simple SIR model to a slightly more sophisticated model that can help us understand the issues and parametric dependences of this question of when to emerge from lock-down. Again, no effort is made to fit actual data of this pandemic, but there are still important qualitative conclusions to be made.

The Two-Compartment SIR Model of COVID-19

To approach a qualitative understanding of what happens by varying the length of time of the country-wide shelter-in-place, it helps to think of two cohorts of the public: those who are compliant and conscientious, valuing the lives of others, and those who don’t care and are non-compliant.

Fig. 1 Two-compartment SIR model for compliant and non-compliant cohorts.

These two cohorts can each be modeled separately by their own homogeneous SIR models, but with a coupling between them, because even those who shelter in place must go out for food and medicines. The equations of this two-compartment model are

dIn/dt = -μ In + β knn In Sn + β knq Iq Sn
dSn/dt = -β knn In Sn - β knq Iq Sn
dIq/dt = -μ Iq + β kqn In Sq + β kqq Iq Sq
dSq/dt = -β kqn In Sq - β kqq Iq Sq

where the subscripts n and q refer to the non-compliant and the compliant cohorts, respectively, and I and S are the infected and susceptible populations. The coupling parameters are knn for the coupling among non-compliant individuals, knq for the effect of the compliant individuals on the non-compliant, kqn for the effect of the non-compliant individuals on the compliant, and kqq for the effect of the compliant cohort on themselves.

There are two time frames for the model. The first time frame is the period of lock-down, when the compliant cohort is sheltering in place and practicing good hygiene, but still needs to go out for food and medicines. (This model does not include first responders. They are an important cohort, but do not make up a large fraction of the national population.) The second time frame is after the lock-down is lifted. Even then, good practices by the compliant group are expected to continue, with the purpose of lowering infections among themselves and among others.

This two-compartment model has roughly 8 adjustable parameters, all of which can be varied to study their effects on the predictions. None of them are well known, but general trends still can be explored.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat March 21 2020

@author: nolte

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)

"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('SIR.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y_z_w,tspan):
        In, Sn, Iq, Sq = x_y_z_w
        
        Inp = -mu*In + beta*knn*In*Sn + beta*knq*Iq*Sn
        Snp = -beta*knn*In*Sn - beta*knq*Iq*Sn
        
        Iqp = -mu*Iq + beta*kqn*In*Sq + beta*kqq*Iq*Sq
        Sqp = -beta*kqn*In*Sq - beta*kqq*Iq*Sq
        
        return [Inp, Snp, Iqp, Sqp]
    
    x0 = [In0, Sn0, Iq0, Sq0]
    
    # Solve for the trajectories
    t = np.linspace(tlo, thi, thi-tlo)
    x_t = integrate.odeint(flow_deriv, x0, t)

   
    return t, x_t

beta = 0.02   # infection rate
dill = 5      # mean days infectious
mu = 1/dill   # decay rate
fnq = 0.3     # fraction not quarantining
fq = 1-fnq    # fraction quarantining
P = 330       # Population of the US in millions
mr = 0.002    # Mortality rate
dq = 90       # Days of lock-down (this is the key parameter)

# During quarantine
knn = 50      # Average connections per day for non-compliant group among themselves
kqq = 0       # Connections among compliant group
knq = 0       # Effect of compliant group on non-compliant
kqn = 5       # Effect of non-compliant group on compliant

initfrac = 0.0001          # Initial conditions:
In0 = initfrac*fnq         # infected non-compliant
Sn0 = (1-initfrac)*fnq     # susceptible non-compliant
Iq0 = initfrac*fq          # infected compliant
Sq0 = (1-initfrac)*fq      # susceptible compliant

tlo = 0
thi = dq

param = (mu, beta, knn, knq, kqn, kqq)    # flow parameters

t1, y1 = solve_flow(param)

In1 = y1[:,0]
Sn1 = y1[:,1]
Rn1 = fnq - In1 - Sn1
Iq1 = y1[:,2]
Sq1 = y1[:,3]
Rq1 = fq - Iq1 - Sq1

# Lift the quarantine: Compliant group continues social distancing
knn = 50      # Adjusted coupling parameters
kqq = 5
knq = 20
kqn = 15

fin1 = len(t1)
In0 = In1[fin1-1]
Sn0 = Sn1[fin1-1]
Iq0 = Iq1[fin1-1]
Sq0 = Sq1[fin1-1]

tlo = fin1
thi = fin1 + 365-dq

param = (mu, beta, knn, knq, kqn, kqq)

t2, y2 = solve_flow(param)

In2 = y2[:,0]
Sn2 = y2[:,1]
Rn2 = fnq - In2 - Sn2
Iq2 = y2[:,2]
Sq2 = y2[:,3]
Rq2 = fq - Iq2 - Sq2

fin2 = len(t2)
t = np.zeros(shape=(fin1+fin2,))
In = np.zeros(shape=(fin1+fin2,))
Sn = np.zeros(shape=(fin1+fin2,))
Rn = np.zeros(shape=(fin1+fin2,))
Iq = np.zeros(shape=(fin1+fin2,))
Sq = np.zeros(shape=(fin1+fin2,))
Rq = np.zeros(shape=(fin1+fin2,))

t[0:fin1] = t1
In[0:fin1] = In1
Sn[0:fin1] = Sn1
Rn[0:fin1] = Rn1
Iq[0:fin1] = Iq1
Sq[0:fin1] = Sq1
Rq[0:fin1] = Rq1


t[fin1:fin1+fin2] = t2
In[fin1:fin1+fin2] = In2
Sn[fin1:fin1+fin2] = Sn2
Rn[fin1:fin1+fin2] = Rn2
Iq[fin1:fin1+fin2] = Iq2
Sq[fin1:fin1+fin2] = Sq2
Rq[fin1:fin1+fin2] = Rq2

plt.figure(1)
lines = plt.semilogy(t,In,t,Iq,t,(In+Iq))
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant','Total'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Infected')
plt.title('Infection Dynamics for COVID-19 in US')
plt.show()

plt.figure(2)
lines = plt.semilogy(t,Rn*P*mr,t,Rq*P*mr)
plt.ylim([0.001,1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Deaths')
plt.title('Total Deaths for COVID-19 in US')
plt.show()

D = P*mr*(Rn[fin1+fin2-1] + Rq[fin1+fin2-1])
print('Deaths = ',D)

plt.figure(3)
lines = plt.semilogy(t,In/fnq,t,Iq/fq)
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Fraction of Sub-Population')
plt.title('Population Dynamics for COVID-19 in US')
plt.show()

Trends

The obvious trend to explore is the effect of changing the quarantine period. Fig. 2 shows the results of an early release from shelter-in-place compared to pushing the release date one month later. The trends are:

  • If the lock-down is released early, the second wave can be larger than the first wave
  • If the lock-down is released early, the compliant cohort will be mostly susceptible and will have the majority of new cases
  • There are 40% more deaths when the lock-down is released early

If the lock-down is ended just after the crest, it is too early. It is necessary to wait at least several weeks after the crest has passed to avoid the second wave. There are almost 40% more deaths for the 90-day lock-down than for the 120-day lock-down. In addition, when the quarantine is stopped too early, the compliant cohort, because they are the larger fraction and are mostly susceptible, suffer a greater number of new infections than the non-compliant group who put them at risk in the first place. Their second wave would also be worse than their first wave. This would be a travesty! But by pushing the quarantine out by just one additional month, the compliant group suffers fewer total deaths than the non-compliant group. Most importantly, the second wave is substantially smaller than the first wave for both cohorts.

Fig. 2 Comparison of 90-day quarantine versus 120-day quarantine for the compliant and non-compliant cohorts of individuals. When the ban is lifted too soon, the second wave can be bigger than the first. This model assumes that 30% of the population is non-compliant and that the compliant group continues to practice social distancing.

The lesson from this simple model is simple: push the quarantine date out as far as the economy can allow! There is good news! The second wave can be managed and minimized if we don’t come out of lock-down too soon.
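To see this trend directly, here is a minimal stand-alone sketch (not the full program above, but using the same parameter values) that wraps the two-stage integration into a single function and sweeps the lock-down length dq. The absolute numbers inherit all the caveats below; only the trend with dq is meaningful.

import numpy as np
from scipy import integrate

beta, mu = 0.02, 1/5          # infection rate and decay rate, as above
fnq = 0.3; fq = 1 - fnq       # non-compliant and compliant fractions
P, mr = 330, 0.002            # US population in millions, mortality rate

def total_deaths(dq, days=365):
    """Integrate the two-compartment model: dq days of lock-down, then release."""
    def deriv(x, t, knn, knq, kqn, kqq):
        In, Sn, Iq, Sq = x
        return [-mu*In + beta*(knn*In + knq*Iq)*Sn,
                -beta*(knn*In + knq*Iq)*Sn,
                -mu*Iq + beta*(kqn*In + kqq*Iq)*Sq,
                -beta*(kqn*In + kqq*Iq)*Sq]
    x0 = [1e-4*fnq, (1-1e-4)*fnq, 1e-4*fq, (1-1e-4)*fq]
    t1 = np.linspace(0, dq, dq)
    y1 = integrate.odeint(deriv, x0, t1, args=(50, 0, 5, 0))       # lock-down couplings
    t2 = np.linspace(dq, days, days-dq)
    y2 = integrate.odeint(deriv, y1[-1], t2, args=(50, 20, 15, 5)) # post-release couplings
    Rn = fnq - y2[-1,0] - y2[-1,1]
    Rq = fq - y2[-1,2] - y2[-1,3]
    return P*mr*(Rn + Rq)     # total removed times mortality, in millions

for dq in (60, 90, 120):
    print(f'lock-down {dq:3d} days -> deaths = {total_deaths(dq):.3f} million')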

Caveats and Disclaimers

This model is purely qualitative and only has value for studying trends that depend on changing parameters. Absolute numbers are not meant to be taken too seriously. For instance, the total number of deaths in this model is about 2x larger than what we are hearing from Dr. Fauci of NIAID at this time, so this simple model overestimates fatalities. Also, it doesn’t matter whether the number of quarantine days should be 60, 90 or 120 … what matters is that an additional month makes a large difference in the total number of deaths. If someone does want to model the best possible number of quarantine days — the Goldilocks solution — then they need to get their hands on a professional epidemiological model (or an actual epidemiologist). The model presented here is not appropriate for that purpose.

Note added in postscript on April 8: Since posting the original blog on April 6, Dr. Fauci announced that as many as 90% of individuals are practicing some form of social distancing. In addition, many infections are not being reported because of lack of testing, which means that the mortality rate is lower than thought. Therefore, I have changed the mortality rate and figures with numbers that better reflect the current situation (that is changing daily), but still without any attempt to fit the numerous other parameters.

Physics in the Age of Contagion: The Bifurcation of COVID-19

We are at War! That may sound like a cliche, but more people in the United States may die over the next year from COVID-19 than US soldiers have died in all the wars ever fought in US history. It is a war against an invasion by an alien species that has no remorse and gives no quarter. In this war, one of our gravest enemies, beyond the virus, is misinformation. The Internet floods our attention with half-baked half-truths. There may even be foreign powers that see this time of crisis as an opportunity to sow fear through disinformation to divide the country.

Because of the bifurcation physics of the SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

At such times, physicists may be tapped to help the war effort. This is because physicists have unique skill sets that help us see through the distractions of details to get to the essence of the problem. Our solutions are often back-of-the-envelope, but that is their strength. We can see zeroth-order results stripped bare of all the obfuscating minutia.

One way physicists can help in this war is to shed light on how infections percolate through a population and to provide estimates on the numbers involved. Perhaps most importantly, we can highlight what actions ordinary citizens can take that best guard against the worst-case scenarios of the pandemic. The zeroth-order solutions may not say anything new that the experts don’t already know, but they may help spread the word of why such simple actions as shelter-in-place may save millions of lives.

The SIR Model of Infection

One of the simplest models for infection is the so-called SIR model that stands for Susceptible-Infected-Removed. This model is an averaged model (or a mean-field model) that disregards the fundamental network structure of human interactions and considers only averages. The dynamical flow equations are very simple

dI/dt = β<k> S I - μ I
dS/dt = -β<k> S I

where I is the infected fraction of the population, and S is the susceptible fraction of the population. The coefficient μ is the rate at which patients recover or die, <k> is the average number of “links” to others, and β is the infection probability per link per day. The total population fraction is given by the constraint

S + I + R = 1

where R is the removed population, most of whom will be recovered, but some fraction will have passed away. The number of deaths is

D = m P Rinf

where m is the mortality rate, P is the total population, and Rinf is the long-term removed fraction of the population after the infection has run its course.

The nullclines, the curves along which the time derivatives vanish, are

S = μ/(β<k>)   (from dI/dt = 0)
S = 0          (from dS/dt = 0)
I = 0          (from both dI/dt = 0 and dS/dt = 0)

Where the first nullcline intersects the third nullcline is the only fixed point of this simple model

(S*, I*) = (μ/(β<k>), 0)

The phase space of the SIR flow is shown in Fig. 1 plotted as the infected fraction as a function of the susceptible fraction. The diagonal is the set of initial conditions where R = 0. Each initial condition on the diagonal produces a dynamical trajectory. The dashed trajectory that starts at (1,0) is the trajectory for a new disease infecting a fully susceptible population. The trajectories terminate on the I = 0 axis at long times when the infection dies out. In this model, there is always a fraction of the population who never get the disease, not through unusual immunity, but through sheer luck.

Fig. 1 Phase space of the SIR model. The single fixed point has “marginal” stability, but leads to a finite fraction of the population who never are infected. The dashed trajectory is the trajectory of the infection starting with a single case. (Adapted from “Introduction to Modern Dynamics” (Oxford University Press, 2019))
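The full Python program in the section below plots the time series; as a minimal complement, this sketch draws the phase-space portrait itself, launching trajectories from the R = 0 diagonal with β<k> = 4μ so that S* = 1/4, as in Fig. 1.

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

mu, bk = 1.0, 4.0     # recovery rate and infection rate beta*<k>, so S* = mu/bk = 1/4

def deriv(x, t):
    I, S = x
    return [bk*I*S - mu*I, -bk*I*S]

t = np.linspace(0, 50, 2000)
for S0 in np.linspace(0.1, 0.999, 10):    # initial conditions along the diagonal S + I = 1
    x_t = integrate.odeint(deriv, [1 - S0, S0], t)
    plt.plot(x_t[:,1], x_t[:,0], linewidth=0.5)
plt.xlabel('Susceptible fraction S')
plt.ylabel('Infected fraction I')
plt.title('Phase space of the SIR model')
plt.show()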

The key to understanding the scale of the pandemic is the susceptible fraction at the fixed point S*. For the parameters chosen to plot Fig. 1, the value of S* is 1/4, or β<k> = 4μ. It is the high value of the infection rate β<k> relative to the decay rate of the infection μ that allows a large fraction of the population to become infected. As the infection rate gets smaller, the fixed point S* moves towards unity on the horizontal axis, and less of the population is infected.

As soon as S* exceeds unity, for the condition

β<k> < μ

the infection cannot grow exponentially and will decay away without infecting an appreciable fraction of the population. This condition represents a bifurcation in the infection dynamics. It means that if the infection rate can be reduced below the recovery rate, then the pandemic fades away. (It is important to point out that the R0 of a network model (the number of people each infected person infects) is analogous to the inverse of S*. When R0 > 1 the infection spreads, just as when S* < 1, and vice versa.)

This bifurcation condition makes the strategy for fighting the pandemic clear. The parameter μ is fixed by the virus and cannot be altered. But the infection probability per day per social link, β, can be reduced by clean hygiene:

  • Don’t shake hands
  • Wash your hands often and thoroughly
  • Don’t touch your face
  • Cover your cough or sneeze in your elbow
  • Wear disposable gloves
  • Wipe down touched surfaces with disinfectants

And the number of contacts per person, <k>, can be reduced by social distancing:

  • No large gatherings
  • Stand away from others
  • Shelter-in-place
  • Self quarantine

The big question is: can the infection rate be reduced below the recovery rate through the actions of clean hygiene and social distancing? If there is a chance that it can, then literally millions of lives can be saved. So let’s take a look at COVID-19.

The COVID-19 Pandemic

To get a handle on modeling the COVID-19 pandemic using the (very simplistic) SIR model, one key parameter is the average number of people you are connected to, represented by <k>. These are not necessarily the people in your social network, but also includes people who may touch a surface you touched earlier, or who touched a surface you later touch yourself. It also includes anyone in your proximity who has coughed or sneezed in the past few minutes. The number of people in your network is a topic of keen current interest, but is surprisingly hard to pin down. For the sake of this model, I will take the number <k> = 50 as a nominal number. This is probably too small, but it is compensated by the probability of infection given by a factor r and by the number of days that an individual is infectious.

The spread is helped when infectious people go about their normal lives infecting others. But if a fraction of the population self-quarantines, especially after they “may” have been exposed, then the effective number of infectious days per person, dinf, can be decreased. A rough equation that captures this is

dinf = fnq dill + (1 - fnq) exp(-dq/dill) dill

where fnq is the fraction of the population that does NOT self-quarantine, dill is the mean number of days a person is ill (and infectious), and dq is the number of days quarantined. This number of infectious days goes into the parameter β through

β = r dinf

where r = 0.0002 infections per link per day², which is a very rough estimate of the coefficient for COVID-19.

It is clear why shelter-in-place can be so effective, especially if the number of days quarantined is equal to the number of days a person is ill. The infection could literally die out if enough people self-quarantine, pushing the critical value S* above the bifurcation threshold. However, it is much more likely that large fractions of people will continue to move about. A simulation of the “wave” that passes through the US is shown in Fig. 2 (see the Python code in the section below for parameters). In this example, 60% of the population does NOT self-quarantine. The wave peaks approximately 150 days after the establishment of community spread.

Fig. 2 Population dynamics for the US spread of COVID-19. The fraction that is infected represents a “wave” that passes through a community. In this simulation fnq = 60%. The total US dead after the wave has passed is roughly 2 Million in this simulation.

In addition to shelter-in-place, social distancing can have a strong effect on the disease spread. Fig. 3 shows the number of US deaths as a function of the fraction of the population who do NOT self-quarantine, for a series of average connection numbers <k>. The bifurcation effect is clear in this graph. For instance, if <k> = 50 is a nominal value, then if 85% of the population shelters in place for 14 days, the disease falls below threshold and only a small number of deaths occur. But if that connection number can be dropped even to <k> = 40, then only 60% would need to shelter in place to avoid the pandemic. By contrast, if 80% of the people don’t self-quarantine, and if <k> = 40, then there could be 2 Million deaths in the US by the time the disease has run its course.
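This bifurcation criterion can be checked in a few lines. The sketch below, assuming the nominal values r = 0.0002 and dill = dq = 14 days used in the code in the next section, evaluates β<k>/μ for the three scenarios just described.

import numpy as np

r, dill, dq = 0.0002, 14, 14    # infection coefficient and day counts, as in the code below
mu = 1/dill                      # recovery rate

for fnq, k in ((0.15, 50), (0.4, 40), (0.8, 40)):
    dinf = fnq*dill + (1-fnq)*np.exp(-dq/dill)*dill   # effective infectious days
    ratio = r*k*dinf/mu                                # beta<k>/mu: dies out if < 1
    print(f'fnq = {fnq:.2f}, <k> = {k}: beta<k>/mu = {ratio:.2f}',
          'dies out' if ratio < 1 else 'spreads')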

Because of the bifurcation physics of this SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

Fig. 3 Bifurcation plot of the number of US deaths as a function of the fraction of the population who do NOT shelter-in-place for different average links per person. At 20 links per person, the contagion could be contained. However, at 60 links per person, nearly 90% of the population would need to quarantine for at least 14 days to stop the spread.

There has been a lot said about “flattening the curve”, which is shown in Fig. 4. There are two ways that flattening the curve saves overall lives: 1) it keeps the numbers below the threshold capacity of hospitals; and 2) it decreases the total number infected and hence decreases the total dead. When the number of critical patients exceeds hospital capacity, the mortality rate increases. This is being seen in Italy where the hospitals have been overwhelmed and the mortality rate has risen from a baseline of 1% or 2% to as large as 8%. Flattening the curve is achieved by sheltering in place, personal hygiene and other forms of social distancing. The figure shows a family of curves for different fractions of the total population who shelter in place for 14 days. If more than 70% of the population shelters in place for 14 days, then the curve not only flattens … it disappears!

Fig. 4 Flattening the curve for a range of fractions of the population that shelters in place for 14 days. (See Python code for parameters.)

SIR Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat March 21 2020
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('SIR.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y,tspan,mu,betap):
        x, y = x_y
        
        return [-mu*x + betap*x*y,-betap*x*y]
    
    x0 = [del1, del2]
    
    # Solve for the trajectories
    t = np.linspace(0, int(tlim), int(250*tlim))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

   
    return t, x_t


r = 0.0002    # 0.0002
k = 50        # connections  50
dill = 14     # days ill 14
dpq = 14      # days shelter in place 14
fnq = 0.6     # fraction NOT sheltering in place
mr0 = 0.01    # mortality rate
mr1 = 0.03     # extra mortality rate if exceeding hospital capacity
P = 330       # population of US in Millions
HC = 0.003    # hospital capacity

dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill   # effective infectious days per person

betap = r*k*dinf   # infection rate beta*<k>
mu = 1/dill        # recovery rate

print('beta = ',betap)
print('dinf = ',dinf)
print('beta/mu = ',betap/mu)
          
del1 = .001         # infected
del2 = 1-del1       # susceptible

tlim = np.log(P*1e6/del1)/betap + 50/betap

param = (mu, betap)    # flow parameters

t, y = solve_flow(param)
I = y[:,0]
S = y[:,1]
R = 1 - I - S

plt.figure(1)
lines = plt.semilogy(t,I,t,S,t,R)
plt.ylim([0.001,1])
plt.xlim([0,tlim])
plt.legend(('Infected','Susceptible','Removed'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Fraction of Population')
plt.title('Population Dynamics for COVID-19 in US')
plt.show()

mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)   # excess mortality only when critical cases exceed hospital capacity
Dead = mr*P*R[R.size-1]
print('US Dead = ',Dead)

D = np.zeros(shape=(100,))
x = np.zeros(shape=(100,))
for kloop in range(0,5):
    for floop in range(0,100):
        
        fnq = floop/100
        
        dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill;
        
        k = 20 + kloop*10
        betap = r*k*dinf
        
        tlim = np.log(P*1e6/del1)/betap + 50/betap

        param = (mu, betap)    # flow parameters

        t, y = solve_flow(param)       
        I = y[:,0]
        S = y[:,1]
        R = 1 - I - S
        
        mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)

        D[floop] = mr*P*R[R.size-1]
        x[floop] = fnq
        
    plt.figure(2)
    lines2 = plt.plot(x,D)
    plt.setp(lines2, linewidth=0.5)

plt.ylabel('US Million Deaths')
plt.xlabel('Fraction NOT Quarantining')
plt.title('Quarantine and Distancing')        
plt.legend(('20','30','40','50','60'))
plt.show()    


label = np.zeros(shape=(8,))
for floop in range(0,8):
    
    fq = floop/10.0
    
    dinf = (1-fq)*dill + fq*np.exp(-dpq/dill)*dill;
    
    k = 50
    betap = r*k*dinf
    
    tlim = np.log(P*1e6/del1)/betap + 50/betap

    param = (mu, betap)    # flow parameters

    t, y = solve_flow(param)       
    I = y[:,0]
    S = y[:,1]
    R = 1 - I - S
    
    plt.figure(3)
    lines2 = plt.plot(t,I*P)
    plt.setp(lines2, linewidth=0.5)
    label[floop]=fq

plt.legend(label)
plt.ylabel('US Millions Infected')
plt.xlabel('Days')
plt.title('Flattening the Curve')       

You can run this Python code yourself and explore the effects of changing the parameters. For instance, the mortality rate is modeled to increase when the number of critical patients exceeds the number of hospital beds. This coefficient is not well known and hence can be explored numerically. Neither the infection rate r nor the average number of connections per person is well known, either. The effect of longer quarantines can also be tested relative to the fraction who do not quarantine at all. Because of the bifurcation physics of the disease model, large changes in dynamics can occur for small changes in parameters when the dynamics are near the bifurcation threshold.

Caveats and Disclaimers

This SIR model of COVID-19 is an extremely rough tool that should not be taken too literally. It can be used to explore ideas about the general effect of days quarantined, or changes in the number of social contacts, but should not be confused with the professional models used by epidemiologists. In particular, this mean-field SIR model completely ignores the discrete network character of person-to-person spread. It also homogenizes the entire country, where it is blatantly obvious that the dynamics inside New York City are very different than the dynamics in rural Indiana. And the elimination of the epidemic, so that it would not come back, would require strict compliance for people to be tested (assuming there are enough test kits) and infected individuals to be isolated after the wave has passed.

The Physics of Life, the Universe and Everything (In One Easy Equation)

Everyone knows that the answer to life, the universe and everything is “42”.  But if it’s the question that you want, then you can either grab a towel and a copy of The Hitchhiker’s Guide to the Galaxy, or you can go into physics and begin the search for yourself.

What you may find is that the question boils down to an extremely simple formula

dx/dt = f(x)

where x and f(x) are vectors. This innocuous-looking equation carries such riddles, such surprises, such unintuitive behavior that it can become the object of study for life.  This equation is called a vector flow equation, and it can be used to capture the essential physics of economies, neurons, ecosystems, networks, and even orbits of photons around black holes.  This equation is to modern dynamics what F = ma was to classical mechanics.  It is the starting point for understanding complex systems.

The Phase Space of Everything

The apparent simplicity of the “flow equation” masks the complexity it contains.  It is a vector equation because each “dimension” is a variable of a complex system.  Many systems of interest may have only a few variables, but ecosystems and economies and social networks may have hundreds or thousands of variables.  Expressed in component format, the flow equation is

dx^a/dt = f^a(x^1, x^2, …, x^N)

where the superscript spans the number of variables.  But even this masks all that can happen with such an equation. Each of the functions f^a can be entirely different from each other, and can be any type of function, whether polynomial, rational, algebraic, transcendental or composite, although they must be single-valued.  They are generally nonlinear, and the limitless ways that functions can be nonlinear is where the richness of the flow equation comes from.

The vector flow equation is an ordinary differential equation (ODE) that can be solved for specific trajectories as initial value problems.  A single set of initial conditions defines a unique trajectory.  For instance, the trajectory for a 4-dimensional example is described as the column vector

x(t) = [x^1(t), x^2(t), x^3(t), x^4(t)]^T

which is the single-parameter position vector to a point in phase space, also called state space.  The point sweeps through successive configurations as a function of its single parameter—time.  This trajectory is also called an orbit.  In classical mechanics, the focus has tended to be on the behavior of specific orbits that arise from a specific set of initial conditions.  This is the classic “rock thrown from a cliff” problem of introductory physics courses.  However, in modern dynamics, the focus shifts away from individual trajectories to encompass the set of all possible trajectories.
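As a concrete illustration of solving a flow equation as an initial value problem, here is a minimal sketch (a damped pendulum, chosen only for familiarity) that integrates a two-dimensional flow from three different initial conditions and plots the resulting trajectories in phase space.

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

# two-dimensional flow: x1 = angle, x2 = angular velocity of a damped pendulum
def f(x, t, gamma=0.2):
    return [x[1], -np.sin(x[0]) - gamma*x[1]]

t = np.linspace(0, 40, 2000)
for x1_0 in (0.5, 1.5, 2.5):     # each initial condition defines a unique trajectory
    x_t = integrate.odeint(f, [x1_0, 0.0], t)
    plt.plot(x_t[:,0], x_t[:,1], linewidth=0.5)
plt.xlabel('x1 (angle)')
plt.ylabel('x2 (angular velocity)')
plt.title('Trajectories of a simple two-dimensional flow')
plt.show()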

Why is Modern Dynamics part of Physics?

If finding the solutions to the “x-dot equals f” vector flow equation is all there is to do, then this would just be a math problem—the solution of ODE’s.  There are plenty of gems for mathematicians to look for, and there is an entire field of study in mathematics called “dynamical systems”, but this would not be “physics”.  Physics as a profession is separate and distinct from mathematics, although the two are sometimes confused.  Physics uses mathematics as its language and as its toolbox, but physics is not mathematics.  Physics is done best when it is done qualitatively—this means with scribbles done on napkins in restaurants or on the back of envelopes while waiting in line. Physics is about recognizing relationships and patterns. Physics is about identifying the limits to scaling properties where the physics changes when scales change. Physics is about the mapping of the simplest possible mathematics onto behavior in the physical world, and recognizing when the simplest possible mathematics is a universal that applies broadly to diverse systems that seem different, but that share the same underlying principles.

So, granted solving ODE’s is not physics, there is still a tremendous amount of good physics that can be done by solving ODE’s. ODE solvers become the modern physicist’s experimental workbench, providing data output from numerical experiments that can test the dependence on parameters in ways that real-world experiments might not be able to access. Physical intuition can be built based on such simulations as the engaged physicist begins to “understand” how the system behaves, able to explain what will happen as the values of parameters are changed.

In the following sections, three examples of modern dynamics are introduced with a preliminary study, including Python code. These examples are: galactic dynamics, synchronized networks and ecosystems. Despite their very different natures, their descriptions using dynamical flows share common features and illustrate the beauty and depth of behavior that can be explored with simple equations.

Galactic Dynamics

One example of the power and beauty of the vector flow equation and its set of all solutions in phase space is called the Hénon-Heiles model of the motion of a star within a galaxy.  Of course, this is a terribly complicated problem that involves tens of billions of stars, but if you average over the gravitational potential of all the other stars, and throw in a couple of conservation laws, the resulting potential can look surprisingly simple.  The motion in the plane of this galactic potential takes two configuration coordinates (x, y) with two associated momenta (px, py) for a total of four dimensions.  The flow equations in four-dimensional phase space are simply

Fig. 1 The 4-dimensional phase space flow equations of a star in a galaxy. The terms in light blue are a simple two-dimensional harmonic oscillator. The terms in magenta are the nonlinear contributions from the stars in the galaxy.

where the terms in the light blue box describe a two-dimensional simple harmonic oscillator (SHO), which is a linear oscillator, modified by the terms in the magenta box that represent the nonlinear galactic potential.  The orbits of this Hamiltonian system are chaotic, and because there is no dissipation in the model, a single orbit will continue forever within certain ranges of phase space governed by energy conservation, but never quite repeating.

Fig. 2 Two-dimensional Poincaré section of sets of trajectories in four-dimensional phase space for the Hénon-Heiles galactic dynamics model. The perturbation parameter is ε = 0.3411 and the energy E = 1.

Hamilton4D.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Hamilton4D.py
Created on Wed Apr 18 06:03:32 2018

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from scipy import integrate
from matplotlib import pyplot as plt
from matplotlib import cm
import time
import os

plt.close('all')

# model_case 1 = Heiles
# model_case 2 = Crescent
print(' ')
print('Hamilton4D.py')
print('Case: 1 = Heiles')
print('Case: 2 = Crescent')
model_case = int(input('Enter the Model Case (1-2)'))

if model_case == 1:
    E = 1       # Heiles: 1, 0.3411   Crescent: 0.05, 1
    epsE = 0.3411   # 3411
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -x - epsE*(2*x*y)
        d = -y - epsE*(x**2 - y**2)
        return[a,b,c,d]
else:
    E = .1       #   Crescent: 0.1, 1
    epsE = 1   
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -(epsE*(y-2*x**2)*(-4*x) + x)
        d = -(y-epsE*2*x**2)
        return[a,b,c,d]
    
prms = np.sqrt(E)
pmax = np.sqrt(2*E)    
            
# Potential Function
if model_case == 1:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(x**2*y - 0.33333*y**3) 
else:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(2*x**4 - 2*x**2*y) 

fig = plt.figure(1)
contr = plt.contourf(V,100, cmap=cm.coolwarm, vmin = 0, vmax = 10)
fig.colorbar(contr, shrink=0.5, aspect=5)    
fig = plt.show()

repnum = 250
mulnum = 64/repnum

np.random.seed(1)
for reploop  in range(repnum):
    px1 = 2*(np.random.random((1))-0.499)*pmax
    py1 = np.sign(np.random.random((1))-0.499)*np.real(np.sqrt(2*(E-px1**2/2)))
    xp1 = 0
    yp1 = 0
    
    x_y_z_w0 = [xp1, yp1, px1, py1]
    
    tspan = np.linspace(1,1000,10000)
    x_t = integrate.odeint(flow_deriv, x_y_z_w0, tspan)
    siztmp = np.shape(x_t)
    siz = siztmp[0]

    if reploop % 50 == 0:
        plt.figure(2)
        lines = plt.plot(x_t[:,0],x_t[:,1])
        plt.setp(lines, linewidth=0.5)
        plt.show()
        time.sleep(0.1)
        #os.system("pause")

    y1 = x_t[:,0]
    y2 = x_t[:,1]
    y3 = x_t[:,2]
    y4 = x_t[:,3]
    
    py = np.zeros(shape=(2*repnum,))
    yvar = np.zeros(shape=(2*repnum,))
    cnt = -1
    last = y1[1]
    for loop in range(2,siz):
        if last < 0 and y1[loop] > 0:
            cnt = cnt+1
            del1 = -y1[loop-1]/(y1[loop] - y1[loop-1])
            py[cnt] = y4[loop-1] + del1*(y4[loop]-y4[loop-1])
            yvar[cnt] = y2[loop-1] + del1*(y2[loop]-y2[loop-1])
            last = y1[loop]
        else:
            last = y1[loop]
 
    plt.figure(3)
    lines = plt.plot(yvar,py,'o',ms=1)
    plt.show()
    
if model_case == 1:
    plt.savefig('Heiles')
else:
    plt.savefig('Crescent')
    

Networks, Synchronization and Emergence

A central paradigm of nonlinear science is the emergence of patterns and organized behavior from seemingly random interactions among underlying constituents.  Emergent phenomena are among the most awe inspiring topics in science.  Crystals are emergent, forming slowly from solutions of reagents.  Life is emergent, arising out of the chaotic soup of organic molecules on Earth (or on some distant planet).  Intelligence is emergent, and so is consciousness, arising from the interactions among billions of neurons.  Ecosystems are emergent, based on competition and symbiosis among species.  Economies are emergent, based on the transfer of goods and money spanning scales from the local bodega to the global economy.

One of the common underlying properties of emergence is the existence of networks of interactions.  Networks and network science are topics of great current interest driven by the rise of the World Wide Web and social networks.  But networks are ubiquitous and have long been the topic of research into complex and nonlinear systems.  Networks provide a scaffold for understanding many emergent systems.  They allow one to think of isolated elements, like molecules or neurons, that interact with many others, like the neighbors in a crystal or distant synaptic connections.

From the point of view of modern dynamics, the state of a node can be a variable or a “dimension” and the interactions among links define the functions of the vector flow equation.  Emergence is then something that “emerges” from the dynamical flow as many elements interact through complex networks to produce simple or emergent patterns.

Synchronization is a form of emergence that happens when lots of independent oscillators, each vibrating at their own personal frequency, are coupled together to push and pull on each other, entraining all the individual frequencies into one common global oscillation of the entire system.  Synchronization plays an important role in the solar system, explaining why the Moon always shows one face to the Earth, why Saturn’s rings have gaps, and why asteroids are mainly kept away from colliding with the Earth.  Synchronization plays an even more important function in biology where it coordinates the beating of the heart and the functioning of the brain.

One of the most dramatic examples of synchronization is the Kuramoto synchronization phase transition. This occurs when a large set of individual oscillators with differing natural frequencies interact with each other through a weak nonlinear coupling.  For small coupling, all the individual nodes oscillate at their own frequency.  But as the coupling increases, there is a sudden coalescence of all the frequencies into a single common frequency.  This mechanical phase transition, called the Kuramoto transition, has many of the properties of a thermodynamic phase transition, including a solution that utilizes mean field theory.

Fig. 3 The Kuramoto model for the nonlinear coupling of N simple phase oscillators. The term in light blue is the simple phase oscillator. The term in magenta is the global nonlinear coupling that connects each oscillator to every other.

The simulation of 20 Poincaré phase oscillators with global coupling is shown in Fig. 4 as a function of increasing coupling coefficient g. The original individual frequencies are spread randomly. The oscillators with similar frequencies are the first to synchronize, forming small clumps that then synchronize with other clumps of oscillators, until all oscillators are entrained to a single compromise frequency. The Kuramoto phase transition is not sharp in this case because the value of N = 20 is too small. If the simulation is run for 200 oscillators, there is a sudden transition from unsynchronized to synchronized oscillation at a threshold value of g.

Fig. 4 The Kuramoto model for 20 Poincaré oscillators showing the frequencies as a function of the coupling coefficient.

The Kuramoto phase transition is one of the most important fundamental examples of modern dynamics because it illustrates many facets of nonlinear dynamics in a very simple way. It highlights the importance of nonlinearity, the simplification of phase oscillators, the use of mean field theory, the underlying structure of the network, and the example of a mechanical analog to a thermodynamic phase transition. It also has analytical solutions because of its simplicity, while still capturing the intrinsic complexity of nonlinear systems.
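As a compact companion to the full network program below, here is a sketch of the globally coupled Kuramoto model that measures synchronization with the standard Kuramoto order parameter, the magnitude of the mean phasor |⟨exp(iθ)⟩|, which grows toward 1 as the coupling g crosses the transition. The frequency spread and random seed are arbitrary choices, and with N = 20 the transition is blurred, as noted above.

import numpy as np
from scipy import integrate

N = 20
rng = np.random.default_rng(1)
omega = rng.uniform(-0.1, 0.1, N)        # spread of natural frequencies

def kuramoto(theta, t, g):
    # each phase is pulled by every other phase through the sine coupling
    return omega + (g/N)*np.sum(np.sin(theta[None,:] - theta[:,None]), axis=1)

t = np.linspace(0, 500, 5000)
for g in (0.0, 0.05, 0.2):
    th = integrate.odeint(kuramoto, rng.uniform(0, 2*np.pi, N), t, args=(g,))
    r = np.abs(np.mean(np.exp(1j*th[-1000:]), axis=1)).mean()   # order parameter
    print(f'g = {g:.2f}: order parameter = {r:.2f}')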

Kuramoto.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat May 11 08:56:41 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

# https://www.python-course.eu/networkx.php
# https://networkx.github.io/documentation/stable/tutorial.html
# https://networkx.github.io/documentation/stable/reference/functions.html

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import networkx as nx
import time

# Linear fit helper returning slope and intercept
# (stands in for the author's UserFunction.linfit)
def linfit(x, y):
    m, b = np.polyfit(x, y, 1)
    return m, b

tstart = time.time()

plt.close('all')

Nfac = 20   # 25
N = 20      # 50
width = 0.2

# function: omegout, yout = coupleN(G)
def coupleN(G):

    # function: yd = flow_deriv(x_y)
    def flow_deriv(y,t0):
                
        yp = np.zeros(shape=(N,))
        for omloop  in range(N):
            temp = omega[omloop]
            linksz = G.nodes[omloop]['numlink']   # G.nodes replaces the older G.node indexing
            for cloop in range(linksz):
                cindex = G.nodes[omloop]['link'][cloop]
                g = G.nodes[omloop]['coupling'][cloop]

                temp = temp + g*np.sin(y[cindex]-y[omloop])
            
            yp[omloop] = temp
        
        yd = np.zeros(shape=(N,))
        for omloop in range(N):
            yd[omloop] = yp[omloop]
        
        return yd
    # end of function flow_deriv(x_y)

    mnomega = 1.0
    
    for nodeloop in range(N):
        omega[nodeloop] = G.nodes[nodeloop]['element']
    
    x_y_z = omega    
    
    # Settle-down Solve for the trajectories
    tsettle = 100
    t = np.linspace(0, tsettle, tsettle)
    x_t = integrate.odeint(flow_deriv, x_y_z, t)
    x0 = x_t[tsettle-1,0:N]
    
    t = np.linspace(1,1000,1000)
    y = integrate.odeint(flow_deriv, x0, t)
    siztmp = np.shape(y)
    sy = siztmp[0]
        
    # Fit the frequency
    m = np.zeros(shape = (N,))
    w = np.zeros(shape = (N,))
    mtmp = np.zeros(shape=(4,))
    btmp = np.zeros(shape=(4,))
    for omloop in range(N):
        
        if np.remainder(sy,4) == 0:
            mtmp[0],btmp[0] = linfit(t[0:sy//2],y[0:sy//2,omloop])
            mtmp[1],btmp[1] = linfit(t[sy//2+1:sy],y[sy//2+1:sy,omloop])
            mtmp[2],btmp[2] = linfit(t[sy//4+1:3*sy//4],y[sy//4+1:3*sy//4,omloop])
            mtmp[3],btmp[3] = linfit(t,y[:,omloop])
        else:
            sytmp = 4*(sy//4)   # truncate to an integer multiple of 4 for slicing
            mtmp[0],btmp[0] = linfit(t[0:sytmp//2],y[0:sytmp//2,omloop])
            mtmp[1],btmp[1] = linfit(t[sytmp//2+1:sytmp],y[sytmp//2+1:sytmp,omloop])
            mtmp[2],btmp[2] = linfit(t[sytmp//4+1:3*sytmp//4],y[sytmp//4+1:3*sytmp//4,omloop])
            mtmp[3],btmp[3] = linfit(t[0:sytmp],y[0:sytmp,omloop])

        
        #m[omloop] = np.median(mtmp)
        m[omloop] = np.mean(mtmp)
        
        w[omloop] = mnomega + m[omloop]
     
    omegout = m
    yout = y
    
    return omegout, yout
    # end of function: omegout, yout = coupleN(G)



Nlink = N*(N-1)//2      
omega = np.zeros(shape=(N,))
omegatemp = width*(np.random.rand(N)-1)
meanomega = np.mean(omegatemp)
omega = omegatemp - meanomega
sto = np.std(omega)

nodecouple = nx.complete_graph(N)

lnk = np.zeros(shape = (N,), dtype=int)
for loop in range(N):
    nodecouple.nodes[loop]['element'] = omega[loop]
    nodecouple.nodes[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.nodes[loop]['numlink'] = np.size(list(nx.neighbors(nodecouple,loop)))
    lnk[loop] = np.size(list(nx.neighbors(nodecouple,loop)))

avgdegree = np.mean(lnk)
mnomega = 1

facval = np.zeros(shape = (Nfac,))
yy = np.zeros(shape=(Nfac,N))
xx = np.zeros(shape=(Nfac,))
for facloop in range(Nfac):
    print(facloop)
    facoef = 0.2

    fac = facoef*(16*facloop/(Nfac))*(1/(N-1))*sto/mnomega
    for nodeloop in range(N):
        nodecouple.nodes[nodeloop]['coupling'] = np.zeros(shape=(lnk[nodeloop],))
        for linkloop in range (lnk[nodeloop]):
            nodecouple.nodes[nodeloop]['coupling'][linkloop] = fac

    facval[facloop] = fac*avgdegree
    
    omegout, yout = coupleN(nodecouple)                           # Here is the subfunction call for the flow

    for omloop in range(N):
        yy[facloop,omloop] = omegout[omloop]

    xx[facloop] = facval[facloop]

plt.figure(1)
lines = plt.plot(xx,yy)
plt.setp(lines, linewidth=0.5)
plt.show()

elapsed_time = time.time() - tstart
print('elapsed time = ',format(elapsed_time,'.2f'),'secs')

The Web of Life

Ecosystems are among the most complex systems on Earth.  The complex interactions among hundreds or thousands of species may lead to steady homeostasis in some cases, to growth and collapse in other cases, and to oscillations or chaos in yet others.  But the definition of species can be broad and abstract, referring to businesses and markets in economic ecosystems, or to cliques and acquaintances in social ecosystems, among many other examples.  These systems are governed by the laws of evolutionary dynamics that include fitness and survival as well as adaptation.

The dimensionality of the dynamical spaces for these systems extends to hundreds or thousands of dimensions—far too complex to visualize when thinking in four dimensions is already challenging.  Yet there are shared principles and common behaviors that emerge even here.  Many of these can be illustrated in a simple three-dimensional system that is represented by a triangular simplex that can be easily visualized, and then generalized back to ultra-high dimensions once they are understood.

A simplex is a closed (N-1)-dimensional geometric figure that describes a zero-sum game (game theory is an integral part of evolutionary dynamics) among N competing species.  For instance, a two-simplex is a triangle that captures the dynamics among three species.  Each vertex of the triangle represents the situation when the entire ecosystem is composed of a single species.  Anywhere inside the triangle represents the situation when all three species are present and interacting.

A classic model of interacting species is the replicator equation. It allows for a fitness-based proliferation and for trade-offs among the individual species. The replicator dynamics equations are shown in Fig. 5.

Fig. 5 Replicator dynamics has a surprisingly simple form, but with surprisingly complicated behavior. The key elements are the fitness and the payoff matrix. The fitness relates to how likely the species will survive. The payoff matrix describes how one species gains at the loss of another (although symbiotic relationships also occur).

The population dynamics on the 2D simplex are shown in Fig. 6 for several different pay-off matrices. The matrix values are shown in color and help interpret the trajectories. For instance, the simplex on the upper-right shows a fixed-point center. This reflects the antisymmetric character of the pay-off matrix around the diagonal. The stable spiral on the lower-left has a nearly antisymmetric pay-off matrix, but with unequal off-diagonal magnitudes. The other two cases show central saddle points with stable fixed points on the boundary. A very large variety of behaviors is possible for this very simple system. The Python program is shown in Trirep.py.

Fig. 6 Payoff matrix and population simplex for four random cases: Upper left is an unstable saddle. Upper right is a center. Lower left is a stable spiral. Lower right is a marginal case.

Trirep.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
trirep.py
Created on Thu May  9 16:23:30 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def tripartite(x,y,z):

    sm = x + y + z
    xp = x/sm
    yp = y/sm
    
    f = np.sqrt(3)/2
    
    y0 = f*xp
    x0 = -0.5*xp - yp + 1;
    
    plt.figure(2)
    lines = plt.plot(x0,y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1],[0, 0],'k',linewidth=1)
    plt.plot([0, 0.5],[0, f],'k',linewidth=1)
    plt.plot([1, 0.5],[0, f],'k',linewidth=1)
    plt.show()
    

def solve_flow(y,tspan):
    def flow_deriv(y, t0):
    #"""Compute the time-derivative ."""
    
        f = np.zeros(shape=(N,))
        for iloop in range(N):
            ftemp = 0
            for jloop in range(N):
                ftemp = ftemp + A[iloop,jloop]*y[jloop]
            f[iloop] = ftemp
        
        phitemp = phi0          # Can adjust this from 0 to 1 to stabilize (but Nth population is no longer independent)
        for loop in range(N):
            phitemp = phitemp + f[loop]*y[loop]
        phi = phitemp
        
        yd = np.zeros(shape=(N,))
        for loop in range(N-1):
            yd[loop] = y[loop]*(f[loop] - phi);
        
        if np.abs(phi0) < 0.01:             # average fitness maintained at zero
            yd[N-1] = y[N-1]*(f[N-1]-phi);
        else:                                     # non-zero average fitness
            ydtemp = 0
            for loop in range(N-1):
                ydtemp = ydtemp - yd[loop]
            yd[N-1] = ydtemp
       
        return yd

    # Solve for the trajectories
    t = np.linspace(0, tspan, 701)
    x_t = integrate.odeint(flow_deriv,y,t)
    return t, x_t

# model_case 1 = zero diagonal
# model_case 2 = zero trace
# model_case 3 = asymmetric (zero trace)
print(' ')
print('trirep.py')
print('Case: 1 = antisymm zero diagonal')
print('Case: 2 = antisymm zero trace')
print('Case: 3 = random')
model_case = int(input('Enter the Model Case (1-3)'))

N = 3
asymm = 3      # 1 = zero diag (replicator eqn)   2 = zero trace (autocatylitic model)  3 = random (but zero trace)
phi0 = 0.001            # average fitness (positive number) damps oscillations
T = 100;


if model_case == 1:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
    A = Atemp       # antisymmetric payoff matrix with zero diagonal

elif model_case == 2:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
        Atemp[yloop,yloop] = 2*(0.5 - np.random.random(1))
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N
        
else:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
        
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N

plt.figure(3)
im = plt.matshow(A,3,cmap=plt.cm.get_cmap('seismic'))  # hsv, seismic, bwr
cbar = im.figure.colorbar(im)

M = 20
delt = 1/M
ep = 0.01;

tempx = np.zeros(shape = (3,))
for xloop in range(M):
    tempx[0] = delt*(xloop)+ep;
    for yloop in range(M-xloop):
        tempx[1] = delt*yloop+ep
        tempx[2] = 1 - tempx[0] - tempx[1]
        
        x0 = tempx/np.sum(tempx);          # initial populations
        
        tspan = 70
        t, x_t = solve_flow(x0,tspan)
        
        y1 = x_t[:,0]
        y2 = x_t[:,1]
        y3 = x_t[:,2]
        
        plt.figure(1)
        lines = plt.plot(t,y1,t,y2,t,y3)
        plt.setp(lines, linewidth=0.5)
        plt.show()
        plt.ylabel('X Position')
        plt.xlabel('Time')

        tripartite(y1,y2,y3)

Topics in Modern Dynamics

These three examples are just the tip of the iceberg. The topics in modern dynamics are almost numberless. Any system that changes in time is a potential object of study in modern dynamics. Here is a list of a few topics that spring to mind.

Bibliography

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019) (The physics and the derivations of the equations for the examples in this blog can be found here.)

Publication Date for the Second Edition: November 18, 2019

D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018) (The historical origins of the examples in this blog can be found here.)

How Number Theory Protects You from the Chaos of the Cosmos

We are exceedingly fortunate that the Earth lies in the Goldilocks zone.  This zone is the range of orbital radii of a planet around its sun for which water can exist in a liquid state.  Water is the universal solvent, and it may be a prerequisite for the evolution of life.  If we were too close to the sun, water would evaporate as steam.  And if we were too far, then it would be locked in perpetual ice.  As it is, the Earth has had wild swings in its surface temperature.  There was once a time, more than 650 million years ago, when the entire Earth’s surface froze over.  Fortunately, the liquid oceans remained liquid, and life that already existed on Earth was able to persist long enough to get to the Cambrian explosion.  Conversely, Venus may once have had liquid oceans and maybe even nascent life, but too much carbon dioxide turned the planet into an oven and boiled away its water (a fate that may await our own Earth if we aren’t careful).  What has saved us so far is the stability of our orbit, our steady distance from the Sun that keeps our water liquid and life flourishing.  Yet it did not have to be this way.

The regions of regular motion associated with irrational numbers act as if they were a barrier, restricting the range of chaotic orbits and protecting other nearby orbits from the chaos.

Our solar system is a many-body problem.  It consists of three large gravitating bodies (Sun, Jupiter, Saturn) and several minor ones (such as Earth).   Jupiter does influence our orbit, and if it were only a few times more massive than it actually is, then our orbit would become chaotic, varying in distance from the sun in unpredictable ways.  And if Jupiter were only about 20 times bigger than it actually is, there is a possibility that it would perturb the Earth’s orbit so strongly that it could eject the Earth from the solar system entirely, sending us flying through interstellar space, where we would slowly cool until we became a permanent ice ball.  What can protect us from this terrifying fate?  What keeps our orbit stable despite the fact that we inhabit a many-body solar system?  The answer is number theory!

The Most Irrational Number

What is the most irrational number you can think of? 

Is it: pi = 3.1415926535897932384626433 ? 

Or Euler’s constant: e = 2.7182818284590452353602874 ?

How about: sqrt(3) = 1.73205080756887729352744634 ?

These are all perfectly good irrational numbers.  But how do you choose the “most irrational” number?  The answer is fairly simple.  The most irrational number is the one that is least well approximated by a ratio of integers.  For instance, it is possible to get close to pi through the ratio 22/7 = 3.1428 which differs from pi by only 4 parts in ten thousand.  Or Euler’s constant 87/32 = 2.7188 differs from e by only 2 parts in ten thousand.  Yet 87 and 32 are much bigger than 22 and 7, so it may be said that e is more irrational than pi, because it takes ratios of larger integers to get a good approximation.  So is there a “most irrational” number?  The answer is yes.  The Golden Ratio.

The Golden ratio can be defined in many ways, but its most common expression is given by

$$\varphi = \frac{1 + \sqrt{5}}{2} = 1.6180339887\ldots$$

It is the hardest number to approximate with a ratio of small integers.  For instance, to get a number that is as close as one part in ten thousand to the golden mean takes the ratio 89/55.  This result may seem obscure, but there is a systematic way to find the ratios of integers that approximate an irrational number.  These ratios are known as convergents, and they are generated from continued fractions.

Continued fractions were invented by John Wallis in 1695, introduced in his book Opera Mathematica.  The continued fraction for pi is

$$\pi = 3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1 + \cfrac{1}{292 + \cdots}}}}$$

An alternate form of displaying this continued fraction is with the expression

$$\pi = [3;\ 7,\ 15,\ 1,\ 292,\ 1,\ 1,\ 1,\ 2,\ \ldots]$$

The irrational character of pi is captured by the seemingly random integers in this string. However, there can be regular structure in irrational numbers. For instance, a different continued fraction for pi is

$$\pi = 3 + \cfrac{1^2}{6 + \cfrac{3^2}{6 + \cfrac{5^2}{6 + \cdots}}}$$

that has a surprisingly simple repeating pattern.

The continued fraction for the golden mean has an especially simple repeating form

$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cdots}}}$$

or

$$\varphi = [1;\ 1,\ 1,\ 1,\ \ldots]$$

This continued fraction converges more slowly than the continued fraction of any other number. Hence, by this criterion, the Golden Ratio can be considered the most irrational number.

If the Golden Ratio is the most irrational number, how does that save us from the chaos of the cosmos? The answer to this question is KAM!

Kolmogorov, Arnold and Moser: KAM Theory

KAM is an acronym made from the first initials of three towering mathematicians of the 20th century: Andrey Kolmogorov (1903 – 1987), his student Vladimir Arnold (1937 – 2010), and Jürgen Moser (1928 – 1999).

In 1954, Kolmogorov, considered to be the greatest living mathematician at that time, was invited to give the plenary lecture at a mathematics conference. To the surprise of the conference organizers, he chose to talk on what seemed like a very mundane topic: the question of the stability of the solar system. This had been the topic which Poincaré had attempted to solve in 1890 when he first stumbled on chaotic dynamics. The question had remained open, but the general consensus was that the many-body nature of the solar system made it intrinsically unstable, even for only three bodies.

Against all expectations, Kolmogorov proposed that despite the general chaotic behavior of the three–body problem, there could be “islands of stability” which were protected from chaos, allowing some orbits to remain regular even while other nearby orbits were highly chaotic. He even outlined an approach to a proof of his conjecture, though he had not carried it through to completion.

The proof of Kolmogorov’s conjecture was supplied over the next 10 years through the work of the German mathematician Jürgen Moser and by Kolmogorov’s former student Vladimir Arnold. The proof hinged on the successive ratios of integers that approximate irrational numbers. With this work, KAM showed that some orbits are indeed protected from neighboring chaos by the irrationality of the ratio of their orbital periods.

Resonant Ratios

Let’s go back to the simple model of our solar system that consists of only three bodies: the Sun, Jupiter and Earth. The period of Jupiter’s orbit is 11.86 years, but if it were exactly 12 years, then its period would be in a 12:1 ratio with the Earth’s period. This ratio of integers is called a “resonance”, although in this case it is fairly mismatched. But if this ratio were a ratio of small integers like 5:3, then Jupiter would travel around the sun 3 times in 15 years while the Earth went around 5 times. And every 15 years, the two planets would align. This kind of resonance with ratios of small integers creates a strong gravitational perturbation that alters the orbit of the smaller planet. If the perturbation is strong enough, it could disrupt the Earth’s orbit, creating a chaotic path that might ultimately eject the Earth completely from the solar system.

What KAM discovered is that as the resonance ratio becomes a ratio of large integers, like 87:32, then the planets have a hard time aligning, and the perturbation remains small. A surprising part of this theory is that a nearby orbital ratio might be 5:2 = 2.5, which is only a little different than 87:32 ≈ 2.7. Yet the 5:2 resonance can produce strong chaos, while the 87:32 resonance is almost immune. This way, it is possible to have both chaotic orbits and regular orbits coexisting in the same dynamical system. An irrational orbital ratio protects the regular orbits from chaos. The next question is, how irrational does the orbital ratio need to be to guarantee safety?

You probably already guessed the answer to this question: it must be the Golden Ratio. Since this is the most irrational number, it cannot be approximated well by ratios of small integers, and this is indeed the case. In a three-body system, the most stable orbital ratio would be the Golden Ratio, 1.618034…. But the more general question of what is “irrational enough” for an orbit to be stable against a given perturbation is much harder to answer. This is the field of Diophantine Analysis, which addresses other questions as well, such as Fermat’s Last Theorem.

KAM Twist Map

The dynamics of three-body systems are hard to visualize directly, so there are tricks that help bring the problem into perspective. The first trick, invented by Henri Poincaré, is called the first return map (or the Poincaré section). This is a way of reducing the dimensionality of the problem by one dimension. But for three bodies, even if they are all in a plane, this still can be complicated. Another trick, called the restricted three-body problem, is to assume that there are two large masses and a third small mass. This way, the dynamics of the two-body system is unaffected by the small mass, so all we need to do is focus on the dynamics of the small body. This brings the dynamics down to two dimensions (the position and momentum of the third body), which is very convenient for visualization, but the dynamics still need solutions to differential equations. So the final trick is to replace the differential equations with simple difference equations that are solved iteratively.

A simple discrete iterative map that captures the essential behavior of the three-body problem begins with action-angle variables that are coupled through a perturbation. Variations on this model have several names: the Twist Map, the Chirikov Map and the Standard Map. The essential mapping is

$$J_{n+1} = J_n + \varepsilon \sin \theta_n$$

$$\theta_{n+1} = \theta_n + J_{n+1} \pmod{2\pi}$$

where J is an action variable (like angular momentum) paired with the angle variable θ. Initial conditions for the action and the angle are selected, and then all later values are obtained by iteration. The perturbation parameter is given by ε. If ε = 0, then all orbits are perfectly regular and circular. But as the perturbation increases, the open orbits split up into chains of closed (periodic) orbits. As the perturbation increases further, chaotic behavior emerges. The situation for ε = 0.9 is shown in the figure below. There are many regular periodic orbits as well as open orbits. Yet there are simultaneously regions of chaotic behavior. This figure shows an intermediate case where regular orbits can coexist with chaotic ones. The key is the orbital period ratio. For orbital ratios that are sufficiently irrational, the orbits remain open and regular. But for orbital ratios that are ratios of small integers, the perturbation is strong enough to drive the dynamics into chaos.

Arnold Twist Map (also known as a Chirikov map) for ε = 0.9 showing the chaos that has emerged at the hyperbolic point, but there are still open orbits that are surprisingly circular (unperturbed) despite the presence of strongly chaotic orbits nearby.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Oct. 2, 2019
@author: nolte
"""
import numpy as np
from matplotlib import pyplot as plt

plt.close('all')

eps = 0.9          # perturbation strength

np.random.seed(2)
plt.figure(1)
for eloop in range(50):

    # random initial action (radius) and angle
    rlast = np.pi*(1.5*np.random.random()-0.5)
    thlast = 2*np.pi*np.random.random()

    # iterate longer for orbits that start at larger radius
    orbit = int(200*(rlast+np.pi/2))
    rplot = np.zeros(shape=(orbit,))
    thetaplot = np.zeros(shape=(orbit,))
    x = np.zeros(shape=(orbit,))
    y = np.zeros(shape=(orbit,))
    for loop in range(orbit):
        # standard (Chirikov) map: kick the action, then twist the angle
        rnew = rlast + eps*np.sin(thlast)
        thnew = np.mod(thlast+rnew,2*np.pi)

        rplot[loop] = rnew
        thetaplot[loop] = np.mod(thnew-np.pi,2*np.pi) - np.pi

        rlast = rnew
        thlast = thnew

        # map the action-angle pair onto the plane for plotting
        x[loop] = (rnew+np.pi+0.25)*np.cos(thnew)
        y[loop] = (rnew+np.pi+0.25)*np.sin(thnew)

    plt.plot(x,y,'o',ms=1)

plt.savefig('StandMapTwist')

The twist map for three values of ε is shown in the figure below. For ε = 0.2, most orbits are open, with one elliptic point and its associated hyperbolic point. At ε = 0.9 the periodic elliptic point is still stable, but the hyperbolic point has generated a region of chaotic orbits. There is still a remnant open orbit that is associated with an orbital period ratio at the Golden Ratio. However, by ε = 0.97, even this most stable orbit has broken up into a chain of closed orbits as the chaotic regions expand.

Twist map for three levels of perturbation.

Safety in Numbers

In our solar system, governed by gravitational attractions, the square of the orbital period increases as the cube of the average radius (Kepler’s third law). Consider the restricted three-body problem of the Sun and Jupiter with the Earth as the third body. If we analyze the stability of the Earth’s orbit as a function of distance from the Sun, the orbital ratio relative to Jupiter would change smoothly. Near our current position, it would be in a 12:1 resonance, but as we moved farther from the Sun, this ratio would decrease. When the orbital period ratio is sufficiently irrational, then the orbit would be immune to Jupiter’s pull. But as the orbital ratio approaches ratios of integers, the effect gets larger. Close enough to Jupiter there would be a succession of radii that had regular motion separated by regions of chaotic motion. The regions of regular motion associated with irrational numbers act as if they were a barrier, restricting the range of chaotic orbits and protecting more distant orbits from the chaos. In this way numbers, rational versus irrational, protect us from the chaos of our own solar system.

A dramatic demonstration of the orbital resonance effect can be seen with the asteroid belt. The many small bodies act as probes of the orbital resonances. The repetitive tug of Jupiter opens gaps in the distribution of asteroid radii, with major gaps, called Kirkwood Gaps, opening at orbital ratios of 3:1, 5:2, 7:3 and 2:1. These gaps are the radii where chaotic behavior occurs, while the regions in between are stable. Most asteroids spend most of their time in the stable regions, because chaotic motion tends to sweep them out of the regions of resonance. This mechanism for the Kirkwood gaps is the same physics that produces gaps in the rings of Saturn at resonances with the many moons of Saturn.

The gaps in the asteroid distributions caused by orbital resonances with Jupiter. Ref. Wikipedia

Further Reading

For a detailed history of the development of KAM theory, see Chapter 9 Butterflies to Hurricanes in Galileo Unbound (Oxford University Press, 2018).

For a more detailed mathematical description of the KAM theory, see Chapter 5, Hamiltonian Chaos, in Introduction to Modern Dynamics, 2nd edition (Oxford University Press, 2019).

See also:

Dumas, H. S., The KAM Story: A friendly introduction to the content, history and significance of Classical Kolmogorov-Arnold-Moser Theory. World Scientific: 2014.

Arnold, V. I., From superpositions to KAM theory. In Vladimir Igorevich Arnold: Selected Papers (PHASIS, 1997), 60, 727–740.

Limit-Cycle Oscillators: The Fast and the Slow of Grandfather Clocks

Imagine in your mind the stately grandfather clock.  The long slow pendulum swinging back and forth so purposefully with such majesty.  It harks back to slower, simpler times—seemingly Victorian in character, although its origins go back to Christiaan Huygens in 1656.  In introductory physics classes the dynamics of the pendulum is taught as one of the simplest simple harmonic oscillators, only a bit more complicated than a mass on a spring.

But don’t be fooled!  This simplicity is an illusion, for the pendulum clock lies at the heart of modern dynamics.  It is a nonlinear autonomous oscillator with system gain that balances dissipation to maintain a dynamic equilibrium that ticks on resolutely as long as some energy source can continue to supply it (like the heavy clock weights).

This analysis has converted the two-dimensional dynamics of the autonomous oscillator to a simple one-dimensional dynamics with a stable fixed point.

The dynamic equilibrium of the grandfather clock is known as a limit cycle, and they are the central feature of autonomous oscillators.  Autonomous oscillators are one of the building blocks of complex systems, providing the fundamental elements for biological oscillators, neural networks, business cycles, population dynamics, viral epidemics, and even the rings of Saturn.  The most famous autonomous oscillator (after the pendulum clock) is named for a Dutch physicist, Balthasar van der Pol (1889 – 1959), who discovered the laws that govern how electrons oscillate in vacuum tubes.  But this highly specialized physics problem has expanded to become the new guiding paradigm for the fundamental oscillating element of modern dynamics—the van der Pol oscillator.

The van der Pol Oscillator

The van der Pol (vdP) oscillator begins as a simple harmonic oscillator (SHO) in which the dissipation (loss of energy) is flipped to become gain of energy.  This is as simple as flipping the sign of the damping term in the SHO

$$m\ddot{x} - \beta\dot{x} + m\omega_0^2 x = 0$$

where β is positive.  This 2nd-order ODE is re-written into a dynamical flow as

$$\dot{x} = y \qquad \dot{y} = \gamma y - \omega_0^2 x$$

where γ = β/m is the system gain.  Clearly, the dynamics of this SHO with gain would lead to run-away as the oscillator grows without bound.             

But no real-world system can grow indefinitely.  It has to eventually be limited by things such as inelasticity.  One of the simplest ways to include such a limiting process in the mathematical model is to make the gain get smaller at larger amplitudes.  This can be accomplished by making the gain a function of the amplitude x as

$$\gamma(x) = \varepsilon\left(1 - x^2\right)$$

When the amplitude x gets large, the gain decreases, becoming zero at x = 1 and changing sign (turning into damping) for larger amplitudes.  Putting this amplitude-dependent gain into the SHO equation yields

$$\ddot{x} - \varepsilon\left(1 - x^2\right)\dot{x} + \omega_0^2 x = 0$$

This is the van der Pol equation.  It is the quintessential example of a nonlinear autonomous oscillator.            

When the parameter ε is large, the vdP oscillator can behave in strongly nonlinear ways, with strongly non-harmonic oscillations.  An example is shown in Figs. 1 and 2 for ω0 = 5 and ε = 2.5.  The oscillation is clearly non-harmonic.

Fig. 1 Time trace of the position and velocity of the vdP oscillator with ω0 = 5 and ε = 2.5.
Fig. 2 State-space portrait of the vdP flow lines for ω0 = 5 and ε = 2.5.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Apr 16 07:38:57 2018
@author: David Nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

model_title = 'van der Pol Oscillator'

def solve_flow(param,lim = [-3,3,-3,3],max_time=10.0):
# van der Pol 2D flow: xdot = y, ydot = -alpha*x + beta*(1 - x^2)*y
    def flow_deriv(x_y, t0, alpha,beta):
        x, y = x_y
        return [y,-alpha*x+beta*(1-x**2)*y]

    plt.figure()
    xmin, xmax, ymin, ymax = lim
    plt.axis([xmin, xmax, ymin, ymax])

    # seed a 12x12 grid of initial conditions just outside the plot window
    N = 144
    colors = plt.cm.prism(np.linspace(0, 1, N))

    x0 = np.zeros(shape=(N,2))
    ind = -1
    for i in range(12):
        for j in range(12):
            ind = ind + 1
            x0[ind,0] = xmin-1 + (xmax-xmin+2)*i/11
            x0[ind,1] = ymin-1 + (ymax-ymin+2)*j/11

    # Solve for the trajectories
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = np.asarray([integrate.odeint(flow_deriv, x0i, t, param)
                      for x0i in x0])

    for i in range(N):
        x, y = x_t[i,:,:].T
        lines = plt.plot(x, y, '-', c=colors[i])
        plt.setp(lines, linewidth=1)

    plt.title(model_title)
    plt.savefig('Flow2D')
    plt.show()

    return t, x_t

def solve_flow2(param,max_time=20.0):
# single van der Pol trajectory for the time trace
    def flow_deriv(x_y, t0, alpha,beta):
        x, y = x_y
        return [y,-alpha*x+beta*(1-x**2)*y]

    x0 = np.array([0.0, 4.5])   # initial position and velocity

    # Solve for the trajectory
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

    return t, x_t

param = (5, 2.5)             # van der Pol (alpha, beta)
lim = (-7,7,-10,10)

t, x_t = solve_flow(param,lim)    # state-space portrait of the flow lines

t, x_t = solve_flow2(param)       # time trace of position and velocity
plt.figure(2)
lines = plt.plot(t,x_t[:,0],t,x_t[:,1],'-')

Separation of Time Scales

Nonlinear systems can have very complicated behavior that may be difficult to address analytically.  This is why the numerical ODE solver is a central tool of modern dynamics.  But there is a very neat analytical trick that can be applied to tame the nonlinearities (if they are not too large) and simplify the autonomous oscillator.  This trick is called separation of time scales (also known as secular perturbation theory)—it looks for simultaneous fast and slow behavior within the dynamics.  An example of fast and slow time scales in a well-known dynamical system is found in the simple spinning top, in which nutation (a fast oscillation) is superposed on precession (a slow oscillation).

For the autonomous van der Pol oscillator the fast time scale is the natural oscillation frequency, while the slow time scale is the approach to the limit cycle.  Let’s assign t0 = t and t1 = εt, where ε is a small parameter.  t0 is the fast time (the natural oscillation) and t1 is the slow time (the approach to the limit cycle).  The solution in terms of these time scales is

$$x(t) = x_0(t_0, t_1) + \varepsilon\, x_1(t_0, t_1)$$

where x0 is the lowest-order response, whose slowly varying coefficients act as an envelope for the fast oscillation, and x1 is the first-order correction. The total differential is

$$\frac{d}{dt} = \frac{\partial}{\partial t_0} + \varepsilon \frac{\partial}{\partial t_1}$$

Similarly, to obtain a second derivative

$$\frac{d^2}{dt^2} = \frac{\partial^2}{\partial t_0^2} + 2\varepsilon \frac{\partial^2}{\partial t_0\,\partial t_1} + O(\varepsilon^2)$$

Therefore, the vdP equation in terms of x0 and x1 is

$$\frac{\partial^2 x_0}{\partial t_0^2} + \omega_0^2 x_0 + \varepsilon\left[\frac{\partial^2 x_1}{\partial t_0^2} + \omega_0^2 x_1 + 2\frac{\partial^2 x_0}{\partial t_0\,\partial t_1} - \left(1 - x_0^2\right)\frac{\partial x_0}{\partial t_0}\right] = 0$$

to lowest order. Now separate the orders to zeroth and first orders in ε, respectively,

$$\frac{\partial^2 x_0}{\partial t_0^2} + \omega_0^2 x_0 = 0$$

$$\frac{\partial^2 x_1}{\partial t_0^2} + \omega_0^2 x_1 = -2\frac{\partial^2 x_0}{\partial t_0\,\partial t_1} + \left(1 - x_0^2\right)\frac{\partial x_0}{\partial t_0}$$

Solve the first equation (a simple harmonic oscillator)

$$x_0 = A(t_1)\cos\omega_0 t_0 + B(t_1)\sin\omega_0 t_0$$

and plug the solution into the right-hand side of the second equation to give

$$\frac{\partial^2 x_1}{\partial t_0^2} + \omega_0^2 x_1 = \omega_0\left[2\frac{dA}{dt_1} - A\left(1 - \frac{A^2 + B^2}{4}\right)\right]\sin\omega_0 t_0 - \omega_0\left[2\frac{dB}{dt_1} - B\left(1 - \frac{A^2 + B^2}{4}\right)\right]\cos\omega_0 t_0 + \text{non-secular terms}$$

The key to secular perturbation theory is to confine dynamics to their own time scales.  In other words, the slow dynamics provide the envelope that modulates the fast carrier frequency.  The envelope dynamics are contained in the time dependence of the coefficients A and B.  Furthermore, the dynamics of x1 should remain bounded, with no secular growth, which requires each bracketed term in the last equation to be zero.  Therefore, the dynamical equations for the envelope functions are

$$\frac{dA}{dt_1} = \frac{A}{2}\left(1 - \frac{A^2 + B^2}{4}\right) \qquad \frac{dB}{dt_1} = \frac{B}{2}\left(1 - \frac{A^2 + B^2}{4}\right)$$

These can be transformed into polar coordinates with A = R cos φ and B = R sin φ. Because the envelope functions do not depend on the fast time scale, the time derivatives are

$$\frac{dA}{dt_1} = \frac{dR}{dt_1}\cos\phi - R\frac{d\phi}{dt_1}\sin\phi \qquad \frac{dB}{dt_1} = \frac{dR}{dt_1}\sin\phi + R\frac{d\phi}{dt_1}\cos\phi$$

With these expressions, the slow dynamics become

$$\frac{dR}{dt_1} = \frac{R}{2}\left(1 - \frac{R^2}{4}\right) \qquad \frac{d\phi}{dt_1} = 0$$

where the slow evolution of the phase is equal to zero, leaving only the angular velocity of the unperturbed oscillator. (This is analogous to the rotating wave approximation (RWA) in optics, and also equivalent to studying the dynamics in the rotating frame of the unperturbed oscillator.)

Making a final substitution ρ = R/2 gives a very simple set of dynamical equations

$$\frac{d\rho}{dt_1} = \frac{\rho}{2}\left(1 - \rho^2\right) \qquad \frac{d\phi}{dt_1} = 0$$

These final equations capture the essential properties of the relaxation of the dynamics to the limit cycle. To lowest order (when the gain is weak) the angular frequency is unaffected, and the system oscillates at the natural frequency. The amplitude of the limit cycle equals 1. A deviation in the amplitude from 1 decays slowly back to the limit cycle, making it a stable fixed point in the radial dynamics. This analysis has converted the two-dimensional dynamics of the autonomous oscillator to a simple one-dimensional dynamics with a stable fixed point on the radius variable. The phase-space portrait of this simplified autonomous oscillator is shown in Fig. 3. What could be simpler? This simplified autonomous oscillator can be found as a fundamental element of many complex systems.

Fig. 3 The state-space diagram of the simplified autonomous oscillator. Initial conditions relax onto the limit cycle. (Reprinted from Introduction to Modern Dynamics (Oxford, 2019) on pg. 8)

Further Reading

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd edition (Oxford University Press, 2019)

Pikovsky, A. S., M. G. Rosenblum and J. Kurths (2003). Synchronization: A Universal concept in nonlinear science. Cambridge, Cambridge University Press.

Orbiting Photons around a Black Hole

The physics of a path of light passing a gravitating body is one of the hardest concepts to understand in General Relativity, but it is also one of the easiest.  It is hard because there can be no force of gravity on light even though the path of a photon bends as it passes a gravitating body.  It is easy, because the photon is following the simplest possible path—a geodesic equation for force-free motion.

This blog picks up where my last blog left off, where I defined the geodesic equation and presented the Schwarzschild metric.  With those two equations in hand, we could simply solve for the null geodesics (a null geodesic is the path of a light beam through a manifold).  But there turns out to be a simpler approach that Einstein came up with himself (he never did like doing things the hard way).  He just had to sacrifice the fundamental postulate that he used to explain everything about Special Relativity.

Throwing Special Relativity Under the Bus

The fundamental postulate of Special Relativity states that the speed of light is the same for all observers.  Einstein posed this postulate, then used it to derive some of the most astonishing consequences of Special Relativity—like E = mc2.  This postulate is at the rock core of his theory of relativity and can be viewed as one of the simplest “truths” of our reality—or at least of our spacetime. 

Yet as soon as Einstein began thinking about how to extend SR to a more general situation, he realized almost immediately that he would have to throw this postulate out.  While the speed of light measured locally is always equal to c, the apparent speed of light observed by a distant observer (far from the gravitating body) is modified by gravitational time dilation and length contraction.  This means that the apparent speed of light, as observed at a distance, varies as a function of position.  From this simple conclusion Einstein derived a first estimate of the deflection of light by the Sun, though he initially was off by a factor of 2.  (The full story of Einstein’s derivation of the deflection of light by the Sun and the confirmation by Eddington is in Chapter 7 of Galileo Unbound (Oxford University Press, 2018).)

The “Optics” of Gravity

The invariant element for a light path moving radially in the Schwarzschild geometry is

$$ds^2 = 0 = \left(1 - \frac{R_S}{r}\right)c^2\,dt^2 - \frac{dr^2}{1 - \dfrac{R_S}{r}}$$

The apparent speed of light is then

$$c(r) = \left|\frac{dr}{dt}\right| = c\left(1 - \frac{R_S}{r}\right)$$

where c(r) is always less than c when observed from flat space far from the gravitating body.  The “refractive index” of space is defined, as for any optical material, as the ratio of the constant speed divided by the observed speed

$$n(r) = \frac{c}{c(r)}$$

Because the Schwarzschild metric has the property that its time and radial factors are reciprocal

$$g_{rr} = -\frac{1}{g_{tt}} = -\left(1 - \frac{R_S}{r}\right)^{-1}$$

the effective refractive index of warped space-time is

$$n(r) = \frac{1}{1 - \dfrac{R_S}{r}}$$

with a divergence at the Schwarzschild radius.

The refractive index of warped space-time in the limit of weak gravity can be used in the ray equation (also known as the Eikonal equation described in an earlier blog)

$$\frac{d}{ds}\left(n\,\frac{d\mathbf{r}}{ds}\right) = \nabla n$$

where the gradient of the refractive index of space is

$$\nabla n = -n^2\,\frac{R_S}{r^2}\,\hat{\mathbf{r}}$$

The ray equation is then a four-variable flow in the position (x, y) and the index-scaled direction (z, w) of the ray

$$\frac{dx}{ds} = \frac{z}{n} \qquad \frac{dy}{ds} = \frac{w}{n} \qquad \frac{dz}{ds} = \frac{\partial n}{\partial x} \qquad \frac{dw}{ds} = \frac{\partial n}{\partial y}$$

These equations represent a 4-dimensional flow for a light ray confined to a plane.  The trajectory of any light path is found by using an ODE solver subject to the initial conditions for the direction of the light ray.  This is simple for us to do today with Python or Matlab, but it is also something that could be done long before the advent of computers by early theorists of relativity like Max von Laue (1879 – 1960).

The Relativity of Max von Laue

In the Fall of 1905 in Berlin, a young German physicist by the name of Max Laue was sitting in the physics colloquium at the University, listening to another Max, his doctoral supervisor Max Planck, deliver a seminar on Einstein’s new theory of relativity.  Laue was struck by the simplicity of the theory, so simple as to seem simplistic and hence hard to believe, but the beauty of the theory stuck with him, and he began to think through the consequences for experiments like the Fizeau experiment on partial ether drag.

In 1851, Armand Hippolyte Louis Fizeau (1819 – 1896) built one of the world’s first optical interferometers and used it to measure the speed of light inside moving fluids.  At that time the speed of light was believed to be a property of the luminiferous ether, and there were several opposing theories on how light would travel inside moving matter.  One theory would have the ether fully stationary, unaffected by moving matter, and hence the speed of light would be unaffected by motion.  An opposite theory would have the ether fully entrained by matter and hence the speed of light in moving matter would be a simple sum of speeds.  A middle theory considered that only part of the ether was dragged along with the moving matter.  This was Fresnel’s partial ether drag hypothesis that he had arrived at to explain why his friend François Arago had not observed any contribution to stellar aberration from the motion of the Earth through the ether.  When Fizeau performed his experiment, the results agreed closely with Fresnel’s drag coefficient, which seemed to settle the matter.  Yet when Michelson and Morley performed their experiments of 1887, there was no evidence for partial drag.

Even after the exposition by Einstein on relativity in 1905, the disagreement of the Michelson-Morley results with Fizeau’s results was not fully reconciled until Laue showed in 1907 that the velocity addition theorem of relativity gave complete agreement with the Fizeau experiment.  The velocity observed in the lab frame is found using the velocity addition theorem of special relativity. For the Fizeau experiment, water with a refractive index of n is moving with a speed v and hence the speed in the lab frame is

$$u = \frac{\dfrac{c}{n} + v}{1 + \dfrac{v}{nc}} \approx \frac{c}{n} + v\left(1 - \frac{1}{n^2}\right)$$

The difference in the speed of light between the stationary and the moving water is then

$$\Delta u = u - \frac{c}{n} = v\left(1 - \frac{1}{n^2}\right)$$

where the last term is precisely the Fresnel drag coefficient.  This was one of the first definitive “proofs” of the validity of Einstein’s theory of relativity, and it made Laue one of relativity’s staunchest proponents.  Spurred on by his success with the Fresnel drag coefficient explanation, Laue wrote the first monograph on relativity theory, publishing it in 1910. 

Fig. 1 Front page of von Laue’s textbook on Special Relativity, first published in 1910 (this is the fourth edition, published in 1921).

A Nobel Prize for Crystal X-ray Diffraction

In 1909 Laue became a Privatdozent under Arnold Sommerfeld (1868 – 1951) at the university in Munich.  In the Spring of 1912 he was walking in the Englischer Garten on the northern edge of the city talking with Paul Ewald (1888 – 1985), who was finishing his doctorate with Sommerfeld studying the structure of crystals.  Ewald was considering the interaction of optical wavelengths with the periodic lattice when it struck Laue that x-rays would have the kind of short wavelengths that would allow the crystal to act as a diffraction grating to produce multiple diffraction orders.  Within a few weeks of that discussion, two of Sommerfeld’s students (Friedrich and Knipping) used an x-ray source and photographic film to look for the predicted diffraction spots from a copper sulfate crystal.  When the film was developed, it showed a constellation of dark spots for each of the diffraction orders of the x-rays scattered from the multiple periodicities of the crystal lattice.  Two years later, in 1914, Laue was awarded the Nobel prize in physics for the discovery.  That same year his father was elevated to the hereditary nobility in the Prussian empire and Max Laue became Max von Laue.

Von Laue was not one to take risks, and he remained conservative in many of his interests.  He was immensely respected and played important roles in the administration of German science, but his scientific contributions after receiving the Nobel Prize were only modest.  Yet as the Nazis came to power in the early 1930’s, he was one of the few physicists to stand up and resist the Nazi take-over of German physics.  He was especially disturbed by the plight of the Jewish physicists.  In 1933 he was invited to give the keynote address at the conference of the German Physical Society in Würzburg where he spoke out against the Nazi rejection of relativity as they branded it “Jewish science”.  In his speech he likened Einstein, the target of much of the propaganda, to Galileo.  He said, “No matter how great the repression, the representative of science can stand erect in the triumphant certainty that is expressed in the simple phrase: And yet it moves.”  Von Laue believed that truth would hold out in the face of the proscription against relativity theory by the Nazi regime.  The quote “And yet it moves” is supposed to have been muttered by Galileo just after his abjuration before the Inquisition, referring to the Earth moving around the Sun.  Although the quote is famous, it is believed to be a myth.

In an odd side-note of history, von Laue sent his gold Nobel prize medal to Denmark for safekeeping with Niels Bohr so that it would not be paraded about by the Nazi regime.  Yet when the Nazis invaded Denmark, to keep it out of their hands, the medal was dissolved in aqua regia by a member of Bohr’s team, George de Hevesy.  The gold completely dissolved into an orange liquid that was stored in a beaker high on a shelf through the war.  When Denmark was finally freed, the dissolved gold was precipitated out and a new medal was struck by the Nobel committee and re-presented to von Laue in a ceremony in 1951.

The Orbits of Light Rays

Von Laue’s interests always stayed close to the properties of light and electromagnetic radiation ever since he was introduced to the field when he studied with Woldemar Voigt at Göttingen in 1899.  This interest included the theory of relativity, and only a few years after Einstein published his theory of General Relativity and Gravitation, von Laue added to his earlier textbook on relativity by writing a second volume on the general theory.  The new volume was published in 1920 and included the theory of the deflection of light by gravity.

One of the very few illustrations in his second volume is of light coming into interaction with a super massive gravitational field characterized by a Schwarzschild radius.  (No one at the time called it a “black hole”, nor even mentioned Schwarzschild.  That terminology came much later.)  He shows in the drawing how light, if incident at just the right impact parameter, would actually loop around the object.  This is the first time such a diagram appeared in print, showing the trajectory of light so strongly affected by gravity.

Fig. 2 A page from von Laue’s second volume on relativity (first published in 1920) showing the orbit of a photon around a compact mass with “gravitational cutoff” (later known as a “black hole”). The figure is drawn semi-quantitatively, but the phenomenon was clearly understood by von Laue.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue May 28 11:50:24 2019
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def create_circle():
    # the black hole, drawn at the Schwarzschild radius A = 10
    circle = plt.Circle((0,0), radius=10, color='black')
    return circle

def show_shape(patch):
    ax = plt.gca()
    ax.add_patch(patch)
    plt.axis('scaled')
    plt.show()

def refindex(x,y):
    # effective refractive index n = 1/(1 - A/r) of warped space and its
    # gradient, with an approximate correction factor chosen to match
    # the circular photon orbit at r = 1.5*A
    A = 10          # Schwarzschild radius
    eps = 1e-6      # regularization to avoid division by zero

    rp0 = np.sqrt(x**2 + y**2)

    n = 1/(1 - A/(rp0+eps))
    fac = np.abs((1-9*(A/rp0)**2/8))   # approx correction to Eikonal
    nx = -fac*n**2*A*x/(rp0+eps)**3
    ny = -fac*n**2*A*y/(rp0+eps)**3

    return [n,nx,ny]

def flow_deriv(x_y_z,tspan):
    # four-variable Eikonal flow: position (x,y) and scaled direction (z,w)
    x, y, z, w = x_y_z

    [n,nx,ny] = refindex(x,y)

    yp = np.zeros(shape=(4,))
    yp[0] = z/n
    yp[1] = w/n
    yp[2] = nx
    yp[3] = ny

    return yp

# launch a fan of rays from the left over a range of impact parameters
for loop in range(-5,30):

    xstart = -100
    ystart = -2.245 + 4*loop

    [n,nx,ny] = refindex(xstart,ystart)

    y0 = [xstart, ystart, n, 0]    # ray starts moving in the +x direction

    tspan = np.linspace(1,400,2000)

    y = integrate.odeint(flow_deriv, y0, tspan)

    xx = y[:,0]
    yy = y[:,1]

    plt.figure(1)
    lines = plt.plot(xx,yy)
    plt.setp(lines, linewidth=1)

plt.title('Photon Orbits')

c = create_circle()
show_shape(c)
axes = plt.gca()
axes.set_xlim([-100,100])
axes.set_ylim([-100,100])

# Now set up a circular photon orbit at r = 1.5 times the Schwarzschild radius
xstart = 0
ystart = 15

[n,nx,ny] = refindex(xstart,ystart)

y0 = [xstart, ystart, n, 0]

tspan = np.linspace(1,94,1000)

y = integrate.odeint(flow_deriv, y0, tspan)

xx = y[:,0]
yy = y[:,1]

plt.figure(1)
lines = plt.plot(xx,yy)
plt.setp(lines, linewidth=2, color='black')
plt.show()

One of the most striking effects of gravity on photon trajectories is the possibility for a photon to orbit a black hole in a circular orbit. This is shown in Fig. 3 as the black circular ring for a photon at a radius equal to 1.5 times the Schwarzschild radius. This radius defines what is known as the photon sphere. However, the orbit is not stable. Slight deviations will send the photon spiraling outward or inward.

The Eikonal approximation does not strictly hold under strong gravity, but the Eikonal equations with the effective refractive index of space still yield semi-quantitative behavior. In the Python code, a correction factor is used to match the theory to the circular photon orbits, while still agreeing with trajectories far from the black hole. The results of the calculation are shown in Fig. 3. For large impact parameters, the rays are deflected through a finite angle. At a critical impact parameter, near 3 times the Schwarzschild radius, the ray loops around the black hole. For smaller impact parameters, the rays are captured by the black hole.

Fig. 3 Photon orbits near a black hole calculated using the Eikonal equation and the effective refractive index of warped space. One ray, near the critical impact parameter, loops around the black hole as predicted by von Laue. The central black circle is the black hole with a Schwarzschild radius of 10 units. The black ring is the circular photon orbit at a radius 1.5 times the Schwarzschild radius.

Photons pile up around the black hole at the photon sphere. The first image ever of the photon sphere of a black hole was made earlier this year (announced April 10, 2019). The image shows the shadow of the supermassive black hole in the center of Messier 87 (M87), an elliptical galaxy 55 million light-years from Earth. This black hole is 6.5 billion times the mass of the Sun. Imaging the photon sphere required eight ground-based radio telescopes placed around the globe, operating together to form a single telescope with an optical aperture the size of our planet.  The resolution of such a large telescope would allow one to image a half-dollar coin on the surface of the Moon, although this telescope operates in the radio frequency range rather than the optical.

Fig. 4 Scientists have obtained the first image of a black hole, using Event Horizon Telescope observations of the center of the galaxy M87. The image shows a bright ring formed as light bends in the intense gravity around a black hole that is 6.5 billion times more massive than the Sun.

Further Reading

Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019)

B. Lavenda, The Optical Properties of Gravity, J. Mod. Phys. 8, 803–838 (2017)

Getting Armstrong, Aldrin and Collins Home from the Moon: Apollo 11 and the Three-Body Problem

Fifty years ago on the 20th of July at nearly 11 o’clock at night, my brothers and I were peering through the screen door of a very small 1960’s Shasta compact car trailer watching the TV set on the picnic table outside the trailer door.  Our family was at a camp ground in southern Michigan and the mosquitos were fierce (hence why we were inside the trailer looking out through the screen).  Neil Armstrong was about to be the first human to step foot on the Moon.  The image on the TV was a fuzzy black and white, with barely recognizable shapes clouded even more by the dirt and dead bugs on the screen, but it is a memory etched in my mind.  I was 10 years old and I was convinced that when I grew up I would visit the Moon myself, because by then Moon travel would be like flying to Europe.  It didn’t turn out that way, and fifty years later it’s a struggle to even get back there. 

The dangers could have become life-threatening for the crew of Apollo 11. If they had miscalculated their trajectory home and bounced off the Earth’s atmosphere, they would have become a tragic demonstration of the chaos of three-body orbits.

So maybe I won’t get to the Moon, but maybe my grandchildren will.  And if they do, I hope they know something about the three-body problem in physics, because getting to and from the Moon isn’t as easy as it sounds.  Apollo 11 faced real danger at several critical points on its flight plan, but all went perfectly (except overshooting their landing site and that last boulder field right before Armstrong landed). Some of those dangers became life-threatening for the crew of Apollo 13, and if they had miscalculated their trajectory home and bounced off the Earth’s atmosphere, they would have become a tragic demonstration of the chaos of three-body orbits.  In fact, their lifeless spaceship might have returned to the Moon and back to Earth over and over again, caught in an infinite chaotic web.

The complexities of trajectories in the three-body problem arise because there are too few constants of motion and too many degrees of freedom.  To get an intuitive picture of how the trajectory behaves, it is best to start with a problem known as the restricted three-body problem.

The Saturn V Booster, perhaps the pinnacle of “muscle and grit” space exploration.

The Restricted Three-Body Problem

The restricted three-body problem was first considered by Leonhard Euler in 1762 (for a further discussion of the history of the three-body problem, see my Blog from July 5).  For the special case of circular orbits of constant angular frequency, the motion of the third mass is described by the Lagrangian

$$L = \frac{1}{2}m\left(\dot{r}^2 + r^2\dot{\theta}^2\right) - V(r, \theta, t)$$

where the potential is time dependent because of the motion of the two larger masses.  Lagrange approached the problem by adopting a rotating reference frame in which the two larger masses m1 and m2 lie along a stationary line defined by their centers.  The new angle variable is theta-prime.  The Lagrangian in the rotating frame is

$$L = \frac{1}{2}m\left(\dot{r}'^2 + r'^2\dot{\theta}'^2\right) - V_{\mathrm{eff}} \qquad V_{\mathrm{eff}} = -m\,r'^2\omega\,\dot{\theta}' - \frac{1}{2}m\,r'^2\omega^2 + V(r', \theta')$$

where the effective potential is now time independent.  The first term in the effective potential is the Coriolis effect and the second is the centrifugal term.  The resulting dynamical flow in the plane is four dimensional, in the radius r, the radial speed vr, the angle θ′ and the angular velocity ω′ of the third body (with χ = M2/M1 the mass ratio, R the Earth–Moon separation, and ω the rotation rate of the frame, as implemented in the Python code below)

$$\dot{r} = v_r$$

$$\dot{v}_r = -\frac{\omega^2 R^3}{r_1^3}\left(r - |x_1|\cos(\pi - \theta')\right) - \frac{\chi\,\omega^2 R^3}{r_2^3}\left(r - |x_2|\cos\theta'\right) + r\left(\omega' - \omega\right)^2$$

$$\dot{\theta}' = \omega'$$

$$\dot{\omega}' = \frac{2 v_r\left(\omega - \omega'\right)}{r} + \frac{\omega^2 R^3\,|x_1|\sin(\pi - \theta')}{r\,r_1^3} - \frac{\chi\,\omega^2 R^3\,|x_2|\sin\theta'}{r\,r_2^3}$$

where the position vectors are in the center-of-mass frame

$$r_1 = \sqrt{x_1^2 + r^2 - 2\,r\,|x_1|\cos(\pi - \theta')} \qquad r_2 = \sqrt{x_2^2 + r^2 - 2\,r\,|x_2|\cos\theta'}$$

relative to the positions of the Earth and Moon (x1 and x2) in the rotating frame in which they are at rest along the x-axis.

A single trajectory solved for this flow is shown in Fig. 1 for a tiny object passing back and forth chaotically between the Earth and the Moon. The object is considered to be massless, or at least so small it does not perturb the Earth-Moon system. The energy of the object was selected to allow it to pass over the potential barrier of the Lagrange-Point L1 between the Earth and the Moon. The object spends most of its time around the Earth, but now and then will get into a transfer orbit that brings it around the Moon. This would have been the fate of Apollo 11 if their last thruster burn had failed.

Fig. 1 The trajectory of a tiny object in the planar three-body problem interacting with a large mass (Earth on the left) and a small mass (Moon on the right). The energy of the trajectory allows it to pass back and forth chaotically between proximity to the Earth and proximity to the Moon. The time-duration of the simulation is approximately one decade. The envelope of the trajectories is called the “Hill region”, named after one of the first US astrophysicists, George William Hill (1838-1914), who studied the 3-body problem of the Moon.

Contrast the orbit of Fig. 1 with the simple flight plan of Apollo 11 on the banner figure. The chaotic character of the three-body problem emerges for a “random” initial condition. You can play with different initial conditions in the following Python code to explore the properties of this dynamical problem. Note that in this simulation, the mass of the Moon was chosen about 8 times larger than in nature to exaggerate the effect of the Moon.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue May 28 11:50:24 2019
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

womega = 1          # angular frequency of the rotating frame
R = 1               # Earth-Moon separation
eps = 1e-6          # regularization to avoid division by zero

M1 = 1              # Mass of the Earth
M2 = 1/10           # Mass of the Moon (exaggerated ~8x its natural value)
chsi = M2/M1

x1 = -M2*R/(M1+M2)  # Earth location in rotating frame
x2 = x1 + R         # Moon location

def poten(y,c):
    # effective potential relative to the total energy E (defined below);
    # not used by the flow, kept for exploring the Hill region
    rp0 = np.sqrt(y**2 + c**2)
    thetap0 = np.arctan(y/c)

    rp1 = np.sqrt(x1**2 + rp0**2 - 2*np.abs(rp0*x1)*np.cos(np.pi-thetap0))
    rp2 = np.sqrt(x2**2 + rp0**2 - 2*np.abs(rp0*x2)*np.cos(thetap0))
    V = -M1/rp1 - M2/rp2 - E

    return [V]

def flow_deriv(x_y_z,tspan):
    # four-dimensional flow in the rotating frame: (r, vr, theta, w)
    x, y, z, w = x_y_z

    # distances to the Earth and Moon by the law of cosines
    r1 = np.sqrt(x1**2 + x**2 - 2*np.abs(x*x1)*np.cos(np.pi-z))
    r2 = np.sqrt(x2**2 + x**2 - 2*np.abs(x*x2)*np.cos(z))

    yp = np.zeros(shape=(4,))
    yp[0] = y
    yp[1] = -womega**2*R**3*(np.abs(x)-np.abs(x1)*np.cos(np.pi-z))/(r1**3+eps) - womega**2*R**3*chsi*(np.abs(x)-np.abs(x2)*np.cos(z))/(r2**3+eps) + x*(w-womega)**2
    yp[2] = w
    yp[3] = 2*y*(womega-w)/x - womega**2*R**3*chsi*np.abs(x2)*np.sin(z)/(x*(r2**3+eps)) + womega**2*R**3*np.abs(x1)*np.sin(np.pi-z)/(x*(r1**3+eps))

    return yp

r0 = 0.64       # initial radius
v0 = 0.3        # initial radial speed
theta0 = 0      # initial angle
vrfrac = 1      # fraction of speed in radial versus angular directions

rp1 = np.sqrt(x1**2 + r0**2 - 2*np.abs(r0*x1)*np.cos(np.pi-theta0))
rp2 = np.sqrt(x2**2 + r0**2 - 2*np.abs(r0*x2)*np.cos(theta0))
V = -M1/rp1 - M2/rp2    # initial potential energy
T = 0.5*v0**2           # initial kinetic energy
E = T + V               # total energy of the trajectory

vr = vrfrac*v0
W = (2*T - v0**2)/r0

y0 = [r0, vr, theta0, W]   # This is where you set the initial conditions

tspan = np.linspace(1,2000,20000)

y = integrate.odeint(flow_deriv, y0, tspan)

# convert the polar trajectory to Cartesian coordinates for plotting
xx = y[:,0]*np.cos(y[:,2])
yy = y[:,0]*np.sin(y[:,2])

plt.figure(1)
lines = plt.plot(xx,yy)
plt.setp(lines, linewidth=0.5)
plt.show()

In the code, set the position and speed of the Apollo command module where r0, v0, theta0 and vrfrac are defined, and put in the initial conditions on the line that sets y0. The mass of the Moon in nature is 1/81 of the mass of the Earth, which shrinks the L1 “bottleneck” to a much smaller region that you can explore to see what the fate of the Apollo missions could have been, as in the snippet below.

Further Reading

The Three-body Problem, Longitude at Sea, and Lagrange’s Points

Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019)