Physics of the Flipping iPhone and the Fate of the Earth

Find an iPhone, then flip it face upwards (hopefully over a soft cushion or mattress).  What do you see?

An iPhone is a rectangular parallelepiped with three unequal dimensions and hence three unequal principal moments of inertia I1 < I2 < I3.  The corresponding principal axes are: perpendicular to the face, horizontal through the short dimension, and horizontal through the long dimension. Spin the iPhone around its long axis and it keeps a nice, steady spin.  Spin it around the axis pointing out of the face and again the spin is steady. But flip it face upwards, so that it tumbles about the intermediate axis, and it almost always does a half twist. Why?

The answer is variously known as the Tennis Racket Theorem or the Intermediate Axis Theorem or even the Dzhanibekov Effect. If you don’t have an iPhone or Samsung handy, then watch this NASA video of the effect.

Stability Analysis

The flipping iPhone is a rigid body experiencing force-free motion. The Euler equations are an easy way to approach the somewhat complicated physics. These equations are
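
I1 dω1/dt - (I2 - I3) ω2 ω3 = N1
I2 dω2/dt - (I3 - I1) ω3 ω1 = N2
I3 dω3/dt - (I1 - I2) ω1 ω2 = N3

where N1, N2 and N3 are the torques about the three principal axes.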

They all equal zero because there is no torque. First let’s assume the object is rotating mainly around the x1 axis, so that ω2 and ω3 are small and ω1 is nearly constant.  Then solving for the angular accelerations yields
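
dω2/dt = ((I3 - I1)/I2) ω1 ω3
dω3/dt = ((I1 - I2)/I3) ω1 ω2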

This is a two-dimensional flow equation in the variables  ω2, ω3.  Hence we can apply classic stability analysis for rotation mainly about the x1 axis. The Jacobian matrix is
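
J = | 0                    ((I3 - I1)/I2) ω1 |
    | ((I1 - I2)/I3) ω1    0                 |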

This matrix has a trace τ = 0 and a determinant Δ given by
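
Δ = -[(I3 - I1)(I1 - I2)/(I2 I3)] ω1² = [(I3 - I1)(I2 - I1)/(I2 I3)] ω1²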

Because of the ordering I1 < I2 < I3 we know that this quantity is positive. 

Armed with the trace and the determinant of a two-dimensional flow, we simply need to look at the 2D “stability space” shown in Fig. 1. The horizontal axis is the determinant of the Jacobian matrix evaluated at the fixed point of the motion, and the vertical axis is the trace. In the case of the flipping iPhone, the Jacobian matrix is independent of both ω2 and ω3 (as long as they remain small), so the stability is global. When the determinant is positive, the stability depends on the trace. If the trace is positive, all motions are unstable (deviations grow exponentially). If the trace is negative, all motions are stable. The sideways parabola in the figure is known as the discriminant. If solutions lie within the discriminant, they are spirals. As the trace approaches zero, the spirals get slower and slower, until they become simple harmonic motions when the trace reaches zero. This kind of marginal stability is also known as a center. Centers have steady-state stability without dissipation.

Fig. 1 The stability space for two-dimensional dynamics. The vertical axis is the trace of the Jacobian matrix and the horizontal axis is the determinant. If the determinant is negative, all motions are unstable saddle points. Otherwise, stability depends on the sign of the trace, unless the trace is zero, in which case the motion has steady-state stability like celestial orbits or harmonic oscillators. (Reprinted from Ref. [1])

For the flipping iPhone (or tennis racket or book), the trace is zero and the determinant is positive for rotation mainly about the x1 axis, and the stability is therefore a “center”.  This is why the iPhone spins nicely about its axis with the smallest moment.

Let’s permute the indices to get the motion about the x3 axis with the largest moment. Then
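
dω1/dt = ((I2 - I3)/I1) ω3 ω2
dω2/dt = ((I3 - I1)/I2) ω3 ω1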

The trace and determinant are
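
τ = 0,   Δ = [(I3 - I2)(I3 - I1)/(I1 I2)] ω3² > 0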

where the determinant is again positive and the stability is again a center.

But now let’s permute again so that the motion is mainly about the x2 axis with the intermediate moment.  In this case
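
dω3/dt = ((I1 - I2)/I3) ω2 ω1
dω1/dt = ((I2 - I3)/I1) ω2 ω3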

And the trace and determinant are
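
τ = 0,   Δ = -[(I2 - I1)(I3 - I2)/(I1 I3)] ω2² < 0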

The determinant is now negative, and from Fig. 1, this means that the stability is a saddle point. 
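
A quick numerical check makes the three cases concrete. Here is a minimal sketch (using, purely for illustration, the same moments of inertia as in Fig. 2 and a unit spin rate) that evaluates the trace and determinant of the linearized Jacobian for spin about each axis:

import numpy as np

I1, I2, I3 = 0.3, 0.5, 0.7     # example moments of inertia (same values as Fig. 2)
w = 1.0                        # spin rate about the primary axis

# Linearized Jacobians for spin mainly about x1, x2 and x3
J1 = np.array([[0, (I3-I1)/I2*w], [(I1-I2)/I3*w, 0]])
J2 = np.array([[0, (I1-I2)/I3*w], [(I2-I3)/I1*w, 0]])
J3 = np.array([[0, (I2-I3)/I1*w], [(I3-I1)/I2*w, 0]])

for name, J in zip(('x1', 'x2', 'x3'), (J1, J2, J3)):
    print(name, ' trace =', np.trace(J), ' det =', round(np.linalg.det(J), 3))

# x1 and x3 give zero trace and positive determinant (centers);
# the intermediate axis x2 gives a negative determinant (saddle)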

Saddle points in 2D have one stable manifold and one unstable manifold.  If the initial condition is just a little off the fixed point, then the deviation will grow as the dynamical trajectory moves away from the equilibrium point along the unstable manifold.

The components of the angular frequency for each of these cases are shown in Fig. 2 for rotation mainly around x1, then x2 and then x3. A small amount of rotation about the other two axes is given as an initial condition in each case. For these calculations no approximations were made: the full Euler equations were integrated, and the motion is fully three-dimensional.

Fig. 2 Angular frequency components for motion with initial conditions of spin mainly about, respectively, the x1, x2 and x3 axes. The x2 case shows strong nonlinearity and slow unstable dynamics that periodically reverse. (I1 = 0.3, I2 = 0.5, I3 = 0.7)

Fate of the Spinning Earth

When two of the axes have very similar moments of inertia, that is, when the object becomes more symmetric, then the unstable dynamics can get very slow. An example is shown in Fig. 3 for I2 just a bit smaller than I3. The high frequency spin remains the same for long times and then quickly reverses. During the time when the spin is nearly stable, the other angular frequencies are close to zero, and the object would have only a slight wobble to it. Yet, in time, the wobble goes from bad to worse, until the whole thing flips over. It’s inevitable for almost any real-world solid…like maybe the Earth.

Fig. 3 Angular frequencies for a slightly asymmetric rigid body. The spin remains the same for long times and then flips suddenly.

The Earth is an oblate spheroid, wider at the equator because of the centrifugal force of its rotation. If it were a perfect spheroid, then the two moments orthogonal to the spin axis would be identically equal. However, the Earth has landmasses, continents, that make the moments of inertia slightly unequal. If the Earth were perfectly rigid, this would have catastrophic consequences: every few million years it would flip over, scrambling the seasons!

But that doesn’t happen. The reason is that the Earth has a liquid mantle and outer core that very slowly dissipate any wobble. The Earth, and virtually every celestial object that has any type of internal friction, always spins about its axis with the highest moment of inertia, which also means the system relaxes to its lowest kinetic energy for conserved L through the simple equation
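
T = L²/(2I)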

So we are safe!

Python Code (FlipPhone.py)

Here is a simple Python code to explore the intermediate axis theorem. (Python code on GitHub.) Change the moments of inertia and change the initial conditions. Note that this program does not solve for the actual motions–the configuration-space trajectories. The solution of the Euler equations gives the time evolution of the three components of the angular velocity. Incremental rotations could be applied through rotation matrices operating on the configuration space to yield the configuration-space trajectory of the flipping iPhone (link to the technical details here).

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thurs Oct 7 19:38:57 2021

@author: David Nolte
Introduction to Modern Dynamics, 2nd edition (Oxford University Press, 2019)

FlipPhone Example
"""
import numpy as np
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
ax.axis('on')

I1 = 0.45   # Moments of inertia can be changed here
I2 = 0.5
I3 = 0.55

def solve_flipphone(max_time=300.0):

# Flip Phone: right-hand side of the torque-free Euler equations
    def flow_deriv(x_y_z, t0):

        x, y, z = x_y_z   # x = omega1, y = omega2, z = omega3

        yp1 = ((I2-I3)/I1)*y*z
        yp2 = ((I3-I1)/I2)*z*x
        yp3 = ((I1-I2)/I3)*x*y

        return [yp1, yp2, yp3]
    
    model_title = 'Flip Phone'

    # Solve for trajectories
    x0 = [[0.01,1,0.01]]   # Initial Conditions:  Change the major rotation axis here ....
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = np.asarray([integrate.odeint(flow_deriv, x0i, t)
                      for x0i in x0])
     
    x, y, z = x_t[0,:,:].T
    lines = ax.plot(x, y, z, '-')
    plt.setp(lines, linewidth=0.5)

    ax.view_init(30, 30)
    plt.show()
    plt.title(model_title)
    plt.savefig('Flow3D')

    return t, x_t

ax.set_xlim((-1.1, 1.1))
ax.set_ylim((-1.1, 1.1))
ax.set_zlim((-1.1, 1.1))

t, x_t = solve_flipphone()

plt.figure(2)
lines = plt.plot(t,x_t[0,:,0],t,x_t[0,:,1],t,x_t[0,:,2])
plt.setp(lines, linewidth=1)


[1] D. D. Nolte, Introduction to Modern Dynamics, 2nd Edition (Oxford, 2019)

To see more on the Intermediate Axis Theorem, watch this amazing YouTube video.

And here is another description of the Intermediate Axis Theorem.


This Blog Post is a Companion to the textbook Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019) that introduces topics of classical dynamics, Lagrangians and Hamiltonians, chaos theory, complex systems, synchronization, neural networks, econophysics and Special and General Relativity.

Caustic Curves and the Optics of Rays

Snorkeling above a shallow reef on a clear sunny day transports you to an otherworldly galaxy of spectacular deep colors and light reverberating off of the rippled surface.  Playing across the underwater floor of the reef is a fabulous light show of bright filaments entwining and fluttering, creating random mesh networks of light and dark.  These same patterns appear on the bottom of swimming pools in summer and in deep fountains in parks.

Something similar happens when a bare overhead light reflects from the sides of a circular glass of water.  The pattern no longer moves, but a dazzling filament splays across the bottom of the glass with a sharp bright cusp at the center. These bright filaments of light have an age-old name — Caustics — meaning burning, as in burning with light. The study of caustics goes back to Archimedes of Syracuse and his apocryphal burning mirrors that are supposed to have torched the invading triremes of the Roman navy in 212 BC.

Fig. 1 (left) Archimedes supposedly burning the Roman navy with caustics formed by a “burning mirror”. A wall painting from the Uffizi Gallery, Stanzino delle Matematiche, in Florence, Italy. Painted in 1600 by Giulio Parigi. (right) The Mojave thermal farm uses 3000 acres of mirrors to actually do the trick.

Caustics in optics are concentrations of light rays that form bright filaments, often with cusp singularities. Mathematically, they are envelope curves that are tangent to a set of lines. Cata-caustics are caustics caused by light reflecting from curved surfaces. Dia-caustics are caustics caused by light refracting from transparent curved materials.

From Leonardo to Huygens

Even after Archimedes, burning mirrors remained an interest for a broad range of scientists, artists and engineers. Leonardo Da Vinci took an interest around 1503 – 1506 when he drew reflected caustics from a circular mirror in his many notebooks.

Fig. 2 Drawings of caustics of the circle in Leonardo Da Vinci’s notebooks circa 1503 – 1506. Digitized by the British Museum.

Almost two centuries later, Christiaan Huygens constructed the caustic of a circle in his Treatise on Light: in which are explained the causes of that which occurs in reflection, & in refraction and particularly in the strange refraction of Iceland crystal. This is the famous treatise in which he explained his principle for light propagation as wavefronts. He was able to construct the caustic geometrically, but did not arrive at a functional form. He mentions that it has a cusp like a cycloid, but without being a cycloid. He first presented this work at the Paris Academy in 1678, and news of his lecture reached as far as Italy, where a young German mathematician was traveling.

Fig. 3 Christiaan Huygens’ construction of the cusp of the caustic of the circle from his Treatise on Light (1690).

The Cata-caustics of Tschirnhaus and Bernoulli

In the decades after Newton and Leibniz invented the calculus, a small cadre of mathematicians strove to apply the new method to understand aspects of the physical world. At a time when Newton had left the calculus behind to follow more arcane pursuits, Leibniz, Jakob and Johann Bernoulli, Guillaume de l’Hôpital, Émilie du Chatelet and Walter von Tschirnhaus were pushing notation reform (mainly following Leibniz) to make the calculus easier to learn and use, as well as finding new applications, of which there were many.

Ehrenfried Walter von Tschirnhaus (1651 – 1708) was a German mathematician and physician and a lifelong friend of Leibniz, whom he met in Paris in 1675. He was one of only five mathematicians to provide a solution to Johann Bernoulli’s brachistochrone problem. One of the recurring interests of von Tschirnhaus, one that he revisited throughout his career, was in burning glasses and mirrors. A burning glass is a high-quality magnifying lens that brings the focus of the sun to a fine point to burn or anneal various items. Burning glasses were used to heat small items for manufacture or for experimentation. For instance, Priestley and Lavoisier routinely used burning glasses in their chemistry experiments. Low optical aberrations were required for the lenses to bring the light to the finest possible focus, so the study of optical focusing was an important topic both academically and practically. Tschirnhaus had his own laboratory to build and test burning mirrors, and he became aware of the cata-caustic patterns of light reflected from a circular mirror or glass surface. Given his parallel interest in the developing calculus methods, he published a paper in Acta Eruditorum in 1682 that constructed the envelope function created by the cata-caustics of a circle. However, Tschirnhaus did not produce the analytic function–that was provided by Johann Bernoulli ten years later in 1692.

Fig. 4 Excerpt from Acta Eruditorum 1682 by von Tschirnhaus.

Johann Bernoulli had a stormy career and a problematic personality–but he was brilliant even among the bountiful Bernoulli clan. Using methods of tangents, he found the analytic solution of the caustic of the circle. He did this by stating the general equation for all reflected rays and then finding when their y values are independent of changing angle … in other words using the principle of stationarity, which would later become a potent tool in the hands of Lagrange as he developed Lagrangian physics.

Fig. 5 Bernoulli’s construction of the equations of rays reflected by the unit circle.

The equation for the reflected ray, expressing y as a function of x for a given angle α in Fig. 5, is

The condition of the caustic envelope requires the change in y with respect to the angle α to vanish while treating x as a constant. This is a partial derivative, and Johann Bernoulli made an early use of this method in 1692 to ensure the stationarity of y with respect to the changing angle. The partial derivative is

This is solved to give

Plugging this back into the equation at the top yields

These last two expressions for x and y in terms of the angle α are a parametric representation of the caustic. Combining them gives the solution to the caustic of the circle

The square root provides the characteristic cusp at the center of the caustic.

Fig. 6 Caustic of a circle. Image was generated using the Python program raycaustic.py.

Python Code: raycaustic.py

There are lots of options here. Try them all … then add your own!

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 16 16:44:42 2021

raycaustic.py

@author: nolte

D. D. Nolte, Optical Interferometry for Biology and Medicine (Springer,2011)
"""

import numpy as np
from matplotlib import pyplot as plt

plt.close('all')

# model_case 1 = cosine
# model_case 2 = circle
# model_case 3 = square root
# model_case 4 = inverse power law
# model_case 5 = ellipse
# model_case 6 = secant
# model_case 7 = parabola
# model_case 8 = Cauchy

model_case = int(input('Input Model Case (1-8)'))
if model_case == 1:
    model_title = 'cosine'
    xleft = -np.pi
    xright = np.pi
    ybottom = -1
    ytop = 1.2

elif model_case == 2:
    model_title = 'circle'
    xleft = -1
    xright = 1
    ybottom = -1
    ytop = .2

elif model_case == 3:
    model_title = 'square-root'
    xleft = 0
    xright = 4
    ybottom = -2
    ytop = 2

elif model_case == 4:
    model_title = 'Inverse Power Law'
    xleft = 1e-6
    xright = 4
    ybottom = 0
    ytop = 4
    
elif model_case == 5:
    model_title = 'ellipse'
    a = 0.5
    b = 2
    xleft = -b
    xright = b
    ybottom = -a
    ytop = 0.5*b**2/a
    
elif model_case == 6:
    model_title = 'secant'
    xleft = -np.pi/2
    xright = np.pi/2
    ybottom = 0.5
    ytop = 4
    
elif model_case == 7:
    model_title = 'Parabola'
    xleft = -2
    xright = 2
    ybottom = 0
    ytop = 4

elif model_case == 8:
    model_title = 'Cauchy'
    xleft = 0
    xright = 4
    ybottom = 0
    ytop = 4
    
def feval(x):

    if model_case == 1:
        y = -np.cos(x)

    elif model_case == 2:
        y = -np.sqrt(1-x**2)

    elif model_case == 3:
        y = -np.sqrt(x)
        
    elif model_case == 4:
        y = x**(-0.75)
        
    elif model_case == 5:
        y = -a*np.sqrt(1-x**2/b**2)

    elif model_case == 6:
        y = 1.0/np.cos(x)

    elif model_case == 7:
        y = 0.5*x**2  
        
    elif model_case == 8:
        y = 1/(1 + x**2)

    return y

xx = np.arange(xleft,xright,0.01)
yy = feval(xx)

lines = plt.plot(xx,yy)
plt.xlim(xleft, xright)
plt.ylim(ybottom, ytop)

delx = 0.001
N = 75

for i in range(N+1):
    
    x = xleft + (xright-xleft)*i/N        # sample point spanning the full x range
    
    val = feval(x)                        # surface height at the sample point
    valp = feval(x+delx/2)
    valm = feval(x-delx/2)
    deriv = (valp-valm)/delx              # central-difference estimate of the surface slope
    
    phi = np.arctan(deriv)                # local tilt angle of the surface
    slope =  np.tan(np.pi/2 + 2*phi)      # slope of the reflected ray for a vertically incident ray

    if np.abs(deriv) < 1:
        xf = (ytop-val+slope*x)/slope;
        yf = ytop;
    else:
        xf = (ybottom-val+slope*x)/slope;
        yf = ybottom;
    plt.plot([x, x],[ytop, val],linewidth = 0.5)       
    plt.plot([x, xf],[val, yf],linewidth = 0.5)
    plt.gca().set_aspect('equal', adjustable='box')       
    plt.show()
    

The Dia-caustics of Swimming Pools

A caustic is understood mathematically as the envelope function of multiple rays that converge in the Fourier domain (angular deflection measured at far distances).  These are points of mathematical stationarity, in which the ray density is invariant to first order in deviations in the refracting surface.  The rays themselves are the trajectories of the Eikonal Equation as rays of light thread their way through complicated optical systems.

The basic geometry is shown in Fig. 7 for a ray incident on a nonplanar surface emerging into a less-dense medium.  From Snell’s law we have the relation for light entering a dense medium like light into water

where n is the relative index (ratio), and the small-angle approximation has been made.  The incident angle θ1 is simply related to the slope of the interface dh/dx as
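
θ1 ≈ dh/dx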

where the small-angle approximation is used again.  The angular deflection relative to the optic axis is then
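
Δθ ≈ (n - 1) dh/dx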

which is the transverse derivative of the optical path difference through the sample.

Fig. 7 The geometry of ray deflection by a random surface. Reprinted from Optical Interferometry, Ref. [1].

In two dimensions, the optical path difference can be replaced with a general potential
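
φ(x, y) = (n - 1) h(x, y)

(taking the potential to be the optical path difference itself)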

and the two orthogonal angular deflections (measured in the far field on a Fourier plane) are
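
θx = ∂φ/∂x        θy = ∂φ/∂y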

These angles describe the deflection of the rays across the sample surface. They are also the right-hand side of the Eikonal Equation, the equation governing ray trajectories through optical systems.

Caustics are lines of stationarity, meaning that the density of rays is independent of first-order changes in the refracting sample.  The condition of stationarity is defined by the Jacobian of the transformation from (x,y) to (θx, θy) with
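
det[∂(θx, θy)/∂(x, y)] = (∂²φ/∂x²)(∂²φ/∂y²) - (∂²φ/∂x ∂y)² = 0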

where the second expression is the Hessian determinant of the refractive power of the uneven surface. When this condition is satisfied, the envelope function bounding groups of collected rays is stationary to perturbations in the inhomogeneous sample.

An example of dia-caustic formation from a random surface is shown in Fig. 8, generated by the Python program caustic.py. The Jacobian density (center) outlines regions in which the ray density is independent of small changes in the surface. They are positions of the zeros of the Hessian determinant, the regions of zero curvature of the surface or potential function. These high-intensity regions spread out and are intercepted at some distance by a surface, like the bottom of a swimming pool, where the concentrated rays create bright filaments. As the wavelets on the surface of the swimming pool move, the caustic filaments on the bottom of the swimming pool dance about.

Optical caustics also occur in the gravitational lensing of distant quasars by galaxy clusters in the formation of Einstein rings and arcs seen by deep field telescopes, as described in my following blog post.

Fig. 8 Formation of dia-caustics by transmission through a transparent material of random thickness (left). The Jacobian density is shown at the center. These are regions of constant ray density. A nearby surface displays the caustics (right), as on the bottom of a swimming pool. Images were generated using the Python program caustic.py.

Python Code: caustic.py

This Python code was used to generate the caustic patterns in Fig. 8. You can change the surface roughness by changing the divisors in the last two arguments of the call to speckle2. The distance to the bottom of the swimming pool can be changed by changing the parameter d inside the double loop near the end of the program.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 16 19:50:54 2021

caustic.py

@author: nolte

D. D. Nolte, Optical Interferometry for Biology and Medicine (Springer,2011)
"""

import numpy as np
from matplotlib import pyplot as plt
from numpy import random as rnd
from scipy import signal as signal

plt.close('all')

N = 256

def gauss2(sy,sx,wy,wx):
    
    x = np.arange(-sx/2,sx/2,1)
    y = np.arange(-sy/2,sy/2,1)
    y = y[..., None]
    
    ex = np.ones(shape=(sy,1))
    x2 = np.kron(ex,x**2/(2*wx**2));
    
    ey = np.ones(shape=(1,sx));
    y2 = np.kron(y**2/(2*wy**2),ey);

    rad2 = (x2+y2);

    A = np.exp(-rad2);

    return A

def speckle2(sy,sx,wy,wx):

    Btemp = 2*np.pi*rnd.rand(sy,sx);
    B = np.exp(complex(0,1)*Btemp);

    C = gauss2(sy,sx,wy,wx);

    Atemp = signal.convolve2d(B,C,'same');

    Intens = np.mean(np.mean(np.abs(Atemp)**2));

    D = np.real(Atemp/np.sqrt(Intens));

    Dphs = np.arctan2(np.imag(D),np.real(D));

    return D, Dphs


Sp, Sphs = speckle2(N,N,N/16,N/16)

plt.figure(2)
plt.matshow(Sp,2,cmap=plt.cm.get_cmap('seismic'))  # hsv, seismic, bwr
plt.show()

fx, fy = np.gradient(Sp);

fxx,fxy = np.gradient(fx);
fyx,fyy = np.gradient(fy);

J = fxx*fyy - fxy*fyx;

D = np.abs(1/J)

plt.figure(3)
plt.matshow(D,3,cmap=plt.cm.get_cmap('gray'))  # hsv, seismic, bwr
plt.clim(0,0.5e7)
plt.show()

E = np.zeros(shape=(N,N))
# Propagate each deflected ray a distance d and histogram the arrival positions
# to build up the caustic intensity pattern on the "pool bottom"
for yloop in range(0,N-1):
    for xloop in range(0,N-1):
        
        d = N/2        # distance to the collecting surface
        
        indx = int(N/2 + (d*(fx[yloop,xloop])+(xloop-N/2)/2))
        indy = int(N/2 + (d*(fy[yloop,xloop])+(yloop-N/2)/2))
        
        if ((indx > 0) and (indx < N)) and ((indy > 0) and (indy < N)):
            E[indy,indx] = E[indy,indx] + 1

plt.figure(4)
plt.imshow(E,interpolation='bicubic',cmap=plt.cm.get_cmap('gray'))
plt.clim(0,30)
plt.xlim(N/4, 3*N/4)
plt.ylim(N/4,3*N/4)

By David D. Nolte, Feb. 28, 2021


External Link: Youtube Video


Read more from Oxford University Press: Interference (2023)

The stories of the scientists and engineers who tamed light and used it to probe the universe.

Available at Oxford U Press

Available at Amazon.

Available at Barnes & Noble


References

[1] D. D. Nolte, “Speckle and Spatial Coherence,” Chapter 3 in Optical Interferometry for Biology and Medicine (Springer, 2012), pp. 95-121.

[2] E. Hairer and G. Wanner, Analysis by its history. (Springer, 1996)

[3] C. Huygens (1690), Treatise on light : in which are explained the causes of that which occurs in reflection, & in refraction and particularly in the strange refraction of Iceland crystal. Ed. S. P. Thompson, (University of Chicago Press, 1950).

The Ups and Downs of the Compound Double Pendulum

A chief principle of chaos theory states that even simple systems can display complex dynamics.  All that is needed for chaos, roughly, is for a system to have at least three dynamical variables plus some nonlinearity. 

A classic example of chaos is the driven damped pendulum.  This is a mass at the end of a massless rod driven by a sinusoidal perturbation.  The three variables are the angle, the angular velocity and the phase of the sinusoidal drive.  The nonlinearity is provided by the cosine function in the potential energy which is anharmonic for large angles.  However, the driven damped pendulum is not an autonomous system, because the drive is an external time-dependent function.  To find an autonomous system—one that persists in complex motion without any external driving function—one needs only to add one more mass to a simple pendulum to create what is known as a compound pendulum, or a double pendulum.

Daniel Bernoulli and the Discovery of Normal Modes

After the invention of the calculus by Newton and Leibniz, the first wave of calculus practitioners (Leibniz, Jakob and Johann Bernoulli and von Tschirnhaus) focused on static problems, like the functional form of the catenary (the shape of a hanging chain), or on constrained problems, like the brachistochrone (the path of least time for a mass under gravity to move between two points) and the tautochrone (the path of equal time).

The next generation of calculus practitioners (Euler, Johann and Daniel Bernoulli, and  D’Alembert) focused on finding the equations of motion of dynamical systems.  One of the simplest of these, that yielded the earliest equations of motion as well as the first identification of coupled modes, was the double pendulum.  The double pendulum, in its simplest form, is a mass on a rigid massless rod attached to another mass on a massless rod.  For small-angle motion, this is a simple coupled oscillator.

Fig. 1 The double pendulum as seen by Daniel Bernoulli, Johann Bernoulli and D’Alembert. This two-mass system played a central role in the earliest historical development of dynamical equations of motion.

Daniel Bernoulli, the son of Johann I Bernoulli, was the first to study the double pendulum, publishing a paper on the topic in 1733 in the proceedings of the Academy in St. Petersburg just as he returned from Russia to take up a post permanently in his home town of Basel, Switzerland.  Because he was a physicist first and mathematician second, he performed experiments with masses on strings to attempt to understand the qualitative as well as quantitative behavior of the two-mass system.  He discovered that for small motions there was a symmetric behavior that had a low frequency of oscillation and an antisymmetric motion that had a higher frequency of oscillation.  Furthermore, he recognized that any general motion of the double pendulum was a combination of the fundamental symmetric and antisymmetric motions.  This work by Daniel Bernoulli represents the discovery of normal modes of coupled oscillators.  It is also the first statement of the combination of motions that he would use later (1753) to express for the first time the principle of superposition. 

Superposition is one of the guiding principles of linear physical systems.  It provides a means for the solution of differential equations.  It explains the existence of eigenmodes and their eigenfrequencies.  It is the basis of all interference phenomena, whether classical, like Young’s double-slit experiment, or quantum, like Schrödinger’s cat.  Today, superposition has taken center stage in quantum information sciences and helps define the spooky (and useful) properties of quantum entanglement.  Therefore, normal modes, composition of motion, superposition of harmonics on a musical string—these all date back to Daniel Bernoulli in the twenty years between 1733 and 1753.  (Daniel Bernoulli is also the originator of the Bernoulli principle that explains why birds and airplanes fly.)

Johann Bernoulli and the Equations of Motion

Daniel Bernoulli’s father was Johann I Bernoulli.  Daniel had been tutored by Johann, along with his friend Leonhard Euler, when Daniel was young.  But as Daniel matured as a mathematician, he and his father began to compete against each other in international mathematics competitions (which were very common in the early eighteenth century).  When Daniel beat his father in a competition sponsored by the French Academy, Johann threw Daniel out of his house and their relationship remained strained for the remainder of their lives.

Johann had a history of taking ideas from Daniel and never citing the source. For instance, when Johann published his work on equations of motion for masses on strings in 1742, he built on the work of his son Daniel from 1733 but never once mentioned it. Daniel, of course, was not happy.

In a letter dated 20 October 1742 that Daniel wrote to Euler, he said, “The collected works of my father are being printed, and I have Just learned that he has inserted, without any mention of me, the dynamical problems I first discovered and solved (such as e. g. the descent of a sphere on a moving triangle; the linked pendulum, the center of spontaneous rotation, etc.).” And on 4 September 1743, when Daniel had finally seen his father’s works in print, he said, “The new mechanical problems are mostly mine, and my father saw my solutions before he solved the problems in his way …”. [2]

Daniel clearly has the priority for the discovery of the normal modes of the linked (i.e. double or compound) pendulum, but Johann often would “improve” on Daniel’s work despite giving no credit for the initial work. As a mathematician, Johann had a more rigorous approach and could delve a little deeper into the math. For this reason, it was Johann in 1742 who came closest to writing down differential equations of motion for multi-mass systems, but he fell just short. It was D’Alembert, only one year later, who first wrote down the differential equations of motion for systems of masses and extended them to the loaded string, for which he was the first to derive the wave equation. The D’Alembertian operator is today named after him.

Double Pendulum Dynamics

The general dynamics of the double pendulum are best obtained from Lagrange’s equations of motion. However, setting up the Lagrangian takes careful thought, because the kinetic energy of the second mass depends on its absolute speed which is dependent on the motion of the first mass from which it is suspended. The velocity of the second mass is obtained through vector addition of velocities.

Fig. 2. The dynamics of the double pendulum.

The potential energy of the system is

so that the Lagrangian is

The partial derivatives are

and the time derivatives of the last two expressions are

Therefore, the equations of motion are

To get a sense of how this system behaves, we can make a small-angle approximation to linearize the equations to find the lowest-order normal modes.  In the small-angle approximation, the equations of motion become

where the determinant is

This quartic equation is quadratic in w2 and the quadratic solution is

This solution is still a little opaque, so taking the special case: R = R1 = R2 and M = M1 = M2 it becomes
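
ω² = (2 ± √2) g/R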

There are two normal modes.  The low-frequency mode is symmetric as both masses swing (mostly) together, while the higher frequency mode is antisymmetric with the two masses oscillating against each other.  These are the motions that Daniel Bernoulli discovered in 1733.

It is interesting to note that if the two rods were rigidly locked together, so that the two angles were always the same, then the lowest frequency would be √(3g/5R), which is within 2% of the above answer but is certainly not equal.  This tells us that there is a slightly different angular deflection for the second mass relative to the first.

Chaos in the Double Pendulum

The full nonlinear coupled dynamics are expressed in terms of the four variables (q1, q2, w1, w2).  The dynamical equations are

These can be put into the normal form for a four-dimensional flow as
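
(written here for the equal-mass, equal-length case M1 = M2, R1 = R2 with g/R = 1, the case implemented in DoublePendulum.py below; Δ ≡ q2 - q1)

dq1/dt = w1
dq2/dt = w2
dw1/dt = [ w2² sin Δ - 2 sin q1 + w1² sin Δ cos Δ + sin q2 cos Δ ] / [ 2 - cos² Δ ]
dw2/dt = [ w2² sin Δ cos Δ - 2 sin q1 cos Δ + 2 w1² sin Δ + 2 sin q2 ] / [ cos² Δ - 2 ]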

The numerical solution of these equations produces a complex interplay between the angle of the first mass and the angle of the second mass. Examples of trajectory projections onto configuration space are shown in Fig. 3 for E = 1. The horizontal axis is the angle of the first mass, and the vertical axis is the angle of the second mass.

Fig. 3 Trajectory projections onto configuration space. The horizontal axis is the first mass angle, and the vertical axis is the second mass angle. All of these are periodic or nearly periodic orbits except for the one on the lower left. E = 1.

The dynamics in state space are four-dimensional, which is difficult to visualize directly. Using the technique of the Poincaré first-return map, the four-dimensional trajectories can be viewed as a two-dimensional plot of the points where the trajectories pierce the Poincaré plane. Poincaré sections are shown in Fig. 4.

Fig. 4 Poincaré sections of the double pendulum in state space for increasing kinetic energy. The initial configuration is vertical in all cases. The horizontal axis is the angle of the second mass, and the vertical axis is the angular velocity of the second mass.

Python Code: DoublePendulum.py

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
DoublePendulum.py
Created on Oct 16 06:03:32 2020
"Introduction to Modern Dynamics" 2nd Edition (Oxford, 2019)
@author: nolte
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import time

plt.close('all')

E = 1.       # Try 0.8 to 1.5

def flow_deriv(x_y_z_w,tspan):
    x, y, z, w = x_y_z_w     # x = theta1, y = theta2, z = dtheta1/dt, w = dtheta2/dt

    A = w**2*np.sin(y-x);
    B = -2*np.sin(x);
    C = z**2*np.sin(y-x)*np.cos(y-x);
    D = np.sin(y)*np.cos(y-x);
    EE = 2 - (np.cos(y-x))**2;
    
    FF = w**2*np.sin(y-x)*np.cos(y-x);
    G = -2*np.sin(x)*np.cos(y-x);
    H = 2*z**2*np.sin(y-x);
    I = 2*np.sin(y);
    JJ = (np.cos(y-x))**2 - 2;

    a = z
    b = w
    c = (A+B+C+D)/EE
    d = (FF+G+H+I)/JJ
    return[a,b,c,d]

repnum = 75

np.random.seed(1)
for reploop  in range(repnum):
    
    
    px1 = 2*(np.random.random((1))-0.499)*np.sqrt(E);
    py1 = -px1 + np.sqrt(2*E - px1**2);            

    xp1 = 0   # Try 0.1
    yp1 = 0   # Try -0.2
    
    x_y_z_w0 = [xp1, yp1, px1, py1]
    
    tspan = np.linspace(1,1000,10000)
    x_t = integrate.odeint(flow_deriv, x_y_z_w0, tspan)
    siztmp = np.shape(x_t)
    siz = siztmp[0]

    if reploop % 50 == 0:
        plt.figure(2)
        lines = plt.plot(x_t[:,0],x_t[:,1])
        plt.setp(lines, linewidth=0.5)
        plt.show()
        time.sleep(0.1)
        #os.system("pause")

    y1 = np.mod(x_t[:,0]+np.pi,2*np.pi) - np.pi
    y2 = np.mod(x_t[:,1]+np.pi,2*np.pi) - np.pi
    y3 = np.mod(x_t[:,2]+np.pi,2*np.pi) - np.pi
    y4 = np.mod(x_t[:,3]+np.pi,2*np.pi) - np.pi
    
    py = np.zeros(shape=(10*repnum,))
    yvar = np.zeros(shape=(10*repnum,))
    cnt = -1
    last = y1[1]
    for loop in range(2,siz):
        if (last < 0)and(y1[loop] > 0):
            cnt = cnt+1
            del1 = -y1[loop-1]/(y1[loop] - y1[loop-1])
            py[cnt] = y4[loop-1] + del1*(y4[loop]-y4[loop-1])
            yvar[cnt] = y2[loop-1] + del1*(y2[loop]-y2[loop-1])
            last = y1[loop]
        else:
            last = y1[loop]
     
    plt.figure(3)
    lines = plt.plot(yvar,py,'o',ms=1)
    plt.show()
    
plt.savefig('DPen')

You can change the energy E (set near the top of the program) and also the initial conditions xp1 and yp1 (set inside the loop over initial conditions). The energy E is the initial kinetic energy imparted to the two masses. For a given initial condition, what happens to the periodic orbits as the energy E increases?

References

[1] Daniel Bernoulli, “Theoremata de oscillationibus corporum filo flexili connexorum et catenae verticaliter suspensae,” Academiae Scientiarum Imperialis Petropolitanae, 6, 1732/1733

[2] Truesdell B. The rational mechanics of flexible or elastic bodies, 1638-1788. (Turici: O. Fussli, 1960). (This rare and artistically produced volume, that is almost impossible to find today in any library, is one of the greatest books written about the early history of dynamics.)


This Blog Post is a Companion to the undergraduate physics textbook Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019) introducing Lagrangians and Hamiltonians, chaos theory, complex systems, synchronization, neural networks, econophysics and Special and General Relativity.

Up-side-down Physics: Dynamic Equilibrium and the Inverted Pendulum

In the study of mechanics, the physics student moves through several stages in their education.  The first stage is the Newtonian physics of trajectories and energy and momentum conservation—there are no surprises there.  The second stage takes them to Lagrangians and Hamiltonians—here there are some surprises, especially for rigid body rotations.  Yet even at this stage, most problems have analytical solutions, and most of those solutions are exact.

It is only at the third stage that physics starts to get really interesting, and when surprising results with important ramifications emerge.  This stage is nonlinear physics.  Most nonlinear problems have no exact analytical solutions, but there are regimes where analytical approximations not only are possible but provide intuitive insights.  One of the best examples of this third stage is the dynamic equilibrium of Kapitsa’s up-side-down pendulum.

Piotr Kapitsa

Piotr Kapitsa (1894 – 1984) was a physicist who received the Nobel Prize in physics in 1978 for his discovery in 1937 of superfluidity in liquid helium.  (He shared the 1978 prize with Penzias and Wilson who had discovered the cosmic microwave background.)  Superfluidity is a low-temperature hydrodynamic property of liquid helium that shares some aspects in common with superconductivity.  Kapitsa published his results in Nature in 1938 in the same issue as a paper by John Allen and Don Misener of Cambridge, but Kapitsa had submitted his paper 19 days before Allen and Misener and so got priority (and the Nobel).

During his career Kapitsa was a leading force in Russian physics, surviving Stalin’s great purge through force of character, and helping to establish the now-famous Moscow Institute of Physics and Technology.  However, surviving Stalin did not always mean surviving with freedom, and around 1950 Kapitsa was under effective house arrest because of his unwillingness to toe the party line.

In his enforced free time, to while away the hours, Kapitsa developed an ingenious analytical approach to the problem of dynamic equilibrium.  His toy example was the driven inverted pendulum. It is surprising how many great works have emerged from the time freed up by house arrest: Galileo finally had time to write his “Two New Sciences”  after his run-in with the Inquisition, and Fresnel was free to develop his theory of diffraction after he ill-advisedly joined a militia to support the Bourbon king during Napoleon’s return. (In our own time, with so many physicists in lock-down and working from home, it will be interesting to see what great theory emerges from the pandemic.)

Stability in the Inverted Driven Pendulum

The only stable static equilibrium of the simple pendulum is when the pendulum bob is at its lowest point.  However, any street busker can tell you that an equally good (and more interesting) equilibrium point is when the bob is at the top.  The caveat is that this “inverted” equilibrium of the pendulum requires active stabilization. 

If the inverted pendulum is a simple physical pendulum, like a meter stick that you balance on the tip of your finger, you know that you need to nudge the stick gently and continuously this way and that with your finger, in response to the tipping stick, to keep it upright.  It’s an easy trick, and almost everyone masters it as a child.  With the human as the drive force, this is an example of a closed-loop control system.  The tipping stick is observed visually by the human, and the finger position is adjusted to compensate for the tip.  On the other hand, one might be interested to find an “open-loop” system that does not require active feedback or even an operator.  In 1908, Andrew Stephenson suggested that induced stability could be achieved by the inverted pendulum if a drive force of sufficiently high frequency were applied [1].  But the proof of the stability remained elusive until Kapitsa followed Stephenson’s suggestion by solving the problem through a separation of time scales [2].

The Method of Separation of Time Scales

The driven inverted pendulum has the dynamical equation
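
d²q/dt² = -(w0² + a w cos wt) sin q

(written here in the form integrated by PenInverted.py below, where w0 = 1 and the code parameter F corresponds to a w)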

where w0 is the natural angular frequency of small-amplitude oscillations, a is a drive amplitude (with units of frequency) and w is the drive angular frequency that is assumed to be much larger than the natural frequency.  The essential assumption that allows the problem to be separate according to widely separated timescales is that the angular displacement has a slow contribution that changes on the time scale of the natural frequency, and a fast contribution that changes on the time scale of the much higher drive frequency.  The assumed solution then looks like
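
q(t) = qslow(t) + qfast(t)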

This is inserted into the dynamical equation to yield
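
d²qslow/dt² + d²qfast/dt² = -(w0² + a w cos wt)(sin qslow + qfast cos qslow)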

where we have used the approximation
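
sin(qslow + qfast) ≈ sin qslow + qfast cos qslow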

So far this is simple.  The next step is the key step.  It assumes that the dynamical equation should also separate into fast and slow contributions.  But the last term of the sin q expansion has a product of fast and slow components.  The key insight is that a time average can be used to average over the fast contribution.  The separation of the dynamical equation is then
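
d²qslow/dt² = -w0² sin qslow - a w ⟨qfast cos wt⟩ cos qslow

d²qfast/dt² + (w0² cos qslow) qfast = -(a w sin qslow) cos wt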

where the time average of the fast variables is only needed on the first line.  The second line is a simple driven harmonic oscillator with a natural frequency that depends on cos qslow and a driving amplitude that depends on sin qslow.  The classic solution to the second line for qfast is
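
qfast ≈ (a/w) sin qslow cos wt

(in the limit w much larger than w0)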

This solution can then be inserted into the first line to yield
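
d²qslow/dt² = -w0² sin qslow - (a²/2) sin qslow cos qslow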

This describes a pendulum under an effective potential (for high drive frequency and no damping)
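
Ueff(q) = -w0² cos q + (a²/4) sin² q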

The first term is unstable at the inverted position, but the second term is actually a restoring force.  If the second term is stronger than the first, then a dynamic equilibrium can be achieved. This occurs when the driving amplitude is larger than a threshold value
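
a > √2 w0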

The effective potential for increasing drive amplitude looks like

Fig. 1 Effective potential as a function of angle and drive amplitude a (in units of w0)

When the drive amplitude is larger than sqrt(2), a slight dip forms in the unstable potential. The dip increases with increasing drive amplitude, as does the oscillation frequency of the effective potential.

Python Program: PenInverted.py

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
PenInverted.py
Created on Friday Sept 11 06:03:32 2020
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('PenInverted.py')

F = 133.5          # 30 to 140  (133.5)
delt = 0.000       # 0.000 to 0.01
w = 20          # 20
def flow_deriv(x_y_z,tspan):
    x, y, z = x_y_z      # x = angle, y = angular velocity, z = drive phase (w*t)
    a = y
    b = -(1 + F*np.cos(z))*np.sin(x) - delt*y
    c = w
    return[a,b,c]
                
T = 2*np.pi/w

x0 = np.pi+0.3
v0 = 0.00
z0 = 0

x_y_z = [x0, v0, z0]

# Solve for the trajectories
t = np.linspace(0, 2000, 200000)
x_t = integrate.odeint(flow_deriv, x_y_z, t)
siztmp = np.shape(x_t)
siz = siztmp[0]

#y1 = np.mod(x_t[:,0]-np.pi,2*np.pi)-np.pi
y1 = x_t[:,0]
y2 = x_t[:,1]
y3 = x_t[:,2]    

plt.figure(1)
lines = plt.plot(t[0:2000],x_t[0:2000,0]/np.pi)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Angular Position')

plt.figure(2)
lines = plt.plot(t[0:1000],y2[0:1000])
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Speed')

repnum = 5000
px = np.zeros(shape=(2*repnum,))
xvar = np.zeros(shape=(2*repnum,))
cnt = -1
testwt = np.mod(t,T)-0.5*T;
last = testwt[1]
for loop in range(2,siz-1):
    if (last < 0)and(testwt[loop] > 0):
        cnt = cnt+1
        del1 = -testwt[loop-1]/(testwt[loop] - testwt[loop-1])
        px[cnt] = (y2[loop]-y2[loop-1])*del1 + y2[loop-1]
        xvar[cnt] = (y1[loop]-y1[loop-1])*del1 + y1[loop-1]
        last = testwt[loop]
    else:
        last = testwt[loop]
 
plt.figure(3)
lines = plt.plot(xvar[0:5000],px[0:5000],'ko',ms=1)
plt.show()
plt.title('First Return Map')

plt.figure(4)
lines = plt.plot(x_t[0:1000,0]/np.pi,y2[0:1000])
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Phase Space')

You can play with the parameters of this program to explore the physics of dynamic equilibrium. For instance, if the control parameter is slightly above the threshold (F = 32) at which a dip appears in the effective potential, the slow oscillation has a very low frequency, as shown in Fig. 2. The high-frequency drive can still be seen superposed on the slow oscillation of the pendulum that is oscillating just like an ordinary pendulum but up-side-down!

Fig. 2 Just above the pitchfork bifurcation the slow oscillation has a low frequency. F = 32, w = 20, w0 = 1

The oscillation frequency is a function of the drive amplitude. This is a classic signature of a nonlinear system: amplitude-frequency coupling. Well above the threshold (F = 100), the frequency of oscillation in the effective potential becomes much larger, as in Fig. 3.

Fig. 3 High above the transition. F = 100, w = 20, w0 = 1

When the drive amplitude is more than four times larger than the threshold value (F > 140), the equilibrium is destroyed, so there is an upper bound to the dynamic stabilization. This happens when the “slow” frequency becomes comparable to the drive frequency and the separation-of-time-scales approach is no longer valid.

You can also play with the damping (delt) to see what effect it has on thresholds and long-term behavior starting at delt = 0.001 and increasing it.

Other Examples of Dynamic Equilibrium

Every physics student learns that there is no stable electrostatic equilibrium. However, if charges are put into motion, then a time-averaged potential can be created that can confine a charged particle. This is the principle of the Paul Ion Trap, named after Wolfgang Paul who was awarded the Nobel Prize in Physics in 1989 for this invention.

One of the most famous examples of dynamic equilibrium is the pair of L4 and L5 Lagrange points. In the Sun-Jupiter system, these are the locations of the Trojan asteroids. These special Lagrange points are maxima (unstable equilibria) in the effective potential of a rotating coordinate system, but the Coriolis force deflects the motion around them, trapping the asteroids in a dynamically stable equilibrium.

In economics, general equilibrium theory describes how oscillating prices among multiple markets can stabilize economic performance in macroeconomics.

A recent paper in Science magazine used the principle of dynamic equilibrium to levitate a layer of liquid on which toy boats can ride right-side-up and up-side-down. For an interesting video see Upside-down boat (link).

References

[1] A. Stephenson (1908). “On induced stability”. Philosophical Magazine, Ser. 6, 15: 233–236.

[2] P. L. Kapitza (1951). “Dynamic stability of a pendulum when its point of suspension vibrates”. Soviet Phys. JETP, 21: 588–597.

Links

https://en.wikipedia.org/wiki/Kapitza%27s_pendulum

A detailed derivation of Kapitsa’s approach: https://elmer.unibas.ch/pendulum/upside.htm

The bifurcation threshold for the inverted pendulum is a pitchfork bifurcation https://elmer.unibas.ch/pendulum/bif.htm#pfbif


This Blog Post is a Companion to the undergraduate physics textbook Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford, 2019) introducing Lagrangians and Hamiltonians, chaos theory, complex systems, synchronization, neural networks, econophysics and Special and General Relativity.

Henri Poincaré and his Homoclinic Tangle

Will the next extinction-scale asteroid strike the Earth in our lifetime? 

This existential question—the question of our continued existence on this planet—is rhetorical, because there are far too many bodies in our solar system to accurately calculate all trajectories of all asteroids. 

The solar system is what is known as an N-body problem.  And even the N is not well determined.  The asteroid belt alone has over a million extinction-sized asteroids, and there are tens of millions of smaller ones that could still do major damage to life on Earth if they hit.  To have a hope of calculating even one asteroid trajectory, do we ignore planetary masses that are too small?  What is too small?  What if we only consider the Sun, the Earth and Jupiter?  This is what Euler did in 1760, and he still had to make more assumptions.

Stability of the Solar System

Once Newton published his Principia, there was a pressing need to calculate the orbit of the Moon (see my blog post on the three-body problem).  This was important for navigation, because if the daily position of the moon could be known with sufficient accuracy, then ships would have a means to determine their longitude at sea.  However, the Moon, Earth and Sun are already a three-body problem, which still ignores the effects of Mars and Jupiter on the Moon’s orbit, not to mention the problem that the Earth is not a perfect sphere.  Therefore, to have any hope of success, toy systems that were stripped of all their obfuscating detail were needed.

Euler investigated simplified versions of the three-body problem around 1760, treating a body attracted to two fixed centers of gravity moving in the plane, and he solved it using elliptic integrals. When the two fixed centers are viewed in a coordinate frame that is rotating with the Sun-Earth system, it can come close to capturing many of the important details of the system. In 1762 Euler tried another approach, called the restricted three-body problem, where he considered a massless Moon attracted to a massive Earth orbiting a massive Sun, again all in the plane. Euler could not find general solutions to this problem, but he did stumble on an interesting special case when the three bodies remain collinear throughout their motions in a rotating reference frame.

It was not the danger of asteroids that was the main topic of interest in those days, but the question whether the Earth itself is in a stable orbit and is safe from being ejected from the Solar system.  Despite steadily improving methods for calculating astronomical trajectories through the nineteenth century, this question of stability remained open.

Poincaré and the King Oscar Prize of 1889

Some years ago I wrote an article for Physics Today called “The Tangled Tale of Phase Space” that tracks the historical development of phase space. One of the chief players in that story was Henri Poincaré (1854 – 1912). Henri Poincaré was the Einstein before Einstein. He was a minor celebrity and was considered to be the greatest genius of his era. The event in his early career that helped launch him to stardom was a mathematics prize announced in 1887 to honor the birthday of King Oscar II of Sweden. The challenge problem was as simple as it was profound: Prove rigorously whether the solar system is stable.

This was the old N-body problem that had so far resisted solution, but there was a sense at that time that recent mathematical advances might make the proof possible. There was even a rumor that Dirichlet had outlined such a proof, but no trace of the outline could be found in his papers after his death in 1859.

The prize competition was announced in Acta Mathematica, edited by the Swedish mathematician Gösta Mittag-Leffler. It stated:

Given a system of arbitrarily many mass points that attract each other according to Newton’s law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly.

The timing of the prize was perfect for Poincaré who was in his early thirties and just beginning to make his mark on mathematics. He was working on the theory of dynamical systems and was developing a new viewpoint that went beyond integrating single trajectories by focusing more broadly on whole classes of solutions. The question of the stability of the solar system seemed like a good problem to use to sharpen his mathematical tools. The general problem was still too difficult, so he began with Euler’s restricted three-body problem. He made steady progress, and along the way he invented an array of new techniques for studying the general properties of dynamical systems. One of these was the Poincaré section. Another was his set of integral invariants, one of which is recognized as the conservation of volume in phase space, also known as Liouville’s theorem, although it was Ludwig Boltzmann who first derived this result (see my Physics Today article). Eventually, he believed he had proven that the restricted three-body problem was stable.

By the time Poincaré had finished his prize submission, he had invented a new field of mathematical analysis, and the judges of the prize submission recognized it. Poincaré was named the winner, and his submission was prepared for publication in the Acta. However, Mittag-Leffler was a little concerned by a technical objection that had been raised, so he forwarded the comment to Poincaré for him to look at. At first, Poincaré thought the objection could easily be overcome, but as he worked on it and delved deeper, he had a sudden attack of panic. Trajectories near a saddle point did not converge. His proof of stability was wrong!

He alerted Mittag-Leffler to stop the presses, but it was too late. The first printing had been completed and review copies had already been sent to the judges. Mittag-Leffler immediately wrote to them asking for their return while Poincaré worked nonstop to produce a corrected copy. When he had completed his reanalysis, he had discovered a divergent feature of the solution to the dynamical problem near saddle points that is recognized today as the discovery of chaos. Poincaré paid for the reprinting of his paper out of his own pocket and (almost) all of the original printing was destroyed. This embarrassing moment in the life of a great mathematician was virtually forgotten until it was brought to light by the historian Barrow-Green in 1994 [1].

Poincaré is still a popular icon in France. Here is the Poincaré cafe in Paris.
A crater on the Moon is named after Poincaré.

Chaos in the Poincaré Return Map

Despite the fact that his conclusions on the stability of the 3-body problem flipped, Poincaré’s new tools for analyzing dynamical systems earned him the prize. He did not stop at his modified prize submission but continued working on systematizing his methods, publishing New Methods in Celestial Mechanics in several volumes through the 1890’s. It was here that he fully explored what happens when a trajectory approaches a saddle point of dynamical equilibrium.

The third volume of a three-book series that grew from Poincaré’s award-winning paper

To visualize a periodic trajectory, Poincaré invented a mathematical tool called a “first-return map”, also known as a Poincaré section. It was a way of taking a higher-dimensional continuous trajectory and turning it into a simple iterated discrete map. Therefore, one did not need to solve continuous differential equations; it was enough to iterate the map. In this way, complicated periodic, or nearly periodic, behavior could be explored numerically. However, even armed with this weapon, Poincaré found that iterated maps became unstable as a trajectory that originated from a saddle point approached another equivalent saddle point. Because the dynamics are periodic, the outgoing and incoming trajectories are opposite ends of the same trajectory, repeated with 2π periodicity. Therefore, the saddle point is also called a homoclinic point, meaning that trajectories in the discrete map intersect with themselves. (If two different trajectories in the map intersect, that is called a heteroclinic point.) When Poincaré calculated the iterations around the homoclinic point, he discovered a wild and complicated pattern in which a trajectory intersected itself many times. Poincaré wrote:

[I]f one seeks to visualize the pattern formed by these two curves and their infinite number of intersections … these intersections form a kind of lattice work, a weave, a chain-link network of infinitely fine mesh; each of the two curves can never cross itself, but it must fold back on itself in a very complicated way so as to recross all the chain-links an infinite number of times .… One will be struck by the complexity of this figure, which I am not even attempting to draw. Nothing can give us a better idea of the intricacy of the three-body problem, and of all the problems of dynamics in general…

Poincaré’s first view of chaos.

This was the discovery of chaos! Today we call this “lattice work” the “homoclinic tangle”. He could not draw it with the tools of his day … but we can!

Chirikov’s Standard Map

The restricted 3-body problem is a bit more complicated than is needed to illustrate Poincaré's homoclinic tangle. A much simpler model is a discrete map called Chirikov's Map or the Standard Map. It describes the Poincaré section of a periodically kicked oscillator that rotates or oscillates in the angular direction with an angular momentum J. The map has a simple two-step form: the angular momentum is updated first, and then the angle variable is updated with the new angular momentum.

When plotted on the (θ,J) plane, the standard map produces a beautiful kaleidoscope of intertwined trajectories piercing the Poincaré plane, as shown in the figure below. The small points or dots are successive intersections of the higher-dimensional trajectory with the plane. It is possible to trace successive points by starting very close to a saddle point (on the left) and connecting successive iterates with lines. These lines merge into the black trace in the figure, which emerges along the unstable manifold of the saddle point on the left and approaches the saddle point on the right generally along the stable manifold.
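
Written out explicitly (a reconstruction consistent with the update loop in StandmapHom.py below, with kick strength K and the angle taken modulo 2π), the map is

J_{n+1} = J_n + K \sin\theta_n
\theta_{n+1} = \theta_n + J_{n+1}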

Fig. Standard map for K = 0.97 at the transition to full chaos. The dark line is the trajectory of the unstable manifold emerging from the saddle point at (π,0). Note the wild oscillations as it approaches the saddle point at (3π,0).

However, as the successive iterates approach the new saddle (which is really just the old saddle point because of periodicity) they cross the stable manifold again and again, in ever wilder swings that diverge as they approach the saddle point. This is just one trace. By calculating traces along all four stable and unstable manifolds and carrying them through to the saddle, a lattice work, or homoclinic tangle, emerges.

Two of those traces originate from the stable manifolds, so to calculate their contributions to the homoclinic tangle, one must run these traces backwards in time using the inverse Chirikov map.
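
The inverse map, reconstructed to be consistent with the stable-manifold loop in the code below (the angle is stepped back first using the old momentum, and then the momentum is stepped back using the new angle), is

\theta_n = \theta_{n+1} - J_{n+1}
J_n = J_{n+1} - K \sin\theta_n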

The four traces all intertwine at the saddle point in the figure below with a zoom in on the tangle in the next figure. This is the lattice work that Poincaré glimpsed in 1889 as he worked feverishly to correct the manuscript that won him the prize that established him as one of the preeminent mathematicians of Europe.

Fig. The homoclinic tangle caused by the folding of phase-space trajectories as stable and unstable manifolds criss-cross in the Poincaré map at the saddle point. This was the figure that Poincaré could not attempt to draw because of its complexity.
Fig. A zoom-in of the homoclinic tangle at the saddle point as the stable and unstable manifolds create a lattice of intersections. This is the fundamental origin of chaos and the sensitivity to initial conditions (SIC) that make forecasting almost impossible in chaotic systems.


The Physics of Life, the Universe and Everything:

Read more about the history of chaos theory in Galileo Unbound from Oxford University Press


Python Code: StandmapHom.py

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
StandmapHom.py
Created on Sun Aug  2  2020
"Introduction to Modern Dynamics" 2nd Edition (Oxford, 2019)
@author: nolte
"""

import numpy as np
from matplotlib import pyplot as plt
from numpy import linalg as LA

plt.close('all')

eps = 0.97

np.random.seed(2)

plt.figure(1)

for eloop in range(0,100):

    rlast = 2*np.pi*(0.5-np.random.random())
    thlast = 4*np.pi*np.random.random()
    
    rplot = np.zeros(shape=(200,))
    thetaplot = np.zeros(shape=(200,))
    for loop in range(0,200):
        rnew = rlast + eps*np.sin(thlast)
        thnew = np.mod(thlast+rnew,4*np.pi)
        
        thetaplot[loop] = np.mod(thnew-np.pi,4*np.pi)     
        rtemp = np.mod(rnew + np.pi,2*np.pi)
        rplot[loop] = rtemp - np.pi
  
        rlast = rnew
        thlast = thnew
        
    plt.plot(np.real(thetaplot),np.real(rplot),'o',ms=0.2)
    plt.xlim(xmin=np.pi,xmax=4*np.pi)
    plt.ylim(ymin=-2.5,ymax=2.5)
        
plt.savefig('StandMap')

K = eps
eps0 = 5e-7

J = [[1,1+K],[1,1]]
w, v = LA.eig(J)

My = w[0]
Vu = v[:,0]     # unstable manifold
Vs = v[:,1]     # stable manifold

# Plot the unstable manifold
Hr = np.zeros(shape=(100,150))
Ht = np.zeros(shape=(100,150))
for eloop in range(0,100):
    
    eps = eps0*eloop

    roldu1 = eps*Vu[0]
    thetoldu1 = eps*Vu[1]
    
    Nloop = np.ceil(-6*np.log(eps0)/np.log(eloop+2))
    flag = 1
    cnt = 0
    
    while flag==1 and cnt < Nloop:
        
        ru1 = roldu1 + K*np.sin(thetoldu1)
        thetau1 = thetoldu1 + ru1
        
        roldu1 = ru1
        thetoldu1 = thetau1
        
        if thetau1 > 4*np.pi:
            flag = 0
            
        Hr[eloop,cnt] = roldu1
        Ht[eloop,cnt] = thetoldu1 + 3*np.pi
        cnt = cnt+1
    
x = Ht[0:99,12] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[0:99,12]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[5:39,15] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[5:39,15]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[12:69,16] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[12:69,16]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[15:89,17] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[15:89,17]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[30:99,18] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[30:99,18]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

# Plot the stable manifold
del Hr, Ht
Hr = np.zeros(shape=(100,150))
Ht = np.zeros(shape=(100,150))
#eps0 = 0.03
for eloop in range(0,100):
    
    eps = eps0*eloop

    roldu1 = eps*Vs[0]
    thetoldu1 = eps*Vs[1]
    
    Nloop = np.ceil(-6*np.log(eps0)/np.log(eloop+2))
    flag = 1
    cnt = 0
    
    while flag==1 and cnt < Nloop:
        
        thetau1 = thetoldu1 - roldu1
        ru1 = roldu1 - K*np.sin(thetau1)

        roldu1 = ru1
        thetoldu1 = thetau1
        
        if thetau1 > 4*np.pi:
            flag = 0
            
        Hr[eloop,cnt] = roldu1
        Ht[eloop,cnt] = thetoldu1
        cnt = cnt+1
    
x = Ht[0:79,12] + np.pi
x2 = 6*np.pi - x
y = Hr[0:79,12]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[4:39,15] + np.pi
x2 = 6*np.pi - x
y = Hr[4:39,15]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[12:69,16] + np.pi
x2 =  6*np.pi - x
y = Hr[12:69,16]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[15:89,17] + np.pi
x2 =  6*np.pi - x
y = Hr[15:89,17]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[30:99,18] + np.pi
x2 =  6*np.pi - x
y = Hr[30:99,18]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

References

[1] D. D. Nolte, “The tangled tale of phase space,” Physics Today, vol. 63, no. 4, pp. 33-38, Apr (2010)

[2] M. C. Gutzwiller, “Moon-Earth-Sun: The oldest three-body problem,” Reviews of Modern Physics, vol. 70, no. 2, pp. 589-639, Apr (1998)

[3] J. Barrow-Green, “Oscar II's Prize Competition and the Error in Poincaré's Memoir on the Three Body Problem,” Archive for History of Exact Sciences, vol. 48, pp. 107-131, (1994)

[4] J. Barrow-Green, Poincaré and the Three Body Problem. London Mathematical Society, 1997.

[5] https://the-moon.us/wiki/Poincar%C3%A9

[6] H. Poincaré, New Methods of Celestial Mechanics, edited and introduced by D. L. Goroff. New York: American Institute of Physics, 1993.


Physics in the Age of Contagion: Part 4. Fifty Shades of Immunity to COVID-19

This is the fourth installment in a series of blogs on the population dynamics of COVID-19. In my first blog I looked at a bifurcation physics model that held the possibility (and hope) that with sufficient preventive action the pandemic could have died out and spared millions. That hope was in vain.

What will it be like to live with COVID-19 as a constant factor of modern life for years to come?

In my second blog I looked at a two-component population dynamics model that showed the importance of locking down and not emerging too soon. It predicted that waiting only a few extra weeks before opening could have saved tens of thousands of lives. Unfortunately, because states like Texas and Florida opened too soon and refused to mandate the wearing of masks, thousands of lives were lost.

In my third blog I looked at a network physics model that showed the importance of rapid testing and contact tracing to remove infected individuals to push the infection rate low — not only to flatten the curve, but to drive it down. While most of the developed world is succeeding in achieving this, the United States is not.

In this fourth blog, I am looking at a simple mean-field model that shows what it will be like to live with COVID-19 as a constant factor of modern life for years to come. This is what will happen if the period of immunity to the disease is short and people who recover from the disease can get it again. Then the disease will never go away and the world will need to learn to deal with it. How different that world will look from the one we had just a year ago will depend on the degree of immunity that is acquired after infection, how long a vaccine will provide protection before booster shots are needed, and how many people will get the vaccine or will refuse it.

SIRS for SARS

COVID-19 is caused by a SARS coronavirus known as SARS-CoV-2. SARS stands for Severe Acute Respiratory Syndrome. There is a simple and well-established mean-field model for an infectious disease like SARS that infects individuals, from which they recover, but after some lag period they become susceptible again. This is called the SIRS model, standing for Susceptible-Infected-Recovered-Susceptible. This model is similar to the SIS model of my first blog, but it now includes a mean lifetime for the acquired immunity, after which an individual who has recovered from the infection becomes susceptible again. The bifurcation threshold is the same for the SIRS model as for the SIS model, but with SIRS there is a constant susceptible population. The mathematical flow equations for this model are

where i is the infected fraction, r is the recovered fraction, and 1 – i – r = s is the susceptible fraction. The infection rate for an individual who has k contacts is βk. The recovery rate is μ and the mean lifetime of acquired immunity after recovery is τlife = 1/ν. This model assumes that all individuals are equivalent (no predispositions) and there is no vaccine–only natural immunity that fades in time after recovery.
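
Written out explicitly (a reconstruction consistent with the flow_deriv function in SIRS.py below, in which the composite rate β⟨k⟩ appears as the single parameter betap and s = 1 − i − r), these are

\frac{di}{dt} = -\mu\, i + \beta \langle k \rangle\, i\, s
\frac{dr}{dt} = \mu\, i - \nu\, r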

The population trajectories for this model are shown in Fig. 1. The figure on the left is a 3-simplex where every point in the triangle stands for a 3-tuple (i, r, s). Our own trajectory starts at the bottom right vertex and generates the green trajectory that spirals into the fixed point. The parameters are chosen to be roughly equivalent to what is known about the virus (but with big uncertainties in the model parameters). One of the key results is that the infection will oscillate over several years, settling into a steady state after about 4 years. Thereafter, there is a steady 3% infected population with 67% of the population susceptible and 30% recovered. The decay time for the immunity is assumed to be one year in this model. Note that the first peak in the infected numbers occurs at about 1 year, or around March 2021. There is a second smaller peak (the graph on the right is on a vertical log scale) at about 4 years, or sometime in 2024.

Fig. 1 SIRS model for COVID-19 in which immunity acquired after recovery fades in time so an individual can be infected again. If immunity fades and there is never a vaccine, a person will have an 80% chance of getting the virus at least twice in their lifetime, and COVID will become the third highest cause of death in the US after heart disease and cancer.

Although the recovered fraction is around 30% for these parameters, it is important to understand that this is a dynamic equilibrium. If there is no vaccine, then any individual who was once infected can be infected again after about a year. So if they don’t get the disease in the first year, they still have about a 4% chance to get it every following year. In 50 years, a 20-year-old today would have almost a 90% chance of having been infected at least once and an 80% chance of having gotten it at least twice. In other words, if there is never a vaccine, and if immunity fades after each recovery, then almost everyone will eventually get the disease several times in their lifetime. Furthermore, COVID will become the third most likely cause of death in the US after heart disease (first) and cancer (second). The sad part of this story is that it all could have been avoided if the government leaders of several key nations, along with their populations, had behaved responsibly.

The Asymmetry of Personal Cost under COVID

The nightly news in the US during the summer of 2020 shows endless videos of large parties, dense with people, mostly young, wearing no masks. This is actually understandable even though regrettable. It is because of the asymmetry of personal cost. Here is what that means …

On any given day, an individual who goes out and about in the US has only about a 0.01 percent chance of contracting the virus. In the entire year, there is only about a 3% chance that that individual will get the disease. And even if they get the virus, they only have a 2% chance of dying. So the actual danger per day per person is so minuscule that it is hard to understand why it is so necessary to wear a mask and socially distance. Therefore, if you go out and don’t wear a mask, almost nothing bad will happen to YOU. So why not? Why not screw the masks and just go out!
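As a back-of-the-envelope check on those numbers (my arithmetic, consistent with the figures quoted above rather than taken from them): a daily risk of p ≈ 0.0001 compounds over a year to 1 − (1 − p)^{365} ≈ 3.6%, which is the “about 3%” annual chance of infection, and multiplying by the 2% fatality rate gives a personal risk of death on the order of 0.06% per year.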

And this is why that’s such a bad idea: because if no-one wears a mask, then tens or hundreds of thousands of OTHERS will die.

This is the asymmetry of personal cost. By ignoring distancing, nothing is likely to happen to YOU, but thousands of OTHERS will die. How much of your own comfort are you willing to give up to save others? That is the existential question.

This year is the 75th anniversary of the end of WW II. During the war everyone rationed and recycled, not because they needed it for themselves, but because it was needed for the war effort. Almost no one hesitated back then. It was the right thing to do even though it cost personal comfort. There was a sense of community spirit and doing what was good for the country. Where is that spirit today? The COVID-19 pandemic is a war just as deadly as any war since WW II. There is a community need to battle it. All anyone has to do is wear a mask and behave responsibly. Is this such a high personal cost?

The Vaccine

All of this can change if a reliable vaccine can be developed. There is no guarantee that this can be done. For instance, there has never been a reliable vaccine for the common cold. A more sobering thought is that there has never been a vaccine for the closely related virus SARS-CoV-1 that broke out in 2003 in China but was less infectious. But the need is greater now, so there is reason for optimism that a vaccine can be developed that elicits the production of antibodies with a mean lifetime at least as long as for naturally-acquired immunity.

The SIRS model has the same bifurcation threshold as the SIS model that was discussed in a previous blog. If the infection rate can be made slower than the recovery rate, then the pandemic can be eliminated entirely. The threshold is

The parameter μ, the recovery rate, is intrinsic and cannot be changed. The parameter β, the infection rate per contact, can be reduced by personal hygiene and wearing masks. The parameter <k>, the average number of contacts to a susceptible person, can be significantly reduced by vaccinating a large fraction of the population.
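For reference, the threshold in these variables (the same ratio that is printed as betap/mu in the code below) can be written as

\frac{\beta \langle k \rangle}{\mu} = 1

with the infection dying out when β⟨k⟩ < μ and persisting when β⟨k⟩ > μ.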

To simulate the effect of vaccination, the average <k> per person can be reduced at the time of vaccination. This lowers the average infection rate. The results are shown in Fig. 2 for the original dynamics, a vaccination of 20% of the populace, and a vaccination of 40% of the populace. For 20% vaccination, the epidemic is still above threshold, although the long-time infection is lower. For 40% of the population vaccinated, the disease falls below threshold and would decay away and vanish.

Fig. 2 Vaccination at 52 weeks can lower the infection cases (20% vaccinated) or eliminate them entirely (40% vaccinated). The vaccinations would need booster shots every year (if the decay time of immunity is one year).

In this model, the vaccination is assumed to decay at the same rate as naturally acquired immunity (one year), so booster shots would be needed every year. Getting 40% of the population vaccinated may be achievable. Roughly that fraction get yearly flu shots in the US, so the COVID vaccine could be added to the list. But at 40% it would still be necessary for everyone to wear face masks and socially distance until the pandemic fades away. Interestingly, if the 40% got vaccinated all on the same date (across the world), then the pandemic would be gone in a few months. Unfortunately, that's unrealistic, so with a world-wide push to get 40% of the world's population vaccinated within five years, it would take that long to eliminate the disease, taking us to 2025 before we could go back to the way things were in November of 2019. But that would take a world-wide vaccination blitz the likes of which the world has never seen.

Python Code: SIRS.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
SIRS.py
Created on Fri July 17 2020
D. D. Nolte, "Introduction to Modern Dynamics: 
    Chaos, Networks, Space and Time, 2nd Edition (Oxford University Press, 2019)
@author: nolte
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def tripartite(x,y,z):

    sm = x + y + z
    xp = x/sm
    yp = y/sm
    
    f = np.sqrt(3)/2
    
    y0 = f*xp
    x0 = -0.5*xp - yp + 1;
    
    lines = plt.plot(x0,y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1],[0, 0],'k',linewidth=1)
    plt.plot([0, 0.5],[0, f],'k',linewidth=1)
    plt.plot([1, 0.5],[0, f],'k',linewidth=1)
    plt.show()
    
print(' ')
print('SIRS.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y,tspan,mu,betap,nu):
        x, y = x_y
        
        return [-mu*x + betap*x*(1-x-y),mu*x-nu*y]
    
    x0 = [del1, del2]
    
    # Solve for the trajectories
    t = np.linspace(0, int(tlim), int(250*tlim))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

    return t, x_t

 # rates per week
betap = 0.3;   # infection rate
mu = 0.2;      # recovery rate
nu = 0.02      # immunity decay rate

print('beta = ',betap)
print('mu = ',mu)
print('nu =',nu)
print('betap/mu = ',betap/mu)
          
del1 = 0.005         # initial infected
del2 = 0.005         # recovered

tlim = 600          # weeks (about 12 years)

param = (mu, betap, nu)    # flow parameters

t, y = solve_flow(param)
I = y[:,0]
R = y[:,1]
S = 1 - I - R

plt.figure(1)
lines = plt.semilogy(t,I,t,S,t,R)
plt.ylim([0.001,1])
plt.xlim([0,tlim])
plt.legend(('Infected','Susceptible','Recovered'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Weeks')
plt.ylabel('Fraction of Population')
plt.title('Population Dynamics for COVID-19')
plt.show()

plt.figure(2)
# (plt.hold was removed from matplotlib; axes retain content by default)
for xloop in range(0,10):
    del1 = xloop/10.1 + 0.001
    del2 = 0.01

    tlim = 300
    param = (mu, betap, nu)    # flow parameters
    t, y = solve_flow(param)       
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
    
    tripartite(I,R,S);

for yloop in range(1,6):
    del1 = 0.001;
    del2 = yloop/10.1
    t, y = solve_flow(param)
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
    
    tripartite(I,R,S);
    
for loop in range(2,10):
    del1 = loop/10.1
    del2 = 1 - del1 - 0.01
    t, y = solve_flow(param)
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
        
    tripartite(I,R,S);
    
plt.title('Simplex Plot of COVID-19 Pop Dynamics')
 
vac = [1, 0.8, 0.6]
for loop in vac:
               
    # Run the epidemic to the first peak
    del1 = 0.005
    del2 = 0.005
    tlim = 52
    param = (mu, betap, nu)
    t1, y1 = solve_flow(param)
    
    # Now vaccinate a fraction of the population
    st = np.size(t1)
    del1 = y1[st-1,0]
    del2 = y1[st-1,1]
    tlim = 400
    
    param = (mu, loop*betap, nu)
    t2, y2 = solve_flow(param)
    
    t2 = t2 + t1[st-1]
    
    tc = np.concatenate((t1,t2))
    yc = np.concatenate((y1,y2))
    
    I = yc[:,0]
    R = yc[:,1]
    S = 1 - I - R
    
    plt.figure(3)
    lines = plt.semilogy(tc,I,tc,S,tc,R)
    plt.ylim([0.001,1])
    plt.xlim([0,tlim])
    plt.legend(('Infected','Susceptible','Recovered'))
    plt.setp(lines, linewidth=0.5)
    plt.xlabel('Weeks')
    plt.ylabel('Fraction of Population')
    plt.title('Vaccination at 1 Year')
    plt.show()
    

Caveats and Disclaimers

No effort was made to match parameters to the actual properties of the COVID-19 pandemic. The SIRS model is extremely simplistic and can only show general trends because it homogenizes away all the important spatial heterogeneity of the disease across the cities and states of the country. If you live in a hot spot, this model says little about what you will experience locally. The decay of immunity is also a completely open question, and the decay rate is unknown. It is easy to modify the Python program to explore the effects of differing decay rates and vaccination fractions. The model can also be viewed as a single “compartment” within a larger multi-compartment model that captures local variations in parameters.

Physics in the Age of Contagion. Part 3: Testing and Tracing COVID-19

In the midst of this COVID crisis (and the often botched governmental responses to it), there have been several success stories: Taiwan, South Korea, Australia and New Zealand stand out. What are the secrets to their success? First is the willingness of the population to accept the seriousness of the pandemic and to act accordingly. Second is a rapid and coherent (and competent) governmental response. Third is biotechnology and the physics of ultra-sensitive biomolecule detection.

Antibody Testing

A virus consists of a protein package called a capsid that surrounds polymers of coding RNA. Protein molecules on the capsid are specific to the virus and are the key to testing whether a person has been exposed to the virus. These specific molecules are called antigens, and the body produces antibodies — large biomolecules — that are rapidly evolved by the immune system and released into the bloodstream to recognize and bind to the antigen. The recognition and binding is highly specific (though not perfect) to the capsid proteins of the virus, so that other types of antibodies (produced to fend off other infections) tend not to bind to it. This specificity enables antibody testing.

In principle, all one needs to do is isolate the COVID-19 antigen, bind it to a surface, and run a sample of a patient's serum (the part of the blood without the blood cells) over the same surface. If the patient has produced antibodies against COVID-19, these antibodies will attach to the antigens stuck to the surface. After washing away the rest of the serum, what remains are anti-COVID antibodies attached to the antigens bound to the surface. The next step is to determine whether these antibodies have been bound to the surface or not.

Fig. 1 Schematic of an antibody macromolecule. The total height of the molecule is about 3 nanometers. The antigen binding sites are at the ends of the upper arms.

At this stage, there are many possible alternative technologies for detecting the bound antibodies (see the section below on the physics of the BioCD for one approach). A conventional detection approach is known as ELISA (enzyme-linked immunosorbent assay). To detect the bound antibody, a secondary antibody that binds to human antibodies is added to the test well. This secondary antibody carries either a fluorescent molecular tag or an enzyme that causes the color of the well to change (much as a pregnancy test causes a strip of paper to change color). If the COVID antigen binds antibodies from the patient serum, then this second antibody will bind to the first and can be detected by fluorescence or by a simple color change.

The technical challenges associated with antibody assays relate to false positives and false negatives. A false positive happens when the serum is too sticky and some antibodies NOT against COVID tend to stick to the surface of the test well. This is called non-specific binding. The secondary antibodies bind to these non-specifically-bound antibodies and a color change reports a detection, when in fact no COVID-specific antibodies were there. This is a false positive — the patient had not been exposed, but the test says they were.

On the other hand, a false negative occurs when the patient serum is possibly too dilute and even though anti-COVID antibodies are present, they don’t bind sufficiently to the test well to be detectable. This is a false negative — the patient had been exposed, but the test says they weren’t. Despite how mature antibody assay technology is, false positives and false negatives are very hard to eliminate. It is fairly common for false rates to be in the range of 5% to 10% even for high-quality immunoassays. The finite accuracy of the tests must be considered when doing community planning for testing and tracking. But the bottom line is that even 90% accuracy on the test can do a lot to stop the spread of the infection. This is because of the geometry of social networks and how important it is to find and isolate the super spreaders.

Social Networks

The web of any complex set of communities and their interconnections isn't just random. Whether in interpersonal networks, or networks of cities and states and nations, it's like the world-wide-web where the most popular webpages get the most links. This is the same phenomenon that makes the rich richer and the poor poorer. It produces a network with a few hubs that have a large fraction of the links. A network model that captures this network topology is known as the Barabasi-Albert model for scale-free networks [1]. A scale-free network tends to have one node that has the most links, then a couple of nodes that have slightly fewer links, then several more with even fewer, and so on, until there are a very large number of nodes with just a single link each.
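
As a quick illustration of that degree structure (a minimal sketch using networkx, separate from the full NetSIRSF.py script below; the node count and random seed are arbitrary choices):

import networkx as nx
from collections import Counter

# Build a Barabasi-Albert scale-free network: 128 nodes, 1 new link per added node
G = nx.barabasi_albert_graph(128, 1, seed=2)

# Tally the node degrees: a few hubs of high degree, very many nodes of degree 1
degrees = [d for _, d in G.degree()]
hist = sorted(Counter(degrees).items())
print('highest degree =', max(degrees))
print('degree histogram (degree, count):', hist)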

When it comes to pandemics, this type of network topology is both a curse and a blessing. It is a curse, because if the popular node becomes infected it tends to infect a large fraction of the rest of the network because it is so linked in. But it is a blessing, because if that node can be identified and isolated from the rest of the network, then the chance of the pandemic sweeping across the whole network can be significantly reduced. This is where testing and contact tracing becomes so important. You have to know who is infected and who they are connected with. Only then can you isolate the central nodes of the network and make a dent in the pandemic spread.

An example of a Barabasi-Albert network with 128 nodes is shown in Fig. 2. Some nodes have many links out (and in); the number of links connecting a node is called the node degree. There are several nodes of very high degree (a degree around 25 in this case) but also very many nodes that have only a single link. It's the high-degree nodes that matter in a pandemic. If they get infected, then they infect almost the entire network. This scale-free network structure emphasizes the formation of central high-degree nodes. It tends to hold for many social networks, but it can also stand for cities across a nation. A city like New York has links all over the country (by flights), while my little town of Lafayette IN might be modeled by a single link to Indianapolis. That same scaling structure is seen across many scales, from interactions among nations to interactions among citizens in towns.

Fig. 2 A scale-free network with 128 nodes. A few nodes have high degree, but most nodes have a degree of one.

Isolating the Super Spreaders

In the network of nodes in Fig. 2, each node can be considered as a “compartment” in a multi-compartment SIR model (see my previous blog for the two-compartment SIR model of COVID-19). The infection of each node depends on the SIR dynamics of that node, plus the infections coming in from links to other infected nodes. The equations of the dynamics for each node are

where Aab is the adjacency matrix, in which self-connection is allowed (to capture the infection dynamics within a node), and the sums go over all the nodes of the network. In this model, the population of each node is set equal to the degree ka of the node. The spread of the pandemic across the network depends on the links and where the infection begins, but the overall infection is similar to the simple SIR model for a given average network degree ⟨k⟩.
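
A reconstruction of the node equations, consistent with the coupleN function in NetSIRSF.py below (with the coupling weights folded into the adjacency matrix Aab, and the a = b term supplying the within-node dynamics), is

\frac{dI_a}{dt} = -\mu I_a + \beta S_a \sum_b A_{ab} I_b
\frac{dS_a}{dt} = -\beta S_a \sum_b A_{ab} I_b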

However, if the pandemic starts, but then the highest-degree node (the super spreader) is isolated (by testing and contact tracing), then the saturation of the disease across the network can be decreased in a much greater proportion than simply given by the population of the isolated node. For instance, in the simulation in Fig. 3, a node of degree 20 is removed at 50 days. The fraction of the population that is isolated is only 10%, yet the saturation of the disease across the whole network is decreased by more than a factor of 2.

Fig. 3 Scale-free network of 128 nodes. Solid curve is infection dynamics of the full network. Dashed curve is the infection when the highest-degree node was isolated at 50 days.

In a more realistic model with many more nodes, and full testing to allow the infected nodes and their connections to be isolated, the disease can be virtually halted. This is what was achieved in Taiwan and South Korea. The question is why the United States, with its technologically powerful companies and all their capabilities, was so unprepared or unwilling to achieve the same thing.

Python Code: NetSIRSF.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
NetSIRSF.py
Created on Sat May 11 08:56:41 2019
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

# https://www.python-course.eu/networkx.php
# https://networkx.github.io/documentation/stable/tutorial.html
# https://networkx.github.io/documentation/stable/reference/functions.html

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import networkx as nx
import time
from random import random

tstart = time.time()

plt.close('all')

betap = 0.014;
mu = 0.13;

print('beta = ',betap)
print('betap/mu = ',betap/mu)


N = 128      # 50


facoef = 2
k = 1
nodecouple = nx.barabasi_albert_graph(N, k, seed=None)

indhi = 0
deg = 0
for omloop in nodecouple.node:
    degtmp = nodecouple.degree(omloop)
    if degtmp > deg:
        deg = degtmp
        indhi = omloop
print('highest degree node = ',indhi)
print('highest degree = ',deg)

plt.figure(1)
colors = [(random(), random(), random()) for _i in range(N)]   # one random color per node
nx.draw_circular(nodecouple,node_size=75, node_color=colors)
print(nx.info(nodecouple))
        
# function: omegout, yout = coupleN(G)
def coupleN(G,tlim):

    # function: yd = flow_deriv(x_y)
    def flow_deriv(x_y,t0):
        
        N = int(x_y.size/2)    # np.int is removed in recent numpy
        yd = np.zeros(shape=(2*N,))
        ind = -1
        for omloop in G.node:
            ind = ind + 1
            temp1 = -mu*x_y[ind] + betap*x_y[ind]*x_y[N+ind]
            temp2 =  -betap*x_y[ind]*x_y[N+ind]
            linksz = G.node[omloop]['numlink']
            for cloop in range(linksz):
                cindex = G.node[omloop]['link'][cloop]
                indx = G.node[cindex]['index']
                g = G.node[omloop]['coupling'][cloop]
                
                temp1 = temp1 + g*betap*x_y[indx]*x_y[N+ind]
                temp2 = temp2 - g*betap*x_y[indx]*x_y[N+ind]
            
            yd[ind] = temp1
            yd[N+ind] = temp2
                
        return yd
    # end of function flow_deriv(x_y)
    x0 = x_y
    t = np.linspace(0,tlim,tlim)      # 600  300
    y = integrate.odeint(flow_deriv, x0, t)        
    
    return t,y
    # end of function: omegout, yout = coupleN(G)

lnk = np.zeros(shape = (N,), dtype=int)
ind = -1
for loop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[loop]['index'] = ind
    nodecouple.node[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.node[loop]['numlink'] = len(list(nx.neighbors(nodecouple,loop)))
    lnk[ind] = len(list(nx.neighbors(nodecouple,loop)))

gfac = 0.1

ind = -1
for nodeloop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[nodeloop]['coupling'] = np.zeros(shape=(lnk[ind],))
    for linkloop in range (lnk[ind]):
        nodecouple.node[nodeloop]['coupling'][linkloop] = gfac*facoef
            
x_y = np.zeros(shape=(2*N,))   
for loop in nodecouple.node:
    x_y[loop]=0
    x_y[N+loop]=nodecouple.degree(loop)
    #x_y[N+loop]=1
x_y[N-1 ]= 0.01
x_y[2*N-1] = x_y[2*N-1] - 0.01
N0 = np.sum(x_y[N:2*N]) - x_y[indhi] - x_y[N+indhi]
print('N0 = ',N0)
     
tlim0 = 600
t0,yout0 = coupleN(nodecouple,tlim0)                           # Here is the subfunction call for the flow


plt.figure(2)
plt.yscale('log')
plt.gca().set_ylim(1e-3, 1)
for loop in range(N):
    lines1 = plt.plot(t0,yout0[:,loop])
    lines2 = plt.plot(t0,yout0[:,N+loop])
    lines3 = plt.plot(t0,N0-yout0[:,loop]-yout0[:,N+loop])

    plt.setp(lines1, linewidth=0.5)
    plt.setp(lines2, linewidth=0.5)
    plt.setp(lines3, linewidth=0.5)
    

Itot = np.sum(yout0[:,0:N],axis = 1) - yout0[:,indhi]       # total infected over all N nodes, minus the highest-degree node
Stot = np.sum(yout0[:,N:2*N],axis = 1) - yout0[:,N+indhi]   # total susceptible over all N nodes, minus the highest-degree node
Rtot = N0 - Itot - Stot
plt.figure(3)
#plt.plot(t0,Itot,'r',t0,Stot,'g',t0,Rtot,'b')
plt.plot(t0,Itot/N0,'r',t0,Rtot/N0,'b')
#plt.legend(('Infected','Susceptible','Removed'))
plt.legend(('Infected','Removed'))

# Repeat but innoculate highest-degree node
x_y = np.zeros(shape=(2*N,))   
for loop in nodecouple.node:
    x_y[loop]=0
    x_y[N+loop]=nodecouple.degree(loop)
    #x_y[N+loop]=1
x_y[N-1] = 0.01
x_y[2*N-1] = x_y[2*N-1] - 0.01
N0 = np.sum(x_y[N:2*N]) - x_y[indhi] - x_y[N+indhi]
     
tlim0 = 50
t0,yout0 = coupleN(nodecouple,tlim0)


# remove all edges from highest-degree node
ee = list(nodecouple.edges(indhi))
nodecouple.remove_edges_from(ee)
print(nx.info(nodecouple))

#nodecouple.remove_node(indhi)        
lnk = np.zeros(shape = (N,), dtype=int)
ind = -1
for loop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[loop]['index'] = ind
    nodecouple.node[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.node[loop]['numlink'] = len(list(nx.neighbors(nodecouple,loop)))
    lnk[ind] = len(list(nx.neighbors(nodecouple,loop)))

ind = -1
x_y = np.zeros(shape=(2*N,)) 
for nodeloop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[nodeloop]['coupling'] = np.zeros(shape=(lnk[ind],))
    x_y[ind] = yout0[tlim0-1,nodeloop]
    x_y[N+ind] = yout0[tlim0-1,N+nodeloop]
    for linkloop in range (lnk[ind]):
        nodecouple.node[nodeloop]['coupling'][linkloop] = gfac*facoef

    
tlim1 = 500
t1,yout1 = coupleN(nodecouple,tlim1)

t = np.zeros(shape=(tlim0+tlim1,))
yout = np.zeros(shape=(tlim0+tlim1,2*N))
t[0:tlim0] = t0
t[tlim0:tlim1+tlim0] = tlim0+t1
yout[0:tlim0,:] = yout0
yout[tlim0:tlim1+tlim0,:] = yout1


plt.figure(4)
plt.yscale('log')
plt.gca().set_ylim(1e-3, 1)
for loop in range(N):
    lines1 = plt.plot(t,yout[:,loop])
    lines2 = plt.plot(t,yout[:,N+loop])
    lines3 = plt.plot(t,N0-yout[:,loop]-yout[:,N+loop])

    plt.setp(lines1, linewidth=0.5)
    plt.setp(lines2, linewidth=0.5)
    plt.setp(lines3, linewidth=0.5)
    

Itot = np.sum(yout[:,0:N],axis = 1) - yout[:,indhi]       # total infected over all N nodes, minus the isolated node
Stot = np.sum(yout[:,N:2*N],axis = 1) - yout[:,N+indhi]   # total susceptible over all N nodes, minus the isolated node
Rtot = N0 - Itot - Stot
plt.figure(3)
#plt.plot(t,Itot,'r',t,Stot,'g',t,Rtot,'b',linestyle='dashed')
plt.plot(t,Itot/N0,'r',t,Rtot/N0,'b',linestyle='dashed')
#plt.legend(('Infected','Susceptible','Removed'))
plt.legend(('Infected','Removed'))
plt.xlabel('Days')
plt.ylabel('Fraction of Sub-Population')
plt.title('Network Dynamics for COVID-19')
plt.show()

elapsed_time = time.time() - tstart
print('elapsed time = ',format(elapsed_time,'.2f'),'secs')

Caveats and Disclaimers

No effort in the network model was made to fit actual disease statistics. In addition, the network in Figs. 2 and 3 only has 128 nodes, and each node was a “compartment” that had its own SIR dynamics. This is a coarse-graining approach that would need to be significantly improved to try to model an actual network of connections across communities and states. In addition, isolating the super spreader in this model would be like isolating a city rather than an individual, which is not realistic. The value of a heuristic model is to gain a physical intuition about scales and behaviors without being distracted by details of the model.

Postscript: Physics of the BioCD

Because antibody testing has become such a point of public discussion, it brings to mind a chapter of my own life that was closely related to this topic. About 20 years ago my research group invented and developed an antibody assay called the BioCD [2]. The “CD” stood for “compact disc”, and it was a spinning-disk format that used laser interferometry to perform fast and sensitive measurements of antibodies in blood. We launched a start-up company called QuadraSpec in 2004 to commercialize the technology for large-scale human disease screening.

A conventional compact disc consists of about a billion individual nulling interferometers impressed as pits into plastic. When the read-out laser beam straddles one of the billion pits, it experiences a condition of perfect destructive interference — a zero. But when it is not shining on a pit, it experiences high reflection — a one. So as the laser scans across the surface of the spinning disc, a series of high and low reflections reads off bits of information. Because the disc spins very fast, the data rate is very high, and a billion bits can be read in a matter of minutes.

The idea struck me in late 1999 just before getting on a plane to spend a weekend in New York City: What if each pit were like a test tube, so that instead of reading bits of ones and zeros it could read tiny amounts of protein? Then instead of a billion ones and zeros the disc could read a billion protein concentrations. But nulling interferometers are the least sensitive way to measure something, because they operate at a local minimum of the response curve. The most sensitive way to do interferometry is in the condition of phase quadrature, when the signal and reference waves are ninety degrees out of phase and the response curve is steepest, as in Fig. 4. Therefore, the only thing you need to turn a compact disc from reading ones and zeros to reading proteins is to reduce the height of the pit by half. In practice we used raised ridges of gold instead of pits, but it worked in the same way and was extremely sensitive to the attachment of small amounts of protein.
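
A rough way to see why halving the pit height gives quadrature (my gloss on the argument above, not a quote from the original): a conventional pit is about a quarter wavelength deep, so light reflected from the pit travels an extra half wavelength round trip and returns π out of phase with light from the surrounding land, which is the nulling condition. Halving the depth to roughly λ/8 gives a round-trip phase difference

\Delta\phi = \frac{2\pi}{\lambda}\, 2d = \frac{\pi}{2} \qquad \text{for } d = \lambda/8

which is the quadrature condition where the interferometric response is steepest.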

Fig. 4 Principle of the BioCD antibody assay. Reprinted from Ref. [3]

This first generation BioCD was literally a work of art. It was composed of a radial array of gold strips deposited on a silicon wafer. We were approached in 2004 by an art installation called “Massive Change” that was curated by the Vancouver Art Museum. The art installation travelled to Toronto and then to the Museum of Contemporary Art in Chicago, where we went to see it. Our gold-and-silicon BioCD was on display in a section on art in technology.

The next-gen BioCDs were much simpler, consisting simply of oxide layers on silicon wafers, but they were much more versatile and more sensitive. An optical scan of a printed antibody spot on a BioCD is shown in Fig. 5. The protein height is only about 1 nanometer (the diameter of the spot is 100 microns). Interferometry can measure a change in the height of the spot (caused by binding antibodies from patient serum) of only about 10 picometers averaged over the size of the spot. This exquisite sensitivity enabled us to detect tiny fractions of blood-borne antigens and antibodies at the level of only a nanogram per milliliter.

Fig. 5 Interferometric measurement of a printed antibody spot on a BioCD. The spot height is about 1 nanometer and the diameter is about 100 microns. Interferometry can measure a change of height by about 10 picometers averaged over the spot.

The real estate on a 100 mm diameter disc was sufficient to do 100,000 antibody assays, which would be 256 protein targets across 512 patients on a single BioCD that would take only a few hours to finish reading!

Fig. 6 A single BioCD has the potential to measure hundreds of proteins or antibodies per patient with hundreds of patients per disc.

The potential of the BioCD for massively multiplexed protein measurements made it possible to imagine testing a single patient for hundreds of diseases in a matter of hours using only a few drops of blood. Furthermore, by being simple and cheap, the test would allow people to track their health over time to look for emerging health trends.

If this sounds familiar to you, you're right. That's exactly what the notorious company Theranos was promising investors 10 years after we first proposed this idea. But here's the difference: We learned that the tech did not scale. It cost us $10M to develop a BioCD that could test for just 4 diseases. And it would cost more than an additional $10M to get it to 8 diseases, because the antibody chemistry is not linear. Each new disease that you try to test creates a combinatorics problem of non-specific binding with all the other antibodies and antigens. To scale the test up to 100 diseases on the single platform using only a few drops of blood would have cost us more than $1B of R&D expenses — if it were possible at all. So we stopped development at our 4-plex product and sold the technology to a veterinary testing company that uses it today to test for diseases like heartworm and Lyme disease in blood samples from pet animals.

Five years after we walked away from massively multiplexed antibody tests, Theranos proposed the same thing and took in more than $700M in US investment, but ultimately produced nothing that worked. The saga of Theranos and its charismatic CEO Elizabeth Holmes has been the topic of books and documentaries and movies like “The Inventor: Out for Blood in Silicon Valley” and a rumored big screen movie starring Jennifer Lawrence as Holmes.

The bottom line is that antibody testing is a difficult business, and ramping up rapidly to meet the demands of testing and tracing COVID-19 is going to be challenging. The key is not to demand too much accuracy per test. False positives are bad for the individual, because they let someone go about believing they are immune when they are not, and they might get sick; false negatives are also bad, because they keep someone locked in when they could be going about. But if an inexpensive test of only 90% accuracy (a level of accuracy that has already been called “unreliable” in some news reports) can be brought out in massive scale so that virtually everyone can be tested, and tested repeatedly, then the benefit to society would be great. In the scaling networks that tend to characterize human interactions, all it takes is for a few high-degree nodes to be isolated to make infection rates plummet.

References

[1] A. L. Barabasi and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, no. 5439, pp. 509-512, Oct 15 (1999)

[2] D. D. Nolte, “Review of centrifugal microfluidic and bio-optical disks,” Review Of Scientific Instruments, vol. 80, no. 10, p. 101101, Oct (2009)

[3] D. D. Nolte and F. E. Regnier, “Spinning-Disk Interferometry: The BioCD,” Optics and Photonics News, no. October 2004, pp. 48-53, (2004)

Physics in the Age of Contagion. Part 2: The Second Wave of COVID-19

Since my last Blog on the bifurcation physics of COVID-19, most of the US has approached the crest of “the wave”, with the crest arriving sooner in hot spots like New York City and a few weeks later in rural areas like Lafayette, Indiana where I live. As of the posting of this Blog, most of the US is in lock-down with only a few hold-out states. Fortunately, this was sufficient to avoid the worst case scenarios of my last Blog, but we are still facing severe challenges.

There is good news! The second wave can be managed and minimized if we don’t come out of lock-down too soon.

One fall-out of the (absolutely necessary) lock-down is the serious damage done to the economy, which is now in its greatest contraction since the Great Depression. The longer the lock-down goes, the deeper the damage and the longer it takes to recover. The single most important question at this point in time, as we approach the crest, is when we can emerge from lock-down. This is a critical question. If we emerge too early, then the pandemic will re-kindle into a second wave that could exceed the first. But if we emerge later than necessary, then the economy may take a decade to fully recover. We need a Goldilocks solution: not too early and not too late. How do we assess that?

The Value of Qualitative Physics

In my previous Blog I laid out a very simple model called the Susceptible-Infected-Removed (SIR) model and provided a Python program whose parameters can be tweaked to explore the qualitative behavior of the model, answering questions like: What is the effect of longer or shorter quarantine periods? What role does social distancing play in saving lives? What happens if only a small fraction of the population pays attention and practices social distancing?

It is necessary to wait to release the lock-down until at least several weeks after the crest has passed to avoid the second wave.

It is important to note that none of the parameters in that SIR model are reliable and no attempt was made to fit the parameters to the actual data. To expert epidemiological modelers, this simplistic model is less than useless and potentially dangerous if wrong conclusions are arrived at and disseminated on the internet.

But here is the point: The actual numbers are less important than the functional dependences. What matters is how the solution changes as a parameter is changed. The Python programs allow non-experts to gain an intuitive understanding of the qualitative physics of the pandemic. For instance, it is valuable to gain a feeling of how sensitive the pandemic is to small changes in parameters. This is especially important because of the bifurcation physics of COVID-19 where very small changes can cause very different trajectories of the population dynamics.

In the spirit of the value of qualitative physics, I am extending here that simple SIR model to a slightly more sophisticated model that can help us understand the issues and parametric dependences of this question of when to emerge from lock-down. Again, no effort is made to fit actual data of this pandemic, but there are still important qualitative conclusions to be made.

The Two-Compartment SIR Model of COVID-19

To approach a qualitative understanding of what happens by varying the length of time of the country-wide shelter-in-place, it helps to think of two cohorts of the public: those who are compliant and conscientious, valuing the lives of others, and those who don't care and are non-compliant.

Fig. 1 Two-compartment SIR model for compliant and non-compliant cohorts.

These two cohorts can each be modeled separately by their own homogeneous SIR models, but with a coupling between them because even those who shelter in place must go out for food and medicines. The equations of this two-compartment model are

where n and q refer to the non-compliant and the compliant cohorts, respectively. I and S are the infected and susceptible fractions. The coupling parameters are knn for the coupling among non-compliant individuals, knq for the effect of the compliant individuals on the non-compliant, kqn for the effect of the non-compliant individuals on the compliant, and kqq for the effect of the compliant cohort on themselves.
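
Written out explicitly (a reconstruction consistent with the flow_deriv function in SIRWave.py below), the four coupled equations are

\frac{dI_n}{dt} = -\mu I_n + \beta k_{nn} I_n S_n + \beta k_{nq} I_q S_n
\frac{dS_n}{dt} = -\beta k_{nn} I_n S_n - \beta k_{nq} I_q S_n
\frac{dI_q}{dt} = -\mu I_q + \beta k_{qn} I_n S_q + \beta k_{qq} I_q S_q
\frac{dS_q}{dt} = -\beta k_{qn} I_n S_q - \beta k_{qq} I_q S_q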

There are two time frames for the model. The first time frame is the time of lock-down when the compliant cohort is sheltering in place and practicing good hygiene, but they still need to go out for food and medicines. (This model does not include the first responders. They are an important cohort, but do not make up a large fraction of the national population.) The second time frame is after the lock-down is removed. Even then, good practices by the compliant group are expected to continue with the purpose of lowering infections among themselves and others.

This two-compartment model has roughly 8 adjustable parameters, all of which can be varied to study their effects on the predictions. None of them are well known, but general trends still can be explored.

Python Code: SIRWave.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
SIRWave.py
Created on Sat March 21 2020
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('SIRWave.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y_z_w,tspan):
        In, Sn, Iq, Sq = x_y_z_w
        
        Inp = -mu*In + beta*knn*In*Sn + beta*knq*Iq*Sn
        Snp = -beta*knn*In*Sn - beta*knq*Iq*Sn
        
        Iqp = -mu*Iq + beta*kqn*In*Sq + beta*kqq*Iq*Sq
        Sqp = -beta*kqn*In*Sq - beta*kqq*Iq*Sq
        
        return [Inp, Snp, Iqp, Sqp]
    
    x0 = [In0, Sn0, Iq0, Sq0]
    
    # Solve for the trajectories
    t = np.linspace(tlo, thi, thi-tlo)
    x_t = integrate.odeint(flow_deriv, x0, t)

    return t, x_t

beta = 0.02   # infection rate
dill = 5      # mean days infectious
mu = 1/dill   # decay rate
fnq = 0.3     # fraction not quarantining
fq = 1-fnq    # fraction quarantining
P = 330       # Population of the US in millions
mr = 0.002    # Mortality rate
dq = 90       # Days of lock-down (this is the key parameter)

# During quarantine
knn = 50      # Average connections per day for non-compliant group among themselves
kqq = 0       # Connections among compliant group
knq = 0       # Effect of compliant group on non-compliant
kqn = 5       # Effect of non-compliant group on compliant

initfrac = 0.0001          # Initial conditions:
In0 = initfrac*fnq         # infected non-compliant
Sn0 = (1-initfrac)*fnq     # susceptible non-compliant
Iq0 = initfrac*fq          # infected compliant
Sq0 = (1-initfrac)*fq      # susceptible compliant

tlo = 0
thi = dq

param = (mu, beta, knn, knq, kqn, kqq)    # flow parameters

t1, y1 = solve_flow(param)

In1 = y1[:,0]
Sn1 = y1[:,1]
Rn1 = fnq - In1 - Sn1
Iq1 = y1[:,2]
Sq1 = y1[:,3]
Rq1 = fq - Iq1 - Sq1

# Lift the quarantine: Compliant group continues social distancing
knn = 50      # Adjusted coupling parameters
kqq = 5
knq = 20
kqn = 15

fin1 = len(t1)
In0 = In1[fin1-1]
Sn0 = Sn1[fin1-1]
Iq0 = Iq1[fin1-1]
Sq0 = Sq1[fin1-1]

tlo = fin1
thi = fin1 + 365-dq

param = (mu, beta, knn, knq, kqn, kqq)

t2, y2 = solve_flow(param)

In2 = y2[:,0]
Sn2 = y2[:,1]
Rn2 = fnq - In2 - Sn2
Iq2 = y2[:,2]
Sq2 = y2[:,3]
Rq2 = fq - Iq2 - Sq2

fin2 = len(t2)
t = np.zeros(shape=(fin1+fin2,))
In = np.zeros(shape=(fin1+fin2,))
Sn = np.zeros(shape=(fin1+fin2,))
Rn = np.zeros(shape=(fin1+fin2,))
Iq = np.zeros(shape=(fin1+fin2,))
Sq = np.zeros(shape=(fin1+fin2,))
Rq = np.zeros(shape=(fin1+fin2,))

t[0:fin1] = t1
In[0:fin1] = In1
Sn[0:fin1] = Sn1
Rn[0:fin1] = Rn1
Iq[0:fin1] = Iq1
Sq[0:fin1] = Sq1
Rq[0:fin1] = Rq1

t[fin1:fin1+fin2] = t2
In[fin1:fin1+fin2] = In2
Sn[fin1:fin1+fin2] = Sn2
Rn[fin1:fin1+fin2] = Rn2
Iq[fin1:fin1+fin2] = Iq2
Sq[fin1:fin1+fin2] = Sq2
Rq[fin1:fin1+fin2] = Rq2

plt.figure(1)
lines = plt.semilogy(t,In,t,Iq,t,(In+Iq))
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant','Total'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Infected')
plt.title('Infection Dynamics for COVID-19 in US')
plt.show()

plt.figure(2)
lines = plt.semilogy(t,Rn*P*mr,t,Rq*P*mr)
plt.ylim([0.001,1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Deaths')
plt.title('Total Deaths for COVID-19 in US')
plt.show()

D = P*mr*(Rn[fin1+fin2-1] + Rq[fin1+fin2-1])
print('Deaths = ',D)

plt.figure(3)
lines = plt.semilogy(t,In/fnq,t,Iq/fq)
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Fraction of Sub-Population')
plt.title('Population Dynamics for COVID-19 in US')
plt.show()

Trends

The obvious trend to explore is the effect of changing the quarantine period. Fig. 2 shows the results of an early release from shelter-in-place compared to pushing the release date one month later. The trends are:

  • If the lock-down is released early, the second wave can be larger than the first wave
  • If the lock-down is released early, the compliant cohort will be mostly susceptible and will have the majority of new cases
  • There are 40% more deaths when the lock-down is released early

If the lock-down is ended just after the crest, this is too early. It is necessary to wait at least several weeks after the crest has passed to avoid the second wave. There are almost 40% more deaths for the 90-day period than the 120-day period. In addition, when the quarantine is stopped too early, the compliant cohort, since they are the larger fraction and are mostly susceptible, will suffer a greater number of new infections than the non-compliant group who put them at risk in the first place. In addition, the second wave for the compliant group would be worse than the first wave. This would be a travesty! But by pushing the quarantine out by just 1 additional month, the compliant group will suffer fewer total deaths than the non-compliant group. Most importantly, the second wave would be substantially smaller than the first wave for both cohorts.

Fig. 2 Comparison of a 90-day quarantine versus a 120-day quarantine for the compliant and non-compliant cohorts of individuals. When the ban is lifted too soon, the second wave can be bigger than the first. This model assumes that 30% of the population are non-compliant and that the compliant group continues to practice social distancing.

The lesson from this simple model is simple: push the quarantine date out as far as the economy can allow! There is good news! The second wave can be managed and minimized if we don’t come out of lock-down too soon.

Caveats and Disclaimers

This model is purely qualitative and only has value for studying trends that depend on changing parameters. Absolute numbers are not meant to be taken too seriously. For instance, the total number of deaths in this model is about 2x larger than what we are hearing from Dr. Fauci of NIAID at this time, so this simple model overestimates fatalities. Also, it doesn't matter whether the number of quarantine days should be 60, 90 or 120 … what matters is that an additional month makes a large difference in the total number of deaths. If someone does want to model the best possible number of quarantine days — the Goldilocks solution — then they need to get their hands on a professional epidemiological model (or an actual epidemiologist). The model presented here is not appropriate for that purpose.

Note added in postscript on April 8: Since posting the original blog on April 6, Dr. Fauci announced that as many as 90% of individuals are practicing some form of social distancing. In addition, many infections are not being reported because of lack of testing, which means that the mortality rate is lower than thought. Therefore, I have changed the mortality rate and figures with numbers that better reflect the current situation (that is changing daily), but still without any attempt to fit the numerous other parameters.

Physics in the Age of Contagion: The Bifurcation of COVID-19

We are at War! That may sound like a cliche, but more people in the United States may die over the next year from COVID-19 than US soldiers have died in all the wars ever fought in US history. It is a war against an invasion by an alien species that has no remorse and gives no quarter. In this war, one of our gravest enemies, beyond the virus, is misinformation. The Internet floods our attention with half-baked half-truths. There may even be foreign powers that see this time of crisis as an opportunity to sow fear through disinformation to divide the country.

Because of the bifurcation physics of the SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

At such times, physicists may be tapped to help the war effort. This is because physicists have unique skill sets that help us see through the distractions of details to get to the essence of the problem. Our solutions are often back-of-the-envelope, but that is their strength. We can see zeroth-order results stripped bare of all the obfuscating minutia.

One way physicists can help in this war is to shed light on how infections percolate through a population and to provide estimates on the numbers involved. Perhaps most importantly, we can highlight what actions ordinary citizens can take that best guard against the worst-case scenarios of the pandemic. The zeroth-order solutions may not say anything new that the experts don’t already know, but they may help spread the word of why such simple actions as shelter-in-place may save millions of lives.

The SIR Model of Infection

One of the simplest models for infection is the so-called SIR model that stands for Susceptible-Infected-Removed. This model is an averaged model (or a mean-field model) that disregards the fundamental network structure of human interactions and considers only averages. The dynamical flow equations are very simple
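In the standard SIR form that the Python code below integrates, these are

$$\frac{dI}{dt} = \beta \langle k \rangle\, S I \;-\; \mu I, \qquad\qquad \frac{dS}{dt} = -\beta \langle k \rangle\, S I$$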

where I is the infected fraction of the population, and S is the susceptible fraction of the population. The coefficient μ is the rate at which patients recover or die, <k> is the average number of “links” to others, and β is the infection probability per link per day. The total population fraction is given by the constraint
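which, written out, is simply

$$S + I + R = 1$$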

where R is the removed population, most of whom will be recovered, but some fraction will have passed away. The number of deaths is
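Expressed as a fraction of the total population (the code below multiplies by the US population to get absolute numbers), this is

$$D = m\, R_{\infty}$$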

where m is the mortality rate, and Rinf is the long-term removed fraction of the population after the infection has run its course.

The nullclines, the curves along which the time derivatives vanish, are
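For the two flow equations above, these are (one consistent ordering: the two axes plus the vertical line at the threshold susceptibility)

$$I = 0, \qquad S = 0, \qquad S = \frac{\mu}{\beta \langle k \rangle}$$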

Where the first nullcline intersects the third nullcline is the only fixed point of this simple model
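This fixed point sits on the I = 0 axis at the threshold susceptibility,

$$\left(S^{*},\; I^{*}\right) = \left(\frac{\mu}{\beta \langle k \rangle},\; 0\right)$$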

The phase space of the SIR flow is shown in Fig. 1 plotted as the infected fraction as a function of the susceptible fraction. The diagonal is the set of initial conditions where R = 0. Each initial condition on the diagonal produces a dynamical trajectory. The dashed trajectory that starts at (1,0) is the trajectory for a new disease infecting a fully susceptible population. The trajectories terminate on the I = 0 axis at long times when the infection dies out. In this model, there is always a fraction of the population who never get the disease, not through unusual immunity, but through sheer luck.

Fig. 1 Phase space of the SIR model. The single fixed point has “marginal” stability, but leads to a finite fraction of the population who are never infected. The dashed trajectory is the trajectory of the infection starting with a single case. (Adapted from “Introduction to Modern Dynamics” (Oxford University Press, 2019))

The key to understanding the scale of the pandemic is the susceptible fraction at the fixed point S*. For the parameters chosen to plot Fig. 1, the value of S* is 1/4, or β<k> = 4μ. It is the high value of the infection rate β<k> relative to the decay rate of the infection μ that allows a large fraction of the population to become infected. As the infection rate gets smaller, the fixed point S* moves towards unity on the horizontal axis, and less of the population is infected.

As soon as S* exceeds unity, for the condition
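(writing S* in terms of the rates, as above)

$$S^{*} = \frac{\mu}{\beta \langle k \rangle} > 1 \qquad\Longleftrightarrow\qquad \beta \langle k \rangle < \mu$$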

then the infection cannot grow exponentially and will decay away without infecting an appreciable fraction of the population. This condition represents a bifurcation in the infection dynamics. It means that if the infection rate can be reduced below the recovery rate, then the pandemic fades away. (It is important to point out that the R0 of a network model (the number of people each infected person infects) is analogous to the inverse of S*. When R0 > 1 then the infection spreads, just as when S* < 1, and vice versa.)

This bifurcation condition makes the strategy for fighting the pandemic clear. The parameter μ is fixed by the virus and cannot be altered. But the infection probability per day per social link, β, can be reduced by clean hygiene:

  • Don’t shake hands
  • Wash your hands often and thoroughly
  • Don’t touch your face
  • Cover your cough or sneeze in your elbow
  • Wear disposable gloves
  • Wipe down touched surfaces with disinfectants

And the number of contacts per person, <k>, can be reduced by social distancing:

  • No large gatherings
  • Stand away from others
  • Shelter-in-place
  • Self quarantine

The big question is: can the infection rate be reduced below the recovery rate through the actions of clean hygiene and social distancing? If there is a chance that it can, then literally millions of lives can be saved. So let’s take a look at COVID-19.

The COVID-19 Pandemic

To get a handle on modeling the COVID-19 pandemic using the (very simplistic) SIR model, one key parameter is the average number of people you are connected to, represented by <k>. These are not necessarily the people in your social network, but also includes people who may touch a surface you touched earlier, or who touched a surface you later touch yourself. It also includes anyone in your proximity who has coughed or sneezed in the past few minutes. The number of people in your network is a topic of keen current interest, but is surprisingly hard to pin down. For the sake of this model, I will take the number <k> = 50 as a nominal number. This is probably too small, but it is compensated by the probability of infection given by a factor r and by the number of days that an individual is infectious.

The spread is helped when infectious people go about their normal lives infecting others. But if a fraction of the population self quarantines, especially after they “may” have been exposed, then the effective number of infectious days dinf per person can be decreased. A rough equation that captures this is
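(the same expression used for dinf in the code below)

$$d_{\mathrm{inf}} = f_{nq}\, d_{\mathrm{ill}} + \left(1 - f_{nq}\right) d_{\mathrm{ill}}\, e^{-d_{q}/d_{\mathrm{ill}}}$$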

where fnq is the fraction of the population that does NOT self quarantine, dill is the mean number of days a person is ill (and infectious), and dq is the number of days quarantined. This number of infectious days goes into the parameter β.
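One consistent reading, matching betap = r*k*dinf in the code below, is

$$\beta = r\, d_{\mathrm{inf}}, \qquad\text{so that}\qquad \beta \langle k \rangle = r\, \langle k \rangle\, d_{\mathrm{inf}}$$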

where r = 0.0002 infections per link per day², which is a very rough estimate of the coefficient for COVID-19.

It is clear why shelter-in-place can be so effective, especially if the number of days quarantined is equal to the number of days a person is ill. The infection could literally die out if enough people self quarantine by pushing the critical value S* above the bifurcation threshold. However, it is much more likely that large fractions of people will continue to move about. A simulation of the “wave” that passes through the US is shown in Fig. 2 (see the Python code in the section below for parameters). In this example, 60% of the population does NOT self quarantine. The wave peaks approximately 150 days after the establishment of community spread.

Fig. 2 Population dynamics for the US spread of COVID-19. The fraction that is infected represents a “wave” that passes through a community. In this simulation fnq = 60%. The total US dead after the wave has passed is roughly 2 Million in this simulation.

In addition to shelter-in-place, social distancing can have a strong effect on the disease spread. Fig. 3 shows the number of US deaths as a function of the fraction of the population who do NOT self-quarantine for a series of average connections <k>. The bifurcation effect is clear in this graph. For instance, with the nominal value <k> = 50, if 85% of the population were to shelter-in-place for 14 days, the disease would fall below threshold and only a small number of deaths would occur. But if that connection number can be dropped even to <k> = 40, then only 60% would need to shelter-in-place to avoid the pandemic. By contrast, if 80% of the people don’t self-quarantine, and if <k> = 40, then there could be 2 Million deaths in the US by the time the disease has run its course.
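A quick back-of-the-envelope check of these threshold fractions can be made directly from the bifurcation condition β<k> = μ, using the same rough parameters as the SIR.py code below. This is only a sketch under those assumptions, not a fit to data.

import numpy as np

# Rough parameters taken from SIR.py below (illustrative only)
r = 0.0002      # infections per link per day^2
dill = 14       # days ill (and infectious)
dpq = 14        # days sheltering in place
mu = 1/dill     # removal (recovery or death) rate per day

e = np.exp(-dpq/dill)
for k in (20, 30, 40, 50, 60):
    # Threshold: r*k*dinf = mu, with dinf = fnq*dill + (1-fnq)*exp(-dpq/dill)*dill
    fnq_crit = (mu/(r*k*dill) - e)/(1 - e)
    fnq_crit = min(fnq_crit, 1.0)   # above 1 means contained even with no one quarantining
    print('<k> =', k, ': critical non-quarantining fraction =', round(fnq_crit, 2))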

Because of the bifurcation physics of this SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

Fig. 3 Bifurcation plot of the number of US deaths as a function of the fraction of the population who do NOT shelter-in-place for different average links per person. At 20 links per person, the contagion could be contained. However, at 60 links per person, nearly 90% of the population would need to quarantine for at least 14 days to stop the spread.

There has been a lot said about “flattening the curve”, which is shown in Fig. 4. There are two ways that flattening the curve saves overall lives: 1) it keeps the numbers below the threshold capacity of hospitals; and 2) it decreases the total number infected and hence decreases the total dead. When the number of critical patients exceeds hospital capacity, the mortality rate increases. This is being seen in Italy where the hospitals have been overwhelmed and the mortality rate has risen from a baseline of 1% or 2% to as large as 8%. Flattening the curve is achieved by sheltering in place, personal hygiene and other forms of social distancing. The figure shows a family of curves for different fractions of the total population who shelter in place for 14 days. If more than 70% of the population shelters in place for 14 days, then the curve not only flattens … it disappears!

Fig. 4 Flattening the curve for a range of fractions of the population that shelters in place for 14 days. (See Python code for parameters.)

Python Code: SIR.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
SIR.py
Created on Sat March 21 2020
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('SIR.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y,tspan,mu,betap):
        x, y = x_y      # x = infected fraction I, y = susceptible fraction S
        
        return [-mu*x + betap*x*y,-betap*x*y]
    
    x0 = [del1, del2]
    
    # Solve for the trajectories
    t = np.linspace(0, int(tlim), int(250*tlim))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

   
    return t, x_t


r = 0.0002    # 0.0002
k = 50        # connections  50
dill = 14     # days ill 14
dpq = 14      # days shelter in place 14
fnq = 0.6     # fraction NOT sheltering in place
mr0 = 0.01    # mortality rate
mr1 = 0.03     # extra mortality rate if exceeding hospital capacity
P = 330       # population of US in Millions
HC = 0.003    # hospital capacity

dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill;

betap = r*k*dinf;
mu = 1/dill;

print('beta = ',betap)
print('dinf = ',dinf)
print('beta/mu = ',betap/mu)
          
del1 = .001         # infected
del2 = 1-del1       # susceptible

tlim = np.log(P*1e6/del1)/betap + 50/betap

param = (mu, betap)    # flow parameters

t, y = solve_flow(param)
I = y[:,0]
S = y[:,1]
R = 1 - I - S

plt.figure(1)
lines = plt.semilogy(t,I,t,S,t,R)
plt.ylim([0.001,1])
plt.xlim([0,tlim])
plt.legend(('Infected','Susceptible','Removed'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Fraction of Population')
plt.title('Population Dynamics for COVID-19 in US')
plt.show()

mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)   # extra mortality only when critical cases exceed hospital capacity
Dead = mr*P*R[R.size-1]
print('US Dead = ',Dead)

D = np.zeros(shape=(100,))
x = np.zeros(shape=(100,))
for kloop in range(0,5):
    for floop in range(0,100):
        
        fnq = floop/100
        
        dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill;
        
        k = 20 + kloop*10
        betap = r*k*dinf
        
        tlim = np.log(P*1e6/del1)/betap + 50/betap

        param = (mu, betap)    # flow parameters

        t, y = solve_flow(param)       
        I = y[:,0]
        S = y[:,1]
        R = 1 - I - S
        
        mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)

        D[floop] = mr*P*R[R.size-1]
        x[floop] = fnq
        
    plt.figure(2)
    lines2 = plt.plot(x,D)
    plt.setp(lines2, linewidth=0.5)

plt.ylabel('US Million Deaths')
plt.xlabel('Fraction NOT Quarantining')
plt.title('Quarantine and Distancing')        
plt.legend(('20','30','40','50','60'))
plt.show()    


label = np.zeros(shape=(9,))
for floop in range(0,8):
    
    fq = floop/10.0
    
    dinf = (1-fq)*dill + fq*np.exp(-dpq/dill)*dill;
    
    k = 50
    betap = r*k*dinf
    
    tlim = np.log(P*1e6/del1)/betap + 50/betap

    param = (mu, betap)    # flow parameters

    t, y = solve_flow(param)       
    I = y[:,0]
    S = y[:,1]
    R = 1 - I - S
    
    plt.figure(3)
    lines2 = plt.plot(t,I*P)
    plt.setp(lines2, linewidth=0.5)
    label[floop]=fq

plt.legend(label)
plt.ylabel('US Millions Infected')
plt.xlabel('Days')
plt.title('Flattening the Curve')       

You can run this Python code yourself and explore the effects of changing the parameters. For instance, the mortality rate is modeled to increase when the number of critical patients exceeds the number of hospital beds. This coefficient is not well known and hence can be explored numerically. Also, the infection rate r is not well known, nor is the average number of connections per person. The effect of longer quarantines can also be tested relative to the fraction who do not quarantine at all. Because of the bifurcation physics of the disease model, large changes in dynamics can occur for small changes in parameters when the dynamics are near the bifurcation threshold.

Caveats and Disclaimers

This SIR model of COVID-19 is an extremely rough tool that should not be taken too literally. It can be used to explore ideas about the general effect of days quarantined, or changes in the number of social contacts, but should not be confused with the professional models used by epidemiologists. In particular, this mean-field SIR model completely ignores the discrete network character of person-to-person spread. It also homogenizes the entire country, where it is blatantly obvious that the dynamics inside New York City are very different from the dynamics in rural Indiana. And the elimination of the epidemic, so that it would not come back, would require strict compliance for people to be tested (assuming there are enough test kits) and infected individuals to be isolated after the wave has passed.

The Physics of Modern Dynamics (with Python Programs)

It is surprising how much of modern dynamics boils down to an extremely simple formula
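In vector form it reads

$$\dot{\vec{x}} = \vec{f}\,(\vec{x}\,)$$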

This innocuous-looking equation carries such riddles, such surprises, such unintuitive behavior that it can become the object of study for life.  This equation is called a vector flow equation, and it can be used to capture the essential physics of economies, neurons, ecosystems, networks, and even orbits of photons around black holes.  This equation is to modern dynamics what F = ma was to classical mechanics.  It is the starting point for understanding complex systems.

The Magic of Phase Space

The apparent simplicity of the “flow equation” masks the complexity it contains.  It is a vector equation because each “dimension” is a variable of a complex system.  Many systems of interest may have only a few variables, but ecosystems and economies and social networks may have hundreds or thousands of variables.  Expressed in component format, the flow equation is
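(one equation for each variable)

$$\frac{dx^{a}}{dt} = f^{a}\!\left(x^{1}, x^{2}, \ldots, x^{N}\right), \qquad a = 1, \ldots, N$$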

where the superscript spans the number of variables.  But even this masks all that can happen with such an equation. Each of the functions fa can be entirely different from each other, and can be any type of function, whether polynomial, rational, algebraic, transcendental or composite, although they must be single-valued.  They are generally nonlinear, and the limitless ways that functions can be nonlinear is where the richness of the flow equation comes from.

The vector flow equation is an ordinary differential equation (ODE) that can be solved for specific trajectories as initial value problems.  A single set of initial conditions defines a unique trajectory.  For instance, the trajectory for a 4-dimensional example is described as the column vector
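(sketched here for the four-dimensional case)

$$\vec{x}(t) = \begin{bmatrix} x^{1}(t) \\ x^{2}(t) \\ x^{3}(t) \\ x^{4}(t) \end{bmatrix}$$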

which is the single-parameter position vector to a point in phase space, also called state space.  The point sweeps through successive configurations as a function of its single parameter—time.  This trajectory is also called an orbit.  In classical mechanics, the focus has tended to be on the behavior of specific orbits that arise from a specific set of initial conditions.  This is the classic “rock thrown from a cliff” problem of introductory physics courses.  However, in modern dynamics, the focus shifts away from individual trajectories to encompass the set of all possible trajectories.

Why is Modern Dynamics part of Physics?

If finding the solutions to the “x-dot equals f” vector flow equation is all there is to do, then this would just be a math problem—the solution of ODE’s.  There are plenty of gems for mathematicians to look for, and there is an entire field of study in mathematics called “dynamical systems”, but this would not be “physics”.  Physics as a profession is separate and distinct from mathematics, although the two are sometimes confused.  Physics uses mathematics as its language and as its toolbox, but physics is not mathematics.  Physics is done best when it is done qualitatively—this means with scribbles done on napkins in restaurants or on the back of envelopes while waiting in line. Physics is about recognizing relationships and patterns. Physics is about identifying the limits to scaling properties where the physics changes when scales change. Physics is about the mapping of the simplest possible mathematics onto behavior in the physical world, and recognizing when the simplest possible mathematics is a universal that applies broadly to diverse systems that seem different, but that share the same underlying principles.

So, granted that solving ODE’s is not physics, there is still a tremendous amount of good physics that can be done by solving ODE’s. ODE solvers become the modern physicist’s experimental workbench, providing data output from numerical experiments that can test the dependence on parameters in ways that real-world experiments might not be able to access. Physical intuition can be built based on such simulations as the engaged physicist begins to “understand” how the system behaves, able to explain what will happen as the values of parameters are changed.

In the following sections, three examples of modern dynamics are introduced with a preliminary study, including Python code. These examples are galactic dynamics, synchronized networks, and ecosystems. Despite their very different natures, their descriptions in terms of dynamical flows share common features and illustrate the beauty and depth of behavior that can be explored with simple equations.

Galactic Dynamics

One example of the power and beauty of the vector flow equation and its set of all solutions in phase space is called the Henon-Heiles model of the motion of a star within a galaxy.  Of course, this is a terribly complicated problem that involves tens of billions of stars, but if you average over the gravitational potential of all the other stars, and throw in a couple of conservation laws, the resulting potential can look surprisingly simple.  The motion in the plane of this galactic potential takes two configuration coordinates (x, y) with two associated momenta (px, py) for a total of four dimensions.  The flow equations in four-dimensional phase space are simply
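Written out (the same right-hand sides as the Heiles case of flow_deriv in the code below), they are

$$\dot{x} = p_x, \qquad \dot{y} = p_y, \qquad \dot{p}_x = -x - 2\epsilon\, x y, \qquad \dot{p}_y = -y - \epsilon\left(x^{2} - y^{2}\right)$$

with the −x and −y terms supplying the two-dimensional harmonic oscillator and the ε terms the nonlinear galactic potential.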

Fig. 1 The 4-dimensional phase space flow equations of a star in a galaxy. The terms in light blue are a simple two-dimensional harmonic oscillator. The terms in magenta are the nonlinear contributions from the stars in the galaxy.

where the terms in the light blue box describe a two-dimensional simple harmonic oscillator (SHO), which is a linear oscillator, modified by the terms in the magenta box that represent the nonlinear galactic potential.  The orbits of this Hamiltonian system are chaotic, and because there is no dissipation in the model, a single orbit will continue forever within certain ranges of phase space governed by energy conservation, but never quite repeating.

Fig. 2 Two-dimensional Poincaré section of sets of trajectories in four-dimensional phase space for the Henon-Heiles galactic dynamics model. The perturbation parameter is ε = 0.3411 and the energy E = 1.

Hamilton4D.py

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Hamilton4D.py
Created on Wed Apr 18 06:03:32 2018

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from scipy import integrate
from matplotlib import pyplot as plt
from matplotlib import cm
import time
import os

plt.close('all')

# model_case 1 = Heiles
# model_case 2 = Crescent
print(' ')
print('Hamilton4D.py')
print('Case: 1 = Heiles')
print('Case: 2 = Crescent')
model_case = int(input('Enter the Model Case (1-2)'))

if model_case == 1:
    E = 1       # Heiles: 1, 0.3411   Crescent: 0.05, 1
    epsE = 0.3411   # 3411
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -x - epsE*(2*x*y)
        d = -y - epsE*(x**2 - y**2)
        return[a,b,c,d]
else:
    E = .1       #   Crescent: 0.1, 1
    epsE = 1   
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -(epsE*(y-2*x**2)*(-4*x) + x)
        d = -(y-epsE*2*x**2)
        return[a,b,c,d]
    
prms = np.sqrt(E)
pmax = np.sqrt(2*E)    
            
# Potential Function
if model_case == 1:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(x**2*y - 0.33333*y**3) 
else:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(2*x**4 - 2*x**2*y) 

fig = plt.figure(1)
contr = plt.contourf(V,100, cmap=cm.coolwarm, vmin = 0, vmax = 10)
fig.colorbar(contr, shrink=0.5, aspect=5)    
fig = plt.show()

repnum = 250
mulnum = 64/repnum

np.random.seed(1)
for reploop  in range(repnum):
    px1 = 2*(np.random.random((1))-0.499)*pmax
    py1 = np.sign(np.random.random((1))-0.499)*np.real(np.sqrt(2*(E-px1**2/2)))
    xp1 = 0
    yp1 = 0
    
    x_y_z_w0 = [xp1, yp1, px1, py1]
    
    tspan = np.linspace(1,1000,10000)
    x_t = integrate.odeint(flow_deriv, x_y_z_w0, tspan)
    siztmp = np.shape(x_t)
    siz = siztmp[0]

    if reploop % 50 == 0:
        plt.figure(2)
        lines = plt.plot(x_t[:,0],x_t[:,1])
        plt.setp(lines, linewidth=0.5)
        plt.show()
        time.sleep(0.1)
        #os.system("pause")

    y1 = x_t[:,0]
    y2 = x_t[:,1]
    y3 = x_t[:,2]
    y4 = x_t[:,3]
    
    py = np.zeros(shape=(2*repnum,))
    yvar = np.zeros(shape=(2*repnum,))
    cnt = -1
    last = y1[1]
    for loop in range(2,siz):
        if (last < 0)and(y1[loop] > 0):
            cnt = cnt+1
            del1 = -y1[loop-1]/(y1[loop] - y1[loop-1])
            py[cnt] = y4[loop-1] + del1*(y4[loop]-y4[loop-1])
            yvar[cnt] = y2[loop-1] + del1*(y2[loop]-y2[loop-1])
            last = y1[loop]
        else:
            last = y1[loop]
 
    plt.figure(3)
    lines = plt.plot(yvar,py,'o',ms=1)
    plt.show()
    
if model_case == 1:
    plt.savefig('Heiles')
else:
    plt.savefig('Crescent')
    

Networks, Synchronization and Emergence

A central paradigm of nonlinear science is the emergence of patterns and organized behavior from seemingly random interactions among underlying constituents.  Emergent phenomena are among the most awe-inspiring topics in science.  Crystals are emergent, forming slowly from solutions of reagents.  Life is emergent, arising out of the chaotic soup of organic molecules on Earth (or on some distant planet).  Intelligence is emergent, and so is consciousness, arising from the interactions among billions of neurons.  Ecosystems are emergent, based on competition and symbiosis among species.  Economies are emergent, based on the transfer of goods and money spanning scales from the local bodega to the global economy.

One of the common underlying properties of emergence is the existence of networks of interactions.  Networks and network science are topics of great current interest driven by the rise of the World Wide Web and social networks.  But networks are ubiquitous and have long been the topic of research into complex and nonlinear systems.  Networks provide a scaffold for understanding many of the emergent systems.  They allow one to think of isolated elements, like molecules or neurons, that interact with many others, like the neighbors in a crystal or through distant synaptic connections.

From the point of view of modern dynamics, the state of a node can be a variable or a “dimension” and the interactions among links define the functions of the vector flow equation.  Emergence is then something that “emerges” from the dynamical flow as many elements interact through complex networks to produce simple or emergent patterns.

Synchronization is a form of emergence that happens when lots of independent oscillators, each vibrating at their own personal frequency, are coupled together to push and pull on each other, entraining all the individual frequencies into one common global oscillation of the entire system.  Synchronization plays an important role in the solar system, explaining why the Moon always shows one face to the Earth, why Saturn’s rings have gaps, and why asteroids are mainly kept away from colliding with the Earth.  Synchronization plays an even more important function in biology where it coordinates the beating of the heart and the functioning of the brain.

One of the most dramatic examples of synchronization is the Kuramoto synchronization phase transition. This occurs when a large set of individual oscillators with differing natural frequencies interact with each other through a weak nonlinear coupling.  For small coupling, all the individual nodes oscillate at their own frequency.  But as the coupling increases, there is a sudden coalescence of all the frequencies into a single common frequency.  This mechanical phase transition, called the Kuramoto transition, has many of the properties of a thermodynamic phase transition, including a solution that utilizes mean field theory.

Fig. 3 The Kuramoto model for the nonlinear coupling of N simple phase oscillators. The term in light blue is the simple phase oscillator. The term in magenta is the global nonlinear coupling that connects each oscillator to every other.
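For reference, the standard global-coupling (complete-graph) form of the Kuramoto model, which the coupleN() function below implements on a graph with per-link couplings, can be sketched as

$$\dot{\theta}_{i} = \omega_{i} + \frac{g}{N} \sum_{j=1}^{N} \sin\!\left(\theta_{j} - \theta_{i}\right)$$

where the ωi are the natural frequencies and g is the coupling strength that is swept in the simulation.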

The simulation of 20 Poincaré phase oscillators with global coupling is shown in Fig. 4 as a function of increasing coupling coefficient g. The original individual frequencies are spread randomly. The oscillators with similar frequencies are the first to synchronize, forming small clumps that then synchronize with other clumps of oscillators, until all oscillators are entrained to a single compromise frequency. The Kuramoto phase transition is not sharp in this case because the value of N = 20 is too small. If the simulation is run for 200 oscillators, there is a sudden transition from unsynchronized to synchronized oscillation at a threshold value of g.

Fig. 4 The Kuramoto model for 20 Poincare oscillators showing the frequencies as a function of the coupling coefficient.

The Kuramoto phase transition is one of the most important fundamental examples of modern dynamics because it illustrates many facets of nonlinear dynamics in a very simple way. It highlights the importance of nonlinearity, the simplification of phase oscillators, the use of mean field theory, the underlying structure of the network, and the example of a mechanical analog to a thermodynamic phase transition. It also has analytical solutions because of its simplicity, while still capturing the intrinsic complexity of nonlinear systems.

Kuramoto.py

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat May 11 08:56:41 2019

@author: nolte

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

# https://www.python-course.eu/networkx.php
# https://networkx.github.io/documentation/stable/tutorial.html
# https://networkx.github.io/documentation/stable/reference/functions.html

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import networkx as nx
from UserFunction import linfit
import time

tstart = time.time()

plt.close('all')

Nfac = 25   # 25
N = 50      # 50
width = 0.2

# model_case 1 = complete graph (Kuramoto transition)
# model_case 2 = Erdos-Renyi
model_case = int(input('Input Model Case (1-2)'))
if model_case == 1:
    facoef = 0.2
    nodecouple = nx.complete_graph(N)
elif model_case == 2:
    facoef = 5
    nodecouple = nx.erdos_renyi_graph(N,0.1)


# function: omegout, yout = coupleN(G)
def coupleN(G):

    # function: yd = flow_deriv(x_y)
    def flow_deriv(y,t0):
                
        yp = np.zeros(shape=(N,))
        for omloop  in range(N):
            temp = omega[omloop]
            linksz = G.node[omloop]['numlink']
            for cloop in range(linksz):
                cindex = G.node[omloop]['link'][cloop]
                g = G.node[omloop]['coupling'][cloop]

                temp = temp + g*np.sin(y[cindex]-y[omloop])
            
            yp[omloop] = temp
        
        yd = np.zeros(shape=(N,))
        for omloop in range(N):
            yd[omloop] = yp[omloop]
        
        return yd
    # end of function flow_deriv(x_y)

    mnomega = 1.0
    
    for nodeloop in range(N):
        omega[nodeloop] = G.node[nodeloop]['element']
    
    x_y_z = omega    
    
    # Settle-down Solve for the trajectories
    tsettle = 100
    t = np.linspace(0, tsettle, tsettle)
    x_t = integrate.odeint(flow_deriv, x_y_z, t)
    x0 = x_t[tsettle-1,0:N]
    
    t = np.linspace(1,1000,1000)
    y = integrate.odeint(flow_deriv, x0, t)
    siztmp = np.shape(y)
    sy = siztmp[0]
        
    # Fit the frequency
    m = np.zeros(shape = (N,))
    w = np.zeros(shape = (N,))
    mtmp = np.zeros(shape=(4,))
    btmp = np.zeros(shape=(4,))
    for omloop in range(N):
        
        if np.remainder(sy,4) == 0:
            mtmp[0],btmp[0] = linfit(t[0:sy//2],y[0:sy//2,omloop]);
            mtmp[1],btmp[1] = linfit(t[sy//2+1:sy],y[sy//2+1:sy,omloop]);
            mtmp[2],btmp[2] = linfit(t[sy//4+1:3*sy//4],y[sy//4+1:3*sy//4,omloop]);
            mtmp[3],btmp[3] = linfit(t,y[:,omloop]);
        else:
            sytmp = int(4*np.floor(sy/4));   # integer multiple of 4 so the slices below are valid indices
            mtmp[0],btmp[0] = linfit(t[0:sytmp//2],y[0:sytmp//2,omloop]);
            mtmp[1],btmp[1] = linfit(t[sytmp//2+1:sytmp],y[sytmp//2+1:sytmp,omloop]);
            mtmp[2],btmp[2] = linfit(t[sytmp//4+1:3*sytmp//4],y[sytmp//4+1:3*sytmp//4,omloop]);
            mtmp[3],btmp[3] = linfit(t[0:sytmp],y[0:sytmp,omloop]);

        
        #m[omloop] = np.median(mtmp)
        m[omloop] = np.mean(mtmp)
        
        w[omloop] = mnomega + m[omloop]
     
    omegout = m
    yout = y
    
    return omegout, yout
    # end of function: omegout, yout = coupleN(G)


Nlink = N*(N-1)//2      
omega = np.zeros(shape=(N,))
omegatemp = width*(np.random.rand(N)-1)
meanomega = np.mean(omegatemp)
omega = omegatemp - meanomega
sto = np.std(omega)

lnk = np.zeros(shape = (N,), dtype=int)
for loop in range(N):
    nodecouple.node[loop]['element'] = omega[loop]
    nodecouple.node[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.node[loop]['numlink'] = np.size(list(nx.neighbors(nodecouple,loop)))
    lnk[loop] = np.size(list(nx.neighbors(nodecouple,loop)))

avgdegree = np.mean(lnk)
mnomega = 1

facval = np.zeros(shape = (Nfac,))
yy = np.zeros(shape=(Nfac,N))
xx = np.zeros(shape=(Nfac,))
for facloop in range(Nfac):
    print(facloop)

    fac = facoef*(16*facloop/(Nfac))*(1/(N-1))*sto/mnomega
    for nodeloop in range(N):
        nodecouple.node[nodeloop]['coupling'] = np.zeros(shape=(lnk[nodeloop],))
        for linkloop in range (lnk[nodeloop]):
            nodecouple.node[nodeloop]['coupling'][linkloop] = fac

    facval[facloop] = fac*avgdegree
    
    omegout, yout = coupleN(nodecouple)                           # Here is the subfunction call for the flow

    for omloop in range(N):
        yy[facloop,omloop] = omegout[omloop]

    xx[facloop] = facval[facloop]

plt.figure(1)
lines = plt.plot(xx,yy)
plt.setp(lines, linewidth=0.5)
plt.show()

elapsed_time = time.time() - tstart
print('elapsed time = ',format(elapsed_time,'.2f'),'secs')


The Web of Life

Ecosystems are among the most complex systems on Earth.  The complex interactions among hundreds or thousands of species may lead to steady homeostasis in some cases, to growth and collapse in other cases, and to oscillations or chaos in yet others.  But the definition of species can be broad and abstract, referring to businesses and markets in economic ecosystems, or to cliques and acquaintances in social ecosystems, among many other examples.  These systems are governed by the laws of evolutionary dynamics that include fitness and survival as well as adaptation.

The dimensionality of the dynamical spaces for these systems extends to hundreds or thousands of dimensions—far too complex to visualize when thinking in four dimensions is already challenging.  Yet there are shared principles and common behaviors that emerge even here.  Many of these can be illustrated in a simple three-dimensional system that is represented by a triangular simplex that can be easily visualized, and then generalized back to ultra-high dimensions once they are understood.

A simplex is a closed (N-1)-dimensional geometric figure that describes a zero-sum game (game theory is an integral part of evolutionary dynamics) among N competing species.  For instance, a two-simplex is a triangle that captures the dynamics among three species.  Each vertex of the triangle represents the situation when the entire ecosystem is composed of a single species.  Anywhere inside the triangle represents the situation when all three species are present and interacting.
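In coordinates, the two-simplex is just the set of normalized populations (the symbol Δ2 is introduced here only for convenience)

$$\Delta_{2} = \left\{ \left(x_{1}, x_{2}, x_{3}\right) : x_{i} \ge 0,\; x_{1} + x_{2} + x_{3} = 1 \right\}$$

and the tripartite() function in the Trirep.py code below projects this triangle onto the plane for plotting.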

A classic model of interacting species is the replicator equation. It allows for a fitness-based proliferation and for trade-offs among the individual species. The replicator dynamics equations are shown in Fig. 5.

Fig. 5 Replicator dynamics has a surprisingly simple form, but with surprisingly complicated behavior. The key elements are the fitness and the payoff matrix. The fitness relates to how likely the species will survive. The payoff matrix describes how one species gains at the loss of another (although symbiotic relationships also occur).
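Sketched in symbols (the same structure as flow_deriv in the Trirep.py code below), the replicator equations are

$$\dot{x}_{i} = x_{i}\left(f_{i} - \phi\right), \qquad f_{i} = \sum_{j} A_{ij}\, x_{j}, \qquad \phi = \sum_{i} f_{i}\, x_{i}$$

where xi is the population fraction of species i, A is the payoff matrix, fi is the fitness of species i, and φ is the average fitness.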

The population dynamics on the 2D simplex are shown in Fig. 6 for several different pay-off matrices. The matrix values are shown in color and help interpret the trajectories. For instance, the simplex on the upper-right shows a fixed point center. This reflects the antisymmetric character of the pay-off matrix around the diagonal. The stable spiral on the lower-left has a nearly antisymmetric pay-off matrix, but with unequal off-diagonal magnitudes. The other two cases show central saddle points with stable fixed points on the boundary. A very large variety of behaviors is possible for this very simple system. The Python program is shown in Trirep.py.

Fig. 6 Payoff matrix and population simplex for four random cases: Upper left is an unstable saddle. Upper right is a center. Lower left is a stable spiral. Lower right is a marginal case.

Trirep.py

(Python code on GitHub.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
trirep.py
Created on Thu May  9 16:23:30 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def tripartite(x,y,z):

    sm = x + y + z
    xp = x/sm
    yp = y/sm
    
    f = np.sqrt(3)/2
    
    y0 = f*xp
    x0 = -0.5*xp - yp + 1;
    
    plt.figure(2)
    lines = plt.plot(x0,y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1],[0, 0],'k',linewidth=1)
    plt.plot([0, 0.5],[0, f],'k',linewidth=1)
    plt.plot([1, 0.5],[0, f],'k',linewidth=1)
    plt.show()
    

def solve_flow(y,tspan):
    def flow_deriv(y, t0):
    #"""Compute the time-derivative ."""
    
        f = np.zeros(shape=(N,))
        for iloop in range(N):
            ftemp = 0
            for jloop in range(N):
                ftemp = ftemp + A[iloop,jloop]*y[jloop]
            f[iloop] = ftemp
        
        phitemp = phi0          # Can adjust this from 0 to 1 to stabilize (but Nth population is no longer independent)
        for loop in range(N):
            phitemp = phitemp + f[loop]*y[loop]
        phi = phitemp
        
        yd = np.zeros(shape=(N,))
        for loop in range(N-1):
            yd[loop] = y[loop]*(f[loop] - phi);
        
        if np.abs(phi0) < 0.01:             # average fitness maintained at zero
            yd[N-1] = y[N-1]*(f[N-1]-phi);
        else:                                     # non-zero average fitness
            ydtemp = 0
            for loop in range(N-1):
                ydtemp = ydtemp - yd[loop]
            yd[N-1] = ydtemp
       
        return yd

    # Solve for the trajectories
    t = np.linspace(0, tspan, 701)
    x_t = integrate.odeint(flow_deriv,y,t)
    return t, x_t

# model_case 1 = zero diagonal
# model_case 2 = zero trace
# model_case 3 = asymmetric (zero trace)
print(' ')
print('trirep.py')
print('Case: 1 = antisymm zero diagonal')
print('Case: 2 = antisymm zero trace')
print('Case: 3 = random')
model_case = int(input('Enter the Model Case (1-3)'))

N = 3
asymm = 3      # 1 = zero diag (replicator eqn)   2 = zero trace (autocatalytic model)  3 = random (but zero trace)
phi0 = 0.001            # average fitness (positive number) damps oscillations
T = 100;


if model_case == 1:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
    A = Atemp      # antisymmetric payoff matrix with zero diagonal

elif model_case == 2:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
        Atemp[yloop,yloop] = 2*(0.5 - np.random.random(1))
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N
        
else:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
        
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N

plt.figure(3)
im = plt.matshow(A,3,cmap=plt.cm.get_cmap('seismic'))  # hsv, seismic, bwr
cbar = im.figure.colorbar(im)

M = 20
delt = 1/M
ep = 0.01;

tempx = np.zeros(shape = (3,))
for xloop in range(M):
    tempx[0] = delt*(xloop)+ep;
    for yloop in range(M-xloop):
        tempx[1] = delt*yloop+ep
        tempx[2] = 1 - tempx[0] - tempx[1]
        
        x0 = tempx/np.sum(tempx);          # initial populations
        
        tspan = 70
        t, x_t = solve_flow(x0,tspan)
        
        y1 = x_t[:,0]
        y2 = x_t[:,1]
        y3 = x_t[:,2]
        
        plt.figure(1)
        lines = plt.plot(t,y1,t,y2,t,y3)
        plt.setp(lines, linewidth=0.5)
        plt.show()
        plt.ylabel('X Position')
        plt.xlabel('Time')

        tripartite(y1,y2,y3)

Topics in Modern Dynamics

These three examples are just the tip of the iceberg. The topics in modern dynamics are almost numberless. Any system that changes in time is a potential object of study in modern dynamics. Here is a list of a few topics that spring to mind.

Bibliography

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019) (The physics and the derivations of the equations for the examples in this blog can be found here.)

D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018) (The historical origins of the examples in this blog can be found here.)