Physics in the Age of Contagion: Part 4. Fifty Shades of Immunity to COVID-19

This is the fourth installment in a series of blogs on the population dynamics of COVID-19. In my first blog I looked at a bifurcation physics model that held the possibility (and hope) that with sufficient preventive action the pandemic could have died out and spared millions. That hope was in vain.

What will it be like to live with COVID-19 as a constant factor of modern life for years to come?

In my second blog I looked at a two-component population dynamics model that showed the importance of locking down and not emerging too soon. It predicted that waiting only a few extra weeks before opening could have saved tens of thousands of lives. Unfortunately, because states like Texas and Florida opened too soon and refused to mandate the wearing of masks, thousands of lives were lost.

In my third blog I looked at a network physics model that showed the importance of rapid testing and contact tracing to remove infected individuals to push the infection rate low — not only to flatten the curve, but to drive it down. While most of the developed world is succeeding in achieving this, the United States is not.

In this fourth blog, I am looking at a simple mean-field model that shows what it will be like to live with COVID-19 as a constant factor of modern life for years to come. This is what will happen if the period of immunity to the disease is short and people who recover from the disease can get it again. Then the disease will never go away and the world will need to learn to deal with it. How different that world will look from the one we had just a year ago will depend on the degree of immunity that is acquired after infection, how long a vaccine will provide protection before booster shots are needed, and how many people will get the vaccine or will refuse it.

SIRS for SARS

COVID-19 is caused by a SARS coronavirus known as SARS-CoV-2. SARS stands for Severe Acute Respiratory Syndrome. There is a simple and well-established mean-field model for an infectious disease like SARS that infects individuals, from which they recover, but after some lag period they become susceptible again. This is called the SIRS model, standing for Susceptible-Infected-Recovered-Susceptible. This model is similar to the SIS model of my first blog, but it now includes a mean lifetime for the acquired immunity: after an individual recovers from the infection, the immunity fades and they become susceptible again. The bifurcation threshold is the same for the SIRS model as for the SIS model, but with SIRS there is a constant susceptible population. The mathematical flow equations for this model are

$$\frac{di}{dt} = \beta \langle k \rangle \, i \,(1 - i - r) - \mu \, i$$

$$\frac{dr}{dt} = \mu \, i - \nu \, r$$

where i is the infected fraction, r is the recovered fraction, and 1 – i – r = s is the susceptible fraction. The infection rate for an individual who has k contacts is βk. The recovery rate is μ, and the mean lifetime of acquired immunity after recovery is τlife = 1/ν. This model assumes that all individuals are equivalent (no predispositions) and that there is no vaccine, only natural immunity that fades in time after recovery.

The population trajectories for this model are shown in Fig. 1. The figure on the left is a 3-simplex where every point in the triangle stands for a 3-tuple (i, r, s). Our own trajectory starts at the bottom right vertex and generates the green trajectory that spirals into the fixed point. The parameters are chosen to be roughly equivalent to what is known about the virus (but with big uncertainties in the model parameters). One of the key results is that the infection will oscillate over several years, settling into a steady state after about 4 years. Thereafter, there is a steady 3% infected population, with 67% of the population susceptible and 30% recovered. The decay time for the immunity is assumed to be one year in this model. Note that the first peak in the infected numbers occurs at about 1 year, or around March 2021. There is a second smaller peak (the graph on the right is on a vertical log scale) at about 4 years, or sometime in 2024.

Fig. 1 SIRS model for COVID-19 in which immunity acquired after recovery fades in time so an individual can be infected again. If immunity fades and there is never a vaccine, a person will have an 80% chance of getting the virus at least twice in their lifetime, and COVID will become the third highest cause of death in the US after heart disease and cancer.
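As a cross-check on these equilibrium values, the fixed point can be computed directly from the flow equations by setting di/dt = dr/dt = 0, using the lumped infection rate betap = β⟨k⟩ from the Python code at the end of this blog:

# Fixed point of the SIRS flow: set di/dt = dr/dt = 0
betap = 0.3   # lumped infection rate beta*<k> (per week)
mu = 0.2      # recovery rate (per week)
nu = 0.02     # immunity decay rate (per week, i.e. ~1-year immunity)

s_star = mu/betap                    # di/dt = 0 requires betap*s = mu
i_star = (1 - s_star)/(1 + mu/nu)    # from s + i + r = 1 with r = (mu/nu)*i
r_star = (mu/nu)*i_star              # dr/dt = 0 requires nu*r = mu*i

print(f's* = {s_star:.2f}, i* = {i_star:.3f}, r* = {r_star:.2f}')
# gives s* = 0.67, i* = 0.030, r* = 0.30: the 67%/3%/30% split quoted above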

Although the recovered fraction is around 30% for these parameters, it is important to understand that this is a dynamic equilibrium. If there is no vaccine, then any individual who was once infected can be infected again after about a year. So if they don’t get the disease in the first year, they still have about a 4% chance to get it every following year. In 50 years, a 20-year-old today would have almost a 90% chance of having been infected at least once and an 80% chance of having gotten it at least twice. In other words, if there is never a vaccine, and if immunity fades after each recovery, then almost everyone will eventually get the disease several times in their lifetime. Furthermore, COVID will become the third most likely cause of death in the US after heart disease (first) and cancer (second). The sad part of this story is that it all could have been avoided if the government leaders of several key nations, along with their populations, had behaved responsibly.
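Those lifetime odds can be estimated by treating each year as an independent trial. Here is a minimal sketch assuming a constant 4% annual infection probability; the exact percentages are sensitive to that assumed rate, which this simple binomial picture takes as fixed:

from scipy.stats import binom

p = 0.04      # assumed annual probability of infection (from the steady state above)
years = 50    # remaining years for a 20-year-old today

p_once = 1 - binom.pmf(0, years, p)    # infected at least once
p_twice = 1 - binom.cdf(1, years, p)   # infected at least twice

print(f'P(at least once)  = {p_once:.0%}')
print(f'P(at least twice) = {p_twice:.0%}')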

The Asymmetry of Personal Cost under COVID

The nightly news in the US during the summer of 2020 shows endless videos of large parties, dense with people, mostly young, wearing no masks. This is actually understandable even though regrettable. It is because of the asymmetry of personal cost. Here is what that means …

On any given day, an individual who goes out and about in the US has only about a 0.01 percent chance of contracting the virus. In the entire year, there is only about a 3% chance that that individual will get the disease. And even if they get the virus, they only have a 2% chance of dying. So the actual danger per day per person is so minuscule that it is hard to understand why it is so necessary to wear a mask and socially distance. Therefore, if you go out and don’t wear a mask, almost nothing bad will happen to YOU. So why not? Why not screw the masks and just go out!

And this is why that’s such a bad idea: because if no-one wears a mask, then tens or hundreds of thousands of OTHERS will die.

This is the asymmetry of personal cost. By ignoring distancing, nothing is likely to happen to YOU, but thousands of OTHERS will die. How much of your own comfort are you willing to give up to save others? That is the existential question.
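The numbers behind this asymmetry are easy to check: compounding an assumed 0.01% daily risk over a year gives the few-percent annual risk quoted above.

# Compound an assumed 0.01% daily infection risk over one year
p_day = 0.0001
p_year = 1 - (1 - p_day)**365
print(f'{p_year:.1%}')   # about 3.6%, in line with the ~3% annual figure above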

This year is the 75th anniversary of the end of WW II. During the war everyone rationed and recycled, not because they needed it for themselves, but because it was needed for the war effort. Almost no one hesitated back then. It was the right thing to do even though it cost personal comfort. There was a sense of community spirit and doing what was good for the country. Where is that spirit today? The COVID-19 pandemic is a war just as deadly as any war since WW II. There is a community need to battle it. All anyone has to do is wear a mask and behave responsibly. Is this such a high personal cost?

The Vaccine

All of this can change if a reliable vaccine can be developed. There is no guarantee that this can be done. For instance, there has never been a reliable vaccine for the common cold. A more sobering thought is to realize that there has never been a vaccine for the closely related virus SARS-CoV-1, which broke out in China in 2003 but was less infectious. But the need is greater now, so there is reason for optimism that a vaccine can be developed that elicits the production of antibodies with a mean lifetime at least as long as that of naturally acquired immunity.

The SIRS model has the same bifurcation threshold as the SIS model that was discussed in a previous blog. If the infection rate can be made slower than the recovery rate, then the pandemic can be eliminated entirely. The threshold is

$$\beta \langle k \rangle = \mu$$

and the pandemic dies out when β⟨k⟩/μ < 1.
The parameter μ, the recovery rate, is intrinsic and cannot be changed. The parameter β, the infection rate per contact, can be reduced by personal hygiene and wearing masks. The parameter <k>, the average number of contacts to a susceptible person, can be significantly reduced by vaccinating a large fraction of the population.

To simulate the effect of vaccination, the average <k> per person can be reduced at the time of vaccination. This lowers the average infection rate. The results are shown in Fig. 2 for the original dynamics, a vaccination of 20% of the populace, and a vaccination of 40% of the populace. For 20% vaccination, the epidemic is still above threshold, although the long-time infection is lower. For 40% of the population vaccinated, the disease falls below threshold and would decay away and vanish.

Fig. 2 Vaccination at 52 weeks can lower the infection cases (20% vaccinated) or eliminate them entirely (40% vaccinated). The vaccinations would need booster shots every year (if the decay time of immunity is one year).
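The threshold logic of Fig. 2 can be verified with a few lines: vaccinating a fraction of the population reduces the effective ⟨k⟩, and with it the ratio β⟨k⟩/μ. A quick check with the same lumped rates used in the code below:

# Effect of vaccination on the bifurcation threshold
betap = 0.3   # lumped infection rate beta*<k> (per week)
mu = 0.2      # recovery rate (per week)

for vac in [0.0, 0.2, 0.4]:
    ratio = (1 - vac)*betap/mu    # vaccination reduces the effective <k>
    state = 'below threshold: dies out' if ratio < 1 else 'above threshold: endemic'
    print(f'{vac:.0%} vaccinated: beta<k>/mu = {ratio:.2f} ({state})')

# 0% and 20% stay above threshold (1.5 and 1.2); 40% drops below (0.9)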

In this model, the vaccination is assumed to decay at the same rate as naturally acquired immunity (one year), so booster shots would be needed every year. Getting 40% of the population vaccinated may be achievable: roughly that fraction gets yearly flu shots in the US, so the COVID vaccine could be added to the list. But at 40% it would still be necessary for everyone to wear face masks and socially distance until the pandemic fades away. Interestingly, if that 40% were vaccinated all on the same date (across the world), then the pandemic would be gone in a few months. Unfortunately, that is unrealistic. With a world-wide push to vaccinate 40% of the world's population within five years, it would take that long to eliminate the disease, taking us to 2025 before we could go back to the way things were in November of 2019. And even that would require a world-wide vaccination blitz the likes of which the world has never seen.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri July 17 2020

D. D. Nolte, "Introduction to Modern Dynamics: 
    Chaos, Networks, Space and Time, 2nd Edition (Oxford University Press, 2019)

@author: nolte
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')


def tripartite(x, y, z):
    # Project a trajectory with x + y + z = 1 onto the 2D simplex
    # (equilateral triangle) and plot it together with the triangle edges.
    sm = x + y + z
    xp = x/sm
    yp = y/sm

    f = np.sqrt(3)/2    # height of the unit equilateral triangle

    y0 = f*xp
    x0 = -0.5*xp - yp + 1

    lines = plt.plot(x0, y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1], [0, 0], 'k', linewidth=1)
    plt.plot([0, 0.5], [0, f], 'k', linewidth=1)
    plt.plot([1, 0.5], [0, f], 'k', linewidth=1)
    plt.show()
    
print(' ')
print('SIRS.py')

def solve_flow(param, max_time=1000.0):
    # Integrate the SIRS flow.  param = (mu, betap, nu); the initial
    # conditions del1, del2 and the time span tlim are read from globals.

    def flow_deriv(x_y, tspan, mu, betap, nu):
        x, y = x_y    # x = infected fraction, y = recovered fraction
        return [-mu*x + betap*x*(1-x-y), mu*x - nu*y]

    x0 = [del1, del2]

    # Solve for the trajectories
    t = np.linspace(0, int(tlim), int(250*tlim))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

    return t, x_t

# Rates per week
betap = 0.3    # lumped infection rate beta*<k>
mu = 0.2       # recovery rate
nu = 0.02      # immunity decay rate (mean immunity ~1 year)

print('beta = ',betap)
print('mu = ',mu)
print('nu =',nu)
print('betap/mu = ',betap/mu)
          
del1 = 0.005         # initial infected
del2 = 0.005         # recovered

tlim = 600          # weeks (about 12 years)

param = (mu, betap, nu)    # flow parameters

t, y = solve_flow(param)
I = y[:,0]
R = y[:,1]
S = 1 - I - R

plt.figure(1)
lines = plt.semilogy(t,I,t,S,t,R)
plt.ylim([0.001,1])
plt.xlim([0,tlim])
plt.legend(('Infected','Susceptible','Recovered'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Weeks')
plt.ylabel('Fraction of Population')
plt.title('Population Dynamics for COVID-19')
plt.show()

plt.figure(2)
for xloop in range(0,10):
    del1 = xloop/10.1 + 0.001
    del2 = 0.01

    tlim = 300
    param = (mu, betap, nu)    # flow parameters
    t, y = solve_flow(param)       
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
    
    tripartite(I, R, S)

for yloop in range(1,6):
    del1 = 0.001;
    del2 = yloop/10.1
    t, y = solve_flow(param)
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
    
    tripartite(I, R, S)
    
for loop in range(2,10):
    del1 = loop/10.1
    del2 = 1 - del1 - 0.01
    t, y = solve_flow(param)
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
        
    tripartite(I, R, S)
    
plt.title('Simplex Plot of COVID-19 Pop Dynamics')
 
vac = [1, 0.8, 0.6]
for loop in vac:
               
    # Run the epidemic to the first peak
    del1 = 0.005
    del2 = 0.005
    tlim = 52
    param = (mu, betap, nu)
    t1, y1 = solve_flow(param)
    
    # Now vaccinate a fraction of the population
    st = np.size(t1)
    del1 = y1[st-1,0]
    del2 = y1[st-1,1]
    tlim = 400
    
    param = (mu, loop*betap, nu)
    t2, y2 = solve_flow(param)
    
    t2 = t2 + t1[st-1]
    
    tc = np.concatenate((t1,t2))
    yc = np.concatenate((y1,y2))
    
    I = yc[:,0]
    R = yc[:,1]
    S = 1 - I - R
    
    plt.figure(3)
    lines = plt.semilogy(tc,I,tc,S,tc,R)
    plt.ylim([0.001,1])
    plt.xlim([0,tlim])
    plt.legend(('Infected','Susceptible','Recovered'))
    plt.setp(lines, linewidth=0.5)
    plt.xlabel('Weeks')
    plt.ylabel('Fraction of Population')
    plt.title('Vaccination at 1 Year')
    plt.show()
    

Caveats and Disclaimers

No effort was made to match parameters to the actual properties of the COVID-19 pandemic. The SIRS model is extremely simplistic and can only show general trends, because it homogenizes away all the important spatial heterogeneity of the disease across the cities and states of the country. If you live in a hot spot, this model says little about what you will experience locally. The decay of immunity is also a completely open question, and the decay rate is unknown. It is easy to modify the Python program to explore the effects of differing decay rates and vaccination fractions. The model can also be viewed as a single “compartment” within a larger multi-compartment model that captures local variations in the parameters.
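As one example, here is a minimal sketch of such a modification: a sweep over the immunity decay rate that reuses solve_flow (and the globals del1, del2, and tlim that it reads) from the script above.

# Sweep the immunity lifetime and overlay the infected-fraction curves
del1, del2, tlim = 0.005, 0.005, 600    # initial conditions, time span (weeks)
mu, betap = 0.2, 0.3

plt.figure()
for tau_life in [26, 52, 104]:          # immunity lasting 6 months, 1 year, 2 years
    nu = 1/tau_life
    t, y = solve_flow((mu, betap, nu))
    plt.semilogy(t, y[:,0], linewidth=0.5, label=f'tau_life = {tau_life} weeks')
plt.xlabel('Weeks')
plt.ylabel('Infected fraction')
plt.legend()
plt.show()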

Timelines in the History and Physics of Dynamics (with links to primary texts)

These timelines in the History of Dynamics are organized along the Chapters in Galileo Unbound (Oxford, 2018). The book is about the physics and history of dynamics including classical and quantum mechanics as well as general relativity and nonlinear dynamics (with a detour down evolutionary dynamics and game theory along the way). The first few chapters focus on Galileo, while the following chapters follow his legacy, as theories of motion became more abstract, eventually to encompass the evolution of species within the same theoretical framework as the orbit of photons around black holes.

Galileo: A New Scientist

Galileo Galilei was the first modern scientist, launching a new scientific method that superseded, after one and a half millennia, Aristotle’s physics.  Galileo’s career began with his studies of motion at the University of Pisa that were interrupted by his move to the University of Padua and his telescopic discoveries of mountains on the moon and the moons of Jupiter.  Galileo became the first rock star of science, and he used his fame to promote the ideas of Copernicus and the Sun-centered model of the solar system.  But he pushed too far when he lampooned the Pope.  Ironically, Galileo’s conviction for heresy and his sentence to house arrest for the remainder of his life gave him the free time to finally finish his work on the physics of motion, which he published in Two New Sciences in 1638.

1543 Copernicus dies, publishes posthumously De Revolutionibus

1564    Galileo born

1581    Enters University of Pisa

1585    Leaves Pisa without a degree

1586    Invents hydrostatic balance

1588    Receives lectureship in mathematics at Pisa

1592    Chair of mathematics at University of Padua

1595    Theory of the tides

1595    Invents military and geometric compass

1596    Le Meccaniche and the principle of horizontal inertia

1600    Giordano Bruno burned at the stake

1601    Death of Tycho Brahe

1609    Galileo constructs his first telescope, makes observations of the moon

1610    Galileo discovers 4 moons of Jupiter, Starry Messenger (Sidereus Nuncius), appointed chief philosopher and mathematician of the Duke of Tuscany, moves to Florence, observes Saturn, Venus goes through phases like the moon

1611    Galileo travels to Rome, inducted into the Lyncean Academy, name “telescope” is first used

1611    Scheiner discovers sunspots

1611    Galileo meets Barberini, a cardinal

1611 Johannes Kepler, Dioptrice

1613    Letters on sunspots published by Lincean Academy in Rome

1614    Galileo denounced from the pulpit

1615    (April) Bellarmine writes an essay against Copernicus

1615    Galileo investigated by the Inquisition

1615    Writes Letter to Christina, but does not publish it

1615    (December) travels to Rome and stays at Tuscan embassy

1616    (January) Francesco Ingoli publishes essay against Copernicus

1616    (March) Decree against Copernicanism

1616    Galileo publishes theory of tides, Galileo meets with Pope Paul V, Copernicus’ book is banned, Galileo warned not to support the Copernican system, Galileo decides not to reply to Ingoli, Galileo proposes eclipses of Jupiter’s moons to determine longitude at sea

1618    Three comets appear, Grassi gives a lecture not hostile to Galileo

1618    Galileo, through Mario Guiducci, publishes scathing attack on Grassi

1619    Jesuit Grassi (Sarsi) publishes attack on Galileo concerning 3 comets

1619    Marina Gamba dies, Galileo legitimizes his son Vincenzio

1619 Kepler’s Laws, Epitome astronomiae Copernicanae.

1623    Barberini becomes Urban VIII, The Assayer published (response to Grassi)

1624    Galileo visits Rome and Urban VIII

1629    Birth of his grandson Galileo

1630    Death of Johannes Kepler

1632    Publication of the Dialogue Concerning the Two Chief World Systems, Galileo is indicted by the Inquisition (68 years old)

1633    (February) Travels to Rome

1633    Convicted, abjurs, house arrest in Rome, then Siena, then home to Arcetri

1638    Blind, publication of Two New Sciences

1642    Galileo dies (77 years old)

Galileo’s Trajectory

Galileo’s discovery of the law of fall and the parabolic trajectory began with early work on the physics of motion by predecessors like the Oxford Scholars, Tartaglia, and the polymath Simon Stevin, who dropped lead weights from the leaning tower of Delft three years before Galileo (may have) dropped lead weights from the leaning tower of Pisa.  The story of how Galileo developed his ideas of motion is described in the context of his studies of balls rolling on inclined planes and the surprising accuracy he achieved without access to modern timekeeping.

1583    Galileo notices isochronism of the pendulum

1588    Receives lectureship in mathematics at Pisa

1589 – 1592  Work on projectile motion in Pisa

1592    Chair of mathematics at University of Padua

1596    Le Meccaniche and the principle of horizontal inertia

1600    Guidobaldo shares technique of colored ball

1602    Proves isochronism of the pendulum (experimentally)

1604    First experiments on uniformly accelerated motion

1604    Wrote to Sarpi about the law of fall (s ∝ t²)

1607-1608  Identified trajectory as parabolic

1609    Velocity proportional to time

1632    Publication of the Dialogue Concerning the Two Chief World Systems, Galileo is indicted by the Inquisition (68 years old)

1636    Letter to Christina published in Augsburg in Latin and Italian

1638    Blind, publication of Two New Sciences

1641    Invented pendulum clock (in theory)

1642    Dies (77 years old)

On the Shoulders of Giants

Galileo’s parabolic trajectory launched a new approach to physics that was taken up by a new generation of scientists like Isaac Newton, Robert Hooke and Edmund Halley.  The English Newtonian tradition was adopted by ambitious French iconoclasts who championed Newton over their own Descartes.  Chief among these was Pierre Maupertuis, whose principle of least action was developed by Leonhard Euler and Joseph Lagrange into a rigorous new science of dynamics.  Along the way, Maupertuis became embroiled in a famous dispute that entangled the King of Prussia as well as the volatile Voltaire who was mourning the death of his mistress Emilie du Chatelet, the lone female French physicist of the eighteenth century.

1644    Descartes’ vortex theory of gravitation

1662    Fermat’s principle

1669 – 1690    Huygens expands on Descartes’ vortex theory

1687 Newton’s Principia

1698    Maupertuis born

1729    Maupertuis entered University in Basel.  Studied under Johann Bernoulli

1736    Euler publishes Mechanica sive motus scientia analytice exposita

1737   Maupertuis report on expedition to Lapland.  Earth is oblate.  Attacks Cassini.

1744    Maupertuis Principle of Least Action.  Euler Principle of Least Action.

1745    Maupertuis becomes president of Berlin Academy.  Paris Academy cancels his membership after a campaign against him by Cassini.

1746    Maupertuis principle of Least Action for mass

1751    Samuel König disputes Maupertuis’ priority

1756    Cassini dies.  Maupertuis reinstated in the French Academy

1759    Maupertuis dies

1759    du Chatelet’s French translation of Newton’s Principia published posthumously

1760    Euler 3-body problem (two fixed centers and coplanar third body)

1760-1761 Lagrange, Variational calculus (J. L. Lagrange, “Essai d’une nouvelle méthode pour déterminer les maxima et les minima des formules intégrales indéfinies,” Miscellanea Taurinensia, (1760-1761))

1762    Beginning of the reign of Catherine the Great of Russia

1763    Euler colinear 3-body problem

1765    Euler publishes Theoria motus corporum solidorum on rotational mechanics

1766    Euler returns to St. Petersburg

1766    Lagrange arrives in Berlin

1772    Lagrange equilateral 3-body problem, Essai sur le problème des trois corps, 1772, Oeuvres tome 6

1775    Beginning of the American War of Independence

1776    Adam Smith Wealth of Nations

1781    William Herschel discovers Uranus

1783    Euler dies in St. Petersburg

1787    United States Constitution written

1787    Lagrange moves from Berlin to Paris

1788    Lagrange, Méchanique analytique

1789    Beginning of the French Revolution

1799    Pierre-Simon Laplace Mécanique Céleste (1799-1825)

Geometry on My Mind

This history of modern geometry focuses on the topics that provided the foundation for the new visualization of physics.  It begins with Carl Gauss and Bernhard Riemann, who redefined geometry and identified the importance of curvature for physics.  Vector spaces, developed by Hermann Grassmann, Giuseppe Peano and David Hilbert, are examples of the kinds of abstract new spaces that are so important for modern physics, such as Hilbert space for quantum mechanics.  Fractal geometry developed by Felix Hausdorff later provided the geometric language needed to solve problems in chaos theory.

1629    Fermat described higher-dim loci

1637    Descartes’ Geometry

1649    van Schooten’s commentary on Descartes Geometry

1694    Leibniz uses word “coordinate” in its modern usage

1697    Johann Bernoulli shortest distance between two points on convex surface

1732    Euler geodesic equations for implicit surfaces

1748    Euler defines modern usage of function

1801    Gauss calculates orbit of Ceres

1807    Fourier analysis (published in 1822)

1807    Gauss arrives in Göttingen

1827    Carl Gauss establishes differential geometry of curved surfaces, Disquisitiones generales circa superficies curvas

1830    Bolyai and Lobachevsky publish on hyperbolic geometry

1834    Jacobi n-fold integrals and volumes of n-dim spheres

1836    Liouville-Sturm theorem

1838    Liouville’s theorem

1841    Jacobi determinants

1843    Arthur Cayley systems of n-variables

1843    Hamilton discovers quaternions

1844    Hermann Grassmann n-dim vector spaces, Die Lineale Ausdehnungslehre

1846    Julius Plücker System der Geometrie des Raumes in neuer analytischer Behandlungsweise

1848 Jacobi Vorlesungen über Dynamik

1848    “Vector” coined by Hamilton

1854    Riemann’s habilitation lecture

1861    Riemann n-dim solution of heat conduction

1868    Publication of Riemann’s Habilitation

1869    Christoffel and Lipschitz work on multiple dimensional analysis

1871    Betti refers to the n-ply of numbers as a “space”.

1871    Klein publishes on non-Euclidean geometry

1872 Boltzmann distribution

1872    Jordan Essay on the geometry of n-dimensions

1872    Felix Klein’s “Erlangen Programme”

1872    Weierstrass’ Monster

1872    Dedekind cut

1872    Cantor paper on irrational numbers

1872    Cantor meets Dedekind

1872 Lipschitz derives mechanical motion as a geodesic on a manifold

1874    Cantor beginning of set theory

1877    Cantor one-to-one correspondence between the line and n-dimensional space

1881    Gibbs codifies vector analysis

1883    Cantor set and staircase Grundlagen einer allgemeinen Mannigfaltigkeitslehre

1884    Abbott publishes Flatland

1887    Peano vector methods in differential geometry

1887    Darboux vol. 2 treats dynamics as a point in d-dimensional space.  Applies concepts of geodesics for trajectories.

1890    Peano space filling curve

1891    Hilbert space filling curve

1898    Ricci-Curbastro Lessons on the Theory of Surfaces

1902    Lebesgue integral

1904    Hilbert studies integral equations

1904    von Koch snowflake

1906    Fréchet thesis on square summable sequences as infinite dimensional space

1908    Schmidt Geometry in a Function Space

1910    Brouwer proof of dimensional invariance

1913    Hilbert space named by Riesz

1914    Hilbert space used by Hausdorff

1915    Sierpinski fractal triangle

1918    Hausdorff non-integer dimensions

1918    Weyl’s book Space, Time, Matter

1918    Fatou and Julia fractals

1920    Banach space

1927    von Neumann axiomatic form of Hilbert Space

1935    Fréchet full form of Hilbert Space

1967    Mandelbrot coast of Britain

1982    Mandelbrot’s book The Fractal Geometry of Nature

The Tangled Tale of Phase Space

Phase space is the central visualization tool used today to study complex systems.  The chapter describes the origins of phase space with the work of Joseph Liouville and Carl Jacobi that was later refined by Ludwig Boltzmann and Rudolf Clausius in their attempts to define and explain the subtle concept of entropy.  The turning point in the history of phase space was when Henri Poincaré used phase space to solve the three-body problem, uncovering chaotic behavior in his quest to answer questions on the stability of the solar system.  Phase space was established as the central paradigm of statistical mechanics by JW Gibbs and Paul Ehrenfest.

1804    Jacobi born (1804 – 1851) in Potsdam

1804    Napoleon I Emperor of France

1805    William Rowan Hamilton born (1805 – 1865)

1807    Thomas Young describes “Energy” in his Course on Natural Philosophy (Vol. 1 and Vol. 2)

1808    Beethoven performs his Fifth Symphony

1809    Joseph Liouville born (1809 – 1882)

1821    Hermann Ludwig Ferdinand von Helmholtz born (1821 – 1894)

1824    Carnot published Reflections on the Motive Power of Fire

1834    Jacobi n-fold integrals and volumes of n-dim spheres

1834-1835       Hamilton publishes his principle (1834, 1835).

1836    Liouville-Sturm theorem

1837    Queen Victoria begins her reign as Queen of England

1838    Liouville develops his theorem on products of n differentials satisfying certain first-order differential equations.  This becomes the classic reference to Liouville’s Theorem.

1847    Helmholtz  Conservation of Energy (force)

1849    Thomson makes first use of “Energy” (From reading Thomas Young’s lecture notes)

1850    Clausius establishes First law of Thermodynamics: Internal energy. Second law:  Heat cannot flow unaided from cold to hot.  Not explicitly stated as first and second laws

1851    Thomson names Clausius’ First and Second laws of Thermodynamics

1852    Thomson describes general dissipation of the universe (“energy” used in title)

1854    Thomson defined absolute temperature.  First mathematical statement of 2nd law.  Restricted to reversible processes

1854    Clausius stated Second Law of Thermodynamics as inequality

1857    Clausius constructs kinetic theory, Mean molecular speeds

1858    Clausius defines mean free path, Molecules have finite size. Clausius assumed that all molecules had the same speed

1860    Maxwell publishes first paper on kinetic theory. Distribution of speeds. Derivation of gas transport properties

1865    Loschmidt size of molecules

1865    Clausius names entropy

1868    Boltzmann adds (Boltzmann) factor to Maxwell distribution

1872    Boltzmann transport equation and H-theorem

1876    Loschmidt reversibility paradox

1877    Boltzmann  S = k logW

1890    Poincare: Recurrence Theorem. Recurrence paradox with Second Law (1893)

1896    Zermelo criticizes Boltzmann

1896    Boltzmann posits direction of time to save his H-theorem

1898    Boltzmann Vorlesungen über Gas Theorie

1905    Boltzmann kinetic theory of matter in Encyklopädie der mathematischen Wissenschaften

1906    Boltzmann dies

1910    Paul Hertz uses “Phase Space” (Phasenraum)

1911    Ehrenfest’s article in Encyklopädie der mathematischen Wissenschaften

1913    A. Rosenthal writes the first paper using the phrase “Phasenraum”, combining the work of Boltzmann and Poincaré. “Beweis der Unmöglichkeit ergodischer Gassysteme” (Ann. D. Physik, 42, 796 (1913))

1913    Plancherel, “Beweis der Unmöglichkeit ergodischer mechanischer Systeme” (Ann. D. Physik, 42, 1061 (1913)).  Also uses “Phasenraum”.

The Lens of Gravity

Gravity provided the backdrop for one of the most important paradigm shifts in the history of physics.  Prior to Albert Einstein’s general theory of relativity, trajectories were paths described by geometry.  After the theory of general relativity, trajectories are paths caused by geometry.  This chapter explains how Einstein arrived at his theory of gravity, relying on the space-time geometry of Hermann Minkowski, whose work he had originally harshly criticized.  The confirmation of Einstein’s theory was one of the dramatic high points in 20th century history of physics when Arthur Eddington journeyed to an island off the coast of Africa to observe stellar deflections during a solar eclipse.  If Galileo was the first rock star of physics, then Einstein was the first worldwide rock star of science.

1697    Johann Bernoulli was first to find solution to shortest path between two points on a curved surface (1697).

1728    Euler found the geodesic equation.

1783    The pair 40 Eridani B/C was discovered by William Herschel on 31 January

1783    John Michell explains infalling object would travel faster than speed of light

1796    Laplace describes “dark stars” in Exposition du système du monde

1827    The first orbit of a binary star computed by Félix Savary for the orbit of Xi Ursae Majoris.

1827    Gauss curvature Theorema Egregium

1844    Bessel notices periodic displacement of Sirius with period of half a century

1844    The name “geodesic line” is attributed to Liouville.

1845    Buys Ballot used musicians with absolute pitch for the first experimental verification of the Doppler effect

1854    Riemann’s Habilitationsschrift

1862    Discovery of Sirius B (a white dwarf)

1868    Darboux suggested motions in n-dimensions

1872    Lipschitz first to apply Riemannian geometry to the principle of least action.

1895    Hilbert arrives in Göttingen

1902    Minkowski arrives in Göttingen

1905    Einstein’s miracle year

1906    Poincaré describes Lorentz transformations as rotations in 4D

1907    Einstein has “happiest thought” in November

1907    Einstein’s relativity review in Jahrbuch

1908    Minkowski’s Space and Time lecture

1908    Einstein appointed to unpaid position at University of Bern

1909    Minkowski dies

1909    Einstein appointed associate professor of theoretical physics at U of Zürich

1910    40 Eridani B was discovered to be of spectral type A (white dwarf)

1910    Size and mass of Sirius B determined (heavy and small)

1911    Laue publishes first textbook on relativity theory

1911    Einstein accepts position at Prague

1911    Einstein goes to the limits of special relativity applied to gravitational fields

1912    Einstein’s two papers establish a scalar field theory of gravitation

1912    Einstein moves from Prague to ETH in Zürich in fall.  Begins collaboration with Grossmann.

1913    Einstein EG paper

1914    Adams publishes spectrum of 40 Eridani B

1915    Sirius B determined to be also a low-luminosity type A white dwarf

1915    Einstein completes his general theory of relativity

1916    Density of 40 Eridani B by Ernst Öpik

1916    Schwarzschild paper

1916 Einstein publishes theory of gravitational waves

1919    Eddington expedition to Principe

1920    Eddington paper on deflection of light by the sun

1922    Willem Luyten coins phrase “white dwarf”

1924    Eddington found a set of coordinates that eliminated the singularity at the Schwarzschild radius

1926    R. H. Fowler publishes paper on degenerate matter and composition of white dwarfs

1931    Chandrasekhar calculated the limit for collapse to white dwarf stars at 1.4 solar masses

1933    Georges Lemaitre states the coordinate singularity was an artefact

1934    Walter Baade and Fritz Zwicky proposed the existence of the neutron star only a year after the discovery of the neutron by Sir James Chadwick.

1939    Oppenheimer and Snyder showed ultimate collapse of a 3-solar-mass “frozen star”

1958    David Finkelstein paper

1965    Antony Hewish and Samuel Okoye discovered “an unusual source of high radio brightness temperature in the Crab Nebula”. This source turned out to be the Crab Nebula neutron star that resulted from the great supernova of 1054.

1967    Jocelyn Bell and Antony Hewish discovered regular radio pulses from CP 1919. This pulsar was later interpreted as an isolated, rotating neutron star.

1967    Wheeler’s “black hole” talk

1974    Joseph Taylor and Russell Hulse discovered the first binary pulsar, PSR B1913+16, which consists of two neutron stars (one seen as a pulsar) orbiting around their center of mass.

2015    LIGO detects gravitational waves on Sept. 14 from the merger of two black holes

2017    LIGO detects the merger of two neutron stars

On the Quantum Footpath

The concept of the trajectory of a quantum particle almost vanished in the battle between Werner Heisenberg’s matrix mechanics and Erwin Schrödinger’s wave mechanics.  It took Niels Bohr and his complementarity principle of wave-particle duality to cede back some reality to quantum trajectories.  However, Schrödinger and Einstein were not convinced and conceived of quantum entanglement to refute the growing acceptance of the Copenhagen Interpretation of quantum physics.  Schrödinger’s cat was meant to be an absurdity, but ironically it has become a central paradigm of practical quantum computers.  Quantum trajectories took on new meaning when Richard Feynman constructed quantum theory based on the principle of least action, inventing his famous Feynman Diagrams to help explain quantum electrodynamics.

1885    Balmer theory of hydrogen spectral lines: λ = B n² / (n² − 4)

1897    J. J. Thomson discovered the electron

1904    Thomson plum pudding model of the atom

1911    Bohr PhD thesis filed. Studies on the electron theory of metals.  Visited England.

1911    Rutherford nuclear model

1911    First Solvay conference

1911    “ultraviolet catastrophe” coined by Ehrenfest

1913    Bohr combined Rutherford’s nuclear atom with Planck’s quantum hypothesis: 1913 Bohr model

1913    Ehrenfest adiabatic hypothesis

1914-1916       Bohr at Manchester with Rutherford

1916    Bohr appointed Chair of Theoretical Physics at University of Copenhagen: a position that was made just for him

1916    Schwarzschild and Epstein introduce action-angle coordinates into quantum theory

1920    Heisenberg enters University of Munich to obtain his doctorate

1920    Bohr’s Correspondence principle: Classical physics for large quantum numbers

1921    Bohr Founded Institute of Theoretical Physics (Copenhagen)

1922-1923       Heisenberg studies with Born, Franck and Hilbert at Göttingen while Sommerfeld is in the US on sabbatical.

1923    Heisenberg Doctorate.  The exam does not go well.  Unable to derive the resolving power of a microscope in response to question by Wien.  Becomes Born’s assistant at Göttingen.

1924    Heisenberg visits Niels Bohr in Copenhagen (and met Einstein?)

1924    Heisenberg Habilitation at Göttingen on anomalous Zeeman

1924 – 1925    Heisenberg worked with Bohr in Copenhagen, returned summer of 1925 to Göttingen

1924    Pauli exclusion principle and state occupancy

1924    de Broglie hypothesis extended wave-particle duality to matter

1924    Bohr predicted Hafnium (72)

1924    Kronig’s proposal for electron self spin

1924    Bose (Einstein)

1925    Heisenberg paper on quantum mechanics

1925    Dirac, reading proof from Heisenberg, recognized the analogy of noncommutativity with Poisson brackets and the correspondence with Hamiltonian mechanics.

1925    Uhlenbeck and Goudsmit: spin

1926    Born, Heisenberg, Kramers: virtual oscillators at transition frequencies: Matrix mechanics (alternative to Bohr-Kramers-Slater 1924 model of orbits).  Heisenberg was Born’s student at Göttingen.

1926    Schrödinger wave mechanics

1927    de Broglie hypothesis confirmed by Davisson and Germer

1927    Complementarity by Bohr: wave-particle duality “Evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.”

1927    Heisenberg uncertainty principle (Heisenberg was in Copenhagen 1926 – 1927)

1927    Solvay Conference in Brussels

1928    Heisenberg to University of Leipzig

1928    Dirac relativistic QM equation

1929    de Broglie Nobel Prize

1930    Solvay Conference

1932    Heisenberg Nobel Prize

1932    von Neumann operator algebra

1933    Dirac Lagrangian form of QM (basis of Feynman path integral)

1933    Schrödinger and Dirac Nobel Prize

1935    Einstein, Podolsky and Rosen EPR paper

1935 Bohr’s response to Einstein’s “EPR” paradox

1935    Schrödinger’s cat

1939    Feynman graduates from MIT

1941    Heisenberg (head of German atomic project) visits Bohr in Copenhagen

1942    Feynman PhD at Princeton, “The Principle of Least Action in Quantum Mechanics”

1942 – 1945    Manhattan Project, Bethe-Feynman equation for fission yield

1943    Bohr escapes to Sweden in a fishing boat.  Went on to England secretly.

1945    Pauli Nobel Prize

1945    Death of Feynman’s wife Arline (married 4 years)

1945    Fall, Feynman arrives at Cornell ahead of Hans Bethe

1947    Shelter Island conference: Lamb Shift, did Kramers give a talk suggesting that infinities could be subtracted?

1947    Fall, Dyson arrives at Cornell

1948    Pocono Manor, Pennsylvania, troubled unveiling of path integral formulation and Feynman diagrams, Schwinger’s master presentation

1948    Feynman and Dirac. Summer drive across the US with Dyson

1949    Dyson joins IAS as a postdoc, trains a cohort of theorists in Feynman’s technique

1949    Karplus and Kroll first g-factor calculation

1950    Feynman moves to Cal Tech

1965    Schwinger, Tomonaga and Feynman Nobel Prize

1967    Hans Bethe Nobel Prize

From Butterflies to Hurricanes

Half a century after Poincaré first glimpsed chaos in the three-body problem, the great Russian mathematician Andrey Kolmogorov presented a sketch of a theorem that could prove that orbits are stable.  In the hands of Vladimir Arnold and Jürgen Moser, this became the KAM theory of Hamiltonian chaos.  This chapter shows how KAM theory fed into topology in the hands of Stephen Smale and helped launch the new field of chaos theory.  Edward Lorenz discovered chaos in numerical models of atmospheric weather and discovered the eponymous strange attractor.  Mathematical aspects of chaos were further developed by Mitchell Feigenbaum studying bifurcations in the logistic map that describes population dynamics.

1760    Euler 3-body problem (two fixed centers and coplanar third body)

1763    Euler colinear 3-body problem

1772    Lagrange equilateral 3-body problem

1881-1886       Poincaré mémoires “Sur les courbes définies par une équation différentielle”

1890    Poincaré “Sur le problème des trois corps et les équations de la dynamique”. First-return map, Poincaré recurrence theorem, stable and unstable manifolds

1892 – 1899    Poincare New Methods in Celestial Mechanics

1892    Lyapunov The General Problem of the Stability of Motion

1899    Poincare homoclinic trajectory

1913    Birkhoff proves Poincaré’s last geometric theorem, a special case of the three-body problem.

1927    van der Pol and van der Mark

1937    Coarse systems, Andronov and Pontryagin

1938    Morse theory

1942    Hopf bifurcation

1945    Cartwright and Littlewood study the van der Pol equation (Radar during WWII)

1954    Kolmogorov A. N., On conservation of conditionally periodic motions for a small change in Hamilton’s function.

1960    Lorenz: 12 equations

1962    Moser On Invariant Curves of Area-Preserving Mappings of an Annulus.

1963    Arnold Small denominators and problems of the stability of motion in classical and celestial mechanics

1963    Lorenz: 3 equations

1964    Arnold diffusion

1965    Smale’s horseshoe

1969    Chirikov standard map

1971    Ruelle-Takens (Ruelle coins phrase “strange attractor”)

1972    “Butterfly Effect” coined as the title of Lorenz’s talk (by Philip Merilees)

1975    Gollub-Swinney observe route to turbulence along lines of Ruelle

1975    Yorke coins “chaos theory”

1976    Robert May writes review article of the logistic map

1977    New York conference on bifurcation theory

1987    James Gleick Chaos: Making a New Science

Darwin in the Clockworks

The preceding timelines related to the central role played by families of trajectories in phase space to explain the time evolution of complex systems.  These ideas are extended to explore the history and development of the theory of natural evolution by Charles Darwin.  Darwin had many influences, including ideas from Thomas Malthus in the context of economic dynamics.  After Darwin, the ideas of evolution matured to encompass broad topics in evolutionary dynamics and the emergence of the idea of fitness landscapes and game theory driving the origin of new species.  The rise of genetics with Gregor Mendel supplied a firm foundation for molecular evolution, leading to the molecular clock of Linus Pauling and the replicator dynamics of Richard Dawkins.

1202    Fibonacci Liber Abaci

1766    Thomas Robert Malthus born

1776    Adam Smith The Wealth of Nations

1798    Malthus “An Essay on the Principle of Population”

1817    Ricardo Principles of Political Economy and Taxation

1838    Cournot early equilibrium theory in duopoly

1848    John Stuart Mill Principles of Political Economy

1848    Karl Marx Communist Manifesto

1859    Darwin Origin of Species

1867    Karl Marx Das Kapital

1871    Darwin Descent of Man, and Selection in Relation to Sex

1871    Jevons Theory of Political Economy

1871    Menger Principles of Economics

1874    Walras Éléments d’économie politique pure, or Elements of Pure Economics (1954)

1890    Marshall Principles of Economics

1908    Hardy constant genetic variance

1910    Brouwer fixed point theorem

1910    Alfred J. Lotka autocatalytic chemical reactions

1913    Zermelo determinacy in chess

1922    Fisher dominance ratio

1922    Fisher mutations

1925    Lotka predator-prey in biomathematics

1926    Vito Volterra published the same equations independently

1927    JBS Haldane (1892—1964) mutations

1928    von Neumann proves the minimax theorem

1930    Fisher ratio of sexes

1932    Wright Adaptive Landscape

1932    Haldane The Causes of Evolution

1933    Kolmogorov Foundations of the Theory of Probability

1934    Rudolph Carnap The Logical Syntax of Language

1936    John Maynard Keynes, The General Theory of Employment, Interest and Money

1936    Kolmogorov generalized predator-prey systems

1938    Borel symmetric payoff matrix

1942    Sewall Wright    Statistical Genetics and Evolution

1943    McCulloch and Pitts A Logical Calculus of Ideas Immanent in Nervous Activity

1944    von Neumann and Morgenstern Theory of Games and Economic Behavior

1950    Prisoner’s Dilemma simulated at RAND Corporation

1950    John Nash Equilibrium points in n-person games and The Bargaining Problem

1951    John Nash Non-cooperative Games

1952    McKinsey Introduction to the Theory of Games (first textbook)

1953    John Nash Two-Person Cooperative Games

1953    Watson and Crick DNA

1955    Braithwaite’s Theory of Games as a Tool for the Moral Philosopher

1961    Lewontin Evolution and the Theory of Games

1962    Patrick Moran The Statistical Processes of Evolutionary Theory

1962    Linus Pauling molecular clock

1968    Motoo Kimura  neutral theory of molecular evolution

1972    Maynard Smith introduces the evolutionary stable solution (ESS)

1972    Gould and Eldredge Punctuated equilibrium

1973    Maynard Smith and Price The Logic of Animal Conflict

1973    Black–Scholes option-pricing model

1977    Eigen and Schuster The Hypercycle

1978    Replicator equation (Taylor and Jonker)

1982    Hopfield network

1982    John Maynard Smith Evolution and the Theory of Games

1984    R. Axelrod The Evolution of Cooperation

The Measure of Life

This final topic extends the ideas of dynamics into abstract spaces of high dimension to encompass the idea of a trajectory of life.  Health and disease become dynamical systems defined by all the proteins and nucleic acids that comprise the physical self.  Concepts from network theory, autonomous oscillators and synchronization contribute to this viewpoint.  Healthy trajectories are like stable limit cycles in phase space, but disease can knock the system trajectory into dangerous regions of health space, as doctors turn to new developments in personalized medicine to try to return the individual to a healthy path.  This is the ultimate generalization of Galileo’s simple parabolic trajectory.

1642    Galileo dies

1656    Huygens invents pendulum clock

1665    Huygens observes “odd kind of sympathy” in synchronized clocks

1673    Huygens publishes Horologium Oscillatorium sive de motu pendulorum

1736    Euler Seven Bridges of Königsberg

1845    Kirchhoff’s circuit laws

1852    Guthrie four color problem

1857    Cayley trees

1858    Hamiltonian cycles

1887    Cajal neural staining microscopy

1913    Michaelis Menten dynamics of enzymes

1924    Berger, Hans: neural oscillations (Berger invented the EEG)

1926    van der Pol dimensionless form of equation

1927    van der Pol periodic forcing

1943    McCulloch and Pitts mathematical model of neural nets

1948    Wiener cybernetics

1952    Hodgkin and Huxley action potential model

1952    Turing instability model

1956    Sutherland cyclic AMP

1957    Broadbent and Hammersley bond percolation

1958    Rosenblatt perceptron

1959    Erdős and Rényi random graphs

1962    Cohen EGF discovered

1965    Sebeok coined zoosemiotics

1966    Mesarovich systems biology

1967    Winfree biological rhythms and coupled oscillators

1969    Glass Moiré patterns in perception

1970    Rodbell G-protein

1971    phrase “strange attractor” coined (Ruelle)

1972    phrase “signal transduction” coined (Rensing)

1975    phrase “chaos theory” coined (Yorke)

1975    Werbos backpropagation

1975    Kuramoto transition

1976    Robert May logistic map

1977    Mackey-Glass equation and dynamical disease

1982    Hopfield network

1990    Mirollo and Strogatz pulse-coupled oscillators

1997    Tomita systems biology of a cell

1998    Strogatz and Watts Small World network

1999    Barabási Scale Free networks

2000    Sequencing of the human genome

Physics in the Age of Contagion. Part 2: The Second Wave of COVID-19

Since my last Blog on the bifurcation physics of COVID-19, most of the US has approached the crest of “the wave”, with the crest arriving sooner in hot spots like New York City and a few weeks later in rural areas like Lafayette, Indiana where I live. As of the posting of this Blog, most of the US is in lock-down with only a few hold-out states. Fortunately, this was sufficient to avoid the worst case scenarios of my last Blog, but we are still facing severe challenges.

There is good news! The second wave can be managed and minimized if we don’t come out of lock-down too soon.

One fall-out of the (absolutely necessary) lock-down is the serious damage done to the economy, which is now in its greatest contraction since the Great Depression. The longer the lock-down goes, the deeper the damage and the longer the recovery. The single most important question at this point in time, as we approach the crest, is when we can emerge from lock-down. This is a critical question. If we emerge too early, then the pandemic will re-kindle into a second wave that could exceed the first. But if we emerge later than necessary, then the economy may take a decade to fully recover. We need a Goldilocks solution: not too early and not too late. How do we assess that?

The Value of Qualitative Physics

In my previous Blog I laid out a very simple model called the Susceptible-Infected-Removed (SIR) model and provided a Python program whose parameters can be tweaked to explore the qualitative behavior of the model, answering questions like: What is the effect of longer or shorter quarantine periods? What role does social distancing play in saving lives? What happens if only a small fraction of the population pays attention and practices social distancing?

It is necessary to wait to release the lock-down until at least several weeks after the crest has passed to avoid the second wave.

It is important to note that none of the parameters in that SIR model are reliable and no attempt was made to fit the parameters to the actual data. To expert epidemiological modelers, this simplistic model is less than useless and potentially dangerous if wrong conclusions are arrived at and disseminated on the internet.

But here is the point: The actual numbers are less important than the functional dependences. What matters is how the solution changes as a parameter is changed. The Python programs allow non-experts to gain an intuitive understanding of the qualitative physics of the pandemic. For instance, it is valuable to gain a feeling of how sensitive the pandemic is to small changes in parameters. This is especially important because of the bifurcation physics of COVID-19 where very small changes can cause very different trajectories of the population dynamics.
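To make that sensitivity concrete, here is a self-contained toy, separate from the two-compartment model below: a bare-bones SIR integration with two infection rates that straddle the threshold β = μ. The rates are purely illustrative, but a change of ten percent in β flips the outcome from die-out to a full epidemic.

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

mu = 0.2                                  # recovery rate per day

def sir(x, t, beta):
    i, s = x
    return [beta*i*s - mu*i, -beta*i*s]   # bare SIR flow

t = np.linspace(0, 300, 3000)
plt.figure()
for beta in [0.19, 0.21]:                 # just below / just above threshold
    x = integrate.odeint(sir, [0.001, 0.999], t, args=(beta,))
    plt.semilogy(t, x[:,0], linewidth=0.5, label=f'beta = {beta}')
plt.xlabel('Days')
plt.ylabel('Infected fraction')
plt.legend()
plt.show()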

In the spirit of the value of qualitative physics, I am extending here that simple SIR model to a slightly more sophisticated model that can help us understand the issues and parametric dependences of this question of when to emerge from lock-down. Again, no effort is made to fit actual data of this pandemic, but there are still important qualitative conclusions to be made.

The Two-Compartment SIR Model of COVID-19

To approach a qualitative understanding of what happens by varying the length of time of the country-wide shelter-in-place, it helps to think of two cohorts of the public: those who are compliant and conscientious, valuing the lives of others, and those who don’t care and are non-compliant.

Fig. 1 Two-compartment SIR model for compliant and non-compliant cohorts.

These two cohorts can each be modeled separately by their own homogeneous SIR models, but with a coupling between them because even those who shelter in place must go out for food and medicines. The equations of this two-compartment model are

$$\frac{dI_n}{dt} = -\mu I_n + \beta \left( k_{nn} I_n + k_{nq} I_q \right) S_n$$

$$\frac{dS_n}{dt} = -\beta \left( k_{nn} I_n + k_{nq} I_q \right) S_n$$

$$\frac{dI_q}{dt} = -\mu I_q + \beta \left( k_{qn} I_n + k_{qq} I_q \right) S_q$$

$$\frac{dS_q}{dt} = -\beta \left( k_{qn} I_n + k_{qq} I_q \right) S_q$$

where the subscripts n and q refer to the non-compliant and the compliant cohorts, respectively. I and S are the infected and susceptible populations. The coupling parameters are knn for the coupling among non-compliant individuals, knq for the effect of the compliant individuals on the non-compliant, kqn for the effect of the non-compliant individuals on the compliant, and kqq for the effect of the compliant cohort on themselves.

There are two time frames for the model. The first time frame is the time of lock-down when the compliant cohort is sheltering in place and practicing good hygiene, but they still need to go out for food and medicines. (This model does not include first responders. They are an important cohort, but do not make up a large fraction of the national population.) The second time frame is after the lock-down is removed. Even then, good practices by the compliant group are expected to continue, with the purpose of lowering infections among themselves and among others.

This two-compartment model has roughly 8 adjustable parameters, all of which can be varied to study their effects on the predictions. None of them are well known, but general trends still can be explored.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat March 21 2020

@author: nolte

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)

"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('SIR.py')

def solve_flow(param, max_time=1000.0):
    # Integrate the two-compartment SIR flow.
    # param = (mu, beta, knn, knq, kqn, kqq); the initial conditions
    # In0, Sn0, Iq0, Sq0 and the time span tlo, thi are read from globals.

    def flow_deriv(x_y_z_w, tspan, mu, beta, knn, knq, kqn, kqq):
        In, Sn, Iq, Sq = x_y_z_w

        Inp = -mu*In + beta*knn*In*Sn + beta*knq*Iq*Sn
        Snp = -beta*knn*In*Sn - beta*knq*Iq*Sn

        Iqp = -mu*Iq + beta*kqn*In*Sq + beta*kqq*Iq*Sq
        Sqp = -beta*kqn*In*Sq - beta*kqq*Iq*Sq

        return [Inp, Snp, Iqp, Sqp]

    x0 = [In0, Sn0, Iq0, Sq0]

    # Solve for the trajectories
    t = np.linspace(tlo, thi, int(thi - tlo))
    x_t = integrate.odeint(flow_deriv, x0, t, args=param)

    return t, x_t

beta = 0.02   # infection rate
dill = 5      # mean days infectious
mu = 1/dill   # decay rate
fnq = 0.3     # fraction not quarantining
fq = 1-fnq    # fraction quarantining
P = 330       # Population of the US in millions
mr = 0.002    # Mortality rate
dq = 90       # Days of lock-down (this is the key parameter)

# During quarantine
knn = 50      # Average connections per day within the non-compliant group
kqq = 0       # Connections within the compliant group
knq = 0       # Effect of the compliant group on the non-compliant
kqn = 5       # Effect of the non-compliant group on the compliant

initfrac = 0.0001          # Initial conditions:
In0 = initfrac*fnq         # infected non-compliant
Sn0 = (1-initfrac)*fnq     # susceptible non-compliant
Iq0 = initfrac*fq          # infected compliant
Sq0 = (1-initfrac)*fq      # susceptible compliant

tlo = 0
thi = dq

param = (mu, beta, knn, knq, kqn, kqq)    # flow parameters

t1, y1 = solve_flow(param)

In1 = y1[:,0]
Sn1 = y1[:,1]
Rn1 = fnq - In1 - Sn1
Iq1 = y1[:,2]
Sq1 = y1[:,3]
Rq1 = fq - Iq1 - Sq1

# Lift the quarantine: Compliant group continues social distancing
knn = 50      # Adjusted coupling parameters
kqq = 5
knq = 20
kqn = 15

fin1 = len(t1)
In0 = In1[fin1-1]
Sn0 = Sn1[fin1-1]
Iq0 = Iq1[fin1-1]
Sq0 = Sq1[fin1-1]

tlo = fin1
thi = fin1 + 365-dq

param = (mu, beta, knn, knq, kqn, kqq)

t2, y2 = solve_flow(param)

In2 = y2[:,0]
Sn2 = y2[:,1]
Rn2 = fnq - In2 - Sn2
Iq2 = y2[:,2]
Sq2 = y2[:,3]
Rq2 = fq - Iq2 - Sq2

fin2 = len(t2)
t = np.zeros(shape=(fin1+fin2,))
In = np.zeros(shape=(fin1+fin2,))
Sn = np.zeros(shape=(fin1+fin2,))
Rn = np.zeros(shape=(fin1+fin2,))
Iq = np.zeros(shape=(fin1+fin2,))
Sq = np.zeros(shape=(fin1+fin2,))
Rq = np.zeros(shape=(fin1+fin2,))

t[0:fin1] = t1
In[0:fin1] = In1
Sn[0:fin1] = Sn1
Rn[0:fin1] = Rn1
Iq[0:fin1] = Iq1
Sq[0:fin1] = Sq1
Rq[0:fin1] = Rq1


t[fin1:fin1+fin2] = t2
In[fin1:fin1+fin2] = In2
Sn[fin1:fin1+fin2] = Sn2
Rn[fin1:fin1+fin2] = Rn2
Iq[fin1:fin1+fin2] = Iq2
Sq[fin1:fin1+fin2] = Sq2
Rq[fin1:fin1+fin2] = Rq2

plt.figure(1)
lines = plt.semilogy(t,In,t,Iq,t,(In+Iq))
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant','Total'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Infected')
plt.title('Infection Dynamics for COVID-19 in US')
plt.show()

plt.figure(2)
lines = plt.semilogy(t,Rn*P*mr,t,Rq*P*mr)
plt.ylim([0.001,1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Deaths')
plt.title('Total Deaths for COVID-19 in US')
plt.show()

D = P*mr*(Rn[fin1+fin2-1] + Rq[fin1+fin2-1])
print('Deaths = ',D)

plt.figure(3)
lines = plt.semilogy(t,In/fnq,t,Iq/fq)
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Fraction of Sub-Population')
plt.title('Population Dynamics for COVID-19 in US')
plt.show()

Trends

The obvious trend to explore is the effect of changing the quarantine period. Fig. 2 shows the results of an early release from shelter-in-place compared to pushing the release date one month longer. The trends are:

  • If the lock-down is released early, the second wave can be larger than the first wave
  • If the lock-down is released early, the compliant cohort will be mostly susceptible and will have the majority of new cases
  • There are 40% more deaths when the lock-down is released early

If the lock-down is ended just after the crest, this is too early. It is necessary to wait at least several weeks after the crest has passed to avoid the second wave. There are almost 40% more deaths for the 90-day period than for the 120-day period. In addition, when the quarantine is stopped too early, the compliant cohort, because they are the larger fraction and are mostly susceptible, will suffer a greater number of new infections than the non-compliant group who put them at risk in the first place, and their second wave would be worse than their first. This would be a travesty! But by pushing the quarantine out by just one additional month, the compliant group will suffer fewer total deaths than the non-compliant group. Most importantly, the second wave would be substantially smaller than the first wave for both cohorts.

Fig. 2 Comparison of a 90-day quarantine versus a 120-day quarantine for the compliant and non-compliant cohorts of individuals. When the ban is lifted too soon, the second wave can be bigger than the first. This model assumes that 30% of the population is non-compliant and that the compliant group continues to practice social distancing.

The lesson from this simple model is clear: push the quarantine date out as far as the economy can allow! And there is good news: the second wave can be managed and minimized if we don’t come out of lock-down too soon.
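
To make the trend concrete, here is a minimal sketch (a compact re-packaging of the listing above, with the same parameter values) that sweeps the lock-down length dq and prints the total deaths predicted by the model:

import numpy as np
from scipy import integrate

beta = 0.02; dill = 5; mu = 1/dill     # infection rate and recovery rate
fnq = 0.3; fq = 1 - fnq                # non-compliant and compliant fractions
P = 330; mr = 0.002                    # US population (millions) and mortality rate

def total_deaths(dq, days=365):
    def flow(x, t, knn, knq, kqn, kqq):
        In, Sn, Iq, Sq = x
        return [-mu*In + beta*knn*In*Sn + beta*knq*Iq*Sn,
                -beta*knn*In*Sn - beta*knq*Iq*Sn,
                -mu*Iq + beta*kqn*In*Sq + beta*kqq*Iq*Sq,
                -beta*kqn*In*Sq - beta*kqq*Iq*Sq]
    x0 = [1e-4*fnq, (1 - 1e-4)*fnq, 1e-4*fq, (1 - 1e-4)*fq]
    # phase 1: lock-down couplings (knn, knq, kqn, kqq) as in the listing above
    y1 = integrate.odeint(flow, x0, np.linspace(0, dq, dq), args=(50, 0, 5, 0))
    # phase 2: lock-down lifted, compliant group keeps distancing
    y2 = integrate.odeint(flow, y1[-1], np.linspace(dq, days, days - dq), args=(50, 20, 15, 5))
    In, Sn, Iq, Sq = y2[-1]
    return P*mr*((fnq - In - Sn) + (fq - Iq - Sq))   # deaths in millions

for dq in (60, 90, 120):
    print(dq, 'days of lock-down ->', round(total_deaths(dq), 3), 'million deaths')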

Caveats and Disclaimers

This model is purely qualitative and only has value for studying trends that depend on changing parameters. Absolute numbers are not meant to be taken too seriously. For instance, the total number of deaths in this model is about 2× larger than what we are hearing from Dr. Fauci of NIAID at this time, so this simple model overestimates fatalities. Also, it doesn’t matter whether the number of quarantine days should be 60, 90 or 120 … what matters is that an additional month makes a large difference in the total number of deaths. If someone does want to model the best possible number of quarantine days — the Goldilocks solution — then they need to get their hands on a professional epidemiological model (or an actual epidemiologist). The model presented here is not appropriate for that purpose.

Note added in postscript on April 8: Since posting the original blog on April 6, Dr. Fauci announced that as many as 90% of individuals are practicing some form of social distancing. In addition, many infections are not being reported because of lack of testing, which means that the mortality rate is lower than thought. Therefore, I have updated the mortality rate and the figures with numbers that better reflect the current situation (which is changing daily), but still without any attempt to fit the numerous other parameters.

Physics in the Age of Contagion: The Bifurcation of COVID-19

We are at War! That may sound like a cliche, but more people in the United States may die over the next year from COVID-19 than US soldiers have died in all the wars ever fought in US history. It is a war against an invasion by an alien species that has no remorse and gives no quarter. In this war, one of our gravest enemies, beyond the virus, is misinformation. The Internet floods our attention with half-baked half-truths. There may even be foreign powers that see this time of crisis as an opportunity to sow fear through disinformation to divide the country.

Because of the bifurcation physics of the SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

At such times, physicists may be tapped to help the war effort. This is because physicists have unique skill sets that help us see through the distractions of details to get to the essence of the problem. Our solutions are often back-of-the-envelope, but that is their strength. We can see zeroth-order results stripped bare of all the obfuscating minutia.

One way physicists can help in this war is to shed light on how infections percolate through a population and to provide estimates on the numbers involved. Perhaps most importantly, we can highlight what actions ordinary citizens can take that best guard against the worst-case scenarios of the pandemic. The zeroth-order solutions may not say anything new that the experts don’t already know, but it may help spread the word about why such simple actions as shelter-in-place may save millions of lives.

The SIR Model of Infection

One of the simplest models for infection is the so-called SIR model that stands for Susceptible-Infected-Removed. This model is an averaged model (or a mean-field model) that disregards the fundamental network structure of human interactions and considers only averages. The dynamical flow equations are very simple

dI/dt = β<k> S I − μ I

dS/dt = −β<k> S I

where I is the infected fraction of the population, and S is the susceptible fraction of the population. The coefficient μ is the rate at which patients recover or die, <k> is the average number of “links” to others, and β is the infection probability per link per day. The total population fraction is given by the constraint

S + I + R = 1

where R is the removed population, most of whom will be recovered, but some fraction will have passed away. The number of deaths is

D = m P Rinf

where m is the mortality rate, P is the total population, and Rinf is the long-term removed fraction of the population after the infection has run its course.
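
As a rough illustration (using the nominal numbers from the Python code below, m ≈ 0.01 and P = 330 million) with a long-term removed fraction Rinf ≈ 0.6, this gives D ≈ 0.01 × 330 million × 0.6 ≈ 2 million, which is the scale of deaths quoted for Fig. 2.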

The nullclines, the curves along which the time derivatives vanish, are

I = 0

S = 0

S = μ/(β<k>)

Where the first nullcline intersects the third nullcline is the only fixed point of this simple model

(S*, I*) = (μ/(β<k>), 0)

The phase space of the SIR flow is shown in Fig. 1 plotted as the infected fraction as a function of the susceptible fraction. The diagonal is the set of initial conditions where R = 0. Each initial condition on the diagonal produces a dynamical trajectory. The dashed trajectory that starts at (1,0) is the trajectory for a new disease infecting a fully susceptible population. The trajectories terminate on the I = 0 axis at long times when the infection dies out. In this model, there is always a fraction of the population who never get the disease, not through unusual immunity, but through sheer luck.

Fig. 1 Phase space of the SIR model. The single fixed point has “marginal” stability, but leads to a finite fraction of the population who are never infected. The dashed trajectory is the trajectory of the infection starting with a single case. (Adapted from “Introduction to Modern Dynamics” (Oxford University Press, 2019))

The key to understanding the scale of the pandemic is the susceptible fraction at the fixed point S*. For the parameters chosen to plot Fig. 1, the value of S* is 1/4, or β<k> = 4μ. It is the high value of the infection rate β<k> relative to the decay rate of the infection μ that allows a large fraction of the population to become infected. As the infection rate gets smaller, the fixed point S* moves towards unity on the horizontal axis, and less of the population is infected.

As soon as S* exceeds unity, for the condition

β<k> < μ

then the infection cannot grow exponentially and will decay away without infecting an appreciable fraction of the population. This condition represents a bifurcation in the infection dynamics. It means that if the infection rate can be reduced below the recovery rate, then the pandemic fades away. (It is important to point out that the R0 of a network model (the number of people each infected person infects) is analogous to the inverse of S*. When R0 > 1 then the infection spreads, just as when S* < 1, and vice versa.)
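
For example, with a recovery time of 14 days, μ = 1/14 ≈ 0.07 per day, so the pandemic fades away only if the infection rate β<k> can be pushed below roughly 0.07 new infections per day per infected individual.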

This bifurcation condition makes the strategy for fighting the pandemic clear. The parameter μ is fixed by the virus and cannot be altered. But the infection probability per day per social link, β, can be reduced by clean hygiene:

  • Don’t shake hands
  • Wash your hands often and thoroughly
  • Don’t touch your face
  • Cover your cough or sneeze in your elbow
  • Wear disposable gloves
  • Wipe down touched surfaces with disinfectants

And the number of contacts per person, <k>, can be reduced by social distancing:

  • No large gatherings
  • Stand away from others
  • Shelter-in-place
  • Self quarantine

The big question is: can the infection rate be reduced below the recovery rate through the actions of clean hygiene and social distancing? If there is a chance that it can, then literally millions of lives can be saved. So let’s take a look at COVID-19.

The COVID-19 Pandemic

To get a handle on modeling the COVID-19 pandemic using the (very simplistic) SIR model, one key parameter is the average number of people you are connected to, represented by <k>. These are not necessarily just the people in your social network; they also include people who may touch a surface you touched earlier, or who touch a surface you later touch yourself. It also includes anyone in your proximity who has coughed or sneezed in the past few minutes. The number of people in your network is a topic of keen current interest, but is surprisingly hard to pin down. For the sake of this model, I will take the number <k> = 50 as a nominal number. This is probably too small, but it is compensated by the probability of infection, given by a factor r, and by the number of days that an individual is infectious.

The spread is helped when infectious people go about their normal lives infecting others. But if a fraction of the population self-quarantines, especially after they “may” have been exposed, then the effective number of infectious days dinf per person can be decreased. A rough equation that captures this is

dinf = fnq dill + (1 − fnq) dill exp(−dq/dill)

where fnq is the fraction of the population that does NOT self quarantine, dill is the mean number of days a person is ill (and infectious), and dq is the number of days quarantined. This number of infectious days goes into the parameter β through

β = r dinf

where r = 0.0002 infections per link per day², which is a very rough estimate of the coefficient for COVID-19.
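
As a quick worked example (a sketch using the two formulas above, with the nominal parameter values from the code listing below):

import numpy as np

r, k = 0.0002, 50
dill = dq = 14
fnq = 0.6                                          # 60% do NOT self-quarantine
dinf = fnq*dill + (1 - fnq)*np.exp(-dq/dill)*dill
print('dinf =', round(dinf, 1), 'days')            # about 10.5 infectious days
print('beta<k>/mu =', round(r*dinf*k*dill, 2))     # about 1.5 > 1, so above threshold

This is the regime simulated below: enough people keep circulating that the infection stays above the bifurcation threshold and a wave passes through the population.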

It is clear why shelter-in-place can be so effective, especially if the number of days quarantined is equal to the number of days a person is ill. The infection could literally die out if enough people self quarantine by pushing the critical value S* above the bifurcation threshold. However, it is much more likely that large fractions of people will continue to move about. A simulation of the “wave” that passes through the US is shown in Fig. 2 (see the Python code in the section below for parameters). In this example, 60% of the population does NOT self quarantine. The wave peaks approximately 150 days after the establishment of community spread.

Fig. 2 Population dynamics for the US spread of COVID-19. The fraction that is infected represents a “wave” that passes through a community. In this simulation fnq = 60%. The total US dead after the wave has passed is roughly 2 Million in this simulation.

In addition to shelter-in-place, social distancing can have a strong effect on the disease spread. Fig. 3 shows the number of US deaths as a function of the fraction of the population who do NOT self-quarantine, for a series of average connections <k>. The bifurcation effect is clear in this graph. For instance, taking <k> = 50 as the nominal value, if 85% of the population were to shelter in place for 14 days, the disease would fall below threshold and only a small number of deaths would occur. And if that connection number can be dropped to <k> = 40, then only 60% would need to shelter in place to avoid the pandemic. By contrast, if 80% of the people don’t self-quarantine and <k> = 40, then there could be 2 Million deaths in the US by the time the disease has run its course.
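
As a rough back-of-the-envelope check of that threshold (using r = 0.0002 and dill = dq = 14 days from the code below): the bifurcation occurs when r<k>dinf = μ, which for <k> = 50 requires dinf = μ/(r<k>) ≈ 7.1 days. Setting dinf = 14[fnq + (1 − fnq)e^(−1)] equal to 7.1 gives fnq ≈ 0.23, so roughly 77% or more of the population sheltering in place pushes the model below threshold, consistent with the 85% figure quoted above.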

Because of the bifurcation physics of this SIR model of COVID-19, small changes in personal behavior (if everyone participates) can literally save Millions of lives!

Fig. 3 Bifurcation plot of the number of US deaths as a function of the fraction of the population who do NOT shelter-in-place for different average links per person. At 20 links per person, the contagion could be contained. However, at 60 links per person, nearly 90% of the population would need to quarantine for at least 14 days to stop the spread.

There has been a lot said about “flattening the curve”, which is shown in Fig. 4. There are two ways that flattening the curve saves overall lives: 1) it keeps the numbers below the threshold capacity of hospitals; and 2) it decreases the total number infected and hence decreases the total dead. When the number of critical patients exceeds hospital capacity, the mortality rate increases. This is being seen in Italy where the hospitals have been overwhelmed and the mortality rate has risen from a baseline of 1% or 2% to as large as 8%. Flattening the curve is achieved by sheltering in place, personal hygiene and other forms of social distancing. The figure shows a family of curves for different fractions of the total population who shelter in place for 14 days. If more than 70% of the population shelters in place for 14 days, then the curve not only flattens … it disappears!

Fig. 4 Flattening the curve for a range of fractions of the population that shelters in place for 14 days. (See Python code for parameters.)

SIR Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat March 21 2020
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('SIR.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y,tspan,mu,betap):
        x, y = x_y
        
        return [-mu*x + betap*x*y,-betap*x*y]
    
    x0 = [del1, del2]
    
    # Solve for the trajectories
    t = np.linspace(0, int(tlim), int(250*tlim))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

   
    return t, x_t


r = 0.0002    # 0.0002
k = 50        # connections  50
dill = 14     # days ill 14
dpq = 14      # days shelter in place 14
fnq = 0.6     # fraction NOT sheltering in place
mr0 = 0.01    # mortality rate
mr1 = 0.03     # extra mortality rate if exceeding hospital capacity
P = 330       # population of US in Millions
HC = 0.003    # hospital capacity

dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill;

betap = r*k*dinf;
mu = 1/dill;

print('beta = ',betap)
print('dinf = ',dinf)
print('beta/mu = ',betap/mu)
          
del1 = .001         # infected
del2 = 1-del1       # susceptible

tlim = np.log(P*1e6/del1)/betap + 50/betap

param = (mu, betap)    # flow parameters

t, y = solve_flow(param)
I = y[:,0]
S = y[:,1]
R = 1 - I - S

plt.figure(1)
lines = plt.semilogy(t,I,t,S,t,R)
plt.ylim([0.001,1])
plt.xlim([0,tlim])
plt.legend(('Infected','Susceptible','Removed'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Fraction of Population')
plt.title('Population Dynamics for COVID-19 in US')
plt.show()

mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)   # excess mortality only when critical cases exceed hospital capacity
Dead = mr*P*R[R.size-1]
print('US Dead = ',Dead)

D = np.zeros(shape=(100,))
x = np.zeros(shape=(100,))
for kloop in range(0,5):
    for floop in range(0,100):
        
        fnq = floop/100
        
        dinf = fnq*dill + (1-fnq)*np.exp(-dpq/dill)*dill;
        
        k = 20 + kloop*10
        betap = r*k*dinf
        
        tlim = np.log(P*1e6/del1)/betap + 50/betap

        param = (mu, betap)    # flow parameters

        t, y = solve_flow(param)       
        I = y[:,0]
        S = y[:,1]
        R = 1 - I - S
        
        mr = mr0 + mr1*(0.2*np.max(I)-HC)*np.heaviside(0.2*np.max(I)-HC,0)   # excess mortality above hospital capacity

        D[floop] = mr*P*R[R.size-1]
        x[floop] = fnq
        
    plt.figure(2)
    lines2 = plt.plot(x,D)
    plt.setp(lines2, linewidth=0.5)

plt.ylabel('US Million Deaths')
plt.xlabel('Fraction NOT Quarantining')
plt.title('Quarantine and Distancing')        
plt.legend(('20','30','40','50','60'))     # five curves: k = 20, 30, 40, 50, 60
plt.show()    


label = np.zeros(shape=(8,))
for floop in range(0,8):
    
    fq = floop/10.0
    
    dinf = (1-fq)*dill + fq*np.exp(-dpq/dill)*dill;
    
    k = 50
    betap = r*k*dinf
    
    tlim = np.log(P*1e6/del1)/betap + 50/betap

    param = (mu, betap)    # flow parameters

    t, y = solve_flow(param)       
    I = y[:,0]
    S = y[:,1]
    R = 1 - I - S
    
    plt.figure(3)
    lines2 = plt.plot(t,I*P)
    plt.setp(lines2, linewidth=0.5)
    label[floop]=fq

plt.legend(label)
plt.ylabel('US Millions Infected')
plt.xlabel('Days')
plt.title('Flattening the Curve')       

You can run this Python code yourself and explore the effects of changing the parameters. For instance, the mortality rate is modeled to increase when the number of hospital beds is exceeded by the number of critical patients. This coefficient is not well known and hence can be explored numerically. Also, the infection rate r is not known well, nor the average number of connections per person. The effect of longer quarantines can also be tested relative to the fraction who do not quarantine at all. Because of the bifurcation physics of the disease model, large changes in dynamics can occur for small changes in parameters when the dynamics are near the bifurcation threshold.
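
As one example of such a numerical experiment, here is a minimal sketch (not part of the listing above) that varies the shelter-in-place length dpq and prints the resulting effective number of infectious days:

import numpy as np

dill, fnq = 14, 0.6
for dpq in (7, 14, 28):
    dinf = fnq*dill + (1 - fnq)*np.exp(-dpq/dill)*dill
    print(dpq, 'days sheltering -> dinf =', round(dinf, 1), 'days')

Longer sheltering shrinks dinf, and through β = r·dinf it pulls the infection rate down toward the bifurcation threshold.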

Caveats and Disclaimers

This SIR model of COVID-19 is an extremely rough tool that should not be taken too literally. It can be used to explore ideas about the general effect of days quarantined, or changes in the number of social contacts, but should not be confused with the professional models used by epidemiologists. In particular, this mean-field SIR model completely ignores the discrete network character of person-to-person spread. It also homogenizes the entire country, where it is blatantly obvious that the dynamics inside New York City are very different from the dynamics in rural Indiana. And eliminating the epidemic, so that it would not come back, would require strict compliance with testing (assuming there are enough test kits) and with isolating infected individuals after the wave has passed.

Second Edition of Introduction to Modern Dynamics (Chaos, Networks, Space and Time)

The second edition of Introduction to Modern Dynamics: Chaos, Networks, Space and Time is available from Oxford University Press and Amazon.

Most physics majors will use modern dynamics in their careers: nonlinearity, chaos, network theory, econophysics, game theory, neural nets, geodesic geometry, among many others.

The first edition of Introduction to Modern Dynamics (IMD) was an upper-division junior-level mechanics textbook at the level of Thornton and Marion (Classical Dynamics of Particles and Systems) and Taylor (Classical Mechanics).  IMD helped lead an emerging trend in physics education to update the undergraduate physics curriculum.  Conventional junior-level mechanics courses emphasized Lagrangian and Hamiltonian physics, but notably missing from the classic subjects are modern dynamics topics that most physics majors will use in their careers: nonlinearity, chaos, network theory, econophysics, game theory, neural nets, geodesic geometry, among many others.  These are the topics at the forefront of physics that drive high-tech businesses and start-ups, which is where more than half of all physicists work. IMD introduced these modern topics to junior-level physics majors in an accessible form that allowed them to master the fundamentals to prepare them for the modern world.

The second edition (IMD2) continues that trend by expanding the chapters to include additional material and topics.  It rearranges several of the introductory chapters for improved logical flow and expands them to include key conventional topics that were missing in the first edition (e.g., Lagrange undetermined multipliers and expanded examples of Lagrangian applications).  It is also an opportunity to correct several typographical errors and other errata that students have identified over the past several years.  The second edition also has expanded homework problems.

The goal of IMD2 is to strengthen the sections on conventional topics (that students need to master to take their GREs) to make IMD2 attractive as a mainstream physics textbook for broader adoption at the junior level, while continuing the program of updating the topics and approaches that are relevant for the roles that physicists play in the 21st century.


New Features in Second Edition:

Second Edition Chapters and Sections

Part 1 Geometric Mechanics

• Expanded development of Lagrangian dynamics

• Lagrange multipliers

• More examples of applications

• Connection to statistical mechanics through the virial theorem

• Greater emphasis on action-angle variables

• The key role of adiabatic invariants

Part 1 Geometric Mechanics

Chapter 1 Physics and Geometry

1.1 State space and dynamical flows

1.2 Coordinate representations

1.3 Coordinate transformation

1.4 Uniformly rotating frames

1.5 Rigid-body motion

Chapter 2 Lagrangian Mechanics

2.1 Calculus of variations

2.2 Lagrangian applications

2.3 Lagrange’s undetermined multipliers

2.4 Conservation laws

2.5 Central force motion

2.6 Virial Theorem

Chapter 3 Hamiltonian Dynamics and Phase Space

3.1 The Hamiltonian function

3.2 Phase space

3.3 Integrable systems and action–angle variables

3.4 Adiabatic invariants

Part 2 Nonlinear Dynamics

• New section on non-autonomous dynamics

• Entire new chapter devoted to Hamiltonian mechanics

• Added importance to Chirikov standard map

• The important KAM theory of “constrained chaos” and solar system stability

• Degeneracy in Hamiltonian chaos

• A short overview of quantum chaos

• Rational resonances and the relation to KAM theory

• Synchronized chaos

Part 2 Nonlinear Dynamics

Chapter 4 Nonlinear Dynamics and Chaos

4.1 One-variable dynamical systems

4.2 Two-variable dynamical systems

4.3 Limit cycles

4.4 Discrete iterative maps

4.5 Three-dimensional state space and chaos

4.6 Non-autonomous (driven) flows

4.7 Fractals and strange attractors

Chapter 5 Hamiltonian Chaos

5.1 Perturbed Hamiltonian systems

5.2 Nonintegrable Hamiltonian systems

5.3 The Chirikov Standard Map

5.4 KAM Theory

5.5 Degeneracy and the web map

5.6 Quantum chaos

Chapter 6 Coupled Oscillators and Synchronization

6.1 Coupled linear oscillators

6.2 Simple models of synchronization

6.3 Rational resonances

6.4 External synchronization

6.5 Synchronization of Chaos

Part 3 Complex Systems

• New emphasis on diffusion on networks

• Epidemic growth on networks

• A new section of game theory in the context of evolutionary dynamics

• A new section on general equilibrium theory in economics

Part 3 Complex Systems

Chapter 7 Network Dynamics

7.1 Network structures

7.2 Random network topologies

7.3 Synchronization on networks

7.4 Diffusion on networks

7.5 Epidemics on networks

Chapter 8 Evolutionary Dynamics

8.1 Population dynamics

8.2 Virus infection and immune deficiency

8.3 Replicator Dynamics

8.4 Quasi-species

8.5 Game theory and evolutionary stable solutions

Chapter 9 Neurodynamics and Neural Networks

9.1 Neuron structure and function

9.2 Neuron dynamics

9.3 Network nodes: artificial neurons

9.4 Neural network architectures

9.5 Hopfield neural network

9.6 Content-addressable (associative) memory

Chapter 10 Economic Dynamics

10.1 Microeconomics and equilibrium

10.2 Macroeconomics

10.3 Business cycles

10.4 Random walks and stock prices (optional)

Part 4 Relativity and Space–Time

• Relativistic trajectories

• Gravitational waves

Part 4 Relativity and Space–Time

Chapter 11 Metric Spaces and Geodesic Motion

11.1 Manifolds and metric tensors

11.2 Derivative of a tensor

11.3 Geodesic curves in configuration space

11.4 Geodesic motion

Chapter 12 Relativistic Dynamics

12.1 The special theory

12.2 Lorentz transformations

12.3 Metric structure of Minkowski space

12.4 Relativistic trajectories

12.5 Relativistic dynamics

12.6 Linearly accelerating frames (relativistic)

Chapter 13 The General Theory of Relativity and Gravitation

13.1 Riemann curvature tensor

13.2 The Newtonian correspondence

13.3 Einstein’s field equations

13.4 Schwarzschild space–time

13.5 Kinematic consequences of gravity

13.6 The deflection of light by gravity

13.7 The precession of Mercury’s perihelion

13.8 Orbits near a black hole

13.9 Gravitational waves

Synopsis of 2nd Ed. Chapters

Chapter 1. Physics and Geometry (Sample Chapter)

This chapter has been rearranged relative to the 1st edition to provide a more logical flow of the overarching concepts of geometric mechanics that guide the subsequent chapters.  The central role of coordinate transformations is strengthened, as is the material on rigid-body motion with expanded examples.

Chapter 2. Lagrangian Mechanics (Sample Chapter)

Much of the structure and material is retained from the 1st edition while adding two important sections.  The section on applications of Lagrangian mechanics adds many direct examples of the use of Lagrange’s equations of motion.  An additional new section covers the important topic of Lagrange’s undetermined multipliers

Chapter 3. Hamiltonian Dynamics and Phase Space (Sample Chapter)

The importance of Hamiltonian systems and dynamics merits a stand-alone chapter.  The topics from the 1st edition are expanded in this new chapter, including a new section on adiabatic invariants that plays an important role in the development of quantum theory.  Some topics are de-emphasized from the 1st edition, such as general canonical transformations and the symplectic structure of phase space, although the specific transformation to action-angle coordinates is retained and amplified.

Chapter 4. Nonlinear Dynamics and Chaos

The first part of this chapter is retained from the 1st edition with numerous minor corrections and updates of figures.  The second part of the IMD 1st edition, treating Hamiltonian chaos, will be expanded into the new Chapter 5.

Chapter 5. Hamiltonian Chaos

This new stand-alone chapter expands on the last half of Chapter 3 of the IMD 1st edition.  The physical character of Hamiltonian chaos is so substantially distinct from dissipative chaos that it deserves its own chapter.  It is also a central topic of interest for complex systems that are either conservative or that have integral invariants, such as our N-body solar system that played such an important role in the history of chaos theory beginning with Poincaré.  The new chapter highlights Poincaré’s homoclinic tangle, illustrated by the Chirikov Standard Map.  The Standard Map is an excellent introduction to KAM theory, which is one of the crowning achievements of the theory of dynamical systems by Kolmogorov, Arnold and Moser, connecting to deeper aspects of synchronization and rational resonances that drive the structure of systems as diverse as the rotation of the Moon and the rings of Saturn.  This is also a perfect lead-in to the next chapter on synchronization.  An optional section at the end of this chapter briefly discusses quantum chaos to show how Hamiltonian chaos can be extended into the quantum regime.

Chapter 6. Synchronization

This is an updated version of the IMD 1st ed. chapter.  It has a reduced initial section on coupled linear oscillators, retaining the key ideas about linear eigenmodes but removing some irrelevant details in the 1st edition.  A new section is added that defines and emphasizes the importance of quasi-periodicity.  A new section on the synchronization of chaotic oscillators is added.

Chapter 7. Network Dynamics

This chapter rearranges the structure of the chapter from the 1st edition, moving synchronization on networks earlier to connect from the previous chapter.  The section on diffusion and epidemics is moved to the back of the chapter and expanded in the 2nd edition into two separate sections on these topics, adding new material on discrete matrix approaches to continuous dynamics.

Chapter 8. Evolutionary Dynamics

Two new sections are added to this chapter.  A section on game theory and evolutionary stable solutions introduces core concepts of evolutionary dynamics that merge well with the other topics of the chapter, such as the pay-off matrix and replicator dynamics.  A new section on nearly neutral networks introduces new types of behavior that occur in high-dimensional spaces, which are counterintuitive but important for understanding evolutionary drift.

Chapter 9. Neurodynamics and Neural Networks

This chapter is retained from the 1st edition with numerous minor corrections and updates of figures.

Chapter 10.  Economic Dynamics

This chapter will be significantly updated relative to the 1st edition.  Most of the sections will be rewritten with improved examples and figures.  Three new sections will be added.  The 1st edition section on consumer market competition will be split into two new sections describing the Cournot duopoly and Pareto optimality in one section, and Walras’ Law and general equilibrium theory in another section.  The concept of the Pareto frontier in economics is becoming an important part of biophysical approaches to population dynamics.  In addition, new trends in economics are drawing from general equilibrium theory, first introduced by Walras in the nineteenth century, but now merging with modern ideas of fixed points and stable and unstable manifolds.  A third new section is added on econophysics, highlighting the distinctions that contrast economic dynamics (phase space dynamical approaches to economics) from the emerging field of econophysics (statistical mechanics approaches to economics).

Chapter 11. Metric Spaces and Geodesic Motion

 This chapter is retained from the 1st edition with several minor corrections and updates of figures.

Chapter 12. Relativistic Dynamics

This chapter is retained from the 1st edition with minor corrections and updates of figures.  More examples will be added, such as invariant mass reconstruction.  The connection between relativistic acceleration and Einstein’s equivalence principle will be strengthened.

Chapter 13. The General Theory of Relativity and Gravitation

This chapter is retained from the 1st edition with minor corrections and updates of figures.  A new section will derive the properties of gravitational waves, given the spectacular success of LIGO and the new field of gravitational astronomy.

Homework Problems:

All chapters will have expanded and updated homework problems.  Many of the homework problems from the 1st edition will remain, but the number of problems at the end of each chapter will be nearly doubled, while some of the less interesting or problematic problems will be removed.

Bibliography

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019)

The Physics of Modern Dynamics (with Python Programs)

It is surprising how much of modern dynamics boils down to an extremely simple formula

dx/dt = f(x)

This innocuous-looking equation carries such riddles, such surprises, such unintuitive behavior that it can become the object of study for life.  This equation is called a vector flow equation, and it can be used to capture the essential physics of economies, neurons, ecosystems, networks, and even orbits of photons around black holes.  This equation is to modern dynamics what F = ma was to classical mechanics.  It is the starting point for understanding complex systems.

The Magic of Phase Space

The apparent simplicity of the “flow equation” masks the complexity it contains.  It is a vector equation because each “dimension” is a variable of a complex system.  Many systems of interest may have only a few variables, but ecosystems and economies and social networks may have hundreds or thousands of variables.  Expressed in component format, the flow equation is

dx^a/dt = f^a(x^1, x^2, … , x^N)

where the superscript spans the number of variables.  But even this masks all that can happen with such an equation.  Each of the functions f^a can be entirely different from the others, and can be any type of function, whether polynomial, rational, algebraic, transcendental or composite, although they must be single-valued.  They are generally nonlinear, and the limitless ways that functions can be nonlinear is where the richness of the flow equation comes from.

The vector flow equation is an ordinary differential equation (ODE) that can be solved for specific trajectories as initial value problems.  A single set of initial conditions defines a unique trajectory.  For instance, the trajectory for a 4-dimensional example is described as the column vector

x(t) = [x^1(t), x^2(t), x^3(t), x^4(t)]^T

which is the single-parameter position vector to a point in phase space, also called state space.  The point sweeps through successive configurations as a function of its single parameter—time.  This trajectory is also called an orbit.  In classical mechanics, the focus has tended to be on the behavior of specific orbits that arise from a specific set of initial conditions.  This is the classic “rock thrown from a cliff” problem of introductory physics courses.  However, in modern dynamics, the focus shifts away from individual trajectories to encompass the set of all possible trajectories.

Why is Modern Dynamics part of Physics?

If finding the solutions to the “x-dot equals f” vector flow equation is all there is to do, then this would just be a math problem—the solution of ODE’s.  There are plenty of gems for mathematicians to look for, and there is an entire field of study in mathematics called “dynamical systems”, but this would not be “physics”.  Physics as a profession is separate and distinct from mathematics, although the two are sometimes confused.  Physics uses mathematics as its language and as its toolbox, but physics is not mathematics.  Physics is done best when it is done qualitatively—this means with scribbles done on napkins in restaurants or on the back of envelopes while waiting in line. Physics is about recognizing relationships and patterns. Physics is about identifying the limits to scaling properties where the physics changes when scales change. Physics is about the mapping of the simplest possible mathematics onto behavior in the physical world, and recognizing when the simplest possible mathematics is a universal that applies broadly to diverse systems that seem different, but that share the same underlying principles.

So, granted that solving ODE’s is not physics, there is still a tremendous amount of good physics that can be done by solving them. ODE solvers become the modern physicist’s experimental workbench, providing data output from numerical experiments that can test the dependence on parameters in ways that real-world experiments might not be able to access. Physical intuition can be built based on such simulations as the engaged physicist begins to “understand” how the system behaves, able to explain what will happen as the values of parameters are changed.
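
As a minimal sketch of such a numerical experiment (an illustrative flow chosen only for demonstration, not one from the text), a single orbit of a two-variable flow can be computed as an initial value problem:

from scipy.integrate import solve_ivp

def f(t, x):
    # an arbitrary nonlinear two-variable flow, chosen only for illustration
    return [x[1], -x[0] + 0.5*(1 - x[0]**2)*x[1]]

sol = solve_ivp(f, (0, 50), [0.1, 0.0], max_step=0.05)   # one initial condition -> one orbit
print('final state:', sol.y[:, -1])

Re-running with different initial conditions or different parameter values builds exactly the kind of physical intuition described above.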

In the following sections, three examples of modern dynamics are introduced with a preliminary study, including Python code. These examples are: Galactic dynamics, synchronized networks and ecosystems. Despite their very different natures, their descriptions using dynamical flows share common features and illustrate the beauty and depth of behavior that can be explored with simple equations.

Galactic Dynamics

One example of the power and beauty of the vector flow equation and its set of all solutions in phase space is called the Henon-Heiles model of the motion of a star within a galaxy.  Of course, this is a terribly complicated problem that involves tens of billions of stars, but if you average over the gravitational potential of all the other stars, and throw in a couple of conservation laws, the resulting potential can look surprisingly simple.  The motion in the plane of this galactic potential takes two configuration coordinates (x, y) with two associated momenta (px, py) for a total of four dimensions.  The flow equations in four-dimensional phase space are simply

dx/dt = px

dy/dt = py

dpx/dt = −x − 2ε x y

dpy/dt = −y − ε (x² − y²)

Fig. 1 The 4-dimensional phase space flow equations of a star in a galaxy. The terms in light blue are a simple two-dimensional harmonic oscillator. The terms in magenta are the nonlinear contributions from the stars in the galaxy.

where the terms in the light blue box describe a two-dimensional simple harmonic oscillator (SHO), which is a linear oscillator, modified by the terms in the magenta box that represent the nonlinear galactic potential.  The orbits of this Hamiltonian system are chaotic, and because there is no dissipation in the model, a single orbit will continue forever within certain ranges of phase space governed by energy conservation, but never quite repeating.

Fig. 2 Two-dimensional Poincaré section of sets of trajectories in four-dimensional phase space for the Henon-Heiles galactic dynamics model. The perturbation parameter is ε = 0.3411 and the energy E = 1.

Hamilton4D.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Hamilton4D.py
Created on Wed Apr 18 06:03:32 2018

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from scipy import integrate
from matplotlib import pyplot as plt
from matplotlib import cm
import time
import os

plt.close('all')

# model_case 1 = Heiles
# model_case 2 = Crescent
print(' ')
print('Hamilton4D.py')
print('Case: 1 = Heiles')
print('Case: 2 = Crescent')
model_case = int(input('Enter the Model Case (1-2)'))

if model_case == 1:
    E = 1       # Heiles: 1, 0.3411   Crescent: 0.05, 1
    epsE = 0.3411   # 3411
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -x - epsE*(2*x*y)
        d = -y - epsE*(x**2 - y**2)
        return[a,b,c,d]
else:
    E = .1       #   Crescent: 0.1, 1
    epsE = 1   
    def flow_deriv(x_y_z_w,tspan):
        x, y, z, w = x_y_z_w
        a = z
        b = w
        c = -(epsE*(y-2*x**2)*(-4*x) + x)
        d = -(y-epsE*2*x**2)
        return[a,b,c,d]
    
prms = np.sqrt(E)
pmax = np.sqrt(2*E)    
            
# Potential Function
if model_case == 1:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(x**2*y - 0.33333*y**3) 
else:
    V = np.zeros(shape=(100,100))
    for xloop in range(100):
        x = -2 + 4*xloop/100
        for yloop in range(100):
            y = -2 + 4*yloop/100
            V[yloop,xloop] = 0.5*x**2 + 0.5*y**2 + epsE*(2*x**4 - 2*x**2*y) 

fig = plt.figure(1)
contr = plt.contourf(V,100, cmap=cm.coolwarm, vmin = 0, vmax = 10)
fig.colorbar(contr, shrink=0.5, aspect=5)    
fig = plt.show()

repnum = 250
mulnum = 64/repnum

np.random.seed(1)
for reploop  in range(repnum):
    px1 = 2*(np.random.random((1))-0.499)*pmax
    py1 = np.sign(np.random.random((1))-0.499)*np.real(np.sqrt(2*(E-px1**2/2)))
    xp1 = 0
    yp1 = 0
    
    x_y_z_w0 = [xp1, yp1, px1, py1]
    
    tspan = np.linspace(1,1000,10000)
    x_t = integrate.odeint(flow_deriv, x_y_z_w0, tspan)
    siztmp = np.shape(x_t)
    siz = siztmp[0]

    if reploop % 50 == 0:
        plt.figure(2)
        lines = plt.plot(x_t[:,0],x_t[:,1])
        plt.setp(lines, linewidth=0.5)
        plt.show()
        time.sleep(0.1)
        #os.system("pause")

    y1 = x_t[:,0]
    y2 = x_t[:,1]
    y3 = x_t[:,2]
    y4 = x_t[:,3]
    
    py = np.zeros(shape=(2*repnum,))
    yvar = np.zeros(shape=(2*repnum,))
    cnt = -1
    last = y1[1]
    for loop in range(2,siz):
        if (last < 0)and(y1[loop] > 0):
            cnt = cnt+1
            del1 = -y1[loop-1]/(y1[loop] - y1[loop-1])
            py[cnt] = y4[loop-1] + del1*(y4[loop]-y4[loop-1])
            yvar[cnt] = y2[loop-1] + del1*(y2[loop]-y2[loop-1])
            last = y1[loop]
        else:
            last = y1[loop]
 
    plt.figure(3)
    lines = plt.plot(yvar,py,'o',ms=1)
    plt.show()
    
if model_case == 1:
    plt.savefig('Heiles')
else:
    plt.savefig('Crescent')
    

Networks, Synchronization and Emergence

A central paradigm of nonlinear science is the emergence of patterns and organized behavior from seemingly random interactions among underlying constituents.  Emergent phenomena are among the most awe-inspiring topics in science.  Crystals are emergent, forming slowly from solutions of reagents.  Life is emergent, arising out of the chaotic soup of organic molecules on Earth (or on some distant planet).  Intelligence is emergent, and so is consciousness, arising from the interactions among billions of neurons.  Ecosystems are emergent, based on competition and symbiosis among species.  Economies are emergent, based on the transfer of goods and money spanning scales from the local bodega to the global economy.

One of the common underlying properties of emergence is the existence of networks of interactions.  Networks and network science are topics of great current interest driven by the rise of the World Wide Web and social networks.  But networks are ubiquitous and have long been the topic of research into complex and nonlinear systems.  Networks provide a scaffold for understanding many of these emergent systems, allowing one to think of isolated elements, like molecules or neurons, that interact with many others, like the neighbors in a crystal or distant synaptic connections.

From the point of view of modern dynamics, the state of a node can be a variable or a “dimension” and the interactions among links define the functions of the vector flow equation.  Emergence is then something that “emerges” from the dynamical flow as many elements interact through complex networks to produce simple or emergent patterns.

Synchronization is a form of emergence that happens when lots of independent oscillators, each vibrating at its own personal frequency, are coupled together to push and pull on each other, entraining all the individual frequencies into one common global oscillation of the entire system.  Synchronization plays an important role in the solar system, explaining why the Moon always shows one face to the Earth, why Saturn’s rings have gaps, and why asteroids are mainly kept away from colliding with the Earth.  Synchronization plays an even more important function in biology where it coordinates the beating of the heart and the functioning of the brain.

One of the most dramatic examples of synchronization is the Kuramoto synchronization phase transition. This occurs when a large set of individual oscillators with differing natural frequencies interact with each other through a weak nonlinear coupling.  For small coupling, all the individual nodes oscillate at their own frequency.  But as the coupling increases, there is a sudden coalescence of all the frequencies into a single common frequency.  This mechanical phase transition, called the Kuramoto transition, has many of the properties of a thermodynamic phase transition, including a solution that utilizes mean field theory.
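
For reference, the equation pictured in Fig. 3 is the standard form of the Kuramoto model, where g is the global coupling strength and ωk is the natural frequency of the k-th phase oscillator (conventions differ on the 1/N normalization; the code below applies the coupling per link):

dθk/dt = ωk + (g/N) Σj sin(θj − θk)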

Fig. 3 The Kuramoto model for the nonlinear coupling of N simple phase oscillators. The term in light blue is the simple phase oscillator. The term in magenta is the global nonlinear coupling that connects each oscillator to every other.

The simulation of 20 Poincaré phase oscillators with global coupling is shown in Fig. 4 as a function of increasing coupling coefficient g. The original individual frequencies are spread randomly. The oscillators with similar frequencies are the first to synchronize, forming small clumps that then synchronize with other clumps of oscillators, until all oscillators are entrained to a single compromise frequency. The Kuramoto phase transition is not sharp in this case because the value of N = 20 is too small. If the simulation is run for 200 oscillators, there is a sudden transition from unsynchronized to synchronized oscillation at a threshold value of g.

Fig. 4 The Kuramoto model for 20 Poincaré oscillators showing the frequencies as a function of the coupling coefficient.

The Kuramoto phase transition is one of the most important fundamental examples of modern dynamics because it illustrates many facets of nonlinear dynamics in a very simple way. It highlights the importance of nonlinearity, the simplification of phase oscillators, the use of mean field theory, the underlying structure of the network, and the example of a mechanical analog to a thermodynamic phase transition. It also has analytical solutions because of its simplicity, while still capturing the intrinsic complexity of nonlinear systems.

Kuramoto.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat May 11 08:56:41 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

# https://www.python-course.eu/networkx.php
# https://networkx.github.io/documentation/stable/tutorial.html
# https://networkx.github.io/documentation/stable/reference/functions.html

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import networkx as nx
from UserFunction import linfit   # author's helper module: linfit(t, y) returns slope and intercept of a linear fit
import time

tstart = time.time()

plt.close('all')

Nfac = 20   # 25
N = 20      # 50
width = 0.2

# function: omegout, yout = coupleN(G)
def coupleN(G):

    # function: yd = flow_deriv(x_y)
    def flow_deriv(y,t0):
                
        yp = np.zeros(shape=(N,))
        for omloop  in range(N):
            temp = omega[omloop]
            linksz = G.nodes[omloop]['numlink']    # networkx >= 2.4: use G.nodes (G.node was removed)
            for cloop in range(linksz):
                cindex = G.nodes[omloop]['link'][cloop]
                g = G.nodes[omloop]['coupling'][cloop]

                temp = temp + g*np.sin(y[cindex]-y[omloop])
            
            yp[omloop] = temp
        
        yd = np.zeros(shape=(N,))
        for omloop in range(N):
            yd[omloop] = yp[omloop]
        
        return yd
    # end of function flow_deriv(x_y)

    mnomega = 1.0
    
    for nodeloop in range(N):
        omega[nodeloop] = G.nodes[nodeloop]['element']
    
    x_y_z = omega    
    
    # Settle-down Solve for the trajectories
    tsettle = 100
    t = np.linspace(0, tsettle, tsettle)
    x_t = integrate.odeint(flow_deriv, x_y_z, t)
    x0 = x_t[tsettle-1,0:N]
    
    t = np.linspace(1,1000,1000)
    y = integrate.odeint(flow_deriv, x0, t)
    siztmp = np.shape(y)
    sy = siztmp[0]
        
    # Fit the frequency
    m = np.zeros(shape = (N,))
    w = np.zeros(shape = (N,))
    mtmp = np.zeros(shape=(4,))
    btmp = np.zeros(shape=(4,))
    for omloop in range(N):
        
        if np.remainder(sy,4) == 0:
            mtmp[0],btmp[0] = linfit(t[0:sy//2],y[0:sy//2,omloop]);
            mtmp[1],btmp[1] = linfit(t[sy//2+1:sy],y[sy//2+1:sy,omloop]);
            mtmp[2],btmp[2] = linfit(t[sy//4+1:3*sy//4],y[sy//4+1:3*sy//4,omloop]);
            mtmp[3],btmp[3] = linfit(t,y[:,omloop]);
        else:
            sytmp = 4*(sy//4)       # integer multiple of 4 (np.floor returns a float, which cannot be used as a slice index)
            mtmp[0],btmp[0] = linfit(t[0:sytmp//2],y[0:sytmp//2,omloop]);
            mtmp[1],btmp[1] = linfit(t[sytmp//2+1:sytmp],y[sytmp//2+1:sytmp,omloop]);
            mtmp[2],btmp[2] = linfit(t[sytmp//4+1:3*sytmp//4],y[sytmp//4+1:3*sytmp//4,omloop]);
            mtmp[3],btmp[3] = linfit(t[0:sytmp],y[0:sytmp,omloop]);

        
        #m[omloop] = np.median(mtmp)
        m[omloop] = np.mean(mtmp)
        
        w[omloop] = mnomega + m[omloop]
     
    omegout = m
    yout = y
    
    return omegout, yout
    # end of function: omegout, yout = coupleN(G)



Nlink = N*(N-1)//2      
omega = np.zeros(shape=(N,))
omegatemp = width*(np.random.rand(N)-1)
meanomega = np.mean(omegatemp)
omega = omegatemp - meanomega
sto = np.std(omega)

nodecouple = nx.complete_graph(N)

lnk = np.zeros(shape = (N,), dtype=int)
for loop in range(N):
    nodecouple.nodes[loop]['element'] = omega[loop]
    nodecouple.nodes[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.nodes[loop]['numlink'] = np.size(list(nx.neighbors(nodecouple,loop)))
    lnk[loop] = np.size(list(nx.neighbors(nodecouple,loop)))

avgdegree = np.mean(lnk)
mnomega = 1

facval = np.zeros(shape = (Nfac,))
yy = np.zeros(shape=(Nfac,N))
xx = np.zeros(shape=(Nfac,))
for facloop in range(Nfac):
    print(facloop)
    facoef = 0.2

    fac = facoef*(16*facloop/(Nfac))*(1/(N-1))*sto/mnomega
    for nodeloop in range(N):
        nodecouple.nodes[nodeloop]['coupling'] = np.zeros(shape=(lnk[nodeloop],))
        for linkloop in range (lnk[nodeloop]):
            nodecouple.nodes[nodeloop]['coupling'][linkloop] = fac

    facval[facloop] = fac*avgdegree
    
    omegout, yout = coupleN(nodecouple)                           # Here is the subfunction call for the flow

    for omloop in range(N):
        yy[facloop,omloop] = omegout[omloop]

    xx[facloop] = facval[facloop]

plt.figure(1)
lines = plt.plot(xx,yy)
plt.setp(lines, linewidth=0.5)
plt.show()

elapsed_time = time.time() - tstart
print('elapsed time = ',format(elapsed_time,'.2f'),'secs')

The Web of Life

Ecosystems are among the most complex systems on Earth.  The complex interactions among hundreds or thousands of species may lead to steady homeostasis in some cases, to growth and collapse in other cases, and to oscillations or chaos in yet others.  But the definition of species can be broad and abstract, referring to businesses and markets in economic ecosystems, or to cliques and acquaintances in social ecosystems, among many other examples.  These systems are governed by the laws of evolutionary dynamics that include fitness and survival as well as adaptation.

The dimensionality of the dynamical spaces for these systems extends to hundreds or thousands of dimensions—far too complex to visualize when thinking in four dimensions is already challenging.  Yet there are shared principles and common behaviors that emerge even here.  Many of these can be illustrated in a simple three-dimensional system that is represented by a triangular simplex that can be easily visualized, and then generalized back to ultra-high dimensions once they are understood.

A simplex is a closed (N-1)-dimensional geometric figure that describes a zero-sum game (game theory is an integral part of evolutionary dynamics) among N competing species.  For instance, a two-simplex is a triangle that captures the dynamics among three species.  Each vertex of the triangle represents the situation when the entire ecosystem is composed of a single species.  Anywhere inside the triangle represents the situation when all three species are present and interacting.

A classic model of interacting species is the replicator equation. It allows for a fitness-based proliferation and for trade-offs among the individual species. The replicator dynamics equations are shown in Fig. 5.

Fig. 5 Replicator dynamics has a surprisingly simple form, but with surprisingly complicated behavior. The key elements are the fitness and the payoff matrix. The fitness relates to how likely the species will survive. The payoff matrix describes how one species gains at the loss of another (although symbiotic relationships also occur).
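
Written out explicitly (in the form implemented in Trirep.py below), the replicator equations are

dxi/dt = xi (fi − φ)

with fitness fi = Σj Aij xj and average fitness φ = Σi fi xi, where xi is the population fraction of species i and A is the payoff matrix.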

The population dynamics on the 2D simplex are shown in Fig. 6 for several different pay-off matrices. The matrix values are shown in color and help interpret the trajectories. For instance, the simplex on the upper-right shows a fixed point center. This reflects the antisymmetric character of the pay-off matrix around the diagonal. The stable spiral on the lower-left has a nearly antisymmetric pay-off matrix, but with unequal off-diagonal magnitudes. The other two cases show central saddle points with stable fixed points on the boundary. A very large variety of behaviors is possible for this very simple system. The Python program is shown in Trirep.py.

Fig. 6 Payoff matrix and population simplex for four random cases: Upper left is an unstable saddle. Upper right is a center. Lower left is a stable spiral. Lower right is a marginal case.

Trirep.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
trirep.py
Created on Thu May  9 16:23:30 2019

@author: nolte

Derived from:
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def tripartite(x,y,z):

    sm = x + y + z
    xp = x/sm
    yp = y/sm
    
    f = np.sqrt(3)/2
    
    y0 = f*xp
    x0 = -0.5*xp - yp + 1;
    
    plt.figure(2)
    lines = plt.plot(x0,y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1],[0, 0],'k',linewidth=1)
    plt.plot([0, 0.5],[0, f],'k',linewidth=1)
    plt.plot([1, 0.5],[0, f],'k',linewidth=1)
    plt.show()
    

def solve_flow(y,tspan):
    def flow_deriv(y, t0):
    #"""Compute the time-derivative ."""
    
        f = np.zeros(shape=(N,))
        for iloop in range(N):
            ftemp = 0
            for jloop in range(N):
                ftemp = ftemp + A[iloop,jloop]*y[jloop]
            f[iloop] = ftemp
        
        phitemp = phi0          # Can adjust this from 0 to 1 to stabilize (but Nth population is no longer independent)
        for loop in range(N):
            phitemp = phitemp + f[loop]*y[loop]
        phi = phitemp
        
        yd = np.zeros(shape=(N,))
        for loop in range(N-1):
            yd[loop] = y[loop]*(f[loop] - phi);
        
        if np.abs(phi0) < 0.01:             # average fitness maintained at zero
            yd[N-1] = y[N-1]*(f[N-1]-phi);
        else:                                     # non-zero average fitness
            ydtemp = 0
            for loop in range(N-1):
                ydtemp = ydtemp - yd[loop]
            yd[N-1] = ydtemp
       
        return yd

    # Solve for the trajectories
    t = np.linspace(0, tspan, 701)
    x_t = integrate.odeint(flow_deriv,y,t)
    return t, x_t

# model_case 1 = zero diagonal
# model_case 2 = zero trace
# model_case 3 = asymmetric (zero trace)
print(' ')
print('trirep.py')
print('Case: 1 = antisymm zero diagonal')
print('Case: 2 = antisymm zero trace')
print('Case: 3 = random')
model_case = int(input('Enter the Model Case (1-3)'))

N = 3
asymm = 3      # 1 = zero diag (replicator eqn)   2 = zero trace (autocatylitic model)  3 = random (but zero trace)
phi0 = 0.001            # average fitness (positive number) damps oscillations
T = 100;


if model_case == 1:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
    A = Atemp       # case 1: the payoff matrix is the antisymmetric zero-diagonal matrix

elif model_case == 2:    # elif keeps case 1 from falling through to the random branch below
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(yloop+1,N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
            Atemp[xloop,yloop] = -Atemp[yloop,xloop]
        Atemp[yloop,yloop] = 2*(0.5 - np.random.random(1))
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N
        
else:
    Atemp = np.zeros(shape=(N,N))
    for yloop in range(N):
        for xloop in range(N):
            Atemp[yloop,xloop] = 2*(0.5 - np.random.random(1))
        
    tr = np.trace(Atemp)
    A = Atemp
    for yloop in range(N):
        A[yloop,yloop] = Atemp[yloop,yloop] - tr/N

plt.figure(3)
im = plt.matshow(A,3,cmap=plt.cm.get_cmap('seismic'))  # hsv, seismic, bwr
cbar = im.figure.colorbar(im)

M = 20
delt = 1/M
ep = 0.01;

tempx = np.zeros(shape = (3,))
for xloop in range(M):
    tempx[0] = delt*(xloop)+ep;
    for yloop in range(M-xloop):
        tempx[1] = delt*yloop+ep
        tempx[2] = 1 - tempx[0] - tempx[1]
        
        x0 = tempx/np.sum(tempx);          # initial populations
        
        tspan = 70
        t, x_t = solve_flow(x0,tspan)
        
        y1 = x_t[:,0]
        y2 = x_t[:,1]
        y3 = x_t[:,2]
        
        plt.figure(1)
        lines = plt.plot(t,y1,t,y2,t,y3)
        plt.setp(lines, linewidth=0.5)
        plt.show()
        plt.ylabel('Population fraction')
        plt.xlabel('Time')

        tripartite(y1,y2,y3)

Topics in Modern Dynamics

These three examples are just the tip of the iceberg, for the topics in modern dynamics are nearly endless: any system that changes in time is a potential object of study in modern dynamics.

Bibliography

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd Ed. (Oxford University Press, 2019) (The physics and the derivations of the equations for the examples in this blog can be found here.)

D. D. Nolte, Galileo Unbound: A Path Across Life, the Universe and Everything (Oxford University Press, 2018) (The historical origins of the examples in this blog can be found here.)

Limit-Cycle Oscillators: The Fast and the Slow of Grandfather Clocks

Imagine in your mind the stately grandfather clock.  The long slow pendulum swinging back and forth so purposefully with such majesty.  It harks back to slower, simpler times—seemingly Victorian in character, although its origins go back to Christiaan Huygens in 1656.  In introductory physics classes the dynamics of the pendulum is taught as one of the simplest simple harmonic oscillators, only a bit more complicated than a mass on a spring.

But don’t be fooled!  This simplicity is an illusion, for the pendulum clock lies at the heart of modern dynamics.  It is a nonlinear autonomous oscillator with system gain that balances dissipation to maintain a dynamic equilibrium that ticks on resolutely as long as some energy source can continue to supply it (like the heavy clock weights).

This analysis has converted the two-dimensional dynamics of the autonomous oscillator to a simple one-dimensional dynamics with a stable fixed point.

The dynamic equilibrium of the grandfather clock is known as a limit cycle, and limit cycles are the central feature of autonomous oscillators.  Autonomous oscillators are one of the building blocks of complex systems, providing the fundamental elements for biological oscillators, neural networks, business cycles, population dynamics, viral epidemics, and even the rings of Saturn.  The most famous autonomous oscillator (after the pendulum clock) is named for a Dutch physicist, Balthasar van der Pol (1889 – 1959), who discovered the laws that govern how electrons oscillate in vacuum tubes.  But this highly specialized physics problem has expanded to become the new guiding paradigm for the fundamental oscillating element of modern dynamics—the van der Pol oscillator.

The van der Pol Oscillator

The van der Pol (vdP) oscillator begins as a simple harmonic oscillator (SHO) in which the dissipation (loss of energy) is flipped to become gain of energy.  This is as simple as flipping the sign of the damping term in the SHO

$$m\ddot{x} - \beta\dot{x} + kx = 0$$

where β is positive.  This 2nd-order ODE is re-written into a dynamical flow as

$$\dot{x} = y \qquad\qquad \dot{y} = \gamma y - \omega_0^2 x$$

where γ = β/m is the system gain and ω0² = k/m is the natural frequency squared.  Clearly, the dynamics of this SHO with gain would lead to run-away as the oscillator grows without bound.

But no real-world system can grow indefinitely.  It has to eventually be limited by things such as inelasticity.  One of the simplest ways to include such a limiting process in the mathematical model is to make the gain get smaller at larger amplitudes.  This can be accomplished by making the gain a function of the amplitude x as

$$\gamma(x) = \varepsilon\left(1 - x^2\right)$$

When the amplitude x gets large, the gain decreases, becoming zero and changing sign when x = 1.  Putting this amplitude-dependent gain into the SHO equation yields

$$\ddot{x} - \varepsilon\left(1 - x^2\right)\dot{x} + \omega_0^2 x = 0$$

This is the van der Pol equation.  It is the quintessential example of a nonlinear autonomous oscillator.

When the parameter ε is large, the vdP oscillator can behave in strongly nonlinear ways, with distinctly nonharmonic oscillations.  An example is shown in Figs. 1 and 2 for ω0² = 5 and ε = 2.5.  The oscillation in the time trace is clearly non-harmonic.

Fig. 1 Time trace of the position and velocity of the vdP oscillator for ω0² = 5 and ε = 2.5.
Fig. 2 State-space portrait of the vdP flow lines for ω0² = 5 and ε = 2.5.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Apr 16 07:38:57 2018
@author: David Nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def solve_flow(param,lim = [-3,3,-3,3],max_time=10.0):
# van der Pol 2D flow
    model_title = 'van der Pol Oscillator'
    def flow_deriv(x_y, t0, alpha, beta):
        # Compute the time-derivative of the van der Pol flow
        x, y = x_y
        return [y, -alpha*x + beta*(1-x**2)*y]
    
    plt.figure()
    xmin = lim[0]
    xmax = lim[1]
    ymin = lim[2]
    ymax = lim[3]
    plt.axis([xmin, xmax, ymin, ymax])

    N=144
    colors = plt.cm.prism(np.linspace(0, 1, N))
    
    x0 = np.zeros(shape=(N,2))
    ind = -1
    for i in range(0,12):
        for j in range(0,12):
            ind = ind + 1
            x0[ind,0] = xmin-1 + (xmax-xmin+2)*i/11    # initial positions on a grid
            x0[ind,1] = ymin-1 + (ymax-ymin+2)*j/11    # initial velocities on a grid
             
    # Solve for the trajectories
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = np.asarray([integrate.odeint(flow_deriv, x0i, t, param)
                      for x0i in x0])

    for i in range(N):
        x, y = x_t[i,:,:].T
        lines = plt.plot(x, y, '-', c=colors[i])
        plt.setp(lines, linewidth=1)

    plt.title(model_title)
    plt.savefig('Flow2D')
    plt.show()
    
    return t, x_t

def solve_flow2(param,max_time=20.0):
# van der pol 2D flow 
    def flow_deriv(x_y, t0, alpha, beta):
        # Compute the time-derivative of the van der Pol flow
        x, y = x_y
        return [y, -alpha*x + beta*(1-x**2)*y]

    x0 = np.zeros(shape=(2,))
    x0[0] = 0
    x0[1] = 4.5
    
    # Solve for the trajectories
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = integrate.odeint(flow_deriv, x0, t, param)
 
    return t, x_t

param = (5, 2.5)             # van der Pol
lim = (-7,7,-10,10)

t, x_t = solve_flow(param,lim)

t, x_t = solve_flow2(param)
plt.figure(2)
lines = plt.plot(t,x_t[:,0],t,x_t[:,1],'-')

Separation of Time Scales

Nonlinear systems can have very complicated behavior that may be difficult to address analytically.  This is why the numerical ODE solver is a central tool of modern dynamics.  But there is a very neat analytical trick that can be applied to tame the nonlinearities (if they are not too large) and simplify the autonomous oscillator.  This trick is called separation of time scales (also known as secular perturbation theory)—it looks for simultaneous fast and slow behavior within the dynamics.  An example of fast and slow time scales in a well-known dynamical system is found in the simple spinning top, in which nutation (a fast oscillation) is superposed on precession (a slow oscillation).

For the autonomous van der Pol oscillator the fast time scale is the natural oscillation frequency, while the slow time scale is the approach to the limit cycle.  Let’s assign t0 = t and t1 = εt, where ε is a small parameter.  t0 is the fast time (the natural oscillation), and t1 is the slow time (the approach to the limit cycle).  The solution in terms of these time scales is

$$x(t) = x_0(t_0, t_1) + \varepsilon x_1(t_0, t_1) + \cdots$$

where the slow dependence of x0 on t1 acts as an envelope that modulates the fast response, and x1 is the first-order correction. The total differential is

$$\frac{d}{dt} = \frac{\partial}{\partial t_0} + \varepsilon\frac{\partial}{\partial t_1}$$

Similarly, the second derivative to first order in ε is

$$\frac{d^2}{dt^2} = \frac{\partial^2}{\partial t_0^2} + 2\varepsilon\frac{\partial^2}{\partial t_0\,\partial t_1}$$

Therefore, the vdP equation in terms of x0 and x1 is

$$\frac{\partial^2 x_0}{\partial t_0^2} + \omega_0^2 x_0 + \varepsilon\left[\frac{\partial^2 x_1}{\partial t_0^2} + \omega_0^2 x_1 + 2\frac{\partial^2 x_0}{\partial t_0\,\partial t_1} - \left(1 - x_0^2\right)\frac{\partial x_0}{\partial t_0}\right] = 0$$

to lowest order. Now separate the orders to zeroth and first orders in ε, respectively,

$$\frac{\partial^2 x_0}{\partial t_0^2} + \omega_0^2 x_0 = 0$$

$$\frac{\partial^2 x_1}{\partial t_0^2} + \omega_0^2 x_1 = -2\frac{\partial^2 x_0}{\partial t_0\,\partial t_1} + \left(1 - x_0^2\right)\frac{\partial x_0}{\partial t_0}$$

Solve the first equation (a simple harmonic oscillator)

$$x_0 = A(t_1)\cos\omega_0 t_0 + B(t_1)\sin\omega_0 t_0$$

and plug it into the right-hand side of the second equation to give

$$\frac{\partial^2 x_1}{\partial t_0^2} + \omega_0^2 x_1 = \omega_0\left[2\frac{dA}{dt_1} - A\left(1 - \frac{R^2}{4}\right)\right]\sin\omega_0 t_0 - \omega_0\left[2\frac{dB}{dt_1} - B\left(1 - \frac{R^2}{4}\right)\right]\cos\omega_0 t_0 + \text{(third-harmonic terms)}$$

where R² = A² + B².  The key to secular perturbation theory is to confine dynamics to their own time scales.  In other words, the slow dynamics provide the envelope that modulates the fast carrier frequency.  The envelope dynamics are contained in the time dependence of the coefficients A and B.  Furthermore, the dynamics of x1 must remain bounded with no secular growth, which requires the coefficient of each resonant term in the last equation to be zero.  Therefore, the dynamical equations for the envelope functions are

$$\frac{dA}{dt_1} = \frac{A}{2}\left(1 - \frac{R^2}{4}\right) \qquad\qquad \frac{dB}{dt_1} = \frac{B}{2}\left(1 - \frac{R^2}{4}\right)$$

These can be transformed into polar coordinates by setting A = R cos φ and B = R sin φ. Because the envelope functions depend only on the slow time scale, the time derivatives are

$$\frac{dR}{dt_1} = \frac{A}{R}\frac{dA}{dt_1} + \frac{B}{R}\frac{dB}{dt_1} \qquad\qquad \frac{d\phi}{dt_1} = \frac{1}{R^2}\left(A\frac{dB}{dt_1} - B\frac{dA}{dt_1}\right)$$

With these expressions, the slow dynamics become

$$\frac{dR}{dt_1} = \frac{R}{2}\left(1 - \frac{R^2}{4}\right) \qquad\qquad \frac{d\phi}{dt_1} = 0$$

where the angular velocity in the fast variable is equal to zero, leaving only the angular velocity of the unperturbed oscillator. (This is analogous to the rotating wave approximation (RWA) in optics, and also equivalent to studying the dynamics in the rotating frame of the unperturbed oscillator.)

Making a final substitution ρ = R/2 gives a very simple set of dynamical equations

$$\frac{d\rho}{dt_1} = \frac{\rho}{2}\left(1 - \rho^2\right) \qquad\qquad \frac{d\theta}{dt} = \omega_0$$

These final equations capture the essential properties of the relaxation of the dynamics to the limit cycle. To lowest order (when the gain is weak) the angular frequency is unaffected, and the system oscillates at the natural frequency. The amplitude of the limit cycle equals 1. A deviation in the amplitude from 1 decays slowly back to the limit cycle making it a stable fixed point in the radial dynamics. This analysis has converted the two-dimensional dynamics of the autonomous oscillator to a simple one-dimensional dynamics with a stable fixed point on the radius variable. The phase-space portrait of this simplified autonomous oscillator is shown in Fig. 3. What could be simpler? This simplified autonomous oscillator can be found as a fundamental element of many complex systems.

Fig. 3 The state-space diagram of the simplified autonomous oscillator. Initial conditions relax onto the limit cycle. (Reprinted from Introduction to Modern Dynamics (Oxford, 2019) on pg. 8)
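
To see how quickly initial conditions relax onto the limit cycle, here is a minimal Python sketch that integrates the reduced radial equation dρ/dt = (ε/2)ρ(1 − ρ²) for several initial radii; the value ε = 0.25 is an assumed weak gain for which the slow-flow approximation holds.

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

# Minimal sketch: slow-flow radial dynamics of the weakly nonlinear vdP oscillator.
# Every initial radius relaxes onto the limit cycle at rho = 1.

eps = 0.25                  # assumed weak gain parameter

def radial_deriv(rho, t):
    return 0.5*eps*rho*(1 - rho**2)

t = np.linspace(0, 100, 1001)
for rho0 in [0.1, 0.5, 1.5, 2.5]:
    rho_t = integrate.odeint(radial_deriv, rho0, t)
    plt.plot(t, rho_t)
plt.xlabel('Time')
plt.ylabel('Radius rho')
plt.show()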

Further Reading

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd edition (Oxford University Press, 2019)

Pikovsky, A. S., M. G. Rosenblum and J. Kurths (2003). Synchronization: A Universal concept in nonlinear science. Cambridge, Cambridge University Press.

The Three-Body Problem, Longitude at Sea, and Lagrange’s Points

When Newton developed his theory of universal gravitation, the first problem he tackled was Kepler’s elliptical orbits of the planets around the sun, and he succeeded beyond compare.  The second problem he tackled was of more practical importance than the tracks of distant planets, namely the path of the Earth’s own moon, and he was never satisfied. 

Newton’s Principia and the Problem of Longitude

Measuring the precise location of the moon at very exact times against the backdrop of the celestial sphere was a method for ships at sea to find their longitude.  Yet the moon’s orbit around the Earth is irregular, and Newton recognized that because gravity was universal, every planet exerted a force on each other, and the moon was being tugged upon by the sun as well as by the Earth.

Newton’s attempt with the Moon was his last significant scientific endeavor

            In Propositions 65 and 66 of Book 1 of the Principia, Newton applied his new theory to attempt to pin down the moon’s trajectory, but was thwarted by the complexity of the three bodies of the Earth-Moon-Sun system.  For instance, the force of the sun on the moon is greater than the force of the Earth on the moon, which raised the question of why the moon continued to circle the Earth rather than being pulled away to the sun. Newton correctly recognized that it was the Earth-moon system that was in orbit around the sun, and hence the sun caused only a perturbation on the Moon’s orbit around the Earth.  However, because the Moon’s orbit is approximately elliptical, the Sun’s pull on the Moon is not constant as it swings around in its orbit, and Newton only succeeded in making estimates of the perturbation. 

            Unsatisfied with his results in the Principia, Newton tried again, beginning in the summer of 1694, but the problem was too great even for him.  In 1702 he published his research, as far as he was able to take it, on the orbital trajectory of the Moon.  He could pin down the motion to within 10 arc minutes, but this was not accurate enough for reliable navigation, representing an uncertainty of over 10 kilometers at sea—error enough to run aground at night on unseen shoals.  Newton’s attempt with the Moon was his last significant scientific endeavor, and afterwards this great scientist withdrew into administrative activities and occult interests that consumed his remaining time.

Race for the Moon

            The importance of the Moon for navigation was too pressing to ignore, and in the 1740’s a heated competition to be the first to pin down the Moon’s motion developed among three of the leading mathematicians of the day—Leonhard Euler, Jean Le Rond D’Alembert and Alexis Clairaut—who began attacking the lunar problem and each other [1].  Euler in 1736 had published the first textbook on dynamics that used the calculus, and Clairaut had recently returned from Lapland with Maupertuis.  D’Alembert, for his part, had placed dynamics on a firm physical foundation with his 1743 textbook.  Euler was first to publish with a lunar table in 1746, but there remained problems in his theory that frustrated his attempt at attaining the required level of accuracy.  

            At nearly the same time Clairaut and D’Alembert revisited Newton’s foiled lunar theory and found additional terms in the perturbation expansion that Newton had neglected.  They rushed to beat each other into print, but Clairaut was distracted by a prize competition for the most accurate lunar theory, announced by the Russian Academy of Sciences and refereed by Euler, while D’Alembert ignored the competition, certain that Euler would rule in favor of Clairaut.  Clairaut won the prize, but D’Alembert beat him into print. 

            The rivalry over the moon did not end there. Clairaut continued to improve lunar tables by combining theory and observation, while D’Alembert remained more purely theoretical.  A growing animosity between Clairaut and D’Alembert spilled out into the public eye and became a daily topic of conversation in the Paris salons.  The difference in their approaches matched the difference in their personalities, with the more flamboyant and pragmatic Clairaut disdaining the purist approach and philosophy of D’Alembert.  Clairaut succeeded in publishing improved lunar theory and tables in 1752, followed by Euler in 1753, while D’Alembert’s interests were drawn away towards his activities for Diderot’s Encyclopedia.

            The battle over the Moon in the late 1740’s was carried out on the battlefield of perturbation theory.  To lowest order, the orbit of the Moon around the Earth is a Keplerian ellipse, and the effect of the Sun, though creating problems for the use of the Moon for navigation, produces only a small modification—a perturbation—of its overall motion.  Within a decade or two, the accuracy of perturbation theory calculations, combined with empirical observations, had improved to the point that accurate lunar tables had sufficient accuracy to allow ships to locate their longitude to within a kilometer at sea.  The most accurate tables were made by Tobias Mayer, who was awarded posthumously a prize of 3000 pounds by the British Parliament in 1763 for the determination of longitude at sea. Euler received 300 pounds for helping Mayer with his calculations.  This was the same prize that was coveted by the famous clockmaker John Harrison and depicted so brilliantly in Dava Sobel’s Longitude (1995).

Lagrange Points

            Several years later in 1772 Lagrange discovered an interesting special solution to the planar three-body problem with three massive points each executing an elliptic orbit around the center of mass of the system, but configured such that their positions always coincided with the vertices of an equilateral triangle [2].  He found a more important special solution in the restricted three-body problem that emerged when a massless third body was found to have two stable equilibrium points in the combined gravitational potentials of two massive bodies.  These two stable equilibrium points  are known as the L4 and L5 Lagrange points.  Small objects can orbit these points, and in the Sun-Jupiter system these points are occupied by the Trojan asteroids.  Similarly stable Lagrange points exist in the Earth-Moon system where space stations or satellites could be parked. 

For the special case of circular orbits of constant angular frequency ω, the motion of the third mass is described by the Lagrangian

$$L = \frac{1}{2}m_3\left(\dot{x}^2 + \dot{y}^2\right) + \frac{Gm_1m_3}{r_1(t)} + \frac{Gm_2m_3}{r_2(t)}$$

where the potential is time dependent because of the motion of the two larger masses.  Lagrange approached the problem by adopting a rotating reference frame in which the two larger masses m1 and m2 move along the stationary line defined by their centers. The Lagrangian in the rotating frame is

$$L = \frac{1}{2}m_3\left(\dot{x}^2 + \dot{y}^2\right) + m_3\omega\left(x\dot{y} - y\dot{x}\right) + \frac{1}{2}m_3\omega^2\left(x^2 + y^2\right) + \frac{Gm_1m_3}{r_1} + \frac{Gm_2m_3}{r_2}$$

where the effective potential is now time independent.  The first of the two new terms is the Coriolis contribution and the second is the centrifugal term.

Fig. Effective potential for the planar three-body problem and the five Lagrange points where the gradient of the effective potential equals zero. The Lagrange points are displayed on a horizontal cross section of the potential energy shown with equipotential lines. The large circle in the center is the Sun. The smaller circle on the right is a Jupiter-like planet. The points L1, L2 and L3 are each saddle-point equilibria positions and hence unstable. The points L4 and L5 are stable points that can collect small masses that orbit these Lagrange points.

            The effective potential is shown in the figure for m1 = 10m2.  There are five locations where the gradient of the effective potential equals zero.  The point L1 is the equilibrium position between the two larger masses.  The points L2 and L3 are at positions where the centrifugal force balances the gravitational attraction to the two larger masses.  These are also the points that separate local orbits around a single mass from global orbits that orbit the two-body system. The last two Lagrange points, L4 and L5, each sit at the third vertex of an equilateral triangle whose other two vertices are at the positions of the larger masses. The first three Lagrange points are saddle points.  The last two are at maxima of the effective potential.

L1 lies between Earth and the sun at about 1 million miles from Earth. L1 gets an uninterrupted view of the sun, and is currently occupied by the Solar and Heliospheric Observatory (SOHO) and the Deep Space Climate Observatory. L2 also lies a million miles from Earth, but in the opposite direction from the sun. At this point, with the Earth, moon and sun behind it, a spacecraft can get a clear view of deep space. NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) is currently at this spot measuring the cosmic background radiation left over from the Big Bang. The James Webb Space Telescope will move into this region in 2021.
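
The five Lagrange points can be visualized directly from the effective potential. The following short Python sketch contours the co-rotating effective potential, assuming units with G = 1, unit separation between the masses, and the mass ratio m1 = 10m2 used in the figure; the saddle points L1, L2, L3 and the maxima L4, L5 appear among the contours.

import numpy as np
from matplotlib import pyplot as plt

# Sketch: effective potential of the restricted three-body problem in the
# co-rotating frame, with G = 1, unit separation, and m1 = 10*m2 (assumed units).
m1, m2 = 10.0, 1.0
mu = m2/(m1 + m2)                    # mass parameter; masses sit at (-mu,0) and (1-mu,0)
w2 = m1 + m2                         # omega^2 = G*(m1+m2)/a^3 for circular orbits

x = np.linspace(-1.5, 1.5, 401)
y = np.linspace(-1.5, 1.5, 401)
X, Y = np.meshgrid(x, y)
r1 = np.sqrt((X + mu)**2 + Y**2)         # distance to m1
r2 = np.sqrt((X - (1 - mu))**2 + Y**2)   # distance to m2
Ueff = -m1/r1 - m2/r2 - 0.5*w2*(X**2 + Y**2)

plt.contour(X, Y, Ueff, levels=np.linspace(-24, -15.5, 40))
plt.gca().set_aspect('equal')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Effective potential in the rotating frame')
plt.show()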


[1] Gutzwiller, M. C. (1998). “Moon-Earth-Sun: The oldest three-body problem.” Reviews of Modern Physics 70(2): 589-639.

[2] J.L. Lagrange Essai sur le problème des trois corps, 1772, Oeuvres tome 6

Vladimir Arnold’s Cat Map

The 1960’s are known as a time of cultural revolution, but perhaps less known was the revolution that occurred in the science of dynamics.  Three towering figures of that revolution were Stephen Smale (1930 – ) at Berkeley, Andrey Kolmogorov (1903 – 1987) in Moscow and his student Vladimir Arnold (1937 – 2010).  Arnold was only 20 years old in 1957 when he solved Hilbert’s thirteenth problem (that any continuous function of several variables can be constructed with a finite number of two-variable functions).  Only a few years later, his work on the problem of small denominators in dynamical systems provided the finishing touches on the long-elusive explanation of the stability of the solar system (the problem for which Poincaré won the King Oscar Prize in mathematics in 1889 when he discovered chaotic dynamics).  This theory is known as KAM theory, using the first initials of the names of Kolmogorov, Arnold and Moser [1].  Building on his breakthrough in celestial mechanics, Arnold’s work through the 1960’s remade the theory of Hamiltonian systems, creating a shift in perspective that has permanently altered how physicists look at dynamical systems.

Hamiltonian Physics on a Torus

Traditionally, Hamiltonian physics is associated with systems of inertial objects that conserve the sum of kinetic and potential energy, in other words, conservative non-dissipative systems.  But a modern view (after Arnold) of Hamiltonian systems sees them as hyperdimensional mathematical mappings that conserve volume.  The space that these mappings inhabit is phase space, and the conservation of phase-space volume is known as Liouville’s Theorem [2].  The geometry of phase space is called symplectic geometry, and the universal position that symplectic geometry now holds in the physics of Hamiltonian mechanics is largely due to Arnold’s textbook Mathematical Methods of Classical Mechanics (1974, English translation 1978) [3]. Arnold’s famous quote from that text is “Hamiltonian mechanics is geometry in phase space”. 

One of the striking aspects of this textbook is the reduction of phase-space geometry to the geometry of a hyperdimensional torus for a large class of Hamiltonian systems.  If there are as many conserved quantities as there are degrees of freedom in a Hamiltonian system, then the system is called “integrable” (because you can integrate the equations of motion to find a constant of the motion). Then it is possible to map the physics onto a hyperdimensional torus through the transformation of dynamical coordinates into what are known as “action-angle” coordinates [4].  Each independent angle has an associated action that is conserved during the motion of the system.  The periodicity of the dynamical angle coordinate makes it possible to identify it with the angular coordinate of a multi-dimensional torus.  Therefore, every integrable Hamiltonian system can be mapped to motion on a multi-dimensional torus (one dimension for each degree of freedom of the system).

Actually, integrable Hamiltonian systems are among the most boring dynamical systems you can imagine. They literally just go in circles (around the torus). But as soon as you add a small perturbation that cannot be integrated they produce some of the most complex and beautiful patterns of all dynamical systems. It was Arnold’s focus on motions on a torus, and perturbations that shift the dynamics off the torus, that led him to propose a simple mapping that captured the essence of Hamiltonian chaos.

The Arnold Cat Map

Motion on a two-dimensional torus is defined by two angles, and trajectories on a two-dimensional torus are simple helixes. If the periodicities of the motion in the two angles have an integer ratio, the helix repeats itself. However, if the ratio of periods (also known as the winding number) is irrational, then the helix never repeats and passes arbitrarily closely to any point on the surface of the torus. This last case leads to an “ergodic” system, which is a term introduced by Boltzmann to describe a physical system whose trajectory fills phase space. The behavior of a helix for rational or irrational winding number is not terribly interesting. It’s just an orbit going in circles like an integrable Hamiltonian system. The helix can never even cross itself.
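
A quick numerical picture helps here. The short Python sketch below plots an orbit on the 2-torus, unrolled onto the unit square, for a rational winding number and for the golden mean (an illustrative choice, not from the original post); the rational orbit closes on itself, while the irrational orbit never repeats and gradually fills the square.

import numpy as np
from matplotlib import pyplot as plt

# Sketch: trajectories on a 2-torus shown on the unit square.
# theta1 advances at unit rate; theta2 advances at the winding number w.
t = np.linspace(0, 300, 30000)
for w, color in [(2/3, 'r'), ((1 + np.sqrt(5))/2, 'b')]:
    th1 = t % 1                      # first angle (mod 1)
    th2 = (w*t) % 1                  # second angle (mod 1)
    plt.plot(th1, th2, '.', color=color, markersize=0.5)
plt.xlabel('angle 1 (mod 1)')
plt.ylabel('angle 2 (mod 1)')
plt.show()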

However, if you could add a new dimension to the torus (or add a new degree of freedom to the dynamical system), then the helix could pass over or under itself by moving into the new dimension. By weaving around itself, a trajectory can become chaotic, and the set of many trajectories can become as mixed up as a bowl of spaghetti. This can be a little hard to visualize, especially in higher dimensions, but Arnold thought of a very simple mathematical mapping that captures the essential motion on a torus, preserving volume as required for a Hamiltonian system, but with the ability for regions to become all mixed up, just like trajectories in a nonintegrable Hamiltonian system.

A unit square is isomorphic to a two-dimensional torus. This means that there is a one-to-one mapping of each point on the unit square to each point on the surface of a torus. Imagine taking a sheet of paper and forming a tube out of it. One of the dimensions of the sheet of paper is now an angle coordinate that is cyclic, going around the circumference of the tube. Now if the sheet of paper is flexible (like it is made of thin rubber) you can bend the tube around and connect the top of the tube with the bottom, like a bicycle inner tube. The other dimension of the sheet of paper is now also an angle coordinate that is cyclic. In this way a flat sheet is converted (with some bending) into a torus.

Arnold’s key idea was to create a transformation that takes the torus into itself, preserving volume, yet including the ability for regions to pass around each other. Arnold accomplished this with the simple map

$$x_{n+1} = \left(x_n + y_n\right) \bmod 1 \qquad\qquad y_{n+1} = \left(x_n + 2y_n\right) \bmod 1$$

where the modulus 1 takes the unit square into itself. This transformation can also be expressed as a matrix

$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x_n \\ y_n \end{pmatrix}$$

followed by taking modulus 1. The transformation matrix is called a Floquet matrix, and the determinant of the matrix is equal to unity, which ensures that volume is conserved.

Arnold decided to illustrate this mapping by using a crude image of the face of a cat (See Fig. 1). Successive applications of the transformation stretch and shear the cat, which is then folded back into the unit square. The stretching and folding preserve the volume, but the image becomes all mixed up, just like mixing in a chaotic Hamiltonian system, or like an immiscible dye in water that is stirred.

Fig. 1 Arnold’s illustration of his cat map from pg. 6 of V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (Benjamin, 1968) [5]
Fig. 2 Arnold Cat Map operation is an iterated succession of stretching with shear of a unit square, and translation back to the unit square. The mapping preserves and mixes areas, and is invertible.

Recurrence

When the transformation matrix is applied to continuous values, it produces a continuous range of transformed values that become thinner and thinner until the unit square is uniformly mixed. However, if the unit square is discrete, made up of pixels, then something very different happens (see Fig. 3). The image of the cat in this case is composed of a 50×50 array of pixels. For early iterations, the image becomes stretched and mixed, but at iteration 50 there are 4 low-resolution upside-down versions of the cat, and at iteration 75 the cat fully reforms, but is upside-down. Continuing on, the cat eventually reappears fully reformed and upright at iteration 150. Therefore, the discrete case displays a recurrence and the mapping is periodic. Calculating the period of the cat map on lattices can lead to interesting patterns, especially if the lattice is composed of prime numbers [6].

Fig. 3 A discrete cat map has a recurrence period. This example with a 50×50 lattice has a period of 150.
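
The recurrence is easy to reproduce. Here is a minimal Python sketch (an illustration, not the original code) that iterates the cat map on an N×N pixel lattice and counts the iterations until the initial image returns; because the map is a permutation of the pixels, recurrence is guaranteed.

import numpy as np

# Sketch: recurrence of the discrete Arnold cat map on an N x N lattice.
# The map (x,y) -> (x+y, x+2y) mod N permutes the pixels, so any image recurs.
N = 50
yy, xx = np.mgrid[0:N, 0:N]
img = ((xx - N/2)**2 + (yy - N/2)**2 < (N/4)**2).astype(int)   # a simple disk image

def cat_map(im):
    n = im.shape[0]
    y, x = np.mgrid[0:n, 0:n]
    out = np.zeros_like(im)
    out[(x + 2*y) % n, (x + y) % n] = im[y, x]    # new (row, column) of each pixel
    return out

state = cat_map(img)
period = 1
while not np.array_equal(state, img):
    state = cat_map(state)
    period += 1
print('Recurrence period on a %dx%d lattice: %d' % (N, N, period))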

The Cat Map and the Golden Mean

The golden mean, or the golden ratio, φ = 1.618033988749895…, is never far away when working with Hamiltonian systems. Because the golden mean is the “most irrational” of all irrational numbers, it plays an essential role in KAM theory on the stability of the solar system. In the case of Arnold’s cat map, it pops up its head in several ways. For instance, the transformation matrix has eigenvalues

$$\lambda_{\pm} = \frac{3 \pm \sqrt{5}}{2}$$

with the remarkable property that

$$\lambda_+ = \varphi^2 \qquad\qquad \lambda_- = \frac{1}{\varphi^2} \qquad\qquad \lambda_+\lambda_- = 1$$

which guarantees conservation of area.
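
These properties are easy to verify numerically. The few lines of Python below (added as a check) compute the eigenvalues of the cat-map matrix and compare them to the square of the golden mean.

import numpy as np

# Check: eigenvalues of the Arnold cat-map matrix vs. the golden mean
A = np.array([[1, 1],
              [1, 2]])
lam = np.linalg.eigvals(A)
phi = (1 + np.sqrt(5))/2
print('eigenvalues       :', np.sort(lam))
print('1/phi^2 and phi^2 :', 1/phi**2, phi**2)
print('product           :', lam[0]*lam[1])    # equals det(A) = 1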


Selected V. I. Arnold Publications

Arnold, V. I. “Functions of 3 variables.” Doklady Akademii Nauk SSSR 114(4): 679-681. (1957)

Arnold, V. I. “Generation of quasi-periodic motion from a family of periodic motions.” Doklady Akademii Nauk SSSR 138(1): 13. (1961)

Arnold, V. I. “Stability of equilibrium position of a Hamiltonian system of ordinary differential equations in general elliptic case.” Doklady Akademii Nauk SSSR 137(2): 255. (1961)

Arnold, V. I. “Behaviour of an adiabatic invariant when Hamilton’s function is undergoing a slow periodic variation.” Doklady Akademii Nauk SSSR 142(4): 758. (1962)

Arnold, V. I. “Classical theory of perturbations and problem of stability of planetary systems.” Doklady Akademii Nauk SSSR 145(3): 487. (1962)

Arnold, V. I. and Y. G. Sinai. “Small perturbations of automorphisms of a torus.” Doklady Akademii Nauk SSSR 144(4): 695. (1962)

Arnold, V. I. “Small denominators and problems of the stability of motion in classical and celestial mechanics (in Russian).” Usp. Mat. Nauk. 18: 91-192. (1963)

Arnold, V. I. and A. L. Krylov. “Uniform distribution of points on a sphere and some ergodic properties of solutions to linear ordinary differential equations in complex region.” Doklady Akademii Nauk SSSR 148(1): 9. (1963)

Arnold, V. I. “Instability of dynamical systems with many degrees of freedom.” Doklady Akademii Nauk SSSR 156(1): 9. (1964)

Arnold, V. “Sur une propriété topologique des applications globalement canoniques de la mécanique classique.” Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences 261(19): 3719. (1965)

Arnold, V. I. “Applicability conditions and error estimation by averaging for systems which go through resonances in course of evolution.” Doklady Akademii Nauk SSSR 161(1): 9. (1965)


Bibliography

[1] Dumas, H. S. The KAM Story: A friendly introduction to the content, history and significance of Classical Kolmogorov-Arnold-Moser Theory, World Scientific. (2014)

[2] See Chapter 6, “The Tangled Tale of Phase Space” in Galileo Unbound (D. D. Nolte, Oxford University Press, 2018)

[3] V. I. Arnold, Mathematical Methods of Classical Mechanics (Nauk 1974, English translation Springer 1978)

[4] See Chapter 3, “Hamiltonian Dynamics and Phase Space” in Introduction to Modern Dynamics, 2nd ed. (D. D. Nolte, Oxford University Press, 2019)

[5] V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (Benjamin, 1968)

[6] Gaspari, G. “THE ARNOLD CAT MAP ON PRIME LATTICES.” Physica D-Nonlinear Phenomena 73(4): 352-372. (1994)

Biased Double-well Potential: Bistability, Bifurcation and Hysteresis

Bistability, bifurcation and hysteresis are ubiquitous phenomena that arise from nonlinear dynamics and have considerable importance for technology applications.  For instance, the hysteresis associated with the flipping of magnetic domains under magnetic fields is the central mechanism for magnetic memory, and bistability is a key feature of switching technology.

… one of the most commonly encountered bifurcations is called a saddle-node bifurcation, which is the bifurcation that occurs in the biased double-well potential.

One of the simplest models for bistability and hysteresis is the one-dimensional double-well potential biased by a changing linear potential.  An example of a double-well potential with a bias is

$$V(x) = \frac{1}{4}x^4 - x^2 - cx$$

where the parameter c is a control parameter (bias) that can be adjusted or that changes slowly in time c(t).  This dynamical system is also known as the Duffing oscillator. The net double-well potentials for several values of the control parameter c are shown in Fig. 1.  With no bias, there are two degenerate energy minima.  As c is made negative, the left well has the lowest energy, and as c is made positive the right well has the lowest energy.
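
For a quick look at how the bias tilts the wells, the following few lines of Python (a sketch consistent with the potential above, not the code from the original post) plot V(x) = x⁴/4 − x² − cx for several biases, reproducing the behavior described in Fig. 1.

import numpy as np
from matplotlib import pyplot as plt

# Sketch: biased double-well potential for several values of the control parameter c
x = np.linspace(-2.5, 2.5, 500)
for c in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    V = 0.25*x**4 - x**2 - c*x
    plt.plot(x, V, label='c = %.1f' % c)
plt.xlabel('x')
plt.ylabel('V(x)')
plt.legend()
plt.show()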

The dynamics of this potential energy profile can be understood by imagining a small ball that responds to the local forces exerted by the potential.  For large negative values of c the ball will have its minimum energy in the left well.  As c is increased, the energy of the left well increases, and rises above the energy of the right well.  If the ball began in the left well, even when the left well has a higher energy than the right, there is a potential barrier that the ball cannot overcome and it remains on the left.  This local minimum is a stable equilibrium, but it is called “metastable” because it is not a global minimum of the system.  Metastability is the origin of hysteresis.

Fig. 1 A biased double-well potential in one dimension. The thresholds to destroy the local metastable minima are c = +/-1.05. For values beyond threshold, only a single minimum exists with no barrier. Hysteresis is caused by the mass being stuck in the metastable (upper) minimum because it has insufficient energy to overcome the potential barrier, until the barrier disappears at threshold and the ball rolls all the way down to the bottom to the new location. When the bias is slowly reversed, the new location becomes metastable, until the ball can overcome the barrier and roll down to its original minimum, etc.

           Once sufficient bias is applied that the local minimum disappears, the ball will roll downhill to the new minimum on the right, and in the presence of dissipation, it will come to rest in the new minimum.  The bias can then be slowly lowered, reversing this process. Because of the potential barrier, the bias must change sign and be strong enough to remove the stability of the now metastable fixed point with the ball on the right, allowing the ball to roll back down to its original location on the left.  This “overshoot” defines the extent of the hysteresis. The fact that there are two minima, and that one is metastable with a barrier between the two, produces “bistability”, meaning that there are two stable fixed points for the same control parameter.

           For illustration, assume a mass obeys the flow equation

$$\ddot{x} = -\gamma\dot{x} - \frac{\partial V}{\partial x} = -\gamma\dot{x} - x^3 + 2x + c(t)$$

including a damping term with coefficient γ, where the force is the negative gradient of the potential energy.  The bias parameter c can be time dependent, beginning beyond the negative threshold and slowly increasing until it exceeds the positive threshold, and then reversing and decreasing again.  The position of the mass is locally a damped oscillator until a threshold is passed, and then the mass falls into the global minimum, as shown in Fig. 2. As the bias is reversed, it remains in the metastable minimum on the right until the control parameter passes threshold, and then the mass drops into the left minimum that is now a global minimum.

Fig. 2 Hysteresis diagram. The mass begins in the left well. As the parameter c increases, the mass remains in the well, even though it is no longer the global minimum when c becomes positive. When c passes the positive threshold (around 1.05 for this example), the mass falls into the right well, with damped oscillation. Then the control parameter c is decreased slowly until the negative threshold is passed, and the mass switches to the left well with damped oscillations. The difference between the “switch up” and “switch down” values of the control parameter represents the “hysteresis” of this system.

The sudden switching of the biased double-well potential represents what is known as a “bifurcation”. A bifurcation is a sudden change in the qualitative behavior of a system caused by a small change in a control variable. Usually, a bifurcation occurs when the number of attractors of a system changes. There is a fairly large menagerie of different types of bifurcations, but one of the most commonly encountered bifurcations is called a saddle-node bifurcation, which is the bifurcation that occurs in the biased double-well potential. In fact, there are two saddle-node bifurcations.

Bifurcations are easily portrayed by creating a joint space between phase space and the one (or more) control parameters that induce the bifurcation. The phase space of the double well is two dimensional (position, velocity) with three fixed points, but the change in the number of fixed points can be captured by taking a projection of the phase space onto a lower-dimensional manifold. In this case, the projection is simply along the x-axis. Therefore a “co-dimensional phase space” can be constructed with the x-axis as one dimension and the control parameter as the other. This is illustrated in Fig. 3. The cubic curve traces out the solutions to the fixed-point equation

$$-x^3 + 2x + c = 0$$

For a given value of the control parameter c there are either three solutions or one solution. The value of c at which the number of solutions changes discontinuously is the bifurcation point c*. Two examples of the potential function are shown on the right for c = +1 and c = -0.5, showing the locations of the three fixed points.

Fig. 3 The co-dimension phase space combines the one-dimensional dynamics along the position x with the control parameter. For a given value of c, there are three or one solution for the fixed point. When there are three solutions, two are stable (the double minima) and one is unstable (the saddle). As the magnitude of the bias increases, one stable node annihilates with the unstable node (a minimum and the saddle merge) and the dynamics “switch” to the other minimum.

The threshold value in this example is c* = 1.05. When |c| < c* the two stable fixed points are the two minima of the double-well potential, and the unstable fixed point is the saddle between them. When |c| > c* then the single stable fixed point is the single minimum of the potential function. The saddle-node bifurcation takes its name from the fact (illustrated here) that the unstable fixed point is a saddle, and at the bifurcation the saddle point annihilates with one of the stable fixed points.

The following Python code illustrates the behavior of a biased double-well potential, with damping, in which the control parameter changes slowly with a sinusoidal time dependence.

Python Code

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Apr 17 15:53:42 2019
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from scipy import signal
from matplotlib import pyplot as plt

plt.close('all')
T = 400
Amp = 3.5

def solve_flow(y0,c0,lim = [-3,3,-3,3]):

    def flow_deriv(x_y, t, c0):
        # Compute the time-derivative of the biased double-well flow
        x, y = x_y

        return [y,-0.5*y - x**3 + 2*x + x*(2*np.pi/T)*Amp*np.cos(2*np.pi*t/T) + Amp*np.sin(2*np.pi*t/T)]

    tsettle = np.linspace(0,T,101)          # let transients settle first
    yinit = y0
    x_tsettle = integrate.odeint(flow_deriv,yinit,tsettle,args=(c0,))
    y0 = x_tsettle[100,:]
    
    t = np.linspace(0, 1.5*T, 2001)
    x_t = integrate.odeint(flow_deriv, y0, t, args=(c0,))
    c  = Amp*np.sin(2*np.pi*t/T)
        
    return t, x_t, c

eps = 0.0001

xc = np.zeros(100)          # control-parameter values
X = np.zeros(100)           # first root of the fixed-point equation
Y = np.zeros(100)           # second root
Z = np.zeros(100)           # third root

for loop in range(0,100):
    c = -1.2 + 2.4*loop/100 + eps
    xc[loop] = c
    
    coeff = [-1, 0, 2, c]       # fixed-point equation  -x^3 + 2x + c = 0
    y = np.roots(coeff)
    
    xtmp = np.real(y[0])
    ytmp = np.real(y[1])
    
    X[loop] = np.min([xtmp,ytmp])
    Y[loop] = np.max([xtmp,ytmp])
    Z[loop] = np.real(y[2])


plt.figure(1)
lines = plt.plot(xc,X,xc,Y,xc,Z)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Roots')

y0 = [1.9, 0]
c0 = -2.

t, x_t, c = solve_flow(y0,c0)
y1 = x_t[:,0]
y2 = x_t[:,1]

plt.figure(2)
lines = plt.plot(t,y1)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.ylabel('X Position')
plt.xlabel('Time')

plt.figure(3)
lines = plt.plot(c,y1)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.ylabel('X Position')
plt.xlabel('Control Parameter')
plt.title('Hysteresis Figure')

Further Reading:

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time (2nd Edition) (Oxford University Press, 2019)

The Pendulum Lab