A Short History of Fractal Dimension

It is second nature to think of integer dimensions:  A line is one dimensional.  A plane is two dimensional. A volume is three dimensional.  A point has no dimensions.

It is harder to think in four dimensions and higher, but even here it is a simple extrapolation of lower dimensions.  Consider the basis vectors spanning a three-dimensional space consisting of the triples of numbers
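(1, 0, 0),  (0, 1, 0),  (0, 0, 1)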

Then a four dimensional hyperspace is just created by adding a new “tuple” to the list
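(1, 0, 0, 0),  (0, 1, 0, 0),  (0, 0, 1, 0),  (0, 0, 0, 1)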

and so on to 5 and 6 dimensions and on.  Child’s play!

But how do you think of fractional dimensions?  What is a fractional dimension?  For that matter, what is a dimension?  Even the integer dimensions began to unravel when Georg Cantor showed in 1877 that the line and the plane, which clearly had different “dimensionalities”, both had the same cardinality and could be put into a one-to-one correspondence.  From then onward the concept of dimension had to be rebuilt from the ground up, leading ultimately to fractals.

Here is a short history of fractal dimension, partially excerpted from my history of dynamics in Galileo Unbound (Oxford University Press, 2018) pg. 110 ff.  This blog page presents the history through a set of publications that successively altered how mathematicians thought about curves in spaces, beginning with Karl Weierstrass in 1872.

Karl Weierstrass (1872)

Karl Weierstrass (1815 – 1897) was studying convergence properties of infinite power series in 1872 when he began with a problem that Bernhard Riemann had given to his students some years earlier.  Riemann had asked whether the function
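f(x) = Σ sin(n²πx)/n²      (summed over n = 1, 2, 3, …)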

was continuous everywhere but not differentiable.  This simple question about a simple series was surprisingly hard to answer (it was not solved until Hardy provided the proof in 1916 [1]).  Therefore, Weierstrass conceived of a simpler infinite sum that was continuous everywhere and for which he could calculate left and right limits of derivatives at any point.  This function is
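W(x) = Σ aⁿ cos(bⁿπx)      (summed over n = 0, 1, 2, …)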

where b is a large odd integer and a is positive and less than one.  Weierstrass showed that the left and right derivatives failed to converge to the same value, no matter where he took his point.  In short, he had discovered a function that was continuous everywhere, but had a derivative nowhere [2].  This pathological function, called a “Monster” by Charles Hermite, is now called the Weierstrass function.

Beyond the strange properties that Weierstrass sought, the Weierstrass function would turn out to be a fractal curve (recognized much later by Besicovitch and Ursell in 1937 [3]) with a fractal (Hausdorff) dimension given by
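DH = 2 + ln(a)/ln(b)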

although this was not proven until very recently [4].  An example of the function is shown in Fig. 1 for a = 0.5 and b = 5.  This specific curve has a fractal dimension D = 1.5693.  Notably, this is a number that is greater than 1 dimension (the topological dimension of the curve) but smaller than 2 dimensions (the embedding dimension of the curve).  The curve tends to fill more of the two dimensional plane than a straight line, so its intermediate fractal dimension has an intuitive feel about it.  The more “monstrous” the curve looks, the closer its fractal dimension approaches 2.

Fig. 1  Weierstrass’ “Monster” (1872) with a = 0.5, b = 5.  This continuous function is nowhere differentiable.  It is a fractal with fractal dimension D = 2 + ln(0.5)/ln(5) = 1.5693.

Georg Cantor (1883)

Partially inspired by Weierstrass’ discovery, Georg Cantor (1845 – 1918) published an example of an unusual ternary set in 1883 in “Grundlagen einer allgemeinen Mannigfaltigkeitslehre” (“Foundations of a General Theory of Aggregates”) [5].  The set generates a function (the Cantor staircase) that has a derivative equal to zero almost everywhere, yet which climbs from zero to one across the unit interval.  It is a striking example of a function that is not equal to the integral of its derivative!  Cantor demonstrated that his set has the same cardinality as the real numbers themselves (the cardinality of the continuum).  But whereas the real numbers are uniformly distributed, Cantor’s set is “clumped”.  This clumpiness is an essential feature that distinguishes it from the one-dimensional number line, and it raised important questions about dimensionality. The fractal dimension of the ternary Cantor set is DH = ln(2)/ln(3) = 0.6309.

Fig. 2  The 1883 Cantor set (below) and the Cantor staircase (above, as the indefinite integral over the set).

Giuseppe Peano (1890)

In 1878, in a letter to his friend Richard Dedekind, Cantor showed that there was a one-to-one correspondence between the real numbers and the points in any n-dimensional space.  He was so surprised by his own result that he wrote to Dedekind “I see it, but I don’t believe it.”  The solid concepts of dimension and dimensionality were dissolving before his eyes.  What does it mean to trace the path of a trajectory in an n-dimensional space, if all the points in n dimensions were just numbers on a line?  What could such a trajectory look like?  A graphic example of a plane-filling path was constructed in 1890 by Peano [6], who was a peripatetic mathematician with interests that wandered broadly across the landscape of the mathematical problems of his day—usually ahead of his time.  Only two years after he had axiomatized linear vector spaces [7], Peano constructed a continuous curve that filled space. 

The construction of Peano’s curve proceeds by taking a square and dividing it into 9 equal sub-squares.  Lines connect the centers of each of the sub-squares.  Then each sub-square is divided again into 9 sub-squares whose centers are all connected by lines.  At this stage, the original pattern, repeated 9 times, is connected together by 8 links, forming a single curve.  This process is repeated infinitely many times, resulting in a curve that passes through every point of the original plane square.  In this way, a line is made to fill a plane.  Where Cantor had proven abstractly that the cardinality of the real numbers was the same as the points in n-dimensional space, Peano created a specific example.  This was followed quickly by another construction, invented by David Hilbert in 1891, that divided the square into four instead of nine, simplifying the construction, but also showing that such constructions were easily generated.

Fig. 3 Peano’s (1890) and Hilbert’s (1891) plane-filling curves.  When the iterations are taken to infinity, the curves approach every point of two-dimensional space arbitrarily closely, giving them a dimension DH = DE = 2, although their topological dimensions are DT = 1.

Helge von Koch (1904)

The space-filling curves of Peano and Hilbert have the extreme property that a one-dimensional curve approaches every point in a two-dimensional space.  This ability of a one-dimensional trajectory to fill space mirrored the ergodic hypothesis that Boltzmann relied upon as he developed statistical mechanics.  The examples of Peano and Hilbert, together with Boltzmann’s hypothesis, inspired searches for continuous curves whose dimensionality similarly exceeded one dimension, yet without filling space.  Weierstrass’ Monster was already one such curve, existing in some dimension greater than one but not filling the plane.  The construction of the Monster required an infinite series of harmonic functions, and the resulting curve was single-valued on its domain of real numbers.

An alternative approach was proposed by Helge von Koch (1870 – 1924), a Swedish mathematician with an interest in number theory.  He suggested in 1904 that a set of straight line segments could be joined together, and then shrunk by a scale factor to act as new segments of the original pattern [8].  The construction of the Koch curve is shown in Fig. 4.  When the process is taken to its limit, it produces a curve, differentiable nowhere, which snakes through two dimensions.  When identical curves are joined around an equilateral triangle, the resulting closed curve resembles a snowflake, and the construction is known as “Koch’s Snowflake”.

The Koch curve begins in generation 1 with N0 = 4 elements, each reduced in length by the factor b = 1/3.  In the next generation, each element is replaced by a copy of the whole four-element pattern shrunk again by the factor b, and so on.  The number of elements varies with the observation scale according to the equation
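N(bⁿ) = (bⁿ)^(-D) = N0ⁿ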

where D is called the fractal dimension.  In the example of the Koch curve, the fractal dimension is
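D = ln(N0)/ln(1/b) = ln(4)/ln(3) = 1.26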

which is a number less than its embedding dimension DE = 2.  The fractal is embedded in 2D but has a fractional dimension that is greater than its topological dimension DT = 1.

Fig. 4  Generation of a Koch curve (1904).  The fractal dimension is D = ln(4)/ln(3) = 1.26.  At each stage, four elements are reduced in size by a factor of 3.  The “length” of the curve approaches infinity as the features get smaller and smaller.  But the scaling of the length with size is determined uniquely by the fractal dimension.

Waclaw Sierpinski (1915)

Waclaw Sierpinski (1882 – 1969) was a Polish mathematician studying for his doctorate at the Jagiellonian University in Krakow when he came across a theorem that every point in the plane can be defined by a single coordinate.  Intrigued by such an unintuitive result, he dived deep into Cantor’s set theory after he was appointed as a faculty member at the university in Lvov.  He began to construct curves that had more specific properties than the Peano or Hilbert curves, such as a curve that passes through every interior point of a unit square but that encloses an area equal to only 5/12 = 0.4167.  Sierpinski became interested in the topological properties of such sets.

Sierpinski considered how to define a curve that was embedded in DE = 2 but that was NOT constructed as a topological dimension DT = 1 curve as the curves of Peano, Hilbert, Koch (and even his own) had been.  To demonstrate this point, he described a construction that began with a topological dimension DT = 2 object, a planar triangle, from which the open set of its central inverted triangle is removed, leaving its boundary points.  The process is continued iteratively to all scales [9].  The resulting point set is shown in Fig. 5 and is called the Sierpinski gasket.  What is left after all the internal triangles are removed is a point set that can be made discontinuous by cutting it at a finite set of points.  This is shown in Fig. 5 by the red circles.  Each circle, no matter the size, cuts the set at three points, making the resulting set discontinuous.  Ten years later, Karl Menger would show that this property of discontinuous cuts determined the topological dimension of the Sierpinski gasket to be DT = 1.  The embedding dimension is of course DE = 2, and the fractal dimension of the Sierpinski gasket is
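DH = ln(3)/ln(2) = 1.5850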

Fig. 5 The Sierpinski gasket.  The central triangle is removed (leaving its boundary) at each scale.  The pattern is self-similar with a fractal dimension DH = 1.5850.  Unintuitively, it has a topological dimension DT = 1.

Felix Hausdorff (1918)

The work by Cantor, Peano, von Koch and Sierpinski had created a crisis in geometry as mathematicians struggled to rescue concepts of dimensionality.  An important byproduct of that struggle was a much deeper understanding of concepts of space, especially in the hands of Felix Hausdorff. 

Felix Hausdorff (1868 – 1942) was born in Breslau, Prussia, and educated in Leipzig.  In his early years as a doctoral student, and as an assistant professor at Leipzig, he was a practicing mathematician by day and a philosopher and playwright by night, publishing under the pseudonym Paul Mongré.  He was at the University of Bonn working on set theory when the Greek mathematician Constantin Carathéodory published a paper in 1914 that showed how to construct a p-dimensional set in a q-dimensional space [10].  Hausdorff realized that he could apply similar ideas to the Cantor set.  He showed that the d-dimensional outer measure of the Cantor set jumps discontinuously from infinity to zero as the trial dimension d increases smoothly through a critical value.  The critical value where the measure changes its character became known as the Hausdorff dimension [11].

For the Cantor ternary set, the Hausdorff dimension is exactly DH = ln(2)/ln(3) = 0.6309.  This value for the dimension is less than the embedding dimension DE = 1 of the support (the real numbers on the interval [0, 1]), but it is also greater than DT = 0, which would hold for a countable number of points on the interval.  The work by Hausdorff became well known in the mathematics community, which applied the idea to a broad range of point sets like Weierstrass’ Monster and the Koch curve.

It is important to keep in perspective what Hausdorff’s work meant in its own time.  For instance, although the curves of Weierstrass, von Koch and Sierpinski were understood to present a challenge to concepts of dimension, it was only after Hausdorff that mathematicians began to think in terms of fractional dimensions and to calculate the fractional dimensions of these earlier point sets.  Despite the fact that Sierpinski created one of the most iconic fractals, one that we use as an example every day, he was unaware at the time that he was doing so.  His interest was topological—creating a curve for which any cut at any point would create disconnected subsets, starting with objects (triangles) of topological dimension DT = 2.  In this way, talking about the early fractal objects tends to be anachronistic, using language to describe them that had not yet been invented at that time.

This perspective is also true for the ideas of topological dimension.  For instance, even Sierpinski was not fully tuned into the problems of defining topological dimension.  It turns out that what he created was a curve of topological dimension DT = 1, but that would only become clear later with the work of the Austrian mathematician Karl Menger.

Karl Menger (1926)

The day that Karl Menger (1902 – 1985) was born, his father, Carl Menger (1840 – 1921), lost his job.  Carl Menger was one of the founders of the famous Viennese school that established the marginalist view of economics.  However, Carl was not married to Karl’s mother, which was frowned upon by polite Vienna society, so he had to relinquish his professorship.  Despite his father’s reduction in status, Karl received an excellent education at a Viennese gymnasium (high school).  Among his classmates were Wolfgang Pauli (Nobel Prize for Physics in 1945) and Richard Kuhn (Nobel Prize for Chemistry in 1938).  When Karl began attending the University of Vienna he studied physics, but the mathematics professor Hans Hahn opened his eyes to the fascinating work on analysis that was transforming mathematics at that time, so Karl shifted his studies to mathematical analysis, specifically concerning conceptions of “curves”.

Menger made important contributions to the history of fractal dimension as well as to the history of topological dimension.  In his approach to defining the intrinsic topological dimension of a point set, he described the construction of a point set embedded in three dimensions that has zero volume, an infinite surface area, and a fractal dimension between 2 and 3.  The object is shown in Fig. 6 and is called the Menger “sponge” [12].  The Menger sponge is a fractal with a fractal dimension DH = ln(20)/ln(3) = 2.7268.  The face of the sponge is also known as the Sierpinski carpet.  The fractal dimension of the Sierpinski carpet is DH = ln(8)/ln(3) = 1.8928.

Fig. 6 Menger Sponge. Embedding dimension DE = 3. Fractal dimension DH = ln(20)/ln(3) = 2.7268. Topological dimension DT = 1: all one-dimensional metric spaces can be contained within the Menger sponge point set. Each face is a Sierpinski carpet with fractal dimension DH = ln(8)/ln(3) = 1.8928.

The striking feature of the Menger sponge is its topological dimension.  Menger created a new definition of topological dimension that partially resolved the crisis created by Cantor when he showed that every point on the unit square can be defined by a single coordinate.  This had put the one-dimensional line in one-to-one correspondence with the two-dimensional plane.  Yet the topology of a two-dimensional object is clearly different from the topology of a line.  Menger found a simple definition that showed why a plane is topologically different from a line (and, more generally, why spaces of different dimension are topologically distinct), despite Cantor’s conundrum.  The answer came from the idea of making cuts on a point set and seeing if the cut created disconnected subsets.

As a simple example, take a 1D line.  The removal of a single point creates two disconnected sub-lines.  The intersection of the cut with the line is 0-dimensional, and Menger showed that this defines the line as 1-dimensional.  Similarly, a line cuts the unit square into two parts.  The intersection of the cut with the plane is 1-dimensional, signifying that the plane is 2-dimensional.  In other words, an (n-1)-dimensional intersection of the boundary of a small neighborhood with the point set indicates that the point set has a dimension of n.  Generalizing this idea, looking at the Sierpinski gasket in Fig. 5, the boundary of a small circular region, if placed appropriately (as in the figure), intersects the Sierpinski gasket at three points of dimension zero.  Hence, the topological dimension of the Sierpinski gasket is one.  Menger was likewise able to show that his sponge also had a topology that was one-dimensional, DT = 1, despite its embedding dimension of DE = 3.  In fact, all 1-dimensional metric spaces can be fit inside a Menger sponge.

Benoit Mandelbrot (1967)

Benoit Mandelbrot (1924 – 2010) was born in Warsaw, and his family emigrated to Paris in 1935.  He attended the Ecole Polytechnique where he studied under Gaston Julia (1893 – 1978) and Paul Lévy (1886 – 1971).  Both Julia and Lévy made significant contributions to the field of self-similar point sets and made a lasting impression on Mandelbrot.  He went to Caltech for a master’s degree in aeronautics and then received a PhD in mathematical sciences from the University of Paris.  In 1958 Mandelbrot joined the research staff of the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, where he worked for over 35 years on topics of information theory and economics, always with a view to the properties of self-similar sets and time series.

In 1967 Mandelbrot published one of his early important papers on the self-similar properties of the coastline of Britain.  He proposed that many natural features display statistical self-similarity, which he applied to coastlines.  He published the work as “How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension” [13] in Science magazine, where he showed that the length of the coastline diverged with a Hausdorff dimension equal to D = 1.25.  Working at IBM, a world leader in computers, he had ready access to their power as well as their visualization capabilities.  Therefore, he was one of the first to begin exploring the graphical character of self-similar maps and point sets.

During one of his sabbaticals at Harvard University he began exploring the properties of Julia sets (named after his former teacher at the Ecole Polytechnique).  The Julia set is a self-similar point set that is easily visualized in the complex plane (two dimensions).  As Mandelbrot studied the convergence or divergence of the iterates of the Julia mapping, he discovered an infinitely nested pattern that was both beautiful and complex.  This has since become known as the Mandelbrot set.

Fig. 7 Mandelbrot set.

Later, in 1975, Mandelbrot coined the term fractal to describe these self-similar point sets, and he began to realize that these types of sets were ubiquitous in nature, ranging from the structure of trees and drainage basins, to the patterns of clouds and mountain landscapes.  He published his highly successful and influential book The Fractal Geometry of Nature in 1982, introducing fractals to the wider public and launching a generation of hobbyists interested in computer-generated fractals.  The rise of fractal geometry coincided with the rise of chaos theory that was aided by the same computing power.  For instance, important geometric structures of chaos theory, known as strange attractors, have fractal geometry. 

Appendix:  Box Counting

When confronted by a fractal of unknown structure, one of the simplest methods to find the fractal dimension is box counting.  This method is shown in Fig. 8.  The fractal set is covered by a set of boxes of size b, and the number of boxes that contain at least one point of the fractal set is counted.  As the boxes are reduced in size, the number of covering boxes increases as
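N(b) ∝ b^(-D)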

To be numerically accurate, this method must be iterated over several orders of magnitude in box size.  The number of boxes covering a fractal has this characteristic power-law dependence, as shown in Fig. 8, and the fractal dimension is obtained as the slope of log N(b) versus log(1/b).

Fig. 8  Calculation of the fractal dimension using box counting.  At each generation, the size of the grid is reduced by a factor of 3.  The number of boxes that contain some part of the fractal curve increases as N(b) ∝ b^(-D), where b is the scale.
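To make the procedure concrete, here is a minimal box-counting sketch in Python (not the code used to generate Fig. 8).  It assumes the fractal is supplied as an array of (x, y) sample points, and the grid sizes follow the factor-of-3 reduction described in the caption; the function name and the list of scales are illustrative choices.

import numpy as np

def box_counting_dimension(points, scales=(1/3, 1/9, 1/27, 1/81, 1/243)):
    """Estimate the fractal dimension of a 2D point set by box counting.

    points : (N, 2) array of x, y samples lying on the fractal
    scales : box sizes b; the count N(b) of occupied boxes grows as b**(-D)
    """
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)   # normalize to the unit square
    counts = []
    for b in scales:
        # assign each point to a box of size b and count the distinct occupied boxes
        boxes = np.floor(pts / b).astype(int)
        counts.append(len(np.unique(boxes, axis=0)))
    # the slope of log N versus log(1/b) is the box-counting dimension
    D, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return D

Applied to a dense sampling of the Koch curve, the fitted slope should come out close to ln(4)/ln(3) = 1.26.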


References

[1] Hardy, G. (1916). “Weierstrass’s non-differentiable function.” Transactions of the American Mathematical Society 17: 301-325.

[2] Weierstrass, K. (1872). “Über continuirliche Functionen eines reellen Arguments, die für keinen Werth des letzteren einen bestimmten Differentialquotienten besitzen.” Communication à l’Académie Royale des Sciences II: 71-74.

[3] Besicovitch, A. S. and H. D. Ursell (1937). “Sets of fractional dimensions: On dimensional numbers of some continuous curves.” J. London Math. Soc. 1(1): 18-25.

[4] Shen, W. (2018). “Hausdorff dimension of the graphs of the classical Weierstrass functions.” Mathematische Zeitschrift. 289(1–2): 223–266.

[5] Cantor, G. (1883). Grundlagen einer allgemeinen Mannigfaltigkeitslehre. Leipzig, B. G. Teubner.

[6] Peano, G. (1890). “Sur une courbe qui remplit toute une aire plane.” Mathematische Annalen 36: 157-160.

[7] Peano, G. (1888). Calcolo geometrico secondo l’Ausdehnungslehre di H. Grassmann e preceduto dalle operazioni della logica deduttiva. Turin, Fratelli Bocca Editori.

[8] Von Koch, H. (1904). “Sur une courbe continue sans tangente obtenue par une construction géométrique élémentaire.” Arkiv för Matematik, Astronomi och Fysik 1: 681-704.

[9] Sierpinski, W. (1915). “Sur une courbe dont tout point est un point de ramification.” Comptes Rendus Hebdomadaires des Seances de l’Academie des Sciences de Paris 160: 302-305.

[10] Carathéodory, C. (1914). “Über das lineare Mass von Punktmengen – eine Verallgemeinerung des Längenbegriffs.” Gött. Nachr. IV: 404–406.

[11] Hausdorff, F. (1919). “Dimension und äusseres Mass.” Mathematische Annalen 79: 157-179.

[12] Menger, Karl (1926), “Allgemeine Räume und Cartesische Räume. I.”, Communications to the Amsterdam Academy of Sciences. English translation reprinted in Edgar, Gerald A., ed. (2004), Classics on fractals, Studies in Nonlinearity, Westview Press. Advanced Book Program, Boulder, CO

[13] Mandelbrot, B. (1967). “How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension.” Science 156(3775): 636-638.

Edward Lorenz’ Chaotic Butterfly

The butterfly effect is one of the most widely known principles of chaos theory. It has become a meme, propagating through popular culture in movies, books, TV shows and even casual conversation.

Can a butterfly flapping its wings in Florida send a hurricane to New York?

The origin of the butterfly effect is — not surprisingly — the image of a butterfly-like set of trajectories that was generated, in one of the first computer simulations of chaos theory, by Edward Lorenz.

Lorenz’ Butterfly

Excerpted from Galileo Unbound (Oxford, 2018) pg. 215

When Edward Lorenz (1917 – 2008) was a child, he memorized all the perfect squares up to ten thousand.  This obvious interest in mathematics led him to a master’s degree in the subject at Harvard in 1940 under the supervision of George Birkhoff.  Lorenz’s master’s thesis was on an aspect of Riemannian geometry, but his foray into nonlinear dynamics was triggered by the intervention of World War II.  Only a few months before he was to receive his doctorate in mathematics from Harvard, the Japanese bombed Pearl Harbor.

Lorenz left the PhD program at Harvard to join the United States Army Air Force to train as a weather forecaster in early 1942, and he took courses on forecasting and meteorology at MIT.  After receiving a second master’s degree, this time in meteorology, Lorenz was posted to Hawaii, then to Saipan and finally to Guam.  His area of expertise was in high-level winds, which were important for high-altitude bombing missions during the final months of the war in the Pacific.  After the Japanese surrender, Lorenz returned to MIT, where he continued his studies in meteorology, receiving his doctorate degree in 1948 with a thesis on the application of fluid dynamical equations to predict the motion of storms. 

One of Lorenz’ colleagues at MIT was Norbert Wiener (1894 – 1964), with whom he sometimes played chess during lunch at the faculty club.  Wiener had published his landmark book Cybernetics: Or Control and Communication in the Animal and the Machine in 1948, which arose out of the apparently mundane problem of gunnery control during the Second World War.  As an abstract mathematician, Wiener attempted to apply his cybernetic theory to the complexities of weather, and he developed a theorem concerning nonlinear fluid dynamics which appeared to show that linear interpolation, of sufficient resolution, would suffice for weather forecasting, possibly even long-range forecasting.  Many on the meteorology faculty embraced this theorem because it fell in line with common practices of the day in which tomorrow’s weather was predicted using linear regression on measurements taken today.  However, Lorenz was skeptical, having acquired a detailed understanding of atmospheric energy cascades as larger vortices induced smaller vortices all the way down to the molecular level, dissipating as heat, and then all the way back up again as heat drove large-scale convection.  This was clearly not a system that would yield to linearization.  Therefore, Lorenz determined to solve nonlinear fluid dynamics models to test this conjecture.

Even with a computer in hand, the atmospheric equations needed to be simplified to make the calculations tractable.  Lorenz was more a scientist than an engineer, and more of a meteorologist than a forecaster.  He did not hesitate to make simplifying assumptions if they retained the correct phenomenological behavior, even if they no longer allowed for accurate weather predictions. 

He had simplified the number of atmospheric equations down to twelve.  Progress was good, and by 1961, he had completed a large initial numerical study.  He focused on nonperiodic solutions, which he suspected would deviate significantly from the predictions made by linear regression, and this hunch was vindicated by his numerical output.  One day, as he was testing his results, he decided to save time by starting the computations midway by using mid-point results from a previous run as initial conditions.  He typed in the three-digit numbers from a paper printout and went down the hall for a cup of coffee.  When he returned, he looked at the printout of the twelve variables and was disappointed to find that they were not related to the previous full-time run.  He immediately suspected a faulty vacuum tube, as often happened.  But as he looked closer at the numbers, he realized that, at first, they tracked very well with the original run, but then began to diverge more and more rapidly until they lost all connection with the first-run numbers.  His initial conditions were correct to a part in a thousand, but this small error was magnified exponentially as the solution progressed.

At this point, Lorenz recalled that he “became rather excited”.  He was looking at a complete breakdown of predictability in atmospheric science.  If radically different behavior arose from the smallest errors, then no measurements would ever be accurate enough to be useful for long-range forecasting.  At a more fundamental level, this was a break with a long-standing tradition in science and engineering that clung to the belief that small differences produced small effects.  What Lorenz had discovered, instead, was that the deterministic solution to his 12 equations was exponentially sensitive to initial conditions (known today as SIC). 

The Lorenz Equations

Over the following months, he was able to show that SIC was a result of the nonperiodic solutions.  The more Lorenz became familiar with the behavior of his equations, the more he felt that the 12-dimensional trajectories had a repeatable shape.  He tried to visualize this shape, to get a sense of its character, but it is difficult to visualize things in twelve dimensions, and progress was slow.  Then Lorenz found that when the solution was nonperiodic (the necessary condition for SIC), four of the variables settled down to zero, leaving all the dynamics to the remaining three variables. 

Lorenz narrowed the equations of atmospheric instability down to three variables: the stream function, the change in temperature and the deviation in linear temperature. The only parameter in the stream function is something known as the Prandtl number. This is a dimensionless number which is the ratio of the kinematic viscosity of the fluid to its thermal diffusion coefficient and is a physical property of the fluid. The only parameter in the change in temperature is the Rayleigh number, which is a dimensionless parameter proportional to the difference in temperature between the top and the bottom of the fluid layer. The final parameter, in the equation for the deviation in linear temperature, is the ratio of the height of the fluid layer to the width of the convection rolls. The final simplified model is given by the flow equations
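dx/dt = σ(y - x)
dy/dt = x(ρ - z) - y
dz/dt = xy - βz

where x is the amplitude of the stream function, y and z are the two temperature variables, σ is the Prandtl number, ρ is the Rayleigh number, and β is the geometric factor set by the aspect ratio of the convection rolls.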

The Butterfly

Lorenz finally had a 3-variable dynamical system that displayed chaos.  Moreover, it had a three-dimensional state space that could be visualized directly.  He ran his simulations, exploring the shape of the trajectories in three-dimensional state space for a wide range of initial conditions, and the trajectories did indeed always settle down to restricted regions of state space.  They relaxed in all cases to a sort of surface that was elegantly warped, with wing-like patterns like a butterfly, as the state point of the system followed its dynamics through time.  The attractor of the Lorenz equations was strange.  Later, in 1971, the Belgian-French mathematical physicist David Ruelle (1935 – ), together with Floris Takens, named this type of object a “strange attractor”, and the name has become a standard part of the language of the theory of chaos.

The first graphical representation of the butterfly attractor is shown in Fig. 1 drawn by Lorenz for his 1963 publication.

Fig. 1 Excerpts of the title, abstract and sections of Lorenz’ 1963 paper. His three-dimensional flow equations produce trajectories that relax onto a three-dimensional “strange attractor“.

Using our modern plotting ability, the 3D character of the butterfly is shown in Fig. 2.

Fig. 2 Edward Lorenz’ chaotic butterfly

A projection onto the x-y plane is shown in Fig. 3. In the full 3D state space the trajectories never overlap, but in the projection onto a 2D plane the trajectories appear to cross as they pass above and below each other.

Fig. 3 Projection of the butterfly onto the x-y plane centered on the origin.

The reason it is called a strange attractor is that all initial conditions relax onto the strange attractor, yet every trajectory on the strange attractor separates exponentially from neighboring trajectories, displaying the classic SIC property of chaos. So here is an elegant collection of trajectories that are certainly not just random noise, yet detailed prediction is still impossible. Deterministic chaos has significant structure, and generates beautiful patterns, without actual “randomness”.

Python Program

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Apr 16 07:38:57 2018

@author: nolte
Introduction to Modern Dynamics, 2nd edition (Oxford University Press, 2019)

Lorenz model of atmospheric turbulence
"""
import numpy as np
import matplotlib as mpl

import matplotlib.colors as colors
import matplotlib.cm as cmx

from scipy import integrate
from matplotlib import cm
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation

plt.close('all')

jet = plt.get_cmap('jet')   # avoid rebinding the name of the imported cm module
values = range(10)
cNorm  = colors.Normalize(vmin=0, vmax=values[-1])
scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet)

def solve_lorenz(N=12, angle=0.0, max_time=8.0, sigma=10.0, beta=8./3, rho=28.0):

    fig = plt.figure()
    ax = fig.add_axes([0, 0, 1, 1], projection='3d')
    ax.axis('off')

    # prepare the axes limits
    ax.set_xlim((-25, 25))
    ax.set_ylim((-35, 35))
    ax.set_zlim((5, 55))

    def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):
        """Compute the time-derivative of a Lorenz system."""
        x, y, z = x_y_z
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    # Choose random starting points, uniformly distributed from -15 to 15
    np.random.seed(1)
    x0 = -10 + 20 * np.random.random((N, 3))

    # Solve for the trajectories
    t = np.linspace(0, max_time, int(500*max_time))
    x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
                      for x0i in x0])

    # choose a different color for each trajectory
    # colors = plt.cm.viridis(np.linspace(0, 1, N))
    # colors = plt.cm.rainbow(np.linspace(0, 1, N))
    # colors = plt.cm.spectral(np.linspace(0, 1, N))
    colors = plt.cm.prism(np.linspace(0, 1, N))

    for i in range(N):
        x, y, z = x_t[i,:,:].T
        lines = ax.plot(x, y, z, '-', c=colors[i])
        plt.setp(lines, linewidth=1)

    ax.view_init(30, angle)
    plt.show()

    return t, x_t


t, x_t = solve_lorenz(angle=0, N=12)

plt.figure(2)
lines = plt.plot(t,x_t[1,:,0],t,x_t[1,:,1],t,x_t[1,:,2])
plt.setp(lines, linewidth=1)
lines = plt.plot(t,x_t[2,:,0],t,x_t[2,:,1],t,x_t[2,:,2])
plt.setp(lines, linewidth=1)
lines = plt.plot(t,x_t[10,:,0],t,x_t[10,:,1],t,x_t[10,:,2])
plt.setp(lines, linewidth=1)

To explore the parameter space of the Lorenz attractor, the key parameters to change are sigma (the Prandtl number), rho (the Rayleigh number) and beta, which are set in the argument list of solve_lorenz() in the Python code.

References

[1] E. N. Lorenz, The Essence of Chaos (The Jessie and John Danz Lectures). Seattle: University of Washington Press, 1993.

[2] E. N. Lorenz, “Deterministic Nonperiodic Flow,” Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130-141, 1963.

The Physics of U. S. Presidential Elections (why are so many elections so close?)

Well here is another squeaker! The 2020 U. S. presidential election was a dead heat. What is most striking is that half of the past six US presidential elections have been won by less than 1% of the votes cast in certain key battleground states. For instance, in 2000 the election was won in Florida by less than 1/100th of a percent of the total votes cast.

How can so many elections be so close? This question is especially intriguing when one considers the 2020 election, which should have been strongly asymmetric, because one of the two candidates had such serious character flaws. It is also surprising because the country is NOT split 50/50 between urban and rural populations (it’s more like 60/40). And the split of Democrat/Republican is about 33/29 — close, but not as close as the election. So how can the vote be so close so often? Is this a coincidence? Or something fundamental about our political system? The answer lies (partially) in nonlinear dynamics coupled with the libertarian tendencies of American voters.

Rabbits and Sheep

Elections are complex dynamical systems consisting of approximately 140 million degrees of freedom (the voters). Yet US elections are also surprisingly simple. They are dynamical systems with only 2 large political parties, and typically a very small third party.

Voters in a political party are not too different from species in an ecosystem. There are many population-dynamics models of things like rabbits and sheep that seek to understand the steady-state solutions when two species vie for the same food source (or two parties vie for the same votes). Depending on reproduction rates and competition payoff, one species can often drive the other species to extinction. Yet with fairly small modifications of the model parameters, it is often possible to find a steady-state solution in which both species live in harmony. This is a symbiotic solution to the population dynamics, perhaps because the rabbits help fertilize the grass for the sheep to eat, and the sheep keep away predators for the rabbits.

There are two interesting features to such a symbiotic population-dynamics model. First, because there is a stable steady-state solution, if there is a perturbation of the populations, for instance if the rabbits are culled by the farmer, then the two populations will slowly relax back to the original steady-state solution. For this reason, this solution is called a “stable fixed point”. Deviations away from the steady-state values experience an effective “restoring force” that moves the population values back to the fixed point. The second feature of these models is that the steady state values depend on the parameters of the model. Small changes in the model parameters then cause small changes in the steady-state values. In this sense, this stable fixed point is not fundamental–it depends on the parameters of the model.

Fig. 1 Dynamics of rabbits and sheep competing for the same resource (grass). For these parameters, one species dies off while the other thrives. A slight shift in parameters can turn the central saddle point into a stable fixed point where sheep and rabbits coexist in steady state. ([1] Reprinted from Introduction to Modern Dynamics (Oxford University Press, 2019) pg. 119)

But there are dynamical models which do have a stability that maintains steady values even as the model parameters shift. These models have negative feedback, like many dynamical systems, but if the negative feedback is connected to winner-take-all outcomes of game theory, then a robustly stable fixed point can emerge at precisely the threshold where such a winner would take all.

The Replicator Equation

The replicator equation provides a simple model for competing populations [2]. Despite its simplicity, it can model surprisingly complex behavior. The central equation is a simple growth model
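dxa/dt = xa ( fa - φ )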

where the growth rate depends on the fitness fa of the a-th species relative to the average fitness φ of all the species. The fitness is given by
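fa = pab xb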

where pab is the payoff matrix among the different species (implicit Einstein summation applies). The fitness is frequency dependent through the dependence on xb. The average fitness is
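φ = fa xa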

This model has a zero-sum rule that keeps the total population constant. Therefore, a three-species dynamics can be represented on a two-dimensional “simplex” where the three vertices are the pure populations for each of the species. The replicator equation can be applied easily to a three-party system: one simply defines a payoff matrix that sets the fitness of a party relative to the others.
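As a concrete illustration, here is a minimal numerical sketch of three-species replicator dynamics.  The payoff values in P are arbitrary placeholders chosen only to make the example run; they are not calibrated to any electorate.

import numpy as np
from scipy import integrate

# Illustrative (hypothetical) payoff matrix p_ab for three competing populations
P = np.array([[0.0, 0.5, 0.3],
              [0.4, 0.0, 0.2],
              [0.1, 0.6, 0.0]])

def replicator_deriv(x, t):
    """dx_a/dt = x_a (f_a - phi) with f_a = p_ab x_b and phi = f_a x_a."""
    f = P @ x              # frequency-dependent fitness of each population
    phi = x @ f            # average fitness over the whole population
    return x*(f - phi)     # growth relative to the average keeps sum(x) = 1

x0 = np.array([0.4, 0.35, 0.25])     # initial fractions on the simplex (sum to 1)
t = np.linspace(0, 50, 2001)
x_t = integrate.odeint(replicator_deriv, x0, t)
print(x_t[-1])                        # approximate steady-state fractions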

The Nonlinear Dynamics of Presidential Elections

Here we will consider the replicator equation with three political parties (Democratic, Republican and Libertarian). Even though the third party is never a serious contender, the extra degree of freedom provided by the third party helps to stabilize the dynamics between the Democrats and the Republicans.

It is already clear that an essentially symbiotic relationship is at play between Democrats and Republicans, because the elections are roughly 50/50. If this were not the case, then a winner-take-all dynamic would drive virtually everyone to one party or the other. Therefore, having 100% Democrats is actually unstable, as is 100% Republicans. When the populations get too far out of balance, they get too monolithic and too inflexible, then defections of members will occur to the other parties to rebalance the system. But this is just a general trend, not something that can explain the nearly perfect 50/50 vote of the 2020 election.

To create the ultra-stable fixed point at 50/50 requires an additional contribution to the replicator equation. This contribution must create a type of toggle switch that depends on the winner-take-all outcome of the election. If a Democrat wins 51% of the vote, they get 100% of the Oval Office. This extreme outcome then causes a back action on the electorate who is always afraid when one party gets too much power.

Therefore, there must be a shift in the payoff matrix when too many votes are going one way or the other. Because the winner-take-all threshold is at exactly 50% of the vote, this becomes an equilibrium point imposed by the payoff matrix. Deviations in the numbers of voters away from 50% cause a negative feedback that drives the steady-state populations back to 50/50. This means that the payoff matrix becomes a function of the number of voters of one party or the other. In the parlance of nonlinear dynamics, the payoff matrix becomes frequency dependent. This goes one step beyond the original replicator equation, where it was the population fitness that was frequency dependent but not the payoff matrix. Now the payoff matrix also becomes frequency dependent.

The frequency-dependent payoff matrix (in an extremely simple model of the election dynamics) takes on negative feedback between two of the species (here the Democrats and the Republicans). If these are the first and third species, then the payoff matrix becomes

where the feedback coefficient is

and where the population dependences on the off-diagonal terms guarantee that, as soon as one party gains an advantage, there is defection of voters to the other party. This establishes a 50/50 balance that is maintained even when the underlying parameters would predict a strongly asymmetric election.
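A minimal sketch of this kind of feedback is given below.  The functional form and the gain g are illustrative assumptions for demonstration only, not the payoff matrix used to generate Fig. 2; the idea is simply that the off-diagonal payoffs between the first and third species shift with the vote imbalance, so that whichever party pulls ahead becomes less attractive to join.

import numpy as np

g = 2.0    # hypothetical feedback gain

def payoff(x):
    """Illustrative frequency-dependent payoff: the imbalance x[0] - x[2]
    shifts the off-diagonal terms so that voters defect away from
    whichever of the two major parties is ahead."""
    k = g*(x[0] - x[2])                   # feedback coefficient
    P = np.array([[0.0, 0.5, 0.3],
                  [0.4, 0.0, 0.2],
                  [0.1, 0.6, 0.0]])       # placeholder payoffs, as above
    P[0, 2] -= k                          # leading party becomes less attractive
    P[2, 0] += k                          # trailing party becomes more attractive
    return P

def replicator_deriv_feedback(x, t):
    f = payoff(x) @ x
    return x*(f - x @ f)

Swapping replicator_deriv_feedback into the odeint call of the previous sketch pulls the vote back toward an even split between the first and third populations, even when the placeholder payoffs favor one of them.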

For instance, look at the dynamics in Fig. 2. For this choice of parameters, the replicator model predicts a 75/25 win for the Democrats. However, when the feedback is active, it forces the 50/50 outcome, despite the underlying advantage of the original parameters.

Fig. 2 Comparison of the stabilized election with a 50/50 outcome against the replicator dynamics without the feedback. For the parameters chosen here, there would be a 75/25 victory of the Democrats over the Republicans. However, when the feedback is in play, the votes balance out at 50/50.

There are several interesting features in this model. It may seem that the Libertarians are irrelevant because they never have many voters. But their presence plays a surprisingly important role. The Libertarians tend to stabilize the dynamics so that neither the Democrats nor the Republicans get all the votes. Also, there is a saddle point not too far from the pure Libertarian vertex. That Libertarian vertex is an attractor in this model, so under some extreme conditions, this could become a one-party system…maybe not Libertarian in that case, but possibly something more nefarious, of which history can provide many sad examples. It’s a word of caution.

Disclaimers and Caveats

No attempt has been made to actually model the US electorate. The parameters in the modified replicator equations are chosen purely for illustration purposes. This model illustrates a concept — that feedback in the payoff matrix can create an ultra-stable fixed point that is insensitive to changes in the underlying parameters of the model. This can possibly explain why so many US presidential elections are so tight.

Someone interested in doing actual modeling of US elections would need to modify the parameters to match known behavior of the voting registrations and voting records. The model presented here assumes a balanced negative feedback that ensures a 50/50 fixed point. This model is based on the aversion of voters to too much power in one party–an echo of the libertarian tradition in the country. A more sophisticated model would yield the fixed point as a consequence of the dynamics, rather than being a feature assumed in the model. In addition, nonlinearity could be added that would drive the vote off of the 50/50 point when the underlying parameters shift strongly enough. For instance, the 2008 election was not a close one, in part because the strong positive character of one of the candidates galvanized a large fraction of the electorate, driving the dynamics away from the 50/50 balance.

References

[1] D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time (Oxford University Press, 2019) 2nd Edition.

[2] Nowak, M. A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Cambridge, Mass., Harvard University Press.

The Ups and Downs of the Compound Double Pendulum

A chief principle of chaos theory states that even simple systems can display complex dynamics.  All that is needed for chaos, roughly, is for a system to have at least three dynamical variables plus some nonlinearity. 

A classic example of chaos is the driven damped pendulum.  This is a mass at the end of a massless rod driven by a sinusoidal perturbation.  The three variables are the angle, the angular velocity and the phase of the sinusoidal drive.  The nonlinearity is provided by the cosine function in the potential energy which is anharmonic for large angles.  However, the driven damped pendulum is not an autonomous system, because the drive is an external time-dependent function.  To find an autonomous system—one that persists in complex motion without any external driving function—one needs only to add one more mass to a simple pendulum to create what is known as a compound pendulum, or a double pendulum.

Daniel Bernoulli and the Discovery of Normal Modes

After the invention of the calculus by Newton and Leibniz, the first wave of calculus practitioners (Leibniz, Jakob and Johann Bernoulli and von Tschirnhaus) focused on static problems, like the functional form of the catenary (the shape of a hanging chain), or on constrained problems, like the brachistochrone (the path of least time for a mass under gravity to move between two points) and the tautochrone (the path of equal time).

The next generation of calculus practitioners (Euler, Johann and Daniel Bernoulli, and  D’Alembert) focused on finding the equations of motion of dynamical systems.  One of the simplest of these, that yielded the earliest equations of motion as well as the first identification of coupled modes, was the double pendulum.  The double pendulum, in its simplest form, is a mass on a rigid massless rod attached to another mass on a massless rod.  For small-angle motion, this is a simple coupled oscillator.

Fig. 1 The double pendulum as seen by Daniel Bernoulli, Johann Bernoulli and D’Alembert. This two-mass system played a central role in the earliest historical development of dynamical equations of motion.

Daniel Bernoulli, the son of Johann I Bernoulli, was the first to study the double pendulum, publishing a paper on the topic in 1733 in the proceedings of the Academy in St. Petersburg just as he returned from Russia to take up a post permanently in his home town of Basel, Switzerland.  Because he was a physicist first and mathematician second, he performed experiments with masses on strings to attempt to understand the qualitative as well as quantitative behavior of the two-mass system.  He discovered that for small motions there was a symmetric behavior that had a low frequency of oscillation and an antisymmetric motion that had a higher frequency of oscillation.  Furthermore, he recognized that any general motion of the double pendulum was a combination of the fundamental symmetric and antisymmetric motions.  This work by Daniel Bernoulli represents the discovery of normal modes of coupled oscillators.  It is also the first statement of the combination of motions that he would use later (1753) to express for the first time the principle of superposition. 

Superposition is one of the guiding principles of linear physical systems.  It provides a means for the solution of differential equations.  It explains the existence of eigenmodes and their eigenfrequencies.  It is the basis of all interference phenomena, whether classical like Young’s double-slit experiment or quantum like Schrödinger’s cat.  Today, superposition has taken center stage in quantum information sciences and helps define the spooky (and useful) properties of quantum entanglement.  Therefore, normal modes, composition of motion, superposition of harmonics on a musical string—these all date back to Daniel Bernoulli in the twenty years between 1733 and 1753.  (Daniel Bernoulli is also the originator of the Bernoulli principle that explains why birds and airplanes fly.)

Johann Bernoulli and the Equations of Motion

Daniel Bernoulli’s father was Johann I Bernoulli.  Daniel had been tutored by Johann, along with his friend Leonhard Euler, when Daniel was young.  But as Daniel matured as a mathematician, he and his father began to compete against each other in international mathematics competitions (which were very common in the early eighteenth century).  When Daniel beat his father in a competition sponsored by the French Academy, Johann threw Daniel out of his house and their relationship remained strained for the remainder of their lives.

Johann had a history of taking ideas from Daniel and never citing the source. For instance, when Johann published his work on equations of motion for masses on strings in 1742, he built on the work of his son Daniel from 1733 but never once mentioned it. Daniel, of course, was not happy.

In a letter dated 20 October 1742 that Daniel wrote to Euler, he said, “The collected works of my father are being printed, and I have just learned that he has inserted, without any mention of me, the dynamical problems I first discovered and solved (such as e.g. the descent of a sphere on a moving triangle; the linked pendulum, the center of spontaneous rotation, etc.).” And on 4 September 1743, when Daniel had finally seen his father’s works in print, he said, “The new mechanical problems are mostly mine, and my father saw my solutions before he solved the problems in his way …”. [2]

Daniel clearly has priority for the discovery of the normal modes of the linked (i.e. double or compound) pendulum, but Johann often would “improve” on Daniel’s work while giving no credit for the initial work. As a mathematician, Johann had a more rigorous approach and could delve a little deeper into the math. For this reason, it was Johann in 1742 who came closest to writing down differential equations of motion for multi-mass systems, though he fell just short. It was D’Alembert, only one year later, who first wrote down the differential equations of motion for systems of masses and extended them to the loaded string, for which he was the first to derive the wave equation. The D’Alembertian operator is today named after him.

Double Pendulum Dynamics

The general dynamics of the double pendulum are best obtained from Lagrange’s equations of motion. However, setting up the Lagrangian takes careful thought, because the kinetic energy of the second mass depends on its absolute speed which is dependent on the motion of the first mass from which it is suspended. The velocity of the second mass is obtained through vector addition of velocities.

Fig. 2. The dynamics of the double pendulum.

The potential energy of the system is
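U = -(M1 + M2) g R1 cos θ1 - M2 g R2 cos θ2

(with both angles measured from the vertical)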

so that the Lagrangian is
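L = ½(M1 + M2) R1² ω1² + ½ M2 R2² ω2² + M2 R1 R2 ω1 ω2 cos(θ1 - θ2) + (M1 + M2) g R1 cos θ1 + M2 g R2 cos θ2

where ω1 = dθ1/dt and ω2 = dθ2/dt.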

The partial derivatives are

and the time derivatives of the last two expressions are

Therefore, the equations of motion are

To get a sense of how this system behaves, we can make a small-angle approximation to linearize the equations to find the lowest-order normal modes.  In the small-angle approximation, the equations of motion become

where the determinant is

This quartic equation is quadratic in ω² and the quadratic solution is

This solution is still a little opaque, so taking the special case: R = R1 = R2 and M = M1 = M2 it becomes
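ω² = (2 ± √2) g/R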

There are two normal modes.  The low-frequency mode is symmetric as both masses swing (mostly) together, while the higher frequency mode is antisymmetric with the two masses oscillating against each other.  These are the motions that Daniel Bernoulli discovered in 1733.

It is interesting to note that if the two rods were rigidly locked together, so that the two angles were always equal, then the lowest frequency would be given by ω² = (3/5)g/R, within about 2% of the answer above but certainly not equal.  This tells us that there is a slightly different angular deflection for the second mass relative to the first.

Chaos in the Double Pendulum

The full nonlinear coupled dynamics are expressed in terms of four variables (θ1, θ2, ω1, ω2).  The dynamical equations are

These can be put into the normal form for a four-dimensional flow as

The numerical solution of these equations produces a complex interplay between the angle of the first mass and the angle of the second mass. Examples of trajectory projections in configuration space are shown in Fig. 3 for E = 1. The horizontal axis is the angle of the first mass, and the vertical axis is the angle of the second mass.

Fig. 3 Trajectory projections onto configuration space. The horizontal axis is the first mass angle, and the vertical axis is the second mass angle. All of these are periodic or nearly periodic orbits except for the one on the lower left. E = 1.

The dynamics in state space are four-dimensional, which is difficult to visualize directly. Using the technique of the Poincaré first-return map, the four-dimensional trajectories can be viewed as a two-dimensional plot where the trajectories pierce the Poincaré plane. Poincaré sections are shown in Fig. 4.

Fig. 4 Poincaré sections of the double pendulum in state space for increasing kinetic energy. Initial conditions are vertical in all. The horizontal axis is the angle of the second mass, and the vertical axis is the angular velocity of the second mass.

Python Code: DoublePendulum.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
DoublePendulum.py
Created on Oct 16 06:03:32 2020
"Introduction to Modern Dynamics" 2nd Edition (Oxford, 2019)
@author: nolte
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import time

plt.close('all')

E = 1.       # Try 0.8 to 1.5

def flow_deriv(x_y_z_w,tspan):
    x, y, z, w = x_y_z_w

    A = w**2*np.sin(y-x);
    B = -2*np.sin(x);
    C = z**2*np.sin(y-x)*np.cos(y-x);
    D = np.sin(y)*np.cos(y-x);
    EE = 2 - (np.cos(y-x))**2;
    
    FF = w**2*np.sin(y-x)*np.cos(y-x);
    G = -2*np.sin(x)*np.cos(y-x);
    H = 2*z**2*np.sin(y-x);
    I = 2*np.sin(y);
    JJ = (np.cos(y-x))**2 - 2;

    a = z
    b = w
    c = (A+B+C+D)/EE
    d = (FF+G+H+I)/JJ
    return[a,b,c,d]

repnum = 75

np.random.seed(1)
for reploop  in range(repnum):
    
    
    px1 = 2*(np.random.random((1))-0.499)*np.sqrt(E);
    py1 = -px1 + np.sqrt(2*E - px1**2);            

    xp1 = 0   # Try 0.1
    yp1 = 0   # Try -0.2
    
    x_y_z_w0 = [xp1, yp1, px1, py1]
    
    tspan = np.linspace(1,1000,10000)
    x_t = integrate.odeint(flow_deriv, x_y_z_w0, tspan)
    siztmp = np.shape(x_t)
    siz = siztmp[0]

    if reploop % 50 == 0:
        plt.figure(2)
        lines = plt.plot(x_t[:,0],x_t[:,1])
        plt.setp(lines, linewidth=0.5)
        plt.show()
        time.sleep(0.1)
        #os.system("pause")

    y1 = np.mod(x_t[:,0]+np.pi,2*np.pi) - np.pi
    y2 = np.mod(x_t[:,1]+np.pi,2*np.pi) - np.pi
    y3 = np.mod(x_t[:,2]+np.pi,2*np.pi) - np.pi
    y4 = np.mod(x_t[:,3]+np.pi,2*np.pi) - np.pi
    
    py = np.zeros(shape=(10*repnum,))
    yvar = np.zeros(shape=(10*repnum,))
    cnt = -1
    last = y1[1]
    for loop in range(2,siz):
        if (last < 0)and(y1[loop] > 0):
            cnt = cnt+1
            del1 = -y1[loop-1]/(y1[loop] - y1[loop-1])
            py[cnt] = y4[loop-1] + del1*(y4[loop]-y4[loop-1])
            yvar[cnt] = y2[loop-1] + del1*(y2[loop]-y2[loop-1])
            last = y1[loop]
        else:
            last = y1[loop]
     
    plt.figure(3)
    lines = plt.plot(yvar,py,'o',ms=1)
    plt.show()
    
plt.savefig('DPen')

You can change the energy E and also the initial conditions xp1 and yp1 in the code. The energy E is the initial kinetic energy imparted to the two masses. For a given initial condition, what happens to the periodic orbits as the energy E increases?

References

[1] Daniel Bernoulli, “Theoremata de oscillationibus corporum filo flexili connexorum et catenae verticaliter suspensae,” Academiae Scientiarum Imperialis Petropolitanae, 6, 1732/1733

[2] Truesdell B. The rational mechanics of flexible or elastic bodies, 1638-1788. (Turici: O. Fussli, 1960). (This rare and artistically produced volume, which is almost impossible to find today in any library, is one of the greatest books written about the early history of dynamics.)

Up-side-down Physics: Dynamic Equilibrium and the Inverted Pendulum

In the study of mechanics, the physics student moves through several stages in their education.  The first stage is the Newtonian physics of trajectories and energy and momentum conservation—there are no surprises there.  The second stage takes them to Lagrangians and Hamiltonians—here there are some surprises, especially for rigid body rotations.  Yet even at this stage, most problems have analytical solutions, and most of those solutions are exact.

Any street busker can tell you that an equally good (and more interesting) equilibrium point of a simple pendulum is when the bob is at the top.

It is only at the third stage that physics starts to get really interesting, and when surprising results with important ramifications emerge.  This stage is nonlinear physics.  Most nonlinear problems have no exact analytical solutions, but there are regimes where analytical approximations not only are possible but provide intuitive insights.  One of the best examples of this third stage is the dynamic equilibrium of Kapitsa’s up-side-down pendulum.

Piotr Kapitsa

Piotr Kapitsa (1894 – 1984) was a Russian physicist who received the Nobel Prize in physics in 1978 for his discovery in 1937 of superfluidity in liquid helium.  (He shared the 1978 prize with Penzias and Wilson who had discovered the cosmic microwave background.)  Superfluidity is a low-temperature hydrodynamic phenomenon that shares some aspects in common with superconductivity.  Kapitsa published his results in Nature in 1938 in the same issue as a paper by John Allen and Don Misener of Cambridge, but Kapitsa had submitted his paper 19 days before Allen and Misener and so got priority (and the Nobel).

During his career Kapitsa was a leading force in Russian physics, surviving Stalin’s great purge through force of character, and helping to establish the now-famous Moscow Institute of Physics and Technology.  However, surviving Stalin did not always mean surviving with freedom, and around 1950 Kapitsa was under effective house arrest because of his unwillingness to toe the party line.

In his enforced free time, to while away the hours, Kapitsa developed an ingenious analytical approach to the problem of dynamic equilibrium.  His toy example was the driven inverted pendulum. It is surprising how many great works have emerged from the time freed up by house arrest: Galileo finally had time to write his “Two New Sciences”  after his run-in with the Inquisition, and Fresnel was free to develop his theory of diffraction after he ill-advisedly joined a militia to support the Bourbon king during Napoleon’s return. (In our own time, with so many physicists in lock-down and working from home, it will be interesting to see what great theory emerges from the pandemic.)

Stability in the Inverted Driven Pendulum

The only stable static equilibrium of the simple pendulum is when the pendulum bob is at its lowest point.  However, any street busker can tell you that an equally good (and more interesting) equilibrium point is when the bob is at the top.  The caveat is that this “inverted” equilibrium of the pendulum requires active stabilization. 

If the inverted pendulum is a simple physical pendulum, like a meter stick that you balance on the tip of your finger, you know that you need to nudge the stick gently and continuously this way and that with your finger, in response to the tipping stick, to keep it upright.  It’s an easy trick, and almost everyone masters it as a child.  With the human as the drive force, this is an example of a closed-loop control system.  The tipping stick is observed visually by the human, and the finger position is adjusted to compensate for the tip.  On the other hand, one might be interested to find an “open-loop” system that does not require active feedback or even an operator.  In 1908, Andrew Stephenson suggested that induced stability could be achieved by the inverted pendulum if a drive force of sufficiently high frequency were applied [1].  But the proof of the stability remained elusive until Kapitsa followed Stephenson’s suggestion by solving the problem through a separation of time scales [2].

The Method of Separation of Time Scales

The driven inverted pendulum has the dynamical equation

where w0 is the natural angular frequency of small-amplitude oscillations, a is a drive amplitude (with units of frequency) and w is the drive angular frequency that is assumed to be much larger than the natural frequency.  The essential assumption that allows the problem to be separated according to widely separated timescales is that the angular displacement has a slow contribution that changes on the time scale of the natural frequency, and a fast contribution that changes on the time scale of the much higher drive frequency.  The assumed solution then looks like

This is inserted into the dynamical equation to yield

where we have used the approximation

So far this is simple.  The next step is the key step.  It assumes that the dynamical equation should also separate into fast and slow contributions.  But the last term of the sin q expansion has a product of fast and slow components.  The key insight is that a time average can be used to average over the fast contribution.  The separation of the dynamical equation is then

where the time average of the fast variables is only needed on the first line.  The second line is a simple driven harmonic oscillator with a natural frequency that depends on cos qslow and a driving amplitude that depends on sin qslow.  The classic solution to the second line for qfast is

This solution can then be inserted into the first line to yield

This describes a pendulum under an effective potential (for high drive frequency and no damping)

The first term is unstable at the inverted position, but the second term is actually a restoring force.  If the second term is stronger than the first, then a dynamic equilibrium can be achieved. This occurs when the driving amplitude is larger than a threshold value
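(a minimal reconstruction, assuming the drive enters the equation of motion as d²q/dt² = −(w0² + a·w·cos(wt))·sin q, with the angle q measured from the hanging position)

Veff(q) = −w0² cos q + (a²/4) sin²q

The inverted position q = π becomes a local minimum of Veff when −w0² + a²/2 > 0, that is, when the drive amplitude a exceeds sqrt(2)·w0.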

The effective potential for increasing drive amplitude looks like

Fig. 1 Effective potential as a function of angle and drive amplitude a (in units of w0)

When the drive amplitude is larger than sqrt(2), a slight dip forms in the unstable potential. The dip deepens with increasing drive amplitude, as does the oscillation frequency within the effective potential.
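A short sketch (not the code behind Fig. 1) that plots the reconstructed effective potential above with w0 = 1; the dip at the inverted position appears once a exceeds sqrt(2):

import numpy as np
from matplotlib import pyplot as plt

theta = np.linspace(-np.pi, 3*np.pi, 1000)
for a in [1.0, np.sqrt(2), 2.0, 2.5]:   # drive amplitude a in units of w0
    Veff = -np.cos(theta) + 0.25*a**2*np.sin(theta)**2   # reconstructed effective potential (w0 = 1)
    plt.plot(theta/np.pi, Veff, label='a = %.2f' % a)

plt.xlabel('Angle (units of pi)')
plt.ylabel('Effective potential (units of w0^2)')
plt.legend()
plt.show()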

Python Program: PenInverted.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
PenInverted.py
Created on Friday Sept 11 06:03:32 2020
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('PenInverted.py')

F = 133.5          # 30 to 140  (133.5)
delt = 0.000       # 0.000 to 0.01
w = 20          # 20
def flow_deriv(x_y_z,tspan):
    # x = pendulum angle, y = angular velocity, z = drive phase (z = w*t)
    x, y, z = x_y_z
    a = y
    b = -(1 + F*np.cos(z))*np.sin(x) - delt*y
    c = w
    return[a,b,c]
                
T = 2*np.pi/w

x0 = np.pi+0.3
v0 = 0.00
z0 = 0

x_y_z = [x0, v0, z0]

# Solve for the trajectories
t = np.linspace(0, 2000, 200000)
x_t = integrate.odeint(flow_deriv, x_y_z, t)
siztmp = np.shape(x_t)
siz = siztmp[0]

#y1 = np.mod(x_t[:,0]-np.pi,2*np.pi)-np.pi
y1 = x_t[:,0]
y2 = x_t[:,1]
y3 = x_t[:,2]    

plt.figure(1)
lines = plt.plot(t[0:2000],x_t[0:2000,0]/np.pi)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Angular Position')

plt.figure(2)
lines = plt.plot(t[0:1000],y2[0:1000])
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Speed')

repnum = 5000
px = np.zeros(shape=(2*repnum,))
xvar = np.zeros(shape=(2*repnum,))
cnt = -1
testwt = np.mod(t,T)-0.5*T;
last = testwt[1]
for loop in range(2,siz-1):
    if (last < 0)and(testwt[loop] > 0):
        cnt = cnt+1
        del1 = -testwt[loop-1]/(testwt[loop] - testwt[loop-1])
        px[cnt] = (y2[loop]-y2[loop-1])*del1 + y2[loop-1]
        xvar[cnt] = (y1[loop]-y1[loop-1])*del1 + y1[loop-1]
        last = testwt[loop]
    else:
        last = testwt[loop]
 
plt.figure(3)
lines = plt.plot(xvar[0:5000],px[0:5000],'ko',ms=1)
plt.show()
plt.title('First Return Map')

plt.figure(4)
lines = plt.plot(x_t[0:1000,0]/np.pi,y2[0:1000])
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Phase Space')

You can play with the parameters of this program to explore the physics of dynamic equilibrium. For instance, if the control parameter is slightly above the threshold (F = 32) at which a dip appears in the effective potential, the slow oscillation has a very low frequency, as shown in Fig. 2. The high-frequency drive can still be seen superposed on the slow oscillation of the pendulum, which oscillates just like an ordinary pendulum but up-side-down!

Fig. 2 Just above the pitchfork bifurcation the slow oscillation has a low frequency. F = 32, w = 20, w0 = 1

The oscillation frequency is a function of the drive amplitude. This is a classic signature of a nonlinear system: amplitude-frequency coupling. Well above the threshold (F = 100), the frequency of oscillation in the effective potential becomes much larger, as in Fig. 3.

Fig. 3 High above the transition. F = 100, w = 20, w0 = 1

When the drive amplitude is more than four times larger than the threshold value (F > 140), the equilibrium is destroyed, so there is an upper bound to the dynamic stabilization. This happens when the “slow” frequency becomes comparable to the drive frequency and the separation-of-time-scales approach is no longer valid.

You can also play with the damping (delt) to see what effect it has on thresholds and long-term behavior starting at delt = 0.001 and increasing it.

Other Examples of Dynamic Equilibrium

Every physics student learns that there is no stable electrostatic equilibrium. However, if charges are put into motion, then a time-averaged potential can be created that can confine a charged particle. This is the principle of the Paul Ion Trap, named after Wolfgang Paul who was awarded the Nobel Prize in Physics in 1989 for this invention.

One of the most famous examples of dynamic equilibrium is the pair of L4 and L5 Lagrange points. In the Sun-Jupiter system, these are the locations of the Trojan asteroids. These special Lagrange points are maxima (unstable equilibria) of the effective potential in a rotating coordinate system, but the Coriolis force stabilizes the motion around these points, trapping the asteroids in a dynamically stable equilibrium.

In economics, general equilibrium theory describes how prices that oscillate across multiple markets can stabilize macroeconomic performance.

A recent paper in Nature used the principle of dynamic equilibrium to levitate a layer of liquid on which toy boats can ride right-side-up and up-side-down. For an interesting video see Upside-down boat (link).

References

[1] Stephenson, A. (1908). “XX. On induced stability”. Philosophical Magazine, Series 6, 15: 233–236.

[2] Kapitza, P. L. (1951). “Dynamic stability of a pendulum when its point of suspension vibrates”. Soviet Phys. JETP, 21: 588–597.

Links

https://en.wikipedia.org/wiki/Kapitza%27s_pendulum

A detailed derivation of Kapitsa’s approach: https://elmer.unibas.ch/pendulum/upside.htm

The bifurcation threshold for the inverted pendulum is a pitchfork bifurcation https://elmer.unibas.ch/pendulum/bif.htm#pfbif

Henri Poincaré and his Homoclinic Tangle

Will the next extinction-scale asteroid strike the Earth in our lifetime? 

This existential question—the question of our continued existence on this planet—is rhetorical, because there are far too many bodies in our solar system to accurately calculate all trajectories of all asteroids. 

The solar system is what is known as an N-body problem.  And even the N is not well determined.  The asteroid belt alone has over a million extinction-sized asteroids, and there are tens of millions of smaller ones that could still do major damage to life on Earth if they hit.  To have a hope of calculating even one asteroid trajectory, do we ignore planetary masses that are too small?  What is too small?  What if we only consider the Sun, the Earth and Jupiter?  This is what Euler did in 1760, and he still had to make more assumptions.

Stability of the Solar System

Once Newton published his Principia, there was a pressing need to calculate the orbit of the Moon (see my blog post on the three-body problem).  This was important for navigation, because if the daily position of the moon could be known with sufficient accuracy, then ships would have a means to determine their longitude at sea.  However, the Moon, Earth and Sun are already a three-body problem, which still ignores the effects of Mars and Jupiter on the Moon’s orbit, not to mention the problem that the Earth is not a perfect sphere.  Therefore, to have any hope of success, toy systems that were stripped of all their obfuscating detail were needed.

Euler investigated simplified versions of the three-body problem around 1760, treating a body moving in the plane while attracted to two fixed centers of gravity, and he solved it using elliptic integrals. When the two fixed centers are viewed in a coordinate frame that rotates with the Sun-Earth system, this problem comes close to capturing many of the important details of the real system. In 1762 Euler tried another approach, called the restricted three-body problem, in which he considered a massless Moon attracted to a massive Earth orbiting a massive Sun, again all in the plane. Euler could not find general solutions to this problem, but he did stumble on an interesting special case in which the three bodies remain collinear throughout their motions in a rotating reference frame.

It was not the danger of asteroids that was the main topic of interest in those days, but the question whether the Earth itself is in a stable orbit and is safe from being ejected from the Solar system.  Despite steadily improving methods for calculating astronomical trajectories through the nineteenth century, this question of stability remained open.

Poincaré and the King Oscar Prize of 1889

Some years ago I wrote an article for Physics Today called “The Tangled Tale of Phase Space” that tracks the historical development of phase space. One of the chief players in that story was Henri Poincaré (1854 – 1912). Henri Poincaré was the Einstein before Einstein. He was a minor celebrity and was considered to be the greatest genius of his era. The event in his early career that helped launch him to stardom was a mathematics prize announced in 1885 to honor the 60th birthday of King Oscar II of Sweden. The challenge problem was as simple as it was profound: Prove rigorously whether the solar system is stable.

This was the old N-body problem that had so far resisted solution, but there was a sense at that time that recent mathematical advances might make the proof possible. There was even a rumor that Dirichlet had outlined such a proof, but no trace of the outline could be found in his papers after his death in 1859.

The prize competition was announced in Acta Mathematica, edited by the Swedish mathematician Gösta Mittag-Leffler. The challenge problem stated:

Given a system of arbitrarily many mass points that attract each other according to Newton’s law, under the assumption that no two points ever collide, try to find a representation of the coordinates of each point as a series in a variable that is some known function of time and for all of whose values the series converges uniformly.

The timing of the prize was perfect for Poincaré who was in his early thirties and just beginning to make his mark on mathematics. He was working on the theory of dynamical systems and was developing a new viewpoint that went beyond integrating single trajectories by focusing more broadly on whole classes of solutions. The question of the stability of the solar system seemed like a good problem to use to sharpen his mathematical tools. The general problem was still too difficult, so he began with Euler’s restricted three-body problem. He made steady progress, and along the way he invented an array of new techniques for studying the general properties of dynamical systems. One of these was the Poincaré section. Another was his set of integral invariants, one of which is recognized as the conservation of volume in phase space, also known as Liouville’s theorem, although it was Ludwig Boltzmann who first derived this result (see my Physics Today article). Eventually, he believed he had proven that the restricted three-body problem was stable.

By the time Poincaré had finished his prize submission, he had invented a new field of mathematical analysis, and the judges of the competition recognized it. Poincaré was named the winner, and his submission was prepared for publication in the Acta. However, Mittag-Leffler was a little concerned by a technical objection that had been raised, so he forwarded the comment to Poincaré for him to look at. At first, Poincaré thought the objection could easily be overcome, but as he worked on it and delved deeper, he had a sudden attack of panic. Trajectories near a saddle point did not converge. His proof of stability was wrong!

He alerted Mittag-Leffler to stop the presses, but it was too late. The first printing had been completed and review copies had already been sent to the judges. Mittag-Leffler immediately wrote to them asking for their return while Poincaré worked nonstop to produce a corrected copy. When he had completed his reanalysis, he had discovered a divergent feature of the solution to the dynamical problem near saddle points that is recognized today as the discovery of chaos. Poincaré paid for the reprinting of his paper out of his own pocket and (almost) all of the original printing was destroyed. This embarrassing moment in the life of a great mathematician was virtually forgotten until it was brought to light by the historian Barrow-Green in 1994 [3].

Poincaré is still a popular icon in France. Here is the Poincaré cafe in Paris.
A crater on the Moon is named after Poincaré.

Chaos in the Poincaré Return Map

Despite the fact that his conclusions on the stability of the 3-body problem flipped, Poincaré’s new tools for analyzing dynamical systems earned him the prize. He did not stop at his modified prize submission but continued working on systematizing his methods, publishing New Methods in Celestial Mechanics in several volumes through the 1890s. It was here that he fully explored what happens when a trajectory approaches a saddle point of dynamical equilibrium.

The third volume of a three-book series that grew from Poincaré’s award-winning paper

To visualize a periodic trajectory, Poincaré invented a mathematical tool called a “first-return map”, also known as a Poincaré section. It was a way of taking a higher dimensional continuous trajectory and turning it into a simple iterated discrete map. Therefore, one did not need to solve continuous differential equations; it was enough to just iterate the map. In this way, complicated periodic, or nearly periodic, behavior could be explored numerically. However, even armed with this weapon, Poincaré found that iterated maps became unstable as a trajectory that originated from a saddle point approached another equivalent saddle point. Because the dynamics are periodic, the outgoing and incoming trajectories are opposite ends of the same trajectory, repeated with 2-pi periodicity. Therefore, the saddle point is also called a homoclinic point, meaning that trajectories in the discrete map intersect with themselves. (If two different trajectories in the map intersect, that is called a heteroclinic point.) When Poincaré calculated the iterations around the homoclinic point, he discovered a wild and complicated pattern in which a trajectory intersected itself many times. Poincaré wrote:

[I]f one seeks to visualize the pattern formed by these two curves and their infinite number of intersections … these intersections form a kind of lattice work, a weave, a chain-link network of infinitely fine mesh; each of the two curves can never cross itself, but it must fold back on itself in a very complicated way so as to recross all the chain-links an infinite number of times .… One will be struck by the complexity of this figure, which I am not even attempting to draw. Nothing can give us a better idea of the intricacy of the three-body problem, and of all the problems of dynamics in general…

Poincaré’s first view of chaos.

This was the discovery of chaos! Today we call this “lattice work” the “homoclinic tangle”. He could not draw it with the tools of his day … but we can!

Chirikov’s Standard Map

The restricted 3-body problem is a bit more complicated than is needed to illustrate Poincaré’s homoclinic tangle. A much simpler model is a discrete map called Chirikov’s Map or the Standard Map. It describes the Poincaré section of a periodically kicked oscillator that rotates or oscillates in the angular direction with an angular momentum J.  The map has the simple form
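(written here to match the update order used in the code below, with K the kick strength)

J(n+1) = J(n) + K sin θ(n)
θ(n+1) = θ(n) + J(n+1)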

in which the angular momentum is updated first, and then the angle variable is updated with the new angular momentum. When plotted on the (θ,J) plane, the standard map produces a beautiful kaleidoscope of intertwined trajectories piercing the Poincaré plane, as shown in the figure below. The small dots are successive intersections of the higher-dimensional trajectory with the Poincaré plane. It is possible to trace successive points by starting very close to a saddle point (on the left) and connecting successive iterates with lines. These lines merge into the black trace in the figure that emerges along the unstable manifold of the saddle point on the left and approaches the saddle point on the right generally along the stable manifold.

Fig. Standard map for K = 0.97 at the transition to full chaos. The dark line is the trajectory of the unstable manifold emerging from the saddle point at (π, 0). Note the wild oscillations as it approaches the saddle point at (3π, 0).

However, as the successive iterates approach the new saddle point (which is really just the old saddle point because of periodicity), the trace crosses the stable manifold again and again, in ever wilder swings that diverge as it approaches the saddle point. This is just one trace. By calculating traces along all four stable and unstable manifolds and carrying them through to the saddle, a lattice work, or homoclinic tangle, emerges.

Two of those traces originate from the stable manifolds, so to calculate their contributions to the homoclinic tangle, one must run these traces backwards in time using the inverse Chirikov map. This is
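(reconstructed from the backward iteration in the code below)

θ(n) = θ(n+1) − J(n+1)
J(n) = J(n+1) − K sin θ(n)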

The four traces all intertwine at the saddle point in the figure below, with a zoom-in on the tangle in the following figure. This is the lattice work that Poincaré glimpsed in 1889 as he worked feverishly to correct the manuscript that won him the prize and established him as one of the preeminent mathematicians of Europe.

Fig. The homoclinic tangle caused by the folding of phase space trajectories as stable and unstable manifolds criss-cross in the Poincaré map at the saddle point. This was the figure that Poincaré could not attempt to draw because of its complexity.
Fig. A zoom-in of the homoclinic tangle at the saddle point as the stable and unstable manifolds create a lattice of intersections. This is the fundamental origin of chaos and the sensitivity to initial conditions (SIC) that make forecasting almost impossible in chaotic systems.

Python Code: StandmapHom.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
StandmapHom.py
Created on Sun Aug  2  2020
"Introduction to Modern Dynamics" 2nd Edition (Oxford, 2019)
@author: nolte
"""

import numpy as np
from matplotlib import pyplot as plt
from numpy import linalg as LA

plt.close('all')

eps = 0.97

np.random.seed(2)

plt.figure(1)

for eloop in range(0,100):

    rlast = 2*np.pi*(0.5-np.random.random())
    thlast = 4*np.pi*np.random.random()
    
    rplot = np.zeros(shape=(200,))
    thetaplot = np.zeros(shape=(200,))
    for loop in range(0,200):
        rnew = rlast + eps*np.sin(thlast)
        thnew = np.mod(thlast+rnew,4*np.pi)
        
        thetaplot[loop] = np.mod(thnew-np.pi,4*np.pi)     
        rtemp = np.mod(rnew + np.pi,2*np.pi)
        rplot[loop] = rtemp - np.pi
  
        rlast = rnew
        thlast = thnew
        
    plt.plot(np.real(thetaplot),np.real(rplot),'o',ms=0.2)
    plt.xlim(xmin=np.pi,xmax=4*np.pi)
    plt.ylim(ymin=-2.5,ymax=2.5)
        
plt.savefig('StandMap')

K = eps
eps0 = 5e-7

J = [[1,1+K],[1,1]]
w, v = LA.eig(J)

My = w[0]
Vu = v[:,0]     # unstable manifold
Vs = v[:,1]     # stable manifold

# Plot the unstable manifold
Hr = np.zeros(shape=(100,150))
Ht = np.zeros(shape=(100,150))
for eloop in range(0,100):
    
    eps = eps0*eloop

    roldu1 = eps*Vu[0]
    thetoldu1 = eps*Vu[1]
    
    Nloop = np.ceil(-6*np.log(eps0)/np.log(eloop+2))
    flag = 1
    cnt = 0
    
    while flag==1 and cnt < Nloop:
        
        ru1 = roldu1 + K*np.sin(thetoldu1)
        thetau1 = thetoldu1 + ru1
        
        roldu1 = ru1
        thetoldu1 = thetau1
        
        if thetau1 > 4*np.pi:
            flag = 0
            
        Hr[eloop,cnt] = roldu1
        Ht[eloop,cnt] = thetoldu1 + 3*np.pi
        cnt = cnt+1
    
x = Ht[0:99,12] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[0:99,12]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[5:39,15] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[5:39,15]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[12:69,16] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[12:69,16]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[15:89,17] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[15:89,17]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[30:99,18] - 2*np.pi
x2 = 6*np.pi - x
y = Hr[30:99,18]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

# Plot the stable manifold
del Hr, Ht
Hr = np.zeros(shape=(100,150))
Ht = np.zeros(shape=(100,150))
#eps0 = 0.03
for eloop in range(0,100):
    
    eps = eps0*eloop

    roldu1 = eps*Vs[0]
    thetoldu1 = eps*Vs[1]
    
    Nloop = np.ceil(-6*np.log(eps0)/np.log(eloop+2))
    flag = 1
    cnt = 0
    
    while flag==1 and cnt < Nloop:
        
        thetau1 = thetoldu1 - roldu1
        ru1 = roldu1 - K*np.sin(thetau1)

        roldu1 = ru1
        thetoldu1 = thetau1
        
        if thetau1 > 4*np.pi:
            flag = 0
            
        Hr[eloop,cnt] = roldu1
        Ht[eloop,cnt] = thetoldu1
        cnt = cnt+1
    
x = Ht[0:79,12] + np.pi
x2 = 6*np.pi - x
y = Hr[0:79,12]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[4:39,15] + np.pi
x2 = 6*np.pi - x
y = Hr[4:39,15]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[12:69,16] + np.pi
x2 =  6*np.pi - x
y = Hr[12:69,16]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[15:89,17] + np.pi
x2 =  6*np.pi - x
y = Hr[15:89,17]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

del x,y
x = Ht[30:99,18] + np.pi
x2 =  6*np.pi - x
y = Hr[30:99,18]
y2 = -y
plt.plot(x,y,linewidth =0.75)
plt.plot(x2,y2,linewidth =0.75)

References

[1] D. D. Nolte, “The tangled tale of phase space,” Physics Today, vol. 63, no. 4, pp. 33-38, Apr (2010)

[2] M. C. Gutzwiller, “Moon-Earth-Sun: The oldest three-body problem,” Reviews of Modern Physics, vol. 70, no. 2, pp. 589-639, Apr (1998)

[3] Barrow-Green J. Oscar II’s Prize Competition and the Error in Poincaré’s Memoir on the Three Body Problem. Arch Hist Exact Sci 48: 107-131, 1994.

[4] Barrow-Green J. Poincaré and the three body problem. London Mathematical Society, 1997.

[5] https://the-moon.us/wiki/Poincar%C3%A9

[6] Poincaré H and Goroff DL. New methods of celestial mechanics … Edited and introduced by Daniel L. Goroff. New York: American Institute of Physics, 1993.

Physics in the Age of Contagion: Part 4. Fifty Shades of Immunity to COVID-19

This is the fourth installment in a series of blogs on the population dynamics of COVID-19. In my first blog I looked at a bifurcation physics model that held the possibility (and hope) that with sufficient preventive action the pandemic could have died out and spared millions. That hope was in vain.

What will it be like to live with COVID-19 as a constant factor of modern life for years to come?

In my second blog I looked at a two-component population dynamics model that showed the importance of locking down and not emerging too soon. It predicted that waiting only a few extra weeks before opening could have saved tens of thousands of lives. Unfortunately, because states like Texas and Florida opened too soon and refused to mandate the wearing of masks, thousands of lives were lost.

In my third blog I looked at a network physics model that showed the importance of rapid testing and contact tracing to remove infected individuals to push the infection rate low — not only to flatten the curve, but to drive it down. While most of the developed world is succeeding in achieving this, the United States is not.

In this fourth blog, I am looking at a simple mean-field model that shows what it will be like to live with COVID-19 as a constant factor of modern life for years to come. This is what will happen if the period of immunity to the disease is short and people who recover from the disease can get it again. Then the disease will never go away and the world will need to learn to deal with it. How different that world will look from the one we had just a year ago will depend on the degree of immunity that is acquired after infection, how long a vaccine will provide protection before booster shots are needed, and how many people will get the vaccine or will refuse it.

SIRS for SARS

COVID-19 is caused by a SARS coronavirus known as SARS-CoV-2. SARS stands for Severe Acute Respiratory Syndrome. There is a simple and well-established mean-field model for an infectious disease like SARS that infects individuals, from which they recover, but after some lag period they become susceptible again. This is called the SIRS model, standing for Susceptible-Infected-Recovered-Susceptible. This model is similar to the SIS model of my first blog, but it now includes a mean lifetime for the acquired immunity, after which an individual who has recovered from the infection becomes susceptible again. The bifurcation threshold is the same for the SIRS model as for the SIS model, but with SIRS there is a constant susceptible population. The mathematical flow equations for this model are
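(reconstructed from the flow_deriv function in the code below, with the susceptible fraction eliminated via s = 1 − i − r)

di/dt = β<k> i (1 − i − r) − μ i
dr/dt = μ i − ν r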

where i is the infected fraction, r is the recovered fraction, and 1 – i – r = s is the susceptible fraction. The infection rate for an individual who has k contacts is βk. The recovery rate is μ and the mean lifetime of acquired immunity after recovery is τlife = 1/ν. This model assumes that all individuals are equivalent (no predispositions) and there is no vaccine–only natural immunity that fades in time after recovery.

The population trajectories for this model are shown in Fig. 1. The figure on the left is a 3-simplex where every point in the triangle stands for a 3-tuple (i, r, s). Our own trajectory starts at the right bottom vertex and generates the green trajectory that spirals into the fixed point. The parameters are chosen to be roughly equivalent to what is known about the virus (but with big uncertainties in the model parameters). One of the key results is that the infection will oscillate over several years, settling into a steady state after about 4 years. Thereafter, there is a steady 3% infected population with 67% of the population susceptible and 30% recovered. The decay time for the immunity is assumed to be one year in this model. Note that the first peak in the infected numbers will come at about 1 year, or around March 2021. There is a second smaller peak (the graph on the right is on a vertical log scale) at about 4 years, or sometime in 2024.

Fig. 1 SIRS model for COVID-19 in which immunity acquired after recovery fades in time so an individual can be infected again. If immunity fades and there is never a vaccine, a person will have an 80% chance of getting the virus at least twice in their lifetime, and COVID will become the third highest cause of death in the US after heart disease and cancer.

Although the recovered fraction is around 30% for these parameters, it is important to understand that this is a dynamic equilibrium. If there is no vaccine, then any individual who was once infected can be infected again after about a year. So if they don’t get the disease in the first year, they still have about a 4% chance to get it every following year. In 50 years, a 20-year-old today would have almost a 90% chance of having been infected at least once and an 80% chance of having gotten it at least twice. In other words, if there is never a vaccine, and if immunity fades after each recovery, then almost everyone will eventually get the disease several times in their lifetime. Furthermore, COVID will become the third most likely cause of death in the US after heart disease (first) and cancer (second). The sad part of this story is that it all could have been avoided if the government leaders of several key nations, along with their populations, had behaved responsibly.
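As a rough check on the at-least-once figure above, assuming a constant 4% chance of infection per year over 50 years:

P(never infected) = (1 − 0.04)^50 ≈ 0.13,   so   P(infected at least once) ≈ 0.87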

The Asymmetry of Personal Cost under COVID

The nightly news in the US during the summer of 2020 shows endless videos of large parties, dense with people, mostly young, wearing no masks. This is actually understandable even though regrettable. It is because of the asymmetry of personal cost. Here is what that means …

On any given day, an individual who goes out and about in the US has only about a 0.01 percent chance of contracting the virus. In the entire year, there is only about a 3% chance that that individual will get the disease. And even if they get the virus, they only have a 2% chance of dying. So the actual danger per day per person is so minuscule that it is hard to understand why it is so necessary to wear a mask and socially distance. Therefore, if you go out and don’t wear a mask, almost nothing bad will happen to YOU. So why not? Why not screw the masks and just go out!

And this is why that’s such a bad idea: because if no-one wears a mask, then tens or hundreds of thousands of OTHERS will die.

This is the asymmetry of personal cost. By ignoring distancing, nothing is likely to happen to YOU, but thousands of OTHERS will die. How much of your own comfort are you willing to give up to save others? That is the existential question.

This year is the 75th anniversary of the end of WW II. During the war everyone rationed and recycled, not because they needed it for themselves, but because it was needed for the war effort. Almost no one hesitated back then. It was the right thing to do even though it cost personal comfort. There was a sense of community spirit and doing what was good for the country. Where is that spirit today? The COVID-19 pandemic is a war just as deadly as any war since WW II. There is a community need to battle it. All anyone has to do is wear a mask and behave responsibly. Is this such a high personal cost?

The Vaccine

All of this can change if a reliable vaccine can be developed. There is no guarantee that this can be done. For instance, there has never been a reliable vaccine for the common cold. A more sobering thought is that there has never been a vaccine for the closely related virus SARS-CoV-1 that broke out in 2003 in China but was less infectious. But the need is greater now, so there is reason for optimism that a vaccine can be developed that elicits the production of antibodies with a mean lifetime at least as long as for naturally-acquired immunity.

The SIRS model has the same bifurcation threshold as the SIS model that was discussed in a previous blog. If the infection rate can be made slower than the recovery rate, then the pandemic can be eliminated entirely. The threshold is
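(assuming the same form as the SIS threshold, written in terms of the parameters defined below)

β<k> = μ,   or equivalently   β<k>/μ = 1

so the epidemic dies out when β<k> falls below μ.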

The parameter μ, the recovery rate, is intrinsic and cannot be changed. The parameter β, the infection rate per contact, can be reduced by personal hygiene and wearing masks. The parameter <k>, the average number of contacts to a susceptible person, can be significantly reduced by vaccinating a large fraction of the population.

To simulate the effect of vaccination, the average <k> per person can be reduced at the time of vaccination. This lowers the average infection rate. The results are shown in Fig. 2 for the original dynamics, a vaccination of 20% of the populace, and a vaccination of 40% of the populace. For 20% vaccination, the epidemic is still above threshold, although the long-time infection is lower. For 40% of the population vaccinated, the disease falls below threshold and would decay away and vanish.

Fig. 2 Vaccination at 52 weeks can lower the infection cases (20% vaccinated) or eliminate them entirely (40% vaccinated). The vaccinations would need booster shots every year (if the decay time of immunity is one year).

In this model, the vaccination is assumed to decay at the same rate as naturally acquired immunity (one year), so booster shots would be needed every year. Getting 40% of the population vaccinated may be achievable. Roughly that fraction get yearly flu shots in the US, so the COVID vaccine could be added to the list. But at 40% it would still be necessary for everyone to wear face masks and socially distance until the pandemic fades away. Interestingly, if the 40% got vaccinated all on the same date (across the world), then the pandemic would be gone in a few months. Unfortunately, that’s unrealistic, so with a world-wide push to get 40% of the World’s population vaccinated within five years, it would take that long to eliminate the disease, taking us to 2025 before we could go back to the way things were in November of 2019. But that would take a world-wide vaccination blitz the likes of which the world has never seen.

Python Code: SIRS.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
SIRS.py
Created on Fri July 17 2020
D. D. Nolte, "Introduction to Modern Dynamics: 
    Chaos, Networks, Space and Time, 2nd Edition (Oxford University Press, 2019)
@author: nolte
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def tripartite(x,y,z):

    sm = x + y + z
    xp = x/sm
    yp = y/sm
    
    f = np.sqrt(3)/2
    
    y0 = f*xp
    x0 = -0.5*xp - yp + 1;
    
    lines = plt.plot(x0,y0)
    plt.setp(lines, linewidth=0.5)
    plt.plot([0, 1],[0, 0],'k',linewidth=1)
    plt.plot([0, 0.5],[0, f],'k',linewidth=1)
    plt.plot([1, 0.5],[0, f],'k',linewidth=1)
    plt.show()
    
print(' ')
print('SIRS.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y,tspan,mu,betap,nu):
        x, y = x_y
        
        return [-mu*x + betap*x*(1-x-y),mu*x-nu*y]
    
    x0 = [del1, del2]
    
    # Solve for the trajectories
    t = np.linspace(0, int(tlim), int(250*tlim))
    x_t = integrate.odeint(flow_deriv, x0, t, param)

    return t, x_t

 # rates per week
betap = 0.3;   # infection rate
mu = 0.2;      # recovery rate
nu = 0.02      # immunity decay rate

print('beta = ',betap)
print('mu = ',mu)
print('nu =',nu)
print('betap/mu = ',betap/mu)
          
del1 = 0.005         # initial infected
del2 = 0.005         # recovered

tlim = 600          # weeks (about 12 years)

param = (mu, betap, nu)    # flow parameters

t, y = solve_flow(param)
I = y[:,0]
R = y[:,1]
S = 1 - I - R

plt.figure(1)
lines = plt.semilogy(t,I,t,S,t,R)
plt.ylim([0.001,1])
plt.xlim([0,tlim])
plt.legend(('Infected','Susceptible','Recovered'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Weeks')
plt.ylabel('Fraction of Population')
plt.title('Population Dynamics for COVID-19')
plt.show()

plt.figure(2)    # note: plt.hold() was removed from matplotlib; figures hold multiple plots by default
for xloop in range(0,10):
    del1 = xloop/10.1 + 0.001
    del2 = 0.01

    tlim = 300
    param = (mu, betap, nu)    # flow parameters
    t, y = solve_flow(param)       
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
    
    tripartite(I,R,S);

for yloop in range(1,6):
    del1 = 0.001;
    del2 = yloop/10.1
    t, y = solve_flow(param)
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
    
    tripartite(I,R,S);
    
for loop in range(2,10):
    del1 = loop/10.1
    del2 = 1 - del1 - 0.01
    t, y = solve_flow(param)
    I = y[:,0]
    R = y[:,1]
    S = 1 - I - R
        
    tripartite(I,R,S);
    
plt.title('Simplex Plot of COVID-19 Pop Dynamics')
 
vac = [1, 0.8, 0.6]
for loop in vac:
               
    # Run the epidemic to the first peak
    del1 = 0.005
    del2 = 0.005
    tlim = 52
    param = (mu, betap, nu)
    t1, y1 = solve_flow(param)
    
    # Now vaccinate a fraction of the population
    st = np.size(t1)
    del1 = y1[st-1,0]
    del2 = y1[st-1,1]
    tlim = 400
    
    param = (mu, loop*betap, nu)
    t2, y2 = solve_flow(param)
    
    t2 = t2 + t1[st-1]
    
    tc = np.concatenate((t1,t2))
    yc = np.concatenate((y1,y2))
    
    I = yc[:,0]
    R = yc[:,1]
    S = 1 - I - R
    
    plt.figure(3)
    lines = plt.semilogy(tc,I,tc,S,tc,R)
    plt.ylim([0.001,1])
    plt.xlim([0,tlim])
    plt.legend(('Infected','Susceptible','Recovered'))
    plt.setp(lines, linewidth=0.5)
    plt.xlabel('Weeks')
    plt.ylabel('Fraction of Population')
    plt.title('Vaccination at 1 Year')
    plt.show()
    

Caveats and Disclaimers

No effort was made to match parameters to the actual properties of the COVID-19 pandemic. The SIRS model is extremely simplistic and can only show general trends because it homogenizes away all the important spatial heterogeneity of the disease across the cities and states of the country. If you live in a hot spot, this model says little about what you will experience locally. The decay of immunity is also a completely open question, and the decay rate is unknown. It is easy to modify the Python program to explore the effects of differing decay rates and vaccination fractions. The model can also be viewed as a single “compartment” that could be coupled to others to model local variations in parameters.

Timelines in the History and Physics of Dynamics (with links to primary texts)

These timelines in the History of Dynamics are organized along the Chapters in Galileo Unbound (Oxford, 2018). The book is about the physics and history of dynamics including classical and quantum mechanics as well as general relativity and nonlinear dynamics (with a detour down evolutionary dynamics and game theory along the way). The first few chapters focus on Galileo, while the following chapters follow his legacy, as theories of motion became more abstract, eventually to encompass the evolution of species within the same theoretical framework as the orbit of photons around black holes.

Galileo: A New Scientist

Galileo Galilei was the first modern scientist, launching a new scientific method that superseded, after one and a half millennia, Aristotle’s physics.  Galileo’s career began with his studies of motion at the University of Pisa that were interrupted by his move to the University of Padua and his telescopic discoveries of mountains on the moon and the moons of Jupiter.  Galileo became the first rock star of science, and he used his fame to promote the ideas of Copernicus and the Sun-centered model of the solar system.  But he pushed too far when he lampooned the Pope.  Ironically, Galileo’s conviction for heresy and his sentence to house arrest for the remainder of his life gave him the free time to finally finish his work on the physics of motion, which he published in Two New Sciences in 1638.

1543 Copernicus dies, publishes posthumously De Revolutionibus

1564    Galileo born

1581    Enters University of Pisa

1585    Leaves Pisa without a degree

1586    Invents hydrostatic balance

1588    Receives lectureship in mathematics at Pisa

1592    Chair of mathematics at University of Padua

1595    Theory of the tides

1595    Invents military and geometric compass

1596    Le Meccaniche and the principle of horizontal inertia

1600    Giordano Bruno burned at the stake

1601    Death of Tycho Brahe

1609    Galileo constructs his first telescope, makes observations of the moon

1610    Galileo discovers 4 moons of Jupiter, Starry Messenger (Sidereus Nuncius), appointed chief philosopher and mathematician of the Duke of Tuscany, moves to Florence, observes Saturn, Venus goes through phases like the moon

1611    Galileo travels to Rome, inducted into the Lyncean Academy, name “telescope” is first used

1611    Scheiner discovers sunspots

1611    Galileo meets Barberini, a cardinal

1611 Johannes Kepler, Dioptrice

1613    Letters on sunspots published by Lincean Academy in Rome

1614    Galileo denounced from the pulpit

1615    (April) Bellarmine writes an essay against Copernicus

1615    Galileo investigated by the Inquisition

1615    Writes Letter to Christina, but does not publish it

1615    (December) travels to Rome and stays at Tuscan embassy

1616    (January) Francesco Ingoli publishes essay against Copernicus

1616    (March) Decree against Copernicanism

1616    Galileo publishes theory of tides, Galileo meets with Pope Paul V, Copernicus’ book is banned, Galileo warned not to support the Copernican system, Galileo decides not to reply to Ingoli, Galileo proposes eclipses of Jupiter’s moons to determine longitude at sea

1618    Three comets appear, Grassi gives a lecture not hostile to Galileo

1618    Galileo, through Mario Guiducci, publishes scathing attack on Grassi

1619    Jesuit Grassi (Sarsi) publishes attack on Galileo concerning 3 comets

1619    Marina Gamba dies, Galileo legitimizes his son Vincenzio

1619 Kepler’s Laws, Epitome astronomiae Copernicanae.

1623    Barberini becomes Urban VIII, The Assayer published (response to Grassi)

1624    Galileo visits Rome and Urban VIII

1629    Birth of his grandson Galileo

1630    Death of Johannes Kepler

1632    Publication of the Dialogue Concerning the Two Chief World Systems, Galileo is indicted by the Inquisition (68 years old)

1633    (February) Travels to Rome

1633    Convicted, abjurs, house arrest in Rome, then Siena, then home to Arcetri

1638    Blind, publication of Two New Sciences

1642    Galileo dies (77 years old)

Galileo’s Trajectory

Galileo’s discovery of the law of fall and the parabolic trajectory began with early work on the physics of motion by predecessors like the Oxford Scholars, Tartaglia and the polymath Simon Stevin who dropped lead weights from the leaning tower of Delft three years before Galileo (may have) dropped lead weights from the leaning tower of Pisa.  The story of how Galileo developed his ideas of motion is described in the context of his studies of balls rolling on inclined plane and the surprising accuracy he achieved without access to modern timekeeping.

1583    Galileo notices isochronism of the pendulum

1588    Receives lectureship in mathematics at Pisa

1589 – 1592  Work on projectile motion in Pisa

1592    Chair of mathematics at University of Padua

1596    Le Meccaniche and the principle of horizontal inertia

1600    Guidobaldo shares technique of colored ball

1602    Proves isochronism of the pendulum (experimentally)

1604    First experiments on uniformly accelerated motion

1604    Wrote to Sarpi about the law of fall (s ∝ t²)

1607-1608  Identified trajectory as parabolic

1609    Velocity proportional to time

1632    Publication of the Dialogue Concerning the Two Chief World Systems, Galileo is indicted by the Inquisition (68 years old)

1636    Letter to Christina published in Augsburg in Latin and Italian

1638    Blind, publication of Two New Sciences

1641    Invented pendulum clock (in theory)

1642    Dies (77 years old)

On the Shoulders of Giants

Galileo’s parabolic trajectory launched a new approach to physics that was taken up by a new generation of scientists like Isaac Newton, Robert Hooke and Edmund Halley.  The English Newtonian tradition was adopted by ambitious French iconoclasts who championed Newton over their own Descartes.  Chief among these was Pierre Maupertuis, whose principle of least action was developed by Leonhard Euler and Joseph Lagrange into a rigorous new science of dynamics.  Along the way, Maupertuis became embroiled in a famous dispute that entangled the King of Prussia as well as the volatile Voltaire who was mourning the death of his mistress Emilie du Chatelet, the lone female French physicist of the eighteenth century.

1644    Descartes’ vortex theory of gravitation

1662    Fermat’s principle

1669 – 1690    Huygens expands on Descartes’ vortex theory

1687 Newton’s Principia

1698    Maupertuis born

1729    Maupertuis entered University in Basel.  Studied under Johann Bernoulli

1736    Euler publishes Mechanica sive motus scientia analytice exposita

1737   Maupertuis report on expedition to Lapland.  Earth is oblate.  Attacks Cassini.

1744    Maupertuis Principle of Least Action.  Euler Principle of Least Action.

1745    Maupertuis becomes president of Berlin Academy.  Paris Academy cancels his membership after a campaign against him by Cassini.

1746    Maupertuis principle of Least Action for mass

1751    Samuel König disputes Maupertuis’ priority

1756    Cassini dies.  Maupertuis reinstated in the French Academy

1759    Maupertuis dies

1759    du Chatelet’s French translation of Newton’s Principia published posthumously

1760    Euler 3-body problem (two fixed centers and coplanar third body)

1760-1761 Lagrange, Variational calculus (J. L. Lagrange, “Essai d’une nouvelle méthode pour déterminer les maxima et les minima des formules intégrales indéfinies,” Miscellanea Taurinensia, (1760-1761))

1762    Beginning of the reign of Catherine the Great of Russia

1763    Euler collinear 3-body problem

1765    Euler publishes Theoria motus corporum solidorum on rotational mechanics

1766    Euler returns to St. Petersburg

1766    Lagrange arrives in Berlin

1772    Lagrange equilateral 3-body problem, Essai sur le problème des trois corps, 1772, Oeuvres tome 6

1775    Beginning of the American War of Independence

1776    Adam Smith Wealth of Nations

1781    William Herschel discovers Uranus

1783    Euler dies in St. Petersburg

1787    United States Constitution written

1787    Lagrange moves from Berlin to Paris

1788    Lagrange, Méchanique analytique

1789    Beginning of the French Revolution

1799    Pierre-Simon Laplace Mécanique Céleste (1799-1825)

Geometry on My Mind

This history of modern geometry focuses on the topics that provided the foundation for the new visualization of physics.  It begins with Carl Gauss and Bernhard Riemann, who redefined geometry and identified the importance of curvature for physics.  Vector spaces, developed by Hermann Grassmann, Giuseppe Peano and David Hilbert, are examples of the kinds of abstract new spaces that are so important for modern physics, such as Hilbert space for quantum mechanics.  Fractal geometry developed by Felix Hausdorff later provided the geometric language needed to solve problems in chaos theory.

1629    Fermat described higher-dim loci

1637    Descartes’ Geometry

1649    van Schooten’s commentary on Descartes’ Geometry

1694    Leibniz uses word “coordinate” in its modern usage

1697    Johann Bernoulli shortest distance between two points on convex surface

1732    Euler geodesic equations for implicit surfaces

1748    Euler defines modern usage of function

1801    Gauss calculates orbit of Ceres

1807    Fourier analysis (published in 1822)

1807    Gauss arrives in Göttingen

1827    Karl Gauss establishes differential geometry of curved surfaces, Disquisitiones generales circa superficies curvas

1830    Bolyai and Lobachevsky publish on hyperbolic geometry

1834    Jacobi n-fold integrals and volumes of n-dim spheres

1836    Liouville-Sturm theorem

1838    Liouville’s theorem

1841    Jacobi determinants

1843    Arthur Cayley systems of n-variables

1843    Hamilton discovers quaternions

1844    Hermann Grassmann n-dim vector spaces, Die Lineale Ausdehnungslehre

1846    Julius Plücker System der Geometrie des Raumes in neuer analytischer Behandlungsweise

1848 Jacobi Vorlesungen über Dynamik

1848    “Vector” coined by Hamilton

1854    Riemann’s habilitation lecture

1861    Riemann n-dim solution of heat conduction

1868    Publication of Riemann’s Habilitation

1869    Christoffel and Lipschitz work on multiple dimensional analysis

1871    Betti refers to the n-ply of numbers as a “space”.

1871    Klein publishes on non-euclidean geometry

1872 Boltzmann distribution

1872    Jordan Essay on the geometry of n-dimensions

1872    Felix Klein’s “Erlangen Programme”

1872    Weierstrass’ Monster

1872    Dedekind cut

1872    Cantor paper on irrational numbers

1872    Cantor meets Dedekind

1872 Lipschitz derives mechanical motion as a geodesic on a manifold

1874    Cantor beginning of set theory

1877    Cantor one-to-one correspondence between the line and n-dimensional space

1881    Gibbs codifies vector analysis

1883    Cantor set and staircase Grundlagen einer allgemeinen Mannigfaltigkeitslehre

1884    Abbott publishes Flatland

1887    Peano vector methods in differential geometry

1890    Peano space filling curve

1891    Hilbert space filling curve

1887    Darboux vol. 2 treats dynamics as a point in d-dimensional space.  Applies concepts of geodesics for trajectories.

1898    Ricci-Curbastro Lessons on the Theory of Surfaces

1902    Lebesgue integral

1904    Hilbert studies integral equations

1904    von Koch snowflake

1906    Frechet thesis on square summable sequences as infinite dimensional space

1908    Schmidt Geometry in a Function Space

1910    Brouwer proof of dimensional invariance

1913    Hilbert space named by Riesz

1914    Hilbert space used by Hausdorff

1915    Sierpinski fractal triangle

1918    Hausdorff non-integer dimensions

1918    Weyl’s book Space, Time, Matter

1918    Fatou and Julia fractals

1920    Banach space

1927    von Neumann axiomatic form of Hilbert Space

1935    Frechet full form of Hilbert Space

1967    Mandelbrot coast of Britain

1982    Mandelbrot’s book The Fractal Geometry of Nature

The Tangled Tale of Phase Space

Phase space is the central visualization tool used today to study complex systems.  The chapter describes the origins of phase space with the work of Joseph Liouville and Carl Jacobi that was later refined by Ludwig Boltzmann and Rudolf Clausius in their attempts to define and explain the subtle concept of entropy.  The turning point in the history of phase space was when Henri Poincaré used phase space to solve the three-body problem, uncovering chaotic behavior in his quest to answer questions on the stability of the solar system.  Phase space was established as the central paradigm of statistical mechanics by JW Gibbs and Paul Ehrenfest.

1804    Jacobi born (1804 – 1851) in Potsdam

1804    Napoleon I Emperor of France

1805    William Rowan Hamilton born (1805 – 1865)

1807    Thomas Young describes “Energy” in his Course on Natural Philosophy (Vol. 1 and Vol. 2)

1808    Beethoven premieres his Fifth Symphony

1809    Joseph Liouville born (1809 – 1882)

1821    Hermann Ludwig Ferdinand von Helmholtz born (1821 – 1894)

1824    Carnot published Reflections on the Motive Power of Fire

1834    Jacobi n-fold integrals and volumes of n-dim spheres

1834-1835       Hamilton publishes his principle (1834, 1835).

1836    Liouville-Sturm theorem

1837    Queen Victoria begins her reign as Queen of England

1838    Liouville develops his theorem on products of n differentials satisfying certain first-order differential equations.  This becomes the classic reference to Liouville’s Theorem.

1847    Helmholtz  Conservation of Energy (force)

1849    Thomson makes first use of “Energy” (From reading Thomas Young’s lecture notes)

1850    Clausius establishes First law of Thermodynamics: Internal energy. Second law:  Heat cannot flow unaided from cold to hot.  Not explicitly stated as first and second laws

1851    Thomson names Clausius’ First and Second laws of Thermodynamics

1852    Thomson describes general dissipation of the universe (“energy” used in title)

1854    Thomson defined absolute temperature.  First mathematical statement of 2nd law.  Restricted to reversible processes

1854    Clausius stated Second Law of Thermodynamics as inequality

1857    Clausius constructs kinetic theory, Mean molecular speeds

1858    Clausius defines mean free path, Molecules have finite size. Clausius assumed that all molecules had the same speed

1860    Maxwell publishes first paper on kinetic theory. Distribution of speeds. Derivation of gas transport properties

1865    Loschmidt size of molecules

1865    Clausius names entropy

1868    Boltzmann adds (Boltzmann) factor to Maxwell distribution

1872    Boltzmann transport equation and H-theorem

1876    Loschmidt reversibility paradox

1877    Boltzmann  S = k logW

1890    Poincare: Recurrence Theorem. Recurrence paradox with Second Law (1893)

1896    Zermelo criticizes Boltzmann

1896    Boltzmann posits direction of time to save his H-theorem

1898    Boltzmann Vorlesungen über Gas Theorie

1905    Boltzmann kinetic theory of matter in Encyklopädie der mathematischen Wissenschaften

1906    Boltzmann dies

1910    Paul Hertz uses “Phase Space” (Phasenraum)

1911    Ehrenfest’s article in Encyklopädie der mathematischen Wissenschaften

1913    A. Rosenthal writes the first paper using the phrase “Phasenraum”, combining the work of Boltzmann and Poincaré. “Beweis der Unmöglichkeit ergodischer Gassysteme” (Ann. D. Physik, 42, 796 (1913))

1913    Plancherel, “Beweis der Unmöglichkeit ergodischer mechanischer Systeme” (Ann. D. Physik, 42, 1061 (1913)).  Also uses “Phasenraum”.

The Lens of Gravity

Gravity provided the backdrop for one of the most important paradigm shifts in the history of physics.  Prior to Albert Einstein’s general theory of relativity, trajectories were paths described by geometry.  After the theory of general relativity, trajectories are paths caused by geometry.  This chapter explains how Einstein arrived at his theory of gravity, relying on the space-time geometry of Hermann Minkowski, whose work he had originally harshly criticized.  The confirmation of Einstein’s theory was one of the dramatic high points in 20th century history of physics when Arthur Eddington journeyed to an island off the coast of Africa to observe stellar deflections during a solar eclipse.  If Galileo was the first rock star of physics, then Einstein was the first worldwide rock star of science.

1697    Johann Bernoulli was the first to find the solution to the shortest path between two points on a curved surface.

1728    Euler found the geodesic equation.

1783    The pair 40 Eridani B/C was discovered by William Herschel on 31 January

1783    John Michell explains that an infalling object would travel faster than the speed of light

1796    Laplace describes “dark stars” in Exposition du système du monde

1827    The first orbit of a binary star computed by Félix Savary for the orbit of Xi Ursae Majoris.

1827    Gauss curvature: Theorema Egregium

1844    Bessel notices periodic displacement of Sirius with period of half a century

1844    The name “geodesic line” is attributed to Liouville.

1845    Buys Ballot used musicians with absolute pitch for the first experimental verification of the Doppler effect

1854    Riemann’s Habilitationsschrift

1862    Discovery of Sirius B (a white dwarf)

1868    Darboux suggested motions in n-dimensions

1872    Lipschitz first to apply Riemannian geometry to the principle of least action.

1895    Hilbert arrives in Göttingen

1902    Minkowski arrives in Göttingen

1905    Einstein’s miracle year

1906    Poincaré describes Lorentz transformations as rotations in 4D

1907    Einstein has “happiest thought” in November

1907    Einstein’s relativity review in Jahrbuch

1908    Minkowski’s Space and Time lecture

1908    Einstein appointed to unpaid position at University of Bern

1909    Minkowski dies

1909    Einstein appointed associate professor of theoretical physics at U of Zürich

1910    40 Eridani B was discovered to be of spectral type A (white dwarf)

1910    Size and mass of Sirius B determined (heavy and small)

1911    Laue publishes first textbook on relativity theory

1911    Einstein accepts position at Prague

1911    Einstein goes to the limits of special relativity applied to gravitational fields

1912    Einstein’s two papers establish a scalar field theory of gravitation

1912    Einstein moves from Prague to ETH in Zürich in fall.  Begins collaboration with Grossmann.

1913    Einstein–Grossmann “Entwurf” paper

1914    Adams publishes spectrum of 40 Eridani B

1915    Sirius B determined to be also a low-luminosity type A white dwarf

1915    Einstein completes his general theory of relativity

1916    Density of 40 Eridani B by Ernst Öpik

1916    Schwarzschild paper

1916    Einstein publishes theory of gravitational waves

1919    Eddington expedition to Principe

1920    Eddington paper on deflection of light by the sun

1922    Willem Luyten coins phrase “white dwarf”

1924    Eddington found a set of coordinates that eliminated the singularity at the Schwarzschild radius

1926    R. H. Fowler publishes paper on degenerate matter and composition of white dwarfs

1931    Chandrasekhar calculated the limit for collapse to white dwarf stars at 1.4MS

1933    Georges Lemaitre states the coordinate singularity was an artefact

1934    Walter Baade and Fritz Zwicky proposed the existence of the neutron star only a year after the discovery of the neutron by Sir James Chadwick.

1939    Oppenheimer and Snyder showed ultimate collapse of a 3MS  “frozen star”

1958    David Finkelstein paper

1965    Antony Hewish and Samuel Okoye discovered “an unusual source of high radio brightness temperature in the Crab Nebula”. This source turned out to be the Crab Nebula neutron star that resulted from the great supernova of 1054.

1967    Jocelyn Bell and Antony Hewish discovered regular radio pulses from CP 1919. This pulsar was later interpreted as an isolated, rotating neutron star.

1967    Wheeler’s “black hole” talk

1974    Joseph Taylor and Russell Hulse discovered the first binary pulsar, PSR B1913+16, which consists of two neutron stars (one seen as a pulsar) orbiting around their center of mass.

2015    LIGO detects gravitational waves on Sept. 14 from the merger of two black holes

2017    LIGO detects the merger of two neutron stars

On the Quantum Footpath

The concept of the trajectory of a quantum particle almost vanished in the battle between Werner Heisenberg’s matrix mechanics and Erwin Schrödinger’s wave mechanics.  It took Niels Bohr and his complementarity principle of wave-particle duality to cede back some reality to quantum trajectories.  However, Schrödinger and Einstein were not convinced and conceived of quantum entanglement to refute the growing acceptance of the Copenhagen Interpretation of quantum physics.  Schrödinger’s cat was meant to be an absurdity, but ironically it has become a central paradigm of practical quantum computers.  Quantum trajectories took on new meaning when Richard Feynman constructed quantum theory based on the principle of least action, inventing his famous Feynman Diagrams to help explain quantum electrodynamics.

1885    Balmer formula for the hydrogen spectral series

1897    J. J. Thomson discovered the electron

1904    Thomson plum pudding model of the atom

1911    Bohr PhD thesis filed. Studies on the electron theory of metals.  Visited England.

1911    Rutherford nuclear model

1911    First Solvay conference

1911    “ultraviolet catastrophe” coined by Ehrenfest

1913    Bohr combined Rutherford’s nuclear atom with Planck’s quantum hypothesis: 1913 Bohr model

1913    Ehrenfest adiabatic hypothesis

1914-1916       Bohr at Manchester with Rutherford

1916    Bohr appointed Chair of Theoretical Physics at University of Copenhagen: a position that was made just for him

1916    Schwarzschild and Epstein introduce action-angle coordinates into quantum theory

1920    Heisenberg enters University of Munich to obtain his doctorate

1920    Bohr’s Correspondence principle: Classical physics for large quantum numbers

1921    Bohr Founded Institute of Theoretical Physics (Copenhagen)

1922-1923       Heisenberg studies with Born, Franck and Hilbert at Göttingen while Sommerfeld is in the US on sabbatical.

1923    Heisenberg Doctorate.  The exam does not go well.  Unable to derive the resolving power of a microscope in response to question by Wien.  Becomes Born’s assistant at Göttingen.

1924    Heisenberg visits Niels Bohr in Copenhagen (and met Einstein?)

1924    Heisenberg Habilitation at Göttingen on anomalous Zeeman

1924 – 1925    Heisenberg worked with Bohr in Copenhagen, returned summer of 1925 to Göttingen

1924    Pauli exclusion principle and state occupancy

1924    de Broglie hypothesis extended wave-particle duality to matter

1924    Bohr predicted Hafnium (72)

1924    Kronig’s proposal for electron self spin

1924    Bose–Einstein statistics

1925    Heisenberg paper on quantum mechanics

1925    Dirac, reading proof from Heisenberg, recognized the analogy of noncommutativity with Poisson brackets and the correspondence with Hamiltonian mechanics.

1925    Uhlenbeck and Goudsmit: spin

1926    Born, Heisenberg, Kramers: virtual oscillators at transition frequencies: Matrix mechanics (alternative to Bohr-Kramers-Slater 1924 model of orbits).  Heisenberg was Born’s student at Göttingen.

1926    Schrödinger wave mechanics

1927    de Broglie hypothesis confirmed by Davisson and Germer

1927    Complementarity by Bohr: wave-particle duality “Evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.”

1927    Heisenberg uncertainty principle (Heisenberg was in Copenhagen 1926 – 1927)

1927    Solvay Conference in Brussels

1928    Heisenberg to University of Leipzig

1928    Dirac relativistic QM equation

1929    de Broglie Nobel Prize

1930    Solvay Conference

1932    Heisenberg Nobel Prize

1932    von Neumann operator algebra

1933    Dirac Lagrangian form of QM (basis of Feynman path integral)

1933    Schrödinger and Dirac Nobel Prize

1935    Einstein, Podolsky and Rosen EPR paper

1935    Bohr’s response to Einstein’s “EPR” paradox

1935    Schrödinger’s cat

1939    Feynman graduates from MIT

1941    Heisenberg (head of German atomic project) visits Bohr in Copenhagen

1942    Feynman PhD at Princeton, “The Principle of Least Action in Quantum Mechanics”

1942 – 1945    Manhattan Project, Bethe-Feynman equation for fission yield

1943    Bohr escapes to Sweden in a fishing boat.  Went on to England secretly.

1945    Pauli Nobel Prize

1945    Death of Feynman’s wife Arline (married 4 years)

1945    Fall, Feynman arrives at Cornell ahead of Hans Bethe

1947    Shelter Island conference: Lamb Shift, did Kramers give a talk suggesting that infinities could be subtracted?

1947    Fall, Dyson arrives at Cornell

1948    Pocono Manor, Pennsylvania, troubled unveiling of path integral formulation and Feynman diagrams, Schwinger’s master presentation

1948    Feynman and Dirac. Summer drive across the US with Dyson

1949    Dyson joins IAS as a postdoc, trains a cohort of theorists in Feynman’s technique

1949    Karplus and Kroll first g-factor calculation

1950    Feynman moves to Cal Tech

1965    Schwinger, Tomonaga and Feynman Nobel Prize

1967    Hans Bethe Nobel Prize

From Butterflies to Hurricanes

Half a century after Poincaré first glimpsed chaos in the three-body problem, the great Russian mathematician Andrey Kolmogorov presented a sketch of a theorem that could prove that orbits are stable.  In the hands of Vladimir Arnold and Jürgen Moser, this became the KAM theory of Hamiltonian chaos.  This chapter shows how KAM theory fed into topology in the hands of Stephen Smale and helped launch the new field of chaos theory.  Edward Lorenz discovered chaos in numerical models of atmospheric weather and discovered the eponymous strange attractor.  Mathematical aspects of chaos were further developed by Mitchell Feigenbaum studying bifurcations in the logistic map that describes population dynamics.

1760    Euler 3-body problem (two fixed centers and coplanar third body)

1763    Euler colinear 3-body problem

1772    Lagrange equilateral 3-body problem

1881-1886       Poincaré mémoires “Sur les courbes définies par une équation différentielle”

1890    Poincaré “Sur le problème des trois corps et les équations de la dynamique”. First-return map, Poincaré recurrence theorem, stable and unstable manifolds

1892 – 1899    Poincare New Methods in Celestial Mechanics

1892    Lyapunov The General Problem of the Stability of Motion

1899    Poincare homoclinic trajectory

1913    Birkhoff proves Poincaré’s last geometric theorem, a special case of the three-body problem.

1927    van der Pol and van der Mark

1937    Coarse systems, Andronov and Pontryagin

1938    Morse theory

1942    Hopf bifurcation

1945    Cartwright and Littlewood study the van der Pol equation (Radar during WWII)

1954    Kolmogorov A. N., On conservation of conditionally periodic motions for a small change in Hamilton’s function.

1960    Lorenz: 12 equations

1962    Moser On Invariant Curves of Area-Preserving Mappings of an Annulus.

1963    Arnold Small denominators and problems of the stability of motion in classical and celestial mechanics

1963    Lorenz: 3 equations

1964    Arnold diffusion

1965    Smale’s horseshoe

1969    Chirikov standard map

1971    Ruelle-Takens (Ruelle coins phrase “strange attractor”)

1972    “Butterfly Effect” given for Lorenz’ talk (by Philip Merilees)

1975    Gollub-Swinney observe route to turbulence along lines of Ruelle

1975    Yorke coins “chaos theory”

1976    Robert May writes review article of the logistic map

1977    New York conference on bifurcation theory

1987    James Gleick Chaos: Making a New Science

Darwin in the Clockworks

The preceding timelines related to the central role played by families of trajectories in phase space in explaining the time evolution of complex systems.  These ideas are extended to explore the history and development of the theory of natural evolution by Charles Darwin.  Darwin had many influences, including ideas from Thomas Malthus in the context of economic dynamics.  After Darwin, the ideas of evolution matured to encompass broad topics in evolutionary dynamics and the emergence of the idea of fitness landscapes and game theory driving the origin of new species.  The rise of genetics with Gregor Mendel supplied a firm foundation for molecular evolution, leading to the molecular clock of Linus Pauling and the replicator dynamics of Richard Dawkins.

1202    Fibonacci

1766    Thomas Robert Malthus born

1776    Adam Smith The Wealth of Nations

1798    Malthus “An Essay on the Principle of Population”

1817    Ricardo Principles of Political Economy and Taxation

1838    Cournot early equilibrium theory in duopoly

1848    John Stuart Mill Principles of Political Economy

1848    Karl Marx Communist Manifesto

1859    Darwin Origin of Species

1867    Karl Marx Das Kapital

1871    Darwin Descent of Man, and Selection in Relation to Sex

1871    Jevons Theory of Political Economy

1871    Menger Principles of Economics

1874    Walras Éléments d’économie politique pure, or Elements of Pure Economics (1954)

1890    Marshall Principles of Economics

1908    Hardy constant genetic variance

1910    Brouwer fixed point theorem

1910    Alfred J. Lotka autocatalytic chemical reactions

1913    Zermelo determinacy in chess

1922    Fisher dominance ratio

1922    Fisher mutations

1925    Lotka predator-prey in biomathematics

1926    Vito Volterra published same equations independently

1927    JBS Haldane (1892—1964) mutations

1928    von Neumann proves the minimax theorem

1930    Fisher ratio of sexes

1932    Wright Adaptive Landscape

1932    Haldane The Causes of Evolution

1933    Kolmogorov Foundations of the Theory of Probability

1934    Rudolph Carnap The Logical Syntax of Language

1936    John Maynard Keynes, The General Theory of Employment, Interest and Money

1936    Kolmogorov generalized predator-prey systems

1938    Borel symmetric payoff matrix

1942    Sewall Wright    Statistical Genetics and Evolution

1943    McCulloch and Pitts A Logical Calculus of Ideas Immanent in Nervous Activity

1944    von Neumann and Morgenstern Theory of Games and Economic Behavior

1950    Prisoner’s Dilemma simulated at Rand Corporation

1950    John Nash Equilibrium points in n-person games and The Bargaining Problem

1951    John Nash Non-cooperative Games

1952    McKinsey Introduction to the Theory of Games (first textbook)

1953    John Nash Two-Person Cooperative Games

1953    Watson and Crick DNA

1955    Braithwaite’s Theory of Games as a Tool for the Moral Philosopher

1961    Lewontin Evolution and the Theory of Games

1962    Patrick Moran The Statistical Processes of Evolutionary Theory

1962    Linus Pauling molecular clock

1968    Motoo Kimura  neutral theory of molecular evolution

1972    Maynard Smith introduces the evolutionarily stable strategy (ESS)

1972    Gould and Eldredge punctuated equilibrium

1973    Maynard Smith and Price The Logic of Animal Conflict

1973    Black Scholes

1977    Eigen and Schuster The Hypercycle

1978    Replicator equation (Taylor and Jonker)

1982    Hopfield network

1982    John Maynard Smith Evolution and the Theory of Games

1984    R. Axelrod The Evolution of Cooperation

The Measure of Life

This final topic extends the ideas of dynamics into abstract spaces of high dimension to encompass the idea of a trajectory of life.  Health and disease become dynamical systems defined by all the proteins and nucleic acids that comprise the physical self.  Concepts from network theory, autonomous oscillators and synchronization contribute to this viewpoint.  Healthy trajectories are like stable limit cycles in phase space, but disease can knock the system trajectory into dangerous regions of health space, as doctors turn to new developments in personalized medicine to try to return the individual to a healthy path.  This is the ultimate generalization of Galileo’s simple parabolic trajectory.

1642    Galileo dies

1656    Huygens invents pendulum clock

1665    Huygens observes “odd kind of sympathy” in synchronized clocks

1673    Huygens publishes Horologium Oscillatorium sive de motu pendulorum

1736    Euler Seven Bridges of Königsberg

1845    Kirchhoff’s circuit laws

1852    Guthrie four color problem

1857    Cayley trees

1858    Hamiltonian cycles

1887    Cajal neural staining microscopy

1913    Michaelis Menten dynamics of enzymes

1924    Berger, Hans: neural oscillations (Berger invented the EEG)

1926    van der Pol dimensionless form of the equation

1927    van der Pol periodic forcing

1943    McCulloch and Pitts mathematical model of neural nets

1948    Wiener cybernetics

1952    Hodgkin and Huxley action potential model

1952    Turing instability model

1956    Sutherland cyclic AMP

1957    Broadbent and Hammersley bond percolation

1958    Rosenblatt perceptron

1959    Erdös and Renyi random graphs

1962    Cohen EGF discovered

1965    Sebeok coined zoosemiotics

1966    Mesarovich systems biology

1967    Winfree biological rhythms and coupled oscillators

1969    Glass Moire patterns in perception

1970    Rodbell G-protein

1971    phrase “strange attractor” coined (Ruelle)

1972    phrase “signal transduction” coined (Rensing)

1975    phrase “chaos theory” coined (Yorke)

1975    Werbos backpropagation

1975    Kuramoto transition

1976    Robert May logistic map

1977    Mackey-Glass equation and dynamical disease

1982    Hopfield network

1990    Mirollo and Strogatz pulse-coupled oscillators

1997    Tomita systems biology of a cell

1998    Strogatz and Watts Small World network

1999    Barabasi Scale Free networks

2000    Sequencing of the human genome

Physics in the Age of Contagion. Part 3: Testing and Tracing COVID-19

In the midst of this COVID crisis (and the often botched governmental responses to it), there have been several success stories: Taiwan, South Korea, Australia and New Zealand stand out. What are the secrets to their success? First is the willingness of the population to accept the seriousness of the pandemic and to act accordingly. Second is a rapid and coherent (and competent) governmental response. Third is biotechnology and the physics of ultra-sensitive biomolecule detection.

Antibody Testing

A virus consists of a protein package called a capsid that surrounds polymers of coding RNA. Protein molecules on the capsid are specific to the virus and are the key to testing whether a person has been exposed to the virus. These specific molecules are called antigens, and the body produces antibodies — large biomolecules — that are rapidly evolved by the immune system and released into the bloodstream to recognize and bind to the antigen. The recognition and binding are highly specific (though not perfect) to the capsid proteins of the virus, so that other types of antibodies (produced to fend off other infections) tend not to bind to it. This specificity enables antibody testing.

In principle, all one needs to do is isolate the COVID-19 antigen, bind it to a surface, and run a sample of a patient’s serum (the part of the blood without the blood cells) over the same surface. If the patient has produced antibodies against COVID-19, these antibodies will attach to the antigens stuck to the surface. After washing away the rest of the serum, what remains are anti-COVID antibodies attached to the antigens bound to the surface. The next step is to determine whether these antibodies have been bound to the surface or not.

Fig. 1 Schematic of an antibody macromolecule. The total height of the molecule is about 3 nanometers. The antigen binding sites are at the ends of the upper arms.

At this stage, there are many possible alternative technologies for detecting the bound antibodies (see the section below on the physics of the BioCD for one approach). A conventional detection approach is known as ELISA (Enzyme-linked immunosorbent assay). To detect the bound antibody, a secondary antibody that binds to human antibodies is added to the test well. This secondary antibody contains either a fluorescent molecular tag or an enzyme that causes the color of the well to change (kind of like how a pregnancy test causes a piece of paper to change color). If the COVID antigen binds antibodies from the patient serum, then this second antibody will bind to the first and can be detected by fluorescence or by simple color change.

The technical challenges associated with antibody assays relate to false positives and false negatives. A false positive happens when the serum is too sticky and some antibodies NOT against COVID tend to stick to the surface of the test well. This is called non-specific binding. The secondary antibodies bind to these non-specifically-bound antibodies and a color change reports a detection, when in fact no COVID-specific antibodies were there. This is a false positive — the patient had not been exposed, but the test says they were.

On the other hand, a false negative occurs when the patient serum is possibly too dilute and, even though anti-COVID antibodies are present, they don’t bind sufficiently to the test well to be detectable. This is a false negative — the patient had been exposed, but the test says they weren’t. Despite how mature antibody assay technology is, false positives and false negatives are very hard to eliminate. It is fairly common for false-positive and false-negative rates to be in the range of 5% to 10%, even for high-quality immunoassays. The finite accuracy of the tests must be considered when doing community planning for testing and tracking. But the bottom line is that even 90% accuracy on the test can do a lot to stop the spread of the infection. This is because of the geometry of social networks and how important it is to find and isolate the super spreaders.

Social Networks

The web of interconnections in any complex set of communities isn’t just random. Whether in interpersonal networks, or in networks of cities and states and nations, it is like the world-wide-web, where the most popular webpages get the most links. This is the same phenomenon that makes the rich richer and the poor poorer. It produces a network with a few hubs that have a large fraction of the links. A network model that captures this network topology is known as the Barabasi-Albert model for scale-free networks [1]. A scale-free network tends to have one node that has the most links, then a couple of nodes that have slightly fewer links, then several more with even fewer, and so on, until there are a very large number of nodes with just a single link each.
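To get a feel for this topology, here is a minimal sketch (using the same networkx generator that appears in the full code below; the node count and random seed are arbitrary choices for illustration) that builds a scale-free graph and tabulates its degree distribution:

# Build a small Barabasi-Albert (scale-free) network and tabulate its degrees.
# The parameters (128 nodes, 1 link per new node, seed) are illustrative only.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(128, 1, seed=42)    # preferential-attachment graph
degrees = [d for _, d in G.degree()]

print('highest degree:', max(degrees))           # the hub of the network
for k, count in sorted(Counter(degrees).items()):
    print(f'degree {k}: {count} nodes')          # long tail of degree-1 nodes

Typically most of the nodes come out with degree one, while a handful of hubs carry most of the links, which is exactly the property exploited below when the highest-degree node is isolated.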

When it comes to pandemics, this type of network topology is both a curse and a blessing. It is a curse, because if the popular node becomes infected it tends to infect a large fraction of the rest of the network because it is so linked in. But it is a blessing, because if that node can be identified and isolated from the rest of the network, then the chance of the pandemic sweeping across the whole network can be significantly reduced. This is where testing and contact tracing becomes so important. You have to know who is infected and who they are connected with. Only then can you isolate the central nodes of the network and make a dent in the pandemic spread.

An example of a Barabasi-Albert network with 128 nodes is shown in Fig. 2. Some nodes have many links out (and in); the number of links connecting a node is called the node degree. There are several nodes of very high degree (a degree around 25 in this case) but also very many nodes that have only a single link. It’s the high-degree nodes that matter in a pandemic. If they get infected, then they infect almost the entire network. This scale-free network structure emphasizes the formation of central high-degree nodes. It tends to hold for many social networks, but it can also stand for cities across a nation. A city like New York has links all over the country (by flights), while my little town of Lafayette IN might be modeled by a single link to Indianapolis. That same scaling structure is seen across many scales, from interactions among nations to interactions among citizens in towns.

Fig. 2 A scale-free network with 128 nodes. A few nodes have high degree, but most nodes have a degree of one.

Isolating the Super Spreaders

In the network of nodes in Fig. 2, each node can be considered as a “compartment” in a multi-compartment SIR model (see my previous blog for the two-compartment SIR model of COVID-19). The infection of each node depends on the SIR dynamics of that node, plus the infections coming in over links from other infected nodes. The equations of the dynamics for each node are
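(reconstructed here in a form consistent with the Python code below, with the within-node infection carried on the diagonal of the adjacency matrix)

$$\frac{dI_a}{dt} = -\mu I_a + \beta \, S_a \sum_b A_{ab} I_b \;, \qquad \frac{dS_a}{dt} = -\beta \, S_a \sum_b A_{ab} I_b$$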

where Aab is the adjacency matrix, in which self-connection is allowed (infection dynamics within a node), and the sums go over all the nodes of the network. In this model, the population of each node is set equal to the degree ka of the node. The spread of the pandemic across the network depends on the links and where the infection begins, but the overall infection is similar to the simple SIR model for a given average network degree.

However, if the pandemic starts, but then the highest-degree node (the super spreader) is isolated (by testing and contact tracing), then the saturation of the disease across the network can be decreased in a much greater proportion than simply given by the population of the isolated node. For instance, in the simulation in Fig. 3, a node of degree 20 is removed at 50 days. The fraction of the population that is isolated is only 10%, yet the saturation of the disease across the whole network is decreased by more than a factor of 2.

Fig. 3 Scale-free network of 128 nodes. Solid curve is infection dynamics of the full network. Dashed curve is the infection when the highest-degree node was isolated at 50 days.

In a more realistic model with many more nodes, and full testing to allow the infected nodes and their connections to be isolated, the disease can be virtually halted. This is what was achieved in Taiwan and South Korea. The question is why the United States, with its technologically powerful companies and all their capabilities, was so unprepared or unwilling to achieve the same thing.

Python Code: NetSIRSF.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
NetSIRSF.py
Created on Sat May 11 08:56:41 2019
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

# https://www.python-course.eu/networkx.php
# https://networkx.github.io/documentation/stable/tutorial.html
# https://networkx.github.io/documentation/stable/reference/functions.html

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
import networkx as nx
import time
from random import random

tstart = time.time()

plt.close('all')

betap = 0.014;
mu = 0.13;

print('beta = ',betap)
print('betap/mu = ',betap/mu)


N = 128      # 50


facoef = 2
k = 1
nodecouple = nx.barabasi_albert_graph(N, k, seed=None)

indhi = 0
deg = 0
for omloop in nodecouple.node:
    degtmp = nodecouple.degree(omloop)
    if degtmp > deg:
        deg = degtmp
        indhi = omloop
print('highest degree node = ',indhi)
print('highest degree = ',deg)

plt.figure(1)
colors = [(random(), random(), random()) for _i in range(N)]   # one random color per node
nx.draw_circular(nodecouple,node_size=75, node_color=colors)
print(nx.info(nodecouple))
        
# function: omegout, yout = coupleN(G)
def coupleN(G,tlim):

    # function: yd = flow_deriv(x_y)
    def flow_deriv(x_y,t0):
        
        N = int(x_y.size/2)      # number of nodes: x_y holds [I_0..I_{N-1}, S_0..S_{N-1}]
        yd = np.zeros(shape=(2*N,))
        ind = -1
        for omloop in G.node:
            ind = ind + 1
            temp1 = -mu*x_y[ind] + betap*x_y[ind]*x_y[N+ind]
            temp2 =  -betap*x_y[ind]*x_y[N+ind]
            linksz = G.node[omloop]['numlink']
            for cloop in range(linksz):
                cindex = G.node[omloop]['link'][cloop]
                indx = G.node[cindex]['index']
                g = G.node[omloop]['coupling'][cloop]
                
                temp1 = temp1 + g*betap*x_y[indx]*x_y[N+ind]
                temp2 = temp2 - g*betap*x_y[indx]*x_y[N+ind]
            
            yd[ind] = temp1
            yd[N+ind] = temp2
                
        return yd
    # end of function flow_deriv(x_y)
    x0 = x_y
    t = np.linspace(0,tlim,tlim)      # 600  300
    y = integrate.odeint(flow_deriv, x0, t)        
    
    return t,y
    # end of function: omegout, yout = coupleN(G)

lnk = np.zeros(shape = (N,), dtype=int)
ind = -1
for loop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[loop]['index'] = ind
    nodecouple.node[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.node[loop]['numlink'] = len(list(nx.neighbors(nodecouple,loop)))
    lnk[ind] = len(list(nx.neighbors(nodecouple,loop)))

gfac = 0.1

ind = -1
for nodeloop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[nodeloop]['coupling'] = np.zeros(shape=(lnk[ind],))
    for linkloop in range (lnk[ind]):
        nodecouple.node[nodeloop]['coupling'][linkloop] = gfac*facoef
            
x_y = np.zeros(shape=(2*N,))   
for loop in nodecouple.node:
    x_y[loop]=0
    x_y[N+loop]=nodecouple.degree(loop)
    #x_y[N+loop]=1
x_y[N-1] = 0.01                # seed a small infection in the last node
x_y[2*N-1] = x_y[2*N-1] - 0.01
N0 = np.sum(x_y[N:2*N]) - x_y[indhi] - x_y[N+indhi]
print('N0 = ',N0)
     
tlim0 = 600
t0,yout0 = coupleN(nodecouple,tlim0)                           # Here is the subfunction call for the flow


plt.figure(2)
plt.yscale('log')
plt.gca().set_ylim(1e-3, 1)
for loop in range(N):
    lines1 = plt.plot(t0,yout0[:,loop])
    lines2 = plt.plot(t0,yout0[:,N+loop])
    lines3 = plt.plot(t0,N0-yout0[:,loop]-yout0[:,N+loop])

    plt.setp(lines1, linewidth=0.5)
    plt.setp(lines2, linewidth=0.5)
    plt.setp(lines3, linewidth=0.5)
    

Itot = np.sum(yout0[:,0:N],axis = 1) - yout0[:,indhi]        # total infected (excluding the highest-degree node)
Stot = np.sum(yout0[:,N:2*N],axis = 1) - yout0[:,N+indhi]    # total susceptible (excluding the highest-degree node)
Rtot = N0 - Itot - Stot
plt.figure(3)
#plt.plot(t0,Itot,'r',t0,Stot,'g',t0,Rtot,'b')
plt.plot(t0,Itot/N0,'r',t0,Rtot/N0,'b')
#plt.legend(('Infected','Susceptible','Removed'))
plt.legend(('Infected','Removed'))

# Repeat, but isolate the highest-degree node by removing its edges
x_y = np.zeros(shape=(2*N,))   
for loop in nodecouple.node:
    x_y[loop]=0
    x_y[N+loop]=nodecouple.degree(loop)
    #x_y[N+loop]=1
x_y[N-1] = 0.01
x_y[2*N-1] = x_y[2*N-1] - 0.01
N0 = np.sum(x_y[N:2*N]) - x_y[indhi] - x_y[N+indhi]
     
tlim0 = 50
t0,yout0 = coupleN(nodecouple,tlim0)


# remove all edges from highest-degree node
ee = list(nodecouple.edges(indhi))
nodecouple.remove_edges_from(ee)
print(nx.info(nodecouple))

#nodecouple.remove_node(indhi)        
lnk = np.zeros(shape = (N,), dtype=int)
ind = -1
for loop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[loop]['index'] = ind
    nodecouple.node[loop]['link'] = list(nx.neighbors(nodecouple,loop))
    nodecouple.node[loop]['numlink'] = len(list(nx.neighbors(nodecouple,loop)))
    lnk[ind] = len(list(nx.neighbors(nodecouple,loop)))

ind = -1
x_y = np.zeros(shape=(2*N,)) 
for nodeloop in nodecouple.node:
    ind = ind + 1
    nodecouple.node[nodeloop]['coupling'] = np.zeros(shape=(lnk[ind],))
    x_y[ind] = yout0[tlim0-1,nodeloop]
    x_y[N+ind] = yout0[tlim0-1,N+nodeloop]
    for linkloop in range (lnk[ind]):
        nodecouple.node[nodeloop]['coupling'][linkloop] = gfac*facoef

    
tlim1 = 500
t1,yout1 = coupleN(nodecouple,tlim1)

t = np.zeros(shape=(tlim0+tlim1,))
yout = np.zeros(shape=(tlim0+tlim1,2*N))
t[0:tlim0] = t0
t[tlim0:tlim1+tlim0] = tlim0+t1
yout[0:tlim0,:] = yout0
yout[tlim0:tlim1+tlim0,:] = yout1


plt.figure(4)
plt.yscale('log')
plt.gca().set_ylim(1e-3, 1)
for loop in range(N):
    lines1 = plt.plot(t,yout[:,loop])
    lines2 = plt.plot(t,yout[:,N+loop])
    lines3 = plt.plot(t,N0-yout[:,loop]-yout[:,N+loop])

    plt.setp(lines1, linewidth=0.5)
    plt.setp(lines2, linewidth=0.5)
    plt.setp(lines3, linewidth=0.5)
    

Itot = np.sum(yout[:,0:N],axis = 1) - yout[:,indhi]          # total infected (excluding the isolated node)
Stot = np.sum(yout[:,N:2*N],axis = 1) - yout[:,N+indhi]      # total susceptible (excluding the isolated node)
Rtot = N0 - Itot - Stot
plt.figure(3)
#plt.plot(t,Itot,'r',t,Stot,'g',t,Rtot,'b',linestyle='dashed')
plt.plot(t,Itot/N0,'r',t,Rtot/N0,'b',linestyle='dashed')
#plt.legend(('Infected','Susceptible','Removed'))
plt.legend(('Infected','Removed'))
plt.xlabel('Days')
plt.ylabel('Fraction of Sub-Population')
plt.title('Network Dynamics for COVID-19')
plt.show()

elapsed_time = time.time() - tstart
print('elapsed time = ',format(elapsed_time,'.2f'),'secs')

Caveats and Disclaimers

No effort in the network model was made to fit actual disease statistics. In addition, the network in Figs. 2 and 3 only has 128 nodes, and each node was a “compartment” that had its own SIR dynamics. This is a coarse-graining approach that would need to be significantly improved to try to model an actual network of connections across communities and states. In addition, isolating the super spreader in this model would be like isolating a city rather than an individual, which is not realistic. The value of a heuristic model is to gain a physical intuition about scales and behaviors without being distracted by details of the model.

Postscript: Physics of the BioCD

Because antibody testing has become such a point of public discussion, it brings to mind a chapter of my own life that was closely related to this topic. About 20 years ago my research group invented and developed an antibody assay called the BioCD [2]. The “CD” stood for “compact disc”, and it was a spinning-disk format that used laser interferometry to perform fast and sensitive measurements of antibodies in blood. We launched a start-up company called QuadraSpec in 2004 to commercialize the technology for large-scale human disease screening.

A conventional compact disc consists of about a billion individual nulling interferometers impressed as pits into plastic. When the read-out laser beam straddles one of the billion pits, it experiences a condition of perfect destructive interference — a zero. But when it is not shining on a pit, it experiences high reflection — a one. So as the laser scans across the surface of the spinning disc, a series of high and low reflections read off bits of information. Because the disc spins very fast, the data rate is very high, and a billion bits can be read in a matter of minutes.

The idea struck me in late 1999 just before getting on a plane to spend a weekend in New York City: What if each pit were like a test tube, so that instead of reading bits of ones and zeros it could read tiny amounts of protein? Then instead of a billion ones and zeros the disc could read a billion protein concentrations. But a nulling interferometer is the least sensitive way to measure something, because it operates at a local minimum in the response curve. The most sensitive way to do interferometry is in the condition of phase quadrature, when the signal and reference waves are ninety degrees out of phase and the response curve is steepest, as in Fig. 4. Therefore, the only thing you need to turn a compact disc from reading ones and zeros to reading proteins is to reduce the height of the pit by half. In practice we used raised ridges of gold instead of pits, but it worked in the same way and was extremely sensitive to the attachment of small amounts of protein.
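To see why halving the pit height does the trick, consider a minimal two-beam sketch (the notation here is mine, introduced only for illustration). A step of height h imposes a round-trip phase difference of 4πh/λ between the light reflected from the top and from the bottom of the step, so the detected intensity varies as

$$I(h) \propto 1 + \cos\!\left(\frac{4\pi h}{\lambda}\right).$$

A CD pit depth of h = λ/4 gives a phase of π, the destructive null used to read bits, where the slope dI/dh vanishes. Halving the height to h = λ/8 gives a phase of π/2, the quadrature condition, where the slope is maximal and a tiny additional height change, such as a nanometer of bound protein, produces the largest possible change in the detected signal.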

Fig. 4 Principle of the BioCD antibody assay. Reprinted from Ref. [3]

This first generation BioCD was literally a work of art. It was composed of a radial array of gold strips deposited on a silicon wafer. We were approached in 2004 by an art installation called “Massive Change” that was curated by the Vancouver Art Museum. The art installation travelled to Toronto and then to the Museum of Contemporary Art in Chicago, where we went to see it. Our gold-and-silicon BioCD was on display in a section on art in technology.

The next-gen BioCDs were much simpler, consisting simply of oxide layers on silicon wafers, but they were much more versatile and more sensitive. An optical scan of a printed antibody spot on a BioCD is shown in Fig. 5. The protein height is only about 1 nanometer (the diameter of the spot is 100 microns). Interferometry can measure a change in the height of the spot (caused by binding antibodies from patient serum) of only about 10 picometers averaged over the size of the spot. This exquisite sensitivity enabled us to detect tiny fractions of blood-borne antigens and antibodies at the level of only a nanogram per milliliter.

Fig. 5 Interferometric measurement of a printed antibody spot on a BioCD. The spot height is about 1 nanometer and the diameter is about 100 microns. Interferometry can measure a change of height by about 10 picometers averaged over the spot.

The real estate on a 100 mm diameter disc was sufficient to do 100,000 antibody assays, which would be 256 protein targets across 512 patients on a single BioCD that would take only a few hours to finish reading!

Fig. 6 A single BioCD has the potential to measure hundreds of proteins or antibodies per patient with hundreds of patients per disc.

The potential of the BioCD for massively multiplexed protein measurements made it possible to imagine testing a single patient for hundreds of diseases in a matter of hours using only a few drops of blood. Furthermore, by being simple and cheap, the test would allow people to track their health over time to look for emerging health trends.

If this sounds familiar to you, you’re right. That’s exactly what the notorious company Theranos was promising investors 10 years after we first proposed this idea. But here’s the difference: We learned that the tech did not scale. It cost us $10M to develop a BioCD that could test for just 4 diseases. And it would cost more than an additional $10M to get it to 8 diseases, because the antibody chemistry is not linear. Each new disease that you try to test creates a combinatorics problem of non-specific binding with all the other antibodies and antigens. To scale the test up to 100 diseases on the single platform using only a few drops of blood would have cost us more than $1B of R&D expenses — if it was possible at all. So we stopped development at our 4-plex product and sold the technology to a veterinary testing company that uses it today to test for diseases like heartworm and Lyme disease in blood samples from pet animals.

Five years after we walked away from massively multiplexed antibody tests, Theranos proposed the same thing and took in more than $700M in US investment, but ultimately produced nothing that worked. The saga of Theranos and its charismatic CEO Elizabeth Holmes has been the topic of books and documentaries and movies like “The Inventor: Out for Blood in Silicon Valley” and a rumored big screen movie starring Jennifer Lawrence as Holmes.

The bottom line is that antibody testing is a difficult business, and ramping up rapidly to meet the demands of testing and tracing COVID-19 is going to be challenging. The key is not to demand too much accuracy per test. False positives are bad for the individual, because they let someone go about believing they are immune when they are not, and they might get sick. False negatives are also bad, because they keep someone locked down when they could be going about. But if an inexpensive test of only 90% accuracy (a level of accuracy that has already been called “unreliable” in some news reports) can be brought out in massive scale so that virtually everyone can be tested, and tested repeatedly, then the benefit to society would be great. In the scaling networks that tend to characterize human interactions, all it takes is a few high-degree nodes to be isolated to make infection rates plummet.

References

[1] A. L. Barabasi and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, no. 5439, pp. 509-512, Oct 15 (1999)

[2] D. D. Nolte, “Review of centrifugal microfluidic and bio-optical disks,” Review Of Scientific Instruments, vol. 80, no. 10, p. 101101, Oct (2009)

[3] D. D. Nolte and F. E. Regnier, “Spinning-Disk Interferometry: The BioCD,” Optics and Photonics News, no. October 2004, pp. 48-53, (2004)

Physics in the Age of Contagion. Part 2: The Second Wave of COVID-19

Since my last Blog on the bifurcation physics of COVID-19, most of the US has approached the crest of “the wave”, with the crest arriving sooner in hot spots like New York City and a few weeks later in rural areas like Lafayette, Indiana where I live. As of the posting of this Blog, most of the US is in lock-down with only a few hold-out states. Fortunately, this was sufficient to avoid the worst case scenarios of my last Blog, but we are still facing severe challenges.

There is good news! The second wave can be managed and minimized if we don’t come out of lock-down too soon.

One fall-out of the (absolutely necessary) lock-down is the serious damage done to the economy that is now in its greatest retraction since the Great Depression. The longer the lock-down goes, the deeper the damage and the longer to recover. The single most important question at this point in time, as we approach the crest, is when we can emerge from lock-down. This is a critical question. If we emerge too early, then the pandemic will re-kindle into a second wave that could exceed the first. But if we emerge later than necessary, then the economy may take a decade to fully recover. We need a Goldilocks solution: not too early and not too late. How do we assess that?

The Value of Qualitative Physics

In my previous Blog I laid out a very simple model called the Susceptible-Infected-Removed (SIR) model and provided a Python program whose parameters can be tweaked to explore the qualitative behavior of the model, answering questions like: What is the effect of longer or shorter quarantine periods? What role does social distancing play in saving lives? What happens if only a small fraction of the population pays attention and practices social distancing?

It is necessary to wait at least several weeks after the crest has passed before releasing the lock-down, in order to avoid the second wave.

It is important to note that none of the parameters in that SIR model are reliable and no attempt was made to fit the parameters to the actual data. To expert epidemiological modelers, this simplistic model is less than useless and potentially dangerous if wrong conclusions are arrived at and disseminated on the internet.

But here is the point: The actual numbers are less important than the functional dependences. What matters is how the solution changes as a parameter is changed. The Python programs allow non-experts to gain an intuitive understanding of the qualitative physics of the pandemic. For instance, it is valuable to gain a feeling of how sensitive the pandemic is to small changes in parameters. This is especially important because of the bifurcation physics of COVID-19 where very small changes can cause very different trajectories of the population dynamics.

In the spirit of the value of qualitative physics, I am extending here that simple SIR model to a slightly more sophisticated model that can help us understand the issues and parametric dependences of this question of when to emerge from lock-down. Again, no effort is made to fit actual data of this pandemic, but there are still important qualitative conclusions to be made.

The Two-Compartment SIR Model of COVID-19

To approach a qualitative understanding of what happens by varying the length of time of the country-wide shelter-in-place, it helps to think of two cohorts of the public: those who are compliant and conscientious, valuing the lives of others, and those who don’t care and are non-compliant.

Fig. 1 Two-compartment SIR model for compliant and non-compliant cohorts.

These two cohorts can each be modeled separately by their own homogeneous SIR models, but with a coupling between them because even those who shelter in place must go out for food and medicines. The equations of this two-compartment model are
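(reconstructed here in a form consistent with the Python code below)

$$\frac{dI_n}{dt} = -\mu I_n + \beta\,(k_{nn} I_n + k_{nq} I_q)\,S_n \;, \qquad \frac{dS_n}{dt} = -\beta\,(k_{nn} I_n + k_{nq} I_q)\,S_n$$

$$\frac{dI_q}{dt} = -\mu I_q + \beta\,(k_{qn} I_n + k_{qq} I_q)\,S_q \;, \qquad \frac{dS_q}{dt} = -\beta\,(k_{qn} I_n + k_{qq} I_q)\,S_q$$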

where n and q refer to the non-compliant and the compliant cohorts, respectively. I and S are the infected and susceptible populations. The coupling parameters are knn for the coupling among non-compliant individuals, knq for the effect of the compliant individuals on the non-compliant, kqn for the effect of the non-compliant individuals on the compliant, and kqq for the effect of the compliant cohort on themselves.

There are two time frames for the model. The first time frame is the time of lock-down when the compliant cohort is sheltering in place and practicing good hygiene, but they still need to go out for food and medicines. (This model does not include the first responders. They are an important cohort, but do not make up a large fraction of the national population). The second time frame is after the lock-down is removed. Even then, good practices by the compliant group are expected to continue with the purpose to lower infections among themselves and among others.

This two-compartment model has roughly 8 adjustable parameters, all of which can be varied to study their effects on the predictions. None of them are well known, but general trends still can be explored.

Python Code: SIRWave.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
SIRWave.py
Created on Sat March 21 2020
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

print(' ')
print('SIRWave.py')

def solve_flow(param,max_time=1000.0):

    def flow_deriv(x_y_z_w,tspan):
        In, Sn, Iq, Sq = x_y_z_w
        
        Inp = -mu*In + beta*knn*In*Sn + beta*knq*Iq*Sn
        Snp = -beta*knn*In*Sn - beta*knq*Iq*Sn
        
        Iqp = -mu*Iq + beta*kqn*In*Sq + beta*kqq*Iq*Sq
        Sqp = -beta*kqn*In*Sq - beta*kqq*Iq*Sq
        
        return [Inp, Snp, Iqp, Sqp]
    
    x0 = [In0, Sn0, Iq0, Sq0]
    
    # Solve for the trajectories
    t = np.linspace(tlo, thi, thi-tlo)
    x_t = integrate.odeint(flow_deriv, x0, t)

    return t, x_t

beta = 0.02   # infection rate
dill = 5      # mean days infectious
mu = 1/dill   # decay rate
fnq = 0.3     # fraction not quarantining
fq = 1-fnq    # fraction quarantining
P = 330       # Population of the US in millions
mr = 0.002    # Mortality rate
dq = 90       # Days of lock-down (this is the key parameter)

# During quarantine
knn = 50      # Average connections per day for non-compliant group among themselves
kqq = 0       # Connections among compliant group
knq = 0       # Effect of compliant group on non-compliant
kqn = 5       # Effect of non-compliant group on compliant

initfrac = 0.0001          # Initial conditions:
In0 = initfrac*fnq         # infected non-compliant
Sn0 = (1-initfrac)*fnq     # susceptible non-compliant
Iq0 = initfrac*fq          # infected compliant
Sq0 = (1-initfrac)*fq      # susceptible compliant

tlo = 0
thi = dq

param = (mu, beta, knn, knq, kqn, kqq)    # flow parameters

t1, y1 = solve_flow(param)

In1 = y1[:,0]
Sn1 = y1[:,1]
Rn1 = fnq - In1 - Sn1
Iq1 = y1[:,2]
Sq1 = y1[:,3]
Rq1 = fq - Iq1 - Sq1

# Lift the quarantine: Compliant group continues social distancing
knn = 50      # Adjusted coupling parameters
kqq = 5
knq = 20
kqn = 15

fin1 = len(t1)
In0 = In1[fin1-1]
Sn0 = Sn1[fin1-1]
Iq0 = Iq1[fin1-1]
Sq0 = Sq1[fin1-1]

tlo = fin1
thi = fin1 + 365-dq

param = (mu, beta, knn, knq, kqn, kqq)

t2, y2 = solve_flow(param)

In2 = y2[:,0]
Sn2 = y2[:,1]
Rn2 = fnq - In2 - Sn2
Iq2 = y2[:,2]
Sq2 = y2[:,3]
Rq2 = fq - Iq2 - Sq2

fin2 = len(t2)
t = np.zeros(shape=(fin1+fin2,))
In = np.zeros(shape=(fin1+fin2,))
Sn = np.zeros(shape=(fin1+fin2,))
Rn = np.zeros(shape=(fin1+fin2,))
Iq = np.zeros(shape=(fin1+fin2,))
Sq = np.zeros(shape=(fin1+fin2,))
Rq = np.zeros(shape=(fin1+fin2,))

t[0:fin1] = t1
In[0:fin1] = In1
Sn[0:fin1] = Sn1
Rn[0:fin1] = Rn1
Iq[0:fin1] = Iq1
Sq[0:fin1] = Sq1
Rq[0:fin1] = Rq1

t[fin1:fin1+fin2] = t2
In[fin1:fin1+fin2] = In2
Sn[fin1:fin1+fin2] = Sn2
Rn[fin1:fin1+fin2] = Rn2
Iq[fin1:fin1+fin2] = Iq2
Sq[fin1:fin1+fin2] = Sq2
Rq[fin1:fin1+fin2] = Rq2

plt.figure(1)
lines = plt.semilogy(t,In,t,Iq,t,(In+Iq))
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant','Total'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Infected')
plt.title('Infection Dynamics for COVID-19 in US')
plt.show()

plt.figure(2)
lines = plt.semilogy(t,Rn*P*mr,t,Rq*P*mr)
plt.ylim([0.001,1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Deaths')
plt.title('Total Deaths for COVID-19 in US')
plt.show()

D = P*mr*(Rn[fin1+fin2-1] + Rq[fin1+fin2-1])
print('Deaths = ',D)

plt.figure(3)
lines = plt.semilogy(t,In/fnq,t,Iq/fq)
plt.ylim([0.0001,.1])
plt.xlim([0,thi])
plt.legend(('Non-compliant','Compliant'))
plt.setp(lines, linewidth=0.5)
plt.xlabel('Days')
plt.ylabel('Fraction of Sub-Population')
plt.title('Population Dynamics for COVID-19 in US')
plt.show()

Trends

The obvious trend to explore is the effect of changing the quarantine period. Fig. 2 shows the results of an early release from shelter-in-place compared to pushing the release date one month longer. The trends are:

  • If the lock-down is released early, the second wave can be larger than the first wave
  • If the lock-down is released early, the compliant cohort will be mostly susceptible and will have the majority of new cases
  • There are 40% more deaths when the lock-down is released early

If the lock-down is ended just after the crest, this is too early. It is necessary to wait at least several weeks after the crest has passed to avoid the second wave. There are almost 40% more deaths for the 90-day period than the 120-day period. In addition, for the case when the quarantine is stopped too early, the compliant cohort, since they are the larger fraction and are mostly susceptible, will suffer a greater number of new infections than the non-compliant group who put them at risk in the first place. In addition, the second wave for the compliant group would be worse than the first wave. This would be a travesty! But by pushing the quarantine out by just 1 additional month, the compliant group will suffer fewer total deaths than the non-compliant group. Most importantly, the second wave would be substantially smaller than the first wave for both cohorts.

Fig. 2 Comparison of 90-day quarantine versus 120-day quarantine for the compliant and non-compliant cohorts of individuals. When the ban is lifted too soon, the second wave can be bigger than the first. This model assumes that 30% of the population are non-compliant and that the compliant group continues to practice social distancing.

The lesson from this simple model is clear: push the quarantine date out as far as the economy can allow! There is good news! The second wave can be managed and minimized if we don’t come out of lock-down too soon.
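To explore this trend directly, here is a compact, self-contained sketch of the same two-compartment model as in SIRWave.py above, wrapped in a function of the lock-down length so that two release dates can be compared side by side. The parameter values are the illustrative ones from that script, not fits to data, so only the relative change between the two runs carries any meaning.

# Sweep the lock-down length dq in the two-compartment SIR model and compare
# the projected deaths. All parameter values are illustrative, not fits to data.
import numpy as np
from scipy import integrate

beta, mu = 0.02, 1/5          # infection rate and recovery rate (1/days infectious)
fnq = 0.3                     # fraction of the population that is non-compliant
fq = 1 - fnq                  # fraction that is compliant
P, mr = 330, 0.002            # US population (millions) and mortality rate

def run(k, y0, days):
    """Integrate the two-compartment model for one period with fixed couplings."""
    knn, knq, kqn, kqq = k
    def deriv(y, t):
        In, Sn, Iq, Sq = y
        return [-mu*In + beta*(knn*In + knq*Iq)*Sn,
                -beta*(knn*In + knq*Iq)*Sn,
                -mu*Iq + beta*(kqn*In + kqq*Iq)*Sq,
                -beta*(kqn*In + kqq*Iq)*Sq]
    t = np.linspace(0, days, days)
    return integrate.odeint(deriv, y0, t)

def projected_deaths(dq, total_days=365):
    eps = 0.0001                                          # initial infected fraction
    y0 = [eps*fnq, (1-eps)*fnq, eps*fq, (1-eps)*fq]
    y1 = run((50, 0, 5, 0), y0, dq)                       # couplings during lock-down
    y2 = run((50, 20, 15, 5), y1[-1], total_days - dq)    # couplings after release
    In, Sn, Iq, Sq = y2[-1]
    removed = (fnq - In - Sn) + (fq - Iq - Sq)            # recovered + deceased fraction
    return P*mr*removed                                   # deaths in millions

for dq in (90, 120):
    print(f'{dq}-day lock-down: projected deaths = {projected_deaths(dq):.3f} million')

The absolute numbers mean nothing here; the point is only that the difference between the two runs shows how sensitive the outcome is to the release date.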

Caveats and Disclaimers

This model is purely qualitative and only has value for studying trends that depend on changing parameters. Absolute numbers are not meant to be taken too seriously. For instance, the total number of deaths in this model is about 2x larger than what we are hearing from Dr. Fauci of NIAID at this time, so this simple model overestimates fatalities. Also, it doesn’t matter whether the number of quarantine days should be 60, 90 or 120 … what matters is that an additional month makes a large difference in the total number of deaths. If someone does want to model the best possible number of quarantine days — the Goldilocks solution — then they need to get their hands on a professional epidemiological model (or an actual epidemiologist). The model presented here is not appropriate for that purpose.

Note added in postscript on April 8: Since posting the original blog on April 6, Dr. Fauci announced that as many as 90% of individuals are practicing some form of social distancing. In addition, many infections are not being reported because of lack of testing, which means that the mortality rate is lower than thought. Therefore, I have changed the mortality rate and figures with numbers that better reflect the current situation (that is changing daily), but still without any attempt to fit the numerous other parameters.