Democracy against Authoritarians: The Physics of Public Opinion

An old joke goes that Democracy is a terrible form of government … except it’s better than all the others!

Our world today is faced with a conflict between democracy and dictatorship. On one side is the free world, where leaders are chosen by some form of representation of large numbers of citizens, sometimes even a majority. On the other side is authoritarianism, where a select few are selected by a select few to govern everyone else.

[I]t has been said that democracy is the worst form of Government except all those other forms that have been tried from time to time; but there is the broad feeling in our country that the people should rule, and that public opinion expressed by all constitutional means, should shape, guide, and control the actions of Ministers who are their servants and not their masters.

Winston Churchill (1947)

An argument in favor of democracy is freedom of choice for the largest segment of the population, plus the ability to remove leaders who fail to provide for the perceived welfare of the most citizens. This makes democracy adaptive, shifting with the times. It also makes leaders accountable for their actions and crimes. An argument in favor of authoritarianism is the myth of the benevolent dictator–someone who knows what’s best for the people even if the people don’t know it themselves.

But dictators are rarely benevolent, and as they become saturated with power, they are corrupted. The criminal massacres of Ukrainian civilians perpetrated by Putin are among the strongest recent arguments against authoritarianism. A single man decides, on a whim, the life and death of thousands or maybe more. The invasion of Ukraine is so egregious and unwarranted that we wonder how the Russian people can put up with their isolated and manic leader. Yet by some measure more than 60% of the people in Russia approve of the war.

How can the free world see the invasion as the atrocity it is, while Russia’s majority sees it as a just war? The answer is a surprising result of population dynamics known as the replicator-mutator equation. The challenge for us here in the free world is to learn how to game the replicator-mutator equation to break up the monopoly of popular opinion and make Putin pay for his arrogance. This blog explains how “mass hysteria” can arise from forces within a complex environment, and how to construct a possible antidote.

Replicator-Mutator Equation

There are several simple models of population dynamics that try to explain the rise and fall of the number of individuals that belong to varying cohorts within the population. These models incorporate aspects of relative benefit of one group over another, plus the chance to change sides–defection. The dynamics under these conditions can be highly nonlinear and highly non-intuitive. One of the simplest of these models is known as the replicator-mutator model where replication follows the fitness of the cohort, and where individuals can defect to a “more fit” cohort.

The basic dynamics of the model are

$$\dot{x}_a = W_{ab}\,x_b - \phi\,x_a$$

where $x_a$ is the fraction of the population that is in cohort $a$, $W_{ab}$ is a transition probability, and $\phi$ is the average fitness of the full population. The transition matrix is given by

$$W_{ab} = f_b\,Q_{ba}$$

where $f_b$ is the fitness of cohort $b$ and $Q_{ba}$ is a stochastic matrix that allows for defection of an individual from one cohort to another. The fitness of a cohort is given by

$$f_b = p_{bc}\,x_c$$

where $p_{bc}$ is the pay-off matrix for the relative benefit of one cohort at the expense of another. Finally, the average fitness is

$$\phi = f_c\,x_c$$
The Einstein implicit summation convention is assumed in all of these equations, and the metric space in which the dynamics are embedded is “flat” so that there is no essential difference between superscripts and subscripts. There is also a conservation law that the sum over all population fractions equals unity.

In the language of population dynamics, this model has frequency-dependent fitness, with defection and pay-off, in a zero-sum game.
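These equations can be put into code directly. Here is a minimal Python sketch of the model (the full Matlab program used for the figures appears at the end of this post); the cohort number N, the pay-off constant a, and the defection scale eps are illustrative choices, not the exact values behind every figure:

import numpy as np
from scipy.integrate import odeint

N, a, eps = 32, 0.5, 0.02                  # cohorts, off-diagonal pay-off, defection rate

# symmetric pay-off: unity on the diagonal, constant a off the diagonal
payoff = a + (1 - a)*np.eye(N)

# stochastic defection matrix Q: small random off-diagonal hops, rows sum to unity
Q = eps*np.random.rand(N, N)
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, 1 - Q.sum(axis=1))

def repmut(x, t):
    f = payoff @ x                 # frequency-dependent fitness  f_b = p_bc x_c
    W = Q.T * f                    # transition matrix  W_ab = f_b Q_ba
    phi = f @ x                    # average fitness of the full population
    return W @ x - phi*x           # replicator-mutator equation

x0 = np.random.rand(N); x0 /= x0.sum()     # random initial fractions summing to unity
x = odeint(repmut, x0, np.linspace(0, 4000, 2000))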

One of the simplest questions to answer with this model is how so many people can come to believe one thing. This is known as “opinion uniformity”.

Uniformity versus Diversity

This replicator-mutator model explains the property of opinion uniformity, as well as the opposite extreme of opinion diversity. The starting point for both is the pay-off matrix $p_{bc}$, which is assumed to be unity on the diagonal (b = c) and equal to a constant factor a off the diagonal (b ≠ c). This pay-off is symmetric, so all opinions are equally "believable". The stochastic defection matrix is close to unity on the diagonal and has random off-diagonal terms proportional to a constant ε. The defection matrix allows a person from one cohort to defect to the belief system of another cohort if they believe that the new cohort has more merit. Cohorts with greater merit (fitness) gain more members over time, while cohorts with lower merit lose members over time.

Note that the fitness increases with the number of members in the cohort. This is the bandwagon effect. A belief system is perceived to have more merit if there are more people who believe it. This clearly creates a positive feedback that would cause this cohort to grow. Even though all the cohorts have this same positive feedback, the zero-sum rule only allows one of the cohorts to grow to its maximum extent, taking members away from all the other cohorts. This is illustrated in Fig. 1. One belief system wins, taking almost the full population with it.

Fig. 1 Population fractions evolving as a function of time for a = 0.5 and a small defection rate ε = 0.02. One winner takes almost all the population. These are two views of the same data on semilog and log-log.

What allows the winner to take all is the positive feedback where the fitness of the cohort increases with the number of members, combined with the ability for that cohort to take members from other cohorts through the defection matrix.

However, all of the cohorts are trying the same thing, and the pay-off matrix is fully symmetric and equal for all cohorts, so no cohort is intrinsically "better" than another. This property opens the door to a strong alternative to opinion uniformity. As more members are allowed to defect, a trend counter to winner-take-all develops that helps to equalize the cohorts. At a critical defection rate, a bifurcation is passed and the winner-take-all state converts discontinuously to a super-symmetric state in which all opinions are held by equal numbers of people. This is illustrated in Fig. 2 for a slightly higher defection rate of ε = 0.03. The parameters are identical to those in Fig. 1, but the higher defection rate stabilizes the super-symmetric state of maximum diversity.

Fig. 2 Population fractions for higher defection rate of 0.03. In super-symmetric state, all opinions are held at the same rate with maximum diversity.

These two extreme results of the replicator-mutator equation, which switch suddenly from one to the other depending on the defection rate, may seem to produce solutions neither of which is ideal for a healthy democracy. On the one hand, in the uniform case where the winning opinion is monolithic, everyone is a carbon copy of everyone else, which is a form of cultural death (lack of diversity). On the other hand, one might argue that maximum opinion diversity is just as concerning, because no one can agree on anything. If all opinions are equivalent, then everyone in the society believes something different and there is no common ground. But in the diversity case, at least there is no state-level control of the population. In the case of opinion uniformity, the winning opinion can be manipulated by propaganda.
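The bifurcation can be located numerically by sweeping the defection rate and recording the largest cohort fraction at late times: below threshold the winner takes almost all, while above it the maximum collapses toward 1/N. A sketch reusing the repmut model above (the sweep range is an illustrative choice):

# sweep the defection scale and watch the winner-take-all state collapse
for eps in np.linspace(0.005, 0.05, 10):
    Q = eps*np.random.rand(N, N)           # rebuild the defection matrix at this rate
    np.fill_diagonal(Q, 0)
    np.fill_diagonal(Q, 1 - Q.sum(axis=1))
    x = odeint(repmut, x0, np.linspace(0, 4000, 2000))
    print(f"eps = {eps:.3f}   largest cohort fraction = {x[-1].max():.3f}")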

The Propaganda Machine

A government can "seed" the belief networks with propaganda that favors the fitness of what it wants its citizens to hear. Because of the positive feedback, any slight advantage of one opinion over others can allow that opinion to gain large numbers through the bandwagon effect. Of course, even stronger control that stifles dissent, for instance by shutting down the free press, makes it that much more likely that the state-controlled story is believed. This may be one reason why 60% of Russians (as of the writing of this blog) support Putin's war, despite the obvious lies that are being told. This is illustrated in Fig. 3 by boosting the payoff between two similar lies that the government wants its people to believe. These rise to take about 60% of the population. Members of the cohort are brain-washed, not by the government alone, but by all their neighbors who are parroting the same thing.

Fig. 3 Government propaganda acts as a “seed” that makes the propaganda grow faster than other beliefs, even for a defection rate of 0.03 which is above the threshold of Fig. 2.
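In the Python sketch above, the seeding of Fig. 3 amounts to boosting the mutual pay-off between two designated cohorts, mirroring the payoff(1,N) = 1 lines in the Matlab program below (the choice of cohorts 0 and N-1 is arbitrary):

payoff = a + (1 - a)*np.eye(N)
payoff[0, N-1] = 1.0    # two similar state-sponsored lies reinforce each other
payoff[N-1, 0] = 1.0
# re-running odeint(repmut, ...) now shows the two seeded cohorts rising together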

Breaking the Monopoly of Thought

How do we fight back? Not just against the Kremlin's propaganda, but also against QAnon and Trump's Big Lie and the pernicious fallacy of nationalism? The answer is simple: diversity of thought! The silver bullet in the replicator-mutator model is the defection matrix. The existence of a bifurcation means that a relatively small increase in the amount of diverse opinion, and the freedom to swap opinions, can lead to a major qualitative collapse of the monolithic thought, even when it is supported by government propaganda, as shown in Fig. 4. More people may still believe the state-supported propaganda than any single alternative, but it is no longer a majority.

Fig. 4 Increasing the defection rate can help equalize free opinions against the state-supported propaganda

The models above were all homogeneous, with every cohort coupled to every other. It is more realistic that people are connected through small-world networks. In this case, there is much more diversity, as shown in Fig. 5, although the defection rate needs to be much higher to prevent a monolithic opinion from dominating. The state-supported propaganda is buried in the resulting mix of diverse ideas. Therefore, to counteract state control, people must feel free to hop about in their choice of beliefs and must have access to other beliefs.

Fig. 5 The defection matrix is multiplied by the adjacency matrix of a small-world network. There is significant diversity of thought, but a relatively high defection rate is needed. The state-supported propaganda is buried in this mix.
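The small-world run of Fig. 5 can be sketched by masking the defection matrix with the adjacency matrix of a Watts-Strogatz network; the networkx graph parameters below are illustrative stand-ins for the makeSW(N,7,0.125) call in the Matlab program:

import networkx as nx

G = nx.watts_strogatz_graph(N, 6, 0.125)     # small-world: ring lattice + random rewiring
Adj = nx.to_numpy_array(G) + np.eye(N)       # keep self-transitions on the diagonal

Q = Q * Adj                                  # defection allowed only along network links
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, 1 - Q.sum(axis=1))       # re-normalize rows to keep Q stochastic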

This is a bit paradoxical. On the one hand, the connectivity of the internet has fostered the rise of conspiracy theories and other odd-ball ideas. But sustained access to multiple sources of information is the best defense against all that crazy stuff winning out. In other words, not only do we have to put up with the lunatic fringe if we are to have full diversity of thought, but we need to encourage everyone to feel free to "shop around" for different ideas, even if some of them are crazy. Our free society shouldn't be cancelling people who have divergent opinions, because that sets us down the path to authoritarianism. As a recent ad in the New York Times put it, "Cancel culture cancels culture." Unfortunately, authoritarianism is on the rise around the world, and the US almost suffered that fate on Jan. 6, 2021. Furthermore, with Xi aligning with Putin and giving him the green light on Ukraine, cynically on the eve of the Olympic Games (of peace), the new world order will revolve around that axis for decades to come, if the world survives that long. Diversity and freedom may be the only antidote.

By David D. Nolte, March 24, 2022

Matlab Program: Repmut.m

function repmut
% Replicator-mutator dynamics with network-masked defection.
% Helper functions (makeglobal, makeER, makeSF, makeSW, adjacency, hamming,
% stochasticmatrix, zerodiag) are available in the repository:
% https://github.itap.purdue.edu/nolte/Matlab-Programs-for-Nonlinear-Dynamics

clear
format compact

N = 63;     
p = 0.5;

mutype = 1;     % 0 = Hamming   1 = rand
pay = 1;        % 0 = Hamming   1 = 1/sqrt(N) 
ep = 0.5;      % average mutation rate: 0.1 to 0.01 typical  (0.4835)

%%%%% Set original population
x0temp = rand(1,N);    % Initial population
sx = sum(x0temp);
y0 = x0temp/sx;
Pop0 = sum(y0);


%%%%% Set Adjacency

%node = makeglobal(N);
%node = makeER(N,0.25);       % 0.5     0.25 
%node = makeSF(N,6);       % 12         6
node = makeSW(N,7,0.125);   % 15,0.5    7,0.5
[Adj,degree,Lap] = adjacency(node);

%%%%%% Set Hamming distance
for yloop = 1:N
    for xloop = 1:N
        H(yloop,xloop) = hamming(yloop-1,xloop-1);
    end
end

%%%%%%% Set Mutation matrix
if mutype == 0
    Qtemp = 1./(1+H/ep);    %Mutation matrix on Hamming
    Qsum = sum(Qtemp,2);
    mnQsum = mean(Qsum);
    
    % Normalize mutation among species
    for yloop = 1:N
        for xloop = 1:N
            Q(yloop,xloop) = Qtemp(yloop,xloop)/Qsum(xloop);
        end
    end
    
elseif mutype == 1  
    S = stochasticmatrix(N);
    Stemp = S - diag(diag(S));
    Qtemp = ep*Stemp;
    sm = sum(Qtemp,2)';
    Q = Qtemp + diag(ones(1,N) - sm);
end

figure(1)
imagesc(Q)
title('Mutation Matrix')
colormap(jet)

%%%%%%% Set payoff matrix
if pay == 1
    payoff = zeros(N,N);
    for yloop = 1:N
        payoff(yloop,yloop) = 1;
        for xloop = yloop + 1:N
            payoff(yloop,xloop) = p;
            payoff(xloop,yloop) = p;
        end
    end
    payoff(1,N) = 1;    % Propaganda: boost mutual pay-off between cohorts 1 and N
    payoff(N,1) = 1;
elseif pay == 0
    payoff = zerodiag(exp(-1*H));
end

figure(2)
imagesc(payoff)
title('Payoff Matrix')
colormap(jet)

% Run time evolution
tspan = [0 4000];
[t,x] = ode45(@quasispec,tspan,y0);

Pop0
[sz,dum] = size(t);
Popend = sum(x(sz,:))

for loop = 1:N
    fit(loop) = sum(payoff(:,loop)'.*x(sz,:));
end

phistar = sum(fit.*x(sz,:))       % final average fitness

xend = x(sz,:)
sortxend = sort(xend,'descend');
coher = sum(sortxend(1:2))

figure(3)
clf
h = colormap(lines);
for loop = 1:N
    plot(t,x(:,loop),'Color',h(round(loop*64/N),:),'LineWidth',1.25)
    hold on
end
hold off

figure(4)
clf
for loop = 1:N
    semilogx(t,x(:,loop),'Color',h(round(loop*64/N),:),'LineWidth',1.25)
    hold on
end
hold off

figure(5)
clf
for loop = 1:N
    semilogy(t,x(:,loop),'Color',h(round(loop*64/N),:),'LineWidth',1.25)
    hold on
end
hold off

figure(6)
clf
for loop = 1:N
    loglog(t,x(:,loop),'Color',h(round(loop*64/N),:),'LineWidth',1.25)
    hold on
end
hold off

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    function yd = quasispec(~,y)
        
        for floop = 1:N
            f(floop) = sum(payoff(:,floop).*y);
        end
        
        Connect = Adj + eye(N);
        
        % Transition matrix
        for yyloop = 1:N
            for xxloop = 1:N
                W(yyloop,xxloop) = f(yyloop)*(Connect(yyloop,xxloop)*Q(yyloop,xxloop));
            end
        end
        
        phi = sum(f'.*y);   % Average fitness of population
        
        yd = W*y - phi*y;
        
    end     % end quasispec
end

Further Reading

M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life. Cambridge, Mass.: Harvard University Press, 2006.

Spontaneous Symmetry Breaking: A Mechanical Model

Symmetry is the canvas upon which the laws of physics are written. Symmetry defines the invariants of dynamical systems. But when symmetry breaks, the laws of physics break with it, sometimes in dramatic fashion. Take the Big Bang, for example, when a highly-symmetric form of the vacuum, known as the “false vacuum”, suddenly relaxed to a lower symmetry, creating an inflationary cascade of energy that burst forth as our Universe.

The early universe was extremely hot and energetic, so much so that all the forces of nature acted as one, described by a unified Lagrangian (as yet resisting discovery by theoretical physicists) of the highest symmetry. Yet as the universe expanded and cooled, the symmetry of the Lagrangian broke, and the unified forces split into two (gravity and electro-nuclear). As the universe cooled further, the Lagrangian (of the Standard Model) lost more symmetry as the electro-nuclear force split into the strong nuclear force and the electroweak force. Finally, at a tiny fraction of a second after the Big Bang, the universe cooled enough that the unified electroweak force broke into the electromagnetic force and the weak nuclear force. At each stage, spontaneous symmetry breaking occurred, and invariants of physics were broken, splitting into new behavior. In 2008, Yoichiro Nambu received the Nobel Prize in physics for his model of spontaneous symmetry breaking in subatomic physics.

Fig. 1 The spontaneous symmetry breaking cascade after the Big Bang. From Ref.

Bifurcation Physics

Physics is filled with examples of spontaneous symmetry breaking. Crystallization and phase transitions are common examples. When the temperature is lowered on a fluid of molecules with high average local symmetry, the molecular interactions can suddenly impose lower-symmetry constraints on relative positions, and the liquid crystallizes into an ordered crystal. Even solid crystals can undergo a phase transition as one symmetry becomes energetically advantageous over another, and the crystal can change to a new symmetry.

In mechanics, any time a potential function evolves slowly with some parameter, it can start with one symmetry and evolve to another lower symmetry. The mechanical system governed by such a potential may undergo a discontinuous change in behavior.

In complex systems and chaos theory, sudden changes in behavior can be quite common as some parameter is changed continuously. These discontinuous changes in behavior, in response to a continuous change in a control parameter, are known as bifurcations. There are many types of bifurcation, carrying descriptive names like the pitchfork bifurcation, period-doubling bifurcation, Hopf bifurcation, and fold bifurcation, among others. The pitchfork bifurcation is a typical example, shown in Fig. 2. As a parameter is changed continuously (horizontal axis), a stable fixed point suddenly becomes unstable and two new stable fixed points emerge at the same time. This type of bifurcation is called pitchfork because the diagram looks like a three-tined pitchfork. (This is technically called a supercritical pitchfork bifurcation. In a subcritical pitchfork bifurcation the solid and dashed lines are swapped.) This is exactly the bifurcation displayed by a simple mechanical model that illustrates spontaneous symmetry breaking.
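For reference, the supercritical pitchfork is captured by a one-dimensional normal form (a standard result, quoted here for orientation rather than derived from the hoop model below):

$$\dot{x} = \mu x - x^3$$

For μ < 0 the only fixed point is x* = 0 and it is stable; for μ > 0 the origin becomes unstable and two stable fixed points emerge at x* = ±√μ, tracing out the pitchfork of Fig. 2.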

Fig. 2 Bifurcation plot of a pitchfork bifurcation. As a parameter is changed smoothly and continuously (horizontal axis), a stable fixed point suddenly splits into three fixed points: one unstable and the other two stable.

Sliding Mass on a Rotating Hoop

One of the simplest mechanical models that displays spontaneous symmetry breaking and the pitchfork bifurcation is a bead sliding without friction on a circular hoop that is spinning about its vertical axis, as in Fig. 3. When it spins very slowly, this is just a simple pendulum with a stable equilibrium at the bottom, and it oscillates with a natural oscillation frequency ω0 = sqrt(g/b), where b is the radius of the hoop and g is the acceleration due to gravity. On the other hand, when it spins very fast, the bead is flung to one side or the other by the centrifugal force. The bead then oscillates around one of the two new stable fixed points, while the fixed point at the bottom of the hoop becomes unstable, because any deviation to one side or the other causes the centrifugal force to kick in. (Note that the centrifugal force is a fictitious force that arises in the non-inertial rotating frame of the hoop.)

Fig. 3 A bead sliding without friction on a circular hoop rotating about a vertical axis. At high speed, the bead has a stable equilibrium to either side of the vertical.

The solution uses the Euler equations for the body frame along principal axes. In order to use the standard definitions of ω1, ω2, and ω3, the angle θ MUST be rotated around the x-axis.  This means the x-axis points out of the page in the diagram.  The y-axis is tilted up from horizontal by θ, and the z-axis is tilted from vertical by θ.  This establishes the body frame.

The components of the angular velocity are

$$\omega_1 = \dot{\theta}, \qquad \omega_2 = \omega \sin\theta, \qquad \omega_3 = \omega \cos\theta$$

And the moments of inertia are (assuming the bead is small)

$$I_1 = I_2 = m b^2, \qquad I_3 = 0$$

There is only one Euler equation that is non-trivial. This is for the x-axis and the angle θ. The x-axis Euler equation is

$$I_1 \dot{\omega}_1 - (I_2 - I_3)\,\omega_2 \omega_3 = -m g b \sin\theta$$

and solving for the angular acceleration gives

$$\ddot{\theta} = \omega^2 \sin\theta \cos\theta - \frac{g}{b}\sin\theta$$
This is a harmonic oscillator with a "phase transition" that occurs as ω increases from zero.  At first the stable equilibrium is at the bottom.  But when ω passes a critical threshold, the equilibrium angle increases to a finite angle set by the rotation speed

$$\theta_0 = \cos^{-1}\left(\frac{g}{b\,\omega^2}\right)$$

This can only be real if the magnitude of the argument is equal to or less than unity, which sets the critical threshold spin rate, beyond which the system moves to the new stable points on one side or the other, at

$$\omega_c = \sqrt{\frac{g}{b}} = \omega_0$$

which interestingly is the natural frequency of the non-rotating pendulum. Note that there are two equivalent angles (positive and negative), so this problem has a degeneracy.

This is an example of a dynamical phase transition that leads to spontaneous symmetry breaking and a pitchfork bifurcation. By integrating the angular acceleration we can get the effective potential for the problem. One contribution to the potential is due to gravity. The other is centrifugal force. When combined and plotted in Fig. 4 for a family of values of the spin rate ω, a pitchfork emerges naturally by tracing the minima in the effective potential. The values of the new equilibrium angles are given in Fig. 2.
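Explicitly, the effective potential obtained by integrating −mb²θ̈ over θ is

$$U_{\rm eff}(\theta) = -m g b \cos\theta - \frac{1}{2} m b^2 \omega^2 \sin^2\theta$$

where the first term is the gravitational potential and the second is the centrifugal contribution.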

Fig. 4 Effective potential as a function of angle for a family of spin rates. At the transition spin rate, the effective potential is essentially flat with zero natural frequency. The pitchfork is the dashed green line.

Below the transition threshold for ω, the bottom of the hoop is the equilibrium position. To find the natural frequency of oscillation, expand the acceleration expression for small θ

$$\ddot{\theta} \approx -\left(\frac{g}{b} - \omega^2\right)\theta$$

For small oscillations the natural frequency is given by

$$\Omega^2 = \frac{g}{b} - \omega^2 = \omega_0^2 - \omega^2$$

As the effective potential gets flatter, the natural oscillation frequency decreases until it vanishes at the transition spin frequency. As the hoop spins even faster, the new equilibrium positions emerge. To find the natural frequency of the new equilibria, expand θ around the new equilibrium θ′ = θ − θ0

$$\ddot{\theta}' \approx -\omega^2 \sin^2\theta_0\; \theta'$$

which is a harmonic oscillator with oscillation angular frequency

$$\Omega = \omega \sin\theta_0 = \omega\sqrt{1 - \frac{\omega_0^4}{\omega^4}}$$

Note that this is zero frequency at the transition threshold, then rises to match the spin rate of the hoop at high frequency. The natural oscillation frequency as a function of the spin rate looks like Fig. 5.

Fig. 5 Angular oscillation frequency for the bead. The bifurcation occurs at the critical spin rate ω = sqrt(g/b).

This mechanical analog is highly relevant for the spontaneous symmetry breaking that occurs in ferroelectric crystals when they go through a ferroelectric transition. At high temperature, these crystals have no internal polarization. But as the crystal cools towards the ferroelectric transition temperature, the optical phonon modes "soften" as the phonon frequency decreases, vanishing at the transition temperature when the crystal spontaneously polarizes along one of several equivalent directions. The observation of mode softening in a polar crystal is one signature of an impending ferroelectric phase transition. Our mass on the hoop captures this qualitative physics nicely.

Golden Behavior

For fun, let's find the spin frequency at which the harmonic oscillation frequency at the new equilibria equals the original natural frequency of the pendulum. Setting Ω = ω0 gives

$$\omega^2\left(1 - \frac{\omega_0^4}{\omega^4}\right) = \omega_0^2 \quad\Longrightarrow\quad \left(\frac{\omega^2}{\omega_0^2}\right)^2 - \frac{\omega^2}{\omega_0^2} - 1 = 0$$

so that

$$\frac{\omega^2}{\omega_0^2} = \frac{1 + \sqrt{5}}{2}$$

which is the golden ratio.  It's spooky how often the golden ratio appears in random physics problems!

The Butterfly Effect versus the Divergence Meter: The Physics of Stein’s Gate

Imagine if you just discovered how to text through time, i.e. time-texting, when a close friend meets a shocking death.  Wouldn't you text yourself in the past to try to prevent it?  But what if, every time you change the time-line and alter the future in untold ways, the friend continues to die, and you seemingly can never stop it?  This is the premise of Stein's Gate, a Japanese sci-fi animé that brings in the paradoxes of time travel, casts CERN as an evil clandestine spy agency, and introduces do-it-yourself inventors, hackers, and wacky characters, while centering on the terrible death of a lovable character that can never be avoided.

It is also a good computational physics project that explores the dynamics of bifurcations, bistability and chaos. I teach a course in modern dynamics in the Physics Department at Purdue University.  The topics of the course range broadly from classical mechanics to chaos theory, social networks, synchronization, nonlinear dynamics, economic dynamics, population dynamics, evolutionary dynamics, neural networks, and special and general relativity, all covered using a textbook that takes a modern view of dynamics [1].

For the final project of the second semester the students (Junior physics majors) are asked to combine two or three of the topics into a single project.  Students have come up with a lot of creative combinations: population dynamics of zombies, nonlinear dynamics of negative gravitational mass, percolation of misinformation in presidential elections, evolutionary dynamics of neural architecture, and many more.  In that spirit, and for a little fun, in this blog I explore the so-called physics of Stein’s Gate.

Stein’s Gate and the Divergence Meter

Stein’s Gate is a Japanese TV animé series that had a world-wide distribution in 2011.  The central premise of the plot is that certain events always occur even if you are on different timelines—like trying to avoid someone’s death in an accident.

This is the problem confronting Rintaro Okabe who tries to stop an accident that kills his friend Mayuri Shiina.  But every time he tries to change time, she dies in some other way.  It turns out that all the nearby timelines involve her death.  According to a device known as The Divergence Meter, Rintaro must get farther than 4% away from the original timeline to have a chance to avoid the otherwise unavoidable event. 

This is new.  Usually, time-travel sci-fi is based on the Butterfly Effect.  Chaos theory is characterized by something called sensitivity to initial conditions (SIC), meaning that slightly different starting points produce trajectories that diverge exponentially from nearby trajectories.  It is called the Butterfly Effect because of the whimsical notion that a butterfly flapping its wings in China can cause a hurricane in Florida. In the context of the butterfly effect, if you go back in time and change anything at all, the effect cascades through time until the present is unrecognizable. As an example, in one episode of the TV cartoon The Simpsons, Homer goes back in time to the age of the dinosaurs and kills a single mosquito. When he gets back to our time, everything has changed in bizarre and funny ways.

Stein’s Gate introduces a creative counter example to the Butterfly Effect.  Instead of scrambling the future when you fiddle with the past, you find that you always get the same event, even when you change a lot of the conditions—Mayuri still dies.  This sounds eerily familiar to a physicist who knows something about chaos theory.  It means that the unavoidable event is acting like a stable fixed point in the time dynamics—an attractor!  Even if you change the initial conditions, the dynamics draw you back to the fixed point—in this case Mayuri’s accident.  What would this look like in a dynamical system?

The Local Basin of Attraction

Dynamical systems can be described as trajectories in a high-dimensional state space.  Within state space there are special points where the dynamics are static, known as fixed points.  For a stable fixed point, a slight perturbation away will relax back to the fixed point.  For an unstable fixed point, on the other hand, a slight perturbation grows and the system dynamics evolve away.  However, there can be regions in state space where every initial condition leads to trajectories that stay within that region.  This is known as a basin of attraction, and the boundaries of these basins are called separatrices.

A high-dimensional state space can have many basins of attraction.  All the physics that starts within a basin stays within that basin—almost like its own self-consistent universe, bordered by countless other universes.  There are well-known physical systems that have many basins of attraction.  String theory is suspected to generate many adjacent universes where the physical laws are a little different in each basin of attraction. Spin glasses, which are amorphous solid-state magnets, have this property, as do recurrent neural networks like the Hopfield network.  Basins of attraction occur naturally within the physics of these systems.

It is possible to embed basins of attraction within an existing dynamical system.  As an example, let's start with one of the simplest types of dynamics, a hyperbolic fixed point

$$\dot{x} = -x, \qquad \dot{y} = y$$

that has a single saddle fixed point at the origin. We want to add a basin of attraction at the origin with a domain range given by a radius r0.  At the same time, we want to create a separatrix that keeps the outer hyperbolic dynamics separate from the internal basin dynamics.  To keep all outer trajectories in the outer domain, we can build a dynamical barrier to prevent the trajectories from crossing the separatrix.  This can be accomplished by adding a radial repulsive term

$$\dot{r} = \frac{A}{(r - r_0)^2}$$

In x-y coordinates this is

$$\dot{x} = \frac{A\,x}{(r - r_0)^2}, \qquad \dot{y} = \frac{A\,y}{(r - r_0)^2}$$

We also want to keep the internal dynamics of our basin separate from the external dynamics. To do this, we can multiply by a sigmoid function, like a Heaviside function H(r-r0), to zero-out the external dynamics inside our basin.  The final external dynamics is then

$$\dot{x}_{\rm ext} = H(r - r_0)\left[\frac{A\,x}{(r - r_0)^2} - x\right], \qquad \dot{y}_{\rm ext} = H(r - r_0)\left[\frac{A\,y}{(r - r_0)^2} + y\right]$$

Now we have to add the internal dynamics for the basin of attraction.  To make it a little more interesting, let's make the internal dynamics an autonomous oscillator

$$\dot{x}_{\rm int} = \omega y + \omega x (1 - r), \qquad \dot{y}_{\rm int} = -\omega x + \omega y (1 - r)$$

Putting this all together gives

$$\dot{x} = H(r - r_0)\left[\frac{A\,x}{(r - r_0)^2} - x\right] + \left[1 - H(r - r_0)\right]\left[\omega y + \omega x (1 - r)\right]$$

$$\dot{y} = H(r - r_0)\left[\frac{A\,y}{(r - r_0)^2} + y\right] + \left[1 - H(r - r_0)\right]\left[-\omega x + \omega y (1 - r)\right]$$

This looks a little complex, for such a simple model, but it illustrates the principle.  The sigmoid is best if it is differentiable, so instead of a Heaviside function it can be a Fermi function

$$1 - H(r - r_0) \;\rightarrow\; F(r) = \frac{1}{1 + e^{(r - r_0)/\sigma}}$$

The phase-space portrait of the final dynamics looks like

Figure 1. Hyperbolic dynamics with a basin of attraction embedded inside it at the origin. The dynamics inside the basin of attraction is a limit cycle.

Adding the internal dynamics does not change the far-field external dynamics, which are still hyperbolic.  The repulsive term does split the central saddle point into two saddle points, one on each side, left and right, so the repulsive term actually splits the dynamics. But the internal dynamics are self-contained and separate from the external dynamics. The origin is an unstable spiral that evolves to a limit cycle.  The basin boundary has marginal stability and is known as a "wall".

To verify the stability of the external fixed point, find the fixed point coordinates

$$y^* = 0, \qquad x^* = \pm\left(r_0 + \sqrt{A}\right)$$

and evaluate the Jacobian matrix (for A = 1 and r0 = 2, so that x* = ±3)

$$J = \begin{pmatrix} -6 & 0 \\ 0 & 2 \end{pmatrix}$$

which is clearly a saddle point because the determinant is negative.
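A quick numerical check of this stability analysis; a minimal sketch using the external flow alone with A = 1 and r0 = 2 and a finite-difference Jacobian:

import numpy as np

def ext_flow(v, A=1.0, r0=2.0):
    # external flow: hyperbolic saddle plus radial repulsive barrier
    x, y = v
    r = np.sqrt(x**2 + y**2)
    barrier = A/(r - r0)**2
    return np.array([x*barrier - x, y*barrier + y])

fp, h = np.array([3.0, 0.0]), 1e-6          # fixed point at (3, 0)
J = np.column_stack([(ext_flow(fp + h*e) - ext_flow(fp - h*e))/(2*h)
                     for e in np.eye(2)])
print(J)                  # approximately [[-6, 0], [0, 2]]
print(np.linalg.det(J))   # negative determinant confirms a saddle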

In the context of Stein’s Gate, the basin boundary is equivalent to the 4% divergence which is necessary to escape the internal basin of attraction where Mayuri meets her fate.

Python Program: SteinsGate2D.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
SteinsGate2D.py
Created on Sat March 6, 2021

@author: David Nolte
Introduction to Modern Dynamics, 2nd edition (Oxford University Press, 2019)

2D simulation of Stein's Gate Divergence Meter
"""
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')

def solve_flow(param,lim = [-6,6,-6,6],max_time=20.0):

    def flow_deriv(x_y, t0, alpha, beta, gamma):
        """Compute the time-derivative of the embedded-basin flow."""
        x, y = x_y
        
        w = 1                        # internal oscillator frequency
        R2 = x**2 + y**2
        R = np.sqrt(R2)              # radial distance from the origin
        arg = (R-2)/0.1              # Fermi function centered on r0 = 2, width 0.1
        env1 = 1/(1+np.exp(arg))     # ~1 inside the basin, ~0 outside
        env2 = 1 - env1              # complementary weight for the external flow
        
        # external hyperbolic flow with radial barrier, plus internal limit-cycle oscillator
        f = env2*(x*(1/(R-1.99)**2 + 1e-2) - x) + env1*(w*y + w*x*(1 - R))
        g = env2*(y*(1/(R-1.99)**2 + 1e-2) + y) + env1*(-w*x + w*y*(1 - R))
        
        return [f,g]
    model_title = 'Steins Gate'

    plt.figure()
    xmin = lim[0]
    xmax = lim[1]
    ymin = lim[2]
    ymax = lim[3]
    plt.axis([xmin, xmax, ymin, ymax])

    N = 24*4 + 47
    x0 = np.zeros(shape=(N,2))
    ind = -1
    for i in range(0,24):
        ind = ind + 1
        x0[ind,0] = xmin + (xmax-xmin)*i/23
        x0[ind,1] = ymin
        ind = ind + 1
        x0[ind,0] = xmin + (xmax-xmin)*i/23
        x0[ind,1] = ymax
        ind = ind + 1
        x0[ind,0] = xmin
        x0[ind,1] = ymin + (ymax-ymin)*i/23
        ind = ind + 1
        x0[ind,0] = xmax
        x0[ind,1] = ymin + (ymax-ymin)*i/23
    ind = ind + 1
    x0[ind,0] = 0.05
    x0[ind,1] = 0.05
    
    for thetloop in range(0,10):
        ind = ind + 1
        theta = 2*np.pi*(thetloop)/10
        ys = 0.125*np.sin(theta)
        xs = 0.125*np.cos(theta)
        x0[ind,0] = xs
        x0[ind,1] = ys

    for thetloop in range(0,10):
        ind = ind + 1
        theta = 2*np.pi*(thetloop)/10
        ys = 1.7*np.sin(theta)
        xs = 1.7*np.cos(theta)
        x0[ind,0] = xs
        x0[ind,1] = ys

    for thetloop in range(0,20):
        ind = ind + 1
        theta = 2*np.pi*(thetloop)/20
        ys = 2*np.sin(theta)
        xs = 2*np.cos(theta)
        x0[ind,0] = xs
        x0[ind,1] = ys
        
    ind = ind + 1
    x0[ind,0] = -3
    x0[ind,1] = 0.05
    ind = ind + 1
    x0[ind,0] = -3
    x0[ind,1] = -0.05
    ind = ind + 1
    x0[ind,0] = 3
    x0[ind,1] = 0.05
    ind = ind + 1
    x0[ind,0] = 3
    x0[ind,1] = -0.05
    ind = ind + 1
    x0[ind,0] = -6
    x0[ind,1] = 0.00
    ind = ind + 1
    x0[ind,0] = 6
    x0[ind,1] = 0.00
           
    colors = plt.cm.prism(np.linspace(0, 1, N))
                        
    # Solve for the trajectories
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = np.asarray([integrate.odeint(flow_deriv, x0i, t, param)
                      for x0i in x0])

    for i in range(N):
        x, y = x_t[i,:,:].T
        lines = plt.plot(x, y, '-', c=colors[i])
        plt.setp(lines, linewidth=1)
       
    plt.show()
    plt.title(model_title)
        
    return t, x_t

param = (0.02,0.5,0.2)        # Steins Gate
lim = (-6,6,-6,6)

t, x_t = solve_flow(param,lim)

plt.savefig('Steins Gate')

The Lorenz Butterfly

Two-dimensional phase space cannot support chaos, and we would like to reconnect the central theme of Stein’s Gate, the Divergence Meter, with the Butterfly Effect.  Therefore, let’s actually incorporate our basin of attraction inside the classic Lorenz Butterfly.  The goal is to put an attracting domain into the midst of the three-dimensional state space of the Lorenz butterfly in a way that repels the butterfly, without destroying it, but attracts local trajectories.  The question is whether the butterfly can survive if part of its state space is made unavailable to it.

The classic Lorenz dynamical system is

$$\dot{x} = p\,(y - x), \qquad \dot{y} = r x - y - x z, \qquad \dot{z} = x y - b z$$

As in the 2D case, we will put in a repelling barrier that prevents external trajectories from moving into the local basin, and we will isolate the external dynamics by using the sigmoid function.  The final flow equations look like (in compact vector form, with the same Fermi function F and internal oscillator f_int as in the 2D case)

$$\dot{\mathbf{x}} = \left[1 - F(\tilde r)\right]\left[\mathbf{f}_{\rm Lorenz}(\mathbf{x}) + \frac{A\,(\mathbf{x} - \mathbf{x}_0)}{(\tilde r - r_0)^2}\right] + F(\tilde r)\,\mathbf{f}_{\rm int}(\mathbf{x})$$

where the radius is relative to the center of the attracting basin

$$\tilde{r} = \sqrt{(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2}$$

and r0 is the radius of the basin.  The center of the basin is at [x0, y0, z0] and we are assuming that x0 = 0 and y0 = 0 and z0 = 25 for the standard Butterfly parameters p = 10, r = 25 and b = 8/3. This puts our basin of attraction a little on the high side of the center of the Butterfly. If we embed it too far inside the Butterfly it does actually destroy the Butterfly dynamics.
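A hedged sketch of this modified Lorenz flow, patterned on SteinsGate2D.py above; the barrier amplitude A, the Fermi width, and the form of the internal oscillator are illustrative assumptions, and the exact program behind Figures 2 and 3 may differ in details:

import numpy as np
from scipy import integrate

def lorenz_basin_deriv(v, t, p=10.0, r=25.0, b=8/3,
                       center=(0.0, 0.0, 25.0), r0=1.5, A=1.0, width=0.1):
    x, y, z = v
    x0, y0, z0 = center
    rad = np.sqrt((x-x0)**2 + (y-y0)**2 + (z-z0)**2)   # radius from basin center

    F = 1/(1 + np.exp((rad - r0)/width))   # ~1 inside the basin, ~0 outside
    barrier = A/(rad - r0)**2              # repulsive wall at the separatrix

    # external dynamics: classic Lorenz flow plus radial repulsion from the basin
    ext = np.array([p*(y - x) + (x-x0)*barrier,
                    r*x - y - x*z + (y-y0)*barrier,
                    x*y - b*z + (z-z0)*barrier])

    # internal dynamics: autonomous limit-cycle oscillator about the basin center
    w = 1.0
    intern = np.array([ w*(y-y0) + w*(x-x0)*(1 - rad),
                       -w*(x-x0) + w*(y-y0)*(1 - rad),
                       -(z-z0)])

    return (1 - F)*ext + F*intern

t = np.linspace(0, 50, 12500)
traj = integrate.odeint(lorenz_basin_deriv, [0.0, 1.0, 20.0], t)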

When r0 = 0, the dynamics of the Lorenz Butterfly are essentially unchanged.  However, when r0 = 1.5, there is a repulsive effect on trajectories that pass close to the basin. This can be seen in Figure 2, where part of the trajectory skips around the outside of the basin.

Figure 2. The Lorenz Butterfly with part of the trajectory avoiding the basin that is located a bit above the center of the Butterfly.

Trajectories can begin very close to the basin, but still on the outside of the separatrix, as in the top row of Figure 3, where the basin of attraction with r0 = 1.5 lies a bit above the center of the Butterfly. The Butterfly still exists for the external dynamics. However, any trajectory that starts within the basin of attraction remains there and executes a stable limit cycle. This is the world where Mayuri dies inside the 4% divergence. But if the initial condition can exceed 4%, then the Butterfly Effect takes over. The bottom row of Figure 3 shows that the Butterfly itself is fragile. When the external dynamics are perturbed more strongly by centering the local basin more closely on the attractor, the hyperbolic dynamics of the Butterfly are impeded and the external dynamics are converted to a stable limit cycle. It is interesting that the Butterfly, so often used as an illustration of sensitivity to initial conditions (SIC), is itself sensitive to perturbations that can convert it away from chaos and back to regular motion.

Figure 3. (Top row) A basin of attraction is embedded a little above the Butterfly. The Butterfly still exists for external trajectories, but any trajectory that starts inside the basin of attraction remains inside the basin. (Bottom row) The basin of attraction is closer to the center of the Butterfly and disrupts the hyperbolic point and converts the Butterfly into a stable limit cycle.

Discussion and Extensions

In the examples shown here, the local basin of attraction was put in “by hand” as an isolated region inside the dynamics. It would be interesting to consider more natural systems, like a spin glass or a Hopfield network, where the basins of attraction occur naturally from the physical principles of the system. Then we could use the “Divergence Meter” to explore these physical systems to see how far the dynamics can diverge before crossing a separatrix. These systems are impossible to visualize because they are intrinsically very high dimensional systems, but Monte Carlo approaches could be used to probe the “sizes” of the basins.

Another interesting extension would be to embed these complex dynamics into spacetime. Since this all started with the idea of texting through time, it would be interesting (and challenging) to see how we could describe this process in a high dimensional Minkowski space that had many space dimensions (but still only one time dimension). Certainly it would violate the speed of light criterion, but we could then take the approach of David Deutsch and view the time axis as if it had multiple branches, like the branches of the arctangent function, creating time-consistent sheets within a sheaf of flat Minkowski spaces.

References

[1] D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd edition (Oxford University Press, 2019)

[2] E. N. Lorenz, The essence of chaos. (University of Washington Press, 1993)

[3] E. N. Lorenz, "Deterministic Nonperiodic Flow," Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130-141 (1963)

Biased Double-well Potential: Bistability, Bifurcation and Hysteresis

Bistability, bifurcation and hysteresis are ubiquitous phenomena that arise from nonlinear dynamics and have considerable importance for technology applications.  For instance, the hysteresis associated with the flipping of magnetic domains under magnetic fields is the central mechanism for magnetic memory, and bistability is a key feature of switching technology.

… one of the most commonly encountered bifurcations is called a saddle-node bifurcation, which is the bifurcation that occurs in the biased double-well potential.

One of the simplest models for bistability and hysteresis is the one-dimensional double-well potential biased by a changing linear potential.  An example of a double-well potential with a bias is

$$U(x) = \frac{1}{4}x^4 - x^2 - c\,x$$

where the parameter c is a control parameter (bias) that can be adjusted or that changes slowly in time c(t).  This dynamical system is also known as the Duffing oscillator. The net double-well potentials for several values of the control parameter c are shown in Fig. 1.   With no bias, there are two degenerate energy minima.  As c is made negative, the left well has the lowest energy, and as c is made positive the right well has the lowest energy.

The dynamics of this potential energy profile can be understood by imagining a small ball that responds to the local forces exerted by the potential.  For large negative values of c the ball will have its minimum energy in the left well.  As c is increased, the energy of the left well increases, and rises above the energy of the right well.  If the ball began in the left well, even when the left well has a higher energy than the right, there is a potential barrier that the ball cannot overcome and it remains on the left.  This local minimum is a stable equilibrium, but it is called “metastable” because it is not a global minimum of the system.  Metastability is the origin of hysteresis.

Fig. 1 A biased double-well potential in one dimension. The thresholds to destroy the local metastable minima are c ≈ ±1.09. For values beyond threshold, only a single minimum exists with no barrier. Hysteresis is caused by the mass being stuck in the metastable (upper) minimum because it has insufficient energy to overcome the potential barrier, until the barrier disappears at threshold and the ball rolls all the way down to the bottom to the new location. When the bias is slowly reversed, the new location becomes metastable, until the ball can overcome the barrier and roll down to its original minimum, and so on.

Once sufficient bias is applied that the local minimum disappears, the ball will roll downhill to the new minimum on the right, and in the presence of dissipation, it will come to rest in the new minimum.  The bias can then be slowly lowered, reversing this process. Because of the potential barrier, the bias must change sign and be strong enough to remove the stability of the now metastable fixed point with the ball on the right, allowing the ball to roll back down to its original location on the left.  This "overshoot" defines the extent of the hysteresis. The fact that there are two minima, and that one is metastable with a barrier between the two, produces "bistability", meaning that there are two stable fixed points for the same control parameter.

For illustration, assume a mass obeys the flow equation

$$\dot{x} = y, \qquad \dot{y} = -\gamma\,y - x^3 + 2x + c(t)$$
including a damping term, where the force is the negative gradient of the potential energy.  The bias parameter c can be time dependent, beginning beyond the negative threshold and slowly increasing until it exceeds the positive threshold, and then reversing and decreasing again.  The position of the mass is locally a damped oscillator until a threshold is passed, and then the mass falls into the global minimum, as shown in Fig. 2. As the bias is reversed, it remains in the metastable minimum on the right until the control parameter passes threshold, and then the mass drops into the left minimum that is now a global minimum.

Fig. 2 Hysteresis diagram. The mass begins in the left well. As the parameter c increases, the mass remains in the well, even though it is no longer the global minimum when c becomes positive. When c passes the positive threshold (around 1.05 for this example), the mass falls into the right well, with damped oscillation. Then the control parameter c is decreased slowly until the negative threshold is passed, and the mass switches to the left well with damped oscillations. The difference between the "switch up" and "switch down" values of the control parameter represents the "hysteresis" of this system.

The sudden switching of the biased double-well potential represents what is known as a “bifurcation”. A bifurcation is a sudden change in the qualitative behavior of a system caused by a small change in a control variable. Usually, a bifurcation occurs when the number of attractors of a system changes. There is a fairly large menagerie of different types of bifurcations, but one of the most commonly encountered bifurcations is called a saddle-node bifurcation, which is the bifurcation that occurs in the biased double-well potential. In fact, there are two saddle-node bifurcations.

Bifurcations are easily portrayed by creating a joint space between phase space and the one (or more) control parameters that induce the bifurcation. The phase space of the double well is two dimensional (position, velocity) with three fixed points, but the change in the number of fixed points can be captured by taking a projection of the phase space onto a lower-dimensional manifold. In this case, the projection is simply along the x-axis. Therefore a "co-dimensional phase space" can be constructed with the x-axis as one dimension and the control parameter as the other. This is illustrated in Fig. 3. The cubic curve traces out the solutions to the fixed-point equation

$$-x^3 + 2x + c = 0$$

For a given value of the control parameter c there are either three solutions or one solution. The values of c where the number of solutions changes discontinuously are the bifurcation points ±c*. Two examples of the potential function are shown on the right for c = +1 and c = −0.5, showing the locations of the three fixed points.

Fig. 3 The co-dimension phase space combines the one-dimensional dynamics along the position x with the control parameter. For a given value of c, there are three or one solution for the fixed point. When there are three solutions, two are stable (the double minima) and one is unstable (the saddle). As the magnitude of the bias increases, one stable node annihilates with the unstable node (a minimum and the saddle merge) and the dynamics “switch” to the other minimum.

The threshold value in this example is c* ≈ 1.09. When |c| < c* the two stable fixed points are the two minima of the double-well potential, and the unstable fixed point is the saddle between them. When |c| > c*, the single stable fixed point is the single minimum of the potential function. The saddle-node bifurcation takes its name from the fact (illustrated here) that the unstable fixed point is a saddle, and at the bifurcation the saddle point annihilates with one of the stable fixed points.
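The threshold follows from the saddle-node condition that the force and its slope vanish simultaneously, a worked step consistent with the fixed-point cubic above:

$$-x^3 + 2x + c = 0, \qquad -3x^2 + 2 = 0 \quad\Longrightarrow\quad x^* = \mp\sqrt{\tfrac{2}{3}}\,, \qquad c^* = \pm\frac{4}{3}\sqrt{\frac{2}{3}} \approx \pm 1.09$$

The slightly earlier switching (near ±1.05) seen in the dynamic sweep of Fig. 2 is plausibly due to the residual oscillation of the mass, which can carry it over the barrier just before the barrier vanishes.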

The following Python code illustrates the behavior of a biased double-well potential, with damping, in which the control parameter changes slowly with a sinusoidal time dependence.

Python Code: DWH.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
DWH.py
Created on Wed Apr 17 15:53:42 2019
@author: nolte
D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time, 2nd ed. (Oxford,2019)
"""

import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')
T = 400
Amp = 3.5

def solve_flow(y0,c0,lim = [-3,3,-3,3]):

    def flow_deriv(x_y, t, T):
        """Damped double-well flow with slowly swept bias c(t) = Amp*sin(2*pi*t/T)."""
        x, y = x_y

        return [y, -0.5*y - x**3 + 2*x + x*(2*np.pi/T)*Amp*np.cos(2*np.pi*t/T) + Amp*np.sin(2*np.pi*t/T)]

    tsettle = np.linspace(0,T,101)   
    yinit = y0;
    x_tsettle = integrate.odeint(flow_deriv,yinit,tsettle,args=(T,))
    y0 = x_tsettle[100,:]
    
    t = np.linspace(0, 1.5*T, 2001)
    x_t = integrate.odeint(flow_deriv, y0, t, args=(T,))
    c  = Amp*np.sin(2*np.pi*t/T)
        
    return t, x_t, c

eps = 0.0001

# Preallocate the arrays that hold the three roots of the fixed-point cubic
xc = np.zeros(100)
X = np.zeros(100)
Y = np.zeros(100)
Z = np.zeros(100)

for loop in range(0,100):
    c = -1.2 + 2.4*loop/100 + eps
    xc[loop] = c
    
    coeff = [-1, 0, 2, c]
    y = np.roots(coeff)
    
    xtmp = np.real(y[0])
    ytmp = np.real(y[1])
    
    X[loop] = np.min([xtmp,ytmp])
    Y[loop] = np.max([xtmp,ytmp])
    Z[loop]= np.real(y[2])


plt.figure(1)
lines = plt.plot(xc,X,xc,Y,xc,Z)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.title('Roots')

y0 = [1.9, 0]
c0 = -2.

t, x_t, c = solve_flow(y0,c0)
y1 = x_t[:,0]
y2 = x_t[:,1]

plt.figure(2)
lines = plt.plot(t,y1)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.ylabel('X Position')
plt.xlabel('Time')

plt.figure(3)
lines = plt.plot(c,y1)
plt.setp(lines, linewidth=0.5)
plt.show()
plt.ylabel('X Position')
plt.xlabel('Control Parameter')
plt.title('Hysteresis Figure')

Further Reading:

D. D. Nolte, Introduction to Modern Dynamics: Chaos, Networks, Space and Time (2nd Edition) (Oxford University Press, 2019)

The Pendulum Lab