Twenty Years at Light Speed: Fiber Optics and the Future of the Photonic Internet

Twenty years ago this November, my book Mind at Light Speed: A New Kind of Intelligence was published by The Free Press (Simon & Schuster, 2001).  The book described the state of optical science at the turn of the Millennium through three generations of Machines of Light:  The Optoelectronic Generation of electronic control meshed with photonic communication; The All-Optical Generation of optical logic; and The Quantum Optical Generation of quantum communication and computing.

To mark the occasion, this blog post begins a three-part series that updates the state of the art of optical technology, looking at the advances in optical science and technology over the 20 years since Mind at Light Speed was published.  This first blog reviews fiber optics and the photonic internet.  The second blog reviews all-optical communication and computing.  The third and final blog reviews the current state of photonic quantum communication and computing.

The Wabash Yacht Club

During late 1999 and early 2000, while I was writing Mind at Light Speed, my wife Laura and I would often have lunch at the ironically-named Wabash Yacht Club.  Not only was it not a yacht club, but it was a dark and dingy college-town bar located in a drab '70s-era plaza in West Lafayette, Indiana, far from any navigable body of water.  But it had a great garlic burger and we loved the atmosphere.

The Wabash River. No yachts. (https://www.riverlorian.com/wabash-river)

One of the TV monitors in the bar was always tuned to a station that covered stock news, and almost every day we would watch the NASDAQ rise 100 points just over lunch.  This was the time of the great dot-com stock-market bubble—one of the greatest speculative bubbles in the history of world economics.  In the second quarter of 2000, total US venture capital investments exceeded $30B as everyone chased the revolution in consumer market economics.

Fiber optics will remain the core technology of the internet for the foreseeable future.

Part of that dot-com bubble was a massive bubble in optical technology companies, because everyone knew that the dot-com era would ride on the back of fiber-optic telecommunications.  Fiber optics at that time had already revolutionized transatlantic telecommunications, and there seemed to be no obstacle to it doing the same on land, with fiber optics to every home bringing every dot-com product and every movie ever made to every house.  What would make this possible was the tremendous information bandwidth that can be crammed into tiny glass fibers in the form of photon packets traveling at the speed of light.

Doing optics research at that time was a heady experience.  My research on real-time optical holography was only on the fringe of optical communications, but at the CLEO conference on lasers and electro-optics, I was invited by tiny optics companies to giant parties, like a fully-catered sunset cruise on a schooner sailing Baltimore’s inner harbor.  Venture capital scouts took me to dinner in San Francisco with an eye to scoop up whatever patents I could dream of.  And this was just the side show.  At the flagship fiber-optics conference, the Optical Fiber Communication Conference (OFC) of the OSA, things were even crazier.  One tiny company that made a simple optical switch went almost overnight from being worth a couple of million dollars to being bought out by Nortel (the giant Canadian telecommunications conglomerate of the day) for over 4 billion dollars.

The Telecom Bubble and Bust

On the other side from the small mom-and-pop optics companies were the giants like Corning (who made the glass for the optical fiber) and Nortel.  At the height of the telecom bubble in September 2000, Nortel had a capitalization of almost $400B Canadian dollars due to massive speculation about the markets around fiber-optic networks.

One of the central questions of the optics bubble of Y2K was what the new internet market would look like.  Back then, fiber was only beginning to reach the distribution nodes that branched off the main cross-country trunk lines.  Cable TV dominated the market with fixed programming where you had to watch whatever they transmitted whenever they transmitted it.  Google was only 2 years old, and YouTube didn’t even exist then—it was founded in 2005.  Everyone still shopped at malls, while Amazon had only gone public three years before.

There were fortune tellers who predicted that fiber-to-the-home would tap a vast market of online commerce where you could buy anything you wanted and have it delivered to your door.  They foretold of movies-on-demand, where anyone could stream any movie they wanted at any time.  They also foretold of phone calls and video chats that never went over the phone lines ruled by the telephone monopolies.  The bandwidth, the data rates, that these markets would drive were astronomical.  The only technology at that time that could support such high data rates was fiber optics.

At first, these fortune tellers drove an irrational exuberance.  But as the stocks inflated, there were doomsayers who pointed out that the costs at that time of bringing fiber into homes were prohibitive.  And the idea that people would be willing to pay for movies-on-demand seemed laughable.  The cost of the equipment and the installation just didn’t match what then seemed to be a sparse market demand.  Furthermore, the fiber technology in the year 2000 couldn’t even reach the kind of data rates that could support these dreams.

In March of 2000 the NASDAQ hit a high of 5000, and then the bottom fell out.

By November 2001 the NASDAQ had fallen to 1500.  One of the worst cases of the telecom bust was Nortel whose capitalization plummeted from $400B at its high to $5B Canadian by August 2002.  Other optics companies fared little better.

The main questions, as we look back from 20 years in the future, are: What in real life motivated the optics bubble of 2000?  And how far has optical technology come since then?  The surprising answer is that the promise of optics in 2000 was not wrong—the time scale was just off.

Fiber to the Home

Today, fixed last-mile broadband service is an assumed part of life in metro areas in the US.  This broadband takes on three forms: legacy coaxial cable, 4G wireless (soon to be upgraded to 5G), and fiber optics.  There are arguments pro and con for each of these technologies, especially moving forward 10 or 20 years or more, and a lot is at stake.  The global broadband market generated $108 billion in revenue in 2020 and is expected to reach $200 billion by 2027, growing at over 9% per year from 2021 to 2027.


To sort through the pros and cons to pick the winning technology, several key performance parameters must be understood for each technology.  The two most important performance measures are bandwidth and latency.  Bandwidth is the data rate—how many bits per second can you get to the home.  Latency is a little more subtle.  It is the time it takes to complete a transmission.  This time includes the actual time for information to travel from a transmitter to a receiver, but that is rarely the major contributor.  Currently, almost all of the latency is caused by the logical operations needed to move the information onto and off of the home data links.
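As a quick back-of-the-envelope sketch (assuming light travels at roughly c/1.5 in glass), the pure propagation part of the latency can be estimated in a few lines of Python:

# Illustrative estimate of the propagation contribution to fiber latency
c = 3.0e8          # speed of light in vacuum (m/s)
n = 1.5            # approximate refractive index of silica fiber (assumed)
distance = 20e3    # link length: 20 km (m)
delay_ms = distance*n/c*1e3
print(f"Propagation delay over 20 km: {delay_ms:.2f} ms")   # about 0.1 ms

Twenty kilometers of glass contributes only about 0.1 msec of delay, which is why the millisecond-scale latencies of real links are dominated by the electronics and protocol processing at the two ends.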

Coax (short for coaxial cable) is attractive because so much of the last-mile legacy hardware is based on the old cable services.  But coax cable has very limited bandwidth and high latency. As a broadband technology, it is slowly disappearing.

Wireless is attractive because the information is transmitted in the open air without any need for physical wires or fibers.  But high data rates require high frequency.  For instance, 4G wireless operates at frequencies between 700 MHz and 2.6 GHz.  Current WiFi is 2.4 GHz or 5 GHz, next-generation 5G will add 26 GHz using millimeter-wave technology, and WiGig is even more extreme at 60 GHz.  While WiGig will deliver up to 10 Gbits per second, as everyone with wireless routers in their homes knows, the higher the frequency, the more it is blocked by walls or other obstacles.  Even 5 GHz is strongly attenuated by walls, and the attenuation gets worse as the frequency gets higher.  Testing of 5G networks has shown that cell towers need to be closely spaced to allow seamless coverage.  And the crazy high frequency of WiGig all but guarantees that it will only be usable for line-of-sight communication within a home or in an enterprise setting.

Fiber for the last mile, on the other hand, has multiple advantages.  Chief among these is that fiber is passive.  It is a light pipe that has ten thousand times more usable bandwidth than a coaxial cable.  For instance, lab tests have pushed up to 100 Tbit/sec over kilometers of fiber.  To access that bandwidth, the input and output hardware can be continually upgraded, while the installed fiber is there to handle pretty much any amount of increasing data rates for the next 10 or 20 years.  Fiber installed today is supporting 1 Gbit/sec data rates, and the existing protocol will work up to 10 Gbit/sec—data rates that can only be hoped for with WiFi.  Furthermore, optical communications on fiber have latencies of around 1.5 msec over 20 kilometers compared with 4G LTE that has a latency of 8 msec over 1 mile.  The much lower latency is key to support activities that cannot stand much delay, such as voice over IP, video chat, remote-controlled robots, and virtual reality (e.g., gaming).  On top of all of that, the internet technology up to the last mile is already almost all optical.  So fiber just extends the current architecture across the last mile.

Therefore, fixed fiber last-mile broadband service is a technology winner.  Though the installation costs can be higher than for WiFi or coax in the short run, the long-run costs are lower when amortized over the lifetime of the installed fiber, which can exceed 25 years.

It is becoming routine to have fiber-to-the-curb (FTTC) where a connection box converts photons in fibers into electrons on copper to take the information into the home.  But a market also exists in urban settings for fiber-to-the-home (FTTH) where the fiber goes directly into the house to a receiver and only then would the information be converted from photons to electrons and electronics.

Shortly after Mind at Light Speed was published in 2001, I was called up by a reporter for the Seattle Times who wanted to know my thoughts about FTTH.  When I extolled its virtue, he nearly hung up on me.  He was in the middle of debunking the telecom bubble and his premise was that FTTH was a fraud.  In 2001 he might have been right.  But in 2021, FTTH is here, it is expanding, and it will continue to do so for at least another quarter century.  Fiber to the home will become the legacy that some future disruptive technology will need to displace.

Fig. 1 Optical data rates on optical links, trunk lines and submarine cables over the past 30 years and projecting into the future. Redrawn from Refs. [1, 2]

Trunk-Line Fiber Optics

Despite the rosy picture for Fiber to the Home, a storm is brewing for the optical trunk lines.  The total traffic on the internet topped a billion terabytes in 2019 and is growing fast, doubling about every 2 years on an exponential growth curve.  In 20 years, that becomes another factor of a thousand—a thousand times more traffic in 2040 than today.  Therefore, the technology companies that manage and supply the internet worry about a fast-approaching capacity crunch, when there will be more demand than the internet can supply.

Over the past 20 years, the data rates on the fiber trunk lines—the major communication links that span the United States—matched demand by packing more bits in more ways into the fibers.  Up to 2009, increased data rates were achieved using dispersion-managed wavelength-division multiplexing (WDM) which means that they kept adding more lasers of slightly different colors to send the optical bits down the fiber.  For instance, in 2009 the commercial standard was 80 colors each running at 40 Gbit/sec for a total of 3.2 Tbit/sec down a single fiber. 

Since 2009, increased bandwidth has been achieved through coherent WDM, where not only the amplitude of light but also the phase of the light is used to encode bits of information using interferometry.  We are still in the coherent WDM era as improved signal processing is helping to fill the potential coherent bandwidth of a fiber.  Commercial protocols using phase-shift keying, quadrature phase-shift keying, and 16-quadrature amplitude modulation currently support 50 Gbit/sec, 100 Gbit/sec and 200 Gbit/sec, respectively.  But the capacity remaining is shrinking, and several years from now, a new era will need to begin in order to keep up with demand.  But if fibers are already using time, color, polarization and phase to carry information, what is left? 

The answer is space!

Coming soon will be commercial fiber trunk lines that use space-division multiplexing (SDM).  The simplest form is already happening now as fiber bundles are replacing single-mode fibers.  If you double the number of fibers in a cable, then you double the data rate of the cable.  But the problem with this simple approach is the scaling.  If you double just 10 times, then you need 1024 fibers in a single cable—each fiber needing its own hardware to launch the data and retrieve it at the other end.  This is linear scaling, which is bad scaling for commercial endeavors. 

Fig. 2 Fiber structures for space-division multiplexing (SDM). Fiber bundles are cables of individual single-mode fibers. Multi-element fibers (MEF) are single-mode fibers formed together inside the coating. Multi-core fibers (MCF) have multiple cores within the cladding. Few-mode fibers (FMF) are multi-mode fibers with small mode numbers. Coupled core (CC) fibers are multi-core fibers in which the cores are close enough that the light waves are coupled into coupled spatial modes. Redrawn from Ref. [3]

Therefore, alternatives for tapping into SDM are being explored in lab demonstrations now that have sublinear scaling (costs don’t rise as fast as improved capacity).  These include multi-element fibers where multiple fiber optical elements are manufactured as a group rather than individually and then combined into a cable.  There are also multi-core fibers, where multiple fibers share the same cladding.  These approaches provide multiple fibers for multiple channels without a proportional rise in cost.

More exciting are approaches that use few-mode fibers (FMF) to support multiple spatial modes traveling simultaneously down the same fiber.  In the same vein are coupled-core fibers, which are a middle ground between multi-core fibers and few-mode fibers in that individual cores can interact within the cladding to support coupled spatial modes that can encode separate spatial channels.  Finally, combinations of approaches can use multiple formats.  For instance, a recent experiment combined FMF and MCF using 19 cores each supporting 6 spatial modes for a total of 114 spatial channels.

However, space-division multiplexing has been under development for several years now, yet it has not fully moved into commercial systems. This may be a sign that the doubling rate of bandwidth is starting to slow down, just as Moore’s Law slowed down for electronic chips.  But there were doomsayers foretelling the end of Moore’s Law for decades before it actually slowed down, because new ideas cannot be predicted. And even if the full capacity of fiber is being approached, there is certainly nothing that will replace fiber with any better bandwidth.  So fiber optics will remain the core technology of the internet for the foreseeable future.

But what of the other generations of Machines of Light: the all-optical and the quantum-optical generations?  How have optics and photonics fared in those fields?  Stay tuned for my next blogs to find out.

Bibliography

[1] P. J. Winzer, D. T. Neilson, and A. R. Chraplyvy, “Fiber-optic transmission and networking: the previous 20 and the next 20 years,” Optics Express, vol. 26, no. 18, pp. 24190-24239, Sep (2018) [Link]

[2] W. Shi, Y. Tian, and A. Gervais, “Scaling capacity of fiber-optic transmission systems via silicon photonics,” Nanophotonics, vol. 9, no. 16, pp. 4629-4663, Nov (2020)

[3] E. Agrell, M. Karlsson, A. R. Chraplyvy, D. J. Richardson, P. M. Krummrich, P. Winzer, K. Roberts, J. K. Fischer, S. J. Savory, B. J. Eggleton, M. Secondini, F. R. Kschischang, A. Lord, J. Prat, I. Tomkos, J. E. Bowers, S. Srinivasan, M. Brandt-Pearce, and N. Gisin, “Roadmap of optical communications,” Journal of Optics, vol. 18, no. 6, p. 063002, 2016/05/04 (2016) [Link]

Of Solar Flares, Cosmic Ray Physics and American Vikings

Exactly a thousand years ago this year an American Viking living in the Norse outpost on Straumfjord, on the northern tip of Newfoundland, took a metal axe and cut a tree.  The trimmed parts of the tree were cast away and, almost a thousand years later, were found by archeologists and stored for later study. What that study found was an exact date of the felling of the tree, in AD 1021.

How can that date be known to such precision?  The answer comes from a confluence of modern science: solar flares, cosmic ray physics, archeology, recent advances in dendrochronology, and the historiography of Icelandic sagas. The new findings were reported in the Oct. 20, 2021 issue of Nature.

American Vikings

Snorri Thorfinnsson was the first American Viking born in the Western Hemisphere.  He was born in Newfoundland sometime around AD 1007, the son of Thorfinn Karlsefni and his wife Gudrid Thorbjarnardottir, who were exploring the wooded coasts of Labrador and Newfoundland for timber to bring back to the Norse settlements in Greenland which had no wood for building.  Thorfinn and Gudrid traveled in a small fleet of Norse trading vessels known as knarrs.   

Knarrs were not the sleek long boats of Viking raiders, but were instead the semi-trailer trucks of the Viking expansion between AD 793 and 1066.  A knarr was an open planked boat about 50 feet long and 15 feet wide with a single mast and square-rigged sail.  It had a keel and could be rigged with a sprit to run close-hauled to the wind.  Its cargo was typically walrus ivory, wood, wool, wheat and furs with enough mid-ship room for a few livestock.

By using the technique of latitude sailing, that is by sailing to a known latitude and then keeping the North Star at a fixed angle above the horizon, knarrs could traverse the North Atlantic in a matter of weeks, sailing back and forth between Norway and Iceland and Greenland.  The trip from Greenland’s eastern settlement to Norway was 3000 km and took about 4 weeks (compare that to the two months it took the Mayflower to cross the Atlantic 600 years later).  Storms and bad weather put a crimp in this type of latitude sailing when the North Star could be obscured for days or weeks, and the sailors could end up somewhere they didn’t expect.  This is what happened to the merchant Bjarni Herjólfsson circa 985 when his ships were blown west in a terrible storm and he came upon a land of white beaches and green forests stretching to the horizon.  To get home, he sailed north along the newly-discovered coast to the known latitude of Greenland and then headed east until he hit land.

Map of the Norse voyages. Yellow: 3000 km between Greenland and Norway (about 4 weeks by knarr) was a “routine” voyage. Red: 3000 km between Greenland and the Norse outpost at Straumfjord in Newfoundland (about 4 weeks by knarr). Green: 2000 km from the northern tip of Newfoundland to Long Island Sound (about 3 weeks by knarr). Butternut wood remnants discovered at Straumfjord likely came from the southern coast of Maine or the coast of Connecticut.

Bjarni never set foot on the new land, but his tale inspired Leif Eriksson, the son of Erik the Red, to explore the new world.  Leif bought Bjarni’s knarr and with a small fleet sailed up the west coast of Greenland to where Bjarni had landed, then headed due west along the latitude of what is today the Davis Strait.  Leif made landfall on Baffin Island and sailed south down the Labrador coast to Belle Isle in the Gulf of St. Lawrence, which he named Straumfjord, and then across to the northern tip of Newfoundland on the edge of a shallow bay where they could run their ships onto shore.  There, sometime around AD 1000, they built a small settlement of wood houses that they used as a base for wider explorations of the land they called Vinland.  Later expeditions, including Thorfinn and Gudrid, returned to the Straumfjord settlement and expanded it, and it was there that their son Snorri was born.

View of the reconstructed Norse outpost at L’Anse aux Meadows in Newfoundland, Canada, and the Gulf of St. Lawrence (Straumfjord).

The voyage one-way between Newfoundland and Greenland took only 3 to 4 weeks, and each successive group repaired the damage from the ravages of the Newfoundland weather.  One of these repairs happened in the year AD 1021, long after Thorfinn and Gudrid and Snorri had resettled in northern Iceland, where their descendants crafted sagas of their exploits that were passed down by oral tradition through the generations until they were written down around AD 1400 and then almost forgotten…until the archeologist Anne Stine Ingstad and her husband Helge Ingstad found the remains of wood houses in 1960, buried under the turf at a place called L’Anse aux Meadows on Newfoundland’s northern tip.

The Icelandic Saga of Erik the Red, written around 1387-1394 and known as the Flateyjarbók (The Flatey Book).

The outpost at L’Anse aux Meadows was used on and off for decades as a base for the timber and fur trade. In addition to the dwellings, vast numbers of wood chips and discarded tree parts were uncovered, pointing to an active timber operation. Some of the wood is from the butternut tree, which does not grow in Newfoundland nor anywhere along the shores of the Gulf of St. Lawrence. The modern range of the butternut tree within reach of Norse excursions runs from the southern coast of Maine to the coast of Connecticut on Long Island Sound. Given how freely the Norse sailed their knarrs, making routine voyages of several weeks duration, the three-week trip from L’Anse aux Meadows to Long Island Sound seems easy, and there were plenty of bays to slip into for provisions as they went. Although there is no direct evidence for a Norse presence along the northeastern coast of the US, it seems highly likely that they plied these waterways and brought back the butternut timber to L’Anse aux Meadows.

Carbon 14 dating placed the age of the outpost at L’Anse aux Meadows at around AD 1000, consistent with the chronology of the Icelandic Sagas. But with an accuracy of plus or minus several decades it was not possible to know where it fit into the story…except for a lucky accident of solar physics.

Miyake Events and Solar Physics

In 2012, while studying tree rings from two cedar trees in Japan, Fusa Miyake and colleagues at Nagoya University’s Solar-Terrestrial Environment Laboratory made the unexpected discovery that a single tree ring, shared in common between the two specimens, had 20% more Carbon 14 than any of the other rings.  The ratio of Carbon 14 in nature to the usual Carbon 12 is very stable, with a variation of about 2% year to year, mostly due to measurement accuracy.  Therefore, the 20% spike in Carbon 14 was a striking anomaly.  By comparing the known ages of the cedars to the rings, using the techniques of dendrochronology, the date of the ring growth was pinpointed to the years AD 774-775.

A solar flare like this may generate a solar proton event (SPE).

Such a sudden increase in Carbon 14 over only a year’s time could only be caused by a sudden and massive influx of high-energy cosmic rays into the Earth’s upper atmosphere.  Carbon 14 is generated when Nitrogen 14 captures a 10-40 MeV neutron and then emits a proton from the excited nucleus.  The high-energy neutrons are generated as byproducts of even higher-energy processes.  Miyake and the team considered high-energy gamma photons from a local supernova, but that was not consistent with the timing or amount of Carbon 14 that was generated.  They next considered a massive surge of high-energy protons spit out by the sun.  The exact cause of such a solar proton event is still debated, but it is likely to be associated with solar flares that accelerate the protons to high energy.  The high-energy protons can penetrate the Earth’s magnetic field and cause particle cascades in the upper atmosphere.  They called it a Solar Proton Event (SPE), but it has since been renamed a Miyake Event.

Solar proton events may be associated with the Aurora Borealis. In the year of the Miyake event of 774 there were historical reports of unusual atmospheric lights and patterns. The Aurora is caused by electron currents which may be associated with the proton event.
High-energy protons from the sun cause high-altitude cosmic ray cascades that also produce high-energy neutrons. The neutrons are captured by Nitrogen 14, which converts to Carbon 14 with the emission of a proton. Carbon 14 eventually decays back to Nitrogen 14 with a half-life of about 5,700 years.

Miyake Events are extremely rare.  There have been only about 3 or 4 SPEs in the past 10,000 years.  By luck, another Miyake Event occurred in 993, about 8 years after Bjarni Herjólfsson was blown off course and about 7 years before Leif Eriksson began exploring the new world.  The excess Carbon 14 rained down on Earth and was incorporated into the fresh growth of juniper and fir trees growing near the northern Newfoundland shore.  Twenty-seven years later, while repairing Leif Eriksson’s wood houses, a Viking felled the trees with a metal axe.  Chunks of the trees were discarded, with the traces of the metal axe blade as well as the outer bark of the tree intact.

The intact bark on the wood pieces was an essential part of the dating. Simply by counting the number of tree rings from the ring of 993, it was possible to know not only the year the tree was cut down, but even the season. Furthermore, the existence of the marks from the metal axe confirmed that the tree was felled by someone from the settlement because there were no metal tools among the indigenous people.

The Norse timber traders treated the indigenous people terribly from the very first expeditions, with tales of wanton murder recorded proudly in the later sagas. This was ultimately their undoing. Resistance from the local tribes could be fierce, and the Norse could spare few casualties in their small expeditions. Eventually, the Norse were driven off. The wood structures at L’Anse aux Meadows were burned before they sank beneath the turf, and human remains with arrow wounds have been uncovered from the site, hinting at how this bold tale ended.

Physics of the Flipping iPhone and the Fate of the Earth

Find an iPhone, then flip it face upwards (hopefully over a soft cushion or mattress).  What do you see?

An iPhone is a rectangular parallelepiped with three unequal dimensions and hence three unequal principal moments of inertia I1 < I2 < I3.  These axes are: vertical to the face, horizontal through the small dimension, and horizontal through the long dimension. Spin the iPhone around its long axis and it keeps a nice and steady spin.  Spin it around the axis pointing out of the face and again it’s a nice steady spin. But flip it face upwards, and it almost always does a half twist. Why?

The answer is variously known as the Tennis Racket Theorem or the Intermediate Axis Theorem or even the Dzhanibekov Effect. If you don’t have an iPhone or Samsung handy, then watch this NASA video of the effect.

Stability Analysis

The flipping iPhone is a rigid body experiencing force-free motion. The Euler equations are an easy way to approach the somewhat complicated physics. These equations are
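I1 dω1/dt − (I2 − I3) ω2 ω3 = 0
I2 dω2/dt − (I3 − I1) ω3 ω1 = 0
I3 dω3/dt − (I1 − I2) ω1 ω2 = 0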

They all equal zero because there is no torque. First let’s assume the object is rotating mainly around the x1 axis, so that ω2 and ω3 are small and ω1 is nearly constant.  Then solving for the angular accelerations yields
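dω2/dt = ((I3 − I1)/I2) ω1 ω3
dω3/dt = ((I1 − I2)/I3) ω1 ω2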

This is a two-dimensional flow equation in the variables  ω2, ω3.  Hence we can apply classic stability analysis for rotation mainly about the x1 axis. The Jacobian matrix is
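J = [ 0                      ((I3 − I1)/I2) ω1 ]
    [ ((I1 − I2)/I3) ω1      0                 ]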

This matrix has a trace τ = 0 and a determinant Δ given by
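Δ = −((I3 − I1)(I1 − I2)/(I2 I3)) ω1²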

Because of the ordering I1 < I2 < I3 we know that this quantity is positive.

Armed with the trace and the determinant of a two-dimensional flow, we simply need to look at the 2D “stability space” as shown in Fig. 1. The horizontal axis is the determinant of the Jacobian matrix evaluated at the fixed point of the motion, and the vertical axis is the trace. In the case of the flipping iPhone, the Jacobian matrix is independent of both ω2 and ω3 (as long as they remain small), so it has a global stability. When the determinant is positive, the stability depends on the trace. If the trace is positive, all motions are unstable (deviations grow exponentially). If the trace is negative, all motions are stable. The sideways parabola in the figure is known as the discriminant. If solutions are within the discriminant, they are spirals. As the trace approaches the origin, the spirals get slower and slower, until they become simple harmonic motions when the trace goes to zero. This kind of marginal stability is also known as a center. Centers have a steady-state stability without dissipation.

Fig. 1 The stability space for two-dimensional dynamics. The vertical axis is the trace of the Jacobian matrix and the horizontal axis is the determinant. If the determinant is negative, all motions are unstable saddle points. Otherwise, stability depends on the sign of the trace, unless the trace is zero, for which case the motion has steady-state stability like celestial orbits or harmonic oscillators. (Reprinted from Ref. [1])

For the flipping iPhone (or tennis racket or book), the trace is zero and the determinant is positive for rotation mainly about the x1 axis, and the stability is therefore a “center”.  This is why the iPhone spins nicely about its axis with the smallest moment.

Let’s permute the indices to get the motion about the x3 axis with the largest moment. Then
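dω1/dt = ((I2 − I3)/I1) ω2 ω3
dω2/dt = ((I3 − I1)/I2) ω3 ω1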

The trace and determinant are
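τ = 0
Δ = −((I2 − I3)(I3 − I1)/(I1 I2)) ω3²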

where the determinant is again positive and the stability is again a center.

But now let’s permute again so that the motion is mainly about the x2 axis with the intermediate moment.  In this case
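dω3/dt = ((I1 − I2)/I3) ω1 ω2
dω1/dt = ((I2 − I3)/I1) ω2 ω3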

And the trace and determinant are
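τ = 0
Δ = −((I1 − I2)(I2 − I3)/(I1 I3)) ω2²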

The determinant is now negative, and from Fig. 1, this means that the stability is a saddle point. 

Saddle points in 2D have one stable manifold and one unstable manifold.  If the initial condition is just a little off the stability point, then the deviation will grow as the dynamical trajectory moves away from the equilibrium point along the unstable manifold.

The components of the angular frequencies for each of these cases are shown in Fig. 2 for rotation mainly around x1, then x2 and then x3. A small amount of rotation about the other two axes is given as an initial condition for each case. These calculations use the full Euler equations with no approximations, and the motion is fully three-dimensional.

Fig. 2 Angular frequency components for motion with initial conditions of spin mainly about, respectively, the x1, x2 and x3 axes. The x2 case shows strong nonlinearity and slow unstable dynamics that periodically reverse. (I1 = 0.3, I2 = 0.5, I3 = 0.7)

Fate of the Spinning Earth

When two of the axes have very similar moments of inertia, that is, when the object becomes more symmetric, then the unstable dynamics can get very slow. An example is shown in Fig. 3 for I2 just a bit smaller than I3. The high frequency spin remains the same for long times and then quickly reverses. During the time when the spin is nearly stable, the other angular frequencies are close to zero, and the object would have only a slight wobble to it. Yet, in time, the wobble goes from bad to worse, until the whole thing flips over. It’s inevitable for almost any real-world solid…like maybe the Earth.

Fig. 3 Angular frequencies for a slightly asymmetric rigid body. The spin remains the same for long times and then flips suddenly.

The Earth is an oblate spheroid, wider at the equator because of the centrifugal force of the rotation. If it were a perfect spheroid, then the two moments orthogonal to the spin axis would be identically equal. However, the Earth has landmasses, continents, that make the moments of inertia slightly unequal. If the Earth were perfectly rigid, this would have catastrophic consequences: every few million years it would flip over, scrambling the seasons!

But that doesn’t happen. The reason is that the Earth has a mantle and liquid outer core that very slowly dissipate any wobble. The Earth, and virtually every celestial object that has any type of internal friction, always spins about its axis with the highest moment of inertia, which also means the system relaxes to its lowest kinetic energy for conserved L through the simple equation
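T = L²/(2I)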

So we are safe!

Python Code

Here is a simple Python code to explore the intermediate axis theorem. Change the moments of inertia and change the initial conditions. Note that this program does not solve for the actual motions–the configuration-space trajectories. The solution of the Euler equations gives the time evolution of the three components of the angular velocity. Incremental rotations could be applied through rotation matrices operating on the configuration space to yield the configuration-space trajectory of the flipping iPhone (link to the technical details here).

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thurs Oct 7 19:38:57 2021

@author: David Nolte
Introduction to Modern Dynamics, 2nd edition (Oxford University Press, 2019)

FlipPhone Example
"""
import numpy as np
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
from scipy import integrate
from matplotlib import pyplot as plt

plt.close('all')
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
ax.axis('on')

I1 = 0.45   # Moments of inertia can be changed here
I2 = 0.5
I3 = 0.55

def solve_flipphone(max_time=300.0):

# Flip Phone
    def flow_deriv(x_y_z, t0):

        x, y, z = x_y_z
        
        yp1 = ((I2-I3)/I1)*y*z
        yp2 = ((I3-I1)/I2)*z*x
        yp3 = ((I1-I2)/I3)*x*y
        
        return [yp1, yp2, yp3]
    
    model_title = 'Flip Phone'


    # Solve for trajectories
    x0 = [[0.01,1,0.01]]   # Initial Conditions:  Change the major rotation axis here ....
    t = np.linspace(0, max_time, int(250*max_time))
    x_t = np.asarray([integrate.odeint(flow_deriv, x0i, t)
                      for x0i in x0])
     
    x, y, z = x_t[0,:,:].T
    lines = ax.plot(x, y, z, '-')
    plt.setp(lines, linewidth=0.5)

    ax.view_init(30, 30)
    plt.show()
    plt.title(model_title)
    plt.savefig('Flow3D')

    return t, x_t

ax.set_xlim((-1.1, 1.1))
ax.set_ylim((-1.1, 1.1))
ax.set_zlim((-1.1, 1.1))

t, x_t = solve_flipphone()

plt.figure(2)
lines = plt.plot(t,x_t[0,:,0],t,x_t[0,:,1],t,x_t[0,:,2])
plt.setp(lines, linewidth=1)


[1] D. D. Nolte, Introduction to Modern Dynamics, 2nd Edition (Oxford, 2019)

To see more on the Intermediate Axis Theorem, watch this amazing YouTube video.

And here is another description of the Intermediate Axis Theorem.

Spontaneous Symmetry Breaking: A Mechanical Model

Symmetry is the canvas upon which the laws of physics are written. Symmetry defines the invariants of dynamical systems. But when symmetry breaks, the laws of physics break with it, sometimes in dramatic fashion. Take the Big Bang, for example, when a highly-symmetric form of the vacuum, known as the “false vacuum”, suddenly relaxed to a lower symmetry, creating an inflationary cascade of energy that burst forth as our Universe.

The early universe was extremely hot and energetic, so much so that all the forces of nature acted as one–described by a unified Lagrangian (as yet resisting discovery by theoretical physicists) of the highest symmetry. Yet as the universe expanded and cooled, the symmetry of the Lagrangian broke, and the unified forces split into two (gravity and electro-nuclear). As the universe cooled further, the Lagrangian (of the Standard Model) lost more symmetry as the electro-nuclear split into the strong nuclear force and the electroweak force. Finally, at a tiny fraction of a second after the Big Bang, the universe cooled enough that the unified electroweak force broke into the electromagnetic force and the weak nuclear force. At each stage, spontaneous symmetry breaking occurred, and invariants of physics were broken, splitting into new behavior. In 2008, Yoichiro Nambu received the Nobel Prize in physics for his model of spontaneous symmetry breaking in subatomic physics.

Fig. 1 The spontaneous symmetry breaking cascade after the Big Bang. From Ref.

Bifurcation Physics

Physics is filled with examples of spontaneous symmetry breaking. Crystallization and phase transitions are common examples. When the temperature is lowered on a fluid of molecules with high average local symmetry, the molecular interactions can suddenly impose lower-symmetry constraints on relative positions, and the liquid crystallizes into an ordered crystal. Even solid crystals can undergo a phase transition as one symmetry becomes energetically advantageous over another, and the crystal can change to a new symmetry.

In mechanics, any time a potential function evolves slowly with some parameter, it can start with one symmetry and evolve to another lower symmetry. The mechanical system governed by such a potential may undergo a discontinuous change in behavior.

In complex systems and chaos theory, sudden changes in behavior can be quite common as some parameter is changed continuously. These discontinuous changes in behavior, in response to a continuous change in a control parameter, are known as bifurcations. There are many types of bifurcation, carrying descriptive names like the pitchfork bifurcation, period-doubling bifurcation, Hopf bifurcation, and fold bifurcation, among others. The pitchfork bifurcation is a typical example, shown in Fig. 2. As a parameter is changed continuously (horizontal axis), a stable fixed point suddenly becomes unstable and two new stable fixed points emerge at the same time. This type of bifurcation is called pitchfork because the diagram looks like a three-tined pitchfork. (This is technically called a supercritical pitchfork bifurcation. In a subcritical pitchfork bifurcation the solid and dashed lines are swapped.) This is exactly the bifurcation displayed by a simple mechanical model that illustrates spontaneous symmetry breaking.
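In its simplest one-dimensional form, the supercritical pitchfork is captured by the normal form dx/dt = r x − x³, which has a single stable fixed point at x = 0 for r < 0 that splits, for r > 0, into an unstable fixed point at x = 0 and two stable fixed points at x = ±sqrt(r).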

Fig. 2 Bifurcation plot of a pitchfork bifurcation. As a parameter is changed smoothly and continuously (horizontal axis), a stable fixed point suddenly splits into three fixed points: one unstable and the other two stable.

Sliding Mass on a Rotating Hoop

One of the simplest mechanical models that displays spontaneous symmetry breaking and the pitchfork bifurcation is a bead sliding without friction on a circular hoop that is spinning about its vertical axis, as in Fig. 3. When it spins very slowly, this is just a simple pendulum with a stable equilibrium at the bottom, and it oscillates with a natural oscillation frequency ω0 = sqrt(g/b), where b is the radius of the hoop and g is the acceleration due to gravity. On the other hand, when it spins very fast, the bead is flung to one side or the other by centrifugal force. The bead then oscillates around one of the two new stable fixed points, while the fixed point at the bottom of the hoop becomes unstable, because any deviation to one side or the other will cause the centrifugal force to kick in. (Note that the centrifugal force is a fictitious force that arises in the non-inertial rotating frame.)

Fig. 3 A bead sliding without friction on a circular hoop rotating about a vertical axis. At high speed, the bead has a stable equilibrium to either side of the vertical.

The solution uses the Euler equations for the body frame along principal axes. In order to use the standard definitions of ω1, ω2, and ω3, the angle θ MUST be rotated around the x-axis.  This means the x-axis points out of the page in the diagram.  The y-axis is tilted up from horizontal by θ, and the z-axis is tilted from vertical by θ.  This establishes the body frame.

The components of the angular velocity are
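ω1 = dθ/dt,   ω2 = ω sin θ,   ω3 = ω cos θ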

And the moments of inertia are (assuming the bead is small)
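I1 = I2 = m b²,   I3 ≈ 0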

There is only one Euler equation that is non-trivial. This is for the x-axis and the angle θ. The x-axis Euler equation is
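I1 dω1/dt − (I2 − I3) ω2 ω3 = −m g b sin θ   ⟹   m b² d²θ/dt² − m b² ω² sin θ cos θ = −m g b sin θ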

and solving for the angular acceleration gives.
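d²θ/dt² = ω² sin θ cos θ − (g/b) sin θ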

This is a harmonic oscillator with a “phase transition” that occurs as ω increases from zero.  At first the stable equilibrium is at the bottom.  But when ω passes a critical threshold, the equilibrium angle begins to increase to a finite angle set by the rotation speed.
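θ0 = arccos( g/(ω² b) )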

This equilibrium angle can only be real if the magnitude of the argument is less than or equal to unity, which sets the critical threshold spin rate, above which the system moves to a new stable point to one side or the other, at
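ωc = sqrt(g/b)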

which interestingly is the natural frequency of the non-rotating pendulum. Note that there are two equivalent angles (positive and negative), so this problem has a degeneracy. 

This is an example of a dynamical phase transition that leads to spontaneous symmetry breaking and a pitchfork bifurcation. By integrating the angular acceleration we can get the effective potential for the problem. One contribution to the potential is due to gravity. The other is centrifugal force. When combined and plotted in Fig. 4 for a family of values of the spin rate ω, a pitchfork emerges naturally by tracing the minima in the effective potential. The values of the new equilibrium angles are given in Fig. 2.
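Explicitly, combining the gravitational and centrifugal contributions gives Veff(θ) = −m g b cos θ − (1/2) m ω² b² sin²θ.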

Fig. 4 Effective potential as a function of angle for a family of spin rates. At the transition spin rate, the effective potential is essentially flat with zero natural frequency.

Below the transition threshold for ω, the bottom of the hoop is the equilibrium position. To find the natural frequency of oscillation, expand the acceleration expression
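d²θ/dt² ≈ −( g/b − ω² ) θ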

For small oscillations the natural frequency is given by
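Ω0 = sqrt( g/b − ω² ) = sqrt( ω0² − ω² )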

As the effective potential gets flatter, the natural oscillation frequency decreases until it vanishes at the transition spin frequency. As the hoop spins even faster, the new equilibrium positions emerge. To find the natural frequency of the new equilibria, expand θ around the new equilibrium θ’ = θ – θ0
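d²θ'/dt² ≈ −( ω² − ω0⁴/ω² ) θ'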

Which is a harmonic oscillator with oscillation angular frequency
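Ω = sqrt( ω² − ω0⁴/ω² ) = ω sin θ0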

Note that this is zero frequency at the transition threshold, then rises to match the spin rate of the hoop at high frequency. The natural oscillation frequency as a function of the spin looks like Fig. 5.

Fig. 5 Angular oscillation frequency for the bead. The bifurcation occurs at the critical spin rate ω = sqrt(g/b).

This mechanical analog is highly relevant for the spontaneous symmetry breaking that occurs in ferroelectric crystals when they go through a ferroelectric transition. At high temperature, these crystals have no internal polarization. But as the crystal cools towards the ferroelectric transition temperature, the optical phonon modes “soften” as the phonon frequency decreases and vanishes at the transition temperature, when the crystal spontaneously polarizes in one of several equivalent directions. The observation of mode softening in a polar crystal is one signature of an impending ferroelectric phase transition. Our mass on the hoop captures this qualitative physics nicely.

Golden Behavior

For fun, let’s find the spin frequency at which the harmonic oscillation frequency about the dynamic equilibria equals the original natural frequency of the pendulum. Then
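Ω = ω0   ⟹   ω² − ω0⁴/ω² = ω0²   ⟹   (ω/ω0)² = (1 + sqrt(5))/2 ≈ 1.618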

which is the golden ratio.  It’s spooky how often the golden ratio appears in random physics problems!

Surfing on a Black Hole: Accretion Disk Death Spiral

The most energetic physical processes in the universe (shy of the Big Bang itself) are astrophysical jets. These are relativistic beams of ions and radiation that shoot out across intergalactic space, emitting nearly the full spectrum of electromagnetic radiation, seen as quasars (quasi-stellar objects) that are thought to originate from supermassive black holes at the center of distant galaxies. The most powerful jets emit more energy than the light from a thousand Milky Way galaxies.

Where can such astronomical amounts of energy come from?

Black Hole Accretion Disks

The potential wells of black holes are so deep and steep that they attract matter from their entire neighborhood. If a star comes too close, the black hole can rip the hydrogen and helium atoms off the star’s surface and suck them into a death spiral that can only end in oblivion beyond the Schwarzschild radius.

However, just before they disappear, these atoms and ions make one last desperate stand to resist the inevitable pull, and they park themselves near an orbit that is just stable enough that they can survive many orbits before they lose too much energy, through collisions with the other atoms and ions, and resume their in-spiral. This last orbit, called the inner-most stable circular orbit (ISCO), is where matter accumulates into an accretion disk.

Fig. 1 Artist’s rendering of a black hole pulling matter from a near-by star where it accumulates in the accretion disk just outside the black hole Schwarzschild radius. (Credit: Wikipedia)
Fig. 2 The famous first image of the black hole in M87 galaxy made by the Event Horizon Telescope collaboration. The bright ring surrounding the “shadow” is the light emitted from the accretion disk.
Fig. 3 Explanation of the image of the accretion disk around a black hole. (You have to watch the simulations at NASA.)

The Innermost Stable Circular Orbit (ISCO)

At what radius is the inner-most stable circular orbit? To find out, write the energy equation of a particle orbiting a black hole with an effective potential function as
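(1/2) m (dr/dt)² + Veff(r) = E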

where the effective potential is
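Veff(r) = −G M m/r + L²/(2 m r²) − G M L²/(m c² r³)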

The first two terms of the effective potential are the usual Newtonian terms that include the gravitational potential and the repulsive contribution from the angular momentum that normally prevents the mass from approaching the origin.  The third term is the GR term that is attractive and overcomes the centrifugal barrier at small values of r, allowing the orbit to collapse to the center.  This is the essential danger of orbiting a black hole—not all orbits around a black hole are stable, and even circular orbits will decay and be swallowed up if too close to the black hole. 

To find the conditions for circular orbits, take the derivative of the effective potential and set it to zero
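dVeff/dr = G M m/r² − L²/(m r³) + 3 G M L²/(m c² r⁴) = 0   ⟹   G M m² r² − L² r + 3 G M L²/c² = 0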

This is a quadratic equation that can be solved for r. There is an innermost stable circular orbit (ISCO) that is obtained when the term in the square root of the quadratic formula vanishes when the angular momentum satisfies the condition
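L² = 12 G² M² m²/c²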

which gives the simple result for the inner-most circular orbit as
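rISCO = L²/(2 G M m²) = 6 G M/c² = 3 RS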

Therefore, no particle can sustain a circular orbit with a radius closer than three times the Schwarzschild radius.  Inside that, it will spiral into the black hole.

A single trajectory solution to the GR flow [1] is shown in Fig. 4.  The particle begins in an elliptical orbit outside the innermost circular orbit and is captured into a nearly circular orbit inside the ISCO.  This orbit eventually decays and spirals with increasing speed into the black hole.  Accretion disks around black holes occupy these orbits before collisions cause the particles to lose angular momentum and spiral into the black hole.

Fig. 4 Orbital simulation for a particle falling starting in an elliptical orbit near a black hole. In these units, Rs = 0.15 and  ISCO = 0.44.  A particle that begins with an ellipticity settles into a nearly circular orbit near the ISCO, after which it spirals into the black hole. (Reprinted from Introduction to Modern Dynamics)

The gravity of black holes is so great that even photons can orbit black holes in circular orbits. The radius of the circular photon orbit defines what is known as the photon sphere. The radius of the photon sphere is RPS = 1.5RS, which is just a factor of 2 smaller than the ISCO.

Binding Energy of a Particle at the ISCO

So where does all the energy come from to power astrophysical jets? The explanation comes from the binding energy of a particle at the ISCO.  The energy conservation equation including angular momentum for a massive particle of mass m orbiting a black hole of mass M is
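(1/2) m (dr/dt)² + ( L²/(2 m r²) − G M m/r ) − G M L²/(m c² r³) = (1/2) m v∞²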

where the term on the right is the kinetic energy of the particle at infinity, and the second and third terms on the left are the effective potential

Solving for the binding energy at the ISCO gives
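EB = −Veff(rISCO) = m c²/18 ≈ 0.056 m c²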

Therefore, 6% of the rest energy of the object is given up when it spirals into the ISCO.  Remember that the fusion of hydrogen into helium gives up only about 0.7% of the rest mass energy. Therefore, the energy emission per nucleon for an atom falling towards the ISCO is almost TEN times more efficient than nuclear fusion!

This incredible energy resource is where the energy for galactic jets and quasars comes from.


[1] These equations apply for particles that are nonrelativistic.  Special relativity effects become important when the orbital radius of the particle approaches the Schwarzschild radius, which introduces relativistic corrections to these equations.

The Secret Life of Snow: Laser Speckle

If you have ever seen euphemistically-named “snow”—the black and white dancing pixels on television screens in the old days of cathode-ray tubes—you may think it is nothing but noise.  But the surprising thing about noise is that it is a good place to hide information. 

Shine a laser pointer on any rough surface and look at the scattered light on a distant wall, and you will see the same patterns of light and dark, known as laser speckle.  If you move your head or move the pointer, then the speckle shimmers—just like the snow on the old TVs.  This laser speckle—this snow—is providing fundamental new ways to extract information hidden inside three-dimensional translucent objects—objects like biological tissue or priceless paintings or silicon chips.

Snow Crash

The science fiction novel Snow Crash, published in 1992 by Neal Stephenson, is famous for popularizing virtual reality and the role of avatars.  The central mystery of the novel is the mind-destroying mental crash that is induced by Snow—white noise in the metaverse.  The protagonist hero of the story—a hacker with an avatar improbably named Hiro Protagonist—must find the source of snow and thwart the nefarious plot behind it.

Fig. 1 Book cover of Snow Crash

If Hiro’s snow in his VR headset is caused by laser speckle, then the seemingly random pattern is composed of amplitudes and phases that vary spatially and temporally.  There are many ways to make computer-generated versions of speckle.  One of the simplest is to just add together a lot of sinusoidal functions with varying orientations and periodicities.  This is a “Fourier” approach to speckle which views it as a random superposition of two-dimensional spatial frequencies.  An example is shown in Fig. 2 for one sinusoid which has been added to 20 others to generate the speckle pattern on the right.  There is still residual periodicity in the speckle for N = 20, but as N increases, the speckle pattern becomes strictly random, like noise. 

But if the sinusoids that are being added together link their periodicity to their amplitude through some functional relationship, then the final speckle can be analyzed using a 2D Fourier transform to find its spatial frequency spectrum.  The functional form of this spectrum can tell a lot about the underlying processes of the speckle formation.  This is part of the information hidden inside snow.
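As a minimal numerical sketch of this Fourier construction (the grid size, number of sinusoids, and frequency range below are arbitrary illustrative choices, not the values used for the figures), one can sum N random sinusoids and then take the 2D Fourier transform of the result:

import numpy as np
import matplotlib.pyplot as plt

Ngrid = 256                     # size of the image grid
Nwaves = 20                     # number of random sinusoids to superpose
x = np.linspace(-1, 1, Ngrid)
X, Y = np.meshgrid(x, x)
rng = np.random.default_rng()

field = np.zeros((Ngrid, Ngrid))
for _ in range(Nwaves):
    kx, ky = rng.uniform(-20, 20, size=2)     # random orientation and periodicity
    phase = rng.uniform(0, 2*np.pi)           # random phase
    amp = rng.uniform(0.5, 1.0)               # random amplitude
    field += amp*np.cos(2*np.pi*(kx*X + ky*Y) + phase)

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2   # 2D spatial-frequency spectrum

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(field, cmap='gray'); ax1.set_title('Superposed sinusoids')
ax2.imshow(np.log10(spectrum + 1), cmap='gray'); ax2.set_title('2D Fourier spectrum')
plt.show()

Increasing Nwaves washes out the residual periodicity, and linking the amplitudes to the spatial frequencies shapes the spectrum, as described above.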

Fig. 2 Sinusoidal addition to generate random speckle.  a) One example of a spatial periodicity.  b) The superposition of 20 random sinusoids.

(To watch animations of the figures in real time, See YouTube video.)

An alternative viewpoint to generating a laser speckle pattern thinks in terms of spatially-localized patches that add randomly together with random amplitudes and phases.  This is a space-domain view of speckle formation, in contrast to the Fourier-space view of the previous construction.  Sinusoids are “global”, extending spatially without bound, while the underlying spatially-localized functions can be almost any local function.  Gaussians spring to mind, but so do Airy functions, because they are common point-spread functions that participate in the formation of images through lenses.  The example in Fig. 3a shows one such Airy function, and Fig. 3b shows speckle generated from N = 20 Airy functions of varying amplitudes, phases and locations.

Fig. 3  Generating speckle by random superposition of point spread functions (spatially-localized functions) of varying amplitude, phase, position and bandwidth.

These two examples are complementary ways of generating speckle, where the 2D Fourier-domain approach is conjugate to the 2D space-domain approach.

However, laser speckle is actually a 3D phenomenon, and the two-dimensional speckle patterns are just 2D cross sections intersecting a complex 3D pattern of light filaments.  To get a sense of how laser speckle is formed in a physical system, one can solve the propagation of a laser beam through a random optical medium.  In this way you can visualize the physical formation of the regions of brightness and darkness when the fragmented laser beam exits the random material. 

Fig. 4 Propagation of a coherent beam into a random optical medium. Speckle is intrinsically three dimensional while 2D speckle is the cross section of the light filaments.

Coherent Patch

For a quantitative understanding of laser speckle, when 2D laser speckle is formed by an optical system, the central question is: how big are the regions of brightness and darkness?  This is a question of spatial coherence, and one way to define spatial coherence is through the coherence area at the observation plane
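Ac = λ²/Ωs = λ² z²/A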

where A is the source emitting area, z is the distance to the observation plane, and Ωs is the solid angle subtended by the source emitting area as seen from the observation point. This expression assumes that the angular spread of the light scattered from the illumination area is very broad. Larger distances and smaller emitting areas (pinholes in an optical diffuser or focused laser spots on a rough surface) produce larger coherence areas in the speckle pattern. For a Gaussian intensity distribution at the emission plane, the coherence area is

for beam waist w0 at the emission plane.  To put some numbers to these parameters to give an intuitive sense of the size of speckle spots, assume a wavelength of 1 micron, a focused beam waist of 0.1 mm and a viewing distance of 1 meter.  This gives patches with a radius of about 2 millimeters.  Examples of laser speckle are shown in Fig. 5 for a variety of beam waist values w0

Fig. 5 Speckle intensities for Gaussian illumination of a random phase screen for changing illumination radius w0 = 64, 32, 16 and 8 microns for f = 1 cm and W = 500 nm with a field-of-view of 5 mm. (Reproduced from Ref.[1])
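A pattern like those in Fig. 5 can be generated with a minimal numerical sketch (under simple assumptions: a Gaussian beam multiplied by a random phase screen, with the far field computed by a 2D FFT standing in for the lens; the grid spacing is an arbitrary choice):

import numpy as np
import matplotlib.pyplot as plt

Ngrid = 512
dx = 2e-6                                 # grid spacing at the phase screen (m), assumed
x = (np.arange(Ngrid) - Ngrid/2)*dx
X, Y = np.meshgrid(x, x)

w0 = 32e-6                                # illumination beam waist (m)
rng = np.random.default_rng()
screen = np.exp(1j*2*np.pi*rng.random((Ngrid, Ngrid)))   # random phase screen
beam = np.exp(-(X**2 + Y**2)/w0**2)                      # Gaussian illumination

farfield = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(beam*screen)))
speckle = np.abs(farfield)**2             # fully-developed speckle intensity

plt.imshow(speckle, cmap='gray')
plt.title('Speckle from a Gaussian-illuminated random phase screen')
plt.show()

Shrinking w0 makes the bright and dark patches larger, in line with the trend in Fig. 5.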

Speckle Holograms

Associated with any intensity modulation there must be a phase modulation through the Kramers-Kronig relations [2].  Phase cannot be detected directly in the speckle intensity pattern, but it can be measured by using interferometry.  One of the easiest interferometric techniques is holography, in which a coherent plane wave is caused to intersect, at a small angle, a speckle pattern generated from the same laser source.  An example of a speckle hologram and its associated phase is shown in Fig. 6. The fringes of the hologram are formed when a plane reference wave interferes with the speckle field.  The fringes are not parallel because of the varying phase of the speckle field, but the average spatial frequency is still recognizable in Fig. 6a.  The associated phase map is shown in Fig. 6b.

Fig. 6 Speckle hologram and speckle phase.  a) A coherent plane-wave reference added to fully-developed speckle (unity contrast) produces a speckle hologram.  b) The phase of the speckle varies through 2π.

Optical Vortex Physics

In the speckle intensity field, there are locations where the intensity vanishes and the phase becomes undefined.  In the neighborhood of a singular point the phase wraps around it with a 2π phase range.  Because of the wrapping phase, such a singular point is called an optical vortex [3].  Vortices always come in pairs with opposite helicity (defined by the direction of the wrapping phase) with a line of neutral phase between them, as shown in Fig. 7.  The helicity defines the topological charge of the vortex, and vortices can have topological charges larger than ±1 if the phase wraps multiple times.  In dynamic speckle these vortices are also dynamic and move with speeds related to the underlying dynamics of the scattering medium [4].  Vortices can annihilate if they have opposite helicity, and they can be created in pairs.  Studies of singular optics have merged with structured illumination [5] to create an active field of topological optics with applications in biological microscopy as well as materials science.

Fig. 7 Optical vortex patterns. a) Log intensity showing zeros in the intensity field.  The circles identify the intensity nulls, which are the optical vortices.  b) Associated phase with a 2π phase wrapping around each singularity.  c) Associated hologram showing dislocations in the fringes that occur at the vortices.
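Vortices can be located numerically by summing the wrapped phase differences around each elementary plaquette of the phase map; a winding of ±2π marks a vortex of charge ±1.  The following sketch applies this standard bookkeeping to a simulated speckle field (it is not necessarily the method used to produce Fig. 7).

```python
import numpy as np

def wrap(dphi):
    """Wrap phase differences into (-pi, pi]."""
    return (dphi + np.pi) % (2 * np.pi) - np.pi

def vortex_charges(phase):
    """Topological charge on each elementary plaquette of a 2D phase map."""
    dx = wrap(np.diff(phase, axis=1))      # wrapped differences along x
    dy = wrap(np.diff(phase, axis=0))      # wrapped differences along y
    # counter-clockwise circulation around each unit square
    circ = dx[:-1, :] + dy[:, 1:] - dx[1:, :] - dy[:, :-1]
    return np.rint(circ / (2 * np.pi)).astype(int)

# test on a simulated speckle field
rng = np.random.default_rng(2)
N = 256
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
pupil = (FX**2 + FY**2) < 0.05**2
field = np.fft.ifft2(pupil * np.exp(2j * np.pi * rng.random((N, N))))

charges = vortex_charges(np.angle(field))
print("charge +1 vortices:", np.sum(charges == +1))
print("charge -1 vortices:", np.sum(charges == -1))   # nearly equal numbers of each helicity
```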

References

[1] D. D. Nolte, Optical Interferometry for Biology and Medicine. (Springer, 2012)

[2] A. Mecozzi, C. Antonelli, and M. Shtaif, “Kramers-Kronig coherent receiver,” Optica, vol. 3, no. 11, pp. 1220-1227, Nov (2016)

[3] M. R. Dennis, R. P. King, B. Jack, K. O’Holleran, and M. J. Padgett, “Isolated optical vortex knots,” Nature Physics, vol. 6, no. 2, pp. 118-121, Feb (2010)

[4] S. J. Kirkpatrick, K. Khaksari, D. Thomas, and D. D. Duncan, “Optical vortex behavior in dynamic speckle fields,” Journal of Biomedical Optics, vol. 17, no. 5, May (2012), Art no. 050504

[5] H. Rubinsztein-Dunlop, A. Forbes, M. V. Berry, M. R. Dennis, D. L. Andrews, M. Mansuripur, C. Denz, C. Alpmann, P. Banzer, T. Bauer, E. Karimi, L. Marrucci, M. Padgett, M. Ritsch-Marte, N. M. Litchinitser, N. P. Bigelow, C. Rosales-Guzman, A. Belmonte, J. P. Torres, T. W. Neely, M. Baker, R. Gordon, A. B. Stilgoe, J. Romero, A. G. White, R. Fickler, A. E. Willner, G. D. Xie, B. McMorran, and A. M. Weiner, “Roadmap on structured light,” Journal of Optics, vol. 19, no. 1, Jan (2017), Art no. 013001

Random Walks with Paul Langevin: Stochastic Dynamics

One of the most important conclusions from chaos theory is that not all random-looking processes are actually random.  In deterministic chaos, structures such as strange attractors are not random at all but are fractal structures determined uniquely by the dynamics.  But sometimes, in nature, processes really are random, or at least have to be treated as such because of their complexity.  Brownian motion is a perfect example of this.  At the microscopic level, the jostling of the Brownian particle can be understood in terms of deterministic momentum transfers from liquid atoms to the particle.  But there are so many liquid particles that their individual influences cannot be directly predicted.  In this situation, it is more fruitful to view the atomic collisions as a stochastic process with well-defined physical parameters and then study the problem statistically. This is what Einstein did in his famous 1905 paper that explained the statistical physics of Brownian motion.

Then there is the middle ground between deterministic mechanics and stochastic mechanics, where complex dynamics gains a stochastic component. This is what Paul Langevin did in 1908 when he generalized Einstein's approach.

Paul Langevin

Paul Langevin (1872 – 1946) had the fortune to stand at the cross-roads of modern physics, making key contributions of his own while also serving as a commentator expanding on the works of giants like Einstein, Lorentz, and Bohr.  He was educated at the École Normale Supérieure and at the Sorbonne, with a year in Cambridge studying with J. J. Thomson.  At the Sorbonne he worked in the laboratory of Jean Perrin (1870 – 1942), who received the Nobel Prize in 1926 for the experimental work on Brownian motion that confirmed Einstein's crucial analysis of the problem and the atomic nature of matter.

Langevin received his PhD in 1902 on the topic of x-ray ionization of gases and was appointed as a lecturer at the College de France to substitute for Éleuthère Mascart (an influential French physicist in optics).  In 1905 Langevin published several papers that delved into the problems of Lorentz contraction, coming very close to expressing the principles of relativity.  This work later led Einstein to say that, had he delayed publishing his own 1905 paper on the principles of relativity, then Langevin might have gotten there first [1].

Fig. 1 From left to right: Albert Einstein, Paul Ehrenfest, Paul Langevin (seated), Kamerlingh Onnes, and Pierre Weiss

Also in 1905, Langevin published his most influential work, providing the theoretical foundations for the physics of paramagnetism and diamagnetism.  He was working closely with Pierre Curie whose experimental work on magnetism had established the central temperature dependence of the phenomena.  Langevin used the new molecular model of matter to derive the temperature dependence as well as the functional dependence on magnetic field.  One surprising result was that only the valence electrons, moving relativistically, were needed to contribute to the molecular magnetic moment.  This later became one of the motivations for Bohr’s model of multi-electron atoms.

Langevin suffered personal tragedy during World War II when the Vichy government arrested him because of his outspoken opposition to fascism.  He was imprisoned and eventually released to house arrest.  In 1942, his son-in-law was executed by the Nazis, and in 1943 his daughter was sent to Auschwitz.  Fearing for his own life, Langevin escaped to Switzerland.  He returned shortly after the liberation of Paris and was joined after the end of the war by his daughter who had survived Auschwitz and later served in the Assemblée Consultative as a communist member.  Langevin passed away in 1946 and received a national funeral.  His remains lie today in the Pantheon.

The Langevin Equation

In 1908, Langevin realized that Einstein’s 1905 theory on Brownian motion could be simplified while at the same time generalized.  Langevin introduced a new quantity into theoretical physics—the stochastic force [2].  With this new theoretical tool, he was able to work with diffusing particles in momentum space as dynamical objects with inertia buffeted by random forces, providing a Newtonian formulation for short-time effects that were averaged out and lost in Einstein’s approach.

Stochastic processes are understood by considering a dynamical flow that includes a random function.  The resulting set of equations is called the Langevin equation, namely

where fa is a set of N regular functions, and σa is the standard deviation of the a-th process out of N.  The stochastic functions ξa are in general non-differentiable but are integrable.  They have zero mean, and no temporal correlations.  The solution is an N-dimensional trajectory that has properties of a random walk superposed on the dynamics of the underlying mathematical flow.

As an example, take the case of a particle moving in a one-dimensional potential, subject to drag and to an additional stochastic force

where γ is the drag coefficient, U is a potential function and B is the velocity diffusion coefficient.  The second term in the bottom equation is the classical force from a potential function, while the third term is the stochastic force.  The crucial point is that the stochastic force causes jumps in velocity that integrate into displacements, creating a random walk superposed on the deterministic mechanics.
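A minimal Euler–Maruyama sketch of this kind of Langevin dynamics, written for a unit-mass particle in a one-dimensional potential with drag γ and velocity-diffusion coefficient B (the potential, time step, and parameter values are illustrative assumptions):

```python
import numpy as np

def langevin_trajectory(U_prime, gamma=1.0, B=0.5, dt=1e-3, steps=100_000,
                        x0=0.0, v0=0.0, seed=0):
    """Euler-Maruyama integration of a unit-mass Langevin particle:
       dx/dt = v,   dv/dt = -gamma*v - U'(x) + (stochastic force),
    where the stochastic velocity kicks have variance 2*B*dt per step."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    v = np.empty(steps)
    x[0], v[0] = x0, v0
    kicks = rng.normal(0.0, np.sqrt(2 * B * dt), steps)
    for n in range(steps - 1):
        v[n + 1] = v[n] + (-gamma * v[n] - U_prime(x[n])) * dt + kicks[n]
        x[n + 1] = x[n] + v[n + 1] * dt
    return x, v

# harmonic potential U(x) = 0.5*k*x^2, so U'(x) = k*x with k = 1
x, v = langevin_trajectory(lambda x: 1.0 * x)
print("long-time <x^2> ~", np.mean(x[x.size // 2:]**2))   # ~ B/(gamma*k) = 0.5
```

For the harmonic potential used here, the long-time variance settles at <x²> ≈ B/(γk), which plays the role of kBT/k through the fluctuation-dissipation relation.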

Fig. 2 Paul Langevin’s 1908 paper on stochastic dynamics.

Random Walk in a Harmonic Potential

Diffusion of a particle in a weak harmonic potential is equivalent to a mass on a weak spring in a thermal bath.  For short times, the particle motion looks like a random walk, but for long times, the mean-squared displacement must satisfy the equipartition relation

The starting point is the Langevin equation for motion under a stochastic force F’

where the second equation has been multiplied through by x. For a spherical particle of radius a, the viscous drag factor is

and η is the viscosity.  The term on the left of the dynamical equation can be rewritten to give

It is then necessary to take averages.  The last term on the right vanishes because of the random signs of xF’.  However, the buffeting from the random force can be viewed as arising from an effective temperature.  Then from equipartition on the velocity

this gives

Making the substitution y = <x2> gives

which is the dynamical equation for a particle in a harmonic potential subject to a constant effective force kBT.  For small objects in viscous fluids, the inertial terms are negligible relative to the other terms (see Life at Low Reynolds Number [3]), so the dynamic equation is

with the general solution

For short times, this is expanded by the Taylor series to

This solution at short times describes a diffusing particle (Fickian behavior) with a diffusion coefficient D. However, for long times the solution asymptotes to an equipartition value of <x2> = kBT/k. In the intermediate time regime, the particle is walking randomly, but the mean-squared displacement is no longer growing linearly with time.
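Consistent with the solution described above, the overdamped equation (γ/2) d<x²>/dt + k<x²> = kBT with γ = 6πηa integrates to <x²>(t) = (kBT/k)(1 − e^(−2kt/γ)), which reduces to 2Dt at short times and saturates at kBT/k at long times.  A short numerical sketch with illustrative parameter values (a half-micron bead in water in a weak trap) makes the crossover explicit:

```python
import numpy as np

kB = 1.380649e-23     # Boltzmann constant [J/K]
T = 300.0             # temperature [K]
eta = 1e-3            # viscosity of water [Pa s]
a = 0.5e-6            # particle radius [m]
k = 1e-6              # weak trap stiffness [N/m]

gamma = 6 * np.pi * eta * a            # Stokes drag coefficient
D = kB * T / gamma                     # Einstein diffusion coefficient

t = np.logspace(-6, 1, 8)              # times from 1 microsecond to 10 s
msd = (kB * T / k) * (1 - np.exp(-2 * k * t / gamma))

print(f"2D (short-time slope)    = {2 * D:.2e} m^2/s")
print(f"kB*T/k (long-time value) = {kB * T / k:.2e} m^2")
for ti, mi in zip(t, msd):
    print(f"t = {ti:8.2e} s   <x^2> = {mi:.2e} m^2")
```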

Constrained motion shows clear saturation to the size set by the physical constraints (equipartition for an oscillator or compartment size for a freely diffusing particle [4]).  However, if the experimental data do not clearly extend into the saturation time regime, then the fit to anomalous diffusion can lead to exponents that do not equal unity.  This is illustrated in Fig. 3 with asymptotic MSD compared with the anomalous diffusion equation fit for the exponent β.  Care must be exercised in the interpretation of the exponents obtained from anomalous diffusion experiments.  In particular, all constrained motion leads to subdiffusive interpretations if measured at intermediate times.

Fig. 3 Fit of mean-squared displacement (MSD) for constrained diffusion to the anomalous diffusion equation. The saturated MSD mimics the functional form for anomalous diffusion.

Random Walk in a Double Potential

The harmonic potential has well-known asymptotic dynamics which makes the analytic treatment straightforward. However, the Langevin equation is general and can be applied to any potential function. Take a double-well potential as another example

The resulting Langevin equation can be solved numerically in the presence of random velocity jumps. A specific stochastic trajectory is shown in Fig. 4 that applies discrete velocity jumps using a normal distribution of jumps of variance 2B.  The notable character of this trajectory, besides the random-walk character, is the ability of the particle to jump the barrier between the wells.  In the deterministic system, the initial condition dictates which stable fixed point would be approached.  In the stochastic system, there are random fluctuations that take the particle from one basin of attraction to the other.

Fig. 4 Stochastic trajectory of a particle in a double-well potential. The start position is at the unstable fixed point between the wells, and the two stable fixed points (well centers) are the solid dots.
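A sketch of this kind of simulation, assuming the double-well potential U(x) = x⁴/4 − x²/2 (the specific form is not given in the text) and drawing velocity kicks from a normal distribution with variance 2BΔt per time step, one consistent discretization of the jump statistics described above.  Running an ensemble of walkers from the unstable fixed point also gives the kind of long-time density shown in Fig. 5.

```python
import numpy as np

def double_well_walkers(n_walkers=4000, steps=20_000, dt=1e-3,
                        gamma=0.5, B=1.0, seed=3):
    """Ensemble of Langevin walkers in the assumed double-well potential
    U(x) = x^4/4 - x^2/2.  Velocity kicks are normally distributed with
    variance 2*B*dt per step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)      # all walkers start at the unstable fixed point x = 0
    v = np.zeros(n_walkers)
    for _ in range(steps):
        force = -(x**3 - x)      # -U'(x)
        v += (-gamma * v + force) * dt + rng.normal(0.0, np.sqrt(2 * B * dt), n_walkers)
        x += v * dt
    return x, v

x, v = double_well_walkers()
print("fraction in left well: ", np.mean(x < 0))
print("fraction in right well:", np.mean(x >= 0))   # barrier crossings equalize the two basins
```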

            The stochastic long-time probability distribution p(x,v) in Fig. 5 introduces an interesting new view of trajectories in state space that have a different character than typical state-space flows.  If we think about starting a large number of systems with the same initial conditions, and then letting the stochastic dynamics take over, we can define a time-dependent probability distribution p(x,v,t) that describes the likely end-positions of an ensemble of trajectories on the state plane as a function of time.  This introduces the idea of the trajectory of a probability cloud in state space, which has a strong analogy to time-dependent quantum mechanics.  The Schrödinger equation can be viewed as a diffusion equation in complex time, which is the basis of a technique known as quantum Monte Carlo that solves for ground state wave functions using concepts of random walks.  This goes beyond the topics of classical mechanics, and it shows how such diverse fields as econophysics, diffusion, and quantum mechanics can share common tools and language.

Fig. 5 Density p(x,v) of N = 4000 random-walkers in the double-well potential with σ = 1.

Stochastic Chaos

“Stochastic Chaos” sounds like an oxymoron. “Chaos” is usually synonymous with “deterministic chaos”, meaning that every next point on a trajectory is determined uniquely by its previous location–there is nothing random about the evolution of the dynamical system. It is only when one looks at long times, or at two nearby trajectories, that non-repeatable and non-predictable behavior emerges, so there is nothing stochastic about it.

On the other hand, there is nothing wrong with adding a stochastic function to the right-hand side of a deterministic flow–just as in the Langevin equation. One question immediately arises: if chaos has sensitivity to initial conditions (SIC), wouldn’t it be highly susceptible to constant buffeting by a stochastic force? Let’s take a look!

To the well-known Rössler model, add a stochastic function to one of the three equations,

in this case to the y-dot equation. This is just like the stochastic term in the random walks in the harmonic and double-well potentials. The solution is shown in Fig. 6. In addition to the familiar time-series of the Rössler model, there are stochastic jumps in the y-variable. An x-y projection similarly shows the familiar signature of the model, and the density of trajectory points is shown in the density plot on the right. The rms jump size for this simulation is approximately 10%.

Fig. 6 Stochastic Rössler dynamics. The usual three-dimensional flow is buffeted by a stochastic term that produces jumps in the y-direction. A single trajectory is shown in projection on the left, and the density of trajectories on the x-y plane is shown on the right.
Fig. 7 State-space densities for the normal Rössler model (left) and for the stochastic model (right). The Rössler attractor dominates over the stochastic behavior.
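A sketch of the stochastic Rössler system, using the standard parameter values a = b = 0.2, c = 5.7 and a modest noise amplitude (the exact values behind Figs. 6 and 7 are not given, so these are assumptions):

```python
import numpy as np

def stochastic_rossler(T=500.0, dt=0.005, a=0.2, b=0.2, c=5.7, sigma=0.1, seed=4):
    """Euler-Maruyama integration of the Rossler flow with a stochastic term
    added only to the y-equation (parameter and noise values are assumed)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    traj = np.empty((n, 3))
    x, y, z = 1.0, 1.0, 0.0
    for i in range(n):
        dx = -y - z
        dy = x + a * y
        dz = b + z * (x - c)
        x += dx * dt
        y += dy * dt + sigma * np.sqrt(dt) * rng.normal()   # stochastic jumps in y
        z += dz * dt
        traj[i] = (x, y, z)
    return traj

traj = stochastic_rossler()
print("x range:", traj[:, 0].min(), traj[:, 0].max())
print("y range:", traj[:, 1].min(), traj[:, 1].max())
```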

Now for the supposition that, because chaos has sensitivity to initial conditions, it should be highly susceptible to stochastic contributions: the answer can be seen in Fig. 7 in the state-space densities. Other than a slightly fuzzier density for the stochastic case, the general behavior of the Rössler strange attractor is retained. The attractor is highly stable against the stochastic fluctuations. This demonstrates just how robust deterministic chaos is.

On the other hand, there is a saddle point in the Rössler dynamics a bit below the lowest part of the strange attractor in the figure, and if the stochastic jumps are too large, then the dynamics become unstable and diverge. A hint at this is already seen in the time series in Fig. 6 that shows the nearly closed orbit that occurs transiently at large negative y values. This is near the saddle point, and this trajectory is dangerously close to going unstable. Therefore, while the attractor itself is stable, anything that drives a dynamical system to a saddle point will destabilize it, so too much stochasticity can cause a sudden destruction of the attractor.


• Parts of this blog were excerpted from D. D. Nolte, Optical Interferometry for Biology and Medicine. Springer, 2012, pp. 1-354 and D. D. Nolte, Introduction to Modern Dynamics. Oxford, 2015 (first edition).

[1] A. Einstein, “Paul Langevin” in La Pensée, 12 (May-June 1947), pp. 13-14.

[2] D. S. Lemons and A. Gythiel, "Paul Langevin's 1908 paper 'On the theory of Brownian motion'," American Journal of Physics, vol. 65, no. 11, pp. 1079-1081, Nov (1997)

[3] E. M. Purcell, “Life at Low Reynolds-Number,” American Journal of Physics, vol. 45, no. 1, pp. 3-11, (1977)

[4] K. Ritchie, X. Y. Shan, J. Kondo, K. Iwasawa, T. Fujiwara, and A. Kusumi, "Detection of non-Brownian diffusion in the cell membrane in single molecule tracking," Biophysical Journal, vol. 88, no. 3, pp. 2266-2277, (2005)

A Random Walk in 10 Dimensions

Physics in high dimensions is becoming the norm in modern dynamics.  It is not only that string theory operates in ten dimensions (plus one for time), but virtually every complex dynamical system is described and analyzed within state spaces of high dimensionality.  Population dynamics, for instance, may describe hundreds or thousands of different species, each of whose time-varying populations defines a separate axis in a high-dimensional space.  Coupled mechanical systems likewise may have hundreds or thousands (or more) of degrees of freedom that are described in high-dimensional phase space.

In high-dimensional landscapes, mountain ridges are much more common than mountain peaks. This has profound consequences for the evolution of life, the dynamics of complex systems, and the power of machine learning.

For these reasons, as physics students today are being increasingly exposed to the challenges and problems of high-dimensional dynamics, it is important to build tools they can use to give them an intuitive feeling for the highly unintuitive behavior of systems in high-D.

Within the rapidly-developing field of machine learning, which often deals with landscapes (loss functions or objective functions) in high dimensions that need to be minimized, high dimensions are usually referred to in the negative as “The Curse of Dimensionality”.

Dimensionality might be viewed as a curse for several reasons.  First, it is almost impossible to visualize data in dimensions higher than d = 4 (the fourth dimension can sometimes be visualized using colors or time series).  Second, too many degrees of freedom create too many variables to fit or model, leading to the classic problem of overfitting.  Put simply, there is an absurdly large amount of room in high dimensions.  Third, our intuition about relationships among areas and volumes is highly biased by our low-dimensional 3D experiences, causing us to have serious misconceptions about geometric objects in high-dimensional spaces.  Physical processes occurring in 3D can be over-generalized to give preconceived notions that just don’t hold true in higher dimensions.

Take, for example, the random walk.  It is usually taught starting from a 1-dimensional random walk (flipping a coin) that is then extended to 2D and then to 3D, with most textbooks stopping there.  But random walks in high dimensions are the rule rather than the exception in complex systems.  One example that is especially important in this context is the problem of molecular evolution.  Each site on a genome represents an independent degree of freedom, and molecular evolution can be described as a random walk through that space, but the space of all possible genetic mutations is enormous.  Faced with such an astronomically large set of permutations, it is difficult to conceive of how random mutations could possibly create something as complex as, say, ATP synthase, which is the basis of all higher bioenergetics.  Fortunately, the answer to this puzzle lies in the physics of random walks in high dimensions.

Why Ten Dimensions?

This blog presents the physics of random walks in 10 dimensions.  Actually, there is nothing special about 10 dimensions versus 9 or 11 or 20, but it gives a convenient demonstration of high-dimensional physics for several reasons.  First, it is high enough above our 3 dimensions that there is no hope to visualize it effectively, even by using projections, so it forces us to contend with the intrinsic “unvisualizability” of high dimensions.  Second, ten dimensions is just big enough that it behaves roughly like any higher dimension, at least when it comes to random walks.  Third, it is about as big as can be handled with typical memory sizes of computers.  For instance, a ten-dimensional hypercubic lattice with 10 discrete sites along each dimension has 10^10 lattice points (10 Billion or 10 Gigs) which is about the limit of what a typical computer can handle with internal memory.

As a starting point for visualization, let’s begin with the well-known 4D hypercube but extend it to a 4D hyperlattice with three values along each dimension instead of two. The resulting 4D lattice has 3^4 = 81 nodes and 216 links or edges, and it is shown in Fig. 1 represented in two dimensions as a network graph with nodes and edges. Each node links to its nearest neighbors along each of the four dimensions. Despite the apparent 3D look that this graph has about it, if you look closely you will see the frustration that occurs when trying to link neighbors across four dimensions, causing many long-distance links.

[See YouTube video for movies showing evolving hyperlattices and random walks in 10D.]

Fig. 1 A 4D hyperlattice with three sites along each of the 4 dimensions. This high dimensional discrete lattice is represented as a network graph in 2D with nodes and edges.
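A quick way to build and check such a lattice is to treat it as a graph; the sketch below (assuming the networkx package is available) constructs the 4D hyperlattice with three sites per dimension and confirms the counts of 81 nodes and 216 edges.

```python
import itertools
import networkx as nx   # assumed to be available

# 4D hyperlattice with three sites along each dimension
G4 = nx.grid_graph(dim=[3, 3, 3, 3])
print(G4.number_of_nodes(), G4.number_of_edges())    # 81 nodes, 216 edges

# the same lattice built by hand: nodes are 4-tuples, edges join sites
# that differ by one step along a single dimension
nodes = list(itertools.product(range(3), repeat=4))
edges = [(u, v) for u, v in itertools.combinations(nodes, 2)
         if sum(abs(a - b) for a, b in zip(u, v)) == 1]
print(len(nodes), len(edges))                        # 81, 216
```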

We can also look at a 10D hypercube that has 2^10 = 1024 nodes and 5120 edges, shown in Fig. 2. It is a bit difficult to see the hypercubic symmetry when presented in 2D, but each node has exactly 10 links.

Fig. 2 A 10D hypercube of 1024 nodes and 5120 edges. Each node has exactly 10 links to neighbors

Extending this 10D lattice to 10 positions instead of 2 and trying to visualize it is prohibitive, since the resulting graph in 2D just looks like a mass of overlapping circles. However, our interest extends not just to ten locations per dimension, but to an unlimited number of locations. This is the 10D infinite lattice on which we want to explore the physics of the random walk.

Diffusion in Ten Dimensions

An unconstrained random walk in 10D is just a minimal extension beyond a simple random walk in 1D. Because each dimension is independent, a single random walker takes a random step along any of the 10 dimensions at each iteration so that motion in any one of the 10 dimensions is just a 1D random walk. Therefore, a simple way to visualize this random walk in 10D is simply to plot the walk against each dimension, as in Fig. 3. There is one chance in ten that the walker will take a positive or negative step along any given dimension at each time point.

Fig. 3 A single walker taking random unit steps in 10 dimensions. The position of the walker as a function of time is shown for all ten dimensions.
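A minimal sketch of this unconstrained 10D walk: at each iteration one of the ten dimensions is chosen at random and the walker steps ±1 along it.  Because the steps are unit length, the ensemble-averaged squared displacement grows as the number of steps, independent of the dimension.

```python
import numpy as np

def random_walk_10d(steps=1000, d=10, seed=5):
    """Walker on a d-dimensional integer lattice: at each step one dimension
    is chosen at random and the walker moves +1 or -1 along it."""
    rng = np.random.default_rng(seed)
    dims = rng.integers(0, d, steps)            # which dimension steps
    signs = rng.choice([-1, 1], steps)          # direction of the step
    moves = np.zeros((steps, d), dtype=int)
    moves[np.arange(steps), dims] = signs
    return np.cumsum(moves, axis=0)             # (steps x d) positions vs. time

walk = random_walk_10d()
print("trajectory shape:", walk.shape)

# ensemble check: for unit steps, <R^2(n)> = n regardless of dimension
r2 = [np.sum(random_walk_10d(seed=s)[-1]**2) for s in range(200)]
print("ensemble <R^2(1000)> =", np.mean(r2))
```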

An alternate visualization of the 10D random walker is shown in Fig. 4 for the same data as Fig. 3. In this case the displacement is color coded, and each column is a different dimension. Time is on the vertical axis (starting at the top and increasing downward). This type of color map can easily be extended to hundreds of dimensions. Each row is a position vector of the single walker in the 10D space.

Fig. 4 Same data as in Fig. 3 for a single 10D random walker on a hyperlattice. Distance is color coded. Time is on the vertical axis (increasing downward). Each row is a 10D position vector, and this representation is of a single 10D trajectory.

In the 10D hyperlattice in this section, all lattice sites are accessible at each time point, so there is no constraint preventing the walk from visiting a previously-visited node. There is a possible adjustment that can be made to the walk that prevents it from ever crossing its own path. This is known as a self-avoiding-walk (SAW). In two dimensions, there is a major difference in the geometric and dynamical properties of an ordinary walk and an SAW. However, in dimensions larger than 4, it turns out that there are so many possibilities of where to go (high-dimensional spaces have so much free room) that it is highly unlikely that a random walk will ever cross itself. Therefore, in our 10D hyperlattice we do not need to make the distinction between an ordinary walk and a self-avoiding-walk. However, there are other constraints that can be imposed that mimic how complex systems evolve in time, and these constraints can have important consequences, as we see next.

Random Walk in a Maximally Rough Landscape

In the infinite hyperlattice of the previous section, all lattice sites are the same and are all equally accessible. However, in the study of complex systems, it is common to assign a value to each node in a high-dimensional lattice. This value can be assigned by a potential function, producing a high-dimensional potential landscape over the lattice geometry. Or the value might be the survival fitness of a species, producing a high-dimensional fitness landscape that governs how species compete and evolve. Or the value might be a loss function (an objective function) in a minimization problem from multivariate analysis or machine learning. In all of these cases, the scalar value on the nodes defines a landscape over which a state point executes a walk. The question then becomes, what are the properties of a landscape in high dimensions, and how does it affect a random walker?

As an example, let’s consider a landscape that is completely random point-to-point. There are no correlations in this landscape, making it maximally rough. Then we require that a random walker takes a walk along iso-potentials in this landscape, never increasing and never decreasing its potential. Beginning with our spatial intuition living in 3D space, we might be concerned that such a walker would quickly get confined to some region of the landscape. Think of a 2D topo map with contour lines drawn on it: if we start at a certain elevation on a mountain side and must walk along directions that maintain our elevation, we stay on a given contour and eventually come back to our starting point after circling the mountain peak. We are trapped! But this intuition, informed by our 3D lives, is misleading. What happens in our 10D hyperlattice?

To make the example easy to analyze, let’s assume that our potential function is restricted to N discrete values. This means that of the 10 neighbors to a given walker site, on average only 10/N are likely to have the same potential value as the given walker site. This constrains the available sites for the walker, and it converts the uniform hyperlattice into a hyperlattice site percolation problem.

Percolation theory is a fascinating topic in statistical physics. There are many deep concepts that come from asking simple questions about how nodes are connected across a network. The most important aspect of percolation theory is the concept of a percolation threshold. Starting with a complete network that is connected end-to-end, start removing nodes at random. For some critical fraction of nodes removed (on average) there will no longer be a single connected cluster that spans the network. This critical fraction is known as the percolation threshold. Above the percolation threshold, a random walker can get from one part of the network to another. Below the percolation threshold, the random walker is confined to a local cluster.

If a hyperlattice has N discrete values for the landscape potential (or height, or contour), and if a random walker can only move to a site that has the same value as the walker’s current value (remains on the level set), then only a fraction of the hyperlattice sites are available to the walker, and the question of whether the walker can find a path that spans the hyperlattice becomes simply a question of how the fraction of available sites relates to the percolation threshold.

The percolation threshold for hyperlattices is well known. For reasonably high dimensions, it is given to good accuracy by

where d is the dimension of the hyperlattice. For a 10D hyperlattice the percolation threshold is pc(10) = 0.0568, or about 6%. Therefore, if more than 6% of the sites of the hyperlattice have the same value as the walker’s current site, then the walker is free to roam about the hyperlattice.

If there are N = 5 discrete values for the potential, then 20% of the sites are available, which is above the percolation threshold, and walkers can go as far as they want. This statement holds true no matter what the starting value is. It might be 5, which means the walker is as high on the landscape as they can get. Or it might be 1, which means the walker is as low on the landscape as they can get. Yet even if they are at the top, if the available site fraction is above the percolation threshold, then the walker can stay on the high mountain ridge, spanning the landscape. The same is true if they start at the bottom of a valley. Therefore, mountain ridges are very common, as are deep valleys, yet they allow full mobility about the geography. On the other hand, a so-called mountain peak would be a 5 surrounded by 4’s or lower. The probability that a given site is such a peak in 10D is 0.2*0.8^10 ≈ 0.02, so the total density of mountain peaks, in a 10D hyperlattice with 5 potential values, is only about 2%. Therefore, mountain peaks are rare in 10D, while mountain ridges are common. In even higher dimensions, the percolation threshold decreases roughly inversely with the dimensionality, and mountain peaks become extremely rare and play virtually no part in walks about the landscape.
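The argument can be checked numerically with a sketch like the one below: each lattice site is assigned an independent random value from N = 5 possibilities (generated lazily, since the lattice is unbounded), and the walker is only allowed to step onto neighbors that share its starting value.  Because the available fraction 1/5 is well above the ~6% percolation threshold, the constrained walker keeps wandering away from its starting point.  This is an illustration of the argument, not the simulation used for the figures below.

```python
import numpy as np

def level_set_walk(d=10, n_values=5, steps=20_000, seed=6):
    """Random walk on a d-dimensional lattice constrained to a level set.
    Each site carries an i.i.d. value in {0,...,n_values-1}, generated
    lazily; the walker may only step onto neighbors with its starting value."""
    rng = np.random.default_rng(seed)
    values = {}
    def value(site):
        if site not in values:
            values[site] = int(rng.integers(n_values))
        return values[site]

    pos = (0,) * d
    level = value(pos)
    r2 = np.empty(steps)
    for i in range(steps):
        nbrs = []
        for axis in range(d):
            for step in (-1, 1):
                cand = list(pos)
                cand[axis] += step
                cand = tuple(cand)
                if value(cand) == level:
                    nbrs.append(cand)
        if nbrs:                                   # stay put if no same-value neighbor
            pos = nbrs[rng.integers(len(nbrs))]
        r2[i] = sum(c * c for c in pos)
    return r2

r2 = level_set_walk()
print("squared distance after 10k steps:", r2[9_999])
print("squared distance after 20k steps:", r2[-1])   # keeps growing: the level set percolates
```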

To illustrate this point, Fig. 5 is the same 10D network that is in Fig. 2, but only the nodes sharing the same value are shown for N = 5, which means that only 20% of the nodes are accessible to a walker who stays only on nodes with the same values. There is a “giant cluster” that remains connected, spanning the original network. If the original network is infinite, then the giant cluster is also infinite but contains a finite fraction of the nodes.

Fig. 5 A 10D cluster that spans the network in Fig. 2 for 1/5 of the nodes sharing the same landscape value. This cluster represents a mountain ridge that spans the space. There are four additional co-existing clusters, each of which separately spans the same 10D space.

The quantitative details of the random walk can change depending on the proximity of the sub-networks (the clusters, the ridges or the level sets) to the percolation threshold. For instance, a random walker in D =10 with N = 5 is shown in Fig. 6. The diffusion is a bit slower than in the unconstrained walk of Figs. 3 and 4. But the ability to wander about the 10D space is retained.

Fig. 6 A random walker on the level-set cluster of Fig. 5

This is then the general important result: In high-dimensional landscapes, mountain ridges are much more common than mountain peaks. This has profound consequences for the evolution of life, the dynamics of complex systems, and the power of machine learning.

Consequences for Evolution and Machine Learning

When the high-dimensional space is the space of possible mutations on a genome, and when the landscape is a fitness landscape that assigns a survival advantage for one mutation relative to others, then the random walk describes the evolution of a species across generations. The prevalence of ridges, or more generally level sets, in high dimensions has a major consequence for the evolutionary process, because a species can walk along a level set acquiring many possible mutations that have only neutral effects on the survivability of the species. At the same time, the genetic make-up is constantly drifting around in this “neutral network”, allowing the species’ genome to access distant parts of the space. Then, at some point, natural selection may tip the species up a nearby (but rare) peak, and a new equilibrium is attained for the species.

One of the early criticisms of fitness landscapes was the (erroneous) criticism that for a species to move from one fitness peak to another, it would have to go down and cross wide valleys of low fitness to get to another peak. But this was a left-over from thinking in 3D. In high-D, neutral networks are ubiquitous, and a mutation can take a step away from one fitness peak onto one of the neutral networks, which can be sampled by a random walk until the state is near some distant peak. It is no longer necessary to think in terms of high peaks and low valleys of fitness — just random walks. The evolution of extremely complex structures, like ATP synthase, can then be understood as a random walk along networks of nearly-neutral fitness — once our 3D biases are eliminated.

The same arguments hold for many situations in machine learning and especially deep learning. When training a deep neural network, there can be thousands of neural weights that need to be trained through the minimization of a loss function, also known as an objective function. The loss function is the equivalent to a potential, and minimizing the loss function over the thousands of dimensions is the same problem as maximizing the fitness of an evolving species.

At first look, one might think that deep learning is doomed to failure. We have all learned, from the earliest days in calculus, that enough adjustable parameters can fit anything, but the fit is meaningless because it predicts nothing. Deep learning seems to be the worst example of this. How can fitting thousands of adjustable parameters be useful when the dimensionality of the optimization space is orders of magnitude larger than the degrees of freedom of the system being modeled?

The answer comes from the geometry of high dimensions. The prevalence of neutral networks in high dimensions gives lots of chances to escape local minima. In fact, local minima are actually rare in high dimensions, and when they do occur, there is usually a neutral network nearby onto which the trajectory can escape (if the effective temperature of the learning process is set sufficiently high). Therefore, despite the insanely large number of adjustable parameters, general solutions that are meaningful and predictive can be found by adding random walks around the objective landscape as a partial strategy in combination with gradient descent.

Given the superficial analogy of deep learning to the human mind, the geometry of random walks in ultra-high dimensions may partially explain our own intelligence and consciousness.

Bibliography

S. Gavrilets, Fitness Landscapes and the Origin of Species. Princeton University Press, 2004.

M. Kimura, The Neutral Theory of Molecular Evolution. Cambridge University Press, 1983.

YouTube Vlog on A Random Walk in 10 Dimensions

The Transverse Doppler Effect and Relativistic Time Dilation

One of the hardest aspects to grasp about relativity theory is the question of whether an event “looks as if” it is doing something, or whether it “actually is” doing something.

Take, for instance, the classic twin paradox of relativity theory in which there are twins who wear identical high-precision wrist watches.  One of them rockets off to Alpha Centauri at relativistic speeds and returns while the other twin stays on Earth.  Each twin sees the other twin’s clock running slowly because of relativistic time dilation.  Yet when they get back together and stand side-by-side to compare their watches, the twin who went to Alpha Centauri is actually younger than the other, despite the apparent paradox.  The relativistic effect of time dilation is “real”, not just apparent, regardless of whether they come back together to do the comparison.

Yet this understanding of relativistic effects took many years, even decades, to gain acceptance after Einstein proposed them.  He was aware himself that key experiments were required to prove that relativistic effects are real and not just apparent.

Einstein and the Transverse Doppler Effect

In 1905 Einstein used his new theory of special relativity to predict observable consequences that included a general treatment of the relativistic Doppler effect [1].  This included the effects of time dilation in addition to the longitudinal effect of the source chasing the wave.  Time dilation produced a correction to Doppler’s original expression for the longitudinal effect that became significant at speeds approaching the speed of light.  More significantly, it predicted a transverse Doppler effect for a source moving along a line perpendicular to the line of sight to an observer.  This effect had not been predicted either by Christian Doppler (1803 – 1853) or by Woldemar Voigt (1850 – 1919). 

( Read article in Physics Today on the history of the Doppler effect [2] )

Despite the generally positive reception of Einstein’s theory of special relativity, some of its consequences were anathema to many physicists at the time.  A key stumbling block was the question whether relativistic effects, like moving clocks running slowly, were only apparent, or were actually real, and Einstein had to fight to convince others of its reality.  When Johannes Stark (1874 – 1957) observed Doppler line shifts in ion beams called “canal rays” in 1906 (Stark received the 1919 Nobel prize in part for this discovery) [3], Einstein promptly published a paper suggesting how the canal rays could be used in a transverse geometry to directly detect time dilation through the transverse Doppler effect [4].  Thirty years passed before the experiment was performed with sufficient accuracy by Herbert Ives and G. R. Stilwell in 1938 to measure the transverse Doppler effect [5].  Ironically, even at this late date, Ives and Stilwell were convinced that their experiment had disproved Einstein’s time dilation by supporting Lorentz’ contraction theory of the electron.  The Ives-Stilwell experiment was the first direct test of time dilation, followed in 1940 by muon lifetime measurements [6].

A) Transverse Doppler Shift Relative to Emission Angle

The Doppler effect varies between blue shifts in the forward direction to red shifts in the backward direction, with a smooth variation in Doppler shift as a function of the emission angle.  Consider the configuration shown in Fig. 1 for light emitted from a source moving at speed v and emitting at an angle θ0 in the receiver frame. The source moves a distance vT in the time of a single emission cycle (assume a harmonic wave). In that time T (which is the period of oscillation of the light source — or the period of a clock if we think of it putting out light pulses) the light travels a distance cT before another cycle begins (or another pulse is emitted).

Fig. 1 Configuration for detection of Doppler shifts for emission angle θ0. The light source travels a distance vT during the time of a single cycle, while the wavefront travels a distance cT towards the detector.

[ See YouTube video on the derivation of the transverse Doppler Effect.]

The observed wavelength in the receiver frame is thus given by

where T is the emission period of the moving source.  Importantly, the emission period is time dilated relative to the proper emission time of the source

Therefore,

This expression can be evaluated for several special cases:

a) θ0 = 0 for forward emission

which is the relativistic blue shift for longitudinal motion in the direction of the receiver.

b) θ0 = π for backward emission

which is the relativistic red shift for longitudinal motion away from the receiver

c) θ0 = π/2 for transverse emission

This transverse Doppler effect for emission at right angles is a red shift, caused only by the time dilation of the moving light source.  This is the effect proposed by Einstein, and eventually measured by Ives and Stilwell, that proved moving clocks tick slowly.  But it is not the only way to view the transverse Doppler effect.
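The geometric construction above leads to the compact form λ_obs = γ(1 − β cos θ0) λ_src for emission angle θ0 in the receiver frame, and the three special cases can be checked numerically with a few lines of Python (β = 0.5 is an arbitrary example speed):

```python
import numpy as np

def wavelength_ratio(beta, theta0):
    """lambda_obs / lambda_src for emission angle theta0 in the receiver frame:
    lambda_obs = gamma * (1 - beta*cos(theta0)) * lambda_src."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return gamma * (1.0 - beta * np.cos(theta0))

beta = 0.5
for label, theta0 in [("forward,    theta0 = 0   ", 0.0),
                      ("backward,   theta0 = pi  ", np.pi),
                      ("transverse, theta0 = pi/2", np.pi / 2)]:
    r = wavelength_ratio(beta, theta0)
    print(f"{label}: lambda_obs/lambda_src = {r:.3f} ({'blue' if r < 1 else 'red'} shift)")
```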

B) Transverse Doppler Shift Relative to Angle at Reception

A different option for viewing the transverse Doppler effect is the angle to the moving source at the moment that the light is detected.  The geometry of this configuration relative to the previous is illustrated in Fig. 2.

Fig. 2 The detection point is drawn at a finite distance. However, the relationship between θ0 and θ1 is independent of the distance to the detector

The transverse distance to the detection point is

The length of the line connecting the detection point P with the location of the light source at the moment of detection is (using the law of cosines)

Combining with the first equation gives

An equivalent expression is obtained as

Note that this result, relating θ1 to θ0, is independent of the distance to the observation point.

When θ1 = π/2, then

yielding

for which the Doppler effect is

which is a blue shift.  This creates the unexpected result that θ0 = π/2 produces a red shift, while θ1 = π/2 produces a blue shift. The question could be asked: which one represents time dilation? In fact, it is θ0 = π/2 that produces time dilation exclusively, because in that configuration there is no foreshortening effect on the wavelength–only the emission time.

C) Compromise: The Null Transverse Doppler Shift

The previous two configurations each could be used as a definition for the transverse Doppler effect. But one gives a red shift and one gives a blue shift, which seems contradictory. Therefore, one might try to strike a compromise between these two cases so that sin θ1 = sin θ0, and the configuration is shown in Fig. 3.

This is the case when θ1 + θ0 = π.  The sines of the two angles are equal, yielding

and

which is solved for

Inserting this into the Doppler equation gives

where the Taylor’s expansion of the denominator (at low speed) cancels the numerator to give zero net Doppler shift. This compromise configuration represents the condition of null Doppler frequency shift. However, for speeds approaching the speed of light, the net effect is a lengthening of the wavelength, dominated by time dilation, causing a red shift.

D) Source in Circular Motion Around Receiver

An interesting twist can be added to the problem of the transverse Doppler effect: put the source or receiver into circular motion, one about the other. In the case of a source in circular motion around the receiver, it is easy to see that this looks just like case A) above for θ0 = π/2, which is the red shift caused by the time dilation of the moving source

However, there is the possible complication that the source is no longer in an inertial frame (it experiences angular acceleration) and therefore it is in the realm of general relativity instead of special relativity. In fact, it was Einstein’s solution to this problem that led him to propose the Equivalence Principle and make his first calculations on the deflection of light by gravity. His solution was to think of an infinite number of inertial frames, each of which was instantaneously co-moving with the same linear velocity as the source. These co-moving frames are inertial and can be analyzed using the principles of special relativity. The general relativistic effects come from slipping from one inertial co-moving frame to the next. But in the case of the circular transverse Doppler effect, each instantaneously co-moving frame has the exact configuration as case A) above, and so the wavelength is red shifted exactly by the time dilation.

E) Receiver in Circular Motion Around Source

With the notion of co-moving inertial frames now in hand, this configuration is exactly the same as case B) above, and the wavelength is blue shifted

References

[1] A. Einstein, “On the electrodynamics of moving bodies,” Annalen Der Physik, vol. 17, no. 10, pp. 891-921, Sep (1905)

[2] D. D. Nolte, “The Fall and Rise of the Doppler Effect,” Physics Today, vol. 73, no. 3, pp. 31-35, Mar (2020)

[3] J. Stark, W. Hermann, and S. Kinoshita, “The Doppler effect in the spectrum of mercury,” Annalen Der Physik, vol. 21, pp. 462-469, Nov 1906.

[4] A. Einstein, “Possibility of a new examination of the relativity principle,” Annalen Der Physik, vol. 23, no. 6, pp. 197-198, May (1907)

[5] H. E. Ives and G. R. Stilwell, “An experimental study of the rate of a moving atomic clock,” Journal of the Optical Society of America, vol. 28, p. 215, 1938.

[6] B. Rossi and D. B. Hall, “Variation of the Rate of Decay of Mesotrons with Momentum,” Physical Review, vol. 59, pp. 223–228, 1941.

Locking Clocks in Strong Gravity

(Guest Post with Moira Andrews)

… GR combined with nonlinear synchronization yields the novel phenomenon of a “synchronization cascade”.

Imagine a space ship containing a collection of highly-accurate atomic clocks set to arbitrary precision at the factory before launch.  The clocks are lined up with precisely-equal spacing along the axis of the space ship, which should allow the astronauts to study events in spacetime to high accuracy as they orbit neutron stars or black holes.  Despite all the precision, spacetime itself will conspire to detune the clocks.  Yet all is not lost.  Using the physics of nonlinear synchronization, the astronauts can bring all the clocks together to a compromise frequency, locking all the clocks to a common rate.  This blog post shows how this can happen.

Fig.1 The high-precision space ship with a line of clocks.

Synchronization of Oscillators

The simplest synchronization problem is two “phase oscillators” coupled with a symmetric nonlinearity. The dynamical flow is

where ωk are the individual angular frequencies and g is the coupling constant. When g is greater than the difference Δω, then the two oscillators, despite having different initial frequencies, will find a stable fixed point and lock to a compromise frequency.

Taking this model to N phase oscillators creates the well-known Kuramoto model that is characterized by a relatively sharp mean-field phase transition leading to global synchronization. The model averages N phase oscillators to a mean field where g is the coupling coefficient, K is the mean amplitude, Θ is the mean phase, and ω-bar is the mean frequency. The dynamics are given by

The last equation is the final mean-field equation that synchronizes each individual oscillator to the mean field. For a large number of oscillators that are globally coupled to each other, increasing the coupling has little effect on the oscillators until a critical threshold is crossed, after which all the oscillators synchronize with each other. This is known as the Kuramoto synchronization transition, shown in Fig. 2 for 20 oscillators with uniformly distributed initial frequencies. Note that the critical coupling constant gc is roughly half of the spread of initial frequencies.

Fig. 2 Entrainment graph of the Kuramoto transition for evenly distributed clock frequencies. N = 20.
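A minimal sketch of the Kuramoto transition of Fig. 2, assuming the standard globally-coupled form dθi/dt = ωi + (g/N) Σj sin(θj − θi) with N = 20 natural frequencies spread uniformly between −1 and +1 (the specific parameters behind Fig. 2 are not given):

```python
import numpy as np

def mean_frequencies(omega, g, T=200.0, dt=0.01, seed=7):
    """Time-averaged frequency of each oscillator in the globally-coupled
    Kuramoto model  d(theta_i)/dt = omega_i + (g/N) * sum_j sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    N = len(omega)
    theta = rng.uniform(0, 2 * np.pi, N)
    accum = np.zeros(N)
    n = int(T / dt)
    for i in range(n):
        dtheta = omega + (g / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dtheta * dt
        if i >= n // 2:                 # average over the second half of the run
            accum += dtheta * dt
    return accum / (0.5 * T)

omega = np.linspace(-1.0, 1.0, 20)      # N = 20 evenly distributed natural frequencies
for g in (0.0, 0.5, 1.0, 1.5, 2.0):
    freqs = mean_frequencies(omega, g)
    print(f"g = {g:.1f}: entrained frequency spread = {freqs.max() - freqs.min():.3f}")
```

The spread of time-averaged frequencies collapses toward zero once g exceeds a critical value comparable to half the spread of natural frequencies, consistent with the note above.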

The question that this blog seeks to answer is how this synchronization mechanism may be used in a space craft exploring the strong gravity around neutron stars or black holes. The key to answering this question is the metric tensor for this system

where the first term is the time-like term g00 that affects ticking clocks, and the second term is the space-like term that affects the length of the space craft.

Kuramoto versus the Neutron Star

Consider the space craft holding a steady radius above a neutron star, as in Fig. 3. For simplicity, hold the craft stationary rather than in an orbit to remove the details of rotating frames. Because each clock is at a different gravitational potential, it runs at a different rate through gravitational time dilation: clocks nearer to the neutron star run slower than clocks farther away. There is also a gravitational length contraction of the space craft, which modifies the clock rates as well.

Fig. 3 The space ship orbiting a neutron star. Each identical clock is at a different gravitational potential, causing them to run at different rates.

The analysis starts by incorporating the first-order approximation of time dilation through the component g00. The component is brought in through the period of oscillations. All frequencies are referenced to the base oscillator that has the angular rate ω0, and the other frequencies are primed. As we consider oscillators higher in the space craft at positions R + h, the 1/(R+h) term in g00 decreases, as does the offset between each successive oscillator.

The dynamical equations for a system for only two clocks, coupled through the constant k, are

These are combined to a single equation by considering the phase difference

The two clocks will synchronize to a compromise frequency for the critical coupling coefficient

Now, if there is a string of N clocks, as in Fig. 3, the question is how the frequencies will be spread out by gravitational time dilation, and what the entrainment of the frequencies to a common compromise frequency looks like. If the ship is located at some distance from the neutron star, then the gravitational potential varies approximately linearly from one clock to the next, and coupling them would produce the classic Kuramoto transition.

However, if the ship is much closer to the neutron star, so that the gravitational potential is no longer linear, then there is a “fan-out” of frequencies, with the bottom-most clock ticking much more slowly than the top-most clock. Coupling these clocks produces a modified, or “stretched”, Kuramoto transition as in Fig. 4.

Fig. 4 The “stretched” Kuramoto transition for N = 20 clocks near a neutron star. The bottom-most clock is just above the surface of the neutron star (left) and at twice that height (right). The spatial separation of the clocks in these examples is RS/20, and R0 is the radial position of the bottom-most clock.
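A sketch of the stretched transition, assuming that each clock's rate is set to first order by the g00 factor, ωn ∝ √(1 − RS/(R0 + hn)) normalized to the bottom clock, and that the clocks are coupled with the same global Kuramoto term as in the sketch above.  The coupling values and integration settings are illustrative; the point is only that the nonlinear fan-out of frequencies stretches the transition.

```python
import numpy as np

RS = 1.0                        # work in units of the Schwarzschild radius
N = 20
R0 = 4 * RS                     # radial position of the bottom-most clock (try 8*RS as well)
h = np.arange(N) * (RS / N)     # clock heights above R0, spaced by RS/20

# assumed first-order clock rates from the g00 time-dilation factor,
# normalized to the bottom-most clock
omega = np.sqrt(1 - RS / (R0 + h))
omega = omega / omega[0]

def spread_after_coupling(omega, g, T=2000.0, dt=0.05, seed=8):
    """Spread of time-averaged clock frequencies under global Kuramoto coupling g."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    accum = np.zeros(len(omega))
    n = int(T / dt)
    for i in range(n):
        dtheta = omega + (g / len(omega)) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dtheta * dt
        if i >= n // 2:
            accum += dtheta * dt
    freqs = accum / (0.5 * T)
    return freqs.max() - freqs.min()

for g in (0.0, 0.005, 0.01, 0.02, 0.04):
    print(f"g = {g:.3f}: frequency spread = {spread_after_coupling(omega, g):.5f}")
```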

In the two examples in Fig. 4, the bottom-most clock is just above the radius of the neutron star (at R0 = 4RS for a solar-mass neutron star, where RS is the Schwarzschild radius) and at twice that radius (at R0 = 8RS). The length of the ship, along which the clocks are distributed, is RS in this example. This may seem unrealistically large, but we could imagine a regular-sized ship supporting a long stiff cable dangling below it composed of carbon nanotubes that has the clocks distributed evenly on it, with the bottom-most clock at the radius R0. In fact, this might be a reasonable design for exploring spacetime events near a neutron star (although even carbon nanotubes would not be able to withstand the strain).

Kuramoto versus the Black Hole

Against expectation, exploring spacetime around a black hole is actually easier than around a neutron star, because there is no physical surface at the Schwarzschild radius RS, and gravitational tidal forces can be small for large black holes. In fact, one of the most unintuitive aspects of black holes pertains to a space ship falling into one. A distant observer sees the space ship contracting to zero length and the clocks slowing down and stopping as the space ship approaches the Schwarzschild radius asymptotically, but never crossing it. However, on board the ship, all appears normal as it crosses the Schwarzschild radius. To the astronaut inside, there is a gravitational potential difference along the space ship that causes the clocks at the base to run more slowly than the upper clocks, and length contraction affects the spacing a little, but otherwise there is no singularity as the event horizon is passed. This appears as a classic “paradox” of physics, with two different observers seeing paradoxically different behaviors.

The resolution of this paradox lies in the differential geometry of the two observers. Each approximates spacetime with a flat coordinate system that matches the local coordinates. The distant observer references the warped geometry to this “chart”, which produces the apparent divergence of the Schwarzschild metric at RS. However, the astronaut inside the space ship has her own flat chart to which she references the locally warped spacetime around the ship. Therefore, it is the differential changes, referenced to the ship’s coordinate origin, that capture gravitational time dilation and length contraction. Because the synchronization takes place in the local coordinate system of the ship, this is the coordinate system that goes into the dynamical equations for synchronization. Taking this approach, the shifts in the clock rates are given by the derivative of the metric as

where hn is the height of the n-th clock above R0.

Fig. 5 shows the entrainment plot for the black hole. The plot noticeably has a much smoother transition. In this higher mass case, the system does not have as many hard coupling transitions and instead exhibits smooth behavior for global coupling. This is the Kuramoto “cascade”. Contrast the behavior of Fig. 5 (left) to the classic Kuramoto transition of Fig. 2. The increasing frequency separations near the black hole produce a succession of frequency locks as the coupling coefficient increases. For comparison, the case of linear coupling along the cable is shown in Fig. 5 on the right. The cascade is now accompanied by interesting oscillations as one clock entrains with a neighbor, only to be pulled back by interaction with locked subclusters.

Fig. 5 The Kuramoto cascade for R0 = 1RS for global coupling (left) and linear coupling (right).

Now let us consider what role the spatial component of the metric tensor plays in the synchronization. The spatial component causes the space between the oscillators to decrease closer to the supermassive object. This would cause the lower oscillators, which entrain the slowest, to entrain faster because they are closer together, while the upper oscillators would entrain more slowly because they are farther apart, as in Fig. 6.

Fig. 6 The space ship experiencing gravitational length contraction that changes the separations among the clocks and further changes their respective gravitational potentials and clock rates.

In terms of the local coordinates of the space ship, the locations of each clock are

These values for hn can be put into the equation for ωn above. But it is clear that this produces a second order effect. Even at the event horizon, this effect is only a fraction of the shifts caused by g00 directly on the clocks. This is in contrast to what a distant observer sees–the clock separations decreasing to zero, which would seem to decrease the frequency shifts. But the synchronization coupling is performed in the ship frame, not the distant frame, so the astronaut can safely ignore this contribution.

As a final exploration of the black hole, before we leave it behind, look at the behavior for different values of R0 in Fig. 7. At 4RS, the Kuramoto transition is stretched. At 2RS there is a partial Kuramoto transition for the upper clocks, that then stretch into a cascade of locking events for the lower clocks. At 1RS we see the full cascade as before.

Fig. 7 The Kuramoto transition stretches into a cascade as the radius approaches the event horizon.

Note from the Editor:

This blog post by Moira Andrews is based on her final project for Phys 411, upper division undergraduate mechanics, at Purdue University. Students are asked to combine two seemingly-unrelated aspects of modern dynamics and explore the results. Moira thought of synchronizing clocks that are experiencing gravitational time dilation near a massive body. This is a nice example of how GR combined with nonlinear synchronization yields the novel phenomenon of a “synchronization cascade”.

Bibliography

Cheng, T.-P. (2010). Relativity, Gravitation and Cosmology. Oxford University Press.

Wikipedia contributors. Gravitational time dilation. Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/wiki/Gravitational_time_dilation

Keeton, C. (2014). Principles of Astrophysics. Springer.

Marmet, P. (n.d.). Natural Length Contraction Due to Gravity. Newton Physics – Links to Papers, Books and Web Sites. Retrieved April 27, 2021, from https://newtonphysics.on.ca/gravity/index.html

Nolte, D. D. (2019). Introduction to Modern Dynamics (2nd ed.). Oxford University Press, USA.