The Bountiful Bernoullis of Basel

The task of figuring out who’s who in the Bernoulli family is a hard nut to crack.  The Bernoulli name is attached to a dozen different theorems and physical principles in the history of science and mathematics, but each was contributed by one of four or five different Bernoullis of different generations—brothers, uncles, nephews and cousins.  What makes the task even more difficult is that any given Bernoulli might be known by several different aliases, while many of them shared the same name across generations.  To make things worse, they often worked and published on each other’s problems.

To attribute a theorem to a Bernoulli is not too different from attributing something to the famous mathematical consortium called Nicholas Bourbaki.  Each was more a team than an individual.  But in the case of Bourbaki the goal was selfless anonymity, while in the case of the Bernoullis it was sometimes the opposite—bald-faced competition and one-upmanship coupled with jealousy and resentment. Fortunately, the competition tended to breed more output, not less, and the world benefited from the family feud.

The Bernoulli Family Tree

The Bernoullis are intimately linked with the beautiful city of Basel, Switzerland, situated on the Rhine River where it leaves Switzerland and forms the border between France and Germany. The family moved there from the Netherlands in the 1600s to escape the Spanish occupation.

Basel Switzerland

The first Bernoulli born in Basel was Nikolaus Bernoulli (1623 – 1708), and he had four sons: Jakob I, Nikolaus, Johann I and Hieronymus I. The “I”s in this list refer to the fact, or the problem, that many of the immediate descendants took their father’s or uncle’s name. The long-lived family heritage in the roles of mathematician and scientist began with these four brothers. Jakob Bernoulli (1654 – 1705) was the eldest, followed by Nikolaus Bernoulli (1662 – 1717), Johann Bernoulli (1667 – 1748) and then Hieronymus (1669 – 1760). In this first generation of Bernoullis, the great mathematicians were Jakob and Johann. More mathematical equations today are named after Jakob, but Johann stands out because of the longevity of his contributions, the volume and impact of his correspondence, the fame of his students, and the number of offspring who also took up mathematics. Johann was also the worst when it came to jealousy and spitefulness—against his brother Jakob, whom he envied, and especially against his son Daniel, who he feared would eclipse him.

Jakob Bernoulli (aka James or Jacques or Jacob)

Jakob Bernoulli (1654 – 1705) was the eldest of the first generation of brothers and the first to establish himself as a university professor, holding the chair of mathematics at the university in Basel. While his interests ranged broadly, he is known for his correspondence with Leibniz, as he and his brother Johann were among the first mathematicians to apply Leibniz’s calculus to solving specific problems. The Bernoulli differential equation is named after him; it was one of the first general differential equations to be solved after the invention of the calculus. The Bernoulli inequality is one of the earliest attempts to find the Taylor expansion of exponentiation, which is also related to Bernoulli numbers, Bernoulli polynomials and the Bernoulli triangle. A special type of curve that looks like an ellipse with a twist in the middle is the lemniscate of Bernoulli.

Perhaps Jakob’s most famous work was his Ars Conjectandi (1713) on probability theory. Many mathematical theorems of probability named after a Bernoulli refer to this work, such as Bernoulli distribution, Bernoulli’s golden theorem (the law of large numbers), Bernoulli process and Bernoulli trial.

Fig. Bernoulli numbers in Jakob’s Ars Conjectandi (1713)

Johann Bernoulli (aka Jean or John)

Jakob was 13 years older than his brother Johann Bernoulli (1667 – 1748), whom he tutored in mathematics and who showed great promise. Unfortunately, Johann had that awkward combination of high self-esteem and low self-confidence, and he increasingly sought to show that he was better than his older brother. As both brothers began corresponding with Leibniz on the new calculus, they also began to compete with one another. Driven by his insecurity, Johann also began to steal ideas from his older brother and claim them as his own.

A classic example of this is the famous brachistochrone problem that Johann posed in the Acta Eruditorum in 1696. Johann at this time was a professor of mathematics at Groningen in the Netherlands. He challenged the mathematical world to find the path of least time for a mass to travel under gravity between two points. He had already found one solution himself and thought that no one else would succeed. Yet when he heard that his brother Jakob was responding to the challenge, he spied out his result and then claimed it as his own. Within a year and a half there were four additional solutions—all correct—using different approaches.  One of the most famous responses was by Newton, who, as usual, did not divulge his method, but who is reported to have solved the problem in a day.  Others who contributed solutions were Gottfried Leibniz, Ehrenfried Walther von Tschirnhaus, and Guillaume de l’Hôpital, in addition to Jakob.

The participation of de l’Hôpital in the challenge was a particular thorn in Johann’s side, because de l’Hôpital had years earlier paid Johann to tutor him in Leibniz’s new calculus at a time when l’Hôpital knew nothing of the topic. What is today known as l’Hôpital’s rule on limits of ratios was in fact taught to l’Hôpital by Johann. Johann never forgave l’Hôpital for publicizing the result—but l’Hôpital had the discipline to write a textbook while Johann did not. To be fair, l’Hôpital did give Johann credit in the opening of his book, but that was not enough for Johann, who continued to carry his resentment.

When Jakob died of tuberculosis in 1705, Johann campaigned to replace him in his position as professor of mathematics and succeeded. In that chair, Johann had many famous students (Euler foremost among them, but also Maupertuis and Clairaut). Part of Johann’s enduring fame stems from his many associations and extensive correspondences with many of the top mathematicians of the day. For instance, he had a regular correspondence with the mathematician Varignon, and it was in one of these letters that Johann proposed the principle of virtual velocities which became a key axiom for Joseph Lagrange’s later epic work on the foundations of mechanics (see Chapter 4 in Galileo Unbound).

Johann remained in his chair of mathematics at Basel for almost 40 years. This longevity, and the fame of his name, guaranteed that he taught some of the most talented mathematicians of the age, including his most famous student Leonhard Euler, who is held by some as one of the four greatest mathematicians of all time (the others were Archimedes, Newton and Gauss) [1].

Nikolaus I Bernoulli

Nikolaus I Bernoulli (1687 – 1759, son of Nikolaus) was the cousin of Daniel and nephew to both Jakob and Johann. He was a well-known mathematician in his time (he briefly held Galileo’s chair in Padua), though few specific discoveries are attributed to him directly. He is perhaps most famous today for posing the “St. Petersburg Paradox” of economic game theory. Ironically, he posed this paradox while his cousin Nikolaus II Bernoulli (brother of Daniel Bernoulli) was actually in St. Petersburg with Daniel.

The St. Petersburg paradox is a simple game of chance played with a fair coin: a player buys in at a certain price to play for a pot that starts at $2, doubles each time the coin lands heads, and is paid out at the first tail. The expected payout of this game is infinite, so it seems that anyone should want to buy in at any cost. But most people would be unlikely to buy in even for a modest $25. Why? And is this perception correct? The answer was only partially provided by Nikolaus. The definitive answer was given by his cousin Daniel Bernoulli.
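As a sanity check on these claims, here is a minimal simulation (a sketch of my own, not from the original text) of the game as described: the pot starts at $2 and doubles on each head. Although the expectation is infinite, the average payout over a finite number of games grows only very slowly, roughly like the logarithm of the number of games played.

```python
import random

def play_once(rng):
    """One round of the St. Petersburg game: the pot starts at $2
    and doubles on each head; the game pays out at the first tail."""
    pot = 2
    while rng.random() < 0.5:  # heads with probability 1/2
        pot *= 2
    return pot

rng = random.Random(42)
n_games = 100_000
avg = sum(play_once(rng) for _ in range(n_games)) / n_games
print(f"average payout over {n_games} games: ${avg:.2f}")
```

The sample average is modest and highly variable from run to run, which is exactly why the infinite expectation feels so counterintuitive.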

Daniel Bernoulli

Daniel Bernoulli (1700 – 1782, son of Johann I) is my favorite Bernoulli. While most of the other Bernoullis were more mathematicians than scientists, Daniel Bernoulli was more physicist than mathematician. When we speak of “Bernoulli’s principle” today, the fluid-dynamics principle that helps explain how birds and airplanes fly, we are referring to his work on hydrodynamics. He was one of the earliest originators of economic dynamics through his invention of the utility function and diminishing returns, and he was the first to clearly state the principle of superposition, which today lies at the heart of the physics of waves and quantum technology.

Daniel Bernoulli

While in St. Petersburg, Daniel conceived of the solution to the St. Petersburg paradox (he is the one who actually named it). To explain why few people would pay high stakes to play the game, he devised a “utility function” with “diminishing marginal utility,” in which the willingness to play depends on one’s wealth. Obviously a wealthy person would be willing to pay more than a poor person. Daniel stated:

The determination of the value of an item must not be based on the price, but rather on the utility it yields…. There is no doubt that a gain of one thousand ducats is more significant to the pauper than to a rich man though both gain the same amount.

He created a log utility function that allowed one to calculate the highest stakes a person should be willing to take based on their wealth. Indeed, a millionaire may only wish to pay $20 per game to play, in part because the average payout over a few thousand games is only about $5 per game. It is only in the limit of an infinite number of games (and an infinite bank account by the casino) that the average payout diverges.
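Daniel’s log-utility resolution can be made concrete with a short numerical sketch (the helper below is my own construction, not from the original text). The maximum buy-in price c for a player with wealth w is the price at which the expected log utility of playing equals the utility of not playing, E[ln(w − c + payout)] = ln(w), which can be solved by bisection.

```python
import math

def max_buy_in(wealth, terms=200):
    """Largest price c a log-utility player with given wealth should pay.
    The payout is 2**k with probability 2**-k for k = 1, 2, ...;
    solve E[ln(wealth - c + 2**k)] = ln(wealth) by bisection."""
    def expected_utility(c):
        return sum(2.0 ** -k * math.log(wealth - c + 2.0 ** k)
                   for k in range(1, terms))
    lo, hi = 0.0, float(wealth)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if expected_utility(mid) > math.log(wealth):
            lo = mid  # still worth playing at this price; c can be larger
        else:
            hi = mid
    return lo

print(round(max_buy_in(10 ** 6), 2))
```

For a millionaire this lands on the order of $20, consistent with the stakes quoted above, and the threshold rises (slowly) with wealth.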

Daniel Bernoulli, Hydrodynamica (1738)

Johann II Bernoulli

Daniel’s brother Johann II (1710 – 1790) published in 1736 one of the most important texts on the theory of light in the period between Newton and Euler. Although the work looks woefully anachronistic today, it provided one of the first serious attempts at understanding the forces acting on light rays and describing them mathematically [5]. Euler based his new theory of light, published in 1746, on much of the groundwork laid down by Johann II. Euler came very close to proposing a wave-like theory of light, complete with a connection between the frequency of wave pulses and colors, that would have preempted Thomas Young by more than 50 years. Euler, Daniel and Johann II, as well as Nikolaus II, were all contemporaries as students of Johann I in Basel.

More Relations

Over the years, there were many more Bernoullis who followed in the family tradition. Some of these include:

Johann II Bernoulli (1710–1790; also known as Jean), son of Johann, mathematician and physicist

Johann III Bernoulli (1744–1807; also known as Jean), son of Johann II, astronomer, geographer and mathematician

Jacob II Bernoulli (1759–1789; also known as Jacques), son of Johann II, physicist and mathematician

Johann Jakob Bernoulli (1831–1913), art historian and archaeologist; noted for his Römische Ikonographie (1882 onwards) on Roman Imperial portraits

Ludwig Bernoully (1873 – 1928), German architect in Frankfurt

Hans Bernoulli (1876–1959), architect and designer of the Bernoullihäuser in Zurich and Grenchen SO

Elisabeth Bernoulli (1873 – 1935), suffragette and campaigner against alcoholism

Notable marriages to the Bernoulli family include the Curies (Pierre Curie was a direct descendant of Johann I) as well as the German author Hermann Hesse (married to a direct descendant of Johann I).

References

[1] Calinger, Ronald S., Leonhard Euler: Mathematical Genius in the Enlightenment, Princeton University Press (2015).

[2] Euler L and Truesdell C. Leonhardi Euleri Opera Omnia. Series secunda: Opera mechanica et astronomica XI/2. The rational mechanics of flexible or elastic bodies 1638-1788. (Zürich: Orell Füssli, 1960).

[3] D Speiser, Daniel Bernoulli (1700-1782), Helvetica Physica Acta 55 (1982), 504-523.

[4] Leibniz GW. Briefwechsel zwischen Leibniz, Jacob Bernoulli, Johann Bernoulli und Nicolaus Bernoulli. (Hildesheim: Olms, 1971).

[5] Hakfoort C. Optics in the age of Euler : conceptions of the nature of light, 1700-1795. (Cambridge: Cambridge University Press, 1995).

Brook Taylor’s Infinite Series

When Leibniz claimed in 1704, in a published article in Acta Eruditorum, to have invented the differential calculus in 1684 prior to anyone else, the British mathematicians rushed to Newton’s defense. They knew Newton had developed his fluxions as early as 1666 and certainly no later than 1676. Thus ensued one of the most bitter and partisan priority disputes in the history of math and science that pitted the continental Leibnizians against the insular Newtonians. Although a (partisan) committee of the Royal Society investigated the case and found in favor of Newton, the affair had the effect of insulating British mathematics from Continental mathematics, creating an intellectual desert as the forefront of mathematical analysis shifted to France. Only when George Green filled his empty hours with the latest advances in French analysis, as he tended his father’s grist mill, did British mathematics wake up. Green self-published his epic work in 1828 that introduced what is today called Green’s Theorem.

Yet the period from 1700 to 1828 was not a complete void for British mathematics. A few points of light shone out in the darkness: Thomas Simpson, Colin Maclaurin, Abraham de Moivre, and Brook Taylor (1685 – 1731), who came from an English family that had been elevated to minor nobility by an act of Cromwell during the English Civil War.

Growing up in Bifrons House


View of Bifrons House from sometime in the late 1600s, showing the Jacobean mansion and the extensive south gardens.

When Brook Taylor was ten years old, his father bought Bifrons House [1], one of the great English country houses, located in the county of Kent just a mile south of Canterbury.  English country houses were major cultural centers and sources of employment for 300 years, from the seventeenth century through the early 20th century. While usually the country homes of nobility of all levels, from barons to dukes, they were sometimes owned by wealthy families or by members of Parliament, as was the case for the Taylors. Bifrons House had been built around 1610 in the Jacobean architectural style that was popular during the reign of James I.  The house had a stately front façade, with cupola-topped square towers, gable ends to the roof, porches of a renaissance form, and extensive manicured gardens on the south side.  Bifrons House remained the seat of the Taylor family until 1824, when they moved to a larger house nearby and let Bifrons first to a Marquess and then in 1828 to Lady Byron (ex-wife of Lord Byron) and her daughter Ada Lovelace (the mathematician famous for her contributions to early computer science). The Taylors sold the house in 1830 to the first Marquess Conyngham.

Taylor’s life growing up in the rarified environment of Bifrons House must have been like scenes out of the popular BBC TV drama Downton Abbey.  The house had a large staff of servants and large grounds at the edge of a large park near the town of Patrixbourne. Life as the heir to the estate would have been filled with social events and fine arts that included music and painting. Taylor developed a life-long love of music during his childhood, later collaborating with Isaac Newton on a scientific investigation of music (it was never published). He was also an amateur artist, and one of the first books he published after being elected to the Royal Society was on the mathematics of linear perspective, which contained some of the early results of projective geometry.

There is a beautiful family portrait in the National Portrait Gallery in London painted by John Closterman around 1696. The portrait is of the children of John Taylor about a year after he purchased Bifrons House. The painting is notable because Brook, the heir to the family fortunes, is being crowned with a wreath by his two older sisters (who would not inherit). Brook was only about 11 years old at the time and was already famous within his family for his ability with music and numbers.

Portrait of the children of John Taylor around 1696. Brook Taylor is the boy being crowned by his sisters on the left. (National Portrait Gallery)

Taylor never had to go to school, being completely tutored at home until he entered St. John’s College, Cambridge, in 1701.  He took mathematics classes from Machin and Keill and graduated in 1709.  The allowance from his father was sufficient to allow him to lead the life of a gentleman scholar, and he was elected a member of the Royal Society in 1712 and secretary of the Society just two years later.  During the following years he was active as a rising mathematician until 1721, when he married a woman of good family but of no wealth.  The support of a house like Bifrons always took money, and the new wife’s lack of it was enough for Taylor’s father to throw the new couple out.  Unfortunately, his wife died in childbirth along with the child, and Taylor returned home in 1723.  These family troubles ended his main years of productivity as a mathematician.

Portrait of Brook Taylor

Methodus incrementorum directa et inversa

Under the eye of the Newtonian mathematician Keill at Cambridge, Taylor became a staunch supporter and user of Newton’s fluxions. Just after he was elected as a member of the Royal Society in 1712, he participated in an investigation of the priority for the invention of the calculus that pitted the British Newtonians against the Continental Leibnizians. The Royal Society found in favor of Newton (obviously) and raised the possibility that Leibniz learned of Newton’s ideas during a visit to England just a few years before Leibniz developed his own version of the differential calculus.

A re-evaluation of the priority dispute from today’s perspective attributes the calculus to both men. Newton clearly developed it first, but did not publish until much later. Leibniz published first and generated the excitement for the new method that dispersed its use widely. He also took an alternative route to the differential calculus that is demonstrably different from Newton’s. Did Leibniz benefit from possibly knowing Newton’s results (but not his methods)? Probably. But that is how science is supposed to work … building on the results of others while bringing new perspectives. Leibniz’s methods and his notations were superior to Newton’s, and the calculus we use today is closer to Leibniz’s version than to Newton’s.

Once Taylor was introduced to Newton’s fluxions, he latched on and helped push its development. The same year (1715) that he published a book on linear perspective for art, he also published a ground-breaking book on the use of the calculus to solve practical problems. This book, Methodus incrementorum directa et inversa, introduced several new ideas, including finite difference methods (which are used routinely today in numerical simulations of differential equations). It also considered possible solutions to the equation for a vibrating string for the first time.

The vibrating string is one of the simplest problems in “continuum mechanics”, but it posed a severe challenge to the Newtonian physics of point particles. It was only much later that D’Alembert used Newton’s third law of action and reaction to eliminate internal forces, deriving D’Alembert’s principle for the net force on an extended body. Yet Taylor used finite differences to treat the line mass of the string in a way that yielded a possible solution in the form of a sine function. Taylor was the first to propose that a sine function was the form of the string displacement during vibration. This idea would be taken up later by D’Alembert (who first derived the wave equation), by Euler (who vehemently disagreed with D’Alembert’s solutions) and by Daniel Bernoulli (who was the first to suggest that it is not just a single sine function, but a sum of sine functions, that describes the string’s motion — the principle of superposition).
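Taylor’s finite-difference view of the string can be illustrated with a modern sketch (my own construction, not from the original text): on a discretized string with fixed ends, a sampled sine is an exact eigenvector of the second-difference operator, which is why a sine shape is preserved by the dynamics as a normal mode.

```python
import math

def second_diff(u):
    """(D2 u)_j = u_{j-1} - 2 u_j + u_{j+1}, with u = 0 at both fixed ends."""
    padded = [0.0] + u + [0.0]
    return [padded[j - 1] - 2 * padded[j] + padded[j + 1]
            for j in range(1, len(u) + 1)]

# Sample a half-sine on N interior points of the string.
N = 8
u = [math.sin(math.pi * j / (N + 1)) for j in range(1, N + 1)]

d2 = second_diff(u)
lam = 2 * (math.cos(math.pi / (N + 1)) - 1)  # eigenvalue of D2 for this mode
print(all(abs(d2[j] - lam * u[j]) < 1e-12 for j in range(N)))
```

The second difference returns the same sampled sine multiplied by a constant, so the shape is unchanged as the string vibrates.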

Of course, the most influential idea in Taylor’s 1715 book was his general use of an infinite series to describe a curve.

Taylor’s Series

Infinite series became a major new tool in the toolbox of analysis with the publication of John Wallis’s Arithmetica Infinitorum in 1656. Shortly afterwards many series were published, such as Nikolaus Mercator’s series (1668)

$$ \ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots $$

and James Gregory’s series (1668)

$$ \arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots $$

And of course Isaac Newton’s generalized binomial theorem, which he famously worked out during the plague years of 1665 – 1666

$$ (1+x)^{\alpha} = 1 + \alpha x + \frac{\alpha(\alpha-1)}{2!}x^2 + \frac{\alpha(\alpha-1)(\alpha-2)}{3!}x^3 + \cdots $$

But these consisted mainly of special cases that had been worked out one by one. What was missing was a general method that could yield a series expression for any curve.

Taylor used concepts of finite differences as well as infinitesimals to derive his formula for expanding a function as a power series around any point. His derivation in Methodus incrementorum directa et inversa is not easily recognized today. Using difference tables, and ideas from Newton’s fluxions that viewed functions as curves traced out as a function of time, he arrived at the somewhat cryptic expression

where the “dots” are time derivatives, x stands for the ordinate (the function), v is a finite difference, and z is the abscissa moving with constant speed. If the abscissa moves with unit speed, then this becomes Taylor’s series (in modern notation)
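In modern notation, the expansion about a point a with increment h reads (reconstructed here in its standard form):

```latex
f(a + h) = f(a) + f'(a)\,h + \frac{f''(a)}{2!}\,h^2 + \cdots
         = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,h^n
```

Here h plays the role of Taylor’s finite increment v in the limit of infinitesimal steps.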

The term “Taylor’s series” was probably first used by L’Huillier in 1786, although Condorcet attributed the equation to both Taylor and d’Alembert in 1784. It was Lagrange in 1797 who immortalized Taylor by claiming that Taylor’s theorem was the foundation of analysis.

Example: sin(x)

Expand sin(x) around x = π:

$$ \sin(x) = -(x-\pi) + \frac{(x-\pi)^3}{3!} - \frac{(x-\pi)^5}{5!} + \cdots $$

This is related to the expansion around x = 0 (also known as a Maclaurin series):

$$ \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots $$
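A quick numerical check (a sketch of my own, not from the original text) shows how rapidly the partial sums of the expansion about a = π converge for x near π:

```python
import math

def sin_taylor_about_pi(x, n_terms=8):
    """Partial sum of sin(x) = -(x-pi) + (x-pi)^3/3! - (x-pi)^5/5! + ..."""
    h = x - math.pi
    return sum((-1) ** (k + 1) * h ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

print(sin_taylor_about_pi(3.0), math.sin(3.0))  # the two values agree closely
```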

Example: arctan(x)

To get a feel for how to apply Taylor’s theorem to a function like arctan, begin with

$$ y = \arctan(x) \quad\Longleftrightarrow\quad x = \tan(y) $$

and take the derivative of both sides

$$ dx = \sec^2(y)\,dy = \left(1 + \tan^2(y)\right)dy $$

Rewrite this as

$$ \frac{dy}{dx} = \frac{1}{1 + \tan^2(y)} $$

and substitute the expression for y

$$ \frac{dy}{dx} = \frac{1}{1 + x^2} = 1 - x^2 + x^4 - x^6 + \cdots $$

and integrate term by term to arrive at

$$ \arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots $$
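The resulting series, arctan(x) = x − x³/3 + x⁵/5 − ⋯, converges for |x| ≤ 1 (slowly near the endpoints). A short numerical check (my own sketch, not from the original text):

```python
import math

def arctan_series(x, n_terms=60):
    """Partial sum of Gregory's series: x - x^3/3 + x^5/5 - x^7/7 + ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1)
               for k in range(n_terms))

print(arctan_series(0.5), math.atan(0.5))  # the two values agree closely
```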

This is James Gregory’s famous series. Although the math here is modern and takes only a few lines, it parallels Gregory’s approach. But Gregory had to invent aspects of calculus as he went along — his derivation covers many dense pages. In the priority dispute between Leibniz and Newton, Gregory is usually overlooked as an independent inventor of many aspects of the calculus. This is partly because Gregory acknowledged that Newton had invented it first, and he delayed publishing to give Newton priority.

Two-Dimensional Taylor’s Series

The ideas behind Taylor’s series generalize to any number of dimensions. For a scalar function of two variables it takes the form (out to second order)

$$ f(\mathbf{r}) \approx f(\mathbf{r}_0) + J\,\Delta\mathbf{r} + \frac{1}{2}\,\Delta\mathbf{r}^{T} H\,\Delta\mathbf{r} $$

where Δr = r − r₀, J is the Jacobian matrix (a row vector for a scalar function) and H is the Hessian matrix, defined for the scalar function as

$$ J = \begin{pmatrix} \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} \end{pmatrix} $$

and

$$ H = \begin{pmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x\,\partial y} \\ \dfrac{\partial^2 f}{\partial x\,\partial y} & \dfrac{\partial^2 f}{\partial y^2} \end{pmatrix} $$

As a concrete example, consider the two-dimensional Gaussian function

$$ f(x,y) = e^{-(x^2 + y^2)} $$

The Jacobian and Hessian matrices are

$$ J = -2\,e^{-(x^2+y^2)} \begin{pmatrix} x & y \end{pmatrix}, \qquad H = 2\,e^{-(x^2+y^2)} \begin{pmatrix} 2x^2 - 1 & 2xy \\ 2xy & 2y^2 - 1 \end{pmatrix} $$

which are the first- and second-order coefficients of the Taylor series.
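A short numerical sketch (my own, assuming the two-dimensional Gaussian f(x, y) = exp(−(x² + y²))) confirms the analytic Jacobian and Hessian entries against central finite differences:

```python
import math

def f(x, y):
    """Two-dimensional Gaussian f(x, y) = exp(-(x^2 + y^2))."""
    return math.exp(-(x * x + y * y))

def jacobian(x, y):
    """Analytic gradient: (df/dx, df/dy) = -2 exp(-(x^2+y^2)) * (x, y)."""
    g = f(x, y)
    return (-2 * x * g, -2 * y * g)

def hessian(x, y):
    """Analytic second derivatives (fxx, fxy, fyy) of the Gaussian."""
    g = f(x, y)
    return ((4 * x * x - 2) * g, 4 * x * y * g, (4 * y * y - 2) * g)

# Compare with central finite differences at an arbitrary point.
x0, y0, h = 0.3, -0.7, 1e-5
fx_num = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fxy_num = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
           - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h * h)
print(fx_num, jacobian(x0, y0)[0])
print(fxy_num, hessian(x0, y0)[1])
```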

References

[1] “A History of Bifrons House”, B. M. Thomas, Kent Archeological Society (2017)

[2] L. Feigenbaum, “Brook Taylor and the method of increments,” Archive for History of Exact Sciences, vol. 34, no. 1-2, pp. 1-140 (1985)

[3] A. Malet, “James Gregorie on tangents and the ‘Taylor rule’ for series expansions,” Archive for History of Exact Sciences, vol. 46, no. 2, pp. 97-137 (1993)

[4] E. Hairer and G. Wanner, Analysis by Its History (Springer, 1996)

Painting of Bifrons Park by Jan Wyck c. 1700

Hermann Grassmann’s Nimble Wedge Product


Hyperspace is neither a fiction nor an abstraction. Every interaction we have with our everyday world occurs in high-dimensional spaces of objects and coordinates and momenta. This dynamical hyperspace—also known as phase space—is as real as mathematics, and physics in phase space can be calculated and used to predict complex behavior. Although phase space can extend to thousands of dimensions, our minds are incapable of thinking even in four dimensions—we have no ability to visualize such things. 


            Part of the trick of doing physics in high dimensions is having the right tools and symbols with which to work.  For high-dimensional math and physics, one such indispensable tool is Hermann Grassmann’s wedge product. When I first saw the wedge product, probably in some graduate-level dynamics textbook, it struck me as a little cryptic.  It is sort of like a vector product, but not quite, and it operates on things with an intimidating name—“forms”. I kept trying to “understand” forms as if they were types of vectors.  After all, under special circumstances, forms and wedges do produce some vector identities.  It was only after I stepped back and asked myself how they were constructed that I realized that forms and wedge products belong to a simple kind of algebra, called exterior algebra.  Exterior algebra is an especially useful algebra with simple rules.  It goes far beyond vectors while harking back to a time before vectors even existed.

Hermann Grassmann: A Backwater Genius

We are so accustomed to working with oriented objects, like vectors that have a tip and tail, that it is hard to think of a time when that wouldn’t have been natural.  Yet in the mid-1800s, almost no one was thinking of orientations as a part of geometry, and it took real genius to conceive of oriented elements, how to manipulate them, and how to represent them graphically and mathematically.  At a time when some of the greatest mathematicians lived—Weierstrass, Möbius, Cauchy, Gauss, Hamilton—it turned out to be a high school teacher from a backwater in Prussia who developed the theory for the first time.

Hermann Grassmann

            Hermann Grassmann was the son of a high school teacher at the Gymnasium in Stettin, Prussia, (now Szczecin, Poland) and he inherited his father’s position, but at a lower level.  Despite his lack of background and training, he had serious delusions of grandeur, aspiring to teach mathematics at the university in Berlin, even when he was only allowed to teach the younger high school students basic subjects.  Nonetheless, Grassmann embarked on a program to educate himself, attending classes at Berlin in mathematics.  As part of the requirements to be allowed to teach mathematics to the senior high-school students, he had to submit a thesis on an appropriate topic. 

Modern Szczecin.

            For years, he had been working on an idea that had originally come from his father about a mathematical theory that could manipulate abstract objects or concepts.  He had taken this vague thought and had slowly developed it into a rigorous mathematical form with symbols and manipulations.  His mind was one of those that could permute endlessly, and he discovered dozens of different ways that objects could be defined and combined, and he wrote them all down in a tome of excessive size and complexity.  When it was time to submit the thesis to the examiners, he had created a broad new system of algebra—at a time when no one recognized what a new algebra even meant, especially not his examiners, who could understand none of it.  Fortunately, Grassmann had been corresponding with the famous German mathematician August Möbius over his ideas, and Möbius was encouraging and supportive, and the examiners accepted his thesis and allowed him to teach the upper classmen at his high school. 

The Gymnasium in Stettin

            Encouraged by his success, Grassmann hoped that Möbius would help him climb even higher to teach in Berlin.  Convinced that he had discovered a fundamentally new type of mathematics (he actually had), he decided to publish his thesis as a book under the title Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (The Theory of Linear Extension, a New Branch of Mathematics).  He published it out of his own pocket.  It is some measure of his delusion that he had thousands printed, but almost none sold, and piles of the books were stored away to be used later as scrap paper. Möbius likewise distanced himself from Grassmann and his obsessive theories. Discouraged, Grassmann turned his back on mathematics, though he later achieved fame in the field of linguistics.  (For more on Grassmann’s ideas and struggle for recognition, see Chapter 4 of Galileo Unbound).

Excerpt from Grassmann’s Ausdehnungslehre (Google Books).

The Odd Identity of Nicholas Bourbaki

If you look up the publication history of the famous French mathematician Nicholas Bourbaki, you will be amazed to see a publication history that spans from 1935 to 2018—more than 80 years of publications!  But if you look in the obituaries, you will see that he died in 1968.  It’s pretty impressive to still be publishing 50 years after your death.  J. R. R. Tolkien has been doing that regularly, but few others spring to mind.

            Actually, you have been duped!  Nicholas is a fiction, constructed as a hoax by a group of French mathematicians who were nonetheless deadly serious about the need for a rigorous foundation on which to educate the new wave of mathematicians in the mid 20th century.  The group was formed during a mathematics meeting in 1934, organized by André Weil and joined by Henri Cartan (son of Élie Cartan), Claude Chevalley, Jean Coulomb, Jean Delsarte, Jean Dieudonné, Charles Ehresmann, René de Possel, and Szolem Mandelbrojt (uncle of Benoit Mandelbrot).  They picked the last name of a French general, and Weil’s wife named him Nicholas.  The group began publishing books under this pseudonym in 1935 and has continued to the present day.  While their publications were entirely serious, the group from time to time had fun with mild hoaxes, such as posting his obituary on one occasion and a wedding announcement of his daughter on another. 

            The wedge product symbol took several years to mature.  Élie Cartan’s book on differential forms published in 1945 used brackets to denote the product instead of the wedge.  In Chevalley’s book of 1946 he does not use the wedge but a small square, and the book Chevalley wrote in 1951, Introduction to the Theory of Algebraic Functions of One Variable, still uses a small square.  But in 1954, Chevalley uses the wedge symbol in his book on spinors.  He refers to his own book of 1951 (which did not use the wedge) and also to the 1943 version of Bourbaki.  The few existing copies of the 1943 Algebra by Bourbaki lie in obscure European libraries.  The 1973 edition of the book does indeed use the wedge, although I have yet to get my hands on the original 1943 version.  Therefore, the wedge symbol seems to have originated with Chevalley sometime between 1951 and 1954 and gained widespread use after that.

Exterior Algebra

Exterior algebra begins with the definition of an operation on elements.  The elements, for example (u, v, w, x, y, z, etc.), are drawn from a vector space in its most abstract form as “tuples”, such that x = [x1, x2, x3, …, xn] in an n-dimensional space.  On these elements there is an operation called the “wedge product”, the “exterior product”, or the “Grassmann product”.  It is denoted, for example between two elements x and y, as x ∧ y.  It captures the sense of orientation through anti-commutativity, such that

$$ x \wedge y = - y \wedge x $$

As simple as this definition is, it sets up virtually all later manipulations of vectors and their combinations.  For instance, we can immediately prove (try it yourself) that the wedge product of a vector element with itself equals zero

$$ x \wedge x = 0 $$

Once the elements of the vector space have been defined, it is possible to define “forms” on the vector space.  For instance, a 1-form, also known as a vector, is any function

X = a e1 + b e2 + c e3

where a, b, c are scalar coefficients.  The wedge product of two 1-forms

X = a1 e1 + a2 e2 + a3 e3 and Y = b1 e1 + b2 e2 + b3 e3

yields a 2-form, also known as a bivector.  This specific example makes a direct connection to the cross product in 3-space as

X^Y = (a2 b3 - a3 b2) e2^e3 + (a3 b1 - a1 b3) e3^e1 + (a1 b2 - a2 b1) e1^e2

where the unit vectors are mapped onto the 2-forms

e1 ↔ e2^e3, e2 ↔ e3^e1, e3 ↔ e1^e2

Indeed, many of the vector identities of 3-space can be expressed in terms of exterior products, but these are just special cases, and the wedge product is more general.  For instance, while the triple vector cross product is not associative, the wedge product is associative

(x^y)^z = x^(y^z)

which can give it an advantage when performing algebra on r-forms.  Expressing the wedge product in terms of vector components

x = xi ei and y = yj ej

yields the immediate generalization to any number of dimensions (using the Einstein summation convention)

x^y = xi yj (ei^ej)

In this way, the wedge product expresses relationships in any number of dimensions.
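As a quick aside, the component rule above is easy to verify numerically. Here is a minimal Python sketch (the function name wedge is my own illustrative choice, not a standard library call) that builds the antisymmetric coefficients of x^y and checks the properties developed so far:

```python
def wedge(x, y):
    """Antisymmetric coefficients of the wedge product of two 1-forms:
    c[i][j] = x_i * y_j - x_j * y_i, so that x^y = sum over i<j of c[i][j] ei^ej."""
    n = len(x)
    return [[x[i] * y[j] - x[j] * y[i] for j in range(n)] for i in range(n)]

x, y = [1, 2, 3], [4, 5, 6]
c, d = wedge(x, y), wedge(y, x)

# Anti-commutativity: x^y = -(y^x)
assert all(c[i][j] == -d[i][j] for i in range(3) for j in range(3))

# A vector wedged with itself vanishes: x^x = 0
assert all(v == 0 for row in wedge(x, x) for v in row)

# In 3-space the independent components reproduce the cross product x × y
cross = (c[1][2], c[2][0], c[0][1])  # (c23, c31, c12)
```

The same function works unchanged in any number of dimensions, which is precisely the point of the component formula.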

            A 3-form is constructed as the wedge product of 3 vectors

x^y^z = εijk xi yj zk (e1^e2^e3)

where the Levi-Civita permutation symbol has been introduced such that

εijk = +1 for an even permutation of (1,2,3), εijk = -1 for an odd permutation, and εijk = 0 if any index repeats

Note that in 3-space there can be no 4-form, because one of the basis elements would be repeated, rendering the product zero.  Therefore, the most general multilinear form for 3-space is

A = a0 + a1 e1 + a2 e2 + a3 e3 + a12 (e1^e2) + a23 (e2^e3) + a31 (e3^e1) + a123 (e1^e2^e3)

with 2^3 = 8 elements: one scalar, three 1-forms, three 2-forms and one 3-form.  In 4-space there are 2^4 = 16 elements: one scalar, four 1-forms, six 2-forms, four 3-forms and one 4-form.  So, the number of elements rises exponentially with the dimension of the space.
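The counting follows from choosing which basis elements appear in each wedge: the number of independent k-forms in n dimensions is the binomial coefficient "n choose k", and the total over all k is 2 to the power n. A quick check in Python (form_counts is an illustrative name of my own):

```python
from math import comb

def form_counts(n):
    """Number of independent k-form basis elements in n-dimensional space,
    listed for k = 0, 1, ..., n."""
    return [comb(n, k) for k in range(n + 1)]

assert form_counts(3) == [1, 3, 3, 1]     # scalar, 1-forms, 2-forms, 3-form
assert form_counts(4) == [1, 4, 6, 4, 1]  # sums to 2**4 = 16
assert sum(form_counts(4)) == 2**4
```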

            At this point, we have developed a rich multilinear structure, all based on the simple anti-commutativity of elements x^y = -y^x.  This structure is closely related to a Clifford algebra, named after William Kingdon Clifford (1845-1879), second wrangler at Cambridge and close friend of Arthur Cayley.  But the wedge product is not just algebra—there is also a straightforward geometric interpretation of wedge products that makes them useful when extending theories of surfaces and volumes into higher dimensions.

Geometric Interpretation

In Euclidean space, a cross product is related to areas and volumes of parallelepipeds. Wedge products are more general than cross products, and they generalize the idea of areas and volumes to higher dimensions. As an illustration, an area 2-form is shown in Fig. 1 and a volume 3-form in Fig. 2.

Fig. 1 Area 2-form showing how the area of a parallelogram is related to the wedge product. The 2-form is an oriented area perpendicular to the unit vector.
Fig. 2 A volume 3-form in Euclidean space. The volume of the parallelepiped is equal to the magnitude of the wedge product of the three vectors u, v, and w.

The wedge product is not limited to 3 dimensions nor to Euclidean spaces. This is the power and the beauty of Grassmann’s invention. It also generalizes naturally to the differential geometry of manifolds, producing what are called differential forms. When integrating in higher dimensions or on non-Euclidean manifolds, the most appropriate approach is to use wedge products and differential forms, which will be the topic of my next blog on the generalized Stokes’ theorem.
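To make the geometric interpretation concrete, here is a short Python sketch (the function names are my own) that computes the area and volume of Figs. 1 and 2 as magnitudes of wedge products:

```python
import math

def parallelogram_area(u, v):
    """Area of the parallelogram spanned by u and v in 3-space:
    the magnitude of the 2-form u^v (the same as |u x v|)."""
    c23 = u[1] * v[2] - u[2] * v[1]
    c31 = u[2] * v[0] - u[0] * v[2]
    c12 = u[0] * v[1] - u[1] * v[0]
    return math.sqrt(c23**2 + c31**2 + c12**2)

def parallelepiped_volume(u, v, w):
    """Volume spanned by u, v and w: the magnitude of the single
    coefficient of the 3-form u^v^w, i.e. |det[u v w]|."""
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det)

# A 2-by-3 rectangle in the x-y plane, and a 2-by-3-by-4 box
assert parallelogram_area([2, 0, 0], [0, 3, 0]) == 6.0
assert parallelepiped_volume([2, 0, 0], [0, 3, 0], [0, 0, 4]) == 24
```

In higher dimensions the same pattern continues: the coefficients of a k-form are k-by-k determinants, which measure the k-dimensional volume of the spanned parallelotope.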

Further Reading

1.         Dieudonné, J., The Tragedy of Grassmann. Séminaire de Philosophie et Mathématiques 1979, fascicule 2, 1-14.

2.         Fearnley-Sander, D., Hermann Grassmann and the Creation of Linear Algebra. American Mathematical Monthly 1979, 86 (10), 809-817.

3.         Nolte, D. D., Galileo Unbound: A Path Across Life, the Universe and Everything. Oxford University Press: 2018.

4.         Vargas, J. G., Differential Geometry for Physicists and Mathematicians: Moving Frames and Differential Forms: From Euclid Past Riemann. 2014; p 1-293.

George Green’s Theorem

For a thirty-year-old miller’s son with only one year of formal education, George Green had a strange hobby—he read papers in mathematics journals, mostly from France.  This was his escape from a dreary life running a flour mill on the outskirts of Nottingham, England, in 1823.  The tall windmill owned by his father required 24-hour attention, with farmers depositing their grain at all hours and the mechanisms and sails needing constant upkeep.  During his one year in school, when he was eight years old, he had become fascinated by maths, and he had nurtured this interest after leaving school a year later, stealing away to the top floor of the mill to pore over books he scavenged, devouring and exhausting all that English mathematics had to offer.  By the time he was thirty, his father’s business had become highly successful, providing George with enough wages to become a paying member of the private Nottingham Subscription Library, with access to the Transactions of the Royal Society as well as to foreign journals.  This simple event changed his life and changed the larger world of mathematics.

Green’s windmill in Sneinton, England.

French Analysis in England

George Green was born in Nottinghamshire, England.  No record of his birth exists, but he was baptized in 1793, which may be assumed to be the year of his birth.  His father was a baker in Nottingham, but the food riots of 1800 forced him to move outside of the city to the town of Sneinton, where he bought a house and built an industrial-scale windmill to grind flour for his business.  He prospered enough to send his eight-year-old son to Robert Goodacre’s Academy located on Upper Parliament Street in Nottingham.  Green was exceptionally bright, and after one year in school he had absorbed most of what the Academy could teach him, including a smattering of Latin and Greek as well as French, along with what simple math was offered.  Once he was nine, his schooling was over, and he took up the responsibility of helping his father run the mill, which he did faithfully, though unenthusiastically, for the next 20 years.  As the milling business expanded, his father hired a mill manager who took part of the burden off George.  The manager had a daughter, Jane Smith, and in 1824 she had her first child with Green.  Six more children were born to the couple over the following fifteen years, though they never married.

Without adopting any microscopic picture of how electric or magnetic fields are produced or how they are transmitted through space, Green could still derive rigorous properties that are independent of any details of the microscopic model.

            During the 20 years after leaving Goodacre’s Academy, Green never gave up learning what he could, teaching himself to read French readily as well as mastering English mathematics.  The 1700’s and early 1800’s had been a relatively stagnant period for English mathematics.  After the priority dispute between Newton and Leibniz over the invention of the calculus, English mathematics had become isolated from continental advances.  This was part snobbery, but also part handicap, as the English school struggled with Newton’s awkward fluxions while the continental mathematicians worked with Leibniz’ more fruitful differential notation.  One notable exception was Brook Taylor, who developed the Taylor series (and who grew up on the opposite end of the economic spectrum from Green, see my Blog on Taylor).  However, the French mathematicians of the early 1800’s were especially productive, including such works as those by Lagrange, Laplace and Poisson.

            One block away from where Green lived stood the Free Grammar School overseen by headmaster John Topolis.  Topolis was a Cambridge graduate on a minor mission to update the teaching of mathematics in England, well aware that the advances on the continent were passing England by.  For instance, Topolis translated Laplace’s mathematically advanced Mécanique Céleste from French into English.  Topolis was also well aware of the work by the other French mathematicians and maintained an active scholarly output that eventually brought him back to Cambridge as Dean of Queen’s College in 1819 when Green was 26 years old.  There is no record whether Topolis and Green knew each other, but their close proximity and common interests point to a natural acquaintance.  One can speculate that Green may even have sought Topolis out, given his insatiable desire to learn more mathematics, and it is likely that Topolis would have introduced Green to the vibrant French school of mathematics.

By the time Green joined the Nottingham Subscription Library, he must already have been well trained in basic mathematics, and membership in the library allowed him to request loans of foreign journals (sort of like Interlibrary Loan today).  With his library membership beginning in 1823, Green absorbed the latest advances in differential equations and must have begun forming a new viewpoint of the uses of mathematics in the physical sciences.  This was around the same time that he was beginning his family with Jane as well as continuing to run his father’s mill, so his mathematical hobby was relegated to the dark hours of the night.  Nonetheless, he made steady progress over the next five years as his ideas took rough shape and were refined until finally he took pen to paper, and this uneducated miller’s son began a masterpiece that would change the history of mathematics.

Essay on Mathematical Analysis of Electricity and Magnetism

By 1827 Green’s free-time hobby was about to bear fruit, and he took out a modest advertisement to announce its forthcoming publication.  Because he was unknown, even to the local academics (Topolis had already gone back to Cambridge), he chose vanity publishing and paid the printing costs out of pocket.  An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism was printed in March of 1828, and there were 51 subscribers, mostly from among the members of the Nottingham Subscription Library, who bought it at 7 shillings and 6 pence per copy, probably out of curiosity or sympathy rather than interest.  Few, if any, could have recognized that Green’s little essay contained several revolutionary elements.

Fig. 1 Cover page of George Green’s Essay

            The topic of the essay was not remarkable, treating mathematical problems of electricity and magnetism, which was in vogue at that time.  As background, he had read works by Cavendish, Poisson, Arago, Laplace, Fourier, Cauchy and Thomas Young (probably Young’s Course of Lectures on Natural Philosophy and the Mechanical Arts (1807)).  He paid close attention to Laplace’s treatment of celestial mechanics and gravitation, which had obvious strong analogs to electrostatics and the Coulomb force because of the common inverse square dependence.

            One radical contribution in Green’s essay was his introduction of the potential function—one of the first uses of the concept of a potential function in mathematical physics—and he gave it its modern name.  Others had used similar constructions, such as Euler [1], D’Alembert [2], Laplace [3] and Poisson [4], but the use had been implicit rather than explicit.  Green shifted the potential function to the forefront, as a central concept from which one could derive other phenomena.  Another radical contribution from Green was his use of the divergence theorem.  This has tremendous utility, because it relates a volume integral to a surface integral.  It was one of the first examples of how measuring something over a closed surface could determine a property contained within the enclosed volume.  Gauss’ law is the most common example of this, where measuring the electric flux through a closed surface determines the amount of enclosed charge.  Lagrange in 1762 [5] and Gauss in 1813 [6] had used forms of the divergence theorem in the context of gravitation, but Green applied it to electrostatics where it has become known as Gauss’ law and is one of the four Maxwell equations.  Yet another contribution was Green’s use of linear superposition to determine the potential of a continuous charge distribution, integrating the potential of a point charge over a continuous charge distribution.  This was equivalent to defining what is today called a Green’s function, which is a common method to solve partial differential equations.
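The utility of the divergence theorem is easy to see in a small numerical sketch of Gauss’ law. The Python example below (my own illustrative construction) integrates the flux of an inverse-square field through spheres of different radii; the radius cancels, and the answer is always 4π times the enclosed source strength, just as the theorem guarantees:

```python
import math

def flux_through_sphere(radius, n=400):
    """Numerically integrate the flux of E = r_hat / r^2 through a sphere.
    On the sphere E.n = 1/r^2 while the area element is r^2 sin(t) dt dphi,
    so the radius cancels and the flux equals 4*pi for any sphere."""
    dt = math.pi / n
    E_dot_n = 1.0 / radius**2
    flux = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        band_area = 2 * math.pi * radius**2 * math.sin(t) * dt  # latitude band
        flux += E_dot_n * band_area
    return flux

# The same answer for any enclosing sphere: the hallmark of the theorem
assert abs(flux_through_sphere(1.0) - 4 * math.pi) < 1e-3
assert abs(flux_through_sphere(10.0) - 4 * math.pi) < 1e-3
```

The fact that only the enclosed source matters, and not the shape or size of the enclosing surface, is exactly the property Green exploited in electrostatics.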

            A subtle contribution of Green’s Essay, but no less influential, was his adoption of a mathematical approach to a physics problem based on the fundamental properties of the mathematical structure rather than on any underlying physical model.  Without adopting any microscopic picture of how electric or magnetic fields are produced or how they are transmitted through space, he could still derive rigorous properties that are independent of any details of the microscopic model.  For instance, the inverse square law of both electrostatics and gravitation is a fundamental property of the divergence theorem (a mathematical theorem) in three-dimensional space.  There is no need to consider what space is composed of, such as the many differing models of the ether that were being proposed around that time.  He would apply this same fundamental mathematical approach in his later career as a Cambridge mathematician to explain the laws of reflection and refraction of light.

George Green: Cambridge Mathematician

A year after the publication of the Essay, Green’s father died a wealthy man, his milling business having become very successful.  Green inherited the family fortune, and he was finally able to leave the mill and begin devoting his energy to mathematics.  Around the same time he began working on mathematical problems with the support of Sir Edward Bromhead.  Bromhead was a Nottingham peer who had been one of the 51 subscribers to Green’s published Essay.  As a graduate of Cambridge he was friends with Herschel, Babbage and Peacock, and he recognized the mathematical genius in this self-educated miller’s son.  The two men spent two years working together on a pair of publications, after which Bromhead used his influence to open doors at Cambridge.

            In 1832, at the age of 40, George Green enrolled as an undergraduate student in Gonville and Caius College at Cambridge.  Despite his concerns over his lack of preparation, he won the first-year mathematics prize.  In 1838 he graduated as fourth wrangler only two positions behind the future famous mathematician James Joseph Sylvester (1814 – 1897).  Based on his work he was elected as a fellow of the Cambridge Philosophical Society in 1840.  Green had finally become what he had dreamed of being for his entire life—a professional mathematician.

            Green’s later papers continued the analytical dynamics trend he had established in his Essay by applying mathematical principles to the reflection and refraction of light.  Cauchy had built microscopic models of the vibrating ether to explain and derive the Fresnel reflection and transmission coefficients, attempting to understand the structure of the ether.  But Green developed a mathematical theory that was independent of microscopic models of the ether.  He believed that microscopic models could shift and change as newer models refined the details of older ones.  If a theory depended on the microscopic interactions among the model constituents, then it too would need to change with the times.  By developing a theory based on analytical dynamics, founded on fundamental principles such as minimization principles and geometry, one could construct a theory that could stand the test of time, even as the microscopic understanding changed.  This approach to mathematical physics was prescient, foreshadowing the geometrization of physics in the late 1800’s that would lead ultimately to Einstein’s theory of General Relativity.

Green’s Theorem and Green’s Function

Green died in 1841 at the age of 47, and his Essay was mostly forgotten.  Four years later a young William Thomson (later Lord Kelvin) was graduating from Cambridge and about to travel to Paris to meet with the leading mathematicians of the age.  As he was preparing for the trip, he stumbled across a mention of Green’s Essay but could find no copy in the Cambridge archives.  Fortunately, one of the professors had a copy that he lent Thomson.  When Thomson showed the work to Liouville and Sturm it caused a sensation, and Thomson later had the Essay republished in Crelle’s journal, finally bringing the work and Green’s name into the mainstream.

            In physics and mathematics it is common to name theorems or laws in honor of a leading figure, even if they had little to do with the exact form of the theorem.  This sometimes has the effect of obscuring the historical origins of the theorem.  A classic example of this is the naming of Liouville’s theorem on the conservation of phase space volume after Liouville, who never knew of phase space, but who had published a small theorem in pure mathematics in 1838, unrelated to mechanics, that inspired Jacobi and later Boltzmann to derive the form of Liouville’s theorem that we use today.  The same is true of Green’s Theorem and Green’s Function.  The form of the theorem known as Green’s theorem was first presented by Cauchy [7] in 1846 and later proved by Riemann [8] in 1851.  The equation is named in honor of Green, who was one of the early mathematicians to show how to relate an integral of a function over one manifold to an integral of the same function over a manifold whose dimension differed by one.  This property is a consequence of the Generalized Stokes Theorem (named after George Stokes), of which the Kelvin-Stokes Theorem, the Divergence Theorem and Green’s Theorem are special cases.

Fig. 2 Green’s theorem and its relationship with the Kelvin-Stokes theorem, the Divergence theorem and the Generalized Stokes theorem (expressed in differential forms)
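As a concrete check of Green’s theorem, the following Python sketch (my own illustrative example) compares the circulation of the field (L, M) = (-y, x) around the unit circle with the area integral of dM/dx - dL/dy = 2 over the unit disk; both come out to 2π:

```python
import math

def circulation(n=20000):
    """Line integral of L dx + M dy around the unit circle for (L, M) = (-y, x),
    using the parameterization x = cos(t), y = sin(t)."""
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t) * dt, math.cos(t) * dt
        total += -y * dx + x * dy
    return total

# Green's theorem: circulation = integral of (dM/dx - dL/dy) = 2 over the disk,
# which is 2 times the area of the unit disk
area_integral = 2 * math.pi * 1.0**2
assert abs(circulation() - area_integral) < 1e-9
```

This is the same bookkeeping in miniature that the divergence theorem performs one dimension up: a boundary integral determines a bulk quantity.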

            Similarly, the use of Green’s function for the solution of partial differential equations was inspired by Green’s use of the superposition of point potentials integrated over a continuous charge distribution.  The Green’s function came into more general use in the late 1800’s and entered the mainstream of physics in the mid 1900’s [9].

Fig. 3 The application of Green’s function to solve a linear operator problem, and an example applied to Poisson’s equation.
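To illustrate the method of Fig. 3 in the simplest setting, here is a Python sketch (my own construction, not taken from Green) that solves the one-dimensional Poisson equation u'' = f on [0, 1] with u(0) = u(1) = 0 by integrating the Green’s function of the operator against the source, in the spirit of Green’s superposition of point potentials:

```python
import math

def G(x, s):
    """Green's function of the operator d^2/dx^2 on [0, 1]
    with boundary conditions u(0) = u(1) = 0."""
    return x * (s - 1.0) if x <= s else s * (x - 1.0)

def solve_poisson(f, x, n=2000):
    """u(x) = integral over [0, 1] of G(x, s) f(s) ds, midpoint rule."""
    h = 1.0 / n
    return sum(G(x, (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

# For the source f(s) = -pi^2 sin(pi s) the exact solution is u(x) = sin(pi x)
f = lambda s: -math.pi**2 * math.sin(math.pi * s)
for xv in (0.25, 0.5, 0.75):
    assert abs(solve_poisson(f, xv) - math.sin(math.pi * xv)) < 1e-4
```

Each value G(x, s) is the response at x to a unit point source at s, so integrating G against f superposes point responses over the continuous source, exactly the idea Green introduced for charge distributions.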

[1] L. Euler, Novi Commentarii Acad. Sci. Petropolitanae , 6 (1761)

[2] J. d’Alembert, “Opuscules mathématiques” , 1 , Paris (1761)

[3] P.S. Laplace, Hist. Acad. Sci. Paris (1782)

[4] S.D. Poisson, “Remarques sur une équation qui se présente dans la théorie des attractions des sphéroïdes” Nouveau Bull. Soc. Philomathique de Paris , 3 (1813) pp. 388–392

[5] Lagrange (1762) “Nouvelles recherches sur la nature et la propagation du son” (New researches on the nature and propagation of sound), Miscellanea Taurinensia (also known as: Mélanges de Turin ), 2: 11 – 172

[6] C. F. Gauss (1813) “Theoria attractionis corporum sphaeroidicorum ellipticorum homogeneorum methodo nova tractata,” Commentationes societatis regiae scientiarium Gottingensis recentiores, 2: 355–378

[7] Augustin Cauchy: A. Cauchy (1846) “Sur les intégrales qui s’étendent à tous les points d’une courbe fermée” (On integrals that extend over all of the points of a closed curve), Comptes rendus, 23: 251–255.

[8] Bernhard Riemann (1851) Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse (Basis for a general theory of functions of a variable complex quantity), Göttingen, Germany: Adalbert Rente, 1867

[9] Schwinger, Julian (1993). “The Greening of Quantum Field Theory: George and I”. arXiv:hep-ph/9310283

Geometry as Motion

Nothing seems as static and as solid as geometry—there is even a subfield of geometry known as “solid geometry”. Geometric objects seem fixed in time and in space. Yet the very first algebraic description of geometry was born out of kinematic constructions of curves as René Descartes undertook the solution of an ancient Greek problem posed by Pappus of Alexandria (c. 290 – c. 350) that had remained unsolved for over a millennium. In the process, Descartes invented coordinate geometry.

Descartes used kinematic language in the process of drawing curves, and he even talked about the speed of the moving point. In this sense, Descartes’ curves are trajectories.

The problem of Pappus relates to the construction of what were known as loci, or what today we call curves or functions. Loci are a smooth collection of points. For instance, the intersection of two fixed lines in a plane is a point. But if you allow one of the lines to move continuously in the plane, the intersection between the moving line and the fixed line sweeps out a continuous succession of points that describe a curve—in this case a new line. The problem posed by Pappus was to find the appropriate curve, or loci, when multiple lines are allowed to move continuously in the plane in such a way that their movements are related by given ratios. It can be shown easily in the case of two lines that the curves that are generated are other lines. As the number of lines increases to three or four lines, the loci become the conic sections: circle, ellipse, parabola and hyperbola. Pappus then asked what one would get if there were five such lines—what type of curves were these? This was the problem that attracted Descartes.

What Descartes did—the step that was so radical that it reinvented geometry—was to fix lines in position rather than merely in length. To us, in the 21st century, such an act appears so obvious as to remove any sense of awe. But by fixing a line in position, and by choosing a fixed origin on that line to which other points on the line were referenced by their distance from that origin, and other lines were referenced by their positions relative to the first line, then these distances could be viewed as unknown quantities whose solution could be sought through algebraic means. This was Descartes’ breakthrough that today is called “analytic geometry”— algebra could be used to find geometric properties.

Newton too viewed mathematical curves as living things that changed in time, which was one of the central ideas behind his fluxions—literally curves in flux.

Today, we would call the “locations” of the points their “coordinates”, and Descartes is almost universally credited with the discovery of the Cartesian coordinate system. Cartesian coordinates are the well-known grids of points, defined by the x-axis and the y-axis placed at right angles to each other, at whose intersection is the origin. Each point on the plane is defined by a pair of numbers, usually represented as (x, y). However, there are no grids or orthogonal axes in Descartes’ Géométrie, and there are no pairs of numbers defining locations of points. About the most Cartesian-like element that can be recognized in Descartes’ La Géométrie is the line of reference AB, as in Fig. 1.


Fig. 1 The first figure in Descartes’ Géométrie that defines 3 lines that are placed in position relative to the point marked A, which is the origin. The point C is one point on the loci that is to be found such that it satisfies given relationships to the 3 lines.

 

In his radical new approach to loci, Descartes used kinematic language in the process of drawing the curves, and he even talked about the speed of the moving point. In this sense, Descartes’ curves are trajectories, time-dependent things. Important editions of Descartes’ Discourse were published in two volumes in 1659 and 1661 which were read by Newton as a student at Cambridge. Newton also viewed mathematical curves as living things that changed in time, which was one of the central ideas behind his fluxions—literally curves in flux.