There is a very real possibility that quantum computing is, and always will be, a technology of the future. Yet if it is ever to be the technology of the now, then it needs two things: practical high-performance implementation and a killer app. Both of these will require technological breakthroughs. Whether this will be enough to make quantum computing real (commercializable) was the topic of a special symposium at the Conference on Lasers and Electro-Optics (CLEO) held in San Jose the week of May 6, 2019.
The symposium had panelists from many top groups working in quantum information science, including Jerry Chow (IBM), Mikhail Lukin (Harvard), Jelena Vuckovic (Stanford), Birgitta Whaley (Berkeley) and Jungsang Kim (IonQ). The moderator, Ben Eggleton (U Sydney), posed the question to the panel: “Will Quantum Computing Actually Work?” My blog this week reports, in part, what they said, and also what was happening in the hallways and the scientific sessions at CLEO. My personal view, after listening and watching this past week, is that the future of quantum computers is optics.
It is either ironic or obvious that the central figure behind quantum computing is Albert Einstein. It is obvious because Einstein provided the fundamental tools of quantum computing by creating both quanta and entanglement (the two key elements of any quantum computer). It is ironic because Einstein turned his back on quantum mechanics, and he “invented” entanglement precisely to argue that quantum mechanics was an “incomplete science”.
The actual quantum revolution did not begin with Max Planck in 1900, as so many Modern Physics textbooks attest, but with Einstein in 1905. This was his “miracle year”, when he published five seminal papers, each of which solved one of the greatest outstanding problems in the physics of the time. In one of those papers he used simple arguments based on statistics, combined with the properties of light emission, to propose — actually to prove — that light is composed of quanta of energy (later named “photons” by Gilbert Lewis in 1926). Although Planck’s theory of blackbody radiation contained quanta implicitly through the discrete actions of his oscillators in the walls of the cavity, Planck vigorously rejected the idea that light itself came in quanta. He even apologized for Einstein, as he was proposing Einstein for membership in the Berlin Academy, saying that Einstein should be admitted despite his grave error of believing in light quanta. When Millikan set out in 1914 to prove experimentally that Einstein was wrong about photons by performing exquisite experiments on the photoelectric effect, he actually ended up proving that Einstein was right after all, which brought Einstein the Nobel Prize in 1921.
In the early 1930s, after a series of intense and public debates with Bohr over the meaning of quantum mechanics, Einstein had had enough of the “Copenhagen Interpretation”. In league with Schrödinger, who deeply disliked Heisenberg’s version of quantum mechanics, he helped pose two of the most iconic problems of quantum mechanics. Schrödinger launched, as a laughable parody, his eponymous “Schrödinger’s Cat”, and Einstein launched what has become known as “entanglement”. Each was intended to show the absurdity of quantum mechanics and drive a nail into its coffin, but each has been embraced so thoroughly by physicists that Schrödinger and Einstein are now given the praise and glory for inventing these touchstones of quantum science. Schrödinger’s cat and entanglement both lie at the heart of the problems and the promise of quantum computers.
Between Hype and Hope
Quantum computing is stuck in a sort of limbo between hype and hope, pitched with incredible (unbelievable?) claims, yet supported by tantalizing laboratory demonstrations. In the midst of the current revival in quantum computing interest (the first wave of interest in quantum computing was in the 1990’s, see “Mind at Light Speed“), the US Congress has passed a house resolution to fund quantum computing efforts in the United States with a commitment of $1B. This comes on the heels of commercial efforts in quantum computing by big players like IBM, Microsoft and Google, and is also partially in response to China’s substantial financial commitment to quantum information science. These acts, and the infusion of cash, will supercharge efforts on quantum computing. But this comes with a real danger of creating a bubble. If there is too much hype, and if the expensive efforts under-deliver, then the bubble will burst, putting quantum computing back by decades. This has happened before, as in the telecom and fiber optics bubble of Y2K that burst in 2001. The optics industry is still recovering from that crash nearly 20 years later. The quantum computing community will need to be very careful in managing expectations, while also making real strides on some very difficult and long-range problems.
This was part of what the discussion at the CLEO symposium centered on. Despite the charge by Eggleton to “be real” and avoid the hype, there was plenty of hype going around on the panel and plenty of optimism, tempered by caution. I admit that there is reason for cautious optimism. Jerry Chow showed IBM’s very real quantum computer (with a very small number of qubits) that can be accessed through the cloud by anyone. They even built a user interface that allows users to write their own quantum programs. Jungsang Kim of IonQ was equally optimistic, showing off their trapped-atom quantum computer with dozens of trapped ions acting as individual qubits. Admittedly Chow and Kim have vested interests in their own products, but the technology is certainly impressive. One of the sharpest critics, Mikhail Lukin of Harvard, was surprisingly also one of the most optimistic. He made clear that talk of scalable quantum computers in the near future is nonsense. Yet he is part of a Harvard-MIT collaboration that has constructed a 51-qubit array of trapped atoms that sets a world record. Although it cannot be used for general-purpose quantum computing, it was used to simulate a complex many-body physics problem, and it found an answer that could not be calculated or predicted using conventional computers.
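Chow’s point that anyone can now write a small quantum program is easy to make concrete. Below is a minimal sketch in plain Python — emphatically not IBM’s actual cloud interface, just the state-vector arithmetic underneath the simplest entangling circuit: a Hadamard gate followed by a CNOT, which produces the Bell state at the heart of the entanglement story above.

```python
# Minimal state-vector sketch of a two-qubit "Hello, World" quantum program:
# a Hadamard on qubit 0, then a CNOT, producing the entangled Bell state
# (|00> + |11>)/sqrt(2). Plain Python only -- this is not IBM's interface,
# just the linear algebra a real quantum processor would enact physically.
import math

def apply(gate, state):
    """Multiply a 4x4 gate matrix into a 4-entry state vector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

h = 1 / math.sqrt(2)
# Hadamard on qubit 0 (identity on qubit 1); basis order |00>,|01>,|10>,|11>
H0 = [[h, 0,  h,  0],
      [0, h,  0,  h],
      [h, 0, -h,  0],
      [0, h,  0, -h]]
# CNOT: qubit 0 is the control, qubit 1 the target (|10><->|11|)
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1, 0, 0, 0]                      # start in |00>
state = apply(CNOT, apply(H0, state))     # run the two-gate circuit
probs = [abs(a) ** 2 for a in state]      # measurement probabilities
# probs is ~[0.5, 0, 0, 0.5]: only |00> and |11> are ever measured,
# and the two qubits' outcomes are perfectly correlated -- entanglement.
print(probs)
```

Simulating two qubits this way takes a 4-entry vector; every added qubit doubles it, which is precisely why large quantum processors outrun classical simulation.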
The panel did come to a general consensus about quantum computing that highlights the specific challenges the field will face as it is called upon to deliver on its hyperbole. They each echoed an idea known as the “supremacy plot”, which is a two-axis graph of number of qubits versus number of operations (also called circuit depth). The graph has one region that is not interesting, one region that is downright laughable (at the moment), and one final region of great hope. The region of no interest lies in the range of large numbers of qubits but low numbers of operations, or large numbers of operations on a small number of qubits. Each of these extremes can easily be calculated on conventional computers and hence is of no practical interest. The region that is laughable is the area of large numbers of qubits and large numbers of operations. No one suggested that this area can be accessed in even the next 10 years. The region that everyone is eager to reach is the region of “quantum supremacy”. This consists of quantum computers that have enough qubits and enough operations that they cannot be simulated by classical computers. When asked where this region begins, the panel consensus was that it would require more than 50 qubits and hundreds or thousands of operations. What makes this so exciting is that there are real technologies that are now approaching this region, and they are based on light.
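The geometry of the supremacy plot boils down to a simple decision rule. The sketch below is purely illustrative: the cutoff numbers are stand-ins for the panel’s rough consensus (more than 50 qubits, hundreds to thousands of operations), not hard boundaries.

```python
# Illustrative sketch of the "supremacy plot": a machine sits in the
# quantum-supremacy region only when BOTH its qubit count and its circuit
# depth (number of operations) exceed rough thresholds. The defaults below
# are stand-ins for the panel's consensus, not hard boundaries.
def supremacy_region(num_qubits, circuit_depth,
                     qubit_threshold=50, depth_threshold=500):
    if num_qubits > qubit_threshold and circuit_depth > depth_threshold:
        # Too big to simulate classically. The far extreme of this region
        # (enormous qubit counts AND enormous depths) is the "laughable"
        # part no one expects to reach within a decade; the near frontier
        # is where the excitement lies.
        return "quantum supremacy"
    # Many qubits with shallow circuits, or deep circuits on few qubits,
    # can easily be handled by conventional computers: not interesting.
    return "classically simulable"

print(supremacy_region(100, 50))    # classically simulable
print(supremacy_region(72, 1000))   # quantum supremacy
```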
Chris Monroe’s Perfect Qubits
The second plenary session at CLEO featured the recent Nobel prize winners Art Ashkin, Donna Strickland and Gerard Mourou, who won the 2018 Nobel prize in physics for laser applications. (Donna Strickland is only the third woman to win the Nobel prize in physics.) The warm-up band for these headliners was Chris Monroe, founder of the start-up company IonQ out of the University of Maryland. Monroe outlined the general layout of their quantum computer, which is based on trapped atoms that he calls “perfect qubits”. Each trapped atom is literally an atomic clock, with all the exacting precision that atomic clocks provide. The quantum properties of these atoms are as perfect as any quantum computation needs, and the limits on the performance of the current IonQ system are set entirely by the classical controls that trap and manipulate the atoms. This is where the efforts of their rapidly growing R&D team are focused.
If trapped atoms are the perfect qubit, then the perfect quantum communication channel is the photon. The photon in vacuum is the quintessential messenger, propagating forever and interacting with nothing. This is why experimental cosmologists can see photons originating from the Big Bang 13.8 billion years ago (actually from about 380,000 years after the Big Bang, when the Universe became transparent). In a quantum computer based on trapped atoms as the gates, photons become the perfect wires.
On the quantum supremacy chart, Monroe plotted the two main quantum computing technologies: solid state (based mainly on superconductors but also some semiconductor technology) and trapped atoms. The challenges to solid state quantum computers come with the scale-up to the range of 50 qubits or more that will be needed to cross the frontier into quantum supremacy. The inhomogeneous nature of solid state fabrication, as perfected as it is for the transistor, is a central problem for a solid state solution to quantum computing. Furthermore, as the number of solid state qubits is scaled up, it is extremely difficult to simultaneously increase the circuit depth. In fact, circuit depth is likely to decrease (initially) as the number of qubits rises, because of the two-dimensional interconnect problem that is well known to circuit designers. Trapped atoms, on the other hand, have the advantages of the perfection of atomic clocks that can be globally interconnected through perfect photon channels, and scaling up the number of qubits can go hand in hand with increased circuit depth — at least in the view of Monroe, who admittedly has a vested interest. But he was speaking before an audience of several thousand highly-trained and highly-critical optics specialists, and no scientist in front of such an audience will make a claim that cannot be supported (although the reality is always in the caveats).
The Future of Quantum Computing is Optics
The state of the art in photonic control of light now rivals the sophistication of electronic control of electrons in circuits. Each is driven by big real-world applications: electronics by the consumer electronics and computer markets, and photonics by the telecom industry. Having a technology attached to a major worldwide market virtually guarantees relatively quick progress and the advantages of economies of scale. The commercial driver is profit, and the driver for the funding agencies (who support quantum computing) is their mandate to foster competitive national economies that create jobs and improve standards of living.
The yearly CLEO conference is one of the top conferences in laser science in the world, drawing in thousands of laser scientists who are working on photonic control. Integrated optics is one of the current hot topics. It brings many of the resources of the electronics industry to bear on photonics. Solid state optics is mostly concerned with quantum properties of matter and its interaction with photons, and this year’s CLEO conference hosted many focused sessions on quantum sensors, quantum control, quantum information and quantum communication. The level of external control of quantum systems is increasing at a spectacular rate. Sitting in the audience at CLEO, you get the sense that you are looking at the embryonic stages of vast new technologies that will be enlisted in the near future for quantum computing. The challenge is that there are so many variants that it is hard to know which of these nascent technologies will win and change the world. But the key to technological progress is diversity (as it is for society), because it is the interplay and cross-fertilization among the diverse technologies that drives each forward, and even technologies that recede away still contribute to the advances of the winning technology.
The expert panel at CLEO on the future of quantum computing punctuated their moments of hype with moments of realism as they called for new technologies to solve some of the current barriers to quantum computers. Walking out of the panel discussion that night, and walking into one of the CLEO technical sessions the next day, you could almost connect the dots. The enabling technologies being requested by the panel are literally being built by the audience.
In the end, the panel had a surprisingly prosaic argument in favor of the current push to build a working quantum computer. It is an echo of the movie Field of Dreams, with its famous line, “If you build it, they will come.” That was the plea made by Lukin, who argued that by putting quantum computers into the hands of users, the killer app that will drive the future economics of quantum computers will likely emerge. You don’t really know what to do with a quantum computer until you have one.
Given the “perfect qubits” of trapped atoms, and the “perfect photons” of the communication channels, combined with the dizzying assortment of quantum control technologies being invented and highlighted at CLEO, it is easy to believe that the first large-scale quantum computers will be based on light.