Clash of the Quantum Titans

After decades of debate, disputes over the mathematical rules governing reality remain unresolved

Schrödinger’s cat was born 75 years ago. Its date of death remains uncertain. Science’s most famous feline remains perpetually both alive and dead, a mythological zombie symbolizing an enduring enigma at the heart of modern physics.

It’s an imaginary cat, of course, invented by Austrian physicist Erwin Schrödinger in 1935 to emphasize the weirdness of quantum mechanics, the mathematical constitution governing the microworld. An experiment could be devised, Schrödinger showed, to put a cat in a box into a live-dead limbo (technically, a “superposition” of states) until somebody looked inside. In the same paper, Schrödinger described another quantum conundrum, known as entanglement, in which measuring one particle seemed to affect the properties of another a great distance away. Entanglement was the feature of quantum physics that Einstein lamented as “spooky action at a distance.”

Semighostly cats and spooky long-distance influences both reflected the radical new view of reality that quantum theory imposed on 20th century physics. Quantum reality is ruled by probabilities. Instead of the rock-solid cause-and-effect world of Newtonian physics, humans occupy a casino universe with an undetermined future, a state of affairs that evoked Einstein’s famous complaint that God does not play dice.

Although both Einstein and Schrödinger helped give birth to quantum physics, they believed something was seriously amiss about it. Yet its predictions have always come true, no matter how absurd. Experiments have confirmed the Schrödinger-cat superposition, for instance, with atoms in multiple locations or simultaneous high- and low-energy states playing the role of the cat. And entanglement, though it remains mysterious, is no longer ghostly. It is now regarded as an information and communication resource, sort of the way the electromagnetic spectrum offers channels for TV stations and cell phone signals. Entanglement provides for an entirely novel type of communication involving “quantum” information, useful for sending secret codes, enabling computers to perform otherwise impossible calculations and promising new kinds of messaging networks.

In 1935, Austrian physicist Erwin Schrödinger developed a cat-killing thought experiment to illustrate his annoyance with one of the oddest features of quantum mechanics. Illustration: Michael Morgenstern

“This is a huge effort worldwide now to develop quantum information,” says Anton Zeilinger, a leading quantum experimentalist. “It is probably the fastest-expanding subfield of physics right now … not only because of the technical promise but also because of the fundamental questions still being very interesting.”

Those same fundamental questions that concerned Einstein and Schrödinger continue to disturb many physicists today. What quantum mechanics really means, where it ultimately comes from, why it denies the cause-and-effect certainty of traditional physics are all questions that haunt the deepest scientific thinkers — and divide them almost as badly as 21st century political parties. Physicists simply can’t agree on how to interpret quantum physics. They fight like cats and dogs over it.

“There are different views,” says physicist Nino Zanghì of the University of Genoa in Italy. “And the different views are defended by sensible people.”

At the heart of these disputes is the very nature of reality itself, and whether quantum physics is the last word on how to describe it. Zeilinger, of the University of Vienna, advocates the standard quantum view of reality’s fuzziness. “It turns out that the notion of a reality ‘out there’ existing prior to our observation … is not correct in all situations,” he points out.

Yet some physicists cling to the prejudice that cause-and-effect determinism will someday be returned to its privileged status, and physics will restore objectivity to reality.

“I basically understand why people have this position,” Zeilinger responds. “But the evidence is overwhelming that this approach would not succeed.”

Both particle and wave

That evidence has been accumulating for more than a century. Quantum theory’s first success came in 1900, when Max Planck invented it to solve a problem about the colors of light emitted from hot objects. Energy from such objects had to be emitted in packets (called quanta) to explain the observed colors, Planck determined. In 1905 Einstein used quantum reasoning to explain the emission of electrons from certain substances exposed to light (the photoelectric effect). He concluded that light itself was composed of particles (later called photons), but few believed him at the time, the wave nature of light having been conclusively established a century earlier.
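In symbols, which the article itself does not write out, Planck’s packets and Einstein’s photoelectric argument amount to two standard relations:

```latex
% Planck's quantum hypothesis: light of frequency \nu carries energy only in
% whole packets of size h\nu, where h is Planck's constant.
E = h\nu
% Einstein's photoelectric relation: an ejected electron's maximum kinetic energy
% is one light quantum minus the work W needed to free it from the metal.
E_{\mathrm{kin}} = h\nu - W
```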

In 1913, though, the quantum concept gained credibility when the Danish physicist Niels Bohr used it to explain the colors emitted by hydrogen atoms. A dozen years later, Bohr’s German protégé Werner Heisenberg (and, shortly thereafter, Schrödinger himself) showed how to apply the quantum concept to more complicated atoms, and quantum mechanics was born.

One small snag remained: Heisenberg’s math treated an atom’s electrons as particles; Schrödinger’s equation described waves. Both approaches produced identical results, though, and Heisenberg’s professor, Max Born, then showed that the wave math could be interpreted as a measure of probabilities for a particle’s properties. At the same time, new experiments revealed that electrons did indeed sometimes behave like waves — and for that matter, light sometimes behaved like particles, just as Einstein had declared two decades earlier.
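Born’s probabilistic reading of Schrödinger’s wave has a compact textbook statement, not quoted in the article:

```latex
% Born's rule: the wave function \psi does not picture the electron itself;
% its squared magnitude gives the odds of finding the electron at position x.
P(x) = |\psi(x)|^2
```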

Out of this morass emerged two principles. One was Bohr’s principle of complementarity. No one picture of nature provides a complete description of quantum phenomena, Bohr asserted. Rather, mutually exclusive but complementary pictures must be invoked, depending on the experimental situation. In other words, if you design an experiment to see if electrons are waves, you get waves. If you design an experiment to test whether electrons are particles, you get particles.

Bohr’s philosophical principle was complemented by the hard mathematics of Heisenberg’s uncertainty principle, which declared that you couldn’t precisely measure some pairs of properties simultaneously. (You could not, for example, determine both the exact position and the momentum of an electron in any one experiment.) Heisenberg’s limit had nothing to do with human capabilities; an electron simply does not possess a well-defined position or momentum before a measurement. Unobserved, an electron exists in multiple locations at once, just as Schrödinger’s cat is both alive and dead until somebody opens the box. All physics can provide are the odds of spotting the electron in any given place (or finding an expired feline).
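Heisenberg’s limit has an exact modern form, with \hbar standing for Planck’s constant divided by 2\pi:

```latex
% The uncertainty principle: the spread in position (\Delta x) times the spread
% in momentum (\Delta p) can never shrink below a fixed quantum of fuzziness.
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```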

Both Bohr’s complementarity and Heisenberg’s uncertainty were proposed in 1927. For the next several years, Einstein and Bohr engaged in a titanic debate over the implications — Einstein attempting to show that the uncertainty principle had exceptions, Bohr refuting Einstein’s arguments at every turn. Finally Einstein acknowledged that the uncertainty principle was unavoidable with respect to what could be observed. But he began to believe that the observable was not all there was to reality.

Throughout their lives, Albert Einstein and Niels Bohr (shown in 1927 at a conference in Brussels) disputed the implications of quantum mechanics for the nature of reality. Photo: Paul Ehrenfest, courtesy AIP Emilio Segrè Visual Archives, Ehrenfest Collection

In 1935, Einstein and two collaborators proposed a thought experiment designed to illustrate a mismatch between reality itself and its quantum description. Suppose, they wrote, that two particles interacted with each other and then flew far apart. Quantum math describes the pair as a single system, such that measuring the momentum of particle A would instantly reveal the momentum of particle B. Similarly, measuring particle A’s position instead would instantly reveal the position of particle B. (Other properties, such as spin or polarization, would be similarly linked.)

Einstein and friends (Boris Podolsky and Nathan Rosen) did not deny that a measurement on particle A, no matter how distant, could provide information about particle B. But it seemed to them that if either the position or momentum of particle B could be determined, it must have possessed both: There should be no way that an action at one place could change the “reality” at another place far away. Yet the uncertainty principle allowed measuring only one property or the other, not both. Therefore quantum mechanics must be an incomplete theory. There has to be something more.
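In the spin version of the EPR setup (a later simplification due to David Bohm, rather than the 1935 paper’s own position-and-momentum example), the “single system” that the quantum math describes looks like this:

```latex
% An entangled pair: neither particle A nor particle B has a definite spin of its own;
% the math specifies only the anticorrelated state of the pair as a whole.
|\psi\rangle = \frac{1}{\sqrt{2}} \Big( |{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \Big)
```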

Einstein’s thought experiment inspired Schrödinger’s paper describing the spooky entanglement. It also inspired a critical reply from Bohr. He declared that it made no sense to ascribe “reality” to something before it was measured. In any actual experiment, he pointed out, you could measure only momentum or position, not both. Position and momentum could not be simultaneously real.

In more picturable terms, the dispute boils down to something like this: Suppose a quantum leather factory in Las Vegas sends out two boxes — one to Sue in Ohio and one to Jim in Texas. Sue opens her box and finds a left-handed glove; she then knows instantly that Jim has received a right-handed glove. But suppose Sue had opened the box and found a right-footed shoe. Then she would know for sure that Jim now possessed a left shoe.

Einstein believed that the shoes or gloves were inside the boxes from the time they left the factory. Bohr believed that both boxes contained formless leather until Sue opened her box (or Jim — it doesn’t matter who goes first). Instead of causes and effects all operating at specific locations, Bohr’s view implies a holistic aspect of reality, with “nonlocal” influences defying the ordinary limits of space and time.

For decades afterward, the Einstein-Podolsky-Rosen paper remained irrelevant to the practice of physics, since most physicists accepted Bohr’s response or believed no real-life experiment could settle the dispute. But then in 1964, John Bell, a physicist at the CERN laboratory in Geneva, analyzed a variant of the EPR idea. Bell showed that, in principle, experiments could in fact distinguish between quantum mechanics and theories that proposed additional “hidden” features that would restore locality to reality. Twenty years later, various experiments had been conducted to test Bell’s math, and quantum mechanics, like Perry Mason, won every time.
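The version of Bell’s math that most of those experiments actually test is the CHSH inequality, a 1969 reworking by Clauser, Horne, Shimony and Holt that the article does not spell out:

```latex
% E(a,b) is the measured correlation between outcomes when one detector uses
% setting a and the other uses setting b.
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local hidden-variable theory obeys |S| \le 2; quantum mechanics allows
% values up to 2\sqrt{2} \approx 2.83, and experiments side with quantum mechanics.
```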

Einstein was no Hamilton Burger, though, and his appeal is still posthumously pending. Even though the experiments all turn out exactly as Bohr would have predicted, sympathy for Einstein’s position has been increasing in recent years, along with growing antipathy toward Bohr.

Some physicists claim (simultaneously) that it’s impossible to understand what Bohr meant and that he was wrong (perhaps a curious example of complementarity in itself). Bohr had repeatedly insisted that experiments had to specify a clear distinction between the measuring apparatus, described in ordinary nonquantum language, and the quantum system to be observed — say, electrons or photons. Modern anti-Bohrians argue, though, that quantum mechanics applies to experimental instruments (and everything else), so any such division of the world into classical instruments and quantum phenomena is arbitrary and illogical.

As Maximilian Schlosshauer and Kristian Camilleri point out in a recent paper, Bohr understood very well that instruments also obeyed quantum mechanics. But he insisted that distinguishing the instrument from the object of observation was essential to doing science. Otherwise everything becomes a mixed-up mess of quantum fuzz, with no way to find out anything definite. “If everything is just gobbled up by ever-spreading entanglement and homogenized into one gargantuan maelstrom of nonlocal quantum holism, and if we can’t conceptually isolate and localize a system and regard it as causally independent from some (potentially distant) other system, then there are no systems that could be the object of empirical knowledge,” Schlosshauer and Camilleri write in a paper posted online in September at arXiv.org. To get knowledge about a quantum system, Bohr averred, an observer had to probe it with an external apparatus and report the results in ordinary language.

Bohr’s view prevailed for decades. But two nagging issues continued to plague physicists and philosophers alike: How one single reality emerges from the multiple quantum possibilities, and how quantum physics applies to the whole universe, with no outside observer to conduct an experiment.

One early attempt to address those issues was developed by Hugh Everett III at Princeton University in the 1950s. Everett took quantum math at face value — if it contains multiple possible realities, he reasoned, then all the possible realities exist. When you make an observation, you get one result, of course, not a cloud of multiple quantum possibilities. Another of those possibilities actually does occur, however, in a sort of parallel universe occupying the same space. You just somehow split into different versions of yourself, each unaware of the other, occupying separate realms of existence with different experimental outcomes.

John Wheeler, Everett’s thesis adviser, initially supported the multiple-reality idea but later dismissed it as requiring “too much metaphysical baggage.” Nevertheless Everett’s view, later designated the “Many Worlds” interpretation of quantum mechanics, inspired further efforts to address the issues that Bohr’s approach had not resolved. One popular approach involved a peculiar phenomenon called quantum decoherence.

Decoherent reality

Decoherence offers a simple solution to the paradox of Schrödinger’s cat. In principle, the quantum description of the cat comprises both life and death, just as a rock could, in principle, simultaneously occupy different locations. But air molecules and dust particles and light beams bounce off rocks. After a fraction of a second, only one location for the rock will be consistent with the paths of the deflected particles — a coherent wave describing multiple possibilities has thus “decohered” into just one outcome. Something similar would happen to the cat: Environmental interactions guarantee that the cat is either dead or alive before anybody looks in the box.
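A minimal numerical sketch of that idea follows; it is an illustration only, and the survival factor gamma is an assumed toy value rather than a physical rate. Each environmental interaction shrinks the “both at once” terms of the quantum description, leaving an ordinary either/or mixture.

```python
import numpy as np

# Toy decoherence model: an equal superposition of two states ("alive"/"dead",
# or two rock locations) loses its off-diagonal coherence terms as the
# environment repeatedly scatters off the system.
psi = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of two states
rho = np.outer(psi, psi)                  # density matrix; off-diagonal terms mean "both at once"

gamma = 0.5                               # assumed fraction of coherence surviving each scattering event
for _ in range(20):                       # 20 environmental interactions
    rho[0, 1] *= gamma
    rho[1, 0] *= gamma

print(np.round(rho, 6))                   # [[0.5 0.] [0. 0.5]]: a plain 50/50 either/or mixture
```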

One especially elaborate variant of the decoherence theme has been developed over the past two decades by physics Nobel laureate Murray Gell-Mann and his collaborator James Hartle. In their approach, multiple realities in the quantum fog condense into various chains of events, each chain approximately observing the cause-and-effect rules of classical physics. In other words, people perceive the world as classical and predictable, rather than quantum and probabilistic, because they occupy a realm where predictable patterns have decohered from the coherent cloud of quantum possibilities. Each such chain of events would constitute a “consistent history.” More than one consistent history might emerge from the quantum cloud, similar to the many worlds of Everett’s interpretation.

Using the math describing decoherence, physicists can calculate the probabilities of the various consistent histories, says Gell-Mann, of the Santa Fe Institute in New Mexico. He and Hartle, of the University of California, Santa Barbara, emphasize that these consistent histories arise naturally in any “coarse-grained” view of reality. Quantum weirdness persists in the fine-grained view of nature at the subatomic scale, but decoheres into ordinary physics in the coarser-grained realm of macroscopic objects. It’s much like the way coarse-grained (and predictable) properties of a gas, like temperature and pressure, emerge from the unpredictable and unobservable behavior of tiny molecules bouncing off each other.

Viewed in this way, quantum physics can accommodate an entire universe with no reference to an outside observer — consistent histories decohere from within. “Our way of doing it … for a given initial condition of the universe, as well as a given unified theory, would give predictions for the probabilities of alternative coarse-grained decoherent histories of the universe,” Gell-Mann says.

Gell-Mann and Hartle’s approach is similar in some respects to Everett’s, and it incorporates Bohr’s view as well. Bohr’s analyses involved measurements by observers experimenting on systems within the universe. That approach wasn’t wrong, Gell-Mann says — just not general enough to deal with the universe as a whole.

“It’s correct in a sense, but it can’t be general, it can’t be the deep way to look at quantum mechanics,” Gell-Mann observed in a 2009 interview. “It’s a special case.… If you look at 13 billion years of the history of the universe, you can’t describe it that way until very recently.” 

Gell-Mann and Hartle’s view also offers natural explanations for the counterintuitive results of some quantum experiments. Multiple possible outcomes of such experiments are simply different results in different histories.

Consider a typical entanglement paradox similar to the Einstein-Podolsky-Rosen experiment. Two entangled photons fly away from a common source toward distant laboratories set up to measure polarization — the orientation of the light’s vibrations. When photon A arrives at its destination, detectors can record either a horizontal or vertical alignment. The experiment can be set up so that if “horizontal” is the answer for photon A, then photon B, no matter how far away, will be horizontal also. If the first photon was vertical, then so is the second. But no magic message was instantly sent from one photon to the other. In the Gell-Mann–Hartle view, one measurement simply reveals which consistent history you are in. If your measurement of photon A is vertical, you are in the history where both photons turn out to be vertical. In another consistent history, the photons are both horizontal.
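A toy simulation of that reading, for the matched-angle case just described, is sketched below. It illustrates the “which history am I in” language rather than Gell-Mann and Hartle’s actual formalism, and it says nothing about mismatched detector angles, which is where Bell’s theorem rules out simple preset answers.

```python
import random

def run_trial():
    # Pick which consistent history this run belongs to: both photons vertical
    # or both photons horizontal. Nothing travels between the detectors.
    history = random.choice(["vertical", "horizontal"])
    photon_a = history   # measuring A simply reveals which history we are in ...
    photon_b = history   # ... and B's outcome is already settled within that history
    return photon_a, photon_b

trials = [run_trial() for _ in range(10_000)]
print(all(a == b for a, b in trials))    # True: outcomes always agree, with no message sent
```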

“Those are two different branches of history — two different alternative coarse-grained decoherent kinds of history,” says Gell-Mann. “They have nothing to do with each other.… If it had been explained that way to Einstein, he might have accepted it.”

Desperate measures

Or maybe Einstein was right in the first place. Paul Dirac, one of the pioneers of quantum mechanics, considered that to be a possibility. “I think that it is quite likely that at some future time we may get an improved quantum mechanics in which there will be a return to determinism,” Dirac said in a 1975 public lecture. “But such a return to determinism could only be made at the expense of giving up some other basic idea which we now assume without question.”

Few experts today believe that a future quantum physics will restore determinism, and most efforts in that direction meet dead ends when confronted with experimental results. But one among those yearning for a return to determinism — the Dutch Nobel physics laureate Gerard ’t Hooft — has looked more deeply into the issue than most and sees some hope. He accepts the validity of experiments showing that ordinary, local hidden variables cannot explain quantum outcomes deterministically. To restore cause and effect, ’t Hooft believes, will require digging even deeper into reality than quantum mechanics has so far penetrated.

When James Clerk Maxwell developed the idea of electromagnetic fields in the mid-19th century, he pictured reality in the microcosm as a mesh of tiny gears that transmitted forces described by deterministic equations. Suppose, says ’t Hooft, that the reality underlying experience is not so much like gears and switches, but more like the bits and bytes processed by computers. Information on this subquantum level, at the root of reality, might be processed deterministically after all, just at a level beyond the reach of any conceivable mathematical description.

“The general consensus is that the amount of information that nature can store in a very tiny volume of space and time is gigantic, it is so tremendously big that there is no hope whatsoever to follow this thing with any rigorous mathematics at all,” ’t Hooft said in July at the Euroscience Open Forum conference in Turin, Italy.

But mathematical tools are available to deal with such situations — namely, the math of probability and statistics. In fact, ’t Hooft’s investigations suggest that statistical equations describing this world of information too small to be seen would reproduce the features of quantum mechanics, including superposition and entanglement. But as Dirac suspected, achieving this return to determinism would come at a cost — in this case, abandoning the idea that particles and fields are ultimately real.

“The particles and fields are very, very crude statistical descriptions,” ’t Hooft says. “Those particles and those fields are not true representatives of what’s really going on.”

Zeilinger, on the other hand, does not expect the future to return physics to the past. It is more likely, he suggested at the Turin conference, that an advanced theory going beyond today’s quantum mechanics will be even more counterintuitive.

“In the end of the day,” he says, “the situation is such that when we ever succeed — and I think we will succeed to build a new theory even beyond quantum physics — when we have the new theory, people who attack quantum theory today … would love to have quantum mechanics back.”

Tom Siegfried is a contributing correspondent. He was editor in chief of Science News from 2007 to 2012 and managing editor from 2014 to 2017.
