Entanglement’s weirdness leads to new view on emergence of spacetime
Second of two (entangled) parts.
Until his death in 1955, Albert Einstein hoped that someday science would do away with what he called spooky action at a distance.
His concern was quantum entanglement. Two entangled particles, even after traveling very far from one another, share a mysterious quantum connection. Measuring one tells you instantly what the outcome will be of making the same measurement on the other. If Alice’s quantum coin turned up tails, for instance, then she knows that Bob’s coin will show heads, wherever he is, whenever he looks at it.
To Einstein, that seemed possible only if the distant particle had acquired its property when the particles last encountered each other. (In other words, the reality of the property was determined “locally.”) So Bob’s coin would be locked into “heads only” when his and Alice’s coins parted ways. In Einstein’s view, both coins (or particles) possessed definite properties for their entire trip. But the math of quantum mechanics demands otherwise. In the quantum realm, particles do not possess precise properties until measured. A quantum particle’s spin axis, for instance, points neither up nor down until you measure it (like a spinning coin that is neither heads nor tails until you catch it).
But even if measuring a particle establishes a property that didn’t previously exist, Einstein wondered, how could measuring the spin of one particle here tell you what the spin of the other one will be far away? He could not fathom that a property measured at one location could suddenly cause a property to come into existence for a particle somewhere else. He concluded that quantum mechanics was simply incomplete. Einstein believed that a deeper theory, incorporating unobservable “elements of reality,” would explain the mysterious long-distance entanglement connection.
But Einstein’s intellectual adversary, the Danish physicist Niels Bohr, argued that nature does not conceal such a theory. Even before entanglement had been articulated as an issue in quantum physics, Bohr had perceived that Einstein’s desire to understand cause-and-effect in terms of spacetime pictures was doomed. In the quantum realm, you cannot construct both a cause-and-effect account of a process and a spacetime description of that process. Those views are mutually exclusive; one is complementary to the other.
“The very nature of the quantum theory,” Bohr said, “forces us to regard the space-time co-ordination and the claim of causality … as complementary but exclusive features.”
So when you ask how a measurement of Particle A (at one point in spacetime) “causes” something to happen to faraway Particle B, you are mixing up a spacetime description with a cause-and-effect description. That’s precisely what quantum physics does not allow, Bohr asserted.
And all the experimental evidence, including several new experiments closing possible loopholes in earlier tests, supports his view. Understanding entanglement, it now seems, will require an even more radical insight into the interplay of space, time and reality than Einstein had imagined. It will take more than a new, more comprehensive theory. It will require a new perspective on the foundations of existence itself.
Dashing Einstein’s hope
While Einstein was alive, his hope for a deeper theory seemed reasonable. As long as such a theory made all the same predictions for experimental outcomes that quantum mechanics did, then no experiment could contradict Einstein’s intuition. But in 1964 the Northern Irish physicist John Bell devised a test that actually could tell the difference between standard quantum theory and one that explained away the spooky part. In the decades since, numerous experiments have exploited Bell’s insight to demonstrate that quantum entanglement is every bit as spooky as Einstein feared: There are no hidden bits of reality that act “locally” to predetermine quantum measurement results.
But for some reason physicists and philosophers (and a lot of other people) can’t quit arguing over what it all means. About the only thing everybody agrees on is that in all entanglement research the two experimenters are named Alice and Bob.
Typical entanglement experiments measure the polarization (orientation of the vibrations) of photons (particles of light). Pairs of entangled photons are repeatedly created at one site and sent (one from each pair) to distant experimenters (Alice and Bob). Without communicating, Alice and Bob each randomly choose between two possible orientations for a polarization filter. After numerous trials, Alice and Bob can compare their results.
Bell’s theorem proved that there’s a limit to how often Alice’s and Bob’s results would match if nature provides preexisting values for any measurable property (as Einstein believed). The standard quantum view (particle properties are not determined until they are measured) predicts that Alice and Bob’s matches can exceed that limit. And that’s what always happens.
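The article doesn’t spell out the arithmetic behind Bell’s limit, but the standard CHSH form of his theorem can be checked in a few lines. The sketch below is a simplified illustration, assuming a maximally entangled photon pair with the textbook polarization correlation E(a, b) = cos 2(a − b); the specific angles and variable names are choices for this example, not details from the article. It enumerates every deterministic local hidden-variable strategy and compares the best any of them can do against the quantum prediction.

```python
import itertools
import math

# Quantum prediction for a maximally entangled photon pair:
# correlation E(a, b) = cos(2*(a - b)) for polarizer angles a, b (radians).
def E_quantum(a, b):
    return math.cos(2 * (a - b))

# CHSH combination of four correlations: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
def chsh(E, a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Angle settings (in radians) that maximize the quantum violation.
deg = math.pi / 180
a, ap, b, bp = 0 * deg, 45 * deg, 22.5 * deg, 67.5 * deg

S_quantum = chsh(E_quantum, a, ap, b, bp)

# Local hidden-variable models: each particle pair carries preset
# answers (+1 or -1) for every possible setting.  Enumerate all 16
# deterministic strategies; none can push |S| above 2.
S_local = max(
    abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
    for A1, A2, B1, B2 in itertools.product([+1, -1], repeat=4)
)

print(f"local realist bound: {S_local}")          # 2
print(f"quantum prediction:  {S_quantum:.3f}")    # 2.828
```

The quantum prediction of 2√2 ≈ 2.828 exceeds the local realist bound of 2, which is exactly the excess matching that the experiments observe.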
Until recently, though, Einstein sympathizers could point to loopholes in the experiments. Perhaps nature was concealing some secret signaling system that alerted the particles (or the devices detecting the particles) in advance, so they could coordinate their results. Or perhaps the detectors (not being perfectly efficient) recorded particles selectively, somehow conspiring to preserve the illusion of spookiness.
But new experiments have closed those loopholes. Detectors have been set up far enough away so that no secret signal (respecting the speed of light limit) could have been transmitted fast enough to affect the results. And high-efficiency detectors (or high-efficiency experimental design) have eliminated the loophole of selective sampling.
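The timing argument behind the first loophole is simple arithmetic. The numbers below are illustrative choices for this sketch, not figures taken from the experiments: if the detectors sit 1.3 kilometers apart, any light-speed signal between them needs a few microseconds, so each trial must finish faster than that.

```python
# Locality-loophole arithmetic (illustrative numbers, not from the papers).
C = 299_792_458  # speed of light in vacuum, m/s

separation_m = 1_300                 # hypothetical detector separation (1.3 km)
light_travel_s = separation_m / C    # minimum time for any signal to cross

# If choosing a setting and recording a result both complete in less
# time than this, no light-speed signal could coordinate the outcomes.
print(f"a light-speed signal would need {light_travel_s * 1e6:.2f} microseconds")
```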
By showing that Bell’s matching limits are violated, these experiments rule out theories that preserve “local realism.”
As some reports described them, the new experiments therefore establish “nonlocality” as a feature of nature. And some physicists do describe the results in that way. But whether the principle of locality is actually violated — implying nonlocality — is a complicated question. As physicist Leonard Susskind (with Art Friedman) writes in Quantum Mechanics: The Theoretical Minimum, “Nonlocality is surprisingly difficult to even define.”
In fact, Susskind writes, Einstein’s notion and Bell’s notion of nonlocality seem to differ. But in any case, in entanglement experiments no “influence” is being transmitted from one particle to another when one is measured. It’s just that measuring one provides knowledge about what the outcome of measuring the other will be.
It’s especially wrong to suggest, as some accounts have, that one particle’s measurement “instantaneously” determines the other’s. There is no such thing as “instantaneous” for particles separated in space. As Einstein’s own relativity theory demonstrates, from different points of view Alice’s measurement could appear to happen either before or after Bob’s.
So if no signal is telling these particles how to coordinate their results, what’s going on? And why so much confusion about it? The answer involves the great difficulty in comprehending how profoundly quantum physics has reconstructed ordinary human conceptions of reality.
Common sense, for instance, suggests that the whole is the sum of its parts. But entanglement illustrates a quantum reality in which a known whole consists of unknown parts. For an entangled pair of particles, the quantum math offers complete information about the whole entangled system, but provides no information about either entangled particle.
“In quantum mechanics, we can know everything about a composite system — everything there is to know, anyway — and still know nothing about its constituent parts. This is the true weirdness of entanglement, which so disturbed Einstein,” Susskind writes.
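Susskind’s point can be made concrete with a short calculation. The sketch below is an illustration using von Neumann entropy (zero bits means complete knowledge), not anything from the article: it builds the entangled two-particle state, confirms that the description of the whole pair is perfectly definite, and shows that either particle on its own is described by the maximally mixed state, about which nothing is known.

```python
import numpy as np

# Entangled two-particle ("Bell") state: (|00> + |11>) / sqrt(2).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Density matrix of the whole pair: a pure state, i.e. complete knowledge.
rho = np.outer(psi, psi)

def entropy_bits(r):
    """Von Neumann entropy in bits; 0 = everything known, 1 = nothing known
    (for a single two-state particle)."""
    evals = np.linalg.eigvalsh(r)
    evals = evals[evals > 1e-12]          # drop numerically zero eigenvalues
    return max(0.0, float(-np.sum(evals * np.log2(evals))))

# Partial trace over particle B leaves the description of particle A alone.
rho4 = rho.reshape(2, 2, 2, 2)        # indices: (A, B, A', B')
rho_A = np.einsum('ajbj->ab', rho4)   # sum over B's indices

print(round(entropy_bits(rho), 6))    # 0.0 -> whole pair: fully known
print(round(entropy_bits(rho_A), 6))  # 1.0 -> one particle: maximally unknown
```

The whole has zero entropy while each part has maximal entropy: complete information about the composite system, none about its constituents.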
Whether this state of affairs constitutes “nonlocality,” though, depends on what nonlocality means. In one of the new entanglement papers (in Nature), Bas Hensen of Delft University of Technology and collaborators wrote that “local” means that “physical influences do not propagate faster than light.” But in the Bell experiments, as Susskind emphasizes, there is no signal from one particle to the other; Bob’s quantum description of his particle remains unchanged even after Alice measures it. Bob will know Alice’s result (or vice versa) only after communicating with her via ordinary slower-than-light channels, like texting.
So despite some news accounts to the contrary, there is no faster-than-light signaling in entanglement. In violating Bell’s idea of local realism, it’s the “realism” part that quantum mechanics challenges. As Hensen and colleagues define it, realism means that “physical properties are defined prior to and independent of observation.” Apparently they’re not.
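The no-signaling claim above can also be checked directly: averaged over Alice’s possible outcomes, Bob’s local description of his particle is exactly the same no matter which measurement she performs, so her choice transmits nothing to him. The function name and angles in this sketch are hypothetical illustrations, not from the article.

```python
import numpy as np

# Bell pair (|00> + |11>)/sqrt(2); Bob holds the second particle.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi)

def bob_state_after_alice_measures(theta):
    """Bob's density matrix, averaged over Alice's two possible outcomes
    when she measures her particle along angle theta (hypothetical helper)."""
    c, s = np.cos(theta), np.sin(theta)
    up, down = np.array([c, s]), np.array([-s, c])   # Alice's two outcomes
    rho_B = np.zeros((2, 2))
    for v in (up, down):
        P = np.kron(np.outer(v, v), np.eye(2))       # project Alice's side only
        post = (P @ rho @ P).reshape(2, 2, 2, 2)     # indices: (A, B, A', B')
        rho_B += np.einsum('iaib->ab', post)         # trace out Alice's particle
    return rho_B

# Whatever angle Alice chooses, Bob's local description never changes:
for theta in (0.0, 0.4, 1.1):
    print(np.round(bob_state_after_alice_measures(theta), 6))
# Every printout is the same maximally mixed state [[0.5, 0], [0, 0.5]].
```

Bob sees the identical 50/50 mixture regardless of Alice’s choice; only after she tells him her result, by ordinary slower-than-light means, does his knowledge change.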
Murray Gell-Mann, the Nobel laureate physicist famous for conceiving (and naming) the particles known as quarks, has worked extensively on the foundations of quantum mechanics in recent decades. In his view much of the confusion stems from trying to apply classical (prequantum) physics to a quantum mechanical system — or a quantum mechanical universe. Experiments violating the Bell matching limit, Gell-Mann once told me, have no need for nonlocality.
“It’s not nonlocal at all,” he said. “There’s nothing being propagated from this detector to that detector, it’s just that if … they’re entangled, if this polarization over here is determined, then the polarization over there is determined, because of the entanglement…. It’s got nothing to do with something passing from here to there. There’s nothing passing.”
In Gell-Mann’s view, the problem stems from forgetting that the universe as a whole is itself quantum mechanical. That requires that the math describing it incorporate multiple possible “realities” — a vast number of possible chains of events, each chain being a “consistent history” that approximates a reality that (if you don’t look too closely) seems classical. Whether all these “quasiclassical” realms of existence actually exist or not remains a lively argument. Perhaps they constitute multiple real universes, or perhaps they just catalog all the possibilities that nature chooses from. Either way, quantum entanglement merely reflects the fact that different measurement results occur in different branches of quantum histories. When Alice measures her particle, she finds out which branch of history she is in, and therefore also knows what Bob’s result will be.
“If it had been explained that way to Einstein,” Gell-Mann said, “he might have accepted it.”
Whether accepting it or not, Einstein would have been intrigued. And he would have been more intrigued by some of the latest research attempting to understand spacetime itself — Einstein’s specialty — in terms of quantum entanglement. Physicists have recently been pursuing a line of reasoning by which spacetime is “constructed” from networks of entangled quantum states. Somehow, the quantum states describing basic particles generate webs of entanglement that correspond in some way to the geometry of spacetime itself.
“The intrinsically quantum phenomenon of entanglement appears to be crucial for the emergence of classical spacetime geometry,” Mark Van Raamsdonk of the University of British Columbia argued in a 2010 paper.
If this approach, which many physicists regard as promising, proves fruitful, then the weirdness of entanglement has an obvious explanation. Entanglement can’t be visualized in spacetime terms because entanglement precedes spacetime. You need entanglement to have spacetime — it is somehow more fundamental than spacetime. So you cannot understand entanglement as something that happens within spacetime.
This strikes me as very close to Bohr’s original insight, first articulated in 1927, that a spacetime description and a cause-and-effect description are mutually exclusive. Almost nine decades later, physicists may be on the verge of understanding why those two views are incompatible, and may soon be able to show that entanglement itself provides the resolution of its own mystery.
Follow me on Twitter: @tom_siegfried