Uses and abuses of decoherence
Featuring Saturn’s moon Hyperion, particles of cosmic dust and (once again) Schrödinger’s cat
Let me begin by re-stating the two rules that lay down how quantum mechanics works.
As you may remember from earlier posts, quantum mechanics provides us with a box of mathematical tools. These tools — state vectors, wave functions, density operators — serve to assign probabilities to possible measurement outcomes on the basis of actual measurement outcomes. So suppose that we want to calculate the probability of a particular outcome of a measurement M₂, given the actual outcome of an earlier measurement M₁. To do so, we must choose a sequence of measurements which may be made in the meantime, and we must apply either of the following rules, whichever is appropriate.
[Rule 1] If the intermediate measurements are made (or if it is possible to find out what their outcomes would have been if they had been made), we must square the magnitudes of the amplitudes associated with all possible sequences of intermediate outcomes and add the results.
[Rule 2] If the intermediate measurements are not made (and if it is not possible to find out what their outcomes would have been if they had been made), we must add the amplitudes associated with all possible sequences of intermediate outcomes and square the magnitude of the result.
In the context of the two-slit experiment with electrons (discussed here), M₁ indicates that an electron has been launched at G, and M₂ indicates the electron’s detection at D. Between G and D there is a plate with two slits, and the only intermediate measurement considered indicates the slit through which the electron went. Thus there are two alternatives, to each of which a complex number called “amplitude” has to be assigned; these amplitudes were calculated in the aforementioned post.
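To make the difference between the two rules concrete, here is a minimal sketch in Python. The numerical values of the amplitudes a1 and a2 are made up for illustration; the actual amplitudes are the ones calculated in the aforementioned post.

```python
import numpy as np

# Illustrative amplitudes for "electron went through slit 1 / slit 2",
# evaluated at one detector position D. The values are made up.
a1 = 0.5 * np.exp(1j * 0.0)
a2 = 0.5 * np.exp(1j * 2.0)

# Rule 1: first square the magnitudes, then add.
p_rule1 = abs(a1)**2 + abs(a2)**2

# Rule 2: first add the amplitudes, then square the magnitude.
p_rule2 = abs(a1 + a2)**2

print(p_rule1)  # 0.5 -- no interference term
print(p_rule2)  # 0.5 + 2*Re(a1*conj(a2)) -- interference term present
```

The difference between the two results is the interference term, which is present under Rule 2 and absent under Rule 1.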
Observe that Rule 1 applies not only if the intermediate measurements are made but also if it is possible to find out — by other measurements — what their outcomes would have been if they had been made.
In the early days of the theory, it was thought necessary for the proper study of quantum physical systems that they be sufficiently shielded from external influences. Measurement accordingly was thought to involve two interacting systems, a quantum system and the measurement apparatus. But shielding quantum systems from external influences is possible only for truly microscopic systems, up to and including relatively large molecules.1 When one is dealing with larger systems, their unavoidable interactions with the rest of the world (called “environment”) must be taken into account. Even a cosmic dust particle in intergalactic space reflects some of the electromagnetic radiation left over from the big bang, and the reflected radiation carries information about the particle’s whereabouts. By saying that the reflected radiation “carries” information, I mean that it can in principle be used for an indirect measurement of the particle’s position. This makes it possible to find out what the outcome of a direct measurement would have been if it had been made.
It is customary to say that the environment “monitors” or even “measures” the particle’s position. Decoherence theorists are aware that this manner of speaking is potentially misleading; otherwise they would not routinely resort to scare quotes. The dispersal of information about the particle’s whereabouts into the environment makes it possible to obtain information about the particle’s position indirectly, via the environment, but by itself it does not amount to a measurement. The environment does not obtain information. Only people do.
Another scenario in which it is possible to find out what the outcome of a measurement would have been if it had been made is the quantum eraser experiment discussed in this post. In this souped-up version of the two-slit experiment, cesium atoms are used instead of electrons, and a pair of microwave cavities is placed in front of the slits. The atoms are launched in an excited state, and the design of the cavities ensures that they exit the cavities in their ground state, leaving behind a photon. Even when the slit taken by an atom is not measured, it is possible to find out what the outcome of this measurement would have been if it had been made: one only has to ascertain which cavity contains the photon left behind by the atom, and this can be done a long time after the atom has passed the slit plate and hit the screen.
An impressive illustration of the difference between a physical system that is shielded from the environment and one that is not is provided by Saturn’s chaotically tumbling moon Hyperion. The irregular motion of this highly aspherical object was first detected by monitoring changes in its luminosity and then tracked by observations carried out during the Voyager 2 mission. While Hyperion, unsurprisingly, is always observed with its semi-major axis pointing in a definite direction, the angle characterizing its orientation would be spread over macroscopically distinct values after a mere 20 years if Hyperion’s interactions with the rest of the world could be switched off.2
The orthodox explanation of why Hyperion’s semi-major axis keeps pointing in a definite direction begins with the assumption that the true, objective state of the universe is represented by a quantum state (a state vector or wave function), and that this state changes with time in a completely continuous manner, without ever “collapsing.” The question then is: how does “classicality” emerge? Whence the observed universe? Whence the objects it contains? Whence their definite properties?
If the measurement problem (which concerns the interaction between a quantum system and a measurement apparatus) is hard, understanding the emergence of classicality in terms of interactions between a quantum system, a measurement apparatus, and the environment can hardly be easier. “If everything is in interaction with everything else, everything is generically entangled with everything else, and that is a worse problem than measuring apparatuses being entangled with the measured systems,” Guido Bacciagaluppi3 observed. In the same vein, Maximilian Schlosshauer and Kristian Camilleri4 argued that
if everything is just gobbled up by ever-spreading entanglement and homogenized into one gargantuan maelstrom of nonlocal quantum holism, and if we can’t conceptually isolate and localize a system and regard it as causally independent from some (potentially distant) other system, then there are no systems that could be the object of empirical knowledge.
The keyword is entanglement. The concept was given a name by Schrödinger, who in his famous cat paper called it Verschränkung. In John Trimmer’s English translation of that paper it came to be rendered as “entanglement.” To those who understand the mathematical formalism of quantum mechanics to be a probability calculus, saying that two or more quantum systems are entangled means that the outcomes of measurements performed on these systems are statistically correlated, and this in a way that defies causal explanation. An example of such a situation has been discussed in this post.
To the Ψ-ontologist, who believes that the objective state of the universe is a quantum state in unbroken continuous evolution, saying that two or more systems are entangled means that their combined quantum state is a “superposition” like the notorious Schrödinger cat state, which we encountered in this post. One hour after the poor cat has been placed into a steel chamber (at which point the chances that the cat is alive or dead are 50:50), this state is given by the expression
|S-cat⟩ = √½ |A₁⟩⊗|cat(alive)⟩ + √½ |A₂⟩⊗|cat(dead)⟩.
Another way of writing the same state is this:
|S-cat⟩ = √½ |B₁⟩⊗|plus⟩ + √½ |B₂⟩⊗|minus⟩.
Here |plus⟩ stands for the expression √½ |cat(alive)⟩ + √½ |cat(dead)⟩, while |minus⟩ stands for the same expression with the plus sign replaced by a minus sign. B stands for an apparatus capable of indicating whether the quantum state of the cat is |plus⟩ or |minus⟩. (Substitution confirms that the two expressions represent the same state, provided that |B₁⟩ = √½ |A₁⟩ + √½ |A₂⟩ and |B₂⟩ = √½ |A₁⟩ – √½ |A₂⟩.)
I concluded the aforementioned post with the following quotation from Luigi Picasso’s Lectures in Quantum Mechanics (Springer, 2016, p. 341): “tomorrow, when the observables that today do not exist will become available, we will be able, by means of two measurements, to resurrect dead cats.” Here is how this is supposed to work. Because the state |cat(dead)⟩ equals √½ |plus⟩ – √½ |minus⟩, we can take a dead cat and use apparatus B to determine whether the state of the cat is |plus⟩ or |minus⟩. We can then take the cat prepared in one of these states (no matter which) and use apparatus A to find out whether the cat is dead or alive. If we find that it is alive, we have succeeded in resurrecting a dead cat.
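As a toy linear-algebra check of this bookkeeping, here is a minimal sketch in Python. The two-component vectors and the names alive, dead, plus, and minus are my own illustrative choices; needless to say, no apparatus B exists that could implement the first measurement.

```python
import numpy as np

# Toy representation: the cat's "alive" and "dead" states as orthogonal
# unit vectors. (A sketch only -- no apparatus B actually exists.)
alive = np.array([1.0, 0.0])
dead  = np.array([0.0, 1.0])

# The basis that apparatus B would measure in.
plus  = (alive + dead) / np.sqrt(2)
minus = (alive - dead) / np.sqrt(2)

# Check: |cat(dead)> = sqrt(1/2)|plus> - sqrt(1/2)|minus>
print(np.allclose((plus - minus) / np.sqrt(2), dead))   # True

# A cat prepared in |plus> (or |minus>) by apparatus B yields "alive"
# with probability 1/2 under the subsequent A measurement:
print(abs(alive @ plus)**2)   # 0.5
print(abs(alive @ minus)**2)  # 0.5
```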
How would a Ψ-ontologist explain why it is impossible to build an apparatus such as B? She might point to the fact that all the information we obtain in experiments concerns positions. This was stressed by (among others) John Bell: “in physics the only observations we must consider are position observations, if only the positions of instrument pointers”.5 So the question is: why do we not observe superpositions of instrument pointers, or pointers simultaneously pointing in macroscopically distinct directions?
The straightforward answer would be that everything that is accessible to direct sensory experience is localized. While Ψ-ontologists must deny themselves appeal to sensory experience, they can point to our bodies’ being localized in physical space. They can further point to the locality of physical interactions (i.e., the dependence of interaction laws on the relative positions of the interacting systems). The implication then is that for sufficiently large or massive objects (including instrument pointers) the environment selects a privileged basis.
To see what this means, suppose that |left⟩ and |right⟩ are pointer states whose purpose is to indicate the outcome of a measurement that has two possible outcomes. In this case the two vectors |left⟩ and |right⟩ form a basis in the (abstract) space of pointer states, as do the vectors
|plus⟩ = √½ |left⟩ + √½ |right⟩
and
|minus⟩ = √½ |left⟩ – √½ |right⟩.
The states |left⟩ and |right⟩ are “environmentally stable”: if the instrument pointer points left (or right), it will continue to do so. The states |plus⟩ and |minus⟩, on the other hand, are environmentally unstable: if the pointer were somehow prepared in either the state |plus⟩ or the state |minus⟩, it would almost instantly decohere. For all practical purposes it would cease to be a coherent superposition and become a statistical mixture of |left⟩ and |right⟩.
When one is dealing with a statistical mixture of possible outcomes, it is consistent to assume that exactly one of the possible outcomes is the actual outcome. But not when one is dealing with a coherent superposition, not even one that has become a statistical mixture for all practical purposes.6 For decoherence is never complete. As was pointed out by David Wallace,7
decoherence occurs on short timescales (not instantaneously); it causes interference effects to become negligible (not zero); it approximately diagonalizes the density operator (not exactly); it approximately selects a preferred basis (not precisely).
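A toy model makes this concrete. In the following sketch the exponential damping factor and the decoherence time tau are assumptions chosen purely for illustration; realistic decoherence rates depend on the system and on its environment.

```python
import numpy as np

# Toy model of environment-induced decoherence in the {|left>, |right>}
# pointer basis. The damping law exp(-t/tau) and the value of tau are
# assumptions made for illustration only.
def reduced_density_matrix(t, tau=1e-3):
    rho = 0.5 * np.array([[1.0, 1.0],
                          [1.0, 1.0]])     # the pure state |plus><plus|
    rho[0, 1] *= np.exp(-t / tau)          # off-diagonal (interference)
    rho[1, 0] *= np.exp(-t / tau)          # terms decay, but never vanish
    return rho

print(reduced_density_matrix(0.0))   # coherent superposition
print(reduced_density_matrix(0.1))   # off-diagonals ~ 1e-44: negligible
                                     # for all practical purposes, not zero
```

The off-diagonal entries become astronomically small almost instantly, but at no time are they strictly zero.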
And even if decoherence were complete, the coherence would still exist; it would merely be impossible to observe it in any way whatsoever. As Erich Joos and Hans Dieter Zeh — two pioneers of the environment-induced decoherence program — have quipped,8 “the interference terms still exist, but they are not there!”
There are scenarios in which it appears to make sense to say that the interference terms exist but are not there. Such a scenario is the aforementioned quantum eraser experiment. Even if no direct measurements of the slit taken by each atom are made, the interference terms are “not there,” i.e., not present in the predicted and observed distribution of marks on the screen. This is because the alternatives “atom went through the left slit” and “atom went through the right slit” are strictly correlated with the respective alternatives “photon in the left cavity” and “photon in the right cavity.” In this case Rule 1 applies, for by determining the cavity containing the photon, we can learn through which slit the atom went.
What appears to justify saying that the interference terms “still exist,” on the other hand, is that interference can be “restored” (not for the same atoms but in a different experimental arrangement) by opening the shutters that (in the first experimental arrangement) were separating the cavities. This is because the now exposed photocounter situated between the shutters responds with probability ½, and if the marks made by the atoms at the screen are sorted according to whether the photocounter does or does not respond, interference fringes (alternating bands of high and low detection frequencies) are observed. It must be added, however, that the usual phraseology, according to which opening the shutters “erases” the which-way information “carried” by each photon, is seriously misleading. For one thing, as mentioned, we are talking about different experimental arrangements. And for another, what is “erased” is not an actuality but a possibility, one that exists in the first experimental arrangement but not in the second.
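Here is a sketch of the bookkeeping behind this sorting, assuming idealized plane-wave amplitudes for the two slits (a simplification; the actual amplitudes in the cavity experiment are more complicated).

```python
import numpy as np

# Idealized plane-wave amplitudes (an assumption made for simplicity)
# for an atom arriving at screen position x via the left or right slit.
x = np.linspace(-1.0, 1.0, 201)
psi_L = np.exp(+20j * x)
psi_R = np.exp(-20j * x)

# Shutters closed: Rule 1 applies (the photon's cavity reveals the slit),
# so the probabilities for the two slits are added -- no fringes.
p_closed = (abs(psi_L)**2 + abs(psi_R)**2) / 2

# Shutters open: sorting the marks by whether the photocounter responds
# splits the ensemble into two subensembles with complementary fringes.
p_click    = abs(psi_L + psi_R)**2 / 4
p_no_click = abs(psi_L - psi_R)**2 / 4

# The fringes cancel in the unsorted total, which equals the fringeless
# distribution obtained with the shutters closed:
print(np.allclose(p_click + p_no_click, p_closed))   # True
```

The point of the last line is that no fringes appear unless the marks are sorted: interference is “restored” only within each subensemble, never in the total distribution.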
Nothing analogous is possible in scenarios involving entanglement between alternative orientations of an apparatus pointer and states of the environment. The interference terms still “exist” as features of the quantum state of the universe, but since the entanglement that causes them to be “not there” cannot be undone, they cannot be restored. In other words, nothing analogous to the second experimental arrangement (in which the shutters are open and a photodetector is exposed) is possible. Zeh9 helpfully clarifies (the emphases are his): “although the resulting nonlocal superpositions still exist, we do not know, in general, what they mean (or how they could be observed).”
This puts a bright red flag not only on the environmental decoherence program but also on the reification of probability algorithms and the assignment of a quantum state to the universe as a whole. If something neither is there nor makes sense, what would we call someone who claims that it nevertheless exists? (Just asking!) And if such a thing does not exist, then neither does what implies its existence.
Although there was a time when ... decoherence was considered the “new orthodoxy” in the physics community to explain quantum measurements, at present it is quite clear that decoherence does not solve the measurement problem.10
If one attempts to make sense of quantum mechanics via the time-honored device of reifying a calculational tool (in this case a state vector or a wave function), one is led to ask: at what point does the objectification of a possible measurement outcome take place?11 While environment-induced decoherence provides an answer that is good enough for all practical purposes, it simply isn’t good enough from a logical or philosophical point of view.
If objective existence is attributed to the state vector of the universe, then the world experienced by us can only have a relative objective existence, and this not only in the sense that it lacks the absolute objective existence of the universal state vector. Decoherence theorists consider the quantum state of a physical system objective if it is “monitored” by the environment, so that it can be known indirectly (i.e., by observing the state of the environment) without perturbing it or re-preparing it in the process. But while environmental “monitoring” causes the interference terms to be “not there,” it does not cause the universal state vector to collapse. The latter remains a superposition of terms corresponding to different pointer orientations, which means that each of the possible pointer orientations indicates something that is objective relative to a particular branch of the universal wave function.
If, on the other hand, one begins with human experience — which, after all, is the universal context of science — then the objective existence of definite measurement outcomes is a given. The pointer points either left or right; the cat is either dead or alive. The question then is: to what extent is the definiteness inherent in direct sensory experience attributable to what is not accessible to direct sensory experience? How far can it be objectivized for all practical purposes? And the answer is: to the extent that observables are “monitored” by the environment.
The extent to which observables are “monitored” by the environment depends on the extent to which time and space are objectively differentiated. It depends on the extent to which the distinctions we make between regions of space or between intervals of time can be objectivized. If we partition space or time into smaller and smaller regions or increasingly short intervals, then, according to Wallace, “we will eventually reach a point where interference between branches ceases to be negligible, but there is no precise point where this occurs.” There is no shame in not knowing the extent to which definiteness can be objectivized, or the extent to which the differentiation of space and time into smaller and smaller regions or increasingly short intervals can be carried, before it ceases to be capable of objectivation. On the contrary, any attempt to draw a sharp line would be highly suspect.
How hard this can be may be inferred from a 1914 book by the mathematician (and politician) Émile Borel, who showed that even the gravitational effect resulting from shifting a small rock at the distance of Sirius by a few centimeters would completely change the microscopic state of a gas in a vessel here on earth, within seconds after the field (propagating at the speed of light) has arrived.
W.H. Zurek and J.P. Paz, Why we don’t need quantum planetary dynamics: Decoherence and the correspondence principle for chaotic systems, in D. Greenberger, W.L. Reiter, and A. Zeilinger (Eds.), Epistemological and Experimental Perspectives on Quantum Physics, pp. 167–177 (Springer, 1999).
G. Bacciagaluppi, The role of decoherence in quantum mechanics, in E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2016 Edition).
M. Schlosshauer and K. Camilleri, What classicality? Decoherence and Bohr’s classical concepts, Advances in Quantum Theory, AIP Conf. Proc. 1327, pp. 26–35 (2011).
J.S. Bell, Speakable and Unspeakable in Quantum Mechanics, 2nd edition, p. 166 (Cambridge University Press, 2004).
D. Wallace, Decoherence and ontology (or: How I learned to stop worrying and love FAPP), in S. Saunders, J. Barrett, A. Kent, and D. Wallace (Eds.), Many Worlds? Everett, Quantum Theory, and Reality, pp. 53–72 (Oxford University Press, 2010).
E. Joos and H.D. Zeh, The emergence of classical properties through interaction with the environment, Zeitschrift für Physik B 59, 223–243 (1985).
In E. Joos, H.D. Zeh, C. Kiefer, D. Giulini, J. Kupsch, and I.-O. Stamatescu, Decoherence and the Appearance of a Classical World in Quantum Theory, p. 40 (Springer, 2003).
J.C.M. González, J.J. Arriaga, and S. Fortin, in O. Lombardi, S. Fortin, C. López, F. Holik (Eds.), Quantum Worlds: Perspectives on the Ontology of Quantum Mechanics, pp. 379–392 (Cambridge University Press, 2019).
As I pointed out in this post, I draw a distinction between objectification (as in “the disaster of objectification”) and what Schrödinger has called objectivation — the construction of an objective world from subjective experiences. The corresponding verbs are “to objectify” and “to objectivize,” respectively.