Channel: Philosophy – William M. Briggs

Quantum Potency & Probability


A note on a complex subject. Ed Feser points us to the paper “An Aristotelian Approach to Quantum Mechanics” by Gil Sanders, and in turn Sanders points us to “Taking Heisenberg’s Potentia Seriously” by Kastner, Kauffman, and Epperson. Feser’s book Scholastic Metaphysics is also not to be missed.

Heisenberg was, of course, educated when Aristotle's distinction between act and potency was still taught. He thought those ideas useful in explaining the quantum (= discrete) curiosities then flooding through physics.

Sanders's paper is a gentle and informative introduction to these topics, while Kastner et al. go a little deeper. Below are some quotes. I believe they are useful in dispelling the recurring idea that probabilities are ontic, i.e. real things. Probability is purely epistemological, a relative measure of evidence, whereas potency is a real feature of objects. I urge you to read the papers themselves; they are not long. If you already know about Aristotelian metaphysics, jump to the end, about probability.

Sanders (the set up in brief):

A wave function is a complete mathematical description of the properties of particles (represented as state vectors) in a physical system. By itself the wave function is a superposition of all possible state vectors. With Schrödinger evolution, the wave function evolves as a linear superposition of different states. It is deterministic in that the current vector state will physically determine the resulting vector state. If we could know all the preceding conditions, we could predict with certainty what the resulting state vector would be. The wave function generally evolves in accord with Schrödinger, but once some form of measurement is performed, the wave function collapses in the sense that it no longer operates in accord with Schrödinger’s equation but in accord with the collapse postulate. Through a linear combination of these state vectors, the once indefinite superposition of state vectors nondeterministically produces some definite state vector. In other words, the collapse postulate tells us that once a particle is measured, it is no longer in a superposition of different states but collapses into a particle with definite properties and a definite position in a nondeterministic manner.
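The two regimes Sanders describes can be sketched numerically. This is a minimal toy model, not anything from either paper: a two-state system, a rotation standing in for deterministic (unitary) Schrödinger evolution, and a Born-rule draw standing in for the nondeterministic collapse postulate.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-state superposition of basis states |0> and |1> (illustrative only).
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)

# Schrödinger-style evolution is deterministic and unitary: the current
# state vector fixes the next state vector exactly.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
psi = U @ psi
assert np.isclose(np.vdot(psi, psi).real, 1.0)  # unitarity preserves the norm

# The collapse postulate is nondeterministic: measurement yields one
# definite basis state, with probabilities given by squared amplitudes.
probs = np.abs(psi) ** 2
outcome = rng.choice([0, 1], p=probs)
psi_after = np.zeros(2, dtype=complex)
psi_after[outcome] = 1.0  # a definite state; no longer a superposition
```

Note the asymmetry: re-running the evolution step always gives the same state, while re-running the measurement step can give different outcomes.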

Sanders (Aristotle is on his way):

The methodology of physics is such that it must use the exceedingly abstract tools of mathematics in order to perform its inquiry. Mathematics is inherently quantitative and structural by nature, thus it is in principle incapable of capturing qualitative aspects of nature in the same way that a metal detector is in principle incapable of detecting plastic. Whatever does not fit this quantifiable method, like immanent teleology and causal powers, must be ignored; only mathematically definable properties are discoverable. The wave function, for example, is a mere abstract equation that is standardly interpreted to be a representation of something concrete, but as to what that is we do not know. At best physics can only give us a partial description of reality (unless abstract structure is all that exists); it fails to tell us what is the inner qualitative nature of the thing that exhibits this mathematical structure.

Sanders (Aristotle has arrived):

According to the world renowned physicist, Heisenberg, the wave function “was a quantitative version of the old concept of “potentia” in Aristotelian philosophy. It introduced something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality” (1958, 41)…

A potentia is simply a thing’s potential to have its qualities or substance changed. For example, a piece of glass has the potential to shatter or it has the potential to melt into a fluid. The former kind of change is a change of qualities or accidents, whereas the latter is a change in substance. This stands in contrast to actus, which refers to the way a thing actually is here and now… A potentiality should not be confused with mere possibility. It is possible for a unicorn to exist, but it is not possible for a piece of glass to become a unicorn because it lacks that potential whereas it does have the potential to break. A piece of glass’ actuality limits the potential range of things that can be actualized.

Sanders (Aristotle has filled the room):

[Modern physicists restrict] the “real” to actuality because their view of matter is still mechanistic, where material objects are mere forms, which corresponds only to actuality. The Aristotelian conception of matter is decidedly hylomorphic in that all material substances are composed of form and matter. Form (or structure) corresponds to actuality, whereas matter corresponds to the potency that persists through change. This matter is the substrate of a material substance that is receptive to different forms, whereas the form gives definite structure to the matter… Since matter and form are just more specific instances of potency and actuality, we already know that this analysis is plausible given the above argument for Aristotle’s act-potency distinction.

Sanders (skipping over a justification of hylomorphism and a proof that potency has a kind of existence, then this):

Additionally, hylomorphism entails a gradual spectrum of material beings with greater degrees of potentiality to greater degrees of actuality. Something has greater actuality if it has more determinate form (or qualities) and something has higher potency if it is more indeterminate with respect to being more receptive to various forms. For example, a piece of clay has higher potency insofar as it is more malleable than a rock and thus more receptive to various forms. A rock can likewise be modified to receive various forms, but it requires a physical entity with greater actuality or power to do so because it has more determinate form as a solid object… [H]ylomorphism predicts that you will find higher levels of potency because you are getting closer to prime matter. This is precisely what we find in QM. The macroscopic world has more actuality, which is why we experience it as more definite or determinate, whereas the microscopic world has far less actuality, thereby creating far less determinate behavioral patterns.

Sanders (finally QM):

Let’s start with the wave function, which, if you recall, initially describes several mathematical possibilities (aka superposition) prior to collapse. QM forces us to reify the wave function in some way because by itself it would suggest that the quantum world only exists when we are measuring it, which is rather absurd….It’s far more plausible to interpret the wave function as real insofar as it describes a range of potential outcomes for particles that are low in act but great in potency. This view reinterprets superpositions as being the potentials of a thing or state, not as actual states in which all possibilities are realized.

Sanders (more QM):

Thus collapse occurs when there is contact between a perceptible object and a non-perceptible particle whereby contact with the perceptible object actualizes a particular potential (spin-y as opposed to spin-x) of the particle into a definite state. The actualization of certain outcomes at measurement has the result of affecting the range of potential outcomes of some other particle: “actual events can instantaneously and acausally affect what is next possible” (Kastner, 2017)… This problem is resolved if you’re an Aristotelian. Suppose you intended to visit Los Angeles but unbeknownst to you an earthquake sank that traffic-ridden city into the ocean. This actualized event changed the range of potential places that you (or anyone else) could visit without acting upon other persons. In other words, actuality cannot directly alter a distant actuality without interaction, but it can instantaneously and acausally change a distant range of potentials.

Kastner (skipping over the same material discussed in Sanders; the Kastner PDF was built, I’m guessing, on Windows, which makes it very difficult to cut and paste from; thus my laziness explains why I quote them less):

We thus propose a new kind of ontological duality as an alternative to the dualism of Descartes: in addition to res extensa, we suggest, with Heisenberg, what may be called res potentia. We will argue that admitting the concept of potentia into our ontology is fruitful, in that it can provide an account of the otherwise mysterious nonlocal phenomena of quantum physics and at least three other related mysteries (‘wave function collapse’; loss of interference on which-way information; ‘null measurement’), without requiring any change to the theory itself…

As indicated by the term ‘res,’ we do conceive of res potentia as an ontological extant in the same sense that res extensa is typically conceived—i.e. as ‘substance,’ but in the more general, Aristotelian sense, where substance does not necessarily entail conflation with the concept of physical matter, but is rather merely “the essence of a thing . . . what it is said to be in respect of itself”.

Of course, “one cannot ‘directly observe’ potentiality, but rather only infer it from the structure of the theory.” If we could measure it directly, it would be actus not potentia. They use the phrase quantum potentia (QP).

Probability

The belief that all things had to be all act, pure actus, and contain no potentia accounts for many of the confusions about QM. One of those confusions concerns the concepts of probability and “chance”. Physicists were reluctant to throw away the useful idea of cause; there had to be some causal reason “collapse” was happening. That collapse is the movement of a potential to an actual, but they didn’t see it that way, thinking the superposition of waves was all act. How did this happen? Probability was discovered to be indispensable in applying and understanding QM. Thus some thought probability itself was ontic, that chance was an objective feature of the world, and that probability/chance was the causal agent that selected the collapse point.

After all, isn’t QM probability calculated as a mathematical-function of the physical-wave-function? Didn’t that make probability real?

Well, no. It’s true the probability is a mathematical-function, something like the “square of the corresponding amplitude in a wave function”. The probability thus takes as input aspects of reality, a reality (the wave) which contains both act and potential, and spits out a number. But so what? Conditioning on measures of real things doesn’t turn thoughts about things into the things themselves, or into causal forces. (This does not rule out the mind somehow projecting energy onto the body, though I don’t believe it can; in any case, that is not what turning thoughts into causal forces means.)
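The "square of the corresponding amplitude" rule is easy to exhibit. A minimal sketch, with made-up amplitudes (the numbers are mine, not from either paper): the probability is computed from the wave's amplitudes, which is exactly what makes it a function of reality rather than a part of reality.

```python
import numpy as np

# A hypothetical normalized wave function over three measurement outcomes;
# the amplitudes are illustrative, chosen only so the moduli square to 1.
amplitudes = np.array([0.6, 0.8j, 0.0])

# Probability of each outcome: the squared modulus of its amplitude.
# The number is computed FROM the wave; it is not itself in the wave.
probs = np.abs(amplitudes) ** 2   # roughly [0.36, 0.64, 0.0]
assert np.isclose(probs.sum(), 1.0)
```

The function maps amplitudes to numbers in our heads; nothing in the formula supplies a causal agent that picks the collapse point.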

If I tell you this bag has one black and one white ball and one must be drawn out blind, the probability of drawing a black is a function of reality all right, but your thoughts about that probability aren’t what causes your hand to grasp a ball. There is no probability in the bag. Or in your hand, or anywhere except in your thought. That’s easy to see in balls-in-bags because, as the two papers emphasize, we are dealing with objects that contain mostly act. That the balls have the potential to be all sorts of places in the bag is what makes the probability calculation non-extreme (not 0 or 1).

This is made even more obvious by recalling that two physicists can have different probabilities for the same QM event, just as two people can have different probabilities for balls in bags. Person A has the probability 1/2, given just the premise above, but Person B notices the bottom of the bag is transparent; Person B has probability 1 of drawing the black. Physicist A knows everything about the measurement apparatus except for one thing newly learned by B, an additional physical measure. Both have different probabilities. It will turn out, in both cases, that B makes better predictions. But in neither case could the probabilities have caused anything to happen. Indeed, Person B has an extreme probability because the cause of selecting black is perfectly known, and obviously isn’t the probability.
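The balls-in-bag comparison can be written out directly. A small sketch of my own (the variable names are mine): both persons face the same draw from the same bag, yet their probabilities differ, because each conditions on different evidence.

```python
from fractions import Fraction

# The bag holds one black and one white ball.
bag = ["black", "white"]

# Person A conditions only on the bag's composition: probability 1/2.
p_black_A = Fraction(bag.count("black"), len(bag))

# Person B additionally sees, through the transparent bottom, that the
# hand will grasp the black ball: probability 1 (an extreme probability).
glimpsed = "black"
p_black_B = Fraction(1) if glimpsed == "black" else Fraction(0)

# Same bag, same draw, two probabilities. The numbers measure each
# person's evidence; neither number is in the bag, and neither causes
# the hand to grasp anything.
```

Person B's probability goes extreme precisely when the cause of the outcome is perfectly known, which is the author's point: the probability and the cause are different things.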

Physicist B does not have the advantage Person B has. For in Physicist B’s case, we have a proof that we can never reach extreme probabilities for a certain class of correlated (in the physics use of that word) events. It has to be something in act that moves the potential in the wave to act (“collapse”), but what that is is hidden from us. That isn’t “hidden variables”; that’s an understanding that our knowledge of cause is necessarily incomplete.

Consider Kastner:

[W]e might plan to meet tomorrow for coffee at the Downtown Coffee Shop. But suppose that, unbeknownst to us, while we are making these plans, the coffee shop (actually) closes. Instantaneously and acausally, it is no longer possible for us (or for anyone no matter where they happen to live) to have coffee at the Downtown Coffee Shop tomorrow. What is possible has been globally and acausally altered by a new actual (token of res extensa). In order for this to occur, no relativity-violating signal had to be sent; no physical law had to be violated. We simply allow that actual events can instantaneously and acausally affect what is next possible…which, in turn, influences what can next become actual, and so on.

They mean causal in the efficient-cause sense, of course; and we needn’t agree with them about physical “laws”. The probability, in their minds, which are ignorant of the closing, that they will meet inside the coffee shop is high (close to or equal to 1, depending on individual circumstances). That they will meet inside won’t happen, though. They did not have the right information upon which to condition. That knowledge was not a hidden variable in any causal sense. Bell lives on.

Now about how all this works in individual experiments, and the relation to probability, we’ll leave for another time.

