
Cosmology, Quantum Mechanics & Consciousness Message Board › The nature of "freewill"


Ian B.
user 10895495
London, GB
Post #: 196

Hello folks

Sorry I've been busy for a week or so, and currently it's hot and exhausting, but I might as well get back into the fray as I've a lot of catching up to do on all the currently running threads.

My friend and editor of Conway Hall's Ethical Record, none other than Norman Bacrac, has composed and mailed to Prospero magazine the following argument in favour of the causal determination of our capacity to make decisions, in response to an earlier “metaphysical" claim by the mathematical physicist John Lucas, long-time advocate of the famous Wigner's Friend argument and forerunner of the style of argumentation widely used by Roger Penrose during the 1990s. Me in white this time (I will post Lucas's original immediately following this, so that you'll be able to understand what it was to which Norman was responding):


Sterling stuff Norman!

As you can now read, I have taken the liberty of disseminating your very cogently argued -- and admirably concise -- exposition to POPS and the customary 3 miscellaneous others. I will keep you updated on the (entirely deterministically predictable!) Hegelian resultants -- in the Newtonian sense -- of one particular correspondent, if you don't find the prospect too irksome.

This John Lucas is presumably the famous, somewhat doddery yet still mathematically able old chap who used to appear during the '90s at the LSE's Graham Wallas Room in the Old Building in Houghton Street, right? I well remember Oxford's invited speaker Michael Lockwood running through the proof of Bell's Theorem in a mere 45 minutes, and everyone assembled thought that not bad for a philosopher. (Which, if I recall correctly, is what Lucas at that point actually said!) He was ancient then, so by now he must be positively Zimmer-superannuated. He was also one of the allies of Eugene Wigner's Friend "argument" re the "collapse of the wave function", and this for traditional Christian theological reasons.

I might also post your argument onto the CONSCIOUSNESS, COSMOLOGY AND QUANTUM MECHANICS website Message/Discussion Board, but only with your permission.

Ian


LETTER TO THE EDITOR OF PROSPERO FROM NORMAN BACRAC:
16 May 2012

John R. Lucas (Prospero 18-1) alleged that there were two scientific problems for my views on epiphenomenalism and free will, both stemming from the indeterminacy implicit in quantum mechanics. I do not dispute that in quantum mechanics certain events are not fully determined by the theory, having an arbitrariness about them, but neither epiphenomenalism nor, I believe, the freedom we can experience requires determinism. I explain how below.

John Lucas sets a rather high bar for epiphenomenalism to clear, which would also be impossibly high for any known alternative accounts of the relation between brain and consciousness: they apparently must “predict the exact state of the universe in time to come” (page 40). Let’s just consider Laplace’s mythical Intelligence which could predict the future of the universe. To achieve this, the Intelligence had just three requisites:-

(i) data, to infinite precision, of the position and velocity of every particle in the universe at a particular time*;
(ii) knowledge of all the laws of physics (assumed by Laplace to be deterministic);
(iii) calculating power of infinite speed and scope to utilise (i) and (ii).
*Even the smallest approximation in the data would rapidly produce wildly wrong results, chaos theory suggests.
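
The starred footnote is the standard chaos-theory point, and it is easy to demonstrate numerically (an illustration of mine, not Bacrac's): the logistic map below is perfectly deterministic, yet an error of one part in 10^12 in requisite (i) destroys the forecast within a few dozen steps.

```python
# The logistic map x -> r*x*(1-x) at r = 4 is fully deterministic, yet a
# 1e-12 error in the initial datum grows until the two trajectories are
# uncorrelated -- Laplace's requisite (i) really does need infinite
# precision.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the fully deterministic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

exact = logistic_trajectory(0.3, 100)
perturbed = logistic_trajectory(0.3 + 1e-12, 100)  # requisite (i) off by 1e-12

early_gap = abs(exact[5] - perturbed[5])           # still tiny after 5 steps
late_gap = max(abs(a - b) for a, b in zip(exact[60:], perturbed[60:]))
print(early_gap, late_gap)
```

After a handful of steps the trajectories still agree to many decimal places; well before step 100 they differ by order 1, i.e. the "prediction" carries no information at all.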

The nearest we have come in reality to meeting these humanly impossible conditions, apart from specially simplified lab set-ups, is probably in the solar system. We can predict eclipses of the sun and moon quite well and foretell spaceship trajectories accurately, utilising the determinism of good old classical physics (here meaning Newton and a touch of Einstein). Whereas this ability to predict the future accurately is certainly a good test that one is dealing with a deterministic system, inability to predict due to the lack of one or more of Laplace’s requisites, is not evidence of the system’s non-determinacy. In particular, we cannot assert that today’s quantum mechanics represents the final ‘theory of everything’ [requisite (ii)].
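
That solar-system determinism can be sketched in a few lines (a toy of mine, not from the letter; GM and the step size are arbitrary illustrative units): integrate Newtonian gravity twice from identical data and the outputs agree digit for digit.

```python
# "Good old classical physics" as a two-body Newtonian integrator: same
# data, same laws, same future.

import math

def orbit(x, y, vx, vy, steps, dt=1e-3, gm=1.0):
    """Semi-implicit Euler integration of a body about a central mass."""
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= gm * x / r3 * dt
        vy -= gm * y / r3 * dt
        x += vx * dt
        y += vy * dt
    return x, y

# A circular orbit: radius 1, speed sqrt(GM/r) = 1.
run_a = orbit(1.0, 0.0, 0.0, 1.0, steps=10_000)
run_b = orbit(1.0, 0.0, 0.0, 1.0, steps=10_000)
r_final = math.hypot(run_a[0], run_a[1])

print(run_a == run_b)  # True: identical data give identical output
```

The repeat run is the point: reproducibility from exact initial data is what "deterministic" means here, quite apart from whether we could ever gather such data for a real system.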

However, even if the brain is subject to quantum randomness at its atomic level, it may work de facto deterministically at the level of neural events. Natural selection, where only the ‘fittest’ survive, has been refining animals’ decision-making powers for at least 500 million years. Animals constantly make life-or-death decisions using their sensory inputs, instinct and memory, (eg, ‘that new sound I hear is not a fox so I shall resume my lunch’ -- but if it were a fox, the foolish rabbit would become its lunch). Thus for good Darwinian reasons, the vital neural firings of surviving animals convey their data using the more robust, largely undegradeable digital (on/off) coding, with which we are all now familiar as the best way to transmit information through noisy channels. Accepting that we are just advanced animals, it follows that our unfettered, much-acclaimed ‘free’ choice – eg to vote for Boris or Ken – is also accomplished by this highly evolved faculty of the brain.
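
Bacrac's point about robust on/off coding can be illustrated with a toy relay chain (my sketch, not his; the uniform noise model is an arbitrary assumption): an analog level accumulates noise at every hop, whereas a binary level can be re-thresholded -- regenerated -- at each hop.

```python
# Why on/off coding survives a noisy channel: analog errors accumulate,
# digital levels are regenerated at every relay.

import random

random.seed(0)

def relay_analog(value, hops, noise=0.1):
    """Pass an analog level through noisy relays; the errors accumulate."""
    for _ in range(hops):
        value += random.uniform(-noise, noise)
    return value

def relay_digital(bit, hops, noise=0.1):
    """Pass a 0/1 level through noisy relays, re-thresholding each hop."""
    level = float(bit)
    for _ in range(hops):
        level += random.uniform(-noise, noise)
        level = 1.0 if level > 0.5 else 0.0   # regenerate the bit
    return int(level)

message = [1, 0, 1, 1, 0, 0, 1, 0]
analog_out = [relay_analog(b, hops=50) for b in message]
digital_out = [relay_digital(b, hops=50) for b in message]

print(digital_out == message)  # True: every bit arrives intact
```

After 50 hops the analog copies have drifted noticeably from 0 and 1, while the digital message is recovered exactly -- noise smaller than the threshold margin is scrubbed at each relay.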

One’s choice in a given instance is necessarily the result of one’s history up to that instant. So-called ‘libertarian’ free will, being contra-causal, is contingent on a supernatural gift of ‘free will’ for which no mechanism has, so far, been supplied. We can note that no moral significance can attach to anyone’s behaviour if it’s due to arbitrary brain (or, if you prefer, mental) activity. Furthermore, we rely on causality, the assumed determinism of the world and our nervous system, when planning any course of action and, as Hume noted, we rely on the logic of cause and effect when we venture to correct, influence or teach anybody.

Materialism regards consciousness as an additional attribute of matter, generated automatically by human and probably many animal brains; epiphenomenalism builds on this to affirm its inefficacy, thus preserving the continuity of physical processes. Epiphenomenalism does not make claims about the future, which seems to constitute John Lucas’s second objection about the lack of tense in physical law; it simply asserts, on the basis of brain scans, a psycho-neural correlation**: that all conscious experience occurs in the brain and its content is determined by the simultaneous brain state, which evolves as an integral component of the physical world.

As a matter of fact, techniques are now available which detect, immediately prior to one’s awareness of it, the sub-conscious build-up of those brain events expected to form a decision, thus allowing prediction of that choice (eg the neurologist Patrick Haggard’s work). There is no question of feeling an alien force constraining one – the choice is experienced as one’s own ‘free’ decision, where ‘free’ can mean only ‘unforced by any outside agency’. This is the only form of free will one needs.

**Actually a one-many correlation, in that the numerous rapid sub-atomic changes are unlikely to alter a particular conscious experience.
{For a radio discussion of these issues, listen to Podcast 31 from the website Philosophy Now}


Ian B.
user 10895495
London, GB
Post #: 197


Here's Lucas's original thesis, to which Norman Bacrac replied:


Note on Freewill and Determinism


March 10, 2011


Indeterminism is said to undermine responsibility (``In Our Time'', March 10, 2011). But if quantum mechanics is, as we believe, indeterministic, we should not conclude that Darwin's Theory of Evolution is thereby undermined. There would be many things we could not explain in physical terms, but the biological explanations would remain unscathed. So, too, explanations of why we did things in terms of our reasons for doing them. It would be perfectly possible to ask someone why he did it, as it would be to ask a biologist why the mammals (apart from the primates) came to be colour-blind.

A different problem arises with the freedom of the will. A free decision is one which could have been different, as well as typically being explicable in terms of reasons. Darwin's Theory of Evolution would not be undermined by indeterministic physics, but it would not be undermined by deterministic physics either. With human actions we are concerned not only with being able to answer the question ``Why did you do it?'', but with the agent's being a genuine initiator of action, and not merely the conduit through which the causal process works itself out. This is under threat from determinism. Determinism does defeat free will. In our age it is physical determinism that has been at issue, because physical determinism has been thought to be established by physical science. But if that is not so, the position is that the defeater, to use Alvin Plantinga's terminology, has itself been defeated.

The structure of the argument is:

1. We believe we have free will.

2. Not so, say philosophers, believing in determinism.

3. But the best view of quantum mechanics is that it is indeterminist.

4. No good, say the philosophers, because if it were indeterminist, that would mean that our actions were random (i.e. physically inexplicable). And if they were random, they would be inexplicable and we could not give a reason why we did them.

5. The slide from physically inexplicable to rationally and morally inexplicable is fallacious; compare the parallel case of physically and evolutionarily inexplicable.



A former member
Post #: 132
Hmmm... Is this yet another diversionary tactic to evade the current thread on quantum mechanics? There is no scientific interest in this posting and I suggest that it would do better on a philosophy discussion board elsewhere.

Can we keep more to science on this message board please? Science is an exciting arena of new and important developments that are worth discussing. Let's talk about that sort of stuff, not these endless meanderings of philosophy.

Still, I will chuck in my own view which is that 'determinism' is merely a naive extrapolation from Newtonian mechanics, machines and computers. Strictly speaking no machine on the planet is 100% deterministic because every material thing is subject to the uncertainties of the real world such as highly improbable quantum fluctuations or statistical fluctuations in molecular arrangements or impact of powerful cosmic rays from a supernova. Determinism is a conceptual ideal. Actually the entire world is pervaded by uncertainty. So-called 'deterministic' things are predictable with 99.999.....9% confidence but never exactly 100%.

Here's a thought from cosmology: it's reckoned that the distribution of matter in the universe, with its clumps of galaxies and almost empty space between them, is explained by quantum fluctuations in the first fraction of a second of the Big Bang. So I suggest that if our galaxy of billions of stars is the result of an unpredictable quantum event 13.7 billion years ago, then its existence just prior to that event had the same sort of indeterminism as the computer on the desk or the person at that computer.

In their 'Free Will Theorem', which I mentioned before, Conway and Kochen define determinism in terms of

  • the totality of information (called I) in the past light cone of an entity (eg an electron or a person)
  • a decision function (called f) for that entity and
  • the action f(I) of that entity.

If the function f exists then the entity acts deterministically; if the function f does not exist then it does not. With that definition they prove that humans and sub-atomic particles are either all deterministic or all indeterministic entities. That is a theoretical result with obvious implications for human 'free will'. To judge from the given excerpt of philosophical musing, the authors are ignorant of this development.
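
That definition can be made concrete with a toy sketch (my framing and an invented helper name, not Conway and Kochen's own formalism): determinism means a function f exists with action = f(I), so a log in which the same information I is ever followed by two different actions rules any such f out.

```python
# A finite log of (information, action) pairs can at least be checked
# for the necessary condition "same I, same action" -- i.e. whether the
# log could be the graph of some decision function f.

def consistent_with_some_f(observations):
    """True iff the (info, action) pairs could be the graph of a function."""
    seen = {}
    for info, action in observations:
        if info in seen and seen[info] != action:
            return False      # same I led to two actions: no f can exist
        seen[info] = action
    return True

# A thermostat-like entity: repeatable behaviour, consistent with some f.
thermostat_log = [("cold", "heat_on"), ("hot", "heat_off"), ("cold", "heat_on")]
# An entity that acted differently on identical information: no f exists.
noisy_log = [("cold", "heat_on"), ("cold", "heat_off")]

print(consistent_with_some_f(thermostat_log))  # True
print(consistent_with_some_f(noisy_log))       # False
```

Note the asymmetry: a finite log can refute the existence of f but can never establish it, which chimes with Bacrac's remark above that inability to predict is not evidence of a system's non-determinacy.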



Ian B.
user 10895495
London, GB
Post #: 198

N.B. I was mistaken about who exchanged which views with whom:

Ian. I like your new thread - thanks. However, my letter to Prospero (to be published in Prospero 18 (3) in September) was in reply to Lucas's article in 18 (1) which is not the same as the extract from In Our Time and which you have probably not seen. I'll try to scan it and send it to you, ideally to replace the extract from IOT. You might then also want to send a comment on it to Prospero. Otherwise we'll get comments from people who don't know the original Lucas article.

Norman.


.. So we await developments! Hello Andrew. Interesting reply. Right, you say:

Strictly speaking no machine on the planet is 100% deterministic because every material thing is subject to the uncertainties of the real world such as highly improbable quantum fluctuations

.. literally true (since QM is!) ..

or statistical fluctuations in molecular arrangements

Ah! .. But deterministic according to (classical) statistical mechanics! Of course, we appreciate that the Maxwell/Boltzmann account isn't literally true -- because of QM itself, of course -- but then that consideration reduces your second alleged tranche of indeterminability to a subset of your first category.

or impact of powerful cosmic rays from a supernova.

They're about as straight-line deterministic as it gets -- and the higher their energy, the more "particle-like" they become. (Appreciate that we haven't yet resolved that one to your satisfaction Andrew.)

Determinism is a conceptual ideal. Actually the entire world is pervaded by uncertainty. So-called 'deterministic' things are predictable with 99.999.....9% confidence but never exactly 100%.

Again -- to rehash a prominent theme throughout the past 3 months -- the ontological issue of whether or not the universe is deterministic is strictly orthogonal to the issue of how accurately we are able to predict outcomes!

Our shortcomings are not an intrinsic property of the universe -- as opposed, say, to the Uncertainty Principle, which is. On the general theme of the nomic upshot of QM, even there we find that "the randomness itself isn't random" but it's structured, predictable from situations belonging to one category of initial conditions to the next, and calculable ab initio even in the case of experimental set-ups which haven't yet been configured.
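
The "structured randomness" point can be made concrete with a toy simulation of mine (the cos² pass-rate is the standard Malus/Born rule; the code and numbers are illustrative only): each single outcome is drawn at random, yet the statistics of the population are fixed in advance.

```python
# A Malus-law "quantum coin": single polariser outcomes are random, but
# the population passes at the predictable rate cos^2(theta).

import math
import random

random.seed(42)

def photon_passes(theta):
    """One photon meeting a polariser at angle theta: a random outcome."""
    return random.random() < math.cos(theta) ** 2

theta = math.pi / 6                              # 30 degrees
trials = [photon_passes(theta) for _ in range(100_000)]
observed = sum(trials) / len(trials)
predicted = math.cos(theta) ** 2                 # exactly 0.75

print(observed, predicted)  # observed sits within about 1% of 0.75
```

No single trial is predictable, yet the pass-rate was calculable before a single "photon" was fired -- randomness under a precise nomic constraint, exactly the contrast being drawn here.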

QM's hallmark of discreteness -- its lack of continuity between the space-time loci of the components of quantum-describable systems, which shows that we cannot draw repeatable trajectories for same-start experimental set-ups -- obscures the true situation in regard to single outcomes; but the fact that populations of such outcomes betray strictly predictable features precludes an absolute "anything goes" behaviour within the natural world itself. There is clearly some sort of in-principle precisely describable nomic constraint on these populations, which strongly suggests that each "particle" is subject -- non-locally, evidently! -- to connections with elsewhere which, globally speaking, do show precise and repeatable correlations from one specifically described situation to the next. The empirical fact of quantum nonlocality mandates that these connections are "instantaneous" -- they real-number bisect the local inertially defined time-axis ..

.. but non-causally -- (instead, "spatially") -- so the system would still be determined, but not by the careers of its own local compositional antecedents! Determinism does not necessarily imply causality (whereas the converse claim is clearly untrue).

[I might add that Penrose's Process R has never been demonstrated but, rather, somewhat blandly assumed to be the case ever since von Neumann introduced the notion of "collapse of the wavefunction", and it is clearly part of the popular mythology of QM precisely because it has never been demonstrated! Thus persists -- as I keep pointing out within this forum -- the baleful stamp of respected authority-figures long dead. All that we are ever faced with, experimentally speaking, is a localisation of the situation to coordinates (or other discrete states) which are very tightly defined in comparison to the acuity of human sensory capacities, but this latter is only a condition of so-called "quantum state collapse", never a proof. As always, the onus of proof lies with the claimant .. and I personally greatly prefer the Occam's Razor approach: "If you don't need it to make things work, then whatever it is, it doesn't belong to the toolkit of this particular theoretical explanation".]


Here's a thought from cosmology: it's reckoned that the distribution of matter in the universe, with its clumps of galaxies and almost empty space between them, is explained by quantum fluctuations in the first fraction of a second of the Big Bang.

That's one popular interpretation! (Edward Tryon, circa 1970).

Conway-Specker's specification of course exhibits the logic which one can only expect, but I'm afraid that it doesn't impinge in any way whatsoever on whether or not we are rationally accountable in some Court of Law! Rational accountability is not consistent with lack of determinism.

We would in the normal run of things regard any person or entity who/which behaved without in principle being able to generate some consistent rationale as some sort of either monster or animal. Is this the sort of situation which we are ostensibly intended to endorse if the "quantum freewill" account were really found to be the case?







A former member
Post #: 133
First of all Ian, I think you should stop promoting dodgy ideas about quantum physics. It appears to me that you do not understand the subject correctly and I would ask if your views are purely home-spun, or are they received ‘wisdom’ in the philosophy community, or do they come from an authority on the subject? On the quantum mechanics thread I made a challenge to which you have failed to give any adequate response. If your ideas come from an authority then you should have no difficulty in responding, with references. I am being fair about this; I haven’t yet called your views nonsense because you were given the opportunity to prove me wrong. Therefore you should respond to my criticism before you repeat any more dubious ideas about so-called ‘wave collapse’. I suspect that you do not actually understand my point and this is why you are unable to reply. If you like we can discuss this by email until the issue between us has been clarified.

So your bit about Penrose is presently inappropriate. So too is your last paragraph on Conway-Kochen which fails on three counts. First you got the name wrong. Secondly your comment about “the logic which one can only expect” is a joke; I suppose you thought Bell’s theorem on non-locality was pretty obvious as well! And thirdly, you have got a bit mixed up between mathematical physics and ethics. This MeetUp group is for discussing cosmology, quantum mechanics and (the science of) consciousness. So let’s not try to do ethics as well, OK?

I go along with the Conway and Kochen definition that determinism is present if and only if the function f(I) exists. This definition is very general and powerful. For example this definition implies that a discussion of a computer as a deterministic device is inadequate because the information set (I) has been restricted to a subset of the past light cone of the computer (by forgetting about the possible effect of cosmic rays for example) and that restriction is an artificial one made by the discussant. It does not fully represent the reality. Note also that this definition does not depend on ordinary notions of predictability; I am with you on that point.

Having dealt with those points the only theme left in your note is what seems to be your attempt to cling on to some determinism. You tried to argue that cosmic rays are “about as straight-line deterministic as it gets” but how wrong can you be? The emission of a cosmic ray is a quantum-level event and the path of this ‘particle’ through space is straight with a probability of about zero and is subject to physical effects some of which may involve other quantum events along the way. So your attempt to cling to some determinism there does not work. And indeed it cannot work because of the Free Will Theorem. Instead of a human operator setting up the measurements required in that theorem and being tested for deterministic behaviour, it could be a computer or a cosmic ray detector that does the job. The same conclusion about determinism will have to apply to any and all of these. Unless you can find that function f, it must be concluded that indeterminism and uncertainty flood our universe on every scale from electrons to galaxies. And that should now be the starting point for any discussion on free will.
Ian B.
user 10895495
London, GB
Post #: 199

First of all Ian, I think you should stop promoting dodgy ideas about quantum physics.

Moi? Have you any historical examples? (Flabbergasted.)

It appears to me that you do not understand the subject correctly

.. (I was expecting you to furnish examples!)

and I would ask if your views are purely home-spun, or are they received ‘wisdom’ in the philosophy community, or do they come from an authority on the subject?

Au contraire. As said, I’m a resolute empiricist, and therefore prefer to look at the evidence rather than the half-baked, pop-philosophical views of admittedly superlative mathematicians whose expertise has gulled otherwise less-credulous others into accepting obvious hogwash.

Beware of authority figures until you have listened to or read their expositions in analytical terms. Only then – once it has been digested – should the fare be given critical scrutiny. ( .. And only if – like Einstein – they actually almost always get it right should we hand them our unswerving intellectual allegiance.) The curse of humanity (IMO) is that throughout recorded history – and doubtless also a long time prior to that – we have tended to follow any guru with sufficient qualifications in something-or-other, or even a sufficiently stentorian voice – witness the run-of-the-mill politician or religious leader. (As long as they actually stick to the “something-or-other” I don’t mind, but if they stray beyond their domain of masterful understanding, they’re more than apt to make decisions fully as gauche and jejune as any other unschooled commentator.)

I repeat: all the knowledge that we have about quantum-describable systems is derived on the basis of measurements. Measurements – per se – do not imply “collapse”. Consider: you know that angular momentum is conserved, and this principle applies fully as much to quantum systems as to so-called classical ones. If you think that some “collapse” has occurred at the end of, say, some photon’s career within a Young’s interferometer after it has “hit the screen”, then what has happened to the angular momentum? Is it still localised? I told the forum about Mott’s 1930s quantum-theoretic analysis of the mechanism by means of which photographic emulsions work. The process operates by means of the migration of 3 silver ions along orthogonal crystal axes to meet at a vertex. This is clearly a cooperative process, not a local one! ( .. And it’s not “classical”, either!)

Or what about Rydberg states? As the ground-state electron in the 1s orbital of a hydrogen atom, say, is promoted by interacting with a photon bath of the right frequency, it can hop up to progressively more excited states without ionising the parent atom, by converging on the continuum limit. Calculation in advance using the Schrödinger equation – or even, in the case of hydrogen alone, Niels Bohr’s “Old Quantum Theory” analysis of 1913 – enables the energy gap between successive levels to be precisely derived, and so the photon bath must be continually reduced stepwise in frequency (i.e. increased in wavelength) to avoid ionisation. This is achieved experimentally by a series of discrete radio-frequency pulses as the continuum limit is closely approached. In this way it has been calculated that a hydrogen atom could in principle remain un-ionised at N = 40, but at that point it would have a diameter of around one metre. In fact, there is no reason why such an atom should ever ionise, except .. for the competing attractions of the rest of the universe!
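
A sketch of mine of the Bohr-model arithmetic behind this (the post's own figures are left as they stand): the gap between successive levels shrinks roughly as 2R/n³, which is why the pumping must shift toward ever lower frequencies near the continuum limit.

```python
# Bohr-model hydrogen: E_n = -13.6 eV / n^2, so the gap to the next
# level falls off roughly as 2 * 13.6 / n^3 eV.

RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, in eV

def level_energy(n):
    """Bohr energy of hydrogen level n, in eV."""
    return -RYDBERG_EV / n ** 2

def gap(n):
    """Energy needed to climb from level n to level n + 1, in eV."""
    return level_energy(n + 1) - level_energy(n)

print(gap(1))    # ~10.2 eV: an ultraviolet photon (Lyman-alpha)
print(gap(40))   # ~4e-4 eV: tens of thousands of times smaller
```

The first step up from the ground state needs an ultraviolet photon; by n = 40 the step is some four orders of magnitude smaller, hence the stepwise retuning described above.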

Now, clearly the allowed eigenstates of the hydrogen atom are discrete (by definition), and so for an electron occupying any orbital within any shell of symmetry lower than that of the s shells – as all the rest of them indeed are – there will be periodic localisations as the maxima are approached. This quantum-mechanically determined maximum can be chaotically perturbed by radio-pumping the electron “slightly sideband”. This is as close to the “quantum-classical borderline” as could conceivably be achieved. (Strange, therefore, that some 60 years after experimental confirmation of the Rydberg effect, hardly anyone mentions it any more, presumably because of the lingering classical prejudice about “matter”, “molecules being in 2 places at once”, etc.!) The electron’s state is being measured millions of times per second. Exactly where – throughout any part of this process – has any kind of even vaguely detectable “collapse” occurred? The electron’s “position” is, within the bounds of knowability – i.e. the Uncertainty Principle itself – “known”, which makes this “particle” have a “diameter” > 1 metre!

Come on, Andrew!

I have repeatedly given you the evidence and reasoning in support both of my “no-particle” and my “no-collapse” claims, Andrew, but you have not responded in kind. The Decoherence Interpretation allows for the localisation of some parameters, but entails the absolute delocalisation of the rest of the quantum state (which consists not only of measurable, real-number-expressible terms) – due to a spreading wave of entanglement, of stupendously rapidly increasing complexity, throughout the entire bulk of the measuring apparatus with which some photon, say, interacts. All you get is “position” – how accurately, though? – and even the flash on the screen – or the developing grain of emulsion – is in turn determined non-locally by the quantum state of the rest of the measuring instrument (whose own quantum state is incalculably complex – or rather, as Joos, Zeh and Omnès for instance have shown, could in principle be calculated only by building a second-order quantum-state measuring device whose radius would be around 10^20 times that of the observable universe).


On the quantum mechanics thread I made a challenge to which you have failed to give any adequate response.

Please see above.

If your ideas come from an authority then you should have no difficulty in responding, with references. I am being fair about this; I haven’t yet called your views nonsense because you were given the opportunity to prove me wrong. Therefore you should respond to my criticism before you repeat any more dubious ideas about so-called ‘wave collapse’. I suspect that you do not actually understand my point and this is why you are unable to reply. If you like we can discuss this by email until the issue between us has been clarified.

So your bit about Penrose is presently inappropriate. So too is your last paragraph on Conway-Kochen which fails on three counts. First you got the name wrong.


As you realise I was thinking of the Kochen-Specker Theorem (which Michael Redhead in his arrogantly Anglocentric, dismissive way would deliberately mispronounce as “Coach’n’Specker").


Ian B.
user 10895495
London, GB
Post #: 200

[ .. Continued .. ]

Secondly your comment about “the logic which one can only expect” is a joke; I suppose you thought Bell’s theorem on non-locality was pretty obvious as well! ..

It’s clearly not obvious. It is, though, clearly true once one has followed through the analysis (which is why we justly celebrate John Bell in the way that we do; even Einstein didn’t think of the Bell Inequalities!)

As for the Conway-Kochen Theorem, it comes in for some serious flak from Wiki:


“Conway and Kochen have presented a “freewill theorem” which they claim shows that “if indeed we humans have free will, then [so do] elementary particles.” In a more precise fashion, they claim it shows that for certain quantum experiments in which the experimenters can choose between several options, no deterministic or stochastic model can account for the observed outcomes without violating a condition “MIN” motivated by relativistic symmetry. We point out that for stochastic models this conclusion is not correct, while for deterministic models it is not new.

In the way the free will theorem is formulated and proved, it only concerns deterministic models, but Conway and Kochen have argued that “randomness can’t help,” meaning that stochastic models are excluded as well if we insist on the conditions SPIN, TWIN, and MIN. We point out a mistake in their argument. Namely, the theorem is of the form:

(1) deterministic model with SPIN & TWIN & MIN → contradiction, and in order to derive the further claim, which is of the form:
(2) stochastic model with SPIN & TWIN & MIN → contradiction, Conway and Kochen propose a method for converting any stochastic model into a deterministic one:

“Let the stochastic element…be a sequence of random numbers (not all of which need be used by both particles). Although these might only be generated as needed, it will plainly make no difference to let them be given in advance.” [emphasis added]

In this way, (2) would be a corollary of (1) if the conversion preserved the properties SPIN, TWIN, and MIN. However, Conway and Kochen have neglected to check whether they are preserved, and indeed, as we will show, the conversion preserves only SPIN and TWIN but not MIN. We do so by exhibiting a simple example of a stochastic model satisfying SPIN, TWIN, and MIN. As a consequence, no method of conversion of stochastic models into deterministic ones can preserve SPIN, TWIN, and MIN. More directly, our example shows that (2) is false. Contrary to the emphasised part of the above quotation, letting the randomness be given in advance makes a big difference for the purpose at hand.

The relevant details are as follows. The reasoning concerns a certain experiment in which, after a preparation procedure, two experimenters (A and B), located in space-time regions that are spacelike separated, can each choose between several options for running the experiment. .. ”


Hmmm .. and one wonders, if it is true as alleged that: “if indeed we humans have free will, then [so do] elementary particles.”

.. Surely the antecedent and consequent are the wrong way round to serve the metaphysical libertarian’s purposes?


.. And thirdly, you have got a bit mixed up between mathematical physics and ethics. This MeetUp group is for discussing cosmology, quantum mechanics and (the science of) consciousness. So let’s not try to do ethics as well, OK?

Fine, but that leaves one wondering exactly what all the (fuss/intense desire to have) “freewill” was all about in the first place. I mean, I’m not stopping you from popping off to the supermarket, at just this moment, am I? (Sorry, that was a rhetorical device!)

I go along with the Conway and Kochen definition that determinism is present if and only if the function f(I) exists. This definition is very general and powerful. For example, this definition implies that a discussion of a computer as a deterministic device is inadequate because the information set (I) has been restricted to a subset of the past light cone of the computer (by forgetting about the possible effect of cosmic rays, for example).

.. No; sorry; any cosmic ray – indeed, any causal influence whatsoever – lies by definition within the past light-cone of the computer or measuring instrument. (Otherwise, they wouldn’t have been able to interact, would they?)

and that restriction is an artificial one made by the discussant. It does not fully represent the reality. Note also that this definition does not depend on ordinary notions of predictability; I am with you on that point.
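For concreteness, the Conway–Kochen-style criterion being endorsed here can be written out (my notation, not theirs): determinism holds for an experiment E just in case there is a function f from the total information content I of the causal past of E to its outcome,

```latex
\mathrm{outcome}(E) \;=\; f\big(I\big),
\qquad
I \;=\; \text{everything contained in } J^{-}(E),
```

where $J^{-}(E)$ is the past light-cone of $E$. On this reading the cosmic-ray objection dissolves: any cosmic ray capable of affecting the computer lies, by definition, inside $J^{-}(E)$ and hence inside $I$.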

(Now all we need is Peter’s acquiescence!)

Having dealt with those points the only theme left in your note is what seems to be your attempt to cling on to some determinism. You tried to argue that cosmic rays are “about as straight-line deterministic as it gets” but how wrong can you be? The emission of a cosmic ray is a quantum-level event and the path of this ‘particle’ through space is straight with a probability of about zero and is subject to physical effects some of which may involve other quantum events along the way.

No: as in all CERN experiments, linear momentum is conserved. Think billiard balls!

So your attempt to cling to some determinism there does not work. And indeed it cannot work because of the Free Will Theorem. Instead of a human operator setting up the measurements required in that theorem and being tested for deterministic behaviour, it could be a computer or a cosmic ray detector that does the job. The same conclusion about determinism will have to apply to any and all of these. Unless you can find that function f, it must be concluded that indeterminism and uncertainty flood our universe on every scale from electrons to galaxies. And that should now be the starting point for any discussion on free will.

The function in question would be the convergence of quantum systems on the classical limit, i.e. the Correspondence Principle (established by means of Ehrenfest’s Theorem in 1927).
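For reference, the theorem invoked here, Ehrenfest's Theorem, states that quantum expectation values obey the classical equations of motion:

```latex
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle = -\,\bigl\langle \nabla V(\hat{x}) \bigr\rangle .
```

For wave packets narrow enough that $\langle \nabla V(\hat{x}) \rangle \approx \nabla V(\langle \hat{x} \rangle)$, this reduces to Newton's second law, which is the sense in which quantum dynamics converges on the classical limit.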



A former member
Post #: 134
Ah, lots of stuff chucked in there to open up multiple lines of discussion. More diversionary tactics. Just one thing missing though, namely any justification for your view on quantum mechanics; any response to my point of challenge. The suggestion has been made that at sufficiently small scales all quantum behaviour is essentially wave-like, so that the aspect called 'wave collapse' or abbreviated by Penrose to R is illusory, since everything is actually just waves. I have challenged this interpretation and so far no adequate response is forthcoming.

If I am not making my challenge clear let me say it this way. Quantum physics is in essence a mathematical theory of how things work. It has been recognised from the outset that it cannot be accurately conveyed by language, the maths is essential. The essential mathematical formalism comes in two parts: (a) quantum states which are mathematical objects with no known counterpart in the real world and (b) operators on those states which correspond to 'observations' or 'physical measurements'. The combination of quantum state and operator mappings gives rise to the eigenstates and eigenvalues that correspond to real world observations. Modern science and technology on the smallest scales depend absolutely on this basis of calculation, and on the two parts of the formalism.
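The two-part formalism just described can be made concrete in a few lines. This is a minimal sketch, tied to no particular interpretation: the Hermitian operator's eigenvalues are the possible results of the measurement, and the squared overlaps of the state with the eigenstates give the outcome probabilities.

```python
import numpy as np

# (a) A quantum state: a normalised vector in C^2.
psi = np.array([3, 4j], dtype=complex)
psi = psi / np.linalg.norm(psi)             # now [0.6, 0.8j]

# (b) An observable: a Hermitian operator (here the Pauli-Z spin operator).
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Its eigenvalues are the possible measurement results;
# its eigenvectors (columns) are the corresponding eigenstates.
outcomes, eigenstates = np.linalg.eigh(Z)   # eigh sorts ascending: [-1, +1]

# Born rule: P(outcome n) = |<e_n|psi>|^2.
probs = np.abs(eigenstates.conj().T @ psi) ** 2

print([float(v) for v in outcomes])          # [-1.0, 1.0]
print([round(float(p), 2) for p in probs])   # [0.64, 0.36]
```

Note that the 'collapse' question the thread is arguing about is precisely what this calculation is silent on: the formalism supplies the probabilities, not an account of what happens to psi when one particular outcome occurs.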

The conceptual or philosophical problem with the formalism lies in the unresolved question of what it means for the true nature of things. Different quantum interpretations attribute widely different meanings. Every interpretation has to say something about quantum states, state operators and transition probabilities. You cannot disregard part of the formalism by implying that it doesn't apply, because in terms of the maths it certainly does apply and it has to be addressed. Otherwise you are not describing quantum physics, you are talking about a make-believe version instead. So my question to you, Ian, still not answered, is how exactly does your idea connect with the formalism?

I'll get back to free will after we have this important item sorted out.

Ian B.
user 10895495
London, GB
Post #: 201


Andrew's repeated challenge:

>"So my question to you, Ian, still not answered, is how exactly does your idea connect with the formalism?"

Simple (!)

The formalism of QM nowhere and at no time -- including at any point within von Neumann's Mathematical Foundations of Quantum Mechanics (1932) -- mentions anything resembling Penrose's Process R (which if I remember rightly von Neumann labelled Process 1).

This is not exactly surprising, is it? .. Since as far as we know the process -- if it were actually to be the case! -- isn't algorithmically expressible. (Hence Penrose's great interest in the situation for theory-of-consciousness purposes whilst he was writing his series of popular books commencing in 1989, before returning his attention to twistor theory and conformally invariant treatments of spacetime-in-the-large.)

Penrose labours this property of the not-quite-theory-itself from The Emperor's New Mind onwards by asking in almost hand-wringing fashion why we go to all the often tremendous computational labour of writing down the Schrödinger equation for some system or other "only to throw it away", because the Schrödinger evolution -- although deterministic -- gives us only a probability distribution. (Ever since Max Born, that's exactly what it's supposed to do!)
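Spelled out, the split Penrose is complaining about is the standard one: the Schrödinger equation evolves the state deterministically, while the Born rule converts the evolved state into a probability distribution over measurement outcomes:

```latex
i\hbar \,\frac{\partial}{\partial t}\,\lvert \psi(t) \rangle \;=\; \hat{H}\,\lvert \psi(t) \rangle ,
\qquad
P(a_n) \;=\; \bigl\lvert \langle a_n \vert \psi(t) \rangle \bigr\rvert^{2},
```

where the $a_n$ are the eigenvalues of the measured observable and $\lvert a_n \rangle$ the corresponding eigenstates.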

"Process R" or "Process 1" or whatever you choose to call it is not part of the theory. lt is rather part of a bridging proto-theory between outselves and the statistical outcomes which strictu sensu the kosher theory itself predicts. It is part of the framework by means of which we -- as physically uninterpreted observers -- actually manage to engage with the measuring process whilst pretending that we ourselves can, somehow, stand outside its predictions.

It is, in other words, a pragmatic fudge. (IMO!)

Have a nice w/e, everyone, basking in the luxuriant glow of the aftermath of a weak-interaction quantum process whose half-life is > 10^10 years. (Just as well for us, in "both directions"!)




A former member
Post #: 135
Absolutely wrong Ian.

By all means let's ditch Penrose and go with von Neumann. An essential part of his approach to quantum theory was the formalism of operators acting on the quantum state space. You know that and you even say that von Neumann referred to process 1. Your first paragraph therefore seems contradictory. Maybe you haven't quite got the significance of the maths: the connection between the operator formalism and what has been called 'wave-collapse'. So, same question still awaiting a decent answer: how exactly does this part of the formalism appear in your idea of quantum mechanics?

Well I suppose I should know the answer by now: the operator part of the maths does not make any connection with your idea of quantum physics. Which therefore shows that your idea is not the same as that of von Neumann. Yours is a simplified model from which you infer 'simplified' conclusions.