
What IS "consciousness"?

A former member
Post #: 5
But of course a "tape reading Turing machine" could, in principle, develop the conscious experience of seeing red.

We could not compare its experience with ours, or establish that its experience was similar to ours, as Camilla and Ian have pointed out. But that does not prevent the Turing machine from having such an experience, or talking about it in ways that establish, quite clearly, that it is an experience equivalent to ours.

We can't compare human experiences in this way either - the Turing machine is no different from you and me in that regard.

Of course nobody would use a Turing machine to build an AI - hopelessly inefficient - nor is there any obvious reason why we would endow an AI with the hardware to become that interested in colour, but these observations are irrelevant.

The underlying truth is that we are just machines anyway, and a conscious Turing machine that experiences "red" can be built.

It would of course be a chaotic Turing machine, by which we mean that the thoroughly non-chaotic Turing machine would be programmed to emulate a chaotic brain. No problem there - computers have been doing that for years.



So... remind me please... what, exactly is the question?



Peter

Ian B.
user 10895495
London, GB
Post #: 73

But of course a "tape reading Turing machine" could, in principle, develop the conscious experience of seeing red.

Zeus! Just about to plough into the paper that Andrew has unearthed for us, but have to address these points urgently, pronto!

Look, we don't have anything resembling a scientific consensus on the kind of connective neuronal architecture that could conceivably generate (the secondary quality component of) experience. Occasionally, putative mechanisms have been proposed -- Nicholas Humphrey, myself even -- but they are few and far between. Worse, we don't have anything near a consensus even on what would be required for any such machine to be in a position to generate experience within itself. This is because even the professional philosophers of mind are locked in fierce disagreement as to what the precise target of investigative enquiry should be. (Hence, with some justification, one might think, your final sentence below, Peter, quoted in white. What, indeed, was the question? Apparently it's very slippery in the eyes of many!)


We could not compare its experience with ours, or establish that its experience was similar to ours, as Camilla and Ian have pointed out. But that does not prevent the Turing machine from having such an experience, or talking about it in ways that establish, quite clearly, that it is an experience equivalent to ours.

.. But from the mere fact that the machine is talking (or printing, or whatever) convincingly -- in the sense that it fulfils the terms of Turing's challenge -- it does not follow that the machine is conscious. It could be simply following a highly "intelligent", adaptive script. Cognitive scientist Ruth Kempson was one of the pioneers of LDI -- labelled deductive logic -- schemata during the 1990s, and had teams of PhDs beetling around inputting "common sense statements about the world" whenever one occurred to them, or whenever some news report or other scenario prompted them to formulate the core prior generalities of everyday reasoning. (She demonstrated an example of an early-model Pentium Processor-driven computer being "typed at" with the statement: "The motorcycle entered the shop and bought a newspaper", whereupon the program immediately responded in perfectly acceptable English: "I do not believe you".) Roger Penrose in The Emperor's New Mind suggested that with sufficiently persistent questioning, or by loading one's own narrative to the machine with arbitrarily extended and ridiculously improbable real-world scenarios, one could "catch it out": one would receive back from it sentences of perfectly correct grammar whose content, had it instead been uttered by a human being, would long ago have got him certified!

(Even so, lunatics are conscious, surely? They can sense, can they not? "The question is not, can they reason? Nor, can they talk? But, can they suffer?" .. as 19th century radical empiricist philosopher Jeremy Bentham -- founder of University College, the first of the independent London universities -- said, in combatting the usual type of argument used against those who advocate animal rights.)
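To make the "adaptive script" point vivid, here is a throwaway sketch of my own -- nothing whatever to do with Kempson's actual system, whose internals I have never seen; the word lists and checks are invented purely for illustration -- showing how a handful of canned "common sense" rules can yield a perfectly plausible reply with no understanding anywhere behind it:

```python
# A deliberately crude, hypothetical "script" -- invented solely to illustrate
# that canned common-sense checks can produce plausible replies without any
# understanding behind them. Not a model of any real system.

INANIMATE_THINGS = {"motorcycle", "table", "rock", "newspaper"}
AGENT_ONLY_VERBS = {"bought", "asked", "promised"}

def respond(statement: str) -> str:
    words = statement.lower().rstrip(".").split()
    if not words:
        return "Say something!"
    # Treat the word after a leading "the" as the grammatical subject.
    subject = words[1] if words[0] == "the" and len(words) > 1 else words[0]
    # Reject sentences whose subject is inanimate but whose verb demands an agent.
    if subject in INANIMATE_THINGS and any(v in words for v in AGENT_ONLY_VERBS):
        return "I do not believe you."
    return "I see. Tell me more."

print(respond("The motorcycle entered the shop and bought a newspaper"))
# -> "I do not believe you."
```

The reply looks like judgement; it is in fact a two-line lookup.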

I'm glad that you've sent this posting, Peter, because it gives me the opportunity to state the case in a fairly general, umbrella manner against the expectations which most of the time are held and expressed by the overwhelming majority of AI professionals. They espouse what in Phil of Mind circles goes under the acronym GOFAI -- which for those new to the debate stands for Good Old-Fashioned Artificial Intelligence. Very sixties! (Don't mistake that for mockery. I've been there!) GOFAI aficionados believe not only that some Turing Machine with sufficient memory can represent any state of affairs, but that it can, furthermore, actually become that state of affairs.

.. And this is because they unquestioningly believe that consciousness is a genus of information, and although they'll grant you that a simulation of a hurricane won't blow the cars along the street outside the window of the Meteorological Office, a simulation of information is, simply, more information (or information isomorphic, in some specified sense, to the original). If, therefore, so the reasoning proceeds, consciousness simply is information, then any simulation of it must also be conscious!

Camilla ably put her finger on the crucial distinction which must be drawn here in order to avoid making this mistake. It is the distinction between what philosopher Ned Block has called phenomenal consciousness (p-consciousness) on the one hand and access consciousness (a-consciousness) on the other, and before anyone accuses either of us of deluging the annals of this Meetup group with a fast-multiplying plethora of unnecessarily verbose terminology, don't! Anything that we can say about anything whatsoever is a-conscious. That is, it's reportable! For instance, the physical, primary quality of distance can be determined either visually or proprioceptively. A blind surveyor can determine the distance between 2 points and the angle between 2 lines of sight as well as can any sighted person -- well, not quite as well, but I'm just constructing a helpful aide-memoire here; only the essentials matter! This is because the primary quality, distance, is out there! It's not subjective. It's reportable! Similarly, any thoughts which you are having can be disclosed (even if only under torture or the use of technology already alluded to). Why? Because they're a-conscious. They're reportable!

I've now said on this Board several times without objection from anyone -- so I take it that there's general agreement -- that we cannot in principle report any of our sensations as such: they are ineffable in precisely the sense which makes the sensory aspect of the mind-body problem (or the Hard Problem, as David Chalmers has infamously labelled it) so intractable. It's easy for someone who remains unconvinced to lose patience at this point, but please hang on. I can of course truthfully report: "I am now having a visual experience of bright crimson", but I cannot in principle describe such a state of affairs to any congenitally blind interlocutor. She'd simply have to take it on trust!

[ .. Continued .. Not too much further to go now, trust me! .. ]



Ian B.
user 10895495
London, GB
Post #: 74

.. Whereas I can describe as fully and adequately as is necessary -- even to a blind person -- the distance of the summit of Mount Kilimanjaro from where I am currently standing, or describe Wiles' derivation of the final proof of Fermat's Last Theorem. (Well, not too certain about that one.) Try to describe my itch, though, well, if you just ain't wired up right, you'll never know!

We can't compare human experiences in this way either - the Turing machine is no different from you and me in that regard.

Unfortunately, though, I can agree that the statement is true only in that regard. It's good to try positivist lines of attack -- I'm a convinced positivist myself -- but where such lines turn out to be defeasible, then one should consider abandoning them.

Of course nobody would use a Turing machine to build an AI - hopelessly inefficient - nor is there any obvious reason why we would endow an AI with the hardware to become that interested in colour,

.. the truly interesting question -- that is, the scientifically interesting question -- is: "Yes, and exactly how would you go about doing that?"

As far as I'm concerned, that's always the goal of discussing the nature of consciousness. Remember, despite all that I've said, I'm an optimist. I believe that conscious machines are in principle buildable, as does Peter. (After all, we're walking about, aren't we?)

But we're not -- and they won't be -- Turing machines!


but these observations are irrelevant.

Au contraire. They stimulate further debate!

The underlying truth is that we are just machines anyway, and a conscious Turing machine that experiences "red" can be built.

(Delete the word "Turing" and you have my absolute agreement!)

It would of course be a chaotic Turing machine, by which we mean that the thoroughly non-chaotic Turing machine would be programmed to emulate a chaotic brain. No problem there - computers have been doing that for years.

Ah I see! (This is crucial.) No they haven't. They've instead been simulating "minds". They were never "electronic brains". They've always been "electronic minds".

They don't remotely resemble any real, biological brains. Architecturally, I mean. Surely, this observation is salient? (Golly gosh. The sheer contempt in which mathematicians can sometimes hold the rest of us! wink)

Whenever software is written, we export into coded format expertise at specific types of problem-solving. We write down what we would (optimally) do, given the nature of the problem, our goals and the intellectual experience available to us at the period of writing it.


So... remind me please... what, exactly is the question?

Not such as to generate "42".

Peter



Ian B.
user 10895495
London, GB
Post #: 75


Before I sign off -- and finally get round to reading Andrew's referenced paper -- I should try to dispel any impending charges of being "hoist by my own petard" in the sense that I might be perceived as "sailing between Scylla and Charybdis". That is, I don't espouse a soully occultism/Cartesian dualism, I believe that the ultimate explanation is physical but not as nomically "deep" and over-general in scope as quantum mechanics, and I believe us to be machines but not Turing Machines! Have I boxed myself into such a tight corner that I've finished up nowhere at all?

Although I don't believe that QM holds the truth about consciousness, the situation certainly feels like the situation in which one seeks to interpret QM, in the sense that whichever interpretation you adopt, the requirement to maintain internal consistency -- just as in any other area of scientific interest -- cannot fail to generate corollaries which one finds to be personally "unpleasant". In the case of GOFAI proponents, the unpleasant associations almost certainly form when someone like me insists on banging on about "reportability", and even denies that "consciousness is information" (in some, unspecified sense).

Why might some commentator wish to regard consciousness as being not concerned with information? It certainly "looks as though" it is, if only in the sense that the differently shaped and coloured polygons within my visual field at any one time map isomorphically onto the geometry of their real, external-world counterparts, and so by analogy when considering the other sensory modalities. So, OK, this is an isomorphism to "information", but does that mean that it's identical?

As Camilla made clear, the thesis of epiphenomenalism denies such a claim. It claims that the subjective/phenomenological realm "sits on top" of the physical, causally connected neuronal firing sequences of the relevant parts of the brain, but -- again as Camilla pointed out -- it ostensibly "doesn't do anything". IMV the position is implicitly contradictory, because in order for us even to be able to refer to "the subjective character of sensations" they surely need to be able to cause us to speak or write about them, which makes them unequivocally physical! On the other hand, one could align oneself with Andrew's position and "save" the otherwise doomed-seeming account furnished by epiphenomenalism by proposing that every time, say, I behold a bright crimson patch, then it is somehow non-locally associated -- i.e. outside the causal chain of reference -- with structurally, temporally, positionally isomorphic sequences of neuronal firing, and thus that no matter how conscientiously analytical and precise we think we're being in singling out any references to "secondary qualities" as such, rather than to the exactly point-to-point mappable underlying causal physical network of discharging neurons, we can, inevitably, only succeed in referring to the latter. (Whilst all the time mistakenly believing that instead we're referring to the former!) .. And the reason would be the exact isomorphism in every relevant particular which I have just described.

It strikes me as incoherent that we can always, in this situation, nevertheless qualify what we're saying. It's not clear to me that one's internal dialogue with oneself about secondary qualities -- regarded as clearly unanalysable in the dimensional language of physics or any of the other proper sciences, and not as mere patterns of sense-organ stimulation induced by the appropriate stimuli -- can be satisfied by some causally equivalent (up to isomorphism) physical feature alone, if one takes the only ultimate basis of the practice of referring (to anything) to be causal. This model is of course customarily referred to as the causal theory of reference, and I doubt very much whether any scientifically inclined reader of this Message Board would be inclined to disagree!

A former member
Post #: 6
Hi Ian,

I'm happy to delete "Turing", but I'd like to explain why the word Turing (put there because of Andrew's original statement) does not IMO cause a problem.

A Turing machine (or any conventional computer) can be programmed to simulate the sort of architectural structure that does resemble real biological brains. Hopelessly slow and inefficient, of course - but in principle... There were certainly programs that did that on a very small scale, and reproduced biological/animal behavior, when I was at Cambridge 40+ years ago.

The whole point, even then, was that those programs were simply learning systems - programs with some in-built desires/motivation and an ability for self-modification. I don't think you can fairly characterize such programs by the phrase "Whenever software is written, we export into coded format expertise at specific types of problem-solving", even if some expertise in the nature of the learning process may be coded.

To the extent that a brain is (in its simplest form) just a network with desires and the ability to modify itself, then those programs do simulate brains as well as minds.
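Just to be concrete about what I mean, here is the sort of toy I have in mind - entirely made up, and far cruder than even those forty-year-old Cambridge programs - a tiny network with one in-built "desire" and the ability to modify its own weights in the light of experience:

```python
import random

# A toy "learning system": a tiny network with one in-built desire (keep its
# output near a target) and the ability to modify its own weights from
# experience. Purely illustrative -- not a model of any historical program.

weights = [random.uniform(-1.0, 1.0) for _ in range(3)]
TARGET = 1.0            # the in-built desire/motivation
LEARNING_RATE = 0.05

def act(stimulus):
    """The network's behaviour: a weighted sum of the incoming stimulus."""
    return sum(w * s for w, s in zip(weights, stimulus))

def learn(stimulus):
    """Self-modification: nudge each weight so as to reduce the 'dissatisfaction'."""
    error = act(stimulus) - TARGET
    for i, s in enumerate(stimulus):
        weights[i] -= LEARNING_RATE * error * s

for _ in range(500):
    learn([random.uniform(0.0, 1.0) for _ in range(3)])

print("response after learning:", act([0.5, 0.5, 0.5]))
# Typically much closer to TARGET than before any learning took place.
```

Nothing here was "exported expertise": the behaviour the thing ends up with was never written down by anybody.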


However, the big difference between us seems to be:

"... But from the mere fact that the machine is talking (or printing, or whatever) and convincingly, in the sense that it fulfils the terms of Turing's challenge -- it does not follow that the machine is conscious. It could be simply following a highly "intelligent", adaptive script."

As far as I'm concerned, that's what, that's all, consciousness IS. If the script is good enough, the script is conscious.

I'm guessing you don't agree; and I fear that at this point the discussion degenerates into pure semantics, in which the distinction between what the word "consciousness" means and what we mean by the word "consciousness" becomes the principal issue.

All of this gives me the feeling I'm missing something. If so, I'm sure you'll enlighten me.



Peter

A former member
Post #: 7
Hi Ian,

I'm sorry.. but I genuinely do not understand what this sentence means:

"Look, we don't have anything resembling a scientific consensus on the kind of connective neuronal architecture that could conceivably generate (the secondary quality component of) experience."

I have three specific problems:
1. "(the secondary quality component of)" please explain
2. Surely any learning machine architecture can achieve that, and the problem is purely technical - to actually build one with enough speed and resources, and sufficiently good visual/colour perception plus desires/motivations that make colour relevant.
3. Experience is an emergent phenomenon, arising from the fact that a brain/mind of sufficient capability is making observations and constructing a world view.
Camilla M.
user 7151822
London, GB
Post #: 4
Emergent phenomenon: There's a scientific debate as to whether strong emergence exists. In its weak expression, the phenomenon is merely a new description for entirely predictable properties of simpler constituents. Strong emergence requires that in-principle-unpredictable states turn up; this is what science would want to refute, for it is in contravention of the reductionist paradigm.
Now you're probably going to lay chaotic systems at our door - but these produce analysable, i.e. reverse-engineerable from the top down, properties, even though they are not precisely transcribed. Think iterative processes, strange attractors, etc.
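A standard textbook instance - my own illustration, nothing specific to this thread - is the logistic map: the generating rule is one line, fully deterministic and trivially recoverable from the top down, yet nearby trajectories diverge until long-run prediction is hopeless:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): deterministic and trivially
# "reverse engineerable" from its one-line rule, yet chaotic for r near 4.

def logistic_orbit(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)        # perturbed in the sixth decimal place
print(abs(a[-1] - b[-1]))           # after 50 steps the orbits bear no resemblance
```

Unpredictable in practice, then, but nothing strongly emergent about it.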

1. To answer your first specific problem above:
"1. "(the secondary quality component of)" please explain."
Subjective experience, comprising sensory secondary qualities such as 'raw feels' (what an experience is like, colour, etc.), is neither predictable nor design-reproducible, currently. We have no artificial instance of these: Ned Block's 'phenomenal consciousness'.
Compare to Strong AI: behavioural dispositions, performances, computations, reports, indexical & reflexive reports, thinking - in short, Ned Block's 'access consciousness' - are all entirely predictable, reproducible, objectively accessible operations. It is strange to claim that any object performing an action is the same as its experiencing it, but this is Strong AI's claim. This is what you, Peter, are calling consciousness per se: simply mechanistic intelligent learning, with dispositions as revealed by external behaviour treated as synonymous with internal experience. That is the part of the claim you previously denied - about external verification of knowledge of an experience such as the colour red - and the denial can easily be upheld by the example of a computer simulation of an actor not actually being any person: it may exhibit their causally interactive plot participation through time closely enough, but it has no intrinsic experience. I deny that the Strong AI definition gives the complete description of human consciousness.


2. Surely any learning machine architecture can achieve that, and the problem is purely technical - to actually build one with enough speed and resources, and sufficiently good visual/colour perception plus desires/motivations that make colour relevant.
Here you have not distinguished even between Weak AI and its Strong cousin.
The reason for the failure to develop anything new from the former is that Weak AI, including the universal Turing machine, allows for simulacra which may exhibit intelligence via formal symbol copies of performance - in principle anything can be simulated by a computer, so any process studied at all could be considered "computation", if you're willing to stretch the definition to breaking point. That is at once too broad to be useful; and, besides, present symbolic accounts do not entail experience, since recursive information processing does not metamorphose into our other kind of rich, unitary presentation, and we now have no behaviour, as in Strong AI, from which to attempt to infer inner states. As Searle wrote: "What we wanted to know is what distinguishes the mind from thermostats and livers." And we could add: from any sort of computational ikon.

Looking at the reductio ad absurdum for Searle's anti Strong AI argument, from The Chinese Room:
Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L.
A computing system is any system, human or otherwise, that can run a program.

(1) If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
(2) I could run a program for Chinese without thereby coming to understand Chinese.
(3) Therefore Strong AI is false.
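(The schematic argument is just a quantified modus tollens, and can even be machine-checked. The following is my own rendering in the Lean proof assistant; the predicate names are invented purely for illustration:)

```lean
-- Schematic form of the argument above (my own formalization):
-- (1) Strong AI: every system running the Chinese program understands Chinese.
-- (2) Some system (the Room's operator) runs the program without understanding.
-- (3) Hence the Strong AI thesis, so stated, is false.
theorem chinese_room_schema
    (System : Type)
    (runs_program understands : System → Prop)
    (operator : System)
    (h_runs : runs_program operator)
    (h_no_understanding : ¬ understands operator) :
    ¬ ∀ s : System, runs_program s → understands s := by
  intro strong_ai
  exact h_no_understanding (strong_ai operator h_runs)
```

All the philosophical weight, of course, rests on premise (2), not on the logic.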

It is not logically the case that phenomenal higher properties necessarily fall out of computation. If not necessarily, what might be some sufficient, precise architectural conditions or performance procedures for, say, neural nets to produce some kind of experience? How we would check the veracity of such a self-generated report is another matter, but this has no bearing on its possibility.
The Chinese Room may, for Searle, have ostensibly been championing intentionality (aboutness), challenged by CRTT models as internal conceptual scripts or as externally executed via structural conceptual-role models; but his definition of understanding, as lying in the difference between complex original and simple derivative language or syntax, may have shifted the AI debate onto at least examining the sufficiency of detailed human subroutines.

Hence your question 2) does not go through in the manner of Strong AI: an intelligent learning machine alone is insufficient, whether through sheer computational complexity or in virtue of being a chaotic system, since these are merely types of more complicated simulacra.
And question
3. Experience is an emergent phenomenon, arising from the fact that a brain/mind of sufficient capability is making observations and constructing a world view.
similarly conflates Strong AI's intelligent simulacra with the original richness of our subjective world-view experience, for which they are insufficient.
The extra premise required is something like Chalmers's causal-organizational point, which I shall come to after the following.

I'd like to introduce Copeland's analysis of Searle's inferences, and then move on to Chalmers's proposition, leading back to empiricism.
"Searle correctly notes that one cannot infer from X simulates Y, and Y has property P, to the conclusion that therefore X has Y's property P for arbitrary P.
But Copeland claims that Searle himself commits the simulation fallacy in extending the CR argument from traditional AI to apply against computationalism. The contrapositive of the inference is logically equivalent -- X simulates Y, X does not have P, therefore Y does not -- where P is "understands Chinese".
The faulty step is: the CR operator S simulates a neural net N, it is not the case that S understands Chinese, therefore it is not the case that N understands Chinese.
Copeland also notes results by Siegelmann and Sontag (1994) showing that some connectionist networks cannot be simulated by a universal Turing Machine (in particular, where connection weights are real numbers).

Chalmers (1996) offers a principle governing when simulation is replication. Chalmers suggests that, contra Searle and Harnad (1989), a simulation of X can be an X, namely when the property of being an X is an organizational invariant, a property that depends only on the functional organization of the underlying system, and not on any other details." This grounds us in the causal explanations for meaning, leaving intentionality, the mark of the psychological, aside.



Camilla M.
user 7151822
London, GB
Post #: 5
So Chalmers's organizational-invariant principle for true cloning of logically predictable properties - not merely Turing-type computational complexity of the Strong AI stripe, which could churn out derivative script simulacra - offers us, in the very broadest outline, the necessary functional-architectural step towards the empirical part of a solution: a solution to the subjective-only, non-objectively-capturable appearance of our sensory qualities, comprising the experience which is the difference that makes us conscious, contra current machines.

Now comes the difficult concurrence, wherein sensory qualities have to be analysed somehow, in so far as they can be: some further explanation furnished as to how they are 'produced', and as to their having some formal role for consciousness, since they can literally be referred to. Note that not all these further particulars will necessarily have to be causally empirical; we may enter the logical realm whilst avoiding psychological particulars such as meaning, understanding or intentionality.

Perhaps Ian can set out his version of a solution.

Ian B.
user 10895495
London, GB
Post #: 77

"Perhaps Ian can set out his version of a solution."

Hmmm .. Opportunity knocks, as a once-popular catchphrase had it. (I never watched it, mind.)

Yes, thanks once again Camilla. Very comprehensive, and some ve-e-ery apposite quotational touches. I particularly enjoyed:


"As Searle wrote: "What we wanted to know is what distinguishes the mind from thermostats and livers." "

Once again the first 2 paras of the mail just above are spot-on concise, yet adequate (although probably hard to digest at face value for those who have never before considered "consciousness" in the sense that Camilla and I are advocating). Yes, Camilla has forcefully fleshed out the remit of the project (as I see it, admittedly; Camilla is of course further developing her own views in response to furious and sustained immersion in the technical philosophical literature.)

This also captures it in a nutshell:


"Looking at the reductio ad absurdum for Searle's anti Strong AI argument, from The Chinese Room:
Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L.
A computing system is any system, human or otherwise, that can run a program.

(1) If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
(2) I could run a program for Chinese without thereby coming to understand Chinese.
(3) Therefore Strong AI is false."

Camilla, re your kind -- and not mutually conferred! -- invitation for me to set up my stall and look potentially ridiculous, many thanks. It might be better for interested parties to ask for my paper if sufficiently fascinated, but in any case I won't be able to do it "adequate justice" today because the ideas are novel, bizarre, counter-intuitive and therefore not intuitively obvious. (Like all good would-be scientific theories!) So they need formulating in at least as precise a way of indicating the solution as Camilla's own beautifully terse summary of the problem, as expressed in her 2 paras in yellow just above.

OK, if I get a single "yes" vote other than from Camilla alone, then I will. Maybe I should create a website for the paper and any interested commentary on it. Thus far the only "outsider" to have surveyed it is author Ray Tallis.

Ian B.
user 10895495
London, GB
Post #: 78

So, for Peter:

"However, the big difference between us seem to be:

"... But from the mere fact that the machine is talking (or printing, or whatever) and convincingly, in the sense that it fulfils the terms of Turing's challenge -- it does not follow that the machine is conscious. It could be simply following a highly "intelligent", adaptive script."

As far as I'm concerned. that's what, that's all consciousness IS. If the script is good enough, the script is conscious.

I'm guessing you don't agree: and I fear that the at this point the discussion degenerates into pure semantics in which the distinction between what the word "consciousness" means and what we mean by the word "consciousness" becomes the principal issue.

All of this gives me the feeling I'm missing something. If so, I'm sure you'll enlighten me. Peter"

I'm hoping that subsequent embellishments from both Camilla and myself might help to relieve any persisting confusion, because I appreciate of course that the approach is very novel to perhaps most people, even most people who are interested in the nature of consciousness.

Don't forget, secondary qualities are as defined by Galileo and Locke -- and, in principle, as we have earlier seen, by Democritus. They are the "inexpressible/unreportable component" of experience. The primary quality component can, logically, in terms of the nature of literal optical imagery, only be abstracted after the fact from some distribution of light of various colours and intensities over the surface of the retina. These in general diffusely-bounded visual patterns are, as earlier mentioned, polygons, and therefore it is only at the contrast boundary between immediately neighbouring blocks of different colour shading that geometrical features such as orientation and topological features such as connectedness could even in principle enter into any efforts at subjective image decomposition and comprehension. Again, a simple thought experiment aids the intuition:

Imagine you're staying in a top-floor suite in some hotel on the outskirts of Paris, facing the direction in which lies the Eiffel Tower. It's night-time, and the glow of distant street-lamps silhouettes the true Tower against the horizon. Now you switch off the lights of your room, become dark-adapted, and then look in the direction of the Tower. Unknown to you, some fiend has sneaked up and positioned a perfect miniature replica of the Tower on the parapet just outside the window, such that from the position your eyes occupy as you turn to look, the 2 "Towers" side by side appear identical! Which is which? You will only be able to tell by performing an experiment, i.e. by moving in order to alter your visual perspective.

So one can now easily see the point that any primary, external, real-world quality such as distance or angle -- indeed any of the dimensional variables of physics -- is obliged to be estimated via the agency of contextual clues, whereas concerning secondary qualities (in the visual example; mutatis mutandis for the other sensory modalities) it is impossible to be mistaken about "what it is like for you", since there is no objective standard of comparison against which to measure up!
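(For the numerically inclined: a back-of-envelope check -- every figure here invented purely for illustration -- of just how decisively that little experiment settles the matter. Stepping half a metre sideways swings the bearing of the nearby replica through many degrees, while the distant real Tower barely moves at all:)

```python
import math

# Back-of-envelope parallax for the two "Towers" (all distances invented
# purely for illustration). Moving sideways shifts the apparent direction of
# the nearby replica enormously, and that of the distant Tower hardly at all.

def bearing_shift_deg(distance_m, sideways_step_m=0.5):
    """Change in apparent direction, in degrees, after stepping sideways."""
    return math.degrees(math.atan2(sideways_step_m, distance_m))

print(f"replica at 2 m:       {bearing_shift_deg(2):7.3f} degrees")
print(f"real Tower at 5 km:   {bearing_shift_deg(5000):7.3f} degrees")
# ~14 degrees versus ~0.006 degrees -- which is why moving settles the question.
```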

Th-tha-th-that's all folks! Love it or hate it! (It's at moments like these that the dice fall to decide who might be attracted to my paper, and who ferociously repelled. Thanks once again to Camilla for her untiring pedagogical initiatives!)
