
Re: [philosophy-185] The Chinese Room Argument

From: John G.
Sent on: Thursday, March 11, 2010, 6:45 AM

Oh no, you're not missing anything. I think we are on the same page, given what you've just said. "Representation" is just a way of talking about, or drawing our attention to, certain chains or meshes of good old physical causation. It's real, it's out there, I use the term all the time myself. I'm just saying that choosing to call something representation does not confer any magical power on it (thermostat, yes; thermometer, no). In particular, I disagree with theories that say "System X represents in some clever way (strange loops? self-references? world models? self models in world models?), therefore system X is conscious and sees the redness of red." All flavors of representationalism strike me this way. They all devolve to functionalism.

-John Gregg


http://home.comcast.net/~johnrgregg/function.htm


Ken wrote:
Hi John,

   I have been working on a reply to Kar's objection to my representation example but have so far been stuck on the minimal requirements of consciousness question (no surprise there). I am hoping that a flash of clarity will occur soon.

   However, I am a bit baffled by the objection to the reality of "representations". I do not mean anything special by that term; it is something that computer scientists, robot builders, and neuroscientists use without any qualms. Yes, I agree that it is "just another word for good old physical causation", but of a particular type. If the thermometer is in a causal loop that controls the temperature of the room (whether via a circuit controlling the AC directly, or via your visual-motor system turning on the AC indirectly), then it can be said to represent the temperature for this larger system. (I.e., yes, it suddenly "represents" when you use it for temperature control.)

   It seems to me that this definition of representation is unproblematic and objective - although it may have nothing to say about consciousness. It also seems to me that this concept can be extended to define what would constitute a self-directed representation (e.g., the position of the robot's arm, the current level of power in the robot's batteries, etc.).

   Am I missing something?

-Ken
  







From: [address removed]
To: [address removed]
Subject: Re: [philosophy-185] The Chinese Room Argument
Date: Wed, 10 Mar[masked]:40:29 -0500


Ken-

"Representation" is strictly in the eye of the beholder, if we are going to be good naturalists. "Representation" is just another word for good old physical causation, and confers no explanatory power beyond that, and your comments about the thermometer and the thermostat make this point, at least to me. Does anyone really want to say that a thermometer, interacting with its environment, lacks some special properties that a thermostat, interacting with *its* environment, possesses? We are all in a big causal mesh; we are all one system. What if I don't have a thermostat, and I just keep an eye on a thermometer, and turn on the AC if the mercury says it is above 80? Then we have a thermostat, with me providing one of the components. Does it suddenly "represent" then?

In real life, there is no reference, only causation. There are no referential loops, only causal loops. There is no teleology (pulling type causation), only efficient (pushing type) causation. Unless, that is, we want to get spooky and mysterious. Pick your poison.

-John


Ken wrote:
Hi Sean,

    I think your point about the nature of representations is an important one but I have not heard a reply so far so here is mine...

 > My direct argument of why representationalism can't work even in principle is this: for any given physical process, the question of what that process is supposed to represent is never inherent to the process, but requires the choice of the mind observing the process. Hence I believe it makes no sense to even talk about a representation without already pre-introducing the mind which is choosing that representation to focus on. In other words, in my view, representations can not generate minds because they are the product of minds.

   I disagree that there is no objective way to determine if a particular physical process is a representation. Certain physical systems exhibit goal-directed behaviors: a rock falling to the ground can be said to have the goal of reaching the lowest point, and a cat chasing a mouse can be said to have the goal of catching the mouse. Some of these physical systems have incorporated into themselves cybernetic control systems, in which a part of the external world relevant to the goal-directed behavior is represented internally on a subset of the system's physical elements. For this internal set of elements to constitute a true "representation" it must have two properties: 1.) It must faithfully track (be correlated with) the external variable. 2.) It must be used by other parts of the physical system to guide behavior toward the goal.

    I think that this definition of "representation" meets the requirements of being "inherent to the process" and not being merely a "product of the minds" of theorists. The rock does not represent anything, since there is no subset of its physical makeup that is used by other parts of its physical makeup in the pursuit of its goal. A thermometer does NOT represent, by the same reasoning - even though it faithfully tracks the external temperature, it does not use this to guide action toward a goal. A thermostat does represent, since its internal thermometer is used to control a heater in a way that moves the room temperature toward a goal setpoint.
 
    It is in this sense that physical systems can be said to truly have internal representations.
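
    To make the two-property definition concrete, here is a minimal sketch in Python (all the names - Room, Thermostat, sense, act - are my own inventions for illustration, not anyone's actual model). The internal variable `reading` 1.) faithfully tracks the external temperature and 2.) is used to drive behavior toward a goal setpoint. Drop the act() step and you are left with a bare thermometer: tracking without use, hence no representation on this definition.

# Minimal sketch of the two-property definition of "representation".
# All names here are illustrative inventions.

class Room:
    def __init__(self, temperature):
        self.temperature = temperature      # the external variable

    def drift(self):
        self.temperature += 0.5             # the room slowly warms

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.reading = None                 # the internal element

    def sense(self, room):
        self.reading = room.temperature     # property 1: faithful tracking

    def act(self, room):
        if self.reading > self.setpoint:    # property 2: guides behavior
            room.temperature -= 1.0         # run the AC toward the goal

room, stat = Room(78.0), Thermostat(setpoint=80.0)
for _ in range(10):
    room.drift()
    stat.sense(room)
    stat.act(room)
    print(round(room.temperature, 1))       # temperature regulated near 80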

    Of course, simply having any old representation is not enough to produce consciousness. Representationalists say that, in addition, these representations have to have the quality of self-reference, and specifically have to represent that there is an entity (a fictive self) undergoing certain experiences. I believe that this type of representation in a physical system can also be identified in an unbiased manner by asking 1.) whether a part of the physical system faithfully tracks the types of information that the brain is currently attending to (i.e., a neuron that fires only if the creature's eyes are fixated on a red object AND that red signal is available in the global workspace to guide behavior), and 2.) whether the creature uses this representation to guide behavior toward a goal.
  
   For example, I might lay down an episodic memory (a particular pattern of synapses in my hippocampus) representing that I saw and heard running fresh stream water on my hike today, and later in the day, when I get thirsty, decide on the basis of this memory to backtrack my steps to get water from that stream. This episodic memory then meets the requirements of 1.) being a physical representation that faithfully tracked the internal fact that sights and sounds of water were available in my brain's global workspace at a particular location earlier in the day, and 2.) being used in the pursuit of the goal of survival.

    This discussion also highlights the idea that we might do better to look to static representations (i.e., memories) as the real origin of the phenomenon of consciousness, as opposed to the instantaneous firing of particular sets of neurons. No matter what fancy properties a set of neurons has, if it does not causally affect another set of neurons down the line, it can have no impact, period. No group of neurons can be the source of an experience unless they have direct physical results in other neurons. I believe this is very similar to what Dennett was trying to get across in his Multiple Drafts model.
   
-Ken




> From: [address removed]
> To: [address removed]
> Subject: RE: [philosophy-185] The Chinese Room Argument
> Date: Fri, 5 Mar[masked]:38:39 -0500
>
> Hi Tom,
>
> Well, personally, I think I would start by vigorously disputing the first claim. I do think the evidence is extremely strong that third-person-observable behavior can be well modeled as such a system property. But I just can't see how that's ever supposed to generate consciousness, no matter how self-referential, entangled, parallel and loopy the data flow gets. From the representationalist accounts I've been exposed to (haven't read Metzinger's whole book, but have heard him and other representationalists speak) I can't see what representationalism offers that physicalism doesn't. Sure one is essentially 'mathematical' and the other 'physical', but in the context of trying to get at the first-person hard problem, it seems a distinction without a difference.
>
> My direct argument of why representationalism can't work even in principle is this: for any given physical process, the question of what that process is supposed to represent is never inherent to the process, but requires the choice of the mind observing the process. Hence I believe it makes no sense to even talk about a representation without already pre-introducing the mind which is choosing that representation to focus on. In other words, in my view, representations can not generate minds because they are the product of minds.
>
> Cheers,
> Sean
>
>
> -----Original Message-----
> From: [address removed] [mailto:[address removed]] On Behalf Of Tom Clark
> Sent: Friday, March 05,[masked]:31 PM
> To: [address removed]
> Subject: RE: [philosophy-185] The Chinese Room Argument
>
> Hi Sean,
> Ok, but then what do you do with the fact that the evidence thus far suggests consciousness is a system property associated with representational functions? Do you just ignore this evidence, or do you take it as a constraint on your theorizing about consciousness? Having read Metzinger and other representationalists, are you quite certain that representationalism offers us no explanatory resources?
> Re falsification: Were consciousness found to occur independently of the brain or other systems that instantiate representational functions, that would falsify representationalism. Thus far that hasn't happened as far as I know.
> best,
> Tom
>
> From: [address removed] [mailto:[address removed]] On Behalf Of Sean Lee
> Sent: Friday, March 05,[masked]:01 PM
> To: [address removed]
> Subject: RE: [philosophy-185] The Chinese Room Argument
>
> Hi Tom,
> I agree that if you believe, as you do, that you already have a good basic framework that puts useful constraints on the discussion, then there isn't much point in entertaining theories that fall wildly outside of that.
> However, I (and of course many others) disagree that any of the usual suspects (e.g. neural correlates, systems analysis, representationalism, physicalism, dualism) can provide that conceptual framework. In fact, the only thing I (and again, many others) know to say about the hard problem is to list the frameworks that I believe don't work. Obviously we'll argue about that more next Friday, but my only point regarding the UM is that if one believes there is currently no good framework at all, then it's better to be inclusive of even strange-seeming ideas currently lacking empirical support.
> To me the much more important criterion is not whether there is current empirical support for an idea (remember that string theory, loop quantum gravity, etc. fail that criterion too), but whether or not the idea is in some sense falsifiable through empirical means.
> Personally I suspect that any future account of the hard problem that even remotely approaches the truth will be so weird to us that it will leave us all in a perpetual state of "WHAAA...???" Again I can't help quoting Mark Twain: "No wonder fact is stranger than fiction: fiction has to make sense!"
> Cheers,
> Sean
>
> From: [address removed] [mailto:[address removed]] On Behalf Of Tom Clark
> Sent: Friday, March 05,[masked]:57 PM
> To: [address removed]
> Subject: RE: [philosophy-185] The Chinese Room Argument
>
> Sean,
> Have to disagree. There's considerable empirical evidence already available about the neural correlates of consciousness which strongly suggests it's a system property; see http://www.naturalism.org/kto.htm#Neuroscience . Unless we're constrained by evidence, we might waste time on unfounded speculation or imagine we have an explanation in hand when in fact there's no empirical support for it, only our intuitions - see "Empirical constraints on the concept of consciousness": http://sciconrev.org/2003/04/empirical-constraints-on-the-concept-of-consciousness/ . I agree we should be humble in the face of the hard problem, which as you say is pretty mind-warping. And there's nothing wrong with conjecturing till the cows come home, hoping something falls into place. But if someone posits an unexplained explainer like God or Universal Mind as accounting for consciousness, I think it's good philo-scientific practice (http://www.naturalism.org/science.htm) to demand evidence for what is after all an empirical claim.
> best,
> Tom
> http://sciconrev.org/2003/04/empirical-constraints-on-the-concept-of-consciousness/
>
> From: [address removed] [mailto:[address removed]] On Behalf Of Sean Lee
> Sent: Friday, March 05,[masked]:07 PM
> To: [address removed]
> Subject: RE: [philosophy-185] The Chinese Room Argument
>
> Regarding the Universal Mind (UM) hypothesis, I think the fact that there is yet no empirical evidence for it - in the context of the hard problem - is not a very strong criticism. In fact, given how far we're trying to reach out into the dark with our theories, demanding empirical evidence of any theory at this early stage is probably a highly unproductive constraint.
>
> Remember Democritus and the early materialists of ancient Greece: with literally no empirical evidence at all, Democritus was able to formulate a coherent account of physical reality that we nowadays accept as pretty spot-on in its basic principles. But at the time he was struggling against the prevailing 'mental animist' school of thought - mini versions of the UM. The criticism leveled at Democritus was the same: "No empirical evidence at all for such weird fantasies as atoms."
>
> I say that not as a plug for the UM, but as a plug for remaining deeply humble before easily dismissing weird approaches. This is, after all, the mother of all mind-warping problems...
>
> From: [address removed] [mailto:[address removed]] On Behalf Of Tom Clark
> Sent: Friday, March 05,[masked]:42 AM
> To: [address removed]
> Subject: RE: [philosophy-185] The Chinese Room Argument
>
> < Similarly, a sufficiently complex (in some way) system is able to "tap into" the eternal subjective awareness. Is this a valid characterization?>
>
> I didn't mean to suggest that there's something independent of individual consciousnesses that they "tap into," since I see no evidence for such a thing (as I said to Kar, there's no empirical evidence for a Universal Mind). However, from the point of view of a conscious subject there's never any experienced non-existence, so what we should anticipate at death is not nothingness, but more experience had in different subjective contexts (had by different selves). Since death is the end of me it won't be me that has more experience, so there's no personal subjective continuity, but since there's no discontinuity in experience (no nothingness), I call that generic subjective continuity.
>
> Tom
>
>
>
>
>
>
>
> From: [address removed] [mailto:[address removed]] On Behalf Of jeff
> Sent: Wednesday, March 03,[masked]:07 PM
> To: [address removed]
> Subject: Re: [philosophy-185] The Chinese Room Argument
>
> This seems to be the most reasonable conclusion, and it is terribly exciting as it indicates there is more than one meaningful "tier" of complexity within universal systems. I have heard people argue that the brain is super-Turing, i.e. computes the uncomputable, but that is both wildly implausible and unnecessary, as there can be meaningful distinctions between Turing-equivalent systems (for example deterministic and non-deterministic machines, which solve exactly the same problems at vastly different rates).
> Hopefully Ken and his colleagues will succeed in pinpointing exactly where this complexity tier lies!
>
> I just read and enjoyed your essay on death. The analogy I immediately thought of is that consciousness (in the view you present) is an eternal abstraction like numbers. Numbers always existed and always will, but they don't do anything until some intelligent system "taps into" them. Similarly, a sufficiently complex (in some way) system is able to "tap into" the eternal subjective awareness. Is this a valid characterization?
>
> It also occurred to me that generic subjectivity must transcend "levels" of reality (as in the simulation argument, or the matrix). If this is so, then in some sense our awareness itself is the "ground level" of reality even if our bodies are nested within a thousand levels of simulations.
>
> On 03/03/[masked]:37 PM, Tom Clark wrote:
> Thanks Jeff. Yes, I hope you read Metzinger, seems like you might be able to grok it and gain from it. My talk of being a "sufficiently recursively ramified representational system" is largely handwaving, but recursion might have something to do with consciousness along with other things. See section 5 of my speculative paper on consciousness (http://www.naturalism.org/appearance.htm#part5), which lists logical and adaptive characteristics of representation that might entail qualitative states bound into the unified gestalts that characterize normal experience. That very simple systems are recursive suggests that recursion alone won't do the trick, given that the evidence strongly suggests consciousness correlates with fairly complex system properties.
>
> best,
>
> Tom
> http://www.naturalism.org/appearance.htm#part5
>
>
> From: [address removed] [mailto:[address removed]] On Behalf Of jeff
> Sent: Wednesday, March 03,[masked]:10 PM
> To: [address removed]
> Subject: Re: [philosophy-185] The Chinese Room Argument
>
> Tom, I have to say your website is great.
>
> Now, there is little doubt that the brain contains a self-model and that it is intimately involved in most of what we do. But systems that contain self-models are not so unusual; for example, a programming language can be implemented in itself, containing a full representation of its own meaning.
> The self-model theory leads directly to the morally important question: are such programs conscious? To what extent is the self-model the way to identify conscious systems from the outside? (I suppose I should read the book; perhaps it answers that question.)
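>
> To make the self-implementation point concrete, here is a minimal sketch in Python (the function name evaluate and the toy syntax are my own inventions). It is the core of a tiny Lisp-style interpreter; a genuinely meta-circular version would write this same evaluate in the language it interprets, so that the language carries a full representation of its own meaning:
>
> # Toy interpreter core: a program containing a model of a language's meaning.
> def evaluate(expr, env):
>     if isinstance(expr, str):                 # variable reference
>         return env[expr]
>     if not isinstance(expr, list):            # literal (e.g., a number)
>         return expr
>     op, *args = expr
>     if op == 'if':                            # (if test then else)
>         test, then, alt = args
>         return evaluate(then if evaluate(test, env) else alt, env)
>     if op == 'lambda':                        # (lambda (params) body)
>         params, body = args
>         return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
>     fn = evaluate(op, env)                    # function application
>     return fn(*[evaluate(a, env) for a in args])
>
> env = {'+': lambda a, b: a + b}
> # ((lambda (x) (+ x 1)) 41)  =>  42
> print(evaluate([['lambda', ['x'], ['+', 'x', 1]], 41], env))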
>
> I'm interested in the notion "sufficiently recursively ramified" from a complexity-theory standpoint. The trouble is that the complexity threshold for computational universality is surprisingly low, and it is reached by some very simple systems. Once you are recursively ramified, how do you get even more recursively ramified?
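>
> The low-threshold claim can be made concrete with Rule 110, an elementary cellular automaton whose entire update rule fits in eight table entries, yet which is known to be Turing-complete (Cook's 2004 proof). A minimal sketch in Python (the width, step count, and starting configuration are arbitrary choices of mine):
>
> # Rule 110: eight-entry update rule, yet Turing-complete.
> RULE, WIDTH, STEPS = 110, 64, 32
> cells = [0] * WIDTH
> cells[-1] = 1                                  # one live cell on the right
> for _ in range(STEPS):
>     print(''.join('#' if c else '.' for c in cells))
>     # each cell's next state is the RULE bit indexed by its 3-cell neighborhood
>     cells = [(RULE >> (cells[(i - 1) % WIDTH] * 4
>                        + cells[i] * 2
>                        + cells[(i + 1) % WIDTH])) & 1
>              for i in range(WIDTH)]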
>
> P.S. I fully agree with your response to my post about duplicating the self; I was hoping Kar could elaborate on why the brain-replacement might *not* work.
>

