Nathaniel
user 10963465
Group Organizer
Mesquite, TX
Post #: 156
Those are very good points. I remember having this discussion before. I think the example that was brought up was pain. Say, for instance, that I was an amputee and I was experiencing phantom limb pain. I have no right leg, but my right leg hurts! The pain is an illusion, but the experience of pain is real, and it must be real. I could also consume some hallucinogens and have a practically religious experience. I am experiencing an illusion, but the experience itself is not an illusion. I can understand that.

But that's not what I'm really talking about. I can be aware of internal thoughts for instance. This awareness might actually be illusory, in that it might not be informed by any real data, but the experience of that awareness would be real.

The only way science would be able to say much about consciousness would be if they could simulate a mind perfectly in real time to the point of being able to "see" the consciousness of that mind. This is far beyond us right now, and some say it always will be. Personally, I don't see why it would be impossible. If we can think with our meat brains, I don't see why non-meat couldn't think or simulate thought and consciousness.

But assuming that there is a real world out there of some sort that corresponds more or less to our experience, then consciousness must serve some purpose. It must do some real work, in terms of decision and action. It just doesn't make any sense to me that it is only there to 'fool' us. Everything we know about natural selection points away from this conclusion. If strict necessitarian determinism were true, why would there be a need for anything other than neural events, which are all that's needed for discharging necessitation? And if consciousness as consciousness makes decision and action possible, or even contributes to it, then the causal nature of the world must be different from the 18th century mechanistic model.

I agree with everything you said right up until the last bit. That is, the suggestion that we can necessarily be reduced to machines in the same sense as they would have meant in the 18th century. But I don't think we're anything special. I don't think any mystical junk holds us together or sits behind the wheel of our meat machine bodies. I think we're essentially very complex meat robots/computers. Heck, we don't even work quite like current-day computers; we've got a lot more parallel processing going on, and the calculations are a lot noisier.

So, let's see what we agree on:
1) Experience is real even if what we are experiencing is an illusion.
2) Consciousness is tied to experience (perhaps even being the thing that does the "experiencing")
3) Consciousness must do something, otherwise it wouldn't be present. At the very least it must provide an adaptive advantage.

So where do we go from there?
Jim B.
user 4260314
Arlington, TX
Post #: 323
You make some good points. I think we basically agree except in those bits where we don't : )

The only way science would be able to say much about consciousness would be if they could simulate a mind perfectly in real time to the point of being able to "see" the consciousness of that mind. This is far beyond us right now, and some say it always will be. Personally, I don't see why it would be impossible. If we can think with our meat brains, I don't see why non-meat couldn't think or simulate thought and consciousness.

I think it's possible that science could one day simulate a mind that is conscious. There may not be anything that special about meat. On the other hand, it's also possible that there is something special about a certain kind of meat that is the product of millions of years of evolution. It may be that all of the deeply embedded and highly complex contingencies of such a process resulting in this particular slab of meat cannot be simulated. The actual process itself may be an irreplaceable aspect of what causes consciousness. It may be that only some of the slab's causal relations can be simulated, such as certain kinds of discrimination and behavior.

But assuming that science can create a conscious entity someday, and also assuming that scientists could somehow know that that thing is conscious, it doesn't seem that scientists would be able to "see" the consciousness in the way they can "see" digestion, cell division or even subatomic events. There's still the ontological 'split' there between first and third person perspectives. If a robot or computer or whatever were to become conscious, it would have a point of view. There would be a subjective 'feel' to be that thing, and as we've agreed, experiences, whether caused by meat, silicon, or whatever, have their own validity not reducible to their causal relations.

Scientists may be able to 'see' the processes that cause the consciousness, but that's causal, not ontological reduction. To understand x, you have to know more than just what causes x. You have to know what x does and also what x is. From what you've said, I get the sense that you think that knowing the causes of x is all that's needed to understand what x does and is. And there we have the nub of our disagreement about determinism which I keep veering back to because I can't help myself. Oh, the irony!

I don't think we're anything special. I don't think any mystical junk holds us together or sits behind the wheel of our meat machine bodies. I think we're essentially very complex meat robots/computers.

I don't see why we have to be restricted to only those two alternatives, that we are either mystical junk or complex meat machines. That sounds like Descartes. There's emergentism, there's systems holism. There's the understanding of complex systems, which is still in its infancy. There's complex self-organization, which is an intrinsic feature of everything from subatomic particles to atoms, molecules, cells, etc. There's the specification of complex systems, also known as information, which also seems to 'cause' things, but not through energy transfer. There's consciousness which also seems to be irreducible.

So, let's see what we agree on:
1) Experience is real even if what we are experiencing is an illusion.
2) Consciousness is tied to experience (perhaps even being the thing that does the "experiencing")
3) Consciousness must do something, otherwise it wouldn't be present. At the very least it must provide an adaptive advantage.

So where do we go from there?

Well put. We agree on all three points. My only point in relation to 3) is that if the nature and activity of a thing is totally reducible to its causes, then why aren't all living things just complex automata? Why the need for consciousness?
Nathaniel
user 10963465
Group Organizer
Mesquite, TX
Post #: 157
As for the idea that evolution was necessary to produce consciousness: are you aware of genetic programming? There are programs out there that simulate evolution. The programs breed, cross over, replicate, mutate and reproduce based on fitness (how well they perform a specific task). It's not completely unreasonable to assume that technology can be engineered to evolve in much the same way biology does. Of course, just as it would be impossible for humans to evolve independently again, it would be impossible for a consciousness just like ours to evolve again. It would necessarily be at least a little different.
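The breed/cross-over/mutate loop described here can be sketched in a few lines. This is a toy example, not any particular genetic-programming system: the "task" is simply to maximize the number of 1-bits in a bitstring, and the population size, mutation rate, and function names are all illustrative choices.

```python
import random

def fitness(genome):
    # How well the individual performs the task: here, count of 1-bits.
    return sum(genome)

def crossover(a, b):
    # Splice two parents at a random point.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(genome, rate=0.01):
    # Flip each bit with small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(length=32, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # selection based on fitness
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children          # replicate and reproduce
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward 32 as generations pass
```

Nothing about the loop cares that the "genome" is bits rather than molecules; selection pressure plus variation is the whole mechanism.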

But assuming that science can create a conscious entity someday, and also assuming that scientists could somehow know that that thing is conscious, it doesn't seem that scientists would be able to "see" the consciousness in the way they can "see" digestion, cell division or even subatomic events. There's still the ontological 'split' there between first and third person perspectives. If a robot or computer or whatever were to become conscious, it would have a point of view. There would be a subjective 'feel' to be that thing, and as we've agreed, experiences, whether caused by meat, silicon, or whatever, have their own validity not reducible to their causal relations.

Let me lay a few things out and then pose a question. First, scientists have gotten to the point where they can scan a cat's brain and tell what it's seeing. It's a grainy black and white image, but they can get the basics. It is not unreasonable to assume that scientists could one day be able to read my mind and tell what I'm seeing (I want this so badly, to record my dreams BTW). The experience of seeing is related to qualia, is it not? If I look at a tree, I experience the awareness of its shape, color and form. Just by reading the patterns of neurons firing, we can see this qualia outside of the thing experiencing it.

Now, let's take it a step further. They are also developing bionic eyes, some of which bypass the optic nerve and directly stimulate the visual centers of the brain. Once again, the technology is in its infancy and the images are quite blurry, but they've basically got the equivalent of the cochlear implant for vision now. Now, imagine that you feed one right into the other, such that it were possible for me to see what you are seeing. You are having the experience of seeing something, and I am experiencing it just as you do. If visual awareness is part of your conscious experience, and I am experiencing what you are visually aware of, then I am experiencing a portion of your consciousness. It's a bit of a stretch, but if you consider that they can also detect and predict behaviors based on this, then they might be able to sort out the whole thing, such that someone (or even you) could observe the full breadth of your conscious experience.

In any case, you think I might believe that if we understood all the parts, we might be able to understand the whole: that if we just understood what caused consciousness, we would understand consciousness. It might be possible; I certainly wouldn't rule it out. In fact, I have no reason to think that this could not be the case.

The old dilemma that I was presented with was a simple one based on color. How do I know that my experience of the color blue is not identical to your experience of the color green? Your entire experience of color could be shifted in hue from mine and we would have no way of knowing. I know that the sky is blue and that the grass is green, but those are just names that we have assigned to the experience of those colors. You would most certainly call them the same thing, but you may also experience them differently. But does it really matter? It would merely mean that your brain uses a different symbol for the experience of that color. ax^2 + bx + c = y can just as easily be expressed as xa^2 + ya + z = b if the entire variable map is consistent. The symbols used seem almost irrelevant. This might not be the case, though... there are some instances where people experience synesthesia (and I think such people are perfect candidates for research into how our brains produce experiences). They might interpret certain sounds as blue, or a texture as sweet... but then again, that's really just an instance of the variable map not being consistent and having a fair amount of overlap.
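The variable-map point can be checked mechanically: rename every symbol consistently (a→x, b→y, c→z, x→a, y→b) and the computed relation is untouched. A small sketch, with function names and sample values chosen purely for illustration:

```python
def f(a, b, c, x):
    """y = a*x**2 + b*x + c, using the original symbols."""
    return a * x**2 + b * x + c

def g(x, y, z, a):
    """The same formula after the swap a->x, b->y, c->z, x->a, y->b."""
    return x * a**2 + y * a + z

# Feed both the same inputs: identical outputs, because only the
# labels changed, not the structure of the relation.
assert f(2, 3, 5, 7) == g(2, 3, 5, 7)  # both evaluate to 124
```

The inconsistent-map case (synesthesia in the analogy) would be a `g` whose renaming collides, say mapping both b and c onto z; then the two functions genuinely disagree.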

Well put. We agree on all three points. My only point in relation to 3) is that if the nature and activity of a thing is totally reducible to its causes, then why aren't all living things just complex automata? Why the need for consciousness?

What if an automaton of sufficient complexity gained the emergent property of consciousness? What if what we experience as consciousness is just easier than being an automaton? Generally speaking, nature tends to take the easiest route. In our quest for survival, it may simply be that it is easier to be conscious than not when presented with the complexities that we experience.

As far as the perception of agency goes, I can see a lot of use in that as well. If I say something and then immediately notice that someone is now acting towards me in anger, then it is in my best interest to assume that it was because of what I just said. I acted, something happened, and what happened did so because of what I did. It's perfectly reasonable. Now, this might only be an inference, so it has the potential to be an illusion, but it is still very useful, illusion or not.
Nathaniel
user 10963465
Group Organizer
Mesquite, TX
Post #: 158
Here's a link to a video detailing the cat brain reading:
http://www.youtube.co...
Jim B.
user 4260314
Arlington, TX
Post #: 325
First, scientists have gotten to the point where they can scan a cat's brain and tell what it's seeing. It's a grainy black and white image, but they can get the basics. It is not unreasonable to assume that scientists could one day be able to read my mind and tell what I'm seeing.

The point about consciousness having a separate ontology doesn't mean that there's no possible intersubjectivity of various degrees.

The more similar the physiology and behavior between two living things, probably the greater the degree of intersubjectivity. If two people are together and looking in the same direction, each can be pretty confident of what the other is seeing and hearing at a given moment. A cat and a person is more difficult, especially where the modalities are more dissimilar, such as smell and hearing. Between a human and a bat, or between a human and a bird, lizard or fly, it is even more difficult to imagine much overlap occurring. The human would actually have to possess or somehow to take on a simulation of that entire physiology to even get a remote sense of what that animal's experiences might be like. But even if this is someday possible, then to that extent the researcher is assuming that point of view, so that the ontological split still remains.

The point I was raising was that the first and third person perspectives are fundamentally different, that some things can only be understood from the inside. Frank Jackson came up with the thought experiment about "Mary," a neuroscientist working many centuries from now when neuroscience is 'complete', when every fact about the human brain and nervous system is completely understood. (Neuroscience, just like any other science, may never be complete, but it's just a hypothetical premise of this scenario, and is certainly possible.) Mary knows everything about every neural event. For instance, she knows everything about the neural events causing humans to report experiences of the color red. The only catch is that Mary has lived her entire life unable to experience colors other than black, white and shades of gray. (Let's say she herself has been the subject of an experiment from birth in which a chip has been implanted in her visual cortex so that all other colors are bypassed.) Now let's say that the chip is removed and she can experience red for the first time. Does she learn anything new about red that a complete knowledge of all relevant physical facts had not given her before?

To go back to the cat brain scan: if scientists can get approximations of what the cat is seeing, that does not mean that they are 'seeing' the cat's consciousness. They are only getting an approximation of what is in the cat's visual field. My point was that consciousness itself may not be perceivable the way tables and trees are. When I see my computer screen and keyboard, I am not seeing my consciousness, because I would always need my consciousness to see my consciousness, and so on. This problem goes back to the difficulty of giving a non-circular verbal definition of consciousness. Is it the container or the contents or both? The distinction between container and content, based on physical objects in space, is probably not relevant. There's a likelihood that consciousness as such may be too primitive and too ontologically sui generis to be objectifiable. How do you picture subjectivity as part of one's (pictured) worldview when subjectivity is the picturing?

It is not unreasonable to assume that scientists could one day be able to read my mind and tell what I'm seeing (I want this so badly, to record my dreams BTW).

I never, or rarely, experience one modality in isolation. If I walk into Target, I'm not just having visual perceptions but an entire, more or less unified complex of experiences, including a sense of my own body, movement, temperature, beliefs, memories, desires, fantasies, emotions, plans, projects, fears, a sense of other people and what I imagine they might be experiencing, and so on. I am seeing what I am seeing in relation to my own position and motion and to a jillion other things. None of these parts are experienced as separate things but as integral aspects of a greater ongoing whole. Yes, scientists may be able to extract one small part of this whole and see what I am seeing, although I don't see how they could see it in the way I am seeing it, which is as part of this particular whole that constitutes who I am. And to experience the whole, the scientist would in effect have to be me, to experience everything I am experiencing in just the way I am experiencing it, including my memories, fantasies, neuroses, etc., and to know the meanings, associations and interrelations of all these things just as I do. In other words, that person would have to be me, and that introduces a whole range of other philosophical problems.

In a similar way, how could my dreams be recorded for others, or even me, to experience? Dreams are usually not just visual or just auditory, but even if they are, the images mean things. (Not to say that the meanings are that clear or straightforward.) Dreams usually have extremely complex associations and emotional resonances that may go back to infancy. They could relate to other dreams I've had or to dreams that I am only dreaming that I have had. This points up the inexhaustibility of experience in general. It can never be made fully explicit. That's why there's a sense of an inexhaustibility to dreams. To know and understand all of these images and resonances, another person would have to have virtually the same mental life as I do. That's why, when I relate a dream to someone else, I usually have to translate it into another language, a language of public discourse. Of course, the closer I am to that other person, say a twin if I had one, the more there is that can be shared in a tacit rather than just an explicit way.

I know that the sky is blue and that the grass is green, but those are just names that we have assigned to the experience of those colors. You would most certainly call them the same thing, but you may also experience them differently. But does it really matter?

That's exactly the point. It doesn't matter in terms of causal relations, such as how you or I would function. That's because consciousness is a different ontological category from causality and function.

What if an automaton of sufficient complexity gained the emergent property of consciousness?

Then it would be an emergent property with its own ontology. I didn't mean to say that non-conscious animals are necessarily automata. It was just to make a point.

Jim B.
user 4260314
Arlington, TX
Post #: 326
I know that the sky is blue and that the grass is green, but those are just names that we have assigned to the experience of those colors. You would most certainly call them the same thing, but you may also experience them differently. But does it really matter?


That's exactly the point. It doesn't matter in terms of causal relations, such as how you or I would function. That's because consciousness is a different ontological category from causality and function.

Well, this is a problem and a possible contradiction in terms of consciousness being able to cause things to happen. The dilemma for me is that consciousness seems as if it has to have some ability to cause things, otherwise it wouldn't have been so consistently selected for. On the other hand, it seems as if there are aspects of it that are not directly causal. It could be that qualia lack causal power, at least directly, but that other aspects of consciousness, like beliefs, desires, and reasons can cause things, the last one lying more at the conceptual end of the cause spectrum. Maybe part of the problem is that neither cause nor consciousness is that well understood. Maybe the whole thing is wrong. It's not called "the world knot" for nothing.
A former member
Post #: 4
Late to this conversation as well, so again, I'm not going to read all of the responses. As I understand it, we still don't really, really know what consciousness is, but I think it is safe to say that "it" is not a thing, but rather a process.

Consciousness is an emergent phenomenon. It arises from a series of processes linked to biology and chemistry, but it is not any of these things by itself. Consciousness seems to be the process of neurons firing. It does not exist in the neurons, but in the space between them. It is certainly real, but not tangible.

I recommend reading Emergence: The Connected Lives of Ants, Brains, Cities, and Software, by Steven Johnson

From the Wikipedia page on Emergence:

An emergent behavior or emergent property can appear when a number of simple entities (agents) operate in an environment, forming more complex behaviors as a collective. If emergence happens over disparate size scales, then the reason is usually a causal relation across different scales. In other words, there is often a form of top-down feedback in systems with emergent properties. The processes from which emergent properties result may occur in either the observed or observing system, and can commonly be identified by their patterns of accumulating change, most generally called 'growth'. Reasons why emergent behaviours occur include intricate causal relations across different scales and feedback, known as interconnectivity. The emergent property itself may be either very predictable or unpredictable and unprecedented, and may represent a new level of the system's evolution. The complex behaviour or properties are not a property of any single such entity, nor can they easily be predicted or deduced from behaviour in the lower-level entities: they are irreducible. The shape and behaviour of a flock of birds or a school of fish are also good examples.

One reason why emergent behaviour is hard to predict is that the number of interactions between components of a system increases combinatorially with the number of components, thus potentially allowing for many new and subtle types of behaviour to emerge. For example, the possible interactions between groups of molecules grows enormously with the number of molecules such that it is impossible for a computer to even list the arrangements for a system as small as 20 molecules.

On the other hand, merely having a large number of interactions is not enough by itself to guarantee emergent behaviour; many of the interactions may be negligible or irrelevant, or may cancel each other out. In some cases, a large number of interactions can in fact work against the emergence of interesting behaviour, by creating a lot of "noise" to drown out any emerging "signal"; the emergent behaviour may need to be temporarily isolated from other interactions before it reaches enough critical mass to be self-supporting. Thus it is not just the sheer number of connections between components which encourages emergence; it is also how these connections are organised. A hierarchical organisation is one example that can generate emergent behaviour (a bureaucracy may behave in a way quite different from that of the individual humans in that bureaucracy); but perhaps more interestingly, emergent behaviour can also arise from more decentralized organisational structures, such as a marketplace. In some cases, the system has to reach a combined threshold of diversity, organisation, and connectivity before emergent behaviour appears.
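The "system as small as 20 molecules" claim in the quoted passage is easy to make concrete with a back-of-the-envelope count. Note the readings below are my gloss, not the passage's own definitions: "arrangements" is taken as orderings (n!), and interactions as the possible pairwise links.

```python
import math

n = 20
orderings = math.factorial(n)   # distinct orderings of 20 molecules
pairs = n * (n - 1) // 2        # possible pairwise interactions
pair_states = 2 ** pairs        # subsets of those pairs that could be "active"

print(orderings)   # 2432902008176640000, i.e. ~2.4 quintillion
print(pairs)       # 190
```

Even at a billion items listed per second, enumerating ~2.4 × 10^18 orderings would take roughly 77 years, and 2^190 pair-subsets dwarfs that; hence "impossible to even list".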
