Goodbye Jesus

The Mystery Of Consciousness



I think the problem really lies in the question of what consciousness really is. How do we define it? If we define it as "I am aware of my existence", and if we make a machine that claims it has it, can I decide or prove that it does have it or not? Like you say, is there a difference in simulated consciousness or "real" consciousness? (Does Alan Turing ring a bell?)


Quote: I think the problem really lies in the question of what consciousness really is. How do we define it? If we define it as "I am aware of my existence", and if we make a machine that claims it has it, can I decide or prove that it does have it or not? Like you say, is there a difference in simulated consciousness or "real" consciousness? (Does Alan Turing ring a bell?)

I agree. Chalmers doesn't like to use the word awareness to encompass all of consciousness because he feels it is more applicable to the "easy" questions. Although awareness is a part of consciousness, it doesn't seem to be all there is to it.


I think that is what Chalmers is trying to do; he is trying to lay out what all consciousness entails in order to find a theory that will cover all of it. One may have to come at it from both directions, top to bottom and bottom to top.


Dang-it...my bell is silent??


Quote: Well, if we put it this way then. If I were able to put together DNA by hand to produce a brain that in essence worked the same way as our brain, and gave it interfaces of different kinds, like eyes and ears and a mouth, etc., could this constructed brain become conscious?

I think the question of how to create a consciousness is a good one. I've been looking into things that surround this topic for some time, and I think you are right here, Hans. I think the creation of mind necessitates the creation of life. But Descartes may have been wrong. Organisms are not machines.


Here is the route of study that I'm currently engaged in. It is all due to one of my favorite biologists, Robert Rosen.


Complex systems ---> anticipatory systems ---> living systems ---> cognitive systems


Rosen argues (he uses a lot of mathematics that I have yet to comprehend) that complex systems are fundamentally different from simple systems. All machines, including modern digital computers, are simple systems according to Rosen, whereas living systems are complex.


I would love to be able to unfold for you in greater detail what all that means, but I am still learning. Maybe in a few years it will all be a little clearer to me. This subject excites me.


The big question I've had for quite some time isn't so much what/where is consciousness, or "what am I", as much as it is "why am I me?" I could have been someone else or something else, some other time, some other place, but here I am. Perhaps this is the next question on the scientists' list, and I'm just getting ahead of myself. Perhaps also this question only arises in people who have less than ideal lives and are often wondering about or desiring alternatives. (???)

I tend to think this reflexive type of question, one of self-consideration, is perhaps rooted in our use of language, giving us the ability to process abstractions. Actually, I wonder how many people don’t ever really think about the question “who is me?” How many are just consumers and nothing more? I don’t know, really. But I do recall something about stages of development where a fair percentage of people never really move to abstractions like this in thought. Perhaps it’s really more just a psychological phenomenon?


Can anyone point to where MS Windows is? Not MS Windows in general, but where is the core functionality of it and its existence?

Sure. That’s easy. Go to Start > Run, then type in regedit. Delete everything you see in there, and that should pretty much solve that mystery for you. :lmao: (for the non-technical reader, that’s a very destructive thing to do, so don’t do it. A little disclaimer there. :wicked: )


I see consciousness partly as a higher-level process of the brain. The entire body is a huge feedback unit that the brain processes constantly, nonstop: process feedback, execute; process feedback, execute... etc., ad nauseam.
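That "process feedback, execute" cycle can be sketched in a few lines. This is a minimal toy, not a claim about how brains work; the thermostat-like sensor, threshold, and action names are all invented for illustration.

```python
# A minimal sketch of the "process feedback, execute" loop described above.
# The sensor, decision rule, and action are hypothetical placeholders.

def run_feedback_loop(read_sensors, decide, act, steps):
    """Repeat the sense -> decide -> act cycle a fixed number of times."""
    log = []
    for _ in range(steps):
        feedback = read_sensors()   # process feedback
        command = decide(feedback)  # choose a response
        act(command)                # execute
        log.append(command)
    return log

# Toy example: a thermostat-like unit reacting to one number.
state = {"temp": 30}

def read_sensors():
    return state["temp"]

def decide(temp):
    return "cool" if temp > 25 else "idle"

def act(command):
    if command == "cool":
        state["temp"] -= 5

history = run_feedback_loop(read_sensors, decide, act, 3)
print(history)  # ['cool', 'idle', 'idle'] -- temp drops from 30 to 25, then holds
```

The point of the sketch is only that the loop itself is dumb and repetitive; anything interesting comes from what `decide` does with the feedback.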


Imagine it this way if you will..

Imagine a sophisticated robot with a very simple body and very basic wheels for locomotion, sort of like the Mars rover. However, this robot is millions of times more sophisticated because, like the human body, it consists of tiny little sensors all over its body, both internal and external.


Now this little robot's "brain" is a huge library of processes that run in parallel and in serial, processing the constant deluge of information coming in from these little sensors. Let's say different areas of the robot's brain process different types of sensors. And remember that these processes are constantly running, in a loop over and over, doing the same repetitious tasks they had been developed to do.


So the little robot is rolling along (driven by its complicated network of movement routines processing the constant feedback from the sensors in its wheels and its body, rather like a central nervous system). At the same time, the vision sensors are constantly scanning the landscape, feeding the input into the vision routines. The vision routines analyze the different objects by detecting contrast, shape, outline, color, etc. But the vision routine has no idea what the objects are, so the output from vision is passed to the vision output processing routines. These routines process the visual output (which, let's say, is written in a proprietary image format) against the robot's internal storage.


This vision output is passed to the image cross-referencing routines, which cross-reference the visual output against all images in storage in an attempt to find out what the item is. Fast and efficient, the cross-reference routines tag one of the objects as matching the shape of a can. What this particular can is remains unrecognized, since there are none like it in storage, and it is thus flagged as a new item.
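The cross-referencing step above can be sketched as a nearest-match lookup: compare the detected object's features against stored examples and flag it "new item" when nothing is close enough. The feature tuples, similarity measure, and threshold below are all invented for illustration.

```python
# A hedged sketch of the cross-referencing routine: match detected features
# against storage, or flag the object as a "new item" if no match is close.

def cross_reference(features, storage, threshold=0.9):
    """Return (label, score) of the best match, or ('new item', best score)."""
    def similarity(a, b):
        # crude overlap score between two equal-length feature tuples
        return sum(1 for x, y in zip(a, b) if x == y) / len(a)

    best_label, best_score = None, 0.0
    for label, stored in storage.items():
        score = similarity(features, stored)
        if score > best_score:
            best_label, best_score = label, score

    if best_score >= threshold:
        return best_label, best_score
    return "new item", best_score

# Hypothetical stored images reduced to 4-element feature tuples.
storage = {"rock": (1, 0, 0, 1), "oil can": (0, 1, 1, 1)}
print(cross_reference((0, 1, 1, 1), storage))  # ('oil can', 1.0)
print(cross_reference((1, 1, 0, 0), storage))  # ('new item', 0.5)
```

A real vision system would compare far richer representations, but the shape of the routine is the same: best match above a threshold, otherwise "new item".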


At this point all the above processes can be labeled as "subprocesses" that nonstop, 24/7, process all incoming data and file it away into storage.


Let's say all objects tagged as a "new item" take priority and are passed to the higher processes. This higher process is more of a flow-control process that determines where the information will be passed next. However, this process basically utilizes all the subprocesses to make this determination, based on rules applied to the subprocess outputs.



Now let's make this robot a bit more complicated; remember, this is an advanced robot =) The entire robot is pretty much just a complex network of feedback sensors; however, there is also an efficiency component to these sensors.

Basically, each of the robot's sensors and physical components has, say, 5 stages of efficiency. So the robot's internal cooling system can operate at optimum efficiency if there is enough incoming energy to run it. However, when there is not enough internal energy supply, or if a component is not required, it can either go to a lower-energy mode or standby (rather like today's laptops). Unlike laptops, there are 5 stages instead of just 2.


So back to our high level processing routines.

One area of input to the high-level process is the efficiency state of each of the robot's physical components. For example, if the cooling system for its "brain" is running at maximum, the "brain" can run more efficiently without overheating and thus process more information per second. If the feedback from the battery system is low, then the high-level process can send out a command to crank down the efficiency of the components controlling movement, or deploy its solar panels to gather energy. We'll call this high-level process the "internal stimuli routine".
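The five-stage efficiency idea plus the battery-driven adjustments might be sketched like this. The stage numbering, battery thresholds, and component names are all made up for illustration; the point is only that one routine nudges every component's power state based on internal feedback.

```python
# A sketch of the "internal stimuli routine": each component sits at one of
# five efficiency stages (1 = standby ... 5 = optimum), and the routine steps
# stages up or down based on a battery reading. Thresholds are hypothetical.

STAGES = (1, 2, 3, 4, 5)

def internal_stimuli_routine(components, battery_level):
    """Return new stages, moved one step toward low or high power."""
    adjusted = {}
    for name, stage in components.items():
        if battery_level < 0.2:
            adjusted[name] = max(stage - 1, STAGES[0])   # power down
        elif battery_level > 0.8:
            adjusted[name] = min(stage + 1, STAGES[-1])  # power up
        else:
            adjusted[name] = stage                       # hold steady
    return adjusted

components = {"cooling": 3, "movement": 4}
print(internal_stimuli_routine(components, 0.1))  # {'cooling': 2, 'movement': 3}
print(internal_stimuli_routine(components, 0.9))  # {'cooling': 4, 'movement': 5}
```

Stepping one stage at a time (rather than jumping straight to standby or maximum) mirrors the gradual "crank down" the post describes.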


The "internal stimuli routine" (ISR) feeds into the "external stimuli routine" (ESR), also a high level process. Now the output from the visual image that was cross-referenced is in storage accessible by all processes. Tagged as a "new item" the ESR accesses the information. The ESR however only controls the flow of information and makes the higher level decisions based on the rules stored in its neural network.


So the ESR passes the information to another process, the "past experience" process. This subprocess takes the new object "can", cross-references it with storage, and checks what actions have been done on similarly shaped "cans" in the past. When finished, this subprocess tells the ESR that there is a 75% chance the "can" is an "oil can" based on past data, and that the past action has been to "siphon into lubricant oil intake".
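That "past experience" lookup is essentially a frequency count: of previously seen objects with a similar shape, what fraction carried each label, and what action followed? The records below are invented to reproduce the 75% figure from the post.

```python
# A sketch of the "past experience" subprocess: estimate what a new object is
# from the labels of similarly shaped objects seen before. Records are
# hypothetical, chosen so "oil can" comes out at 75%.

from collections import Counter

def past_experience(shape, records):
    """Return (most common label, its probability, its usual action)."""
    matches = [r for r in records if r["shape"] == shape]
    if not matches:
        return None, 0.0, None
    labels = Counter(r["label"] for r in matches)
    label, count = labels.most_common(1)[0]
    action = next(r["action"] for r in matches if r["label"] == label)
    return label, count / len(matches), action

records = [
    {"shape": "can", "label": "oil can", "action": "siphon"},
    {"shape": "can", "label": "oil can", "action": "siphon"},
    {"shape": "can", "label": "oil can", "action": "siphon"},
    {"shape": "can", "label": "paint can", "action": "avoid"},
]
print(past_experience("can", records))  # ('oil can', 0.75, 'siphon')
```

Nothing here "understands" cans; the 75% is just a ratio over stored episodes, which is all the thought experiment requires.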


Now the little robot is not going to immediately go and siphon the unknown liquid, as previous versions of this robot that went around siphoning without first calculating risk and possible outcomes have met with untimely demises.


So another process churns through the "can" / 75% "oil can" data and cross-references possible outcomes based on the experience in storage. This process can substitute the "can" into past situations and estimate a probable outcome. All this data goes into storage. An example: previously the robot had shot a piece of wood with a laser, setting it on fire and thus generating illumination; the process substitutes the "can" and, by cross-referencing data on shooting cans with lasers, calculates the likelihood that the can will catch fire and provide illumination. Thus an entire table of information will be stored: each possibility, with its many outcomes and a percent chance for each.
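The substitution step can be sketched as building a table: take stored experiences with analogous objects, swap the new object in, and record each action with its outcome and a carried-over chance estimate. The experiences and probabilities below are illustrative only.

```python
# A sketch of the outcome-substitution process: swap the new object into past
# experiences and tabulate (action, outcome, estimated chance). Data invented.

def substitute_outcomes(new_object, experiences):
    """Build an outcome table by substituting new_object into past events."""
    table = []
    for exp in experiences:
        table.append({
            "object": new_object,
            "action": exp["action"],
            "outcome": exp["outcome"],
            "chance": exp["chance"],  # prior chance carried over from the analogue
        })
    return table

experiences = [
    {"action": "shoot with laser", "outcome": "catches fire, illumination", "chance": 0.6},
    {"action": "siphon", "outcome": "gain oil", "chance": 0.75},
]
for row in substitute_outcomes("can", experiences):
    print(row["action"], "->", row["outcome"], row["chance"])
```

A more serious version would adjust each chance for how well the analogue matches the new object, but the table structure is the interesting part: it is exactly what the later "calculate risk before siphoning" step consumes.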


At the same time, one of the subprocesses that drives the overall efficiency settings accesses storage and processes the fact that a 75%-probable "oil can" has been sighted. Current efficiency of the motors is at 2 due to low oil reserves; oil is needed fairly critically, as soon as possible. A quick cross-reference with past experience determines a 90% chance that boosting "brain" cooling efficiency to 4 and motor speed to 4 will resolve processing decisions faster, as well as give quicker access to oil.


The efficiency switch is executed. Internal feedback sensors detect the change in efficiency, and the information is passed to the ISR. A "boost in 'adrenaline'" message is passed from the ISR to the ESR. The clock speed of the robot's brain is increased, and thus the brain runs faster. The robot decides the best option is to siphon the liquid. Siphoning begins, but the siphon sensors detect that the liquid does not match "oil" in consistency or chemical composition. The liquid is rejected, and information is filed away that cans with "Coke" text are not to be identified as a beneficial oil substance.


The little robot continues onward as its internal sensors report lower and lower reserves of oil. Efficiency cranks downward as the robot continues on its way.




Sorry, that was looong, but that simple robot doesn't seem to meet the requirements of consciousness to me. While it can process possible outcomes, there is no reason for it to process possible outcomes of itself yet.



I would think for that we would need to add self-exploration via its own feedback sensors to the scenario: touching and manipulating its components, reading the feedback from these actions, and filing it away in the database. However, if this robot were all alone, that does not seem like it would be enough to count as consciousness or becoming self-aware.



But if it were to meet another robot like itself, or several others like itself, that provides an opportunity for it to accumulate data about other robots. Cross-reference that data with data about itself, and the similarities as well as the differences begin to arise. Then the internal process can create possibilities involving other robots and their outcomes. A quick substitution, and outcomes featuring itself in the other robot's situation can be calculated.


At this point, then, the conclusion should be that something is controlling the actions of the other robots. And the resulting substitution is that, similarly, something internal is processing feedback, making decisions, and controlling yourself, right? Is this consciousness? I think I've lost track of the definition of consciousness lol


Then we would get to communication between two robots. Say they exchange data wirelessly; there needs to be some way of self-identification, because they are all robots that act and look similar. If they cannot take images of themselves to use as ID, perhaps they would use accumulated data about things in the world they find themselves similar to. Via its internal comparison/substitution routines, the little robot above is hard externally like a "little rock", and thus communications from the robot may identify it with the tag of a "little rock".


Are we conscious yet? I don't know...


Let's say we remove similar components from a human ...

If a person had no nerves and could not process feedback from the outside world could they still be conscious?


What if a person's long-term memory storage never functioned? Not like Alzheimer's, where there is a gradual loss, but rather long-term memory never worked in the first place after birth. The brain would be unable to cross-reference any current experiences with past experiences. Would it know that there are other human beings like itself? Would it remember? Can the person be considered conscious?


What if long-term storage works, but only holds enough information for 2 weeks at a time?


And what if the brain works perfectly fine, but there weren't any other humans on the planet, and the person had never seen another entity like him or herself? Can they become self-aware when there is no information about other entities like themselves to cross-reference?


hmm... I wonder... then, would the mythical Adam be considered conscious? he was all alone to begin with wasn't he..



well, i don't know what my confused ramblings conclude if anything.. i sort of came up w/ it all right now in the middle of the night =)

