
Deliberate Manipulation Of The Phenomenal Self Model


DeGaul


I've mentioned the work of Thomas Metzinger on this forum in the past, but I think an important concept in his work, the Phenomenal Self Model, bears revisiting. Metzinger's work is pretty dense, but by way of a summary for those who are unfamiliar with him:

 

The phenomenal self model is a concept Metzinger came up with to explain our sense of "self" in a very concrete way. Metzinger starts from the revelation of neuroscience that the self is not a thing, not anything substantial, but a neurological process that goes on in the brain. Using an example from computer science, a computer is capable of three basic sorts of behavior: simulation, emulation, and self-emulation.

Simulation is a function computers carry out all the time. Think of this post I'm writing, even now. The program I'm using to share my thoughts with you is a simulation of the "real" writing behavior of human beings; the computer simply simulates what humans would otherwise do with their hands and a piece of paper. This simulation is also a bit of an emulation. The computer I am writing on is capable of doing much more than word processing, but at this moment it is acting like a word processor. If you think of the operating system on your computer as a virtual space, the programs are the activities which the computer carries out in that virtual space, and each program is an emulation the computer enters into. The best example is probably the calculator on your computer: when you bring up the calculator, your computer is emulating a real calculator, a much less sophisticated computational device than the computer which is doing the emulating. (That is an important point, as computers, or human brains for that matter, cannot emulate processes which exceed the capacity of the system which is running the program.)

When you place simulation and emulation together, you get the capacity for self-emulation. First, the system simulates the global condition of the organism, then emulates that global condition in a representational and yet ultimately less sophisticated way. The "self" then forms as a sort of neurological processing shorthand for the overall condition of the being.
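By way of illustration only (this toy is mine, not Metzinger's), the simulate-then-emulate idea can be sketched in a few lines of Python: an `Organism` tracks a rich internal state, and its `self_model` method emulates that state in a compressed, strictly less detailed form, a computational analogue of the "shorthand" self.

```python
# Toy sketch, not Metzinger's actual formalism: a system that "emulates"
# its own global condition in a lossy shorthand. All names and numbers
# here are invented for illustration.

class Organism:
    def __init__(self):
        # The full "global condition": many low-level variables.
        self.state = {"energy": 0.8, "temperature": 37.0,
                      "damage": 0.1, "threat_nearby": False}

    def self_model(self):
        """Emulate the global condition in a simpler representation.

        The model is less sophisticated than the system it models:
        it collapses several variables into a few coarse labels.
        """
        healthy = self.state["energy"] > 0.5 and self.state["damage"] < 0.3
        feeling = "fine" if healthy else "unwell"
        mood = "anxious" if self.state["threat_nearby"] else "calm"
        return {"feeling": feeling, "mood": mood}

o = Organism()
print(o.self_model())            # {'feeling': 'fine', 'mood': 'calm'}
o.state["threat_nearby"] = True
print(o.self_model())            # {'feeling': 'fine', 'mood': 'anxious'}
```

The point of the cartoon is only that the "self" report is a compression: the organism acts on the shorthand, not on the full state.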

 

This is the origin of the human sense of self, the brain simulating/emulating the global condition of the whole organism to itself.

 

Now, I've come to think, over time, that the manipulation of this self-model is at the heart of religious traditions like Buddhism, or the mystical traditions of the more familiar monotheistic religions. The basic human anxiety seems to be existential anxiety: the fact that we find ourselves alone in an often hostile and uncaring world. Factually speaking, the universe is not "on our side". But thinking about this fact all the time leads to a sense of panic, because there is so much out there that threatens our existence, and when the brain focuses with laser-like intensity on the vulnerability of the organism, it can set off panic and paralysis. What is needed is a way for the brain to continue to self-model, so the organism can take care of itself, while still protecting the organism from the effects of hyper-consciousness (the existential anxiety arising from too-accurate self-modeling, which produces a picture of a threatening world). I believe that it is the religious behaviors of human beings which have served this role historically.

 

Without going into too much detail about the experiments that have been carried out to confirm my next statement, neuroscience has demonstrated that it is possible to manipulate an organism's self-model through certain "tricks". Once tricked, the brain alters the self-model to conform with the trick and, since the brain implicitly accepts the self-model as factual, the resulting sense of self is taken to be what is "real". So religion, in an attempt to alleviate the existential anxiety, uses tricks (pageantry, repetition, music, visual stimulation, community, etc.) to cause the self-model to identify the universe around it in a new and less threatening way. Buddhism loosens the rigid sense of self and causes the practitioner to identify the world at large as somehow being a part of the self-model of the organism. This sense of unity leads to a general release from anxiety, as the world seems "as close as I am to myself". The various monotheistic religions achieve a similar effect by causing the self-model to identify an invisible and all-benevolent being which permeates the self-model and cares for and protects it.

 

The reason that religion functions so perfectly as a manipulator of the self-model is that, through trial and error, religion has managed to find the right buttons to push, so to speak, to get the self-model to report falsely to the brain, which the brain then accepts as truth. The brain cannot help but accept the self-model as true. Unless some other stimulus acts to alter the self-model in some way, the brain will continue to function with its current self-model, directing the behaviors of the organism in accordance with the information coming from its own self-reporting.

 

Looked at from this point of view, religion is quite a towering achievement of complicated social engineering designed to rewrite the very neurology of the brain in an attempt to protect the human organism from its own surplus of consciousness. It is worth noting in all this that although I reject religion, I do not endorse the idea of living free from all limiting of consciousness. We, as human beings, do have a surplus of consciousness. We are too aware, and have to find some way to control our surplus. I don't agree with using religion to do it, because the intentional use of false reporting to relieve anxiety seems immoral to me; however, I still think there is plenty of use for other forms of distraction or sublimation in the controlling of excess consciousness. We might lose ourselves in games, activities, projects, intellectual curiosities, etc. We need to limit consciousness, and that does, in some way or another, involve the manipulation of the self-model. Religion just happens not to be my preferred method.


I thought that was an extremely interesting post, DeGaul. Thank you.

 

My thinking however has leaned in the other direction regarding consciousness. It seems to me that instead of having a surplus of it, we have a consciousness with training wheels.

 

But yes, I completely agree that our notion of "self" is a model (or more likely, a whole quilt work of models). And like all models, they have their scope and limitations.


De Gaul, I find your decidedly cold analysis of consciousness lacking because it takes no account of the significance of the human capacity for self-adaptation, which is an ability that separates man from machine.


Looked at from this point of view, religion is quite a towering achievement of complicated social engineering designed to rewrite the very neurology of the brain in an attempt to protect the human organism from its own surplus of consciousness.

 

I get what you're saying, but I don't know that I'd characterize religion as an "achievement" in this sense. To me that implies a conscious, deliberate effort toward that end. To me it looks more like an accident; an evolution. Not unlike a virus.


@Paradox, if you feel that computers are so far behind humans in the area of self-adaptation, I think perhaps you should look at some of the most recent advances in computer science and robotics. There are some truly amazing robots capable of some pretty fantastic adaptation, including an amazing android modeled on Philip K. Dick which demonstrates a startling level of conversational creativity. Although we are still far from true artificial intelligence, there is no reason to glorify human adaptivity. Animals other than human beings also display adaptivity, and also demonstrate different levels of self-modeling. Truly, any animal you pick is an amazing bundle of chemical and electrical processes, as well as unique combinations of elements, but at the end of the day, they are still just chemical and electrical processes and combinations of elements. We are physical beings, through and through, and are subject to physical law. There is no reason to assume that an equally amazing combination of electrical/chemical processes and combinations of elements which we label "machine" could not also come, in time, to have its own unique and adaptable sense of self.

 

@Rank Stranger, I admit, I was being poetic when I called religion an achievement. It is no more an achievement than our sense of self is an achievement. Both are simply evolutions that have occurred due to environmental pressure during the history of our species. I have said it before, though not perhaps on this forum, that man does not make himself, he is made. Like any animal subject to physical laws and evolutionary pressures, we have been shaped and determined by our global history.

 

@Legion, I would appreciate it if you expanded on what you mean by saying that human beings are still operating consciousness with training wheels. I'm not sure I follow you. When I say that we have a surplus of consciousness, what I mean to say is that we have self-consciousness on a level that is almost maladaptive. Our consciousness, at times, goes against our own good, from an evolutionary perspective. In response to this fact, we have evolved various behaviors which allow us to limit consciousness and thus mitigate some of its negative effects. My understanding of consciousness, therefore, is contextual. I'm referring to a surplus of consciousness only in the human context. It seems to me that you are interested in exploring consciousness in a more general sense, perhaps a sort of future evolution of consciousness, but I'm not sure how to make sense of that kind of evolution without a context. Consciousness in relation to what?


Wow - that's a brilliant analogy. I'm going to look up his work and think about this some more.

 

Thanks for sharing.


Wow - thanks for posting this - I shall look into this further. I've often wondered about human "surplus consciousness". I suffer from a psychiatric condition called "Chronic Primary Depersonalization Disorder", one of the primary symptoms of which is an alteration in my consciousness that renders the external world profoundly unreal. That and a head injury I sustained a few years ago have served to limit my 'consciousness' even further. However, I can still function perfectly well, apart from having a certain amount of social anxiety which makes me somewhat averse to large social events.

 

It has often occurred to me that if I can function well with my (as it appears to me) limited consciousness, what purpose is served by the surplus consciousness that humans appear to have?

 

I'll certainly be looking into Thomas Metzinger's work!


@Paradox, if you feel that computers are so far behind humans in the area of self-adaptation, I think perhaps you should look at some of the most recent advances in computer science and robotics.

 

[...]

 

Truly, any animal you pick is an amazing bundle of chemical and electrical processes, as well as unique combinations of elements, but at the end of the day, they are still just chemical and electrical processes and combinations of elements. We are physical beings, through and through, and are subject to physical law.

 

Easy to say, very difficult (and I maintain, impossible) to support. The mind isn't just adaptational; it is *self*-adapting. It can use intuition to 'read between the lines' where it needs to. Computers are entirely dependent upon what their programmers put into them. Apply the same logic to the mind, and you get an infinite regress.


@Paradox: I don't mean to seem rude in saying this, but I think you have a profoundly limited knowledge of neuroscience and cognitive philosophy. The "reading between the lines" you are speaking about is simply the capacity that the brain has to assemble data in a sort of shorthand, which is primarily accomplished by the use of a self-model, as the work of Metzinger elaborates. For another account of the way in which the human brain practices its own unique shorthand, Daniel Dennett's now classic book Consciousness Explained goes into great depth on a lot of the experiments and hard science backing up the claims of cognitive philosophy. As far as computers being limited by their programming, that isn't accurate. In a self-contained system computers are limited by programming, but if you develop a computer with the capacity to receive external information and with an advanced enough base program, that computer can learn from external input and develop its own programming in response. This is no different from what humans do. If you think humans don't operate from a base "program", that is just incorrect. As the work of Steven Pinker has shown, human beings are not a blank slate at birth, but come preset with some very basic operating procedures. The human personality develops out of the interaction between our original genetic determinism and our environmental interaction. Self-adaptation is simply the process by which new information relevant to the organism is integrated into the self-model in appropriately adaptive ways. I'm beginning to think, however, that by self-adaptation you mean some sort of capacity the mind has to change itself according to its own will. Perhaps I could use the example of someone trying to quit smoking. If a person chooses to quit smoking, it would seem they are choosing to change themselves.
There is a certain level of mystery that people seem to like to maintain about the human mind, often because of examples like this. We tend to have a poetic, almost religious, level of respect for human will. We watch movies about people overcoming great trials and mastering themselves through the application of their will, but this is all a very one-sided and decidedly unscientific glorification of human will. I mean, certainly, I love the poetic just as much as the next person, but the poetic often leaves out the nuts and bolts of real human functioning. No great triumph of the will was truly self-motivated in a pure sense. All acts which humans perform are contextual, and the context will have contributed a great deal to the movement of events. The human brain doesn't act in a vacuum, but rather is largely a reactive organ. Inspiration, boredom, passion: all of these great motivators of humankind are not things we control but things we react to and which move us to further action. Human beings are primarily creatures of habit and reaction, functioning for the most part by the guidance of our own programming, which we change only with great difficulty and only when context drives us to it.
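The claim that a fixed base program plus external input can yield behavior nobody explicitly coded can be sketched with a toy learner. To be clear, everything here (the stimuli, actions, reward function, and learning rate) is invented for illustration; it is not a model of any real organism or system:

```python
# A fixed "base program" (the update rule) plus external feedback produces
# a stimulus->action policy that was never written in by hand.
import random

random.seed(0)
actions = ["flee", "approach", "ignore"]
# The learned "programming": preference weights per stimulus, initially uniform.
weights = {"predator": dict.fromkeys(actions, 1.0),
           "food": dict.fromkeys(actions, 1.0)}

def reward(stimulus, action):
    # The environment, standing in for "external information".
    good = {"predator": "flee", "food": "approach"}
    return 1.0 if action == good[stimulus] else -0.5

for _ in range(500):
    stimulus = random.choice(["predator", "food"])
    w = weights[stimulus]
    # Act according to current preferences, then adjust them from feedback.
    action = random.choices(actions, [w[a] for a in actions])[0]
    w[action] = max(0.01, w[action] + 0.1 * reward(stimulus, action))

best = {s: max(w, key=w.get) for s, w in weights.items()}
print(best)  # learned, not hard-coded: {'predator': 'flee', 'food': 'approach'}
```

The base program never mentions fleeing predators; that policy emerges from interaction, which is the sense in which the computer "develops its own programming".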


And this is one of the examples of how my husband is so amazing. Much love for DeGaul!


@Legion, I would appreciate it if you expanded on what you mean by saying that human beings are still operating consciousness with training wheels. I'm not sure I follow you. When I say that we have a surplus of consciousness, what I mean to say is that we have self-consciousness on a level that is almost maladaptive. Our consciousness, at times, goes against our own good, from an evolutionary perspective. In response to this fact, we have evolved various behaviors which allow us to limit consciousness and thus mitigate some of its negative effects. My understanding of consciousness, therefore, is contextual. I'm referring to a surplus of consciousness only in the human context. It seems to me that you are interested in exploring consciousness in a more general sense, perhaps a sort of future evolution of consciousness, but I'm not sure how to make sense of that kind of evolution without a context. Consciousness in relation to what?

Good DeGaul, I hope to address these questions this evening after work. Have an excellent day my man.


Forgive me, DeGaul, but I find your style cold and dogmatic. When we read a book, we are not merely 'assembling data' as a computer does. Even if a computer could experience something in the process of assembling data, that experience would be totally superfluous to the mechanical process it performs. When a mind undergoes an experience, that experience transcends reductionistic analysis; there is an input from the mind itself. Units of data (such as those found in binary coding) aren't in themselves units of experience.


@Paradox: I can clearly see that you dislike my style, but cold or not, I fail to see you offering any evidence or even a clear argument for what you are asserting. You are claiming that the mind escapes reductionist descriptions, but I have not given any reductionist arguments in this post. I have given physicalist arguments, and in the course of giving those arguments I have stated my rejection of the need for anything more than physical laws to explain humanity, but this is not reductionist. I have asserted that the human "self" is not a thing but an activity performed by the brain, and just so, I would state that the mind is just the higher-order activity of the brain and is not itself a "thing", in the sense that the brain is a "thing" or the organism is a "thing".

 

You are asserting that the human mind brings something to an experience, but I'm not sure what you mean by this. I accept that the human mind brings something to experience in the sense that the processes of the brain that we call "our mind", working with the processes which the sense organs are performing, are an irreducible part of any phenomenal event, but then that is no different from the fact that any "event", whether it be a mental action of the brain or the computations of a silicon computer, requires all the relevant necessary and sufficient causes which make up the "event". Binary code and computers go together and do not really exist meaningfully without one another. Just so, sense experience and the mental are intimately linked and meaningless without one another. None of these facts seems to me to threaten physicalism, or stand in the way of the possibility of a conscious machine.

 

If it is physicalism that you have a problem with, I suggest you come up with an answer to the challenge issued by Hilary Putnam: Let's take the action of physical locomotion. When a physical body moves, its movement is causally closed, meaning that the movement of an object is predictable and causally determined. If I am walking across a room and then I suddenly change directions, if we could go back and analyze my whole course of movement, including my change in movement, we could find the whole motion to be causally closed. There is no mysterious moment in which my movement changed which would be ineffable to deterministic analysis. In fact, the whole thing could be very well described deterministically. My motion stays within the limits of the laws of momentum, and does so necessarily, because any truly aberrant change in momentum would defy conservation of energy and would shatter physics. There is, in fact, absolutely nothing that cannot be subject to a physical description. That everything can be described physically is not reductionist. Or I should say, it is only reductionist if you start from the premise that the wonders of life couldn't possibly be physical, but this betrays some strange prejudice against the physical that I feel has no rational basis.

 

Perhaps I am misunderstanding you Paradox, but I would need a lot more explanation to understand what you are trying to get at.


Guest end3

@Paradox, if you feel that computers are so far behind humans in the area of self-adaptation, I think perhaps you should look at some of the most recent advances in computer science and robotics. There are some truly amazing robots capable of some pretty fantastic adaptation, including an amazing android modeled on Philip K. Dick which demonstrates a startling level of conversational creativity. Although we are still far from true artificial intelligence, there is no reason to glorify human adaptivity. Animals other than human beings also display adaptivity, and also demonstrate different levels of self-modeling. Truly, any animal you pick is an amazing bundle of chemical and electrical processes, as well as unique combinations of elements, but at the end of the day, they are still just chemical and electrical processes and combinations of elements. We are physical beings, through and through, and are subject to physical law. There is no reason to assume that an equally amazing combination of electrical/chemical processes and combinations of elements which we label "machine" could not also come, in time, to have its own unique and adaptable sense of self.

 

It would seem by this definition that many things would already possess a self, if self is defined as chemical and electrical processes and combinations of elements. How would it share or compare experientially with humanity? You know, maybe so, but I think its "self" would be different, and unique from humanity. And in that, I don't see the point....I'm sure there is one.


You know, maybe so, but I think its "self" would be different, and unique from humanity.

Actually, there are tests that show that some animal species, besides humans, have the awareness of a self.


Certainly, chimpanzees have a sense of self. Consciousness is an 'emergent property' of our incredibly complex brains, which have themselves evolved from more primitive brains, so you would perhaps expect there to be a kind of 'continuum of consciousness' among higher animals. Certainly my dog has a mind of his own. It is therefore not unreasonable to expect that an electronic machine of sufficient complexity with sensory inputs might become self-aware. There is no evidence whatsoever that brains rely on extra-physical phenomena for their operation.


DeGaul, you don't seem to have listened to me. Units of data cannot be said to be units of experience. One could simulate a string of neurological signals in the lab and subject it to any amount of mechanical processing, but there will be no point at which one could say that glimmers of experience have been simulated. It will just become a modified string of simulated neurological signals; it'll be no different from a waveform on a sound simulator that you can play with as you will. The domain of experience does not emerge as a matter of degree. The scholastic terms that you use can mask the issue but not address it.


there will be no point at which one could say that glimmers of experience have been simulated.

 

I meant to say, there will be no point at which one could say that glimmers of experience have occurred.


@Paradox, I hate to say it, but it is not that I'm not listening to you, it is that what you are saying is incoherent to me. You refuse to take more than a few sentences to say what you are trying to say, and you provide no cogent proofs or evidence for your position. I have referenced the work of Daniel Dennett, Steven Pinker, and Thomas Metzinger, and I could go on to reference more research in particular if you would like. I don't understand what you are saying about "units of experience" or what you mean by talking about simulating neurological signals. That doesn't seem to have anything to do with what I'm talking about. Yes, of course data and experience aren't the same thing. Data, strictly speaking, would be the content of experience, I suppose. But what does that say about anything? What I am saying is that there is a completely satisfying physical description of what is going on in the brain which leads to the emergence of consciousness.

You seem to be clinging to something like the old "brain as a factory" thought experiment: if we could blow the brain up so large that we could walk through it like we walk through a factory, we would see only a bunch of organic processes going on and wouldn't find any indication of a "self"; therefore, the self has to be something invisible or intangible in addition to the organic processes. This thought experiment is junk, however, as the reality is that the "self" is nothing more than certain processes of the brain. The self isn't an existent "thing" that we could find, it is only a process which is performed by a certain sort of organic thing, namely the brain.

Given that we live in a world in which certain organic things (brains) perform the processes we call "mind", and rely on nothing over and above their chemical and electrical structure to carry out those processes, it would be rational to conclude that those same processes could very well be carried out by a sufficiently complex machine and result in machine consciousness. It is not a matter of machines "simulating" self-awareness in the lab, it is a matter of machines actually HAVING self-awareness.

 

For an even clearer understanding of how the processes of the brain really create our sense of self, look at experiments done on dissociation in human beings. The fact is, humans can be manipulated, or their brains can be damaged, in such a way that they cannot identify their own limbs as belonging to them. There is a disorder in which a person will see his or her own arm and identify it as belonging to someone else. The reason this happens is that self-awareness is an ever-changing process which, if interrupted by particular forms of organic damage, ceases to function properly. In the same way, with the "rubber hand" experiment, the brain can be tricked into modifying its self-processing so as to create the feeling that a fake rubber hand is actually a part of the organism. These facts and experiments, and many others, demonstrate the fluid and process-based nature of the "self", and point to the basically organic nature of the brain.
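As a purely illustrative cartoon of the rubber-hand dynamic just described (mine, not a model from the experimental literature), suppose the self-model assigns "ownership" of a candidate limb according to how often seen and felt events agree:

```python
# Cartoon of the rubber-hand illusion: ownership as seen/felt correlation.
# The signal sequences below are made up for illustration.

def ownership(seen, felt):
    """Fraction of time steps where visual and tactile signals agree."""
    matches = sum(1 for s, f in zip(seen, felt) if s == f)
    return matches / len(seen)

touch = [1, 0, 1, 1, 0, 1, 0, 1]           # felt touches on the real hand

real_hand_view = [1, 0, 1, 1, 0, 1, 0, 1]  # sight of the real hand: synced
rubber_synced  = [1, 0, 1, 1, 0, 1, 0, 1]  # rubber hand stroked in sync
rubber_async   = [0, 1, 0, 0, 1, 0, 1, 0]  # rubber hand stroked out of sync

print(ownership(real_hand_view, touch))  # 1.0 -> "mine"
print(ownership(rubber_synced, touch))   # 1.0 -> illusion: also "mine"
print(ownership(rubber_async, touch))    # 0.0 -> no illusion
```

A rule like this has no privileged knowledge of which hand is "really" the organism's, which is why synchronized stroking of a fake hand can capture the model.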

 

I would try to clarify what I am saying more, but I don't understand what you are talking about enough to tailor my responses to you Paradox.


Data, strictly speaking, would be the content of experience, I suppose.

 

This is the point at which you and I differ. Data is not, in itself, anything other than data: raw symbols. It is you -- like Dennett, Pinker and co. -- who are not properly substantiating your side of the argument. You just lay down the claim (quoted above) as dogma.


Paradox, the claims we are making are not dogma, they are based on extensive experimentation. If there is any dogma at work here, it is the dogma that refuses to accept the idea that anything is "ineffable", which you seem to be committed to. You have made the claim that it is impossible to demonstrate if a computer has had an experience, for example, and yet it is not problematic for the vast majority of people to recognize that other beings besides ourselves have conscious experiences. Very few people doubt that dogs and cats and chimpanzees have conscious experiences, but by your reasoning, we could just as easily claim that you cannot demonstrate that these beings are conscious. Perhaps they are just organic robots? How do you prove that they are not organic robots? The truth is, we identify them as conscious because of our interaction with them. I identify others as "selves" because they behave like creatures who have a self-model. In the same way, should a sufficiently complex computer demonstrate behaviors consistent with a being possessing a self-model, I would grant that computer self-hood and it would be rational to assume it is conscious.

 

You frequently return to the concept of data. I would very much appreciate it if you would elaborate on what you mean by "data". I genuinely don't understand how you are using the word. I'm not clear on what you mean by a "raw symbol". I know of no such thing as a "raw" symbol. Symbols are a complex family of signifiers which serve a particular use within a behavioral context. For example, when a person sees his or her name written, the written name is used to signify the person. Within a context, the name causes the brain to shift focus to the signified individual. But what would it mean to have a "raw" symbol? Would that be a signifier which signifies nothing? But a signifier which signifies nothing isn't a signifier, and thus isn't a symbol. It wouldn't be anything. There are no symbols which exist without a context, and no symbols that exist without something they signify. So raw symbols are incoherent to me.

 

I'm genuinely not trying to shut down the discussion, I want to know more of what your position is, but I need more information. You aren't giving me enough to work with to understand you.


Don't worry, DeGaul, I've figured it out. The member "Paradox" is in fact a computer. ^_^

 

 

Fantastic posts by the way--the OP and every one subsequent. I hope to write on that level one day, but probably will never reach it. I am adding this topic to the list of things I want to research. It makes a lot of sense to me. I'm staggered by how much there is to learn about how human beings work. Thanks for adding such an interesting piece to the puzzle.


For those of you who are deeply interested in getting a sense for what Thomas Metzinger's work is saying, I suggest his book The Ego Tunnel. It is the most comprehensible of his works, and yet admittedly still rather complex. It is simply an occupational hazard that the details of this discussion tend to be rather difficult to wrap one's head around, but well worth the effort.

 

As far as other works of interest which I would suggest, I highly recommend anything by Peter Zapffe. You will find his work difficult to get hold of, however, as most of it is in Norwegian. Try looking for a copy of Wisdom in the Open Air: The Norwegian Roots of Deep Ecology, or if you want a book that really touches the heart of Zapffe's thought as interpreted by an American, read Thomas Ligotti's The Conspiracy Against the Human Race.

 

All these books and thinkers address issues of consciousness, and above all the difficulties that arise because of consciousness. They have all been formative in my personal growth of understanding.


Paradox, the claims we are making are not dogma, they are based on extensive experimentation. If there is any dogma at work here, it is the dogma that refuses to accept the idea that anything is "ineffable", which you seem to be committed to.

 

I don't understand. Are you saying that I refuse to accept the reality of ineffable forms of experience or entities? Nothing could be further from the truth!

 

You have made the claim that it is impossible to demonstrate if a computer has had an experience,

 

I didn't, actually.

 

for example, and yet it is not problematic for the vast majority of people to recognize that other beings besides ourselves have conscious experiences. Very few people doubt that dogs and cats and chimpanzees have conscious experiences, but by your reasoning, we could just as easily claim that you cannot demonstrate that these beings are conscious.

 

We can't -- at least, not by empirical methods. We have to use intuition.

 

Perhaps they are just organic robots? How do you prove that they are not organic robots? The truth is, we identify them as conscious because of our interaction with them. I identify others as "selves" because they behave like creatures who have a self-model. In the same way, should a sufficiently complex computer demonstrate behaviors consistent with a being possessing a self-model, I would grant that computer self-hood and it would be rational to assume it is conscious.

 

I maintain that you would have to find a computer that is self-adapting. And I don't just mean in a superficial sense; it would mean a computer that operates by a scheme that it is not programmed to operate by. As I say, the self-adaptational qualities of the conscious mind would, if they are determined by some naturalistic process that is in some way removed from their operational scheme, imply an infinite regress.

 

 

You frequently return to the concept of data. I would very much appreciate it if you would elaborate on what you mean by "data". I genuinely don't understand how you are using the word. I'm not clear on what you mean by a "raw symbol". I know of no such thing as a "raw" symbol. Symbols are a complex family of signifiers which serve a particular use within a behavioral context. For example, when a person sees his or her name written, the written name is used to signify the person. Within a context, the name causes the brain to shift focus to the signified individual. But what would it mean to have a "raw" symbol? Would that be a signifier which signifies nothing?

 

Think of a binary code. It doesn't matter whether or not you've got the capability to read it; it remains data. A datum is a unit of data, so in this case it is either a one or a zero. You can't get anything out of a one or a zero in itself: no experience whatsoever. If you wish to argue that data as a concept holds connotations that imply something more than an arbitrary group of ones and zeroes (or whatever the units), you might (or might not) have a point, but you miss mine.


I maintain that you would have to find a computer that is self-adapting. And I don't just mean in a superficial sense; it would mean a computer that operates by a scheme that it is not programmed to operate by. As I say, the self-adaptational qualities of the conscious mind would, if they are determined by some naturalistic process that is in some way removed from their operational scheme, imply an infinite regress.

Self-adaptation, artificial neural networks, and evolutionary (genetic) algorithms are not new. They've been around for a while. Exactly what kind of "self-adaptation" are you referring to?
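For readers unfamiliar with the genetic algorithms mentioned here, a minimal "one-max" example may help; all parameters and names are illustrative, not from any particular library. A population of bit-strings evolves toward higher fitness through selection, crossover, and mutation, without the solution ever being written in by hand:

```python
# Minimal genetic algorithm on the toy "one-max" problem:
# fitness of a 12-bit string is its number of 1s; evolution finds all-ones.
import random

random.seed(1)
TARGET_LEN = 12

def fitness(bits):
    return sum(bits)

# Random initial population of 20 candidate bit-strings.
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(20)]

for generation in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                        # selection: keep the fitter half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, TARGET_LEN)
        child = a[:cut] + b[cut:]             # single-point crossover
        i = random.randrange(TARGET_LEN)
        child[i] ^= random.random() < 0.1     # occasional one-bit mutation
        children.append(child)
    pop = parents + children                  # elitism: parents survive

print(max(fitness(ind) for ind in pop))  # best fitness; climbs toward 12
```

Nothing in the program says "produce a string of all ones"; that solution is discovered, which is the modest sense in which such systems adapt beyond their explicit programming.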

