
Deliberate Manipulation Of The Phenomenal Self Model


DeGaul


I see, so whether or not computers experience anything is purely a matter of degree of complexity...?

Right, so if I kick my computer and it flickers, that is a little bit of cruelty that I have inflicted upon it? Well, I hope the cops around here aren't of the zero-tolerance variety :Doh:

Your computer is probably an old single core Pentium, so it doesn't measure up to the AI research labs' systems. :grin:

 

Some of the systems have specially built boards with artificial neural nets. A neural network is used for pattern recognition, like recognizing faces and such.
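To give a toy flavour of what that kind of pattern recognition looks like in code, here is a minimal single-neuron sketch in Python. The data and features are invented for illustration; real face-recognition nets are vastly larger and usually run on the dedicated hardware mentioned above.

```python
# A toy "neural net" for pattern recognition: one perceptron learning
# to separate two classes of 3-feature patterns. Purely illustrative --
# the training data below is invented.

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of (features, label) pairs with label 0 or 1."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in samples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            error = label - (1 if activation > 0 else 0)
            # Nudge the weights toward the correct answer (the "learning" step).
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def classify(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Invented data: label 1 = "pattern present", label 0 = "pattern absent".
data = [([0.9, 0.8, 0.7], 1), ([0.8, 0.9, 0.6], 1),
        ([0.1, 0.2, 0.1], 0), ([0.2, 0.1, 0.3], 0)]
w, b = train_perceptron(data)
print(classify(w, b, [0.85, 0.75, 0.8]))  # expected: 1
```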


"You never said I was factually wrong, and I don't believe I am."

 

Then let me be clear, I do think you are factually wrong.

 

 

"You are thinking of the school of scholasticism and have evidently been typing the wrong word into Wikipedia. I was just using a derivative of 'scholar'."

 

That was a rather snarky comment to make. I hardly need a wiki to tell me who the scholastics were. And it is not common in philosophy to refer to a thinker as scholastic unless you mean it in the formal sense of "a follower of scholasticism". (At least not among all the philosophers I have known over the years.)

 

"You have yet to say anything about how computers experience emotion. Do you think it would be a worthy endeavor to set up an institute dedicated to the prevention of cruelty to computers (of the A.I. version), if you learnt that many of them are getting abused, smashed up etc?"

 

No computer, as yet, has shown any sign of emotion because they do not have that capacity. I have been discussing self-modeling, which is a unique process of emulation/simulation which leads to a reporting of the organism to itself about the global state of the organism. I hold that this self-reporting is the source of self-hood, and as such, I grant that a computer could have a self. Beyond that, I am not asserting that there is no difference between human and (potential) computer selves. Our emotions are based in our biology, and are the result of our organism reporting to itself about the activity of complex chemicals within the organism. A computer does not have those chemicals, nor our biology, and so it is not clear why computers would need emotions, or if they could develop an analog to emotion. It seems likely that evolving computers could develop their own method of achieving the same end as what our emotions achieve, in evolutionary terms, without the computer method being the same as our own method. Without all the big terms and confusing jargon, I'm saying that a computer with a self would not be identical to a human with a self, but both would have selves, in the same way that a dog with a self is not the same as a human with a self. Each self-model would have unique content.


DeGaul,

 

Speaking of books on artificial intelligence and emotions, I just bought one called "Affect and Artificial Intelligence" by Elizabeth A. Wilson (2010).

 

I'm not sure how good this book is yet, but it sounds really interesting.


Guest end3

I see, so whether or not computers experience anything is purely a matter of degree of complexity...?

Why not?

Right, so if I kick my computer and it flickers, that is a little bit of cruelty that I have inflicted upon it?

 

Why, after it has the recognition ability and experience, could it not identify itself as hurt, abused, etc? If I kick an infant, will he understand he is a victim of cruelty?......Except by maybe some genetic influence/complexity?


"You never said I was factually wrong, and I don't believe I am."

 

Then let me be clear, I do think you are factually wrong.

 

You haven't explained how, except to express your own beliefs, which insofar as they contradict me are by no means facts.

 

"You are thinking of the school of scholasticism and have evidently been typing the wrong word into Wikipedia. I was just using a derivative of 'scholar'."

 

That was a rather snarky comment to make. I hardly need a wiki to tell me who the scholastics were. And it is not common in philosophy to refer to a thinker as scholastic unless you mean it in the formal sense of "a follower of scholasticism". (At least not among all the philosophers I have known over the years.)

 

http://www.thefreedictionary.com/scholastic

(Note senses 1 and 3 of the adjective)

I had simply referred to your 'over-scholastic language'. Beats me why that led you to refer to Aristotle & co.

 

"You have yet to say anything about how computers experience emotion. Do you think it would be a worthy endeavor to set up an institute dedicated to the prevention of cruelty to computers (of the A.I. version), if you learnt that many of them are getting abused, smashed up etc?"

 

No computer, as yet, has shown any sign of emotion because they do not have that capacity. I have been discussing self-modeling, which is a unique process of emulation/simulation which leads to a reporting of the organism to itself about the global state of the organism. I hold that this self-reporting is the source of self-hood, and as such, I grant that a computer could have a self. Beyond that, I am not asserting that there is no difference between human and (potential) computer selves. Our emotions are based in our biology, and are the result of our organism reporting to itself about the activity of complex chemicals within the organism. A computer does not have those chemicals, nor our biology, and so it is not clear why computers would need emotions, or if they could develop an analog to emotion.

 

You will have to explain what it is about chemicals that imbues them with this magical power.

 

You seem to have ignored all my comments about self-adaptation. A mind's framework for rational thinking responds to the instant, in the sense that its circumstances impose grounds for new schemes of rational thought -- new assumptions, etc. A.I.'s operational framework remains constant, even if it can be programmed to show superficial modifications in respect to behaviour.

 

And -- sorry -- I really think you will be laughed out of any court or research institute or what-have-you, if you pursue the ludicrous idea of there being cruelty to computers, what with the notion that the reality of experience emerges as a matter of degree and as a function of complexity.

 

Complexity? I am going to turn my computer off in a minute -:lmao:- I am not sure whether it likes that or not. Can anyone here advise? :lmao:


Paradox, every post that I've put up here has been a running explanation of why you are factually wrong; you simply refuse to accept any of the arguments put to you.

 

"You seem to have ignored all my comments about self-adaptation. A mind's framework for rational thinking responds to the instant, in the sense that its circumstances impose grounds for new schemes of rational thought -- new assumptions etc.. A.I.'s operational framework remains constant, even if it can be programmed to show superficial modifications in respect to behaviour."

 

No, I haven't ignored your comments; I've answered them. The reason the human organism can "self-adapt," as you call it, is that the neurological structure in the brain called the self-model is malleable and responds to external input. As the environment changes, the organism is affected, and, because the human organism has evolved the brain structures that make it capable of forming an emulation/simulation of itself within its own neural pathways, which then self-presents to the brain, a new sense of self emerges in response to the changing state of the organism on a global level.

We know that this self-model changes in response to environmental influence because we can trick the self-modeling part of the brain in the research lab. The rubber hand experiment is the classic example. A test subject is seated at a table, his hand is placed in a box so that he cannot see it, and a rubber hand is placed in front of him. The researcher then tells the subject to focus on the rubber hand while an assistant simultaneously strokes the subject's real hand and the rubber hand with brushes. Within about 30-60 seconds, the sense of "self" or "ownership" which we feel toward our own hand slips into the rubber hand. The subject will feel, with absolute certainty, that the rubber hand IS his own hand. The reason is that the self-model has been made to report falsely to the brain.

This can also happen in reverse, as when a certain brain disorder causes the patient to no longer be able to identify her own limbs. No matter how intently the patient looks at her own limbs, she will feel no sense of ownership or "mineness" in relation to them. She will insist that the limbs belong to another person, possibly the researcher, or even insist that they are floating limbs, rather than identify them as her own. Once again, this is an example of the self-model failing to report properly.

 

"A.I.'s operational framework remains constant, even if it can be programmed to show superficial modifications in respect to behaviour."

 

Human "operational framework" is also constant. The plasticity of the human brain is great, but not limitless. Other posters have pointed this fact out. There is not limitless capacity for change for the human organism. We operate within our sensory apparatus and by our physicality. The pioneering work of George Lakoff and Mark Johnson in the area of verbal metaphor and concept formation demonstrate how the very content of our thoughts is entirely conditioned by body experiences. The "operational framework" which constrains the human organism is written into our very bodies and brains. We operate within the limitations of our brain structures, and developments beyond the human are really the providence of mutation and natural selection. Perhaps that is really the difference between organic beings and potential computer beings.....it is hard to imagine how computer beings could mutate and thus evolve.

 

"And -- sorry -- I really think you will be laughed out of any court or research institute or what-have-you, if you pursue the ludicrous idea of there being cruelty to computers, what with the notion that the reality of experience emerges as a matter of degree and as a function of complexity."

 

Among contemporary researchers it is generally accepted that consciousness emerges along a continuum. Not to harp on Metzinger, but as he is one of the founders of the Association for the Scientific Study of Consciousness, I think he is a relevant source:

 

"We have learned that consciousness reaches down into the animal kingdom. We have learned about psychiatric disorders and brain lesions, about coma and minimally conscious states, about dreams, lucid dreams, and other altered states of consciousness. All this had lead to a general picture of a complex phenomenon that comes in different flavors and strengths. There is no single on-off switch...consciousness is a graded phenomenon." The Ego Tunnel, p19

 

As consciousness is a graded phenomenon, it exists in different levels of complexity and intensity, and in many different forms. It is not at all silly to suggest that sufficiently complex computers with sensory apparatus could become conscious. Continually mentioning your personal computer, however, is an empty example, as no one would suggest that a personal computer has the relevant level of complexity to manifest consciousness.


There was time some years ago when I too thought that computers could manifest intelligence. I seriously doubt it now though.


You only ever answer by reference to your own model. Assuming what one sets out to prove is not acceptable in logic. You don't answer the questions I pose qua model-independent questions.

 

Among contemporary researchers it is generally accepted that consciousness emerges along a continuum.

 

Oh, really? That's news to me. You honestly seem to be confusing popular/trendy authorship with contemporary research.

 

Continually mentioning your personal computer, however, is an empty example, as no one would suggest that a personal computer has the relevant level of complexity to manifest consciousness.

 

So now you are changing your view and saying that there is not a gradation but a 'relevant level of complexity' at which my computer becomes conscious, and my kicking my computer will only be an act of cruelty when my computer is above such a level?

 

BTW I shouldn't hinge your argument specifically on the notion of consciousness, as distinct from mind, because you will be in danger if you suggest that my computer, with its low level of complexity, is in any way analogous to a mind that is in a coma.


Paradox, I don't normally step away from a debate, but honestly, you are just exhausting. No, I'm not assuming my premises. No, I'm not performing illegitimate leaps of logic. No, I'm not referring to any kind of "trendy" or "popular" writing, I'm referring to the work of some of the most prominent minds in the business of cognitive philosophy today.

 

Looking back over the vast majority of posts to this thread, I think I can safely say that the majority of people here feel that I've made a pretty good case for what I'm saying, and I'm happy to know that many people who have read this thread will step away with the urge to explore the works of Thomas Metzinger. That feels like a successful post to me. Beyond that, I really don't have any desire to walk you through these ideas anymore. If you are interested in the vast collection of research related to the points I've been making, please reference the Association for the Scientific Study of Consciousness and Thomas Metzinger's own work.

 

If you have any relevant studies or researchers you'd like me to look at in order for me to expand my understanding, please provide them, but to be honest, I don't believe you are at all informed on these issues as you have not made a single reference I can find to any research studies or experimental evidence.

 

Thank you for your vigorous debate; it has helped to sharpen my views and has given me ideas about where my own research should develop in the future, but for now I'm afraid this debate has lost any kind of a point.


I have found this thread fascinating from the beginning. I'm not a philosopher but an electronics engineer specializing in chip design. In my job I frequently use a construct called a 'Finite State Machine' (FSM), which can be considered a 'black box' with inputs and outputs: the outputs are a function of the current inputs and the current internal state of the FSM. It would seem to me that brains (of insects, fish, reptiles, mammals, or computers) could hypothetically be represented as very complex FSMs, in the case of the human brain with perhaps trillions of internal states. It wouldn't (in my view) particularly matter whether this state machine were implemented on a biological or silicon substrate. The difficulty from a philosophical point of view is how to understand the fact that complex systems (biological or otherwise) develop the emergent property of consciousness. It is quite clear to anybody who has owned a dog that dogs are conscious in some sense and have emotions, but nobody thinks that insects have consciousness in any meaningful sense, so clearly complexity is at the root of the emergence of consciousness.
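For concreteness, here is roughly what I mean, sketched in Python rather than in hardware (the two states and the transition table are invented, just to show the shape of the thing): the output at each step is a function of the current input and the current internal state.

```python
# Minimal finite state machine: the output depends on the current
# internal state plus the current input. States and table are invented.

class FSM:
    def __init__(self, initial_state, transitions):
        # transitions maps (state, input) -> (next_state, output)
        self.state = initial_state
        self.transitions = transitions

    def step(self, symbol):
        self.state, output = self.transitions[(self.state, symbol)]
        return output

# A two-state example: emit "edge" whenever the input signal changes level.
table = {
    ("low", 0): ("low", "steady"),
    ("low", 1): ("high", "edge"),
    ("high", 1): ("high", "steady"),
    ("high", 0): ("low", "edge"),
}
fsm = FSM("low", table)
print([fsm.step(bit) for bit in [0, 1, 1, 0, 0, 1]])
# -> ['steady', 'edge', 'steady', 'edge', 'steady', 'edge']
```

A brain-scale version would just be this with an astronomically larger state set and transition table, whatever substrate it runs on.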


... complexity is at the root of the emergence of consciousness.

I lean towards strong agreement with this, Andy, but I suspect that we are using different notions of complexity. But then, I know some cutting-edge scientists who disagree about the definition of this word.

 

I make a distinction between complicated and complex. I think machines or mechanisms can be arbitrarily complicated, but I think they lack the requisite organization to qualify as complex. Increasingly, to my mind (I am still learning here), complex implies that a system will manifest certain paradoxes.

 

This brings me to a question for those of you who still believe that computational AI is possible...

 

Do you think that a computer can be programmed to anticipate? That is, can they be made to sense things, reason about them, make predictions, and utilize these predictions to control their own behavior? In addition, do you think they can be made such that they will acquire new anticipations?
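To make the question concrete: in the narrow programming sense a predict-then-act loop is easy enough to write. Here is a hedged little sketch (Python, with an invented thermostat-style example) of a controller that extrapolates from past readings and acts on the prediction rather than on the present reading. Whether this counts as genuine anticipation, in Rosen's sense, is exactly what I am asking.

```python
# A controller that "anticipates": it predicts the next sensor reading
# from recent history and chooses its action based on that prediction.
# Entirely invented toy example -- a thermostat-like loop with a naive
# linear-trend guess as its internal predictive model.

def predict_next(history):
    """Naive prediction: extend the most recent trend one step ahead."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def choose_action(predicted_temp, setpoint=20.0):
    # Act on where the temperature is heading, not where it is right now.
    if predicted_temp < setpoint - 0.5:
        return "heat"
    if predicted_temp > setpoint + 0.5:
        return "cool"
    return "idle"

readings = [18.0, 18.6, 19.2, 19.8, 20.4]
history = []
for temp in readings:
    history.append(temp)
    print(temp, "->", choose_action(predict_next(history)))
# Note how it starts cooling at 20.4, before the setpoint is actually exceeded.
```

The "acquiring new anticipations" part is where it gets interesting: you would have to replace the hard-coded trend rule with a model the system learns and revises from its own history.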


nobody thinks that insects have consciousness in any meaningful sense

 

Why on earth do you say that? Have you never seen an angry swarm of wasps? At least, the nature of their behaviour -- when you throw a brick at their hive, or whatever -- to me implies anger; and anger implies higher-level processing of experiential data, and that processing implies consciousness. I can't prove it of course, unless I was to try to thrash out my self-adaptation argument yet further, but like DeGaul, I'm getting exhausted.

 

DeGaul: in your superior knowledge of the research (not just the trendy stuff!), I notice you do not so much as mention John Searle's Chinese Room and the literature that followed from it, which any self-respecting philosopher (as distinct from neuroscientist/A.I. protagonist) would tackle as a first port of call. And as a self-proclaimed analytic philosopher, I imagine you have given yourself licence to dismiss, out of hand, metaphysicians of the Enlightenment (especially those who expressed philosophies of the mind, such as Bishop Berkeley, Locke, Descartes, Kant etc.) and those contemporary academics, such as John Bennett and James van Cleve (oh, but I could go on and on!), whose work reflects their belief that it still holds merit.


John Searle yeah. I think he's on to some things. I hope he read that book by Rosen I sent him.


I can't bear this formulation of consciousness as a function of complexity. Our brains are performing more complex operations, in the sense that more of the brain is active, when we are unconscious than when we are conscious.


Paradox, I want to read this sometime soon...

 

[Book cover image: Hubert L. Dreyfus, What Computers Still Can't Do (MIT Press)]


Does Dreyfus still believe we can't have a computer that is the world master in chess?


So does Dreyfus still believe we can't have a computer that is the world master in chess?

:HaHa: I didn't know he made that prediction. However, I never would have placed chess playing in the category of things computers can't do.


So does Dreyfus still believe we can't have a computer that is the world master in chess?

:HaHa: I didn't know he made that prediction. However, I never would have placed chess playing in the category of things computers can't do.

My understanding is that Dreyfus is of the opinion now that AI is possible. His critique was really about the approach and overly optimistic attitude in the field.


My understanding is that Dreyfus is of the opinion now that AI is possible. His critique was really about the approach and overly optimistic attitude in the field.

Oh I think AI is possible, but I don't believe it will be possible with any artifact which is Turing computable.


Since the conversation seems to be drifting from the old dead horse that Paradox and I were beating into the sunset, I will add this: Yes, I know John Searle very well. In fact, I was heavily influenced by him in my early education as a philosopher, and although it is true that I have drifted from his line of thinking to the thinking of others, like Daniel Dennett, there is nothing inherently contradictory between Searle's work and anything I've said in this thread. Let me summarize my argument in brief and show why Searle and I are not necessarily at odds:

 

1: Humans have a sense of self which is phenomenal, episodic, and capable of being misled. This has been established through research, and I, like Metzinger, call it the Phenomenal Self-Model.

 

2: The Self-Model is a neurological process by which the human organism develops a concept of being in a world and of having a specific place in that world called "self".

 

3: Self-Models do not just manifest in human brains, but in other animal brains and in degrees.

 

4: Although we do not understand all the relevant features of the brain which allow for the emergence of consciousness, we do know that there is nothing magical about brain matter. Brain matter is just a collection of electrical and chemical elements, and so it is not unreasonable to assume that something other than brain matter might manifest consciousness.

 

Searle actually agrees with much of this line of argumentation. He is not necessarily a disbeliever in AI; what he argues against is the computational model of thought. What Searle rejects is the idea that human consciousness is algorithmic. But even if we grant Searle that thought might not be algorithmic, we can still say (and Searle does say this) that if we could come to understand the relevant structures in the brain which give rise to consciousness, we could, in theory, work to create an artificial brain with all the same relevant structures, which may manifest consciousness.

 

The beauty of the work on self-modeling which Metzinger has explored is that it does not rely on the computational model of the human mind, necessarily. All it depends on is the research on human senses of self and how those senses of self are process-based and fallible.

 

If a person wants to reject the computational model, as Searle does, however, one must then accept that it may be possible to construct non-algorithmic thinking machines, because if human and animal brains are capable of decision-making in a non-algorithmic fashion and there is nothing magical about brains, then the process should, in theory, not depend on brain matter for its production.

 

As far as rejecting metaphysics out of hand, that is hardly the case. I've struggled with metaphysics for years and years, and only after hard and careful thought have I rejected much of what was said by metaphysicians. (And I haven't rejected all of it, mind you. Many great metaphysicians have made astonishing logical points which have relevance today, it is just their metaphysics which are confused.) The work of Wittgenstein is just too persuasive and too corrosive to metaphysics to make the project viable anymore, in my mind, except as a sort of poetic exercise in personal world-views.

 

I do believe that complexity is at issue here and you shouldn't dismiss its relevance, Paradox. I mean, I agree with Legion that complexity and complication are not the same thing. Complexity implies an internal and unified organization, but complication is just an unorganized mess. Still, it seems obvious that high order consciousness only manifests in organic beings with pretty complex brains. I don't think we could legitimately believe that the conscious part of the brain is "simple" in any meaningful way. Once again, even given Searle's reading, one has to accept that mind and consciousness are emergent properties of millions of neurons working together in complex ways which we don't entirely understand at this point. That seems to me to make it at least rational to assume complexity is a part of the issue here.


My understanding is that Dreyfus is of the opinion now that AI is possible. His critique was really about the approach and overly optimistic attitude in the field.

Oh I think AI is possible, but I don't believe it will be possible with any artifact which is Turing computable.

Exactly! That's been my point all these years. :grin: I think we finally reached each other.

 

Adaptive electronics, analog micro-cell chips (or whatever we could call them), evolutionary processes of structure, etc, now we're talking.

 

And I believe (and I'm not alone) that consciousness will not arise from AI per se, but from a sub-conscious structure. And we will have the faults and errors that come with it.


Oh, and perhaps to be fair it is long overdue that I sort of give the gist of my "philosophical pedigree" so that perhaps everyone can better understand where I'm coming from as a philosopher:

 

I come from pretty humble beginnings, studying philosophy in Wisconsin as a young man, mostly studying Thomistic philosophy and Theravada Buddhism.

From Wisconsin, I traveled to California to study under the great D.Z. Phillips, a student of Rush Rhees, who was himself a student and dear friend of Wittgenstein. (Rush Rhees actually had an active hand in helping Wittgenstein write The Investigations, and many of the ideas in it are ideas that Wittgenstein and Rhees hashed out together.) My exposure to Wittgenstein forever shaped the future of my thinking, and D.Z. became a dear friend and much-loved teacher to me. When D.Z. died as I was entering my first year of doctoral work, I was devastated. I left academic philosophy behind and enlisted in the Army to go to war. I think part of me didn't think I would come back from war; maybe a part of me was thinking it would be heroic suicide or something. Well, anyway, I did come back (relatively recently) and have been trying to make a life for myself since. I still research in philosophy, but I stay away from academia for the most part. I still feel the loss of D.Z., and I think about him more as a friend than a teacher as the years go by. I miss him still. My own research has led me to the works of Peter Zapffe, and I would say that, next to D.Z. and Wittgenstein, I am most influenced by that singular, depressive Norwegian.

 

So, there is that....in the interest of full disclosure.

 

And I agree with you both Ouroboros and Legion, I think the future of AI definitely depends on getting away from the Turing style model.


Well said.

 

Btw, another book that could be interesting, if you'd like to see AI from a programmer's view, is "Introduction to Neural Networks with Java" by Jeff Heaton. It contains code and explanations for machine learning, fuzzy logic, genetic algorithms, etc.
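Heaton's examples are in Java, but just to give the flavour of one of those techniques, here is a bare-bones genetic algorithm sketched in Python. The target string, population size, and rates below are arbitrary toy choices of mine, not taken from the book.

```python
# Bare-bones genetic algorithm: evolve random strings toward a target
# by selection, crossover, and mutation. All parameters are toy choices.

import random
import string

TARGET = "consciousness"
ALPHABET = string.ascii_lowercase

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]  # keep the fittest fifth as parents
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(100)]

best = max(population, key=fitness)
print(generation, best)
```

Nothing in there "understands" the target, of course, but the improvement from generation to generation is the same basic shape of learning you see in the recognition systems I mention below.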

 

It's interesting to see that people still believe that computers cannot learn, when we have OCR, handwriting recognition, speech recognition, even anticipatory systems in cars, that actually... learn and anticipate based on principles we've learned about the brain and how it works.


....in the interest of full disclosure.

Bullshit. You just wanted to brag! :HaHa: That's quite alright. You've earned it in my opinion, and I don't mind ego.


Let me summarize my argument in brief and show why Searle and I are not necessarily at odds:

 

1: Humans have a sense of self which is phenomenal, episodic, and capable of being misled. This has been established through research, and I, like Metzinger, call it the Phenomenal Self-Model.

 

I feel you confuse 'self' with what Merleau-Ponty termed the body image (this is not to be read in the way in which it is construed in psychology). Selfhood is not in itself capable of being misled; it is nothing more and nothing less than the basic symptom of consciousness. One's self doesn't expand in volume, notionally or actually, as one's body grows, for example.

 

2: The Self-Model is a neurological process by which the human organism develops a concept of being in a world and of having a specific place in that world called "self".

 

3: Self-Models do not just manifest in human brains, but in other animal brains and in degrees.

 

I think the idea that your 2 and 3 -- with the term 'in degrees' at the end of 3 -- are consistent with one another will not bear scrutiny.

 

4: Although we do not understand all the relevant features of the brain which allow for the emergence of consciousness, we do know that there is nothing magical about brain matter. Brain matter is just a collection of electrical and chemical elements

 

In regard to living brain tissue, we don't know anything of the kind.

 

The beauty of the work on self-modeling which Metzinger has explored is that it does not rely on the computational model of the human mind, necessarily.

 

It's got to rely on something. You will have to have a very, very radical philosophical system indeed if you are saying that computers can work without algorithms or an equivalent method of processing commands.

 

Incidentally, if you think Wittgenstein basically blows (cosmological) metaphysics out of the water, I feel you are overlooking what Wittgenstein himself overlooked: that language simply does what language does. You can't use language as a reliable system for making inferences about the workings of the mind.

