
Deliberate Manipulation Of The Phenomenal Self Model


DeGaul


For those of you who are deeply interested in getting a sense of what Thomas Metzinger's work is saying, I suggest his book The Ego Tunnel. It is the most accessible of his works, yet admittedly still rather complex. It is simply an occupational hazard of this discussion that the details tend to be difficult to wrap one's head around, but they are well worth the effort.

 

As far as other works of interest which I would suggest, I highly recommend anything by Peter Zapffe. You will find his work difficult to get hold of, however, as most of it is in Norwegian. Try looking for a copy of Wisdom in the Open Air: The Norwegian Roots of Deep Ecology, or if you want a book that really touches the heart of Zapffe's thought as interpreted by an American, read Thomas Ligotti's The Conspiracy Against the Human Race.

 

All these books and thinkers address issues of consciousness, and above all the difficulties that arise because of consciousness. They have all been formative in my personal growth of understanding.

 

Thank you for these book recommendations, DeGaul. I see Ligotti has quite a few books out now since "Songs of a Dead Dreamer" and I need to catch up on some reading!

 

I hadn't heard of Metzinger, but he sounds interesting. I have just ordered the book for my Kindle reader. I am sure I will like it, since I already agree with his conclusion that there is no self (going by an Amazon review). It is very difficult to present this view and make it intelligible; I seem to have failed with my thread on Nondualism in the Spirituality forum. Anyway, you write much better than I do. Some of these brain experiments are very interesting. I have read a portion of The Tell-Tale Brain: A Neuroscientist's Quest for What Makes Us Human by V.S. Ramachandran. You might be interested in it if you haven't read it.

 

I can't go in for Neoplatonism despite some of its attractions. I just don't think souls or ideal worlds or dimensions exist out there, other than in our minds. If I have mischaracterized this philosophy, I am sure someone will correct me.


Guest end3

I think what you have here is a logistics problem of large proportions. I mean, I can see your point, but it would mean that many natural things would have a "self". And then, in a working body, you have many, many, many "selves" tied together to form a collective self.....ratios, instructions.....

 

I see them as two different types of mechanisms for collecting experience. Seems like each would be limited to the sum total of its access to experience and its system's capabilities.

 

And then you have many, many years of evolution refining the machine, version billion, the new machine possibly being built by an older machine....Wait, why would it want to reproduce?.....Would it view reproduction as success or as competition?

 

Makes me wonder what the machine would do with itself after a prolonged period of time collecting experience. Would it just sit there thinking itself successful or unworthy?

 

Makes you realize how far humanity has come.....having higher-order "selves", let's say......community, culture, etc.

 

My self, my layered, multi-systems conglomerate collective self is thinking beer would be a successful venture today. Praise the collective agreement. Glory!


I maintain that you would have to find a computer that is self-adapting. And I don't just mean in a superficial sense; I mean a computer that operates by a scheme that it is not programmed to operate by. As I say, the self-adaptive qualities of the conscious mind would, if they were determined by some naturalistic process that is in some way removed from their operational scheme, imply an infinite regress.

Self-adaptation, artificial neural networks, and evolutionary (genetic) algorithms are not new. They've been around for a while. Exactly what kind of "self-adaptation" are you referring to?
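As a toy illustration of the genetic-algorithm idea (a minimal sketch of my own, not from any particular library; the target bit string and parameters are invented for the example), here is a program that "adapts" a population toward a goal it is never shown directly, only scored against:

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]  # hidden goal; the algorithm only ever sees a fitness score

def fitness(genome):
    # Score a candidate by how many bits agree with the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Random starting population; selection and mutation do the "adapting".
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"solved in generation {generation}: {population[0]}")
        break
    survivors = population[:10]  # keep the fitter half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
```

Whether this sort of thing counts as "self-adaptation" in the sense being demanded is exactly what the rest of the thread argues about.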

 

Since you seem to be asking for cold engineers' language, I'll do my best to answer in those terms. I'd say I am asking for a system that can change its course of operation to a newly ordered scheme, without any pre-programming that serves to prompt and completely organise the transition to that scheme.


Since you seem to be asking for cold engineers' language, I'll do my best to answer in those terms. I'd say I am asking for a system that can change its course of operation to a newly ordered scheme, without any pre-programming that serves to prompt and completely organise the transition to that scheme.

We don't. We still operate by a certain given set of "machinery" that doesn't change. It's not as if we suddenly develop new kinds of neurons or pathways in our brains. We develop new paths and change our configuration, but we don't develop new kinds. So we fall under the same limitations that you are suggesting.

 

But it's true that we don't yet have any "device" that produces its own artificial neurons, though it wouldn't surprise me if we see one in the next 15 years. Considering that a shortage of the raw materials for building our traditional computers is looming on the horizon, we will have to invent a new computer design.

 

http://en.wikipedia.org/wiki/Evolvable_hardware

Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment.

Is that what you're thinking of?


Guest end3

Since you seem to be asking for cold engineers' language, I'll do my best to answer in those terms. I'd say I am asking for a system that can change its course of operation to a newly ordered scheme, without any pre-programming that serves to prompt and completely organise the transition to that scheme.

 

Can we do that ourselves? Maybe an example? And please pardon my ignorance, but why would a sufficiently experienced machine not be able to do that?


 

We don't. We still operate by a certain given set of "machinery" that doesn't change. It's not as if we suddenly develop new kinds of neurons or pathways in our brains. We develop new paths and change our configuration, but we don't develop new kinds. So we fall under the same limitations that you are suggesting.

 

 

 

I wasn't talking about limitations, I was talking about schemes of operation. To answer end3's question, an example would be when we change our mind about how to proceed through the remainder of the day, purely on a whim; or when we use a hunch to change our mind about the best course of action to take in a given circumstance.


I wasn't talking about limitations, I was talking about schemes of operation. To answer end3's question, an example would be when we change our mind about how to proceed through the remainder of the day, purely on a whim; or when we use a hunch to change our mind about the best course of action to take in a given circumstance.

And do you know what caused that change? Are you 100% certain it was total random chance that you changed your mind, or was it a supernatural entity called "yourself" that did it? How can you be sure it wasn't caused by natural circumstances?


Guest end3

I wasn't talking about limitations, I was talking about schemes of operation. To answer end3's question, an example would be when we change our mind about how to proceed through the remainder of the day, purely on a whim; or when we use a hunch to change our mind about the best course of action to take in a given circumstance.

And do you know what caused that change? Are you 100% certain it was total random chance that you changed your mind, or was it a supernatural entity called "yourself" that did it? How can you be sure it wasn't caused by natural circumstances?

 

I typically don't agree with brother Hans, but I may have to here. I think we are talking about the "genetic potential" of the machine in question, and its experience level. Take Hans' example.......could it be that the nose picked up some parts-per-billion odor that sparked the whimsical change......or that past generations' experiences were passed down to me? Why am I necessarily scared of snakes? I have no experience with snakes.

 

Pray for me Hans....lol.


I typically don't agree with brother Hans, but I may have to here. I think we are talking about the "genetic potential" of the machine in question, and its experience level. Take Hans' example.......could it be that the nose picked up some parts-per-billion odor that sparked the whimsical change......

Exactly.

 

or that past generations' experiences were passed down to me? Why am I necessarily scared of snakes? I have no experience with snakes.

Very true. It seems like some feelings, expressions, reactions, or whatever we should call them, are innate in our nature, like built-in fears.

 

Pray for me Hans....lol.

Chackabahaya.

 

I was speaking in tongues there. :grin:


I typically don't agree with brother Hans, but I may have to here. I think we are talking about the "genetic potential" of the machine in question, and its experience level. Take Hans' example.......could it be that the nose picked up some parts-per-billion odor that sparked the whimsical change......or that past generations' experiences were passed down to me? Why am I necessarily scared of snakes? I have no experience with snakes.

 

 

We are not talking about a random change that prompts a chain of instinctual activity, or something resembling an interfered-with set of operations. We are talking about starting off on a rational course, and then adapting to a hunch in a fully controlled way, so as to move seamlessly to a new rational course. In thinking rationally we use reason, and I am referring to the way we modify the framework of our assumptions so that it meets our immediate circumstances. The fact that we can amend our idea of 'what is rational' in this way indicates that we respond in ways that discard all preconceptions. Programming does no such thing; it is one single course of rationality through and through, with no faculty for reassessing circumstances on the basis of their uniqueness.


Paradox, I don't see any point in belaboring what many people have already pointed out, so I'll just come out and say rather bluntly that you are just wrong. You have no grasp of how the human mind works or of the science of neurobiology, especially as it relates to the evolution of the human mind. You have said that claiming the human mind is programmed in some way will result in an infinite regress. That is not correct. The human mind IS programmed, and its "programmer" is natural selection. There is no infinite regression involved in this, any more than there is in any particular structure of any animal which developed in response to the genetic arms race.

The capacity of the human organism to modify assumptions about itself comes from its capacity to modify its reporting about itself. The human brain is a self-modeling organ, and it has the capacity to alter its self-model in response to incoming information from its environment. The human capacity for "hunches" comes from the fact that this self-modeling and complex reaction system are not entirely conscious. We do not have conscious access to all the decision-making processes in our organism. In fact, consciousness, and particularly the self-model (the content of which we call "Ego"), is an episodic phenomenon which the organism activates in different modes only as necessary. (In deep sleep, for example, the self-model becomes inert and the organism has no "point of view" to speak of.)

The reason the human being seems so much more adaptable than a machine intelligence is that the human being has many, many, many years of evolution and complex interaction with the environment in which we have evolved. Our sense organs are highly advanced and complex systems which work in tandem with the self-modeling process in our brains to make our method of self-reporting flexible enough to help us survive in a dangerous world. That we do not yet have computers with that kind of sophistication is not surprising; however, as has been pointed out, evolving computer programs exist, as do computers with the beginnings of rather complex sensory apparatus. On the basis of their developing ability to interact with the environment, computers can be developed which have the capacity both to self-report and to modify the self-model which results from such reporting in response to a changing environment. Now, before you try to come back at me with the accusation that the "sense organs" which computers have are themselves programmed by us and limited: once again, the sense organs that WE have are programmed, by natural selection, and are not something we can alter.

 

To appeal to Metzinger again, as evolved creatures, we do not perceive reality, but rather tunnel through it. We have the blinders of our own sensory apparatus firmly in place, shaping and providing the information which the brain turns into a sense of self.

 

Now, I will admit that many of the details of interpretation in neuroscience today are up for debate. But the project of neuroscience as a whole, which recognizes the human organism as a complex reactive being created through the process of natural selection and subject to all the vagaries of its own evolutionary history, is not controversial. Human beings are not ineffable, whimsical creatures, but are very well defined survival machines, just as all living organisms are. If we seem whimsical to ourselves, it is only because, as I've already said, human consciousness does not have access to each and every decision-making process at work in the human organism. Introspection will never be enough for human beings to develop an accurate picture of how and why we do what we do. We need empirical research and experimentation for this very reason. We do not "know ourselves" in any direct, naive way. Our self-knowledge is simply knowledge of our own self-model, and is limited to the information which the self-model has evolved to report so as to ensure our survival as an organism.


Paradox, I don't see any point in belaboring what many people have already pointed out, so I'll just come out and say rather bluntly that you are just wrong.

 

Fine! Just remember what I said about your way of laying down dogma, and about over-scholastic language (which you use very liberally in your posts, especially your most recent one) masking the fundamental issues.


And just for laughs.

 

 


Guest end3

We are not talking about a random change that prompts a chain of instinctual activity, or something resembling an interfered-with set of operations. We are talking about starting off on a rational course, and then adapting to a hunch in a fully controlled way, so as to move seamlessly to a new rational course. In thinking rationally we use reason, and I am referring to the way we modify the framework of our assumptions so that it meets our immediate circumstances. The fact that we can amend our idea of 'what is rational' in this way indicates that we respond in ways that discard all preconceptions. Programming does no such thing; it is one single course of rationality through and through, with no faculty for reassessing circumstances on the basis of their uniqueness.

 

 

Are our preconceptions the result of our many years of learning and experiencing as children? I don't know; just a thought that we learn rationality, or learn to move seamlessly through the complexity of our given system, our body. I'm just saying that the difference may be that we are so much further ahead of "computers". I leave room, not much mind you, but room for the possibility of what is spoken of here....a machine learning "the best choice" and then refining that over a given set of experiences.....and then compounding that with integrated systems and experiences. It's a neat thought, but way out there, I think.

 

The notion of disregarding preconceptions equating to the unknowns of genetics, perhaps. I don't know; just speculation.

 

I don't explain myself very well; I will make another attempt later. Thanks for the response.


I don't explain myself very well; I will make another attempt later. Thanks for the response.

For once, I think I understood what you said. :grin:

 

And I think, if I understood you right, that your point is correct.


I leave room, not much mind you, but room for the possibility of what is spoken of here....a machine learning "the best choice" and then refining that over a given set of experiences.....

 

A machine doesn't learn. And it doesn't make choices. And, for that matter, it doesn't have experiences. It doesn't know of time; all it does is respond to input. A mind, on the other hand, recognises the present -- its immediate environment -- for its being unique to the instant.


A machine doesn't learn.

Heuristic machines do.

 

And it doesn't make choices.

Self-adaptive machines do.

 

And, for that matter, it doesn't have experiences.

Experience is the same as memories of past events, so why wouldn't self-learning machines have that? That's part of the principle they're built upon: to learn from experience.

 

It doesn't know of time;

Your computer knows time to much higher accuracy than you do.
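Just to make that concrete, a quick sketch using only the Python standard library: a modern machine exposes a monotonic clock with nanosecond-level resolution, while human temporal discrimination is on the order of tens of milliseconds.

```python
import time

# Two back-to-back reads of the high-resolution monotonic clock.
t0 = time.perf_counter_ns()
t1 = time.perf_counter_ns()
print(f"successive clock reads differ by {t1 - t0} ns")
```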

 

all it does is respond to input.

What do you think a keyboard is? What does the word "input device" mean to you?

 

A mind, on the other hand, recognises the present

Actually, it does not. Your cognition of the present is delayed by about 0.25-0.5 seconds.

 

-- its immediate environment --

That's one of the basic principles of adaptive machines: to react to and learn from interaction with the environment.
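A minimal sketch of that principle (my own toy example, not any specific system; the payoff rates are invented for the illustration): an agent with no built-in preferences that shifts its behavior purely from the rewards its environment feeds back.

```python
import random

TRUE_PAYOFF = {"A": 0.3, "B": 0.7}  # environment's hidden reward rates, unknown to the agent

estimates = {"A": 0.0, "B": 0.0}    # the agent's learned value for each action
counts = {"A": 0, "B": 0}

for _ in range(1000):
    # Mostly exploit the current best estimate, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < TRUE_PAYOFF[action] else 0
    # Fold the new experience into a running average for that action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # drifts toward the hidden payoff rates through interaction alone
```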

 

for its being unique to the instant.

Which they are.

 

Are you by any chance playing computer games? Adaptive AI is making its way to your own computer (or Xbox, or whatever your poison is).

 

There are so many books, research institutes, and so much information out there about these things. It is a bit surprising that you don't know anything about them.


Paradox, it doesn't make a person dogmatic to say that someone else is factually wrong. And the use of scholastic language is not a method for covering over issues but a method of precision. And really, factually speaking, I don't speak like a scholastic. The medieval scholastic tradition has a very particular method of discussing issues, based primarily on the writings of Aristotle. I, however, am an analytic philosopher, and my formal training is in the tradition of Frege and Wittgenstein....spiced up by the works of Peter Zapffe and Schopenhauer, of course, but still, I speak and write like an analytic. That is my tradition, and at least in the United States and large parts of Germany and England, it is the primary school of philosophy, dealing largely with cognitive science issues and the interaction between science and logical analysis.

 

I hate to make any kind of conjecture as to your person, but you seem to be clinging to the idea that humans and machines are so different, with an almost ethical tone to what you are saying. Do you somehow feel that it devalues human beings to speak about them this way? You often call engineering language cold. Why? Do you feel that speaking about human beings in precise scientific terms takes something away from the human experience? Is that where you are coming from?


Oh, and Ouroboros.....great replies in this thread. You seem to know a lot about AI. I'd be interested if you have any favorite references you could give for further reading.


Oh, and Ouroboros.....great replies in this thread. You seem to know a lot about AI. I'd be interested if you have any favorite references you could give for further reading.

Oh, gosh. I read books about neural nets, fuzzy logic, AI, etc. many years ago, and the books are in storage at the moment. I can't remember their titles, but I suspect they're outdated anyway. The first neural-net software I played around with was in the late '80s or early '90s, just to give some timeline. :HaHa: Damn, I'm old...

 

But I think "Understanding Artificial Intelligence" by Scientific American, however outdated (2002), is a good start, and it's available on Kindle (one of my requirements for buying books currently--since I can fill a bedroom with my books and I have no more storage...).

 

It seems like my programmer son (son#2) is planning on a degree in AI. Love it! :grin:

 

To add to the discussion, I think what is lacking in AI today is not the "intelligence" part, but the ability to predict and to feel. AI systems are built to be efficient, but the difference between current AI and humans is that we are not so efficient. We screw up, make mistakes, forget, etc., while AI systems tend to be a bit "too perfect." That's my humble opinion about it. But there are books about this too... I think it's emotions and irrationality that make us human, not just intelligence or logic. :shrug: It's a mix... probably. But that doesn't mean we won't get there one day, with self-adaptive nano-tech "computers" (they won't be computers anymore).


I started off religiously answering each of your replies to my post, but ended up getting a message saying: "You have posted more than the allowed number of quoted blocks of text". So I have had to shorten it.

 

A machine doesn't learn.

Heuristic machines do.

 

No, they don't; they just apply a filter devised by the programmer. With learning, you respond on your own terms by relating, experientially, to what is fed to you.

 

And, for that matter, it doesn't have experiences.

Experience is the same as memories of past events, so why wouldn't self-learning machines have that? That's part of the principle they're built upon: to learn from experience.

 

So one has a CD full of data that enables one's A.I. computer to respond in various ways; and if one takes that CD out of its rightful place in the CD drive and destroys it, are you saying that this is tantamount to destroying unique, beautiful, romantic, painful, exquisite (select any adjective as appropriate) memories?

 

It doesn't know of time;

Your computer knows time to much higher accuracy than you do.

 

Not if I play with the clock -- I can advance it by 8 hours and it doesn't feel jetlagged -- it just 'accepts' (if I may use your anthropomorphic language) what I have done to it; it doesn't know anything of the passage of time at all. In other words, insofar as it can be said to 'know' anything, its knowledge is entirely a regurgitation of ideas imposed upon it by knowing beings.

 

A mind, on the other hand, recognises the present

Actually, it does not. Your cognition of the present is delayed by about 0.25-0.5 seconds.

 

Delay means nothing. I used the term 'recognise'. I recognise the present for what it means to me.

 

To your "[it's] one of the basic principles of adaptive machines: to react to and learn from interaction with the environment", I say: see my preceding comments. Also, my chief point was that I assimilate my environment in the instant, respond intuitively/instinctively, and subsequently reflect on my actions. This sequence of operations does not characterise that of a 'learning' machine, for which the 'reflection' is on the same emotional level (i.e. no emotion whatsoever) as the reflected-upon response itself.

 

for its being unique to the instant.

Which they are.

 

No, machines can't recognise uniqueness for what it is; all they can do is make distinctions by way of their prescribed mode of categorisation.

 

I can't say I play computer games. And when you say 'It is a bit surprising that you don't know...', it sounds as if you have a microchip in your head when it comes to framing the concept of knowledge.


Paradox, it doesn't make a person dogmatic to say that someone else is factually wrong.

You never said I was factually wrong, and I don't believe I am.

 

And the use of scholastic language is not a method for covering over issues but a method of precision. And really, factually speaking, I don't speak like a scholastic. The medieval scholastic tradition has a very particular method of discussing issues, based primarily on the writings of Aristotle.

 

You are thinking of the school of scholasticism and have evidently been typing the wrong word into Wikipedia. I was just using a derivative of 'scholar'.

 

I, however, am an analytic philosopher, and my formal training is in the tradition of Frege and Wittgenstein....spiced up by the works of Peter Zapffe and Schopenhauer, of course, but still, I speak and write like an analytic. That is my tradition, and at least in the United States and large parts of Germany and England, it is the primary school of philosophy, dealing largely with cognitive science issues and the interaction between science and logical analysis.

 

I might have said that I am honoured to be in an exchange with an analytic philosopher, but that might be construed as patronising sarcasm, given the views I have already expressed. Instead I will say that I regard analytic philosophy in the same way that a logician regards number theory. (There is nothing patronising about the latter statement (no irony).)

 

I hate to make any kind of conjecture as to your person, but you seem to be clinging to the idea that humans and machines are so different, with an almost ethical tone to what you are saying. Do you somehow feel that it devalues human beings to speak about them this way? You often call engineering language cold. Why? Do you feel that speaking about human beings in precise scientific terms takes something away from the human experience? Is that where you are coming from?

 

You have yet to say anything about how computers experience emotion. Do you think it would be a worthy endeavour to set up an institute dedicated to the prevention of cruelty to computers (of the A.I. variety), if you learnt that many of them were being abused, smashed up, etc.?


No, they don't; they just apply a filter devised by the programmer. With learning, you respond on your own terms by relating, experientially, to what is fed to you.

Robot Formulates Hypotheses And Does Experiments

 

So one has a CD full of data that enables one's A.I. computer to respond in various ways; and if one takes that CD out of its rightful place in the CD drive and destroys it, are you saying that this is tantamount to destroying unique, beautiful, romantic, painful, exquisite (select any adjective as appropriate) memories?

CD? I'm not talking about CDs. I'm talking about memories and learning in artificial neural networks, just like your brain does.
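To make the contrast with a CD concrete (a toy sketch of my own, not anything Ouroboros cited): in an artificial neural network, "memory" is not a removable file but a pattern distributed across connection weights, each nudged a little by every experience.

```python
import random

# A single artificial neuron learning the AND function from examples.
# Everything it "remembers" about its training lives in w1, w2, and bias.
w1, w2, bias = random.random(), random.random(), -random.random()
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(100):
    for (x1, x2), target in examples:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output
        # Each example leaves a trace by shifting the weights slightly.
        w1 += 0.1 * error * x1
        w2 += 0.1 * error * x2
        bias += 0.1 * error

print([(x, 1 if w1 * x[0] + w2 * x[1] + bias > 0 else 0) for x, _ in examples])
```

Erasing that "memory" would mean scrambling the learned weights, not ejecting a disc.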

 

Delay means nothing. I used the term 'recognise'. I recognise the present for what it means to me.

Still, your ability to be aware, or to recognize, is a delayed function of your brain.

 

 

To your "[it's] one of the basic principles of adaptive machines: to react to and learn from interaction with the environment", I say: see my preceding comments. Also, my chief point was that I assimilate my environment in the instant, respond intuitively/instinctively, and subsequently reflect on my actions. This sequence of operations does not characterise that of a 'learning' machine, for which the 'reflection' is on the same emotional level (i.e. no emotion whatsoever) as the reflected-upon response itself.

Deep Blue beat Kasparov by learning, anticipating his moves, and changing strategy.
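For what it's worth, the "anticipating his moves" part of a chess engine is game-tree search: assume the opponent always plays their best reply, and pick your move accordingly. A minimal minimax sketch on an abstract game (a toy of mine, not Deep Blue's actual code; get_moves and evaluate are hypothetical stand-ins for a real game's move generator and position evaluator):

```python
def minimax(state, depth, maximizing, get_moves, evaluate):
    # Look ahead, assuming the opponent always picks their best reply.
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        return max(minimax(m, depth - 1, False, get_moves, evaluate) for m in moves)
    return min(minimax(m, depth - 1, True, get_moves, evaluate) for m in moves)

# Hypothetical stand-ins: states are integers, moves step up or down, higher is better.
get_moves = lambda s: [s - 1, s + 1] if abs(s) < 3 else []
evaluate = lambda s: s
print(minimax(0, 4, True, get_moves, evaluate))
```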

 

No, machines can't recognise uniqueness for what it is; all they can do is make distinctions by way of their prescribed mode of categorisation.

Not if they're built to work the same way as our brains.

 

I can't say I play computer games. And when you say 'It is a bit surprising that you don't know...', it sounds as if you have a microchip in your head when it comes to framing the concept of knowledge.

There are games out today where the NPCs (non-player characters, like enemies or allies) learn, anticipate, and change behavior.


You have yet to say anything about how computers experience emotion. Do you think it would be a worthy endeavour to set up an institute dedicated to the prevention of cruelty to computers (of the A.I. variety), if you learnt that many of them were being abused, smashed up, etc.?

Yes, current models of AI lack emotions and imagination. There are AIs that anticipate and change behavior accordingly, but in general they don't do much "thinking outside the box." Not yet, at least, with the exception of the "Adam" robot (linked in my previous post).

 

I read somewhere that current AIs only measure up to a few percent of human capacity, so you're right that we're not there yet, but we will be, unless our world ends or we run out of resources.

 

 


Humanoid robot that learns like a child.

http://www.robotcub.org/


You have yet to say anything about how computers experience emotion. Do you think it would be a worthy endeavour to set up an institute dedicated to the prevention of cruelty to computers (of the A.I. variety), if you learnt that many of them were being abused, smashed up, etc.?

Yes, current models of AI lack emotions and imagination. There are AIs that anticipate and change behavior accordingly, but in general they don't do much "thinking outside the box." Not yet, at least, with the exception of the "Adam" robot (linked in my previous post).

 

I read somewhere that current AIs only measure up to a few percent of human capacity, so you're right that we're not there yet, but we will be, unless our world ends or we run out of resources.

 

Humanoid robot that learns like a child.

 

I see, so whether or not computers experience anything is purely a matter of degree of complexity...?

Right, so if I kick my computer and it flickers, is that a little bit of cruelty that I have inflicted upon it? Well, I hope the cops around here aren't of the zero-tolerance variety :Doh:


This topic is now closed to further replies.