Goodbye Jesus

Deliberate Manipulation Of The Phenomenal Self Model


DeGaul


Does it ever occur to you that not every human brain on the planet works in exactly the same way, and that you sound like an arrogant academic twat who dismisses those who refuse to speak in academic terms or cite "experts", as if they were some lower form of life? You don't know everything, and being alive is not a one-size-fits-all affair. God, it must be hard to live in America with all this bullshit constantly flying around.

 

Says the "ex" prostitute who chose to live a lifestyle that didn't require her to use the academic part of her brain. Perhaps she felt above academia? Or perhaps not worthy?

 

Galien, don't embarrass yourself.

 

Noumena - Stop! :stop: There is no need.


Why would I want to listen to anyone so full of pride over their own achievement? I wouldn't. I don't care if you have seven degrees, if you have no control over your own ego then you have nothing to teach me.

 

You don't have to.


IN THE NAME OF ALL THAT IS SACRED, CAN WE PLEASE LOCK THIS THREAD?


IN THE NAME OF ALL THAT IS SACRED, CAN WE PLEASE LOCK THIS THREAD?

 

Hold your breath for ten, then three in, three out, till it's all ok :)


IN THE NAME OF ALL THAT IS SACRED, CAN WE PLEASE LOCK THIS THREAD?

But nothing is sacred. :shrug:

 

Hold your breath for ten, then three in, three out, till it's all ok :)

I'm glad that you can see the humor in all this. :grin:


Guest end3

IN THE NAME OF ALL THAT IS SACRED, CAN WE PLEASE LOCK THIS THREAD?

But nothing is sacred. :shrug:

 

Hold your breath for ten, then three in, three out, till it's all ok :)

I'm glad that you can see the humor in all this. :grin:

 

 

All these life questions with no for-sure answer sure spark the feelings.


All these life questions with no for-sure answer sure spark the feelings.

Yes. A lot of sparks. Life-sparks if you will. :)


I do take a great deal of pride in my study of philosophy, but not in my degree. It isn't the degree that matters, but the hours of hard work I put in to get it. I'm no golden child. When I was getting that degree I was desperately poor, living off of good will and the scraps I made as a security guard. I fought for that degree, and then, once I got it, I fought for my country, and now I do anything I can to make ends meet for my wife and me. I work hard every day, and I have had more than enough experiences to build up plenty of common sense. If you think it is my degree that makes me arrogant, you'd be very wrong about that. I'd say it is the bullshit I've seen and all the crap I had to deal with in the Army that made me intolerant of the too-tolerant attitude that many people seem to admire so much.

 

Ouroboros, paradox seems to want me silenced pretty badly, and if it means that much to him, please close the thread. (At your own discretion, of course.) I've learned a lot from this thread, especially from you and from Legion. I really will try to get to Rosen, Legion; I just hope that I really am up to reading him. He is pretty dense, from what I've been told.


Ouroboros, paradox seems to want me silenced pretty badly, and if it means that much to him, please close the thread. (At your own discretion, of course.)

I think this is a good thread too. It got a bit heated at times, but it seemed to cool down and continue regardless.


Well, if you want to continue, in the interest of cooling it down, let me change to the question of not just AI, but android possibilities. I mentioned Roboroach before, and the computer run on rat neurons, but I think there is a level of ethical concern that needs to be addressed here: if we can control a roach with a computer interface, is that a freakishly immoral thing to do? I've always loved the novels of Philip K. Dick, especially because of the sort of ethical dilemmas he raises in relation to "artificial" life. I think I mentioned in a different thread how Metzinger himself expressed concern at the creation of AI because we couldn't have a clear idea of the potential suffering that such a creature might have to deal with. Are we getting a little too close to playing god by manipulating the very foundations of personhood like this? I don't really know, but there is something kind of creepy about imagining all those rat neurons slaved to the functions of a computer. Even if it isn't immoral, it certainly gives a person the willies.


Honestly, I don't know if it is immoral or not. :shrug: I believe people react more from emotion than reason to this.


I tend to think you are right. It does seem more like an emotional reaction. Can I ask you, Ouroboros, have you ever had any experience with non-algorithmic programs? I have to admit, I have only a very weak understanding of the whole concept of a non-algorithmic program. The way it was explained to me is that it is a kind of program which doesn't have a well-defined input or output, but instead gathers input from sensory apparatuses which scan over a large range of external data and then creates something like a novel response to that input. It sounds similar to some of the stuff you mentioned about evolutionary programming and such. I'm pretty interested in your thoughts on this. Also, if you're reading this Legion, I wouldn't mind hearing what you think about this as well.


DeGaul, I've not kept up with the latest advances in computation over the past 6 years or so. But if these "non-algorithmic programs" (which strikes me as an oxymoron) can be simulated on a Cray or any other computer with von Neumann architecture, then they are still Turing computable.

 

I suspect what is required for an artifact to manifest complexity is an ability for its software outputs to change its own hardware. In this way a causal loop (an impredicativity) is established.


Now that is interesting, a computer with hardware that displays plasticity. I wonder if such a thing is possible?


I tend to think you are right. It does seem more like an emotional reaction. Can I ask you, Ouroboros, have you ever had an experience with non-algorithmic programs?

I suspect that you are thinking of self-organizing information systems (or networks). Do you have any experience using BitTorrent? Information is decentralized. Your access to the bits and pieces is semi-random, limited by who is out there with the piece you need, how far away they are in time, and so on. You don't control the routers, or which path the data takes, or how long it will take to reach you. Etc.

 

I think the phrase "non-algorithmic" can be a little misleading. There are algorithms involved, but the algorithms are run and executed by independent "agents" in a network, just as each neuron follows a given natural and physical behavior (an algorithm) but is not in control of the other neurons in the brain. So the algorithms exist, but only locally (in a reductionistic way), while the overall "algorithm" is the result of the process (it emerges in a "holistic" way). Perhaps the word "meta-algorithm" is a better choice? Just thinking.
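To make that "local algorithms, emergent meta-algorithm" idea concrete, here is a toy sketch of my own (not any system mentioned in the thread): an elementary cellular automaton in which every cell runs the same tiny neighbor-only rule, yet a global pattern emerges that no single cell computes.

```python
# Hypothetical illustration: each "agent" (cell) runs only a local rule,
# yet the whole row evolves a global pattern (the emergent "meta-algorithm").

def step(cells, rule=90):
    """One update of an elementary cellular automaton; each cell
    looks only at its immediate neighbors (its local algorithm)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # 3-bit neighborhood
        out.append((rule >> index) & 1)              # look up the rule's bit
    return out

def run(width=31, steps=8):
    cells = [0] * width
    cells[width // 2] = 1  # a single "on" seed cell
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

history = run()
```

Rule 90 makes each new cell the XOR of its two neighbors; the triangle pattern that grows from the seed belongs to the row as a whole, not to any one cell, which is the "holistic" point above.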

 

There are some companies that specialize in building artificial neural network cards where the chips are like "mini computers." Each "mini computer" is like a cell. When you have thousands of these on a card, processing semi-independently of each other, you have a computer that no longer works like the famous Turing machine.

 

One reason, IMO, a Turing model (traditional computers) can't compete with this kind of distributed network processing is the linearity of the processing. In a linear process, you have to loop code over the segments that emulate the single "cells" in the network. While the network can process in parallel, in real time, the serial process slows down further with every cell you add. So basically, with a Turing machine it could take hours, days, or years to emulate one second in a network model.
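A rough sketch of that serial-emulation cost (toy numbers of my own, not from any real system): a single CPU must visit every cell, and every cell's inputs, once per simulated time step, so the work per step grows with network size, while parallel hardware would update all cells in one step.

```python
# Toy illustration: serially emulating one time step of a small
# fully-connected network of threshold "cells" on a single CPU.

def serial_step(states, weights):
    """Update every cell one after another, as a single CPU must."""
    n = len(states)
    new_states = []
    for i in range(n):                      # one pass over all cells...
        total = sum(weights[i][j] * states[j] for j in range(n))  # ...and all their inputs
        new_states.append(1 if total > 0 else 0)
    return new_states, n * n                # work units spent on this one step

n = 4
weights = [[1 if i != j else 0 for j in range(n)] for i in range(n)]  # all-to-all links
states = [1, 0, 0, 0]                       # one active cell to start
states, work = serial_step(states, weights)
```

With n cells and all-to-all connections, the serial cost is n² work units per simulated step, so doubling the cell count quadruples the time per step, while true parallel hardware pays for one step regardless of n.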


DeGaul, I've not kept up with the latest advances in computation over the past 6 years or so. But if these "non-algorithmic programs" (which strikes me as an oxymoron) can be simulated on a Cray or any other computer with von Neumann architecture, then they are still Turing computable.

 

I suspect what is required for an artifact to manifest complexity is an ability for its software outputs to change its own hardware. In this way a causal loop (an impredicativity) is established.

The Internet is not a Turing machine, but it consists of many independent Turing machines. The Internet emerges from the parts. And it also changes its design, structure, and behavior.

 

And on another note, I suggest that you look into how protein synthesis works in our cells, and then compare the messenger RNA and the ribosome's decoding of the codons with the Turing model and its tape. The likeness is striking. (Except for some differences. For instance, the Turing machine is more advanced: it can go forward and backward and change the code, while the translation process is one-directional, read-only decoding.)

 

 

Not saying that this is all there is to how a cell works, there is a lot more, but protein synthesis, cell division, etc., in the cell are all natural processes that most likely can be expressed with algorithms.
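The tape analogy can be sketched like this (the four-entry codon table is a real subset of the standard genetic code; everything else is just my illustration): the "head" reads the mRNA string one codon at a time, left to right, read-only.

```python
# Sketch of the ribosome as a restricted Turing machine: a read-only
# head that moves in one direction over the mRNA "tape", three letters
# (one codon) per step. Tiny subset of the standard genetic code:
CODON_TABLE = {
    "AUG": "Met",   # start codon, also methionine
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # stop codon
}

def translate(mrna):
    """Read the tape left to right, one codon per step, never rewriting."""
    protein = []
    for pos in range(0, len(mrna) - 2, 3):   # head advances one codon at a time
        codon = mrna[pos:pos + 3]
        amino = CODON_TABLE.get(codon, "?")
        if amino == "STOP":                  # halt state
            break
        protein.append(amino)
    return protein

protein = translate("AUGUUUGGCUAA")
```

The contrast with a full Turing machine is exactly the one in the post above: the head here never moves backward and never writes to the tape.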


Now that is interesting, a computer with hardware that displays plasticity. I wonder if such a thing is possible?

That's exactly the way we have to go. And I did post some links earlier with research going on in this area. One important application of self-organizing and/or "evolutionary" devices is for space flight. A trip to Mars would require a system that can adapt and repair itself. I believe this research is just in its early stages, but this is most definitely where we have to go.

 

I found a very good article (blog) about the 10 (+1) real differences between our contemporary computers and the brain. If we understand these differences, we can work our way to make a new "computer" modeled after the brain: http://scienceblogs.com/developingintelligence/2007/03/why_the_brain_is_not_like_a_co.php

 

---

 

DeGaul, this link might interest you: SyNAPSE by DARPA.


I don't see why software can't be developed to 'model' brain function as an application of standard computer hardware. Computers already work great, and they're getting faster all the time- some say exponentially.

 

Computers aren't really my 'thing'- I find them interesting only because they're useful and only to the extent that they're useful; I'll never be a programmer beyond some VERY basic stuff that I've learned in classes. But I do know that there is already software out there that refines complex designs via an 'evolutionary' process (which is akin to SOME learning functions in our brains). Two applications that I'm aware of are shipping logistics and complex piping and flow systems... but I'm sure there are others.

 

So as neuroscientists learn more about brain function on the macro and micro level, I see no reason why it can't be modeled via software. Hell, other folks in this thread have already pointed out that primitive animal nervous systems have already been modeled very successfully. A human-like mind via software may be just a matter of incremental steps. And while it IS a huge leap between the current cockroach-level models and that magnificent clump of cells within my own oversized head... it's only been half a century or so for commercial (and consumer-level) computers to progress from punch-tapes to our current state. Why can't software make a similar leap?
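As a hedged sketch of that "evolutionary" refinement process (a toy hill-climb of my own, not the logistics or piping software mentioned above): keep a population of candidate designs, score them, keep the best half, and mutate copies of the survivors.

```python
# Toy evolutionary design refinement. The fitness function is a
# stand-in: in the real applications it would be shipping cost,
# flow efficiency, etc.

import random

def fitness(design):
    """Pretend score: designs closer to the all-ones layout are 'better'."""
    return sum(design)

def evolve(length=10, population=20, generations=30, seed=42):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]   # selection: keep the best half
        children = []
        for parent in survivors:
            child = parent[:]                # copy the parent design...
            child[rng.randrange(length)] ^= 1  # ...and flip one random bit (mutation)
            children.append(child)
        pop = survivors + children           # elitism: the best always survive
    return max(pop, key=fitness)

best = evolve()
```

Because the best design is never discarded, fitness can only ratchet upward; the random one-bit mutations supply the variation, which is the same selection-plus-variation loop the post compares to "SOME learning functions in our brains".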


I'm totally in agreement with you, Ouroboros, about trying something like replacing "non-algorithmic" with "meta-algorithmic", or something to that effect. Maybe we could call them "non-intentional algorithms"; as with the example of the internet or BitTorrent, it seems to me that what you get is a sort of holistic algorithm that arises in a somewhat unpredictable way out of very predictable and planned algorithms.

 

Yes, self-repairing machines seem to be the way forward, but the little bit that I'd like to add to this idea is allowing for mutation on a machine level. It is the mistakes made when DNA replicates that provide the raw material for evolutionary change. I think that a truly evolving computer system would have to have room for novel and not necessarily useful changes in basic processes. Maybe a sort of randomizer which serves to alter code in random ways that are generally not debilitating? And what about something like machine "reproduction"? Machines can certainly learn from one another, but would there be a purpose to machine reproduction? If a machine had a self, would it have any reason to make more robot selves? And if it did, why would it want anything more than a copy of itself? I mean, would it be like asexual reproduction, with room for diversity coming solely out of random code changes?

 

Ahhh, I know this is all just speculation, but it is kind of fun speculation.

 

If anything, all this talk has only further convinced me that AI, in a full-bodied sense, is genuinely possible. Very cool.


This is certainly possible: I work with devices called Field Programmable Gate Arrays (FPGAs), which can be configured to implement whatever digital logic you like. These can also be dynamically self-reconfigured (to a certain extent) while operating.

 

I have been inspired by this thread to experiment with the implementation of neural networks on one of these devices, although for this purpose dynamic self-reconfiguration is not required.

 

Most implementations of neural networks I have seen are run as software on normal computers.
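For a sense of what a software-only neural net amounts to, here is a minimal sketch (weights picked by hand for illustration; a real network would learn them by training): a 2-2-1 feedforward network of threshold neurons that computes XOR.

```python
# Minimal hand-wired feedforward network: two inputs, two hidden
# threshold neurons, one output neuron, computing XOR.

def step_fn(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, thresholded."""
    return step_fn(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    h1 = neuron([a, b], [1, 1], -0.5)       # hidden unit 1: fires if a OR b
    h2 = neuron([a, b], [1, 1], -1.5)       # hidden unit 2: fires if a AND b
    return neuron([h1, h2], [1, -2], -0.5)  # output: OR but not AND, i.e. XOR
```

Each neuron is just a weighted sum and a threshold; a trained network differs only in how the weights were found, and an FPGA version would differ only in computing all the neurons at once in hardware.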

 

I'm not clear why it would be necessary for consciousness to arise from non-Turing-computable machines, although I'm not saying that consciousness can arise from software processes either.

 

I have found a useful book on Neural Networks here:

Neural Network Design

 

 

My local university also runs a course in "Computational Neuroscience" - if only I had the time...

 

 

 

 


Andyjj, if you could, maybe let us know what happens when you implement that neural net. I'd be really curious to hear how your experiment goes. I've been getting swamped with work since I started this thread, so I've had less time to devote to working out computational models of consciousness. I plan on doing a lot of reading this weekend, though, so hopefully I'll be able to contribute something actually useful to this thread again. But don't expect too much from me until the weekend :grin:. (And until I recover from a work project this Friday that will take me overnight... I have to help run a new computer network at a Mr. Money or something like that... the details are still pretty foggy on my supervisor's end.)

 

Thanks to everyone who keeps contributing new links, names of reference books, etc. I think that's really the best way to turn this thread into a resource for interested people, not just a discussion.


Ahhh, I know this is all just speculation, but it is kind of fun speculation.

 

If anything, all this talk has only further convinced me that AI, in a full-bodied sense, is genuinely possible. Very cool.

Absolutely.

 

My thoughts are that when we have the nanotechnology to create evolutionary devices, there is a huge risk that these new artificial brains will not only be able to do what we do, but will be faster and smaller. I'm certain that our biological apparatus is not the most efficient "brain" that could be made. We will be out-thought and out-smarted by our own creation when it comes to that. But it will take a long time before we get that far.


Couldn't be bothered reading all 9 pages, but I noticed somewhere that someone made the argument that AI will never be like human consciousness because binary information (what computers ultimately translate their data input, like audio and visual, into) isn't equivalent to actual human experience. Was this addressed (or even a valid point, for that matter)? (If it was, a link will suffice, thx.)

 

Otherwise, I do find this whole conversation interesting. I will have to read through it tomorrow when I've got more time (going to bed now).


Our brains are constructed from neurons. The information processed by these is electrical pulses, which are sort of digital, although they get converted to analog potentials inside the brain. I don't see a problem in principle in exchanging a biological neuron for a silicon one. Knowing how to connect them together correctly is another matter!


As an aside, I read in last week's New Scientist that scientists had managed to detect (by planting electrodes under the skull, on the surface of the brain) that someone was thinking "oo", "ee" and "aa" sounds. It surely can't be long before it will be possible to directly capture intentional thoughts.


This topic is now closed to further replies.