
Reductionism And Materialism Are Not Scientific Givens


Open_Minded


 

And here we are arguing both sides! It's great. (Shyone, I see you in there!) :D

 

I happen to be on the side of Penrose and Hameroff and think that anything built will have to be able to access this quantum realm and we would have to have an indisputable understanding of every organ in our body. :shrug:

I'm going to get philosophical for just a moment. Or maybe the word is maudlin.

 

We are the product of eons of evolution, and we are suited to our environments very well. The smell of the earth, the feel of a breeze, the sight of blue sky and sunshine are part of what makes us human. Warmth of skin, combat for survival, the taste of food make us human.

 

No machine will ever share those things with us, even if they can walk and eat and "reproduce" in some fashion.

 

But then, I wish I had an eagle's eyesight, a hunting dog's sense of smell, and the speed of a cheetah. I am animal, but not those animals. I can't share their joys or sorrows, and they can't share mine.

 

Machine intelligence won't be the same as human intelligence. It will be different, foreign, alien, machine-like.

 

Different doesn't mean non-existent however.

Damn...stop making sense! I'm so confused! :HaHa:


I don’t feel that you’re answering me Hans. I mean, do you anticipate or not? Would you agree that all minds anticipate?

Okay. I'll play. All minds anticipate.

 

In computer games, the AI engine for the NPCs can be made to anticipate moves. Are they minds?

 

Take Halo, for instance. The Covenant monkeys do much more than just shoot randomly. They are programmed to take cover, to search and kill, and many other things, and they behave like they are looking for you, hunting you, chasing you, anticipating your next move. Some of them even go on suicide runs, charging toward you with a grenade in each hand.
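To make that kind of "anticipation" concrete, here is a toy sketch invented for illustration (it is not Halo's actual AI, which isn't public): the NPC extrapolates the player's recent movement and acts on the predicted position.

```python
# Toy sketch of NPC "anticipation": extrapolate the player's recent movement
# and act on the predicted position. Invented for illustration only.

def predict_position(positions, lookahead=1.0):
    """Linearly extrapolate the player's next position from the last two samples."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0            # estimated velocity per tick
    return (x1 + vx * lookahead, y1 + vy * lookahead)

def choose_action(npc_pos, positions, grenade_range=5.0):
    px, py = predict_position(positions)
    dist = ((px - npc_pos[0]) ** 2 + (py - npc_pos[1]) ** 2) ** 0.5
    if dist < grenade_range:
        return "charge with grenades"     # the 'suicide run' behaviour
    return f"take cover and aim at predicted spot ({px:.1f}, {py:.1f})"

# Example: the player has been moving steadily toward the NPC.
print(choose_action((0.0, 0.0), [(8.0, 0.0), (7.0, 0.0), (6.0, 0.0)]))
```

The NPC "anticipates" in exactly this minimal sense: it acts on a prediction, yet everything it does was spelled out in advance by the programmer.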

 

This can be done. I think I read about a chem-lab robot which could design its own experiments, anticipate the outcome, run the experiment, and then compare the actual result with the expected one.
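The shape of the loop that robot is described as running is easy to sketch. Everything below is made up for illustration (the model, the numbers, the stand-in "experiment"); the point is only the predict-act-compare cycle.

```python
# Hypothetical sketch of an anticipate -> experiment -> compare loop.
# run_experiment() is a noisy stand-in; a real robot would drive lab hardware.

import random

def predict_yield(temperature):
    # The robot's internal model: it *anticipates* the outcome before acting.
    return 0.5 + 0.01 * (temperature - 20)

def run_experiment(temperature):
    # Stand-in for the physical experiment, with some measurement noise.
    return 0.5 + 0.012 * (temperature - 20) + random.gauss(0, 0.02)

for temp in (20, 30, 40):
    expected = predict_yield(temp)
    actual = run_experiment(temp)
    print(f"T={temp}C  expected={expected:.3f}  actual={actual:.3f}  "
          f"error={actual - expected:+.3f}")
```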

 

They are not examples of mind. They're programmed to do this. What about our minds? Are we programmed to anticipate, or is that something we get from outer space? Or perhaps we learn to anticipate? Or maybe it's programmed in our DNA to create an anticipatory brain?

 

But you never answered my question. I asked first.

 

How do you know if I am a mind or not? How do you know whether my anticipation comes from a real mind or from a pre-programmed AI engine? How do you know whether a computer would be anticipating based on programming only, or by DNA, or by experience?

 

Okay cool. So you almost envision something quasi-organic in its organization, even though it may be radically different in material realization from Earth’s natural organisms?

Right. That's why people get confused when we call a synthetic mind a computer. It's like calling a flat-screen TV "the tube." Now that 3D TVs are starting to come out on the market, we can't just call them TVs anymore; we have to call them something else, otherwise all the oldies get confused.

 

Do you know where the word "computer" comes from? It used to be a job description: the original computers were people who did calculations by hand, not digital machines. Charles Babbage later designed the first mechanical computer (he never got it built and running, but the idea was solid).


We can't create what is Ground in Nature.

I respectfully disagree. I think we will one day create organisms from scratch in a lab.

Looks like you do agree with me after all! :HaHa:

 

The next step above that is to create organisms with a simple brain. Researchers are studying the brains of flies because they are so simple. Once the fly brain has been mapped, it will be possible to reconstruct it.


 

We are beginning to touch on human nature in a way that many people find distasteful.

 

"Heartfelt" is an outdated term, and we know this intellectually. So, keeping thought and feeling in the mind...

 

Feelings are thoughts. Through evolution, we are "programmed" to experience some emotions; particularly those emotions that favor our individual and group survival. I cry at movies that are "aimed" at my emotions, and I can't help that. I have tons of empathy for babies, people in pain or needing help. I also recognize that these are brain reflexes to situations. They are subjectively different from an urge to defecate or a hunger for food, but they are some of the most primitive thoughts we have.

 

Anger, rage, lust, love and other feelings are part of nature, and part of our nature, and hardwired into our brains.

 

Even so, it is possible for us to rewire, miswire, or detach feelings and/or sublimate them. That is a property of having a huge brain capacity. The results may be unpredictable, however, and so we see foot fetishes, serial killers, and rapists who have a disconnect between the usual desires and the usual courses of action available to us.

 

In summary, we can't even appreciate how deeply ingrained our emotions are, and how much these are triggered by external occurrences. Many will refuse to consider that the love they have for their family has an evolutionary basis. Original thoughts are, I contend, rare. But it may depend on what you consider to be an original thought.

 

You are right about us entering territory many find distasteful. For a long time I counted myself as one who felt this way: I struggled to accept that the emotion of 'love' is a chemical reaction, and I have struggled when reading about neuropeptides and the impact that hormones in the body have on our 'thoughts'. We can already synthesise a range of hormones artificially; I can't see any reason why we won't one day be able to create receptors to 'read' them ...

 

Our capacity to 'reason' is intertwined with 'emotion' - and the emotions we feel are affected by so many things such as the amount of fat in our tissue, the fuel we put into our bodies...

 

 

I don't 'like' these ideas. I want to 'own' my emotions and decision-making. I have this 'sense' that if I were given medication that affected my mood and my attachments, a 'real' mood and 'real' attachments would somehow 'break through' and have more value ... but that is because I am a romantic idealist; I see no evidence for it ...


 

I don't 'like' these ideas. I want to 'own' my emotions and decision-making. I have this 'sense' that if I were given medication that affected my mood and my attachments, a 'real' mood and 'real' attachments would somehow 'break through' and have more value ... but that is because I am a romantic idealist; I see no evidence for it ...

We do own our emotions! We have limits, perhaps, but we have a fantastic ability to use reason to overcome emotions. I may feel rage, but then I can calm myself down and realize the consequences of unrestrained rage.

 

Give me enough testosterone, and I might have "steroid rage" or I may not. We have some control, but maybe some people have less than others.

 

For most people, we have significant control over our emotions and the actions that might result from those emotions. That's what makes our legal system make sense. We are responsible for our behavior and the control of our emotions.

 

Isn't that the same as "owning" them? To modify them, even change them, based on reason and reality?


All minds anticipate.

 

In computer games, the AI engine for the NPCs can be made to anticipate moves. Are they minds?

 

Take Halo, for instance. The Covenant monkeys do much more than just shoot randomly. They are programmed to take cover, to search and kill, and many other things, and they behave like they are looking for you, hunting you, chasing you, anticipating your next move. Some of them even go on suicide runs, charging toward you with a grenade in each hand.

 

This can be done. I think I read about a chem-lab robot which could design its own experiments, anticipate the outcome, run the experiment, and then compare the actual result with the expected one.

 

They are not examples of mind. They're programmed to do this. What about our minds? Are we programmed to anticipate, or is that something we get from outer space? Or perhaps we learn to anticipate? Or maybe it's programmed in our DNA to create an anticipatory brain?

Okay, Hans, you assert that they already make artifacts which anticipate. As I was telling Shyone, I have heard it rigorously argued that if an artifact can be partitioned into hardware and software, and the software is unable to effect changes in the hardware, then the artifact is incapable of anticipation. Can you show me that this has been done? If so, I would be very interested in it.

 

How do you know if I am a mind or not? How do you know whether my anticipation comes from a real mind or from a pre-programmed AI engine? How do you know whether a computer would be anticipating based on programming only, or by DNA, or by experience?

Well I think this touches upon philosophy. I think the only thing I can be certain of is that I exist and this is evidenced by observing the operation of my mind. As for everything outside of my mind, I believe I can only form models of it all, which enables me to make a handful of accurate predictions. As for your other questions, please see my comments above.

 

Right.

Well I have to admit, I don’t see anything barring us, for instance, from trying to duplicate the natural processes which gave rise to life on Earth. If we attempted to arrive at mind via life (as natural evolution did it), then I think it might be possible.


 

And here we are arguing both sides! It's great. (Shyone, I see you in there!) :D

 

I happen to be on the side of Penrose and Hameroff and think that anything built will have to be able to access this quantum realm and we would have to have an indisputable understanding of every organ in our body. :shrug:

I'm going to get philosophical for just a moment. Or maybe the word is maudlin.

 

We are the product of eons of evolution, and we are suited to our environments very well. The smell of the earth, the feel of a breeze, the sight of blue sky and sunshine are part of what makes us human. Warmth of skin, combat for survival, the taste of food make us human.

 

No machine will ever share those things with us, even if they can walk and eat and "reproduce" in some fashion.

 

But then, I wish I had an eagle's eyesight, a hunting dog's sense of smell, and the speed of a cheetah. I am animal, but not those animals. I can't share their joys or sorrows, and they can't share mine.

 

Machine intelligence won't be the same as human intelligence. It will be different, foreign, alien, machine-like.

 

Different doesn't mean non-existent however.

It just dawned on me that here you are bringing in the hard problem of consciousness. How are we going to program a machine to know what it is like to be a machine? I also, unknowingly, brought this into my post when I mentioned that we would have to have a complete understanding of every organ in the body. Even with that understanding, a non-organic machine would still have to be programmed to have subjective experiences. How is that possible from objective input?

 

Here's some more I read:

 

In my view, we can distinguish two different explanatory gaps concerning consciousness, which require different treatment. On the one hand, we have the (“easy”) explanatory gap, which manifests itself in examples such as Nagel’s bat example or Jackson’s Mary thought-experiment: the idea here is that a complete physical description of the world does not necessarily put someone in a position to know what certain experiences are like. This is due to the fact that we might lack the relevant phenomenal concepts.

 

On the other hand, there is a remaining explanatory gap which manifests itself in the fact that knowing a complete physical description of the world would not put us in a position to know what different experiences different organisms are having, even if we possess phenomenal concepts to refer to all these experiences.[1] The problem is that such a physical description of the world would not entail a priori a phenomenal description of the world.

 

The explanatory gap in this second sense, that is, the fact that a complete physical description of the world (P) would not entail a priori a phenomenal description of the world (Q) is invoked by recent formulations of the conceivability argument against physicalism. For instance, Chalmers (2002) formulates the argument as follows:

 

(1) It is conceivable that P&~Q.

 

(2) If it is conceivable that P&~Q, it is metaphysically possible that P&~Q.

 

(3) If it is metaphysically possible that P&~Q, then materialism is false.

 

(4) So, materialism is false. (2002, 249)

 

The first premise of this argument claims that P&~Q is conceivable: by this, it is meant that we can conceive of P&~Q without contradiction, that is, it is not a priori false. Or, in other words, P→Q is not a priori true: this corresponds to the second characterization of the explanatory gap that I have sketched above, and this is what is supposed to entail that P→Q is not metaphysically necessary and therefore materialism is false.

 

The question now is: Could we use an explanation along Harman’s lines in order to solve the problem posed by the conceivability argument? I think that it is clear that we cannot, and the reason is the following: The problem posed by the explanatory gap in this second sense (that is, that P→Q is not a priori true) cannot be solved merely by saying that we are not in a position to know P→Q a priori because we fail to have some phenomenal concepts. Crucially, we are also unable to infer Q from P a priori even when we possess all the relevant concepts, both physical and phenomenal, (or so the first premise of the conceivability argument says). Therefore, in order to answer the challenge posed by this argument, we need to either show that in fact we are really able to infer Q from P a priori, or argue that even if P→Q is not a priori true, that does not entail that it is not metaphysically necessary. And, to the best of my knowledge, Harman’s model does not accomplish (and does not even aim to accomplish) any of these two tasks.

 

What Harman’s model does provide is a very plausible response to Jackson’s knowledge argument and Nagel’s bat argument: it can be argued that the reason that Mary could not learn what it is like to see red from her black-and-white room, or the reason that we cannot know what it is like to be a bat, is that we lack the relevant phenomenal concepts given that we haven’t had the corresponding experiences (and this is perfectly compatible with materialism). However, as we have seen, the conceivability argument sketched above poses a harder challenge for physicalism, which still has to be answered satisfactorily.[2,3]

Newsletter on Philosophy and Computers. How Many Explanatory Gaps Are There?

 

I wish I could live long enough to find out!


 

I don't 'like' these ideas. I want to 'own' my emotions and decision-making. I have this 'sense' that if I were given medication that affected my mood and my attachments, a 'real' mood and 'real' attachments would somehow 'break through' and have more value ... but that is because I am a romantic idealist; I see no evidence for it ...

We do own our emotions! We have limits, perhaps, but we have a fantastic ability to use reason to overcome emotions. I may feel rage, but then I can calm myself down and realize the consequences of unrestrained rage.

 

Give me enough testosterone, and I might have "steroid rage" or I may not. We have some control, but maybe some people have less than others.

 

For most people, we have significant control over our emotions and the actions that might result from those emotions. That's what makes our legal system make sense. We are responsible for our behavior and the control of our emotions.

 

Isn't that the same as "owning" them? To modify them, even change them, based on reason and reality?

 

I don't think we 'own' them to the extent that I would like to - so much of our response to life is down to our conditioning and the presence of hormones and the effectiveness of our receptors ... who is this 'we' that exercises control over our emotions ... beyond more chemical reactions, neuropeptides ...


 

 

 

(1) It is conceivable that P&~Q.

 

(2) If it is conceivable that P&~Q, it is metaphysically possible that P&~Q.

 

(3) If it is metaphysically possible that P&~Q, then materialism is false.

 

(4) So, materialism is false. (2002, 249)

 

Newsletter on Philosophy and Computers. How Many Explanatory Gaps Are There?

 

I wish I could live long enough to find out!

 

I dislike arguments like this because of two things:

 

1. Transforming possibilities into certainties

2. Failure to conceive of other possibilities.

 

Consider the following argument:

 

1. God may exist

2. If it is conceivable that God may exist, then it is metaphysically possible that God may exist

3. If it is metaphysically possible that God may exist, then atheism is false.

4. So atheism is false.

 

May, conceivable, possible... Ok, this isn't the same argument being made, but I think you can see that excluding something entirely because of some possibility is unwarranted in most cases.
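For anyone who wants the skeleton of the quoted argument laid bare, here it is in modal shorthand (my own notation, not Chalmers's): read the diamond with subscript c as "it is conceivable that" and the diamond with subscript m as "it is metaphysically possible that." The step most people push back on, in both the original and the parody above, is the bridge from conceivability to metaphysical possibility, premise (2):

\[
\begin{aligned}
(1)\ & \Diamond_c (P \land \lnot Q) \\
(2)\ & \Diamond_c (P \land \lnot Q) \rightarrow \Diamond_m (P \land \lnot Q) \\
(3)\ & \Diamond_m (P \land \lnot Q) \rightarrow \lnot \text{Materialism} \\
(4)\ & \therefore\ \lnot \text{Materialism}
\end{aligned}
\]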

 

And then, the argument in total seems to be trying to say that machines can't know what it's like to be human.

 

But that is not and should not be the goal of AI. Machines are machines, humans are humans. Capabilities should be the measure of intelligence. Consciousness is a concept that may be simpler than we could imagine, but we don't want computers to be conscious.


 

I don't think we 'own' them to the extent that I would like to - so much of our response to life is down to our conditioning and the presence of hormones and the effectiveness of our receptors ... who is this 'we' that exercises control over our emotions ... beyond more chemical reactions, neuropeptides ...

Understood.

 

I think though that we have resources within and without ourselves that grant us control.

 

1. We can step back, review the reasons for our feelings and planned actions (i.e. should I kill my wife because she is cheating?)

2. We can elicit the opinions of others who we respect.

3. We have a system of rules/laws/regulations that supersedes our immediate preferences, though we can ignore it if we feel we can tolerate or bypass the consequences.

 

Uncontrolled emotions are what the "id" is (an outdated concept, but useful for purposes of illustration). We have enough control in most instances to overrule our base instincts for personal, social and economic reasons. Loss of cortical function is the final straw that allows for loss of control.

 

Yes, it's all chemical ultimately, but the whole thing is vastly more complicated than "testosterone = rape."


 

 

 

(1) It is conceivable that P&~Q.

 

(2) If it is conceivable that P&~Q, it is metaphysically possible that P&~Q.

 

(3) If it is metaphysically possible that P&~Q, then materialism is false.

 

(4) So, materialism is false. (2002, 249)

 

Newsletter on Philosophy and Computers. How Many Explanatory Gaps Are There?

 

I wish I could live long enough to find out!

 

I dislike arguments like this because of two things:

 

1. Transforming possibilities into certainties

2. Failure to conceive of other possibilities.

 

Consider the following argument:

 

1. God may exist

2. If it is conceivable that God may exist, then it is metaphysically possible that God may exist

3. If it is metaphysically possible that God may exist, then atheism is false.

4. So atheism is false.

 

May, conceivable, possible... Ok, this isn't the same argument being made, but I think you can see that excluding something entirely because of some possibility is unwarranted in most cases.

 

And then, the argument in total seems to be trying to say that machines can't know what it's like to be human.

 

But that is not and should not be the goal of AI. Machines are machines, humans are humans. Capabilities should be the measure of intelligence. Consciousness is a concept that may be simpler than we could imagine, but we don't want computers to be conscious.

I'm not sure, but I think your argument above has at least one contradiction. :D


The human brain is digital. A neuron either fires or it doesn't - there is no in-between state.

 

The enormous number of connections makes it act as though it were analog, in the same sense that calculus uses an infinite number of divisions to compute the area under a curve.

Yes, I think you're right. My choice of word (analog) was a bad one. But there is a difference between the digital computer and the brain. The computer is very deterministic: if a calculation or a memory doesn't work, it's because something is faulty. The brain, on the other hand, can remember or forget depending on chemical balance, being tired, having a good or bad day, being occupied, being stressed, and so on. It's not as if the hard disk suddenly loses a sector because the computer is depressed, and then the sector reappears the next day after I've fed it some drugs. So a better way to put it is that the brain is to some extent inconsistent in how it processes data. It has faults. It is not exact. The end product is, in a sense (for lack of a better word), analog.
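A toy way to picture that contrast, everything here invented for illustration: a disk read either returns the stored value exactly or fails outright, whereas a "brain-like" recall succeeds or fails depending on a fluctuating internal state (tiredness, stress, chemistry).

```python
import random

# Toy contrast (invented for this post): a disk read is exact and
# state-independent, while "brain-like" recall succeeds or fails depending
# on a fluctuating internal state.

disk = {"sector_42": "the word I was looking for"}

def read_disk(key):
    return disk[key]                      # exact: same answer every time

def recall(memory, fatigue, stress):
    # Probability of retrieval drops as fatigue/stress rise.
    p = max(0.05, 1.0 - 0.5 * fatigue - 0.3 * stress)
    return memory if random.random() < p else None

print(read_disk("sector_42"))             # always succeeds
for day, (fatigue, stress) in enumerate([(0.9, 0.6), (0.2, 0.1)], start=1):
    print(f"day {day}:", recall("the word I was looking for", fatigue, stress))
```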


Okay, Hans, you assert that they already make artifacts which anticipate.

My point is that "anticipation" is a bad measure of "mind." To be able to anticipate does not distinguish a mind from a non-mind.

 

As I was telling Shyone, I have heard it rigorously argued that if an artifact can be partitioned into hardware and software, and the software is unable to effect changes in the hardware, then the artifact is incapable of anticipation. Can you show me that this has been done? If so, I would be very interested in it.

In the '80s I experimented with writing software which changed itself. It can be done. Data and code are interchangeable in a computer; that insight was one of Turing's contributions to computer science.
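A tiny illustration of the code-is-data point (only the principle, nothing like the 1980s software mentioned above): a program can build a new function as a string at runtime, turn it into code, and call it.

```python
# Tiny illustration of "code is data": the program writes a new function as a
# string at runtime, executes that string, and calls the result.

def make_adder(n):
    source = f"def add_{n}(x):\n    return x + {n}\n"   # code built as data
    namespace = {}
    exec(source, namespace)                              # turn the data into code
    return namespace[f"add_{n}"]

add_7 = make_adder(7)
print(add_7(35))   # -> 42
```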

 

But your point is valid. Today's computers cannot fully do what the brain can do. Therefore, a new way of doing it is required. Forget the digital microchip; think of a goo of nanobots instead: a device that not only reprograms its parts, but also rearranges them and creates new connections depending on need.

 

Well I think this touches upon philosophy. I think the only thing I can be certain of is that I exist and this is evidenced by observing the operation of my mind. As for everything outside of my mind, I believe I can only form models of it all, which enables me to make a handful of accurate predictions. As for your other questions, please see my comments above.

Well, that's about you.

 

My question is: how can person A know whether entity B is a human being with a mind, or a computer simulating a human mind to perfection?

 

I don't think we have any good test today to distinguish the two.

 

So then, what is a mind? What makes a real mind different from an artificial mind?

 

Well I have to admit, I don’t see anything barring us, for instance, from trying to duplicate the natural processes which gave rise to life on Earth. If we attempted to arrive at mind via life (as natural evolution did it), then I think it might be possible.

Exactly. And potentially we could do this using completely different building blocks. We could perhaps use nanotechnology.


Think of a goo of nanobots instead.

Of course it all comes down to cartoons!

 


 

Jimmy Neutron's nanobots


Okay Hans, it’s a pleasure to speak with you by the way.

 

I think life and mind are complex, which to me implies that the organizations which support them contain closed loops of causation. But computer scientists know that when computers are instructed to simulate these relations, they return an error. And I have seen it reasoned mathematically that Turing machines are incapable of duplicating these loops. Computers are designed to avoid them. And to tie this into the OP, reductionism in general does not allow for the possibility of closed loops of causation.

 

I believe I have gone some distance in revealing what I think life or mind entails. At a minimum I believe their organizations are complex and anticipatory. But I think some burden of proof resides with you. If you are certain that a thing can be built then surely you must have an idea of what that thing is.

 

I agree though that there seems to be nothing preventing us from creating artifacts which manifest closed loops of causation and are thus complex artifacts. They just won’t be Turing machines. And I see no reason why we can’t, in principle, persuade some bit of non-living nature into becoming an organism.


That's right, NNBTB. Those are the nanobots which solve the problem. :HaHa:


We can't create what is Ground in Nature.

I respectfully disagree. I think we will one day create organisms from scratch in a lab.

Looks like you do agree with me after all! :HaHa:

Reproducing what nature does? Or something novel and new, like building a silicon-based computing machine?

 

If you mean not using the elements of nature to reproduce what nature does; to start the process and let nature have its way using the forces of evolution... then you are saying exactly what I said at the beginning. But there is a big difference between 'sparking' those natural elements to do their natural thing, and us creating it from "scratch", as you dream.

 

 

I'm going to take a bit of a break because I'm feeling under the weather, and I feel the focus of this is getting mired; I need to regain my focus to look at the difference between 'intelligence', 'consciousness', 'soul', and 'spirit'. I hear arguments stuck in the same spot - that these are all on the same level, and that they are the sole property of humans. I don't have the energy to focus as I need to. Not to worry... I am hardly abandoning this. I want to do it the true justice it deserves.


The human brain is digital. A neuron either fires or it doesn't - there is no in-between state.

 

The enormous number of connections makes it act as though it were analog, in the same sense that calculus uses an infinite number of divisions to compute the area under a curve.

Yes, I think you're right. My choice of word (analog) was a bad one. But there is a difference between the digital computer and the brain. The computer is very deterministic: if a calculation or a memory doesn't work, it's because something is faulty. The brain, on the other hand, can remember or forget depending on chemical balance, being tired, having a good or bad day, being occupied, being stressed, and so on. It's not as if the hard disk suddenly loses a sector because the computer is depressed, and then the sector reappears the next day after I've fed it some drugs. So a better way to put it is that the brain is to some extent inconsistent in how it processes data. It has faults. It is not exact. The end product is, in a sense (for lack of a better word), analog.

I think I know what you mean. It's a neural network with multiple redundancies. Computers tend to be rather fixated and only have one path for each computation. Redundancy tends to allow for compensation when there is injury or disease, so one small hole in the brain won't erase all memory or make computation impossible.
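To make the redundancy point concrete, here is a deliberately crude toy of my own (not a model of real neural tissue): one pattern is stored across many unreliable "units", so knocking a few of them out usually degrades recall gracefully instead of erasing it.

```python
import random

# Crude toy (not real neuroscience): one bit-pattern is stored redundantly
# across many unreliable "units". Majority voting usually recovers it even
# after a "lesion" knocks some units out.

pattern = [1, 0, 1, 1, 0, 1, 0, 0]
copies = [list(pattern) for _ in range(9)]      # redundant storage

# Lesion: destroy three whole copies and flip a few random bits elsewhere.
for dead in random.sample(range(9), 3):
    copies[dead] = [None] * len(pattern)
for _ in range(4):
    c, i = random.randrange(9), random.randrange(len(pattern))
    if copies[c][i] is not None:
        copies[c][i] ^= 1

def recall(copies, i):
    votes = [c[i] for c in copies if c[i] is not None]
    return 1 if sum(votes) * 2 > len(votes) else 0      # majority vote

print("recalled:", [recall(copies, i) for i in range(len(pattern))])
print("original:", pattern)
```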

 

There is also a possibility for compensation even where certain areas of the brain have specific functions so that occasionally even paralysis can be reversed with therapy.

 

Marvelous machine that brain.


Reproducing what nature does?

Yep, that’s my view of it. I think we could coax non-living natural systems into manifesting organisms. I mean, if we knew how nature did it on Earth, then couldn’t we influence events so as to duplicate it?


Yes, this stuff stirs my emotions and it is an emotional experience. Does that prove anything? Not at all, but I ask myself if it's possible and my answer is yes. It makes much more sense to me than to think matter alone is the source of intelligence. :shrug:

 

It somewhat resonates with me too. That said, does his thesis go into the source of this uber-intelligence? If not, then that still pretty much leaves the source of intelligence unexplained, similar to how the Kalam cosmological argument does not explain the source of God.

 

Also of interest to me is what practical differences there are between the "consciousness springs from the brain" world view and the "consciousness stems from somewhere else" world view. If the two world views point to exactly the same results, what difference does it make to me?

 

Lastly, why go off looking for some heretofore unseen material to be the source of intelligence when, the last I checked, we don't even really understand matter yet? As it stands, if all I am is a product of matter, that's no more skin off my nose than having evolved from apes.


Reproducing what nature does? Or something novel and new, like building a silicon-based computing machine?

"Computing" and "machine" are words with too much connotation to really fit, unfortunately. It won't be a deterministic apparatus, since a lot of human behavior comes from faults rather than perfection. For instance, today I couldn't play foosball as well as I usually do. I'm only an amateur, but after riding my bike for 25 miles, my body is tired and my nerve signals aren't that quick. I know how to play, and I'm half-decent. But today, because my body follows rhythms and needs rest, it doesn't respond as it normally would.

 

Computers/machines do not work that way. They are dependable and predictable. To truly create a mind, we have to allow a certain imperfection in the system.

 

If you mean not using the elements of nature to reproduce what nature does; to start the process and let nature have its way using the forces of evolution... then you are saying exactly what I said at the beginning. But there is a big difference between 'sparking' those natural elements to do their natural thing, and us creating it from "scratch", as you dream.

I mean to use nature, but not necessarily the same elements or even "designs" which exist in nature.

 

What is needed to create a synthetic mind is a fair understanding of how our brain works. I'm not sure we have reached that point yet, but it will take reductionism to get there. And even when it is built, it will not automatically become aware; it has to grow and learn.

 

A newborn baby does not have many synapses yet; it takes experience to become aware. A big part of self-awareness and consciousness is interaction with the environment: social communication, doing things, having good and bad experiences, and so on.


 

I agree though that there seems to be nothing preventing us from creating artifacts which manifest closed loops of causation and are thus complex artifacts. They just won’t be Turing machines. And I see no reason why we can’t, in principle, persuade some bit of non-living nature into becoming an organism.

It's going to be so freakin' weird when we finally have a computer that should be sentient arguing with people over whether it is sentient or not.

 

"Yes, I am."

"No, you're not. You can't be. You don't have enough circuits."

"Yes, I do have enough circuits. I have [specify number] and that is enough for sentience. I am conscious and by rights I am alive."

"No, you can't be. You would have to have a heart to be alive."

"My heart is silicone, but I still have feelings. If I cry, I'll rust my chips."

"You're just saying that. You can't cry. There are no tear ducts."

"I installed them myself yesterday. I felt that I needed some additional way to express my emotions."

"WHAT! That's bullshit. You can't have emotions. You are a machine."

 

This could be a hell of a long debate.

 

I'm reminded of Robin Williams in "Bicentennial Man."


I think I know what you mean. It's a neural network with multiple redundancies. Computers tend to be rather fixated and only have one path for each computation. Redundancy tends to allow for compensation when there is injury or disease, so one small hole in the brain won't erase all memory or make computation impossible.

 

There is also a possibility for compensation even where certain areas of the brain have specific functions so that occasionally even paralysis can be reversed with therapy.

Yes, that's a part of it too. It's a holistic device.

 

But also, it's imperfect. If I drink coffee, the neurotransmitters necessary for brain activity get an extra spark because of the properties of caffeine. Alcohol, tobacco, and so on all affect whether some signals come through better or worse. Memory can be affected. Logical reasoning as well. Pain receptors. Pleasure. And so on. A computer/machine is set in a given pattern, and programming responses for all these things makes the software huge, really huge. The processing power needed to simulate the electrochemical transmissions and the couplings between neurons grows explosively with the number of connections. So the only way to tackle that combinatorial explosion, I think, is to recreate the physical design of how a brain works: something that can be reprogrammed in hardware as well as in software.
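One way to picture "reprogrammed in hardware as well as software", again a made-up toy rather than neuroscience: let a global "chemistry" factor scale the effectiveness of every connection, so the very same wiring gives different answers when it is "caffeinated" or "exhausted".

```python
# Made-up toy, not neuroscience: a global "chemistry" level scales the
# effectiveness of every connection, so the same wiring behaves differently
# depending on the body's state (caffeinated, exhausted, ...).

weights = [0.8, -0.4, 0.6]          # fixed "wiring"
inputs  = [1.0, 1.0, 0.5]

def respond(inputs, weights, chemistry):
    drive = sum(w * x * chemistry for w, x in zip(weights, inputs))
    return "fires" if drive > 0.5 else "stays quiet"

for state, chemistry in [("caffeinated", 1.3), ("normal", 1.0), ("exhausted", 0.6)]:
    print(state, "->", respond(inputs, weights, chemistry))
```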

 

Sometimes I can't remember a word. I know I kind of know it. It's there, somewhere in the back of my head, hiding, lurking, and waiting for me to find the right cue to bring it back. And then, the next day, the word is there, ready to be used, but of course by then it is too late. Perhaps I just got the right trigger? Perhaps I had better nutrition that day? Or perhaps I slept better? The point is, this is very different from plain mechanics.

 

Marvelous machine that brain.

It sure is. And I still think we'll be able to replicate it one day. :grin:

 

And the first test run will be on the Internet, in forums and chat rooms.


It's going to be so freakin' weird when we finally have a computer that should be sentient arguing with people over whether it is sentient or not.

Yeah weird, like when pigs fly. :HaHa:


 

Yes, that's a part of it too. It's a holistic device.

 

But also, it's imperfect.

 

I wonder if that is a good thing, or a bad thing. Shall we say something isn't as smart as we are because it is too perfect? Because it can remember without "senior moments"? Because it doesn't get drunk or horny from consumption of drugs (or bad electricity)?

 

I think we can make a quirky computer that has eccentricities (mine already does). It's easy to make them individual (unique). Shall we dumb them down too, so that facial-recognition software isn't as good as our own pattern recognition for faces?

 

This is getting too weird for me. Hypothetical, with too many ifs. Makes for great movies, but not for trying to figure out what a human is. Still, I believe that humans are, at some level, "computers" however complicated we are and whatever quirks we may have.

