
Thinking About Artificial Intelligence


LGMR


I've been thinking: if one day we're able to make a computer/robot with artificial intelligence that's just as complex as (or more complex than) we are, wouldn't that prove that god(s)' job of creating us wasn't that hard to begin with?

 

And isn't god supposed to be infinitely greater than us? (I mean it'd have to be to create the universe!)

 

Thoughts?

Link to comment
Share on other sites



A.I. is a software issue.

We have more than enough hardware to do the job.

No one, however, knows how to do it.

 

Life, however, is both hardware and software (if you'll forgive the simplistic analogies).

It is both A.I. and self-reproducing. A totally different ball game.

 

Last I checked, they had made A.I. about as intelligent as a cockroach.

Though that was a few years ago, so they could be further along.

Still a long way off in any case, and even if achieved, it would still be many orders of magnitude simpler than life.


A.I. systems running as software on computers are unlikely to provide much in the way of simulating even simple organisms, even on supercomputers.

 

Silicon chips that implement neural networks directly are far more promising.

 

A question that gets a lot of debate is this: If we could make an android that could do everything a human could do - would it be conscious? Are Intelligence and Consciousness separable?

 

 

There are a couple of interesting articles in this month's Scientific American. One is about machine intelligence, and the other is about quantum effects in large objects.

 

It's not impossible that quantum effects in biological systems (some kind of entanglement effect) could be implicated in the emergence of consciousness.

 

The article about machine intelligence suggests that if a machine can identify that something is wrong in a picture (e.g. an elephant on top of the Eiffel Tower), it would necessarily be conscious. I don't think that follows at all. It would perhaps 'merely' be intelligent.


A.I. systems running as software on computers are unlikely to provide much in the way of simulating even simple organisms, even on supercomputers.

 

Can you support that statement?

 

Silicon chips that implement neural networks directly are far more promising.

 

I hear a lot about neural networks, but very few people actually understand them once the hype is gone.

For example, I asked you if you can support the initial statement for a reason.

A neural network on a computer is nothing complicated. It can be run on a stock-standard von Neumann architecture.

Implementing a neural network directly in hardware simply allows it to run faster.
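To make that point concrete, here is a minimal sketch in Python (all weights and names here are illustrative, not from any particular library): a single artificial neuron is just a weighted sum pushed through a squashing function, which any von Neumann machine executes happily.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a sigmoid."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hand-picked weights that make the neuron behave like a logical AND gate.
print(round(neuron([1.0, 1.0], [10.0, 10.0], -15.0), 3))  # ~1: both inputs on
print(round(neuron([1.0, 0.0], [10.0, 10.0], -15.0), 3))  # ~0: one input off
```

Dedicated hardware changes how fast this arithmetic runs, not what it computes.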

 

Over twenty years ago, a neural-network computer implemented in hardware cost a small fortune.

It was an interesting idea. Fun for research departments.

Today I can go buy a single-CPU PC and get 100 times the performance, with more memory, cheaply. Basically, do more for less.

I can buy a more expensive PC and get far better speed.

The results are however identical if the PC is limited to the same memory model and algorithms.

 

Basically, a neural network is the method by which processes interact with each other.

A hardware one is fixed.

A software one allows for dynamic reconfiguration.

 

That means it's ultimately more flexible.

IOW, it will make little difference as CPU processing speeds and memory sizes increase.
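As an illustration of that flexibility, a software net can keep its topology in an ordinary data structure and rewire it at runtime. A hypothetical sketch in Python (the neuron names and weights are arbitrary placeholders):

```python
# The network's topology as a dict mapping (source, target) pairs to weights.
connections = {("a", "b"): 0.5, ("b", "c"): -0.2}

connections[("a", "c")] = 0.1      # make a brand-new path
connections[("a", "b")] += 0.05    # reinforce an existing path
del connections[("b", "c")]        # break a path entirely

print(sorted(connections))  # [('a', 'b'), ('a', 'c')]
```

A fixed hardware net would need its interconnections laid down in advance; here the topology is just data.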


I don't doubt that neural networks can be implemented in software on von Neumann architectures. But you would run into performance issues if you tried to implement a huge number of neurons, because CPUs run sequentially, rather than massively parallel like a brain.

 

There exist chips that are dynamically reconfigurable (FPGAs) that you can implement any logic function you like on, and change it on the fly from a CPU, or even make self-modifying. So hardware isn't quite as fixed as you might think.

 

Unfortunately, CPU speeds are maxing out around 3 GHz for PCs due to heat dissipation and other limitations of the silicon processes, which is why they are going multicore. Graphics processing chips (GPUs) are probably more interesting for neural network implementation, as they can do parallel processing.

 

I don't have any figures to back up my arguments, but von Neumann architectures must run slower than parallel systems of comparable complexity.


I don't doubt that neural networks can be implemented in software on von Neumann architectures. But you would run into performance issues if you tried to implement a huge number of neurons, because CPUs run sequentially, rather than massively parallel like a brain.

 

There exist chips that are dynamically reconfigurable (FPGAs) that you can implement any logic function you like on, and change it on the fly from a CPU, or even make self-modifying. So hardware isn't quite as fixed as you might think.

 

Unfortunately, CPU speeds are maxing out around 3 GHz for PCs due to heat dissipation and other limitations of the silicon processes, which is why they are going multicore. Graphics processing chips (GPUs) are probably more interesting for neural network implementation, as they can do parallel processing.

 

I don't have any figures to back up my arguments, but von Neumann architectures must run slower than parallel systems of comparable complexity.

 

 

Performance in sequence and in parallel is a function of the individual speed of the components.

e.g. one million neurons firing every 1 ms perform 1 billion operations per second in parallel, but a sequential processor clocked at 3 GHz can triple that.
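Spelling out that back-of-envelope figure (assuming, idealistically, one neuron update per clock cycle on the sequential machine):

```python
neurons = 1_000_000    # units updating in parallel
period_s = 1e-3        # each fires roughly once per millisecond
parallel_ops = neurons / period_s    # total parallel throughput: 1e9 updates/s

clock_hz = 3e9         # a 3 GHz sequential CPU, one update per cycle (idealised)
print(parallel_ops, clock_hz / parallel_ops)  # 1e9 updates/s, 3x headroom
```

Real CPUs spend multiple cycles per update, so the 3x figure is an upper bound, but it shows why slow parallel units and a fast sequential unit can land in the same ballpark.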

 

 

FPGAs aren't what you think. They are Field-Programmable Gate Arrays. Emphasis on the word field. They can't reprogram themselves*; they get plugged into and programmed by an external laptop. They are logic gates with many interconnections, but that doesn't make them compatible with a dynamically reconfigurable neural net. You can program a CPU within the gates, or many of them, but the interconnection costs a lot of hardware performance. It's far easier and cheaper to do it sequentially with modern high-speed CPUs.

 

CPUs and GPUs both process in parallel. The main difference is that a GPU is geared to solve specific mathematical problems extremely fast, and does so at the hardware level.

If the process requires those specific mathematical functions, then it will be useful. Personally, I see it as useful just as it is now: as a secondary processing device, but not the primary one.

 

I agree that CPU speeds will always have a limit, but there is no need to scrap a highly efficient system in favour of a much harder, more expensive one when that limit has not been reached. With a modern computer, a cheap desktop PC, it is possible to simulate hundreds of millions of neurons. It is possible to emulate far fewer, but still on the order of a million or so.
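The kind of simulation meant here can be sketched as one update step over a bank of simple "leaky" neurons. This is pure Python scaled down to 100,000 neurons so it runs in a blink; the constants are illustrative only, not measured values.

```python
# One update step for a bank of simple leaky integrate-and-fire-style neurons.
N = 100_000
LEAK, THRESHOLD = 0.9, 1.0     # illustrative decay factor and firing threshold
potentials = [0.0] * N
inputs = [0.05] * N            # a small constant drive to every neuron

fired = 0
for i in range(N):
    potentials[i] = potentials[i] * LEAK + inputs[i]   # decay, then integrate
    if potentials[i] >= THRESHOLD:                     # spike and reset
        potentials[i] = 0.0
        fired += 1

print(fired)  # 0 -- after a single step every potential is still only 0.05
```

Run this loop once per simulated millisecond and a desktop keeps up comfortably; a compiled language or vectorised arrays would push the neuron count far higher.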

 

As Ouroboros says above, swarm computing solves that problem easily. Link 1,000 PCs together and you have one powerful system, each processing millions of neurons at the speed of the fastest-firing neurons (about 1 ms) at a constant rate.

 

Basically, until we know of suitable software to do A.I., there's simply no point blindly targeting hardware configurations in the hope we hit the jackpot, when there are easier and more cost-effective methods at our disposal. They can all be simulated to prove a concept and emulated to demonstrate it.

 

Edited to clear up the * above.

FPGAs are available in high-speed, dynamically changeable configurations. The problem is that the reconfiguration is:

1) Either predetermined by the designer for whatever reason, or

2) They can dynamically change small sections of themselves that the designer has chosen for that purpose.

 

 

 

 


Your comments about reconfiguration are correct; however, cannot the same be said of software code?

 

Neural net weights can be adapted just as easily in FPGAs as in software. However, from what I have seen of software neural nets (which isn't much), the weights are implemented in floating-point arithmetic, which couldn't be implemented easily without taking up a huge amount of FPGA resource. Fixed-point arithmetic might run into quantization problems.

 

If I had the time, I'd investigate this further. Real neurons don't use floating-point arithmetic, so maybe there's an alternative way of implementing neural nets on FPGAs that makes better use of their architecture.
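The quantization concern can be seen with a tiny sketch: store a weight as an integer with an assumed 8-bit fractional part (a "Q8"-style fixed-point format; the weight value here is arbitrary) and check how much precision the round trip loses.

```python
FRAC_BITS = 8  # assume 8 fractional bits; one step is 2**-8

def to_fixed(x):
    """Quantize a real-valued weight to the nearest fixed-point step."""
    return round(x * (1 << FRAC_BITS))

def from_fixed(q):
    return q / (1 << FRAC_BITS)

w = 0.37                      # an arbitrary example weight
q = to_fixed(w)               # stored as the integer 95
err = abs(w - from_fixed(q))  # quantization error, at most half a step (2**-9)
print(q, err < 2**-9)         # 95 True
```

Whether that half-step error is acceptable depends on how sensitive the trained net is to weight perturbations, which is exactly the trade-off being debated.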


 

Neural net weights can be adapted just as easily in FPGAs as in software. However, from what I have seen of software neural nets (which isn't much), the weights are implemented in floating-point arithmetic, which couldn't be implemented easily without taking up a huge amount of FPGA resource. Fixed-point arithmetic might run into quantization problems.

 

It's not the FP or the weighting that's the big problem. Neurons make/break paths between themselves and reinforce those pathways. Doing that in FPGAs is not possible unless you implement a CPU and a program on the FPGA, which defaults to a software solution.

Which is why I said just implement a software solution to begin with.

 

 

If I had the time, I'd investigate this further. Real neurons don't use floating-point arithmetic, so maybe there's an alternative way of implementing neural nets on FPGAs that makes better use of their architecture.

 

Yes they do.

Voltage is real. Current is real. Time is real. All of these are used as part of a neuron's process. Therefore, neurons use real values.

Floating point in computer architecture is simply a method of representing real values with a large dynamic range. The trade-off is accuracy. It's merely a form of representation. You can do the same thing with fixed point if the dynamic range is no wider than the bit width allows.

Don't get caught up in floating point, fixed point, or integer. They are all ways to do the same thing and can be used interchangeably. It's the dynamic range of the format and the error tolerance allowed that decide whether any particular format can or cannot be used.
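The dynamic-range point can be sketched concretely. A hypothetical 16-bit signed fixed-point format with 8 fraction bits covers roughly [-128, 128): moderate values fit with a small rounding error, while a value a float would hold easily simply overflows the format.

```python
FRAC = 8  # 16-bit signed words, 8 fraction bits: range is about [-128, 128)

def fixed16(x):
    """Round x onto the format's grid, or fail if it exceeds the dynamic range."""
    q = round(x * (1 << FRAC))
    if not -32768 <= q <= 32767:
        raise OverflowError("value outside the format's dynamic range")
    return q / (1 << FRAC)

print(fixed16(3.14159))  # 3.140625 -- representable, with a small rounding error
try:
    fixed16(1e6)         # a float holds this easily; this 16-bit format cannot
except OverflowError:
    print("overflow")
```

Widen the word or move the binary point and the range changes accordingly: the formats really are interchangeable within their limits.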

 

 

 

 

 


  • 3 weeks later...

As computers become more and more advanced, AI will be able to become more and more advanced. It is inevitable in our capitalist society, as people will not buy a computer that is worse than the one they have, so progress is essential. With increased connections in the computer (same as in the brain), AI will be allowed to get better.

 

AI is essentially guaranteed under capitalism with computer technology, eventually anyway.

 

We still have a long way to go.


I don't think true artificial intelligence (self-awareness) can ever be replicated with digital computers, for one very important reason: a digital computer can never emulate itself in real time. I think self-emulation is a crucial component of self-awareness; being able to think of oneself in the abstract and to put oneself into imagined scenarios. However, this may not be a barrier for analog computers such as the human brain.


  • 2 weeks later...

I don't think true artificial intelligence (self-awareness) can ever be replicated with digital computers, for one very important reason: a digital computer can never emulate itself in real time. I think self-emulation is a crucial component of self-awareness; being able to think of oneself in the abstract and to put oneself into imagined scenarios. However, this may not be a barrier for analog computers such as the human brain.

 

 

Are you saying that the human brain may be intelligent?

 

 

BTW, no computer, analog or digital, nor a brain for that matter, can emulate itself in real time.

Though I'm not sure where you're going with the self-emulation concept. It may be that we have different views on what emulation is.

 

 

 


I've been thinking: if one day we're able to make a computer/robot with artificial intelligence that's just as complex as (or more complex than) we are, wouldn't that prove that god(s)' job of creating us wasn't that hard to begin with?

Yes.

 

For what it's worth, I'm currently reading I Am A Strange Loop by Douglas Hofstadter over at Indiana University. Although the book is primarily a philosophical "thinking about thinking" book written to be comprehensible to laypersons, the author is also active in the AI community and has made important contributions to it, so I'm looking forward to the parts of the book that discuss how machine intelligence might work. I'm pretty sure that true breakthroughs in AI that start to finally deliver on its promise are going to come from this kind of meta-thinking. We can't build simulacra of ourselves, much less equivalents, if we don't fully understand what we really are to begin with. I don't think humans understand their own thought processes and mechanisms of self-perception well enough to know how to begin to tackle the problem of reproducing that in computer systems.

 

The difficulty of AI is not so much inherent in AI as it is in properly defining the problem in the first place. It's hard for a mind to think about and understand itself. That is the true difficulty in the matter.


Machine intelligence is a pipe dream which has squandered untold numbers of thought-hours and diverted the attention of otherwise decent minds from soluble problems. It's really kind of sad. All those bright people trying to put a square peg in a round hole.

 

I think we will one day create artificial organisms and even artificial minds, but they won't be machines.


Machine intelligence is a pipe dream which has squandered untold numbers of thought-hours and diverted the attention of otherwise decent minds from soluble problems. It's really kind of sad. All those bright people trying to put a square peg in a round hole.

 

I think we will one day create artificial organisms and even artificial minds, but they won't be machines.

 

And you base this on?


Machine intelligence is a pipe dream which has squandered untold numbers of thought-hours and diverted the attention of otherwise decent minds from soluble problems. It's really kind of sad. All those bright people trying to put a square peg in a round hole.

 

I think we will one day create artificial organisms and even artificial minds, but they won't be machines.

 

And you base this on?

Years of refining my own intuition. It's enough for me, but likely not enough for you. And I'm okay with that.


Years of refining my own intuition. It's enough for me, but likely not enough for you. And I'm okay with that.

 

 

Your intuition is your own. It certainly is not enough for me, but likewise anyone's intuition would not be enough, as it's a very personal thing.

 

Still, I'm curious what those years of refining have involved with respect to AI research.

 

 


This topic is now closed to further replies.