
Robotics


Saviourmachine

Recommended Posts

Consequences

Actually I am not gonna emphasize the consequences of building artificial life or artificially intelligent beings for xianity. I suppose the topic is interesting enough for its own sake, and the consequences are just side effects. Good science has always had the side effect of stepping on the hearts of religious people. Why? Because they happened to believe in things that weren't true. Of course I can't know which of the things you believe are falsehoods. So, I will just tell you what I am doing now.

 

Robotics

That's in the field of robotics. There is an ongoing line of research to make robots more and more autonomous; they have to navigate through rooms, for example. A new European project is called Replicator; it's about self-replicating robots. Those robots are micro-robots with docking mechanisms that enable them to stick together to form a larger robot organism. Such an organism can take the form of a spider, for example, and is able to go into unstructured environments where normal wheeled robots can't go. There they can, e.g., measure gas leakages. For this, such robots have to observe their environment and act upon it in quite complex manners. How can we do this?

 

Use nature!

There are mainly two approaches. The first one is to take all the tools we developed in industry or the engineering disciplines, like Kalman filters, Monte Carlo techniques, L-systems, and reaction-diffusion systems, but I won't cover them now, because I personally favor the other approach. Mother nature has been able to build insects that observe their environment with all kinds of sensors and use this information to navigate through an environment, self-sustain by finding food, avoid predators, find and convince mating partners, etc. That's pretty neat, so a very good reason to use nature's techniques.

 

Unknown

And that's basically neural networks and, underlying that, evolutionary mechanisms. Pure neural networks would do for me, but perhaps we aren't smart enough to find out which topologies (network structures) are good enough to learn things, and we need some hand from evolution there. The cortex of higher mammals consists of 6 layers, for example. There are no studies that tell us why 6 layers are needed; why not 5 or 7? There is actually not a lot known about the overall structure. And our brain gives us our emotions, our passions, our hope, our lust, our hunger, our pride. Lots of things have to be found out.
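To make the idea of evolving topologies concrete, here is a minimal sketch in Python. The fitness function is entirely invented: it just pretends, for illustration, that 6 layers is the optimum (echoing the cortex example); a real evaluation would train and test each candidate network.

```python
import random

def fitness(num_layers):
    # Invented stand-in for "how well a network with this many layers learns".
    # A real fitness would train the network and measure its performance.
    return -abs(num_layers - 6)

def evolve(generations=50, pop_size=10, seed=0):
    """Tiny evolutionary search over one topology parameter: layer count."""
    rng = random.Random(seed)
    pop = [rng.randint(1, 12) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # selection (elitist)
        children = [max(1, p + rng.choice([-1, 0, 1]))   # mutation by +/- 1 layer
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)
```

In a serious setting the genome would encode the whole connection graph, not just a layer count, but the select-mutate loop is the same.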

 

What current research is busy with

The current research is still focusing on the exact way in which information flows and is stored. The most recent idea about learning is "spike timing dependent plasticity" [STDP]. A neural network has to respond to patterns that are correlated (occur together). The neurons are structured in layers. When one neuron fires shortly before another neuron that it connects to, it apparently helped to predict that the latter would fire. So, the strength of the connection from the first to the second is increased by a small factor. This is a temporally asymmetric form of Hebbian learning.
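A minimal sketch of such a pair-based STDP rule in Python (the parameter names and values are illustrative, not taken from any particular model):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    """Pair-based STDP: strengthen the synapse when the presynaptic
    spike precedes the postsynaptic one, weaken it otherwise.  The
    change decays exponentially with the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:                        # pre fired before post: potentiation
        dw = a_plus * math.exp(-dt / tau)
    else:                             # post fired before pre: depression
        dw = -a_minus * math.exp(dt / tau)
    return min(1.0, max(0.0, w + dw))  # keep the weight in [0, 1]
```

A pre-before-post pairing nudges the weight up; the reverse pairing nudges it down, and pairings far apart in time have almost no effect.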

 

Hardware needed

The current PCs are not quick enough; they are basically wired incorrectly. Neural networks are like a large bunch of small processors, each with a small memory element. This means that no big bus/wire is needed from a big processor to a big memory unit. And those wires mean energy consumption (electrons flow: they have to be charged). It is difficult to compare, but for the same computational capacity it can safely be assumed that a biological neural network uses several orders of magnitude less energy than a modern processor architecture. One way to go is to use FPGAs (field programmable gate arrays): lots of gates on one chip. FPGAs are already used more and more, because, for example, a shift register can be implemented in the hardware itself and all components can be very fast. For example, a Fourier transform is often needed for all kinds of data that have to do with waves: sound, images, video. A VoIP telephone may use an FPGA chip to quickly convert speech to internet packets. Another way to go is even more promising: neuromorphic engineering. For example, the retina of a rat is used as inspiration to build analog electronic circuits that fire when they see change. These use a fraction of the energy of comparable conventional cameras.
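As an aside, the Fourier transform mentioned above can be sketched in a few lines. An FPGA or DSP chip would implement the fast O(n log n) variant in parallel hardware, but the naive version shows what is actually computed:

```python
import math, cmath

def dft(signal):
    """Naive discrete Fourier transform, O(n^2); each output bin k
    measures how much of frequency k is present in the signal."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A pure two-cycle cosine over 8 samples: all energy should land in
# frequency bins 2 and 6 (6 being the mirror of 2 for a real signal).
wave = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
spectrum = [abs(x) for x in dft(wave)]
```

A speech codec would run this (in its fast form) over short windows of the audio stream before compressing and packetizing it.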

 

Or hardware borrowed ;-)

The third option is to use biological tissue. It is possible to grow it, like the rat neurons that flew a simulated airplane. This may seem impressive, but perhaps it only knows up and down and left and right or so. :-) And if you search for it, you will find articles in which a moth as well as a lamprey controls a robot with its brain. This is perhaps the way to go, if we can't make our electronics good enough for now...

 

There is a lot more I could say, about for example concepts like "embodiment" or "grounding", which are very relevant in building artificially intelligent beings. But I will leave it at this for now. :-) Feel free to contribute, correct, remark, etc.!


  • 5 weeks later...

For anyone interested, you can read more on my homepage. If this is off track, I will stop bothering you with it of course.

 

http://annevanrossum.nl/new/page.php?4


I used to be interested in robotics Saviourmachine. I was and continue to be interested in the mind. And I once thought that a good way to learn about the mind was to attempt to build one. In my naivete I had a nearly unbounded faith in computers. I thought that surely a mind could be instantiated using computers if only they had the appropriate sensors, effectors, and software. And AI was my dream.

 

I have since come to seriously doubt all that. I now believe that an understanding of minds will have to have deep roots in biology. I am almost certain that many here grow weary of my constant references to biologist Robert Rosen. But I think you might find his work interesting Saviourmachine. In his book Life Itself he tries to show the limitations of mechanism and machines. I am still trying to understand his arguments, or I would present them here myself.

 

In the time that I was interested in AI, my focus was almost exclusively upon the process of learning. To me, making a machine with the capability of learning was key. Now I suspect that learning is the movement from one set of anticipations to another. When one possesses understanding then one is able to accurately anticipate and predict. But I also now suspect that machines are incapable of anticipation. I think they are purely reactive.


  • 4 weeks later...
I used to be interested in robotics Saviourmachine. I was and continue to be interested in the mind. And I once thought that a good way to learn about the mind was to attempt to build one. In my naivete I had a nearly unbounded faith in computers. I thought that surely a mind could be instantiated using computers if only they had the appropriate sensors, effectors, and software. And AI was my dream.

 

I have since come to seriously doubt all that. I now believe that an understanding of minds will have to have deep roots in biology. I am almost certain that many here grow weary of my constant references to biologist Robert Rosen. But I think you might find his work interesting Saviourmachine. In his book Life Itself he tries to show the limitations of mechanism and machines. I am still trying to understand his arguments, or I would present them here myself.

 

In the time that I was interested in AI, my focus was almost exclusively upon the process of learning. To me, making a machine with the capability of learning was key. Now I suspect that learning is the movement from one set of anticipations to another. When one possesses understanding then one is able to accurately anticipate and predict. But I also now suspect that machines are incapable of anticipation. I think they are purely reactive.

Thanks for the reference, Legion Regalis. ;-) I hadn't seen it yet. And I haven't read anything by Robert Rosen either. There is indeed a lot more to creating artificial intelligence than connecting together a bunch of sensors, effectors and controllers. And one of the issues most philosophers try to formulate is that of "embodiment".

 

There is one interpretation of "embodiment" as the prerequisite that an intelligent being has to be built from/within a biological substrate. However, it is quite hard to find arguments for that. Are there some magical properties of chemicals like neurotransmitters, or of the way biological cells interact, or of the way the genetic or cellular program unrolls? Personally I don't think so. There is, however, the impressive parallel processing power of the biological machine (huge amounts of cells working in parallel, huge amounts of neurons working in parallel, etc.). It is not likely that we will be able to copy this to a purely electronic medium in the near future. I am going to Telluride, Colorado, this summer, to a bunch of neuromorphic engineers. That is, so to say, the state of the art in copying those biological constructs to (analog) electronics, preserving most of the original structure. However, the field is still in its infancy. But it is not just a matter of scale, and that's where the second interpretation of "embodiment" becomes relevant.

 

The way robots are currently designed is very naive. If artificial neural nets are used, then it's often still the case that world models or expert systems are attached that convey a lot of knowledge from the designer to the robot. That's understandable, but it hampers research enormously. If we don't start to implement how the robot should acquire new knowledge for itself, we are doomed. For example, in reinforcement learning (learning by giving rewards) the designer defines beforehand which individual behavioural modules constitute the total behaviour. So, they build a "collision avoidance" module, and a "search food" module, and a "go straight" module, and some overall supervisor module that switches between strategies, or "subsumes" those underlying modules. However, this strict decomposition is very ad hoc! It is not grounded in an overall fitness function like in natural evolution. The designer thinks in such cases that he or she has to solve the "credit assignment problem" personally. So far, only two or three people have started to consider emergent hierarchical control structures (e.g. Bruce Digney). And this is only about reinforcement learning. The same shift to self-organizing principles is needed everywhere.
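A minimal sketch of such a hand-designed decomposition (the module names and sensor fields are hypothetical) makes the point: every behaviour and every priority is chosen by the designer, grounded in nothing but the designer's intuition.

```python
def collision_avoidance(sensors):
    """Highest-priority module: veer away from nearby obstacles."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn_left"
    return None                       # defer to lower-priority behaviours

def seek_food(sensors):
    """Middle module: steer towards food when any is sensed."""
    if sensors["food_direction"] is not None:
        return "turn_" + sensors["food_direction"]
    return None

def go_straight(sensors):
    """Default behaviour, always applicable."""
    return "forward"

def subsumption_step(sensors,
                     layers=(collision_avoidance, seek_food, go_straight)):
    """Higher layers 'subsume' lower ones: the first module that
    produces an action wins; the rest are suppressed."""
    for behaviour in layers:
        action = behaviour(sensors)
        if action is not None:
            return action
```

Note that the decomposition into exactly these three modules, and their ordering, is the ad hoc part: nothing in the system could ever revise it.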

 

It is very important to take one step further than the reactive paradigm (e.g. the subsumption architecture of Brooks). The way organisms store knowledge should be taken as inspiration. Sensorimotor invariances are a rudimentary form of memory; they go beyond the reactive paradigm. The second interpretation of "embodiment", as a system whose actions directly influence the way it perceives the world (there are sensorimotor couplings via the environment), is enlightening. Without this embodiment, the knowledge an artificial being acquires will always be alien to the being itself.
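This second interpretation can be sketched as a toy sensorimotor loop (the "light source" environment is invented for illustration): the action chosen from the current sensor reading changes the agent's position, and the new position in turn determines the next reading, closing the loop via the environment.

```python
def run_agent(policy, steps=10):
    """Toy sensorimotor loop on a 1-D track.  The agent never sees its
    position directly, only a reading that its own actions reshape."""
    position = 0
    readings = []
    for _ in range(steps):
        sensed = abs(position - 5)    # distance to a (made-up) light at x = 5
        readings.append(sensed)
        position += policy(sensed)    # acting changes what is sensed next
    return position, readings

# A simple phototaxis policy: step towards the light while it is sensed.
final_position, readings = run_agent(lambda sensed: 1 if sensed > 0 else 0)
```

The stream of readings the agent experiences depends on its own behaviour; a purely passive observer at a fixed position would never see that structure.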

 

Science moves very slowly, but the things that are good will be amplified in the end. That which works, will work. :-) I would like to conclude with one reference to Di Paolo, who with "Homeostatic Adaptation to Inversion of the Visual Field and other Sensorimotor Disruptions" (freely available via CiteSeer) also conveyed the necessity of moving beyond purely reactive systems. The neural networks in the robots he developed didn't evolve to "perfection" but retained some inherent flexibility, enabling them to anticipate changes in the environment; in this case, wearing goggles that turned the visual field upside down.

 

When you say that "machines are incapable of anticipation" I think you are touching on something valuable. Could you elaborate on the form of anticipation that you have in mind?


SM,

 

I think you and I are quite close in our view of life vs machine. LR and I are in constant battle, representing two different sides. :)

 

To add my 2 cents here, the only way to make the artificial become as close to the real as possible is to have the device "learn" through baby steps. I was thinking about how babies are in the beginning, the first months, how they flap their arms and legs and have no control of their motor skills. During the first couple of months, that's what the brain is learning: to control and anticipate motion and response. Their neurons are programming themselves. Not until robots learn the same way, about walking, talking and "thinking", will the artificial be anything near what humans are.

 

I think one of the deeper problems, though, is to figure out how and why the brain manages to divide itself into sections the way it does. Like speech, or the understanding of speech, is located mostly in specific areas. And how and why does one side of the brain become more "abstract" and the other more "logical"? It seems that part of the fundamental "programming" of the brain is genetic code that ensures that certain sections have a specific purpose. Like the frontal lobe: if it is damaged, people tend to have problems controlling anger or making the right, moral, decisions. So there's a fundamental blueprint in the DNA for how the brain is organized. These things need to be solved. Do you agree?


Could you elaborate on the form of anticipation that you have in mind?

I so dearly wish that I could, SM. But I am still trying to understand. Maybe after a couple more years of study I could elaborate. But I doubt I could do it effectively now.

 

LR and I are in constant battle, representing two different sides. :)

You strike me as a reductionist Hans. I used to be also. Now I am in the process of becoming an ex-reductionist. I am moving towards a relational approach. I am not there yet though. And a good part of my frustration resides in not being able to express myself effectively.


LR and I are in constant battle, representing two different sides. :)

You strike me as a reductionist Hans. I used to be also. Now I am in the process of becoming an ex-reductionist. I am moving towards a relational approach. I am not there yet though. And a good part of my frustration resides in not being able to express myself effectively.

That's the funny thing. I think that's part of the problem why you keep failing to understand me. I'm not really as hard-core a reductionist as you think, but every time we get into an argument, it starts with you assuming that I take the extreme opposing position. And you keep failing to understand me as a result, because if I write "something like a machine" you think of a linear CPU computer with so-and-so-much RAM, and I think of an artificial bio-device that is nothing like any CPU.

 

May I ask you, did Rosen explain the problem? Did he reduce the complexity of the problem into words and documents that could transfer his ideas about the issue? Was he a reductionist?

After all, he dissected the problem to explain it. What's the next step? Someone manages to dissect the problem even further and gives us the tools to understand it (in Neo-Math if you will, and Neo-Computers, or Bio-Computers) and to create artificial life (not made by the next generation of sequential processing units).

 

Listen, a neuron is a neuron is a neuron. There's nothing magical about neurons. Neurons are matter; they process through chemo-electrical systems. There are protons, neutrons, proteins, electrical charges; all of it reacts and acts on natural terms. They are nature. There's nothing magical at all; they act and exist by the same fundamental principles as everything else that exists. There's nothing added to it; you agree with that. The only issue you have is that humans can't understand it, and will never understand it. And I disagree on that level. We do not understand it today, but to say that we will never understand it is to take a position of infallibility and precognition of knowledge about the future, and neither you nor I are omniscient to that level. So I will continue to assume that there is a possibility that the human mind will one day be understood, in concepts and descriptions beyond what math and science are using today. The language of science is still evolving. Math is expanding. It becomes more than just predictable number-crunching formulas. Fuzzy logic is one example. Or genetic algorithms. Or chaos theory. They go beyond the scope and understanding of 1+1=2, or even f(x)=1+x^2.


May I ask you, did Rosen explain the problem? Did he reduce the complexity of the problem into words and documents that could transfer his ideas about the issue? Was he a reductionist?

Yes, Rosen attempted to convey the ideas to others. But I do not yet understand his main arguments. At the moment I am only following an intuition that he was on to something.

 

The only issue you have is that humans can't understand it, and will never understand it.

This is incorrect. I don't believe that we currently understand life or the mind. But this does not imply that we will never understand them. I think we can understand them and I hope that we will.


Yes, Rosen attempted to convey the ideas to others. But I do not yet understand his main arguments. At the moment I am only following an intuition that he was on to something.

In my email exchange with his daughter, I got the impression that he was saying what you are saying below. The science and mathematics of today (or of his day) were on the wrong track, trying to simplify the concepts of life and mind. But it seemed like Rosen was not against the thought that we might one day be able both to understand it and even to create artificial life. His criticism was against the idea that you can formulate consciousness and intelligence as a string of binary digits in a machine-code program. Like we talked about before, that is predictable, and it's deterministic. But when we know what it is in nature that makes the mind indeterministic, is it possible to take that same "thing", whatever it is, put it into whatever "device" we create, and make that indeterministic too? I think that is possible. That day we have the seed of AI, working on the same premises as our mind. It's created by man, but it acts and behaves and "thinks" just like us.

 

The only issue you have is that humans can't understand it, and will never understand it.

This is incorrect. I don't believe that we currently understand life or the mind. But this does not imply that we will never understand them. I think we can understand them and I hope that we will.

Sorry. You see, that's where we are missing each other. I never said that we understand it today or can make it today with current technology. My position is exactly the same: today we don't understand it fully, and we can't do it yet, but I do believe there's a chance that one day we will understand it and be able to replicate it. Call it unnatural if you want, but there's a big chance that neither you nor I could tell the difference between the real and the artificial thing.

 

---

 

Let's say we can describe life one day with something we call "Rosen's Math" (in honor of his contribution). And this math can describe the complexity and the probabilities of the indeterministic system. With the help of Rosen's math we build a device that replicates the fundamentals of the processes that this math describes. Have we created a "Rosen's Machine"?


You keep bombarding me with questions Hans.

 

Let’s cut to the chase. If you were going to create a mind, then how would you do it? I once thought that the way to do it was by using computers. But now I suspect that life is a prerequisite for mind. So I suspect that if you want to make a mind, then you must also make an organism.

 

My main suspicion (and where I think we are missing one another) is that organisms are not machines. But to be clear and unambiguous about this I would have to tell you definitively what a machine is and what an organism is and why they are fundamentally different. Alas I am not yet able to do this with authority. At the moment I have an intuition only. And Rosen’s work resonates with that intuition.

 

Perhaps in a couple of more years I will be able to express myself more clearly.


You keep bombarding me with questions Hans.

 

Let’s cut to the chase. If you were going to create a mind, then how would you do it? I once thought that the way to do it was by using computers. But now I suspect that life is a prerequisite for mind. So I suspect that if you want to make a mind, then you must also make an organism.

Yes. I agree. And I think we'll be able to make an artificial organism, based on the same concepts, and understandings (including Rosen's) of what life and mind is.

 

My main suspicion (and where I think we are missing one another) is that organisms are not machines. But to be clear and unambiguous about this I would have to tell you definitively what a machine is and what an organism is and why they are fundamentally different. Alas I am not yet able to do this with authority. At the moment I have an intuition only. And Rosen’s work resonates with that intuition.

True. Organisms aren't machines in the traditional sense of what we mean by "machines". Biological organisms are not like the machines the way you think of them. When you think machine, you think of a deterministic machine built on Turing's ideas. But what would you like to call a human artifact that does not simulate and does not imitate biological events, but is a biological device? Do we have to invent a new word for it to be accepted?

 

I noticed that the word "machine" is despised because people think of it as a box with cogwheels. But what are neurons? Are neurons anything other than matter? Are there any events in the neuron that are outside of nature? So if we make a "xyz" (instead of using the word machine or device or artifact) that is built on artificial neurons, is that a living thing, or is it a machine? Does it have a mind, or is it dead and automatic? But if it constitutes indeterministic processes, how can we say it's automatic?

 

So what word would we use for the human-made, artificial Xyz that is built on the same concepts as biological neurons, and that learns to talk and walk and "think" the same way we learned to talk, walk and think? Since machine is out of the question. And a device is pretty much a machine. And an artifact is just a man-made, dead thing. We have no word to use.

 

 

Perhaps in a couple of more years I will be able to express myself more clearly.

I hope I will too. :grin:


Regarding determinism and contemporary science, I think books like "The End of Certainty" by Prigogine are necessary reading material.

 

But let me focus on the discussion at hand. Yeah, we have to try to understand how the brain works. And yes, developmental robotics is also something that has to be taken seriously. How can we expect that an artificially intelligent being will be smart right away, while we humans need so much time to do anything that may be called intelligent? One of those examples can be found in "Learning to bounce: first lessons from a bouncing robot" by Lungarella et al. They were inspired by a study by Goldfield et al. Kids, 6 months old, were secured in a harness and were allowed to explore this new, exciting setting. They - or their neural circuits :-) - started to explore this environment. First they kicked irregularly and variably, which was followed by a phase with bursts of periodic kicking and sustained bouncing, which was finally followed by a phase with bouncing sustained twice as long as before, and with considerably larger amplitude. Lungarella et al. used specific neural circuits with oscillatory characteristics, called central pattern generators, to study an implementation on a robot. They found that a fixed coupling between the hip, knee and ankle joints can explain the difficulty in adapting to the environment that a child shows in the first phase. The joint synchrony then enables the infant to find the basic pattern in the second phase. And finally, loosening the intersegmental coupling enables the child to exploit more complex spatiotemporal patterns. Another thing learned in this study is that the sensory feedback obtained by putting the feet on the floor is not really necessary for learning rhythmic activity, but that it may be needed for the sake of stability.
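The role of the coupling strength can be illustrated with a toy central pattern generator: three phase oscillators (think hip, knee and ankle, with invented preferred frequencies) whose mutual coupling can be tightened or loosened. This is a generic coupled-oscillator sketch, not the circuit Lungarella et al. actually used.

```python
import cmath, math

def cpg_synchrony(coupling, steps=2000, dt=0.01, freqs=(1.0, 1.1, 0.9)):
    """Three phase oscillators with slightly different preferred
    frequencies (Hz) and mutual sine coupling, integrated with Euler
    steps.  Returns the Kuramoto order parameter: 1.0 means the
    'joints' move in perfect synchrony, near 0 means they drift apart."""
    phases = [0.0, 2.0, 4.0]
    for _ in range(steps):
        phases = [p + dt * (2 * math.pi * w
                            + coupling * sum(math.sin(q - p) for q in phases))
                  for p, w in zip(phases, freqs)]
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)
```

With strong coupling the three joints lock into one rhythm, as in the infants' second phase; with the coupling loosened, each joint follows its own frequency again.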

 

I have another nice article that describes the neural architecture with respect to binocular vision, more specifically detecting distances with two eyes, subsequent feature extraction, and finally place cells that react to certain characteristic locations through which e.g. a rat moves in its environment. It's called "Bio-inspired Networks of Visual Sensors, Neurons and Oscillators" by Ghosh et al. The approach is quite mathematical in nature, but it's a nice illustration of the things that science is already able to infer.

 

Most promising, however, is that we will be able to start designing systems as meta-designers. So, it may be convenient to have a mathematical description of the visual processing pipeline in the mammalian brain, in the form of a principal component analysis*, but in the end we are gonna build the system with a neural architecture anyway. This is not often done, I think currently only in robotics, but it will be the best approach in the end. The design space in a lot of problem domains is just too big to take all dimensions into account as a human designer.

 

* Principal component analysis is a fancy term for an easy decomposition. A dynamic pattern is like a movie. A movie consists of a lot of frames. Storing each frame would occupy a lot of space, so some kind of reduction is needed. For that, a set of template frames is used. A real-world frame can then be mapped onto the set of template frames, each with its own weight. Even if a bunch of template frames is used, this is still a reduction compared to the overwhelming richness all natural images may show.
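A minimal sketch of finding the first such "template" direction by power iteration on the covariance matrix (2-D points instead of image frames, for brevity; the data set is invented):

```python
import math

def first_principal_component(data, iters=100):
    """Dominant principal direction of a 2-D data set, found by power
    iteration on the covariance matrix; no linear-algebra library needed."""
    n = len(data)
    mean = [sum(col) / n for col in zip(*data)]
    centred = [[x - m for x, m in zip(row, mean)] for row in data]
    # 2x2 covariance matrix of the centred data
    cov = [[sum(r[i] * r[j] for r in centred) / n for j in range(2)]
           for i in range(2)]
    v = [1.0, 0.0]
    for _ in range(iters):
        # repeatedly apply the covariance matrix and renormalize;
        # this converges to the direction of greatest variance
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = math.hypot(w[0], w[1])
        v = [w[0] / norm, w[1] / norm]
    return v

# Points scattered along the diagonal y = x: the first component is the
# diagonal, so each point can be summarised by a single weight.
points = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9], [5.0, 5.0]]
direction = first_principal_component(points)
```

Projecting each point onto `direction` gives the one-number-per-point summary the footnote describes; with movie frames the vectors are just much longer.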


This topic is now closed to further replies.