Software
The autonomous controller I am writing for MicroRaptor is a totally new way (for me) of writing this type of code. For everything in this controller, I've tried to think about how a person would think.
The main areas in this "brain" are for motion control, short term memory, and long term memory. Short term memory is a very limited place (basically, 6-8 slots) to put things from long term memory that are being actively "thought" about, or worked on. For instance, the current goal being accomplished will take up one slot, and will get processing cycles. Part of accomplishing goals may involve going somewhere, so the current navigation engine will get a slot. The physical act of moving involves playing back "muscle memory", which is a fancy way of saying the forces applied by the actuators over a span of time, so the motion engine will get a slot. Doing landmark-based navigation involves recognizing certain patterns that the sensors "see", so the patterns being watched for get a slot as well.
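The slot idea above could be sketched something like this. This is just an illustration of the concept, not the actual controller code; the class and method names are my own invention here, and the FIFO eviction is an arbitrary choice for the sketch.

```python
class ShortTermMemory:
    """A fixed number of slots holding items pulled from long-term memory.

    Only items occupying a slot receive processing cycles each tick.
    """

    def __init__(self, capacity=8):
        # 6-8 slots, per the description above
        self.capacity = capacity
        self.slots = []  # items currently being "thought" about

    def load(self, item):
        """Bring an item from long-term memory into a free slot.

        If memory is full, evict the oldest item to make room
        (simple FIFO eviction, purely for this sketch).
        """
        if len(self.slots) >= self.capacity:
            self.slots.pop(0)
        self.slots.append(item)

    def tick(self):
        """Give each active item its share of processing cycles."""
        for item in self.slots:
            item.process()
```

The important property is that nothing outside the slots gets cycles: the goal, the navigation engine, the motion engine, and the watched-for sensor patterns each pay for their attention by occupying a slot.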
Some aspects of this version of the system are much more hand-coded and pre-configured than the real version will be. Unfortunately, the gumstix just doesn't have the storage capacity that the real system will have, so things have to be a little more explicit, and self-directed learning will be mostly suppressed for now.
This version of the system is intended to prove that the overall architecture works, and that the motion system I am building does the job. The motion system is the one part of this version of the controller that will be all-out in terms of capabilities. Once the robot can walk at a basic level, it will be able to self-tune motion profiles for smoothness and efficiency, using a technique that will look something like an evolutionary system.
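A minimal sketch of what "something like an evolutionary system" could mean: repeatedly mutate the current motion profile, score the variants, and keep the best one. Everything here is an assumption on my part - in particular the `score` function, which on the real robot would have to come from sensor feedback about smoothness and efficiency rather than from a formula.

```python
import random

def mutate(profile, scale=0.05):
    """Return a slightly perturbed copy of a motion profile.

    A profile here is just a list of actuator force values over time
    (the "muscle memory" described earlier).
    """
    return [f + random.gauss(0, scale) for f in profile]

def self_tune(profile, score, generations=100, brood=8):
    """Evolutionary-style tuning: each generation, try several mutated
    variants of the current best profile and keep the top scorer.

    `score` rates a profile (higher is better); it stands in for real
    smoothness/efficiency measurements from the robot's sensors.
    """
    best, best_score = profile, score(profile)
    for _ in range(generations):
        for candidate in (mutate(best) for _ in range(brood)):
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
    return best
```

This is a (1+n) hill-climber rather than a full genetic algorithm - no crossover, no population - but it captures the shape of the idea: the robot walks, measures, and gradually drifts its muscle memory toward better gaits.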
The goal system and the navigation system are basically special cases of the general knowledge representation system I am building. Navigation will be self-directed, but the map will have to be set up manually. The map system is vector-based, and the robot will make no effort to build an accurate 3D map, nor will it ever attempt to determine exactly where it is. The simple fact of the matter is, building "accurate" 3D maps of the robot's environment suffers from the same problem that actuator rigidity suffers from - the environment is far too dynamic and ever-changing to be worth the effort.
Think about how we get to someone's house, if we've never been there before... "Take a left at Jackson Street, then the second stop sign is where you turn right onto Builders Lane, and my house is the fourth one on the left, and there will be a Jeep parked out on the road, and I've got a basketball net on the side of my driveway, and the street number is 442."
Think about how a typical robot would solve this:
"Turn to bearing 275 degrees at GPS coordinate XX.XXXX, YY.YYYY, continue on that bearing for 326.3 meters, turn on a bearing of 72 degrees, continue on that bearing for 183 meters, then stop."
This robot will navigate much more like the first example, although on a much smaller scale. Take for example the firefighting competition - most robots do exactly what the second example does, using precise encoders to measure exactly how far the robot has gone before it has to turn. From my perspective, it makes a lot more sense to say something like "Go straight, maintain a distance of 20 cm from the right wall, and continue until the left rangefinder reports a large change in value, which indicates a doorway."
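That "follow the wall until the rangefinder sees a doorway" behavior could be sketched like this. The `sensors` and `drive` objects, the method names, and the steering gain are all stand-ins I've invented for illustration - the real controller's interfaces will look different.

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def follow_wall_to_doorway(sensors, drive, target=20.0, jump=15.0):
    """Landmark-style navigation sketch: hold `target` cm from the right
    wall and stop when the left rangefinder reading jumps by more than
    `jump` cm, which signals an open doorway on the left.
    """
    last_left = sensors.left_range()
    while True:
        left = sensors.left_range()
        if left - last_left > jump:
            # large change in range = doorway landmark reached
            drive.stop()
            return
        # simple proportional steering to hold distance from the right wall
        error = sensors.right_range() - target
        drive.forward(steer=clamp(error * 0.1, -1.0, 1.0))
        last_left = left
```

Notice there is no odometry anywhere in it: the stop condition is a recognized feature of the environment, not an encoder count, which is exactly the difference between the two driving-directions examples above.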
Long term memory is a place to store things that the robot will need when it is trying to accomplish a goal. The controller will be able to look things up in long term memory associatively, by following connections. In the beginning, many of those connections will be hard-coded, but eventually new connections will be made. Long term memory will be stored in an object-oriented database, so the things it learns, and the connections it makes, will persist between sessions. When this robot is powered up, it will not be starting with a blank slate. One of the first things I will have to tell it, each time I power it up, is where it is. It will have a representation of the world it knows about (vector-based, with nodes and paths), so once it knows where it is, it will be able to figure out how to get anywhere else in that "world".
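The nodes-and-paths world representation can be sketched as a plain graph of named landmarks, with routing as a breadth-first search over the connections. The landmark names and the BFS are illustrative choices of mine, not a description of the real knowledge representation system, which will be richer than a bare adjacency map.

```python
from collections import deque

class WorldMap:
    """Landmark nodes connected by paths - a toy stand-in for the
    vector-based world representation described above."""

    def __init__(self):
        self.paths = {}  # node name -> set of directly connected nodes

    def connect(self, a, b):
        """Record a traversable path between two landmarks."""
        self.paths.setdefault(a, set()).add(b)
        self.paths.setdefault(b, set()).add(a)

    def route(self, start, goal):
        """Breadth-first search over landmark connections.

        Returns the node-by-node route, or None if unreachable.
        """
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in sorted(self.paths.get(path[-1], ())):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None
```

This is why telling the robot where it is at power-up is enough: given a start node, every other place in its "world" is reachable by following stored connections, with no global coordinates involved.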