progress report - Python console and neural nets
Sorry for not reporting sooner - we have a number of ongoing projects, so rather than leaving everyone in suspense, here's a progress report ...
- New Python-based console for SRV-1
We have a test version of the update for pySRV1Console - it works well on some systems, but has synchronization problems on other systems. Rather than holding onto this any longer, we are making a test version available, and would welcome feedback from users on how it performs - you can post feedback on the Surveyor Robotics Forum. Here's the download link:
On systems where it works well, pySRV1Console is providing a video frame rate which is close to the performance of the Java-based SRV1Console. The issue with Internet Explorer has been resolved, so this version should work with any browser, and the image processing commands are almost 100% functional.
If you want to run from source (pySRV1Console.py), you'll need Python 2.4 and the PySerial package (http://pyserial.sourceforge.net/). If you have Python installed, the console is started with:

python pySRV1Console.py -com /dev/cu.SLAB_USBtoUART    (Mac OS X)

python pySRV1Console.py -com COM4    (Windows)
If you don't have Python installed, there's a pySRV1Console.exe which should run on Windows without any Python libraries installed. The command line parameters are the same as the Java SRV1Console, with one addition: '-com' to specify the com port string:
pySRV1Console -com COM4
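For anyone curious how a flag like '-com' might be handled, here's a minimal sketch of parsing it from the command line. This is an illustrative helper, not the console's actual argument parser, and the error message wording is our own:

```python
def parse_com_port(argv):
    """Scan a console-style argument list for '-com <port>'.

    Returns the port string (e.g. 'COM4' or '/dev/cu.SLAB_USBtoUART'),
    or None if the flag is absent. Hypothetical sketch only.
    """
    args = iter(argv)
    port = None
    for arg in args:
        if arg == "-com":
            try:
                port = next(args)  # the value following the flag
            except StopIteration:
                raise SystemExit("-com requires a port name, e.g. COM4")
    return port
```

The same approach extends to the other SRV1Console parameters by matching additional flag names in the loop.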
Once it's running, the main console page is:
We hope to have the remaining performance issues resolved in the next week or so, and in the meantime, we welcome your feedback.
- Neural Nets
As discussed here a few weeks ago, we have been looking at ways to add adaptive / emergent behavior capabilities to the core functions of the SRV-1. It's still too early to report any significant breakthroughs, but we have produced some useful test results with simple backpropagation neural networks for color and shape classification, and we're currently exploring how we might also employ self-organizing maps (SOMs) as an alternative to, or in combination with, backpropagation networks. It is clear that we will be able to integrate core neural network functions into SRV-1 firmware - the main challenge is working out the structure of the commands and the flow of data through the SRV_protocol and built-in commands for the C and BASIC interpreters. My goal is to have a version of firmware available for testing some basic neural net functions in a few weeks.
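To give a sense of what a "simple backpropagation network for color classification" looks like, here's a small sketch: a one-hidden-layer network trained on RGB values (scaled 0..1) to produce a yes/no classification. This is an illustration of the general technique, not the code going into SRV-1 firmware, and all the names and parameters below are our own:

```python
import math
import random

def train_color_net(samples, epochs=2000, lr=0.5, hidden=4, seed=0):
    """Train a tiny 3-input backprop net. `samples` is a list of
    ((r, g, b), target) pairs with target in {0.0, 1.0}.
    Returns a classify(rgb) function producing a value in (0, 1)."""
    rng = random.Random(seed)
    # hidden weights: one row per hidden unit, 3 inputs plus a bias term
    w_h = [[rng.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(hidden)]
    # output weights: one per hidden unit plus a bias term
    w_o = [rng.uniform(-0.5, 0.5) for _ in range(hidden + 1)]
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))

    for _ in range(epochs):
        for (r, g, b), target in samples:
            x = [r, g, b, 1.0]                       # forward pass
            h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
            ho = h + [1.0]
            out = sig(sum(w * hi for w, hi in zip(w_o, ho)))
            # backpropagate the error through sigmoid derivatives
            d_out = (target - out) * out * (1.0 - out)
            d_h = [d_out * w_o[j] * h[j] * (1.0 - h[j]) for j in range(hidden)]
            for j in range(hidden + 1):              # update output weights
                w_o[j] += lr * d_out * ho[j]
            for j in range(hidden):                  # update hidden weights
                for i in range(4):
                    w_h[j][i] += lr * d_h[j] * x[i]

    def classify(rgb):
        x = list(rgb) + [1.0]
        h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
        return sig(sum(w * hi for w, hi in zip(w_o, h + [1.0])))
    return classify
```

Trained on a handful of "red" and "not red" samples, a net this small learns the separation quickly, which is part of what makes the approach plausible for firmware with limited memory and compute.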
Posted Thu, 25 Jan 2007 14:25
first steps in enabling adaptive / emergent behavior in the SRV-1
I have recently been thinking about the functions we might add to firmware that could enable the robot to locate its position based on visual cues and navigate through an area with a sense of purpose. We need this capability for RoboCup soccer play as well as a variety of other activities (self-guided remote monitoring, object searches, execution of specific tasks, etc). This is not a new problem - many researchers have focused energy on "simultaneous localization and mapping" (SLAM), and a number of interesting techniques have been explored. My original thought was to develop some kind of map storage mechanism and some pattern matching routines, which would give us a base level of functionality, but such an approach wouldn't easily adapt to changes in the environment or varying lighting conditions.
Though both were written 20+ years ago, two books that have shaped my thinking on this subject are "Vehicles: Experiments in Synthetic Psychology" by Valentino Braitenberg (1984) and "Self-Organization and Associative Memory" by Teuvo Kohonen (1987) (plus I ordered a more recent volume by the same author called "Self-Organizing Maps"). Both books essentially explore artificial neural network structures which might apply nicely to the types of capabilities we wish to develop.
In particular, there are some specific functions which I'd like to add:
- color classification that can adapt to various lighting conditions, textures, and multi-color patterns (think of a checkerboard floor or a mixed color carpet) in place of our current fixed threshold approach
- shape/pattern recognition, so that we might discriminate between blobs of like color that have different forms
- spatial localization, based on the occurrence of features that the robot had previously "learned", as well as adaptation to changes
In all cases, it would seem that artificial neural network techniques might be appropriate, especially if we can scale the input and output dimensions in order to keep the computational requirements within reasonable bounds. I found a nice tool for modeling some of these techniques: the Python-based "Conx" code included in the Pyro Robotics toolkit. Pyro has a forthcoming interface to the SRV-1, plus we're already working in Python with pySRV1Console, so this should be helpful in modeling some of the techniques on the host before committing to an implementation in firmware on the robot. Once we have something interesting to test, we'll post something here in the journal.
Posted Mon, 08 Jan 2007 14:25