Surveyor Robotics Journal
   



Mon, 08 Jan 2007

first steps in enabling adaptive / emergent behavior in the SRV-1

I have recently been thinking about functions we might add to the firmware that would enable the robot to locate its position based on visual cues and navigate through an area with a sense of purpose. We need this capability for RoboCup soccer play as well as a variety of other activities (self-guided remote monitoring, object searches, execution of specific tasks, etc.). This is not a new problem - many researchers have focused energy on "simultaneous localization and mapping" (SLAM), and a number of interesting techniques have been explored. My original thought was to develop some kind of map storage mechanism and some pattern matching routines, which would give us a base level of functionality, but such an approach wouldn't easily adapt to changes in the environment or varying lighting conditions.

Though both were written 20+ years ago, two books that have shaped my thinking on this subject are "Vehicles: Experiments in Synthetic Psychology" by Valentino Braitenberg (1984) and "Self-Organization and Associative Memory" by Teuvo Kohonen (1987) (plus I ordered a more recent volume by the same author called "Self-Organizing Maps"). Both explore artificial neural network structures that might apply nicely to the types of capabilities we wish to develop.
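To make the Braitenberg idea concrete, here is a minimal sketch of his "Vehicle 2b" control rule: two sensors with crossed excitatory connections to the motors, which steers the vehicle toward the stronger stimulus. The sensor values, gain, and function names are made up for illustration - this is not SRV-1 firmware code.

```python
# Sketch of a Braitenberg "Vehicle 2b": two light sensors with crossed
# excitatory connections to the motors, so the vehicle turns toward a
# stimulus. Sensor readings and gain are hypothetical values.

def vehicle_2b_step(left_sensor, right_sensor, gain=1.0):
    """Return (left_motor, right_motor) speeds for one control step.

    Crossed wiring: the left sensor drives the right motor and vice
    versa, which turns the vehicle toward the stronger stimulus.
    """
    left_motor = gain * right_sensor
    right_motor = gain * left_sensor
    return left_motor, right_motor

# Stimulus stronger on the left -> right motor spins faster -> turn left
print(vehicle_2b_step(0.8, 0.2))  # (0.2, 0.8)
```

The appeal of this style of architecture is that seemingly purposeful behavior falls out of a couple of wires - no map, no planner - which is exactly the spirit of the adaptive behaviors discussed here.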

In particular, there are some specific functions which I'd like to add:
  • color classification that can adapt to various lighting conditions, textures, and multi-color patterns (think of a checkerboard floor or a mixed color carpet) in place of our current fixed threshold approach
  • shape/pattern recognition, so that we might discriminate blobs of like color that have different forms
  • spatial localization, based on the occurrence of features that the robot has previously "learned", as well as adaptation to changes
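To illustrate the first item, here is a toy sketch of adaptive color classification using a small one-dimensional Kohonen self-organizing map over RGB pixels, in the spirit of Kohonen's books. The training colors, map size, and learning parameters are all invented for the example - nothing here comes from the SRV-1 firmware.

```python
import random

# Toy 1-D Kohonen self-organizing map over RGB pixels: a sketch of how
# learned color classes might replace fixed thresholds. All samples and
# parameters are illustrative.

def train_som(samples, n_nodes=4, epochs=50, lr=0.3, radius=1):
    random.seed(0)
    nodes = [[random.random() * 255 for _ in range(3)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        for rgb in samples:
            # best-matching unit: the node closest to the input pixel
            bmu = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][c] - rgb[c]) ** 2 for c in range(3)))
            # pull the BMU (and its chain neighbors, at half strength)
            # toward the sample, with the learning rate decaying over time
            for i in range(max(0, bmu - radius), min(n_nodes, bmu + radius + 1)):
                a = lr * (1.0 - epoch / epochs) * (1.0 if i == bmu else 0.5)
                for c in range(3):
                    nodes[i][c] += a * (rgb[c] - nodes[i][c])
    return nodes

def classify(nodes, rgb):
    """Return the index of the learned color class nearest this pixel."""
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][c] - rgb[c]) ** 2 for c in range(3)))

# hypothetical training pixels: noisy greens and oranges
samples = [(30, 180, 40), (40, 190, 50), (25, 170, 35),
           (230, 120, 20), (240, 110, 30), (220, 130, 25)]
nodes = train_som(samples)
print(classify(nodes, (35, 185, 45)), classify(nodes, (235, 118, 22)))
```

After training, pixels from the same color family should land on the same node even as brightness drifts, which is the kind of robustness a fixed threshold can't offer.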

In all cases, it would seem that artificial neural network techniques might be appropriate, especially if we can scale the input and output dimensions to keep the computational requirements within reasonable bounds. I found a nice tool for modeling some of these techniques - the Python-based "Conx" code found in the Pyro Robotics toolkit. Pyro has a forthcoming interface to the SRV-1, plus we're already working in Python with the pySRV1Console, so this should be helpful for modeling some of the techniques on the host before committing to an implementation in firmware on the robot. Once we have something interesting to test, we'll post the results here in the journal.
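The dimension-scaling point can be sketched simply: average each camera frame down to a coarse grid of block means before feeding it to a network, so the input stays small. The frame size and block counts below are illustrative choices, not SRV-1 camera specs.

```python
# Sketch of input dimension scaling: reduce a grayscale frame to an
# 8x8 grid of block averages, so a network sees 64 inputs instead of
# thousands. Frame dimensions here are illustrative, not SRV-1 specs.

def block_average(frame, rows, cols, out_rows=8, out_cols=8):
    """frame: flat list of rows*cols brightness values, row-major."""
    bh, bw = rows // out_rows, cols // out_cols
    out = []
    for br in range(out_rows):
        for bc in range(out_cols):
            total = 0
            for r in range(br * bh, (br + 1) * bh):
                for c in range(bc * bw, (bc + 1) * bw):
                    total += frame[r * cols + c]
            out.append(total / (bh * bw))
    return out

frame = [100] * (64 * 80)  # uniform 64-row x 80-column test frame
features = block_average(frame, 64, 80)
print(len(features))  # 64
```

The same trick works on the output side - e.g. a handful of steering classes rather than a dense motor command space - which is what keeps these techniques plausible on a small embedded processor.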

Posted Mon, 08 Jan 2007 14:25