|Surveyor Robotics Journal|
Tue, 29 Aug 2006
some thoughts about the SRV-1 communication protocol
The current communication protocol between the SRV-1 robot and the base station host is quite simple - we've defined a set of single byte commands that control robot motion, video resolution, wander mode, etc, and associated a set of buttons with those commands. The definition of the commands can be found in the main SRV1Console directory on the host in the srv.config file.
The commands are defined in hex, but if you translate these to ASCII, you'll see that all of the commands are simple keystrokes, so if you had a terminal program connected to the USB radio's serial port on the host, you could type the commands and control the robot directly.
In fact, if you have a numeric keypad, you'll find that the '8' key (hex 38) drives the robot forward. The '7' key is "drift left", the '9' is "drift right", the '4' is "turn left", the '5' is "stop", etc ...
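To make the keystroke-to-command correspondence concrete, here is a small Python sketch; the action names are transcribed from the keys listed above, and the dictionary itself is illustrative, not the actual contents of srv.config:

```python
# The single-byte SRV-1 motion commands are plain ASCII keystrokes, so the
# hex values defined in srv.config line up directly with keyboard characters.
MOTION_COMMANDS = {
    b"8": "drive forward",   # hex 0x38
    b"7": "drift left",
    b"9": "drift right",
    b"4": "turn left",
    b"5": "stop",
}

# hex 38 in srv.config is simply the ASCII code for the '8' keystroke
assert ord("8") == 0x38
```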
Look at srv.config and you'll see a series of definitions like these, which set the order in which the buttons appear in SRV1Console and match the button graphics with the actual commands -
If you were connecting to the radio through a serial port, you would see a spew of weird characters that actually represent the binary data in the stream of JPEG frames being sent from the robot. If you could slow down the flow of characters, you would see that each frame starts with a 12-byte header. The first 4 bytes are the characters "SVFR" (short for Surveyor video frame), followed by a 4-byte image type - "IMJ1" is 80x64 JPEG, "IMJ3" is 160x128 JPEG, "IMJ5" is 320x240 JPEG - followed by a 4-byte length, which is the size of the JPEG frame in bytes (sent low byte first). We've defined some other image types, including raw uncompressed pixels and binary images that are useful for showing thresholded pixel data after image processing, but we only use those in testing, so they aren't part of typical SRV-1 operation.
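The header layout described above can be captured in a short parser. This is an illustrative sketch, not code from SRV1Console - the function name and error handling are my own, and the resolution table simply transcribes the image types listed above:

```python
import struct

# Resolutions for the image types described in the post.
RESOLUTIONS = {
    b"IMJ1": (80, 64),
    b"IMJ3": (160, 128),
    b"IMJ5": (320, 240),
}

def parse_frame_header(header: bytes):
    """Parse a 12-byte SRV-1 video frame header.

    Layout: "SVFR" magic (4 bytes), image type (4 bytes),
    JPEG frame length (4 bytes, sent low byte first).
    Returns (width, height, jpeg_length).
    """
    if len(header) != 12 or header[:4] != b"SVFR":
        raise ValueError("not an SRV-1 video frame header")
    image_type = header[4:8]
    if image_type not in RESOLUTIONS:
        raise ValueError("unknown image type %r" % image_type)
    # 4-byte length, low byte first == little-endian unsigned int
    (length,) = struct.unpack("<I", header[8:12])
    width, height = RESOLUTIONS[image_type]
    return width, height, length
```

After parsing the header, the host would read exactly `length` more bytes to get the complete JPEG frame.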
As noted above, this is a very simple interface. However, we're starting to find that it might be a bit too simple: there's no handshake or acknowledgment mechanism between the host and robot, and because the radio is effectively half-duplex (the Zigbee radio really doesn't support sending and receiving at the same time, though it sometimes works), some commands sent from the host are getting clobbered by the JPEG data and never reach the robot.
We're considering a change to a handshake-based protocol, so that every command sent from the host receives an acknowledgment, and video frames are only sent in response to a frame request. We didn't use this approach originally because of the latencies it would create in receiving freshly captured frames. However, with the increase in camera interface speed from 115kbps to 921kbps, the delays in pulling frames from the camera are greatly reduced, and the robot can respond quickly with a fresh frame when a frame request is received. This approach also simplifies the interface on the host side, as the host code will no longer have to parse incoming data to find frame headers or try to extract other data from incoming streams.
We haven't settled on a format for the messages yet, but would like to keep them simple, so that a terminal program can still be used for debugging. The commands could still be single byte, and the ACK response might be a simple 1- or 2-character header, e.g. # or ## followed by the command. For example, '8' to drive forward would get the response #8 or ##8, and the command to grab a 160x128 video frame would receive the SVFRIMJ3xxxx ....
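As a sketch of how the proposed handshake might look from the host side - the class names, the single '#' ACK prefix, and the loopback transport are all assumptions for illustration, since the format hasn't been settled:

```python
class AckLink:
    """Send a single-byte command and wait for a '#'-prefixed echo.

    'transport' is anything with write() and read(n) methods, e.g. a
    pyserial Serial object opened on the USB radio's port. All names
    here are illustrative; the real format is still under discussion.
    """

    def __init__(self, transport):
        self.transport = transport

    def send_command(self, cmd: bytes) -> bool:
        self.transport.write(cmd)
        ack = self.transport.read(1 + len(cmd))
        return ack == b"#" + cmd      # e.g. sending b"8" expects b"#8"


class LoopbackTransport:
    """Stand-in for the radio link that ACKs every command, for testing."""

    def __init__(self):
        self._buf = b""

    def write(self, data: bytes) -> None:
        self._buf += b"#" + data

    def read(self, n: int) -> bytes:
        out, self._buf = self._buf[:n], self._buf[n:]
        return out
```

On a real half-duplex radio, the host would also retry a command whose ACK never arrives - which is exactly the failure mode the current protocol can't detect.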
A change in this direction seems like a good idea, but I thought it would be worthwhile to ask for comments from code developers who are already working (or considering working) with the SRV-1 to see if there are any issues or suggestions. Please email your thoughts to email@example.com subject:SRV-1 protocol. Thanks!