The proposed solution exploits recent advances in computer vision to produce a vision-based navigation solution using a single camera plus a gyro and accelerometer, with lightweight processing (requiring only a single optical flow sample per frame). Planetary exploration worker robots will know the absolute positions of known landmarks (natural or artificial). A worker robot can identify its absolute position by observing known landmark features and deriving range from the raw attitude sensor data and the video stream. By observing one or more landmark features during camera motion, the uncertainty of the range and bearing from the vehicle can be estimated. Each range/bearing measurement to a known landmark acts as a constraint on the camera position in the landmark navigation space (which may be arbitrarily defined and need not be oriented the same as the global navigation frame). Combining these constraints with the worker's rough knowledge of its own position further reduces the position error estimates. The single-camera passive ranging technology leverages Navy SBIR-funded work for early simulation tasks.
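As a minimal sketch of how range/bearing constraints to known landmarks pin down a position, the following assumes a 2D landmark frame, a known vehicle heading, and simulated noiseless measurements; function and variable names are illustrative, not part of the proposed system:

```python
import numpy as np

def estimate_position(landmarks, ranges, bearings, heading=0.0):
    """Position fix from range/bearing measurements to known landmarks.

    Each measurement implies a camera position (landmark minus the
    range/bearing displacement from camera to landmark). The estimate is
    the mean of those implied positions; their sample covariance serves
    as a crude position-uncertainty proxy, analogous to the constraint
    combination described above.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    # World-frame bearing = vehicle heading + camera-relative bearing.
    angles = heading + np.asarray(bearings, dtype=float)
    # Each landmark "votes" for a camera position.
    votes = landmarks - np.column_stack(
        (ranges * np.cos(angles), ranges * np.sin(angles)))
    position = votes.mean(axis=0)
    cov = np.cov(votes.T) if len(votes) > 1 else np.zeros((2, 2))
    return position, cov

# Camera at (1, 2) observing two landmarks at known positions:
pos, cov = estimate_position(
    landmarks=[(5.0, 2.0), (1.0, 6.0)],
    ranges=[4.0, 4.0],
    bearings=[0.0, np.pi / 2],
)
```

A real implementation would replace the per-landmark averaging with a weighted least-squares or filter update that accounts for the measurement covariances estimated during camera motion.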
NASA can use this technology for navigation by small robots on other planets, where GNSS, magnetometer, and UTC time may not be known with accuracy. NASA could also integrate this technology into a landmark navigation failsafe for its UAS fleet for when GPS and/or magnetometers are jammed or unavailable for environmental reasons.