Center Independent Research & Development: GSFC IRAD

Advancements in Optical Navigation Capabilities

Completed Technology Project

Project Description

The logo for the Goddard Image Analysis and Navigation Tool (GIANT)

The Goddard Image Analysis and Navigation Tool (GIANT) was developed for the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) mission to allow GSFC civil servants to perform independent verification and validation of the OSIRIS-REx navigation products.  Before GIANT, Optical Navigation (OpNav) capabilities at Goddard were limited and spread across many tools with many different maintainers.  GIANT corrects this by providing a state-of-the-art OpNav tool suite in a single, extensible Python package.

GIANT serves as the preprocessing step required to include OpNav measurements in orbit determination (OD) software.  It ingests OpNav images, performs image processing on them, and extracts observables that are then passed to an OD tool.

The objective of this IRAD is to enhance GIANT with more advanced capabilities.  Specifically, we will add image crossover, scale and rotation cross correlation, limb-based navigation, and weighted attitude/geometric distortion estimation capabilities to the existing GIANT package.

Image crossover measurements refer to the process of identifying the same feature in multiple images.  This correspondence forms a strong geometric constraint on the location and orientation of the camera between the images in an orbit determination solution.  We will update GIANT to generate these crossover measurements through both standard surface feature navigation methods and feature-descriptor methods from the field of computer vision.
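
As an illustration of the simplest form of crossover identification, the pure-NumPy sketch below locates a feature template extracted from one image inside a second image by normalized cross-correlation.  This is an illustrative sketch, not GIANT's implementation; a descriptor-based method (e.g., ORB or SIFT matching) would replace the exhaustive window search shown here.

```python
import numpy as np

def find_feature(image, template):
    """Locate a feature template in an image by normalized cross-correlation.
    Returns the upper-left corner of the best-matching window and its score."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_rc = -np.inf, (0, 0)
    # exhaustive search over every window the template could occupy
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.linalg.norm(w) * np.linalg.norm(t)
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score
```

In practice the matched pixel locations of the same feature in each image would become the crossover observables handed to the orbit determination filter.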

The primary workhorse for relative OpNav is cross correlation.  In cross correlation, the a priori relative position and orientation of the camera with respect to a celestial object is used to generate a predicted image of the object.  The predicted image is then correlated with the actual image to locate the object in the field of view.  This correlation typically captures only translational effects and does not account for changes in scale (distance to the object) or rotation.  Scale and rotation cross correlation adapts the ideas of translational cross correlation to handle rotation and scale offsets.  We will add these techniques to GIANT and combine them with the translational correlation techniques to create a more robust correlation routine.
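
One common way to realize scale and rotation cross correlation is the Fourier-Mellin approach: the magnitude of an image's Fourier transform is invariant to translation, and resampling it onto a log-polar grid turns rotation and scale changes into simple shifts that ordinary phase correlation can recover.  The sketch below is a hedged illustration of that idea, not GIANT's actual routine; `logpolar` uses crude nearest-neighbor sampling, and a production version would interpolate and window the spectra.

```python
import numpy as np

def phase_correlate(a, b):
    """Return the (row, col) shift that aligns image b to image a."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12  # keep only the phase
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # wrap peaks in the upper half of each axis into negative shifts
    wrap = peak > np.array(a.shape) / 2
    peak[wrap] -= np.array(a.shape, dtype=float)[wrap]
    return peak

def logpolar(img, n_angles=180, n_radii=100):
    """Resample an image onto a log-polar grid about its center
    (nearest-neighbor sampling, for illustration only)."""
    rows, cols = img.shape
    cy, cx = rows / 2.0, cols / 2.0
    max_r = min(cy, cx)
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    radii = max_r ** (np.arange(n_radii) / n_radii)  # log-spaced radii
    r, t = np.meshgrid(radii, thetas)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, rows - 1)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, cols - 1)
    return img[ys, xs]

def rotation_scale(a, b):
    """Estimate rotation (degrees) and scale of b relative to a: a rotation
    becomes a shift along the angle axis of the log-polar FFT magnitude,
    and a scale change becomes a shift along the radius axis."""
    mag_a = logpolar(np.abs(np.fft.fftshift(np.fft.fft2(a))))
    mag_b = logpolar(np.abs(np.fft.fftshift(np.fft.fft2(b))))
    d_theta, d_logr = phase_correlate(mag_a, mag_b)
    angle = 180.0 * d_theta / mag_a.shape[0]
    scale = (min(a.shape) / 2.0) ** (d_logr / mag_a.shape[1])
    return angle, scale
```

Once rotation and scale are removed, a final translational phase correlation locates the object in the field of view, which is the combined routine the paragraph above describes.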

Limb-based OpNav measurements provide a three-degree-of-freedom (3DOF) relative position estimate by identifying points along the limb of a tri-axial ellipsoid body in an image and then combining the limb points with a model of the body to generate the position estimate.  There are two current state-of-the-art techniques for this type of measurement: limb scanning and ellipse matching.  We will implement both limb scanning and ellipse matching in GIANT.  We will also investigate extending limb-based OpNav to irregular bodies (such as asteroids and comets).
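
A core building block of ellipse matching is fitting the apparent limb ellipse to the extracted limb points.  The sketch below is illustrative, not GIANT's implementation: it fits a general conic by linear least squares and recovers the ellipse center, which, together with the camera and body models, is what constrains the relative position.

```python
import numpy as np

def fit_ellipse_center(x, y):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to limb points
    by linear least squares, then return the ellipse center, i.e. the
    point where the conic's gradient vanishes."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    a, b, c, d, e = coeffs
    # solve [[2a, b], [b, 2c]] @ center = [-d, -e] for the center
    center = np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]),
                             np.array([-d, -e]))
    return center
```

This particular normalization (constant term fixed to 1) assumes the ellipse does not pass through the pixel-coordinate origin, which holds for a body imaged away from that corner; robust fits would also reject outlier limb points.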

The current attitude and geometric distortion estimators in GIANT only consider the uncertainty in the measured star locations in an image when performing the estimation.  We will re-implement these estimators so they are capable of also handling the uncertainty of the star positions and motions reported by the star catalogues in the estimation.  This will allow for an easy method to de-weight stars with poor positional knowledge and keep them from negatively affecting the estimation solutions.
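
The classical formulation behind this kind of weighted attitude estimation is Wahba's problem, solved here with Davenport's q-method, in which each observed/catalogue star pair simply carries its own weight.  This is an illustrative sketch rather than GIANT's estimator: de-weighting a star with poor catalogue positional knowledge amounts to assigning it a small weight.

```python
import numpy as np

def weighted_q_method(body_vecs, ref_vecs, weights):
    """Davenport q-method: weighted least-squares attitude from unit-vector
    pairs (e.g. observed star directions vs. catalogue directions).
    Returns the optimal quaternion in scalar-last [x, y, z, w] order."""
    # weighted attitude profile matrix
    B = sum(w * np.outer(b, r)
            for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    # the optimal quaternion is the eigenvector of K with the largest eigenvalue
    _, vecs = np.linalg.eigh(K)
    q = vecs[:, -1]
    return q / np.linalg.norm(q)
```

Lowering a star's weight shrinks its contribution to B, so a star with large catalogue uncertainty barely perturbs the attitude solution, which is exactly the de-weighting behavior described above.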

