Near-Earth object identification and characterization via spacecraft currently relies on collecting a large number of images, downlinking all of them to Earth, and then running image processing algorithms on the ground to analyze the data sets. This mission architecture requires large amounts of onboard storage and high downlink bandwidth, and it introduces a long turnaround time before the same spacecraft platform can perform follow-up operations. Such an approach is untenable for platforms that operate under heavy mass, power, and storage constraints, as well as for spacecraft in orbits with minimal communication opportunities. To facilitate these types of missions, we are adapting terrestrial image analysis algorithms, optimized for the spacecraft computing environment, to autonomously identify and track near-Earth objects onboard. We are applying these algorithms to training image data from previous missions to obtain performance metrics, including how the algorithms scale to the reduced processing power of a spacecraft computer.
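The abstract does not specify the detection algorithm itself, but the core onboard task it describes can be illustrated with a minimal sketch: identifying a moving point source in an image sequence by thresholding differences between consecutive frames. Note that the function name, the frame-differencing approach, and the synthetic data below are illustrative assumptions, not the method used in this work.

```python
import numpy as np

def detect_moving_object(frames, threshold=50):
    """Minimal moving-object detector (illustrative sketch, not the
    mission's algorithm): for each pair of consecutive frames, flag
    pixels that brightened by more than `threshold` and return the
    centroid of those pixels, i.e. the object's new position."""
    detections = []
    for prev, curr in zip(frames, frames[1:]):
        # Signed difference: positive values mark pixels that brightened,
        # which localizes the object's new position only.
        diff = curr.astype(np.int32) - prev.astype(np.int32)
        ys, xs = np.nonzero(diff > threshold)
        if ys.size:
            detections.append((float(ys.mean()), float(xs.mean())))
        else:
            detections.append(None)  # no motion detected in this pair
    return detections

# Synthetic example: a bright point drifting across a noisy 64x64 field.
rng = np.random.default_rng(0)
frames = []
for t in range(5):
    img = rng.integers(0, 10, size=(64, 64)).astype(np.uint8)  # background noise
    img[10 + 2 * t, 20 + 3 * t] = 255  # the moving "object"
    frames.append(img)

tracks = detect_moving_object(frames)  # one (row, col) fix per frame pair
```

A scheme like this suits constrained spacecraft computers because it needs only integer arithmetic and two frames in memory at a time, and it lets the platform downlink a short list of candidate detections rather than every raw image.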