Center Independent Research & Development: GSFC IRAD

Real-Time Analytics Test System for Distributed Spacecraft Missions

Completed Technology Project

Project Description

Upcoming distributed spacecraft missions (DSMs) will produce unprecedented amounts of data during both integration and test (I&T) and flight, overwhelming current command/telemetry systems. The volume of data, coupled with multiple spacecraft to integrate and test, could increase I&T cost and risk at a time when we are trying to reduce overall mission costs. The goal of this research project is to develop a working prototype of a software system that runs on a cluster of commodity hardware and can, in real time, process and visualize 10 Gbit/s of science data and reliably store and retrieve 1 TB of science data per day. Using a real-time analytics platform for system testing will reduce overall project cost and risk by enabling in-test decision making and interactive data exploration and, most importantly, by allowing questions to be asked about our data, and ultimately about our flight system, that we could not ask before.
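
To put these targets in perspective, the back-of-envelope Python sketch below relates the 10 Gbit/s ingest goal to the 1 TB/day storage goal. The eight-node cluster size is an illustrative assumption, not a project figure.

```python
# Back-of-envelope arithmetic for the stated performance targets.
# Only the 10 Gbit/s and 1 TB/day figures come from the project goals;
# the 8-node cluster size is an illustrative assumption.

INGEST_BITS_PER_SEC = 10e9        # goal: process 10 Gbit/s of science data
STORED_BYTES_PER_DAY = 1e12       # goal: store/retrieve 1 TB per day
NODES = 8                         # hypothetical commodity-cluster size

ingest_bytes_per_sec = INGEST_BITS_PER_SEC / 8        # 1.25 GB/s
stored_bytes_per_sec = STORED_BYTES_PER_DAY / 86_400  # ~11.6 MB/s

print(f"ingest:  {ingest_bytes_per_sec / 1e9:.2f} GB/s total, "
      f"{ingest_bytes_per_sec / NODES / 1e6:.0f} MB/s per node")
print(f"archive: {stored_bytes_per_sec / 1e6:.1f} MB/s sustained")
print(f"ratio:   {ingest_bytes_per_sec / stored_bytes_per_sec:.0f}:1 "
      "(most data must be reduced in-stream, not archived raw)")
```

The roughly 100:1 gap between the ingest and archive rates is the arithmetic behind in-stream processing: only a small fraction of what flows through the system can be written to disk as-is.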

Projects typically select and use a command and telemetry system for four applications: (1) send a stream of commands (control), (2) receive and display a stream of telemetry (status), (3) convert that telemetry into human-readable units (convert), and (4) monitor the converted telemetry for potentially dangerous values (alert). A system designed for these four purposes – control, status, convert, and alert – is not designed to provide the meaningful secondary data products needed to understand the performance of a spacecraft-based observatory. Nor is it designed to handle the extremely high data rates that result from the combined data feed of the multiple spacecraft in a distributed spacecraft mission. As a result, these systems are heavily supplemented with offline data processing tools.
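
As a concrete illustration of the convert and alert functions, a minimal sketch is shown below. It applies a polynomial engineering-unit conversion to raw counts and checks the result against limits; the mnemonic, coefficients, and limit values are hypothetical examples, not values from any flight database.

```python
# Minimal sketch of the "convert" and "alert" telemetry functions.
# The mnemonic, coefficients, and limits are hypothetical examples.

COEFFS = {"BATT_TEMP": (-50.0, 0.02)}   # degC = c0 + c1 * raw_counts
LIMITS = {"BATT_TEMP": (-10.0, 45.0)}   # (red-low, red-high) in degC

def convert(mnemonic: str, counts: int) -> float:
    """Convert raw counts to engineering units via a polynomial."""
    c0, c1 = COEFFS[mnemonic]
    return c0 + c1 * counts

def alert(mnemonic: str, value: float) -> bool:
    """Return True if the converted value is outside its limits."""
    lo, hi = LIMITS[mnemonic]
    return not (lo <= value <= hi)

raw = 3100                              # raw counts from a telemetry frame
temp = convert("BATT_TEMP", raw)        # -> 12.0 degC
print(temp, "ALERT" if alert("BATT_TEMP", temp) else "ok")
```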


Offline data processing typically requires a data archive of all the raw and converted telemetry, a system for requesting the data, and a diverse set of custom-built tools for analyzing it. The custom-built tools are often developed in MATLAB, IDL, Python, Excel, or other desktop applications. Once the spacecraft are close to complete, these tools are converted into a language like C or Java so that data produced “in-flight” can be batch-processed more efficiently into secondary data products for consumption by the scientific community.
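
The round trip implied by this workflow might look like the sketch below; query_archive is a hypothetical stand-in for a real archive interface, where a request can take minutes to hours before analysis can even begin.

```python
# Sketch of the offline pattern: request archived telemetry, wait for
# delivery, then analyze on a desktop. query_archive is a hypothetical
# stand-in; the canned samples replace a slow archive round trip.

import statistics

def query_archive(mnemonic, start, stop):
    """Stand-in for an archive request that may take minutes to hours."""
    return [12.0, 12.4, 11.9, 12.1]

samples = query_archive("BATT_TEMP", "2015-001T00:00", "2015-002T00:00")
print(f"mean={statistics.mean(samples):.2f} "
      f"stdev={statistics.pstdev(samples):.2f}")
```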


There are three fundamental problems with relying on offline data processing tools to analyze performance during ground-based system testing: (1) the time it takes to produce performance information disqualifies it from informing any decisions that must be made while a test is running; (2) the tools typically used in offline data processing are designed as single-user desktop applications and therefore do not scale to large data sets; and (3) the turnaround time required to restructure a data inquiry often prohibits data exploration.


The real-time analytics test system is well suited to address key requirements of distributed spacecraft missions that current technologies cannot meet. Here are some of the ways this technology will enable future distributed missions:

  • The system targets real-time processing of 10 Gbit/s on commodity computers – enough to saturate current network infrastructure.


  • Our current set of testing tools limits the way we test and the types of questions we can ask during testing. By developing a real-time analytics test infrastructure designed from the ground up to handle extremely high data rates, we can enable in-test decision making, data exploration, and the storage and retrieval of all data measured and collected against a system over its entire lifetime. Instead of running procedures to collect a specific measurement for a short period, we will be able to set up the measurement equipment once and collect forever.


  • The system employs a cluster of commodity hardware to reliably store massive amounts of data, bounded only by available power and cost. This effectively unbounded storage capacity is a result of a horizontally scalable architecture.


  • The tools we currently use can display telemetry from multiple spacecraft in real time, but they rely on post-processing tools to analyze that data. The proposed system combines the two, providing a visualization tool that displays not just the data but the analysis of the data in real time.


  • The core problems facing real-time analytics in system testing are the same problems facing any big-data analytics process. The I/O speeds required to access large amounts of data demand a distributed approach to data storage and retrieval. The computing speed required to work on large datasets demands a parallelized approach to data processing. The complexity of generating meaningful information from a diverse set of interconnected data points demands a highly intuitive user interface. The system attempts not just to provide tools for mining, organizing, and visualizing large amounts of simultaneous housekeeping telemetry, but to go one step further and provide those tools for the science data as well; a minimal sketch of the partition-and-merge pattern appears below.
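
The sketch below uses Python's standard multiprocessing module as a stand-in for a real distributed framework; the partition count and synthetic samples are illustrative only.

```python
# Sketch of horizontally scaled analytics: partition the data, compute
# partial statistics per worker in parallel, then merge. A real system
# would spread partitions across cluster nodes; multiprocessing merely
# stands in for that architecture on one machine.

from multiprocessing import Pool

def partial_stats(samples):
    # One worker's pass over its partition: count, sum, sum of squares.
    n = len(samples)
    s = sum(samples)
    s2 = sum(x * x for x in samples)
    return n, s, s2

def merge(parts):
    # Partial sums combine exactly, so the merged mean and variance
    # match a single pass over all of the data.
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    s2 = sum(p[2] for p in parts)
    mean = s / n
    return mean, s2 / n - mean * mean

if __name__ == "__main__":
    # Four illustrative partitions of synthetic "telemetry" samples.
    partitions = [[float(i % 100) for i in range(p, 1_000_000, 4)]
                  for p in range(4)]
    with Pool(4) as pool:
        parts = pool.map(partial_stats, partitions)
    mean, var = merge(parts)
    print(f"mean={mean:.2f} var={var:.2f}")
```

Because the partial sums combine exactly, adding nodes scales throughput without changing the answer – the property that lets storage and compute capacity grow with the cluster.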