Center Innovation Fund: JSC CIF

Real-time Stitched Video for Contextual Path Planning

Completed Technology Project

Project Description

The crew and flight controllers who operate the Space Station Remote Manipulator System (SSRMS) currently rely on multiple camera views to confirm or infer clearance between the SSRMS and the surrounding structural environment. The same is true for other remotely operated equipment, such as surgical instruments, submersible vehicles, and telerobotic systems. This project explores the operational efficiencies gained by surrounding an object (such as the SSRMS boom or a Mars rover) with an array of cameras pointed along the object's long axis or direction of travel. The project will design and build a state-of-the-art situational awareness tool for the operators of exploration vehicles. An array of cameras placed around the boundary of the vehicle will feed video to a vehicle-embedded processor, which will synthesize the scene data into a single composite view presented to the operator. The composite view will encompass 360° around the vehicle from a bird's-eye perspective. Robotics platforms used to benchmark this technology include the Dextrous Manipulator Trainer (DMT), a model SSRMS, and the Multi-Mission Space Exploration Vehicle (MMSEV).

Initial Innovation Charge Account (ICA) investigation: Preventing collisions is the first priority for safe operation of the SSRMS. This depends on the ability of the crew and flight controllers to verify that enough clearance exists between the SSRMS, its payload, and the surrounding structure. In the plan, train, and fly stages of each mission, significant time is spent developing, documenting, and executing a camera plan that allows each portion of the SSRMS trajectory to be monitored. This time could be decreased, and operational situational awareness increased, by using an array of cameras mounted around a boom of the SSRMS and pointed along the boom. The output of these cameras could be stitched together into a single composite view that provides clearance monitoring 360° around the boom. Further, this technology could be used in any application where it is desirable to see proximity on two or more sides of an object, such as surgery, telerobotics, and deep-sea exploration. This investigation asks operators (crew and flight controllers) to compare clearance monitoring of a sample trajectory using conventional external camera sources versus a stitched video presentation from a camera array. A test plan, script, and scoring method are used to determine whether stitched camera arrays lend themselves to clearance monitoring. The project investigator researched the hardware and software required to perform video stitching in order to identify an approach suitable for operator evaluation in the ICA project.

Initial ICA results: A cadre of robotics professionals from JSC Robotics Operations and the Astronaut Office participated in a benchmarking effort to quantify efficiency and safety metrics with and without the use of a stitched camera array. A modified Cooper-Harper scale was used to determine operator workload. Other metrics included the time required to perform the task, the number of times motion was stopped due to a lack of clearance views, and whether contact was made with external structure. Results showed reduced operator workload, faster completion of the task, and reduced contact with external structure. Additionally, the technology was presented to the JSC community at Innovation Day 2012, where it won the People's Choice Award.
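For illustration only, the perspective shift at the core of the stitched view can be sketched as follows: if the region around the boom or vehicle is treated as an approximately planar surface, each camera image can be remapped into a common top-down frame with a planar homography. The sketch below uses Python and OpenCV rather than the project's own software, and the point correspondences and file names are placeholders.

    import cv2
    import numpy as np

    # Four hypothetical correspondences: points on the (assumed planar) surface
    # as seen in one camera image, and where they should land in the top-down canvas.
    src_pts = np.float32([[420, 710], [860, 705], [955, 1020], [330, 1030]])
    dst_pts = np.float32([[300, 300], [500, 300], [500, 500], [300, 500]])

    # 3x3 homography mapping this camera's pixels into the composite frame.
    H = cv2.getPerspectiveTransform(src_pts, dst_pts)

    frame = cv2.imread("camera_01.png")                     # placeholder file name
    birdseye = cv2.warpPerspective(frame, H, (1200, 1200))  # dsize = (width, height)
    cv2.imwrite("camera_01_topdown.png", birdseye)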
Second Phase: Rearranging image pixels from multiple cameras to accomplish a perspective shift is computationally expensive. In the last decade, advances in CPU performance and direct-to-memory image capture methods have improved the frame rate and latency associated with video stitching. In the previous phase (FY '12 ICA, People's Choice winner), the collaborator achieved 10 frames per second with less than a second of latency using off-the-shelf CPU and camera hardware. The purpose of Phase 2 is to demonstrate the technology on a larger vehicle, the Multi-Mission Space Exploration Vehicle (MMSEV), using a high-bandwidth Gigabit Ethernet (GigE) network, increased CPU/GPU resources, and high-performance cameras.

Second Phase results: Ten video cameras (the minimum required to obtain coverage around the vehicle while providing enough image overlap) were placed around the upper surface of the MMSEV. The video streams were piped to an on-board high-end PC, where software written in MATLAB performed the perspective shifts and homographic alignment. The resulting single view was displayed in a graphical user interface (GUI) that allowed the operator to see the composite bird's-eye view or zoom in on the view from a particular camera when clearance was a concern. The MMSEV was maneuvered around the simulated Martian landscape at JSC known as the Rock Pile. To date, the maximum achieved frame rate is 2 frames per second. To increase the frame rate, current efforts are focused on transferring the homographic algorithms to a Xilinx field-programmable gate array (FPGA).
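To make the alignment step concrete, the sketch below shows one common way to composite several pre-warped camera views into a single surround view: warp each frame with its own homography onto a shared canvas and average the pixels where views overlap. It is written in Python with OpenCV purely as an illustration; the Phase 2 software itself was written in MATLAB, and the function and variable names here are assumptions.

    import cv2
    import numpy as np

    def composite_surround_view(frames, homographies, canvas_size=(1200, 1200)):
        """Warp each camera frame onto a shared top-down canvas and average overlaps.

        frames        -- list of BGR images, one per camera
        homographies  -- list of 3x3 arrays mapping each camera into the canvas
        canvas_size   -- (width, height) of the composite view
        """
        w, h = canvas_size
        acc = np.zeros((h, w, 3), np.float32)      # sum of warped pixel values
        weight = np.zeros((h, w, 1), np.float32)   # how many cameras cover each pixel
        for frame, H in zip(frames, homographies):
            warped = cv2.warpPerspective(frame, H, (w, h)).astype(np.float32)
            mask = cv2.warpPerspective(np.ones(frame.shape[:2], np.float32), H, (w, h))
            acc += warped * mask[..., None]
            weight += mask[..., None]
        # Avoid dividing by zero where no camera covers the canvas.
        return (acc / np.maximum(weight, 1e-6)).astype(np.uint8)

In practice, each camera's homography would typically come from a one-time calibration, after which only the warp and blend steps run in the real-time loop; that per-pixel warp is the portion of the processing the project reports moving to an FPGA to raise the frame rate.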

Anticipated Benefits

Project Library

Primary U.S. Work Locations and Key Partners

Technology Transitions
