Center Innovation Fund: JSC CIF

Interface Anywhere

Completed Technology Project

Project Description

Current paradigms for crew interfaces to systems requiring control are constrained by decades-old technologies that require the crew to be physically near an input device in order to interact with the system. Often this means interrupting a task to move to a specific worksite that has a keyboard and/or the device to be operated. This project demonstrated that it is both possible and practical to use a combination of voice and gesture to control systems while greatly expanding the range of locations from which the user can operate them. Evaluation feedback from engineering peers echoed our views on the possibilities of these control modalities and, more importantly, offered critical feedback on which voice commands and gestures they perceived as intuitive while emphasizing the need for improved reliability. Ultimately, the project established a basis for identifying areas of further investigation needed to design a natural user interface that offers inherently reliable system control, and it stimulated interest and participation from multiple academic institutions.

To illustrate the viability of this technology, a prototype Natural User Interface (NUI) was developed as a proof of concept for system control. Gesture and voice controls were developed and compared. Both input methods allowed the user to work from a general area rather than a specific location, and voice input highlighted the ability to achieve truly hands-free interaction. The prototype provided control of a simple lighting system and a cabin temperature control system, representing typical space habitat systems requiring interaction. The user interacted with the system through a Microsoft Kinect sensor, which includes an infrared light source and receiver and an array microphone. Application code was developed around the skeleton tracking and speech recognition capabilities of the software development kit to allow gestures and speech to serve as input. To let the user monitor system response and 'see' the results of their commands, a simple set of graphical user interfaces (GUIs) was developed that included the logical equivalent of commands and telemetry feedback. In addition, an end-effector unit was built around an Arduino microcontroller that received serial inputs from the application software to drive light-emitting diodes, representing habitat lighting, and a servo motor, representing an air mixing valve that might support a temperature control setpoint.

To interact with the system, the user began at a main display from which they could navigate to a lighting controls display and a thermal control display. The lighting controls consisted of on/off buttons for three areas of the simulated habitat (lab, crew quarters, airlock). Using voice commanding, the user could control the lights in each habitation area (e.g., "lab off" would select the off button). Gesture control employed both arms: once the participant told the system to track their arm, a focus box moved over the lighting controls under control of the left hand, and a wave of the right hand activated the button beneath the focus box. On the thermal control screen, a slider was shown with setpoint values ranging from 62 to 82 degrees. Using simple voice commands such as "temp 70", the setpoint indicator would move to that temperature.
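To make the command handling concrete, the following sketch illustrates one way application software could map recognized phrases such as "lab off" or "temp 70" to serial messages for the end-effector unit. It is a minimal C++ illustration only: the phrase vocabulary and setpoint range come from this record, but the function name, message format, and validation logic are assumptions rather than the project's actual implementation, which used the speech recognition capability of the Kinect software development kit.

// Illustrative sketch only: maps recognized voice phrases such as "lab off"
// or "temp 70" to serial messages for the end-effector unit. The phrase set
// and setpoint range follow the project record; the message format is assumed.
#include <cctype>
#include <iostream>
#include <sstream>
#include <string>

// Translate a recognized phrase into a serial message for the end effector.
// Returns an empty string if the phrase is not part of the command vocabulary.
std::string phraseToSerialCommand(const std::string& phrase) {
    std::istringstream words(phrase);
    std::string target, value;
    words >> target >> value;

    // Handle the two-word area name "crew quarters".
    if (target == "crew" && value == "quarters") {
        target = "crew quarters";
        words >> value;
    }

    // Lighting commands: "<area> on|off" for the lab, crew quarters, and airlock.
    if (target == "lab" || target == "crew quarters" || target == "airlock") {
        if (value == "on" || value == "off") {
            return "LIGHT " + target + " " + value;   // e.g., "LIGHT lab off"
        }
    }

    // Thermal command: "temp <setpoint>" with setpoints between 62 and 82 degrees.
    if (target == "temp" && !value.empty() && std::isdigit(static_cast<unsigned char>(value[0]))) {
        int setpoint = std::stoi(value);
        if (setpoint >= 62 && setpoint <= 82) {
            return "TEMP " + value;                   // e.g., "TEMP 70"
        }
    }
    return "";  // unrecognized or out-of-range phrase: take no action
}

int main() {
    // Example phrases drawn from the project description.
    for (const char* phrase : {"lab off", "airlock on", "temp 70", "temp 99"}) {
        std::cout << phrase << " -> " << phraseToSerialCommand(phrase) << "\n";
    }
    return 0;
}

In the prototype, the same recognized phrases also drove the corresponding GUI controls, so the user could see the result of each command alongside the end-effector response.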
When the participant told the system to track the hand, the slider moved in parallel with the right hand; to hold a specific temperature, the participant simply told the system to stop tracking. The appropriate lights on the end-effector unit turned on or off consistent with the GUI indications, and the servo moved to a position in its range of motion scaled to match the commanded temperature setpoint (see the sketch at the end of this description).

Development, test, and evaluation of this small system emphasized the importance of designing it so that the actions the system takes are consistent with the intent of the user, and so that the application software contains the logic needed to correctly interpret the sensed inputs. This is particularly challenging while also keeping the gestures and voice commands natural and easy to remember and apply. It was also important to allow the user to enable or disable the processing of voice and gesture input at any time during use.

Through this project we identified the need to investigate and develop gesture and voice 'vocabularies' that users will perceive as natural and easy to remember. We also determined that design approaches using mode control remain relevant but require additional considerations when one or more NUI input modalities are in use. Progressively more complex system development would be the best approach to maturing the design practices required for spacecraft application. Other options exist for NUI input sensing and may be preferable for certain applications; however, this forward work is expected to apply to those alternate sensing options as well.
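The end-effector behavior described above can be sketched in Arduino-style C++ as follows. This is a minimal illustration only: it assumes the same hypothetical serial message format as the earlier sketch, and the pin assignments and 0-180 degree servo travel are assumptions rather than documented project values.

// Illustrative end-effector sketch: three LEDs stand in for habitat lighting and
// a hobby servo stands in for the air mixing valve used for temperature control.
// Serial message format and pin assignments are assumed, not documented values.
#include <Arduino.h>
#include <Servo.h>

const int LAB_LED = 2;
const int CREW_LED = 3;
const int AIRLOCK_LED = 4;
const int SERVO_PIN = 9;

Servo mixingValve;  // represents the air mixing valve

void setup() {
  Serial.begin(9600);
  pinMode(LAB_LED, OUTPUT);
  pinMode(CREW_LED, OUTPUT);
  pinMode(AIRLOCK_LED, OUTPUT);
  mixingValve.attach(SERVO_PIN);
}

void loop() {
  if (Serial.available()) {
    // Read one newline-terminated command from the application software.
    String command = Serial.readStringUntil('\n');
    command.trim();

    if (command.startsWith("LIGHT ")) {
      // Mirror the GUI lighting indications on the corresponding LEDs.
      bool on = command.endsWith(" on");
      if (command.indexOf("lab") >= 0)     digitalWrite(LAB_LED, on ? HIGH : LOW);
      if (command.indexOf("crew") >= 0)    digitalWrite(CREW_LED, on ? HIGH : LOW);
      if (command.indexOf("airlock") >= 0) digitalWrite(AIRLOCK_LED, on ? HIGH : LOW);
    } else if (command.startsWith("TEMP ")) {
      // Scale the 62-82 degree setpoint range across the servo's travel,
      // mirroring the record's description of a scaled valve position.
      int setpoint = constrain(command.substring(5).toInt(), 62, 82);
      mixingValve.write(map(setpoint, 62, 82, 0, 180));
    }
  }
}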

Anticipated Benefits

Project Library

Primary U.S. Work Locations and Key Partners

Technology Transitions



This is a historic project that was completed before the creation of TechPort on October 1, 2012. Available data has been included. This record may contain less data than currently active projects.
