To capitalize on developments in voice and gesture control, we must identify a framework for developing reliable control interfaces before these technologies will be taken seriously for application in spaceflight or commercial industry. Previous projects have successfully generated prototype interfaces for concept demonstrations, but they fall short of fully characterizing the available technologies and of systematically identifying the intuitive voice/gesture 'vocabularies' and design principles that flight applications will require. This project will build on that earlier work and establish a foundation (vocabularies, design principles, a technology baseline, and a framework) from which serious integrated solutions to human interface needs can be built. Our goal is to develop a set of prototype interfaces, each targeting a specific application, that lead to concrete voice and gesture solutions.
Natural User Interface (NUI) is a term used to describe a range of technologies such as speech recognition, multi-touch, and kinetic interfaces. Gesture and voice control are two promising computer input modalities for the NUI. Some consider NUI the next step forward from the traditional graphical user interface (GUI), which relies on a mouse and keyboard as the primary means of input. The goal of NUI is to develop interfaces that do not have a steep learning curve and whose interactions feel "natural" and intuitive to the user. One disadvantage of current GUIs is that they require the user to be physically near an input device in order to interact with the system. With an NUI, by contrast, voice and gesture commands allow the user to interact with the system from anywhere within the defined work environment. Such "Interface Anywhere" gesture capability is achievable using infrared technology, and the voice recognition portion is achievable through array microphones and the Kinect software.

Our NUI system of choice for this proof of concept is the Microsoft Kinect sensor. The heart of the system, and what makes gesture recognition possible, is the Kinect's skeleton-tracking capability. The Kinect contains an infrared projector and receiver, a standard RGB camera, and an array of four microphones. The system tracks multiple users in x-, y-, and z-space and, based on the depth information, generates a skeleton from joint positions.
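To make the skeleton-tracking concept concrete, the following is a minimal sketch of how a single gesture, such as a right-hand "swipe," might be detected from the per-frame joint positions that skeleton-tracking middleware like the Kinect SDK provides. The `Joint` and `SkeletonFrame` structures, the `detect_swipe_right` function, and the threshold values are illustrative assumptions for this sketch, not the project's actual gesture vocabulary or API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Joint:
    """A single tracked joint position in meters (Kinect-style x, y, z camera space)."""
    x: float
    y: float
    z: float


@dataclass
class SkeletonFrame:
    """One frame of skeleton data; only the joints needed for this example."""
    right_hand: Joint
    right_shoulder: Joint


def detect_swipe_right(frames: List[SkeletonFrame],
                       min_travel_m: float = 0.35,
                       max_vertical_drift_m: float = 0.15) -> bool:
    """Return True if the right hand moved far enough to the right, roughly level,
    relative to the right shoulder, over the buffered frames.

    The thresholds (0.35 m of travel, 0.15 m of vertical drift) are placeholder
    values that would need tuning against real sensor data.
    """
    if len(frames) < 2:
        return False
    # Work in shoulder-relative coordinates so whole-body motion does not
    # register as a gesture.
    xs = [f.right_hand.x - f.right_shoulder.x for f in frames]
    ys = [f.right_hand.y - f.right_shoulder.y for f in frames]
    travel = xs[-1] - xs[0]
    vertical_drift = max(ys) - min(ys)
    return travel >= min_travel_m and vertical_drift <= max_vertical_drift_m
```

In a real interface this check would run over a short sliding window of frames at the sensor's frame rate, and each recognized gesture would map to an entry in the systematically developed gesture vocabulary described above.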
There are a number of benefits provided by these types of motion tracking technologies, not least of which is the ability to interface with a computer without the use of traditional input devices (e.g., mouse and touchpad). The use of gesture and voice commanding has the potential to increase efficiency and decrease workload. Currently, when crewmembers perform procedures they must repeatedly stop what they are doing and move to a computer in order to advance to the next step in the procedure. Gesture and/or voice commanding would allow completely hands-free operation.
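As a hedged illustration of how hands-free procedure navigation could work, the sketch below maps recognized spoken phrases (or equivalent gestures) to navigation actions in a procedure viewer. The `ProcedureViewer` class and the command phrases are assumptions made for demonstration; the actual command set would come from the vocabulary development this project proposes.

```python
from typing import List


class ProcedureViewer:
    """Tracks the crewmember's place in a procedure and responds to
    hands-free commands instead of mouse/keyboard input."""

    def __init__(self, steps: List[str]):
        self.steps = steps
        self.index = 0

    def current_step(self) -> str:
        return f"Step {self.index + 1}: {self.steps[self.index]}"

    def handle_command(self, phrase: str) -> str:
        """Dispatch a recognized phrase (e.g., from speech recognition or a
        gesture classifier) to a navigation action."""
        phrase = phrase.strip().lower()
        if phrase == "next step" and self.index < len(self.steps) - 1:
            self.index += 1
        elif phrase == "previous step" and self.index > 0:
            self.index -= 1
        # An unrecognized phrase (or "repeat step") simply re-reads the
        # current step rather than changing state.
        return self.current_step()


# Example usage with a toy maintenance procedure:
viewer = ProcedureViewer(["Open panel A3", "Disconnect coolant line", "Replace filter"])
print(viewer.handle_command("next step"))      # Step 2: Disconnect coolant line
print(viewer.handle_command("previous step"))  # Step 1: Open panel A3
```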
Another area in which motion tracking would be useful is robotic teleoperation. In this case, the crewmember moves naturally within the confines of the vehicle and/or habitat while the robot mimics those motions. Additionally, motion tracking could be combined with augmented reality for training in mission operations, such as maintenance or medical procedures: the crewmember would follow along with a video, and their movements would be overlaid on it in order to practice the operation.
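As one way the teleoperation idea could be realized, the sketch below converts a tracked hand position (taken relative to the shoulder) into a commanded end-effector target for a robot, scaled and clamped to the robot's workspace. The `Vec3` type, the scaling factor, and the workspace limits are invented for this example; an actual implementation would depend on the specific robot, its control interface, and safety constraints.

```python
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float


# Assumed robot workspace limits in meters; a real system would take these
# from the robot's specification and add safety interlocks.
WORKSPACE_MIN = Vec3(-0.5, 0.0, 0.2)
WORKSPACE_MAX = Vec3(0.5, 1.0, 1.2)
SCALE = 1.5  # amplify small operator motions into larger robot motions


def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))


def hand_to_end_effector(hand: Vec3, shoulder: Vec3) -> Vec3:
    """Map the operator's shoulder-relative hand position to a robot
    end-effector target inside the allowed workspace."""
    rel = Vec3(hand.x - shoulder.x, hand.y - shoulder.y, hand.z - shoulder.z)
    return Vec3(
        clamp(rel.x * SCALE, WORKSPACE_MIN.x, WORKSPACE_MAX.x),
        clamp(rel.y * SCALE, WORKSPACE_MIN.y, WORKSPACE_MAX.y),
        clamp(rel.z * SCALE, WORKSPACE_MIN.z, WORKSPACE_MAX.z),
    )
```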
| Organizations Performing Work | Role | Type | Location |
|---|---|---|---|
| NASA Johnson Space Center | Lead Organization | NASA Center | Houston, Texas |
| KBRwyle, Inc. | Supporting Organization | Industry | Houston, Texas |
| Start TRL | Current TRL | Estimated End TRL |
|---|---|---|
| 3 | 3 | 4 |