Engineers and astronauts on NASA missions must frequently follow lengthy and complex sets of directions to perform the tasks required of them. Missing or improperly executing even a single step of these directions is often costly and can be life-threatening. Consider the following scenarios.

Scenario 1: An astronaut forgets a step of a safety-critical procedure, such as donning a spacesuit for an extravehicular activity. Because no reminder is given, the error goes unnoticed. The astronaut is forced to re-enter the station early and is consequently unable to perform the day's experiments.

Scenario 2: On space missions, astronauts frequently must perform maintenance, monitoring, or repair tasks outside their space vehicle. These tasks are often extremely complex; consequently, astronauts usually must be guided in their work by ground control personnel. Several issues arise from communication delays and from the requirement of constant communication with ground control.

Scenario 3: The training each astronaut undergoes to prepare for a mission is extensive. The massive amount of information astronauts must absorb, coupled with the long duration of their training, means that they simply cannot recall everything they have learned. Because an onboard system for training and task review may not exist for every task, astronauts may need to schedule time to contact trainers on Earth.

The issues described above can be alleviated with a system composed of two components. The basis of the system is a method for tracking the current status of an astronaut's task. Operating concurrently with this task-tracking component is a context-specific feedback component. In Scenario 1, the feedback component would alert the astronaut to differences between the current task state and the expected task state, preventing dangerous safety errors while donning the suit.
In Scenario 2, the feedback component would similarly perform error detection during the maintenance task, but it would also provide instructions for the next steps in a manner that minimizes the error rate. Lastly, in Scenario 3, the component would guide the astronaut through the task while providing error checking, but it would group and phrase its instructions in a manner that maximizes recall.

The outcome of this research will be a set of principles for computationally representing tasks, heuristics for monitoring task progress and visually detecting errors, and specific feedback and instructional strategies designed to either minimize errors or maximize recall. These principles, heuristics, and strategies will be evaluated in lab studies in the context of NASA missions. This research will draw from a variety of fields, including machine learning, computer vision, natural language processing, cognitive psychology, and human-computer interaction, and it will directly contribute to reducing resource use and improving safety on NASA missions.

The proposed research has potential benefits for two NASA groups: the Human-Systems Integration Division (crew training) and the Intelligent Systems Division (planning and scheduling). The project addresses the problem of heavy reliance on visual displays described in STR 4.4.1 (Multi-Modal Human-Systems Interaction) by using speech as a primary input and output modality. It also addresses issues described in STR 11.2.3 (Human-System Performance Modeling) by alleviating the need for ground control contact during maintenance tasks and by offering innovative modes of human-automation interaction.
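To make the two-component architecture concrete, the task-tracking and feedback loop can be sketched as a minimal prototype. This is an illustrative assumption only, not the proposed system: the class names, procedure steps, and string-based step matching below are placeholders, and a real system would infer step completion from sensing (e.g., computer vision or speech) rather than from explicit calls.

```python
# Sketch of the two components described above: a task tracker that
# records which procedure steps have been observed, and a feedback
# component that compares the observed state against the expected
# procedure and alerts on deviations. All names are hypothetical.

class TaskTracker:
    """Tracks progress through an ordered procedure."""

    def __init__(self, steps):
        self.steps = list(steps)   # expected order of steps
        self.completed = []        # steps observed as done

    def observe(self, step):
        """Record that a step was observed as completed."""
        self.completed.append(step)

    def expected_next(self):
        """Return the next step the procedure expects, or None if done."""
        for step in self.steps:
            if step not in self.completed:
                return step
        return None


def feedback(tracker, observed_step):
    """Context-specific feedback: alert on a skipped step, else confirm
    the step and announce the next one."""
    expected = tracker.expected_next()
    if observed_step != expected:
        return f"Alert: expected '{expected}' before '{observed_step}'"
    tracker.observe(observed_step)
    return f"OK: '{observed_step}' complete. Next: {tracker.expected_next()}"


# Illustrative suit-donning procedure (steps are invented for the sketch).
suit_donning = TaskTracker(["check seals", "attach gloves", "pressurize"])
print(feedback(suit_donning, "check seals"))
print(feedback(suit_donning, "pressurize"))  # "attach gloves" was skipped
```

In this sketch, the tracker holds the expected task state and the feedback function realizes the Scenario 1 behavior: when the observed step diverges from the expected one, it raises an alert instead of letting the omission pass unnoticed.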