The addition of automation has greatly extended humans’ capability to accomplish tasks, including difficult, complex, and safety-critical tasks. The majority of Human-Automation Interaction (HAI) results in more efficient and safer operations; however, certain unexpected automation behaviours, or “automation surprises,” can be frustrating and, in certain safety-critical operations (e.g., transportation, spaceflight, medicine), may result in injury or loss of life (Mellor, 1994; Leveson, 1995; FAA, 1995; BASI, 1998; Shaylor, 2000; Sheridan, 2002). The next generation of space exploration systems will place an increased reliance on automation. Traditional techniques for the design and evaluation of automation interfaces rely on subject-matter experts, human-in-the-loop testing (i.e., usability testing), guidelines, heuristics, and rules-of-thumb. Given the volume of new automation required for space exploration and the timeline for its development, the time and cost of performing these evaluations with human factors experts will be prohibitive. Further, guidelines, heuristics, and rules-of-thumb have previously yielded suboptimal designs, as they focus on the interface rather than on the process of interaction between the human and the automation. State-of-the-art cognitive science and HAI approaches may provide the type of analysis needed, but they are not currently usable by designers without extensive cognitive science expertise. The automation design community needs methods that are usable by designers early in the design process to meet the demands for the development and testing of automation required for space exploration. The objective of this research project is to develop a set of methodologies and tools to support the design and evaluation of efficient and robust interaction with automation. The research plan is to integrate existing foundational research results into HAI methods and tools usable by designers.
This work is divided into three areas, with the ultimate goal of developing a suite of tools to support each area. It is important to note that the intent of the project is to develop and evaluate the tools in actual design processes, and the level and type of support and evaluation will depend on the scope and maturity of each design domain. The three areas are organized around an abstraction of the primary foci of the design process.

Analyze: The first set of methods and tools is intended to help designers identify, describe, and evaluate the different parts of the job. Depending on the stage of the design process, these methods are referred to as work domain analyses, task decompositions, task analyses, knowledge elicitation, and, in the later stages, validation.

Formulate: The second set of methods and tools is intended to bridge the gap from the analysis of the work domain to the design of the interface. Specifically, once the structure of the work domain and tasks has been determined, methods and tools are needed to link the task structures to corresponding interface structures that can later be refined and evaluated. We will draw upon research from a number of different communities for this effort, including ecological interface design and design patterns.

Build: The third set of methods and tools is intended to enable rapid development and evaluation of automation, including the user interface and the underlying automation behavior. The specific focus of this effort is to develop methods and tools that are usable by designers who are expert in the design domain but not necessarily formally trained in computer programming or human performance analysis. We will primarily draw on research in formal methods to support the Build effort.

The outcomes of this research will be methods and tools that automate the design and evaluation of automation interfaces.
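To make the formal-methods idea behind the Build effort concrete, the following is a minimal, hypothetical sketch (not a project deliverable) of how an "automation surprise" can be detected by exhaustive state exploration. It models a made-up two-mode autopilot alongside a simplified operator mental model and searches for reachable situations where the two diverge; the mode names, events, and the silent "envelope" reversion are illustrative assumptions, not drawn from any real system.

```python
from collections import deque

# Hypothetical two-mode autopilot: the system silently reverts from HOLD to
# MANUAL on an "envelope" event, while the operator's mental model (which
# never observes "envelope") assumes HOLD persists. Exhaustively exploring
# the joint state space flags every transition where the system mode and
# the operator's expected mode diverge -- a potential "automation surprise."

EVENTS = ["engage", "disengage", "envelope"]

def system_step(mode, event):
    """Actual automation mode logic (illustrative)."""
    if event == "engage":
        return "HOLD"
    if event in ("disengage", "envelope"):  # silent reversion on "envelope"
        return "MANUAL"
    return mode

def operator_step(mode, event):
    """Operator's mental model: does not observe the 'envelope' event."""
    if event == "engage":
        return "HOLD"
    if event == "disengage":
        return "MANUAL"
    return mode

def find_surprises(init=("MANUAL", "MANUAL"), depth=4):
    """Breadth-first search over joint (system, operator) states up to a
    bounded depth; returns (system_mode, operator_mode, event) triples
    whose successor states disagree on the mode."""
    surprises = set()
    seen = {init}
    frontier = deque([init])
    for _ in range(depth):
        next_frontier = deque()
        while frontier:
            sys_m, op_m = frontier.popleft()
            for ev in EVENTS:
                nxt = (system_step(sys_m, ev), operator_step(op_m, ev))
                if nxt[0] != nxt[1]:
                    surprises.add((sys_m, op_m, ev))
                if nxt not in seen:
                    seen.add(nxt)
                    next_frontier.append(nxt)
        frontier = next_frontier
    return surprises

if __name__ == "__main__":
    for sys_m, op_m, ev in sorted(find_surprises()):
        print(f"surprise: from {sys_m}/{op_m}, event {ev!r} causes divergence")
```

In this toy model the search flags the unannunciated "envelope" reversion as the sole source of divergence. Real HAI verification tools apply the same reachability idea to much richer mode-logic and mental-model specifications, typically via model checkers rather than hand-written search.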
These tools will provide the means to: (i) meet the demand for analysis required by the space exploration development timeline, (ii) enable increased iterative human factors testing of automation prototypes early in the design process, (iii) reduce the cost of development through design and testing of proposed systems early in the development life-cycle, (iv) reduce the cost of training and the maintenance of proficiency, and (v) improve safety (and reduce the costs of inefficient and unsafe operations) through a significant reduction in failure-to-complete-task metrics. The 2010 AITD (Automation Interface Design Development) efforts can be summarized in terms of the three efforts: To continue the Analyze method and tool development, the team will identify and analyze a new application domain. To develop and evaluate the Formulate methods and tools, the team will conduct a study to evaluate performance of a modified Scheduling and Planning Interface for Exploration (SPIFe) interface that will be made comparable to the existing interface (e.g., the graphical elements will be made similar for both interfaces) in order to examine the mapping of task structure to interface structure. If funding allows, the group will also help develop a new version of the SPIFe tool that incorporates the functionality needed for the Attitude Determination and Control Officers’ (ADCO) planning tasks.