It would seem, at first glance, that a self-adjusting autonomous system (SAS) that serves only as an advisor requires a lower level of intelligence than an SAS with the authority to execute its decisions, i.e., a truly autonomous system. But bad advice from an SAS will cause humans to lose confidence and trust, and will impair the functioning of the team. Thus, regardless of its autonomy and authority, the SAS must solve the problem correctly, reliably, and on time; and it must provide a rationale for its solution, subject to time constraints. The main prerequisite for solving the problem well is a correct, complete, and tractable problem formulation. This is the key technical challenge in adaptable and resilient artificial intelligence (AI). To the best of the PI's knowledge, contemporary research directions in AI focus on learning algorithms, while the fundamental problem of formulation remains unresolved. The approach rests on a key observation: on average, humans under stress experience physiological and psychological reactions that include tunnel vision, a severe narrowing of the space of variables (actions), solutions, and environmental and other factors that should be at the disposal of the decision maker at every step of the decision-making process. Tunnel vision degrades the performance of even successful time-critical decision-making models, such as Recognition-Primed Decision Making. The key hypothesis of the proposed approach is that successful human decision makers do not succumb to tunnel vision in safety-critical and time-critical situations. To enable this property in all H-M interactions, the proposed SAS will develop and maintain the broadest relevant problem formulation and thereby counteract tunnel vision. If successful, the SAS will also facilitate the handling of unknown unknowns via dynamic development of permitted and prohibited states, another major unresolved problem in contemporary AI.
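The "dynamic development of permitted and prohibited states" could be read as maintaining an explicit, evolving partition of the state space. The sketch below is purely illustrative and is not the proposal's implementation: all names (`StateLedger`, `classify`, `ingest`) are hypothetical, and it assumes a conservative policy in which a never-before-seen state (an "unknown unknown") is flagged rather than silently acted upon.

```python
# Hypothetical sketch: bookkeeping of permitted and prohibited states,
# with unknown states surfaced explicitly instead of being ignored.

class StateLedger:
    """Tracks which observed states are permitted or prohibited."""

    def __init__(self, permitted, prohibited):
        self.permitted = set(permitted)
        self.prohibited = set(prohibited)

    def classify(self, state):
        """Return 'permitted', 'prohibited', or 'unknown'."""
        if state in self.prohibited:
            return "prohibited"
        if state in self.permitted:
            return "permitted"
        return "unknown"

    def ingest(self, state, safe):
        """Fold a newly encountered state into the formulation.

        `safe` stands in for an external verdict (human review or
        model-based analysis); the two sets are kept disjoint.
        """
        if safe:
            self.permitted.add(state)
            self.prohibited.discard(state)
        else:
            self.prohibited.add(state)
            self.permitted.discard(state)
        return self.classify(state)


ledger = StateLedger(permitted={"cruise", "climb"}, prohibited={"stall"})
print(ledger.classify("icing"))           # → unknown (never seen before)
print(ledger.ingest("icing", safe=False)) # → prohibited
print(ledger.classify("icing"))           # → prohibited
```

The point of the sketch is only the interface: unknown states are a first-class outcome of classification, so the broader problem formulation grows as states are ingested rather than being fixed in advance.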
The next step in this research will combine mathematical modeling and optimization approaches with recognition-primed decision making and organic computing. To the best of the PI's knowledge, the proposed approach to formulating the SAS problem and the proposed scheme for ingestion of unknowns have not been attempted elsewhere. While there is extensive research in human-computer interaction, its focus is on representing information and data (in particular, displays) and on teaching machines to mimic human thinking, rather than on mapping between the human and machine decision spaces. The proposed research aims to preserve the best characteristics of human and machine decision making, combining the two seamlessly. Impact: The proposed work directly addresses the technical area of Autonomous Systems. If ultimately successful, the resulting software will play an active role in aviation safety, as well as in other H-M interactions, in both the exploration and aviation domains. The work rests on the assumption that even an advisory SAS must be able to make correct decisions over the entire decision problem. This means the approach is applicable to robotic and other intelligent systems that operate with complete autonomy, which supports exploration at great distances. If successful, the fundamental concept has a high likelihood of adoption into mainstream NASA projects related to autonomous systems, and of external technology transfer.
The goal is to develop a concept and an associated software-enabled mechanism for human-machine (H-M) real-time decision making in time-critical and safety-critical environments. Self-adjusting autonomous systems (SAS) are spreading from well-defined control activities, such as manufacturing, to complex activities with multi-faceted human interactions and decision making, such as piloting an aircraft, because the ability of SAS to solve large problems of certain types far exceeds that of humans: problems with millions of variables are tractable for machines. However, until SAS are proven and perceived to be at least as adaptable as humans, and resilient in the face of unanticipated faults and variable conditions, humans will have to remain in ultimate control of decision making, supported by machine-based information and advice. State-of-the-art H-M interactions have numerous well-documented unresolved difficulties, including lack or excess of trust, both of which can lead to serious problems, especially in time-critical and safety-critical situations, where human decision makers quickly become overwhelmed with information. In these situations, humans either become reluctant to take advice from machines or lose situational awareness and basic skills through overreliance on machines. The proposed concept aims to address this gap.
Organizations Performing Work

Organization | Role | Type | Location
---|---|---|---
Langley Research Center (LaRC) | Lead Organization | NASA Center | Hampton, Virginia
Ames Research Center (ARC) | Supporting Organization | NASA Center | Moffett Field, California
Tufts University | Supporting Organization | Academia | Medford, Massachusetts