Imaging technologies are key to the diagnosis and treatment of medical conditions that astronauts on an exploration mission might encounter, and to research activities that characterize the adaptations to micro- and partial-gravity environments. Key limitations of the currently available imaging capabilities include the complexity and operator dependency of the procedures required to acquire high-quality results. Currently, two imaging procedures critical to space medicine and research, ultrasound and optical coherence tomography (OCT), are performed on the International Space Station (ISS) by astronauts with the assistance of real-time communication with experts on the ground, a method called remote guidance. With this methodology, astronauts have acquired diagnostic- and research-quality data that are central to our understanding of the physiological consequences of weightlessness, including altered cardiac function, muscle atrophy, and spaceflight-associated neuro-ocular syndrome (SANS). However, with the communication delays inherent in exploration missions far from Earth, where one-way transmissions may take 10 minutes or more, remote guidance will no longer be practical. To fill this gap, we developed advanced audio-visual training modules, a form of just-in-time (JIT) training, for acquiring medically necessary and research-relevant images. Building on our extensive experience with remote guidance of ISS astronauts and demonstrated success with just-in-time training, we employed an augmented reality (AR) system (Microsoft HoloLens) that included three-dimensional graphics of relevant anatomy, step-by-step audio instruction, reference images demonstrating adequate and inadequate quality, and troubleshooting guides.
The ability of untrained subject-operators to acquire high-quality ultrasound and OCT images was evaluated by expert reviewers, and the AR approach was compared with the current laptop-based JIT technique for image quality and time efficiency.
Tutorials to acquire ultrasound and OCT images were adapted for use as a PowerPoint presentation viewed either on a laptop computer or on an augmented reality platform with a heads-up display. Instructional material presented by the two training modalities was identical, except that the augmented reality tutorial provided additional spatial guidance for completing the scanning protocols. This additional guidance included arrows superimposed on the ultrasound and OCT keyboards when specific controls were needed, and virtual green dots superimposed on the body over the approximate locations of the ultrasound targets to guide probe placement. Twenty subjects attempted to acquire ultrasound and OCT images using the tutorials viewed on a laptop computer, and twenty subjects attempted the same diagnostic imaging procedures with instructions viewed using the AR system. No subject had prior experience performing these procedures or using this imaging hardware, and no subject participated in both groups. Subjects used guidance from the tutorials to acquire five ultrasound targets that constitute a subset of images in a trauma-induced injury assessment protocol (images of the heart, lungs, liver, kidney, and spleen) and a single thirteen-line, vertical raster scan centered on the macula of the left eye using OCT. Time limits were imposed upon the subjects for acquiring each of the ultrasound and OCT targets. This is not dissimilar to the environment on the ISS, or that anticipated on future exploration missions, in which communication windows or mission objectives can limit the time available to complete imaging procedures.
After the data collection session, subjects completed a survey to capture their thoughts about the equipment, procedures, instructional material, and future improvements. Images were stored by the subjects and evaluated offline in a blinded fashion by imaging professionals. Four radiologists from Vanderbilt University with training in ultrasound analyses evaluated the ultrasound images. Four trained evaluators at the Doheny Image Reading Center (University of California, Los Angeles), with previous experience using the grading techniques utilized in this study, evaluated the OCT images. Somers' D was used to assess the effect of training modality (laptop versus HoloLens) on the resulting ordinal image-quality scores.
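As a sketch of the ordinal-association statistic named above, Somers' D of score given modality can be computed by counting concordant and discordant pairs. The function name and the example data below are our own illustrative assumptions, not the study's data or analysis code:

```python
from itertools import combinations

def somers_d_yx(x, y):
    """Somers' D of y given x (D_YX): excess of concordant over discordant
    pairs, divided by the number of pairs not tied on x."""
    conc = disc = untied_x = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        if xi == xj:
            continue  # pair tied on the independent variable: excluded entirely
        untied_x += 1
        s = (xi - xj) * (yi - yj)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
        # pairs tied on y only still count in the denominator
    return (conc - disc) / untied_x

# Hypothetical data: modality coded 0 = laptop, 1 = HoloLens;
# scores on an ordinal 1-4 image-quality scale (not the study's data).
modality = [0] * 10 + [1] * 10
scores = [4, 3, 4, 2, 3, 4, 4, 3, 2, 4,
          3, 2, 3, 3, 2, 4, 3, 2, 3, 3]
d = somers_d_yx(modality, scores)
print(f"Somers' D (score | modality) = {d:.3f}")
```

Because Somers' D is asymmetric, taking modality as the independent variable yields a value in [-1, 1], with negative values indicating lower scores in the group coded 1.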
Results and Discussion
Nearly 70% of the ultrasound images and 53% of the OCT scans acquired by these previously untrained subjects were considered diagnostically adequate. The quality scores for the OCT images and for 3 of the 5 ultrasound targets did not differ between groups, but the laptop group performed better on the other 2 ultrasound targets. The time to acquire ultrasound images was, on average, shorter for the AR group (52 versus 57 minutes). Interestingly, the survey results indicated that the AR group did not find the instructional material as helpful as the laptop group did (p = 0.042), even though the content was identical for both. While it is evident that both the augmented reality and laptop groups struggled with the complexity of both ultrasound and OCT, a subset of subjects appeared to embrace the technological challenge and performed well. This might be related to education level, learning style, or comfort with technology, but these factors were not specifically evaluated. While some subjects embraced the technology, others were overwhelmed by the task of familiarizing themselves with a new technology (i.e., AR) while concurrently attempting complex, operator-dependent imaging that was also new to them. The inclusion of subjects not necessarily representative of the astronaut corps is a limitation of the study. Additionally, the subjects' greater familiarity with laptop computers and PowerPoint may have biased results in favor of the laptop group.