In this reporting period, significant effort went into defining the central problem of Spacecraft Optimization Layout and Volume (SOLV), establishing a layout evaluation methodology, and outlining an overarching model logic map to govern how the SOLV model will progress from development to deployment.

Refinements were made to the Critical Task Volume Database (CTVD) this year, and Revision 2 of the database was released on 11/10/16. Additional volume data were collected in the areas of medical, exercise, extravehicular activity (EVA), body waste, food preparation, and group meet-and-eat tasks.

The team also completed a task attribute analysis that rated each task against the 16 attributes identified by SOLV. Some attributes, such as “Gradient Cuboid,” “Operational Adjacency,” and “Share Functional Equipment,” are handled as formal constraints in the SOLV code; ratings for all other attributes were captured in the Task Attributes spreadsheet. Of the 16 task attributes, the team determined that Privacy, Reconfigurability, and Task Time contributed most significantly to whether a task volume could share space or overlap with another. Based on the normalized ratings of these three attributes for each task, the team created a concurrency table that defines the allowable overlap for each pair of tasks. This table was then incorporated into the SOLV code to drive the overlap constraint during layout generation (a simplified sketch of the computation follows this paragraph). Lastly, the team refined the functional adjacency map initially developed in previous years to determine the task adjacency relationships that help drive the packing layouts.
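The concurrency-table logic can be illustrated with a minimal sketch (Python is used here purely for illustration; the SOLV codes themselves are MATLAB-based). The task names, ratings, the direction of each attribute’s effect, and the min-based pairing rule are hypothetical placeholders, not the project’s actual values or logic.

    # Minimal sketch: combine normalized Privacy, Reconfigurability, and Task
    # Time ratings into a per-task "shareability" score, then set the pairwise
    # allowable overlap to the smaller score of the two tasks involved.
    import numpy as np

    tasks = ["Exercise", "Food Prep", "Medical", "Body Waste"]
    # Columns: [privacy, reconfigurability, task_time]; hypothetical 1-5 ratings.
    ratings = np.array([
        [2.0, 4.0, 5.0],
        [1.0, 3.0, 2.0],
        [4.0, 2.0, 3.0],
        [5.0, 1.0, 2.0],
    ])

    # Normalize each attribute column to [0, 1].
    norm = (ratings - ratings.min(axis=0)) / (ratings.max(axis=0) - ratings.min(axis=0))
    # Assumption: a higher privacy need lowers shareability, so invert that column.
    norm[:, 0] = 1.0 - norm[:, 0]

    share = norm.mean(axis=1)  # per-task shareability score in [0, 1]

    # Allowable overlap for a pair is limited by its less shareable task.
    concurrency = np.minimum.outer(share, share)
    np.fill_diagonal(concurrency, 1.0)

    for i in range(len(tasks)):
        for j in range(i + 1, len(tasks)):
            print(f"{tasks[i]} / {tasks[j]}: allowable overlap = {concurrency[i, j]:.2f}")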
As part of SOLV’s layout evaluation methodology, the team is deploying surveys across the NASA Subject Matter Expert (SME) community to collect expert opinion and judgment on SOLV’s layout evaluation factors and metrics, in order to establish a factor weighting and scoring system and to drive the model logic for evaluating layout performance. Surveys will be conducted in three main phases:

• Factor Priority Surveys;
• Interaction Effects Surveys;
• Manual Layout Evaluation Surveys.

To date, the team has completed the Factor Priority surveys for non-astronaut SMEs via three main sessions and multiple splinter sessions; data collection from astronauts will take place in June 2017. In all, 15 subjects from five SME groups participated in the Factor Priority survey, generating 28 responses. All received responses have been processed, submitted through export control, and sent to the University of North Carolina at Charlotte (UNCC) teammates. Data analysis of the survey results at UNCC is ongoing; initial work indicated that additional post-processing would be required to improve the consistency of the responses and to find patterns within responses deemed “inconsistent.” Upon completion of the Factor Priority survey analysis, the top primary design factors can be identified. These factors will then help scope the next two survey phases, on interaction effects and manual layout evaluation.

During this year, the team also made significant progress on code development for each of the SOLV modules. The final SOLV model must integrate the following modules:

• Gradient Cuboid code – Converts task volume inputs into gradient cuboids and governs how they can interact.
• Overlap Packing code – Generates layouts of the gradient cuboids.
• Layout Evaluation code – Establishes the model weighting system and the model response surface via Canonical Correlation Analysis (CANCORR), and contains hard-coded Data Envelopment Analysis (DEA) and Choquet Integral (CI) functions that establish the model scoring system for layout evaluation.
• Assessment Report – A “scorecard” that provides evaluation results and design information for every volume and layout solution generated by SOLV, enabling the user to compare options and choose the best starting point for his/her design.
• Additional code and scripts that integrate the modules to enable smooth model function from user input to scorecard output.

To date, work is ongoing to refine the Gradient Cuboid code and the Overlap Packing code. The CANCORR, DEA, and CI analyses have begun and will continue to be refined as the team deploys the surveys across the NASA SME community. To integrate the SOLV modules and enable automated feeding of data from one code to the next, the team built an input-output flow that defines the inputs and outputs of every code and the mechanisms for the data flow.

The Gradient Cuboid code provides an interface to pull data from the CTVD and allows the user to design and output a representative volume (gradient cuboid) for each critical task. Key design improvements were made to the Gradient Cuboid code this past year in the areas of data upload, user interface, and export format. For example, the code now loads the CTVD as a native Excel spreadsheet; there is no longer a need to convert the database to .csv format, which can be a messy process. The code also now allows a user to select individual task volume data for a given critical task and to specify the number of cuboids for each critical task to export to the Overlap Packing module. The code can graphically display the representative gradient cuboid, with its allowable overlap drawn as dashed lines for improved visualization, and can export all selected gradient cuboids to the Overlap Packing module as a MATLAB structure vector wrapped in a .mat file.

The Overlap Packing code serves as the layout design mechanism: it generates an initial set of layout designs via mathematical optimization. The latest mathematical optimization formulation for overlap packing was developed around a design use case, and the results were demonstrated at the End-of-Year Review user demo session on 5/3/2017. Key design improvements made this year included improvements to the application of packing constraints and to exporting capabilities. For example, the code can now generate up to eight packing layouts and export their physical data to the Model Weighting and Scoring code for layout evaluation. The code can also generate interactive 3D PDF files for greater user visualization of the packing layouts, which will be very useful when the team administers the Manual Layout Evaluation surveys to the NASA SME community.

The Layout Evaluation module integrates and calibrates the physical data output by the Overlap Packing code with the psychophysical data from the surveys. As data from the surveys continue to come in, the design of the analysis flow has evolved into the following steps (illustrative sketches of the export, packing, and analysis steps described in this section follow the list):

• Survey Analysis: From the pairwise comparisons of 13 design factors in the Analytic Hierarchy Process (AHP) surveys, the team identified the six top design factors believed to have the greatest impact on the “goodness” of a layout.
• CANCORR: This analysis seeks the linear combinations of two sets of variables with the greatest correlation. It uses adjacency distances between tasks as the input variables and the design scores for each factor as the output variables to identify the main drivers of performance.
• DEA: This analysis uses the main driver variables identified by CANCORR to look for the designs with the best efficiency scores. These “frontier” designs are the most efficient: they use the least resource while obtaining high design factor scores.
• CI: This analysis uses the top factors, the most efficient designs, and the pairwise comparisons from the AHP surveys to obtain an overall score for each design on each of the four metrics, taking into account the primary, secondary, and tertiary interactions of the factors. CI results are represented as step plots of score versus fuzzy measure.
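A minimal sketch of the Gradient Cuboid export step described above, using Python stand-ins (pandas and scipy) for the MATLAB implementation. The file, sheet, and column names are hypothetical.

    # Minimal sketch of the export step: gather selected task volumes into one
    # dict per cuboid and save them to a .mat file. scipy writes each dict as a
    # MATLAB struct, approximating the "structure vector wrapped in a .mat
    # file" interface described above.
    import pandas as pd
    from scipy.io import savemat

    # In practice the CTVD would load directly from Excel, e.g.
    # pd.read_excel("CTVD_rev2.xlsx", sheet_name="TaskVolumes"); a tiny
    # stand-in table keeps the sketch self-contained.
    ctvd = pd.DataFrame({
        "Task": ["Exercise", "Medical", "Food Prep"],
        "Width_m": [1.0, 0.9, 1.2],
        "Depth_m": [1.0, 0.8, 0.6],
        "Height_m": [2.0, 1.9, 1.8],
        "AllowableOverlap": [0.3, 0.1, 0.2],
    })

    # User-selected critical tasks (hypothetical selection mechanism).
    selected = ctvd[ctvd["Task"].isin(["Exercise", "Medical"])]

    cuboids = [
        {"task": row.Task,
         "dims": [row.Width_m, row.Depth_m, row.Height_m],
         "allowable_overlap": row.AllowableOverlap}
        for row in selected.itertuples(index=False)
    ]
    savemat("gradient_cuboids.mat", {"cuboids": cuboids})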
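The Overlap Packing step can be illustrated with a toy penalty formulation; SOLV’s actual optimization formulation is not reproduced here, and the cuboid dimensions, module envelope, overlap allowances, and objective are all hypothetical. Random restarts stand in for whatever multi-start scheme the actual code uses to produce its distinct candidate layouts.

    # Toy stand-in for overlap packing: place axis-aligned cuboids, penalizing
    # pairwise overlap beyond each pair's allowance, and keep the best layouts
    # from several random starts.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    dims = np.array([[1.0, 1.0, 2.0], [0.8, 0.8, 1.9], [1.2, 0.6, 1.8]])  # w, d, h
    allow = np.full((3, 3), 0.2)          # allowable pairwise overlap fraction
    envelope = np.array([4.0, 4.0, 2.5])  # module interior (hypothetical)

    def overlap_volume(c1, d1, c2, d2):
        # Overlap of two axis-aligned boxes given centers and dimensions.
        lo = np.maximum(c1 - d1 / 2, c2 - d2 / 2)
        hi = np.minimum(c1 + d1 / 2, c2 + d2 / 2)
        return float(np.prod(np.clip(hi - lo, 0.0, None)))

    def cost(x):
        c = x.reshape(-1, 3)
        penalty = 0.0
        for i in range(len(dims)):
            for j in range(i + 1, len(dims)):
                v = overlap_volume(c[i], dims[i], c[j], dims[j])
                v_allowed = allow[i, j] * min(np.prod(dims[i]), np.prod(dims[j]))
                penalty += max(0.0, v - v_allowed) ** 2
        # Pull cuboids toward the envelope center for compactness (a loose
        # stand-in for SOLV's actual objective and envelope constraints).
        return 100.0 * penalty + float(np.sum((c - envelope / 2) ** 2))

    layouts = []
    for _ in range(8):  # up to eight candidate layouts, as in the report
        x0 = rng.uniform(dims.max() / 2, envelope - dims.max() / 2, size=(3, 3)).ravel()
        res = minimize(cost, x0, method="Nelder-Mead")
        layouts.append((res.fun, res.x.reshape(-1, 3)))

    layouts.sort(key=lambda t: t[0])
    print("best layout centers:\n", np.round(layouts[0][1], 2))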
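The AHP survey-analysis step can be sketched with the standard principal-eigenvector weighting and Saaty consistency check. The matrix below compares only three hypothetical factors; the actual surveys compared 13 design factors.

    # Minimal AHP sketch: the priority vector is the normalized principal
    # eigenvector of the pairwise comparison matrix, with the consistency
    # ratio (CR) as a response-quality check.
    import numpy as np

    factors = ["Privacy", "Accessibility", "Adjacency"]  # placeholder subset
    # A[i, j] = how much more important factor i is than factor j (1-9 scale).
    A = np.array([
        [1.0,   3.0,   5.0],
        [1/3.0, 1.0,   2.0],
        [1/5.0, 1/2.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()  # normalized priority weights

    # Saaty consistency ratio (random index RI = 0.58 for a 3x3 matrix).
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    print(dict(zip(factors, np.round(w, 3))), "CR =", round(ci / 0.58, 3))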
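The CANCORR step is sketched below on random placeholder data using scikit-learn’s CCA; in SOLV, the input variables would be the adjacency distances from the packing layouts and the output variables the per-factor design scores.

    # Minimal CANCORR sketch: find paired linear combinations of X and Y with
    # maximal correlation; large weights in cca.x_weights_ would point to the
    # main driver variables.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 5))                          # adjacency distances
    Y = 0.5 * X[:, :3] + 0.3 * rng.normal(size=(40, 3))   # factor scores

    cca = CCA(n_components=2)
    Xc, Yc = cca.fit_transform(X, Y)
    for k in range(2):
        r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
        print(f"canonical correlation {k + 1}: {r:.3f}")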
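The DEA step can be sketched with the classic output-over-input (CCR) linear program, in which a score of 1.0 marks a frontier design. The inputs and outputs below are random placeholders; in SOLV the outputs would be design factor scores and the inputs the resources a layout consumes.

    # Minimal DEA sketch: for each design (decision-making unit), maximize the
    # weighted outputs with the weighted inputs normalized to 1.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    X = rng.uniform(1, 5, size=(8, 2))   # inputs, e.g., volume and adjacency cost
    Y = rng.uniform(1, 5, size=(8, 3))   # outputs, e.g., design factor scores

    n, m = X.shape
    s = Y.shape[1]
    for o in range(n):
        # Decision variables: output weights u (length s), then input weights v (m).
        c = np.concatenate([-Y[o], np.zeros(m)])                   # maximize u . y_o
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v . x_o = 1
        A_ub = np.hstack([Y, -X])                                  # u . y_j <= v . x_j
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                      bounds=(0, None))
        print(f"design {o}: efficiency = {-res.fun:.3f}")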
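Finally, a minimal sketch of Choquet Integral aggregation, in which a fuzzy measure over factor subsets captures the interaction effects described above. The measure values and scores are illustrative placeholders, not survey-derived.

    # Minimal CI sketch: per-factor scores are integrated against a fuzzy
    # measure over factor subsets, so the credit a factor earns depends on
    # which other factors accompany it.
    factors = ("Privacy", "Accessibility", "Adjacency")
    scores = {"Privacy": 0.7, "Accessibility": 0.4, "Adjacency": 0.9}

    # Fuzzy measure mu: monotone set function with mu(empty) = 0, mu(all) = 1.
    mu = {
        frozenset(): 0.0,
        frozenset({"Privacy"}): 0.35,
        frozenset({"Accessibility"}): 0.25,
        frozenset({"Adjacency"}): 0.30,
        frozenset({"Privacy", "Accessibility"}): 0.65,
        frozenset({"Privacy", "Adjacency"}): 0.75,
        frozenset({"Accessibility", "Adjacency"}): 0.60,
        frozenset(factors): 1.0,
    }

    def choquet(scores, mu):
        # Sort factors by ascending score and integrate each score increment
        # against the measure of the set of factors still at or above it.
        order = sorted(scores, key=scores.get)
        total, prev = 0.0, 0.0
        for i, f in enumerate(order):
            total += (scores[f] - prev) * mu[frozenset(order[i:])]
            prev = scores[f]
        return total

    print("Choquet score:", round(choquet(scores, mu), 3))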
Development of the SOLV Assessment Report format began in the latter part of the past year; much work therefore remains to be done in this area. This report, formatted in Excel, will provide the basis for a heuristic assessment of the overall costs and benefits achieved by a given solution. The report is organized into a User Guide, a Main Scorecard, and Supplemental Information on each layout performance metric and layout solution.

In November 2016, the team met multiple times to update project goals for NASA-STD-7009A credibility scores. The discussion resulted in a reassessment of project credibility levels as defined by 7009A. In March 2017, the baseline Verification and Validation (V&V) Plan was updated; it captures the plans for verification and validation of the model in adherence to NASA-STD-7009A and will serve as the repository of V&V results throughout the SOLV development process. The team also met on 4/12/2017 to formulate verification testing strategies for each SOLV module, discussing how to perform error estimation, how to characterize data uncertainties, and how to test the parameter sensitivities of the model end to end. The resulting plans will be refined and incorporated into SOLV-003.

A formal internal End-of-Year Review was held at Johnson Space Center on May 3, 2017. A preliminary user demo conducted during the review demonstrated the functionality and mechanisms of the modules, as well as the format and structure of SOLV inputs and outputs. Review results were officially documented in the Human Research Program (HRP) quarterly deliverable for May 2017. The review also identified 26 actions and critical next steps for the coming year.

NOTE (5/31/2017): Dr. Sherry Thaxton is no longer Principal Investigator as of February 6, 2017. The new Principal Investigator is Maijinn Chen, M.Arch. (KBRwyle). The project continues with M. Chen as PI; see that project for subsequent reporting.