Growing food in space will not only allow us to extend the length of future missions, but also significantly increase astronauts' well-being. The proposed research focuses on the fundamental sensing and manipulation challenges of automating parts of the operations of in-space greenhouses in order to facilitate tele-operation. Specifically, we are investigating machine learning techniques to extract the growth stage of plants from a combination of volumetric, color, and infrared data, as well as novel algorithms for manipulating flexible structures using two arms, much like a human gardener does when picking a fruit.
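As an illustration of how color and infrared data can be fused into a simple plant-state feature, the sketch below computes the well-known Normalized Difference Vegetation Index (NDVI) over a segmented plant region. The reflectance values, image size, and segmentation mask here are all simulated placeholders, not project data; the actual growth-stage models under investigation are machine-learned and considerably richer.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: a standard proxy for
    plant vigor computed from near-infrared and red reflectance."""
    return (nir - red) / (nir + red + eps)

# Simulated reflectance images and a placeholder plant segmentation mask.
rng = np.random.default_rng(0)
red = rng.uniform(0.1, 0.3, size=(64, 64))   # simulated red channel
nir = rng.uniform(0.5, 0.8, size=(64, 64))   # simulated near-infrared channel
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                    # pretend plant region

# One scalar feature per plant: mean NDVI over the segmented region.
feature = float(ndvi(nir, red)[mask].mean())
print(feature)
```

In practice such hand-crafted indices would only be one input among the volumetric, color, and infrared cues a learned growth-stage classifier consumes.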
| Organizations Performing Work | Role | Type | Location |
| --- | --- | --- | --- |
| University of Colorado Boulder | Lead Organization | Academia | Boulder, Colorado |
| Kennedy Space Center (KSC) | Supporting Organization | NASA Center | Kennedy Space Center, Florida |
The objective of this research is to address the fundamental perception and manipulation challenges in autonomous food production, that is, automating the growth of plants using robotics. Plants are highly unstructured objects that vary drastically in size, shape, and appearance over the course of their lifetime. While the fields of precision agriculture and greenhouse automation offer a wealth of specialized hardware for dealing with specific plants, this research aims at overcoming the challenges that general-purpose robotic manipulators face. In particular, it addresses three thrusts: (1) perception of objects using 3D depth sensors, (2) modeling and prediction of flexible objects, and (3) integration of perception and manipulation on a two-arm manipulation platform.
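The first thrust, perception with 3D depth sensors, rests on a standard geometric step: back-projecting a depth image into a point cloud using the pinhole camera model. The sketch below shows that step in isolation; the camera intrinsics and the toy depth image are illustrative values, not parameters of the actual sensing hardware used in the project.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an (N, 3) point
    cloud using the standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Keep only pixels with a valid (positive) depth reading.
    valid = z > 0
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Toy 2x2 depth image, 1 m everywhere, with placeholder intrinsics.
depth = np.ones((2, 2))
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```

Downstream, clouds like this would be registered across views and segmented into plant structures before any manipulation planning takes place.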
The most significant technical achievements of this Early Career Faculty fellowship are (1) fundamental contributions to the manipulation of flexible objects, (2) path planning that takes dynamics into account, and (3) robotic system engineering that tightly integrates 3D perception and motion planning. Initially focusing on experimental work involving real plants, we identified a series of fundamental challenges that stand in the way of general robotic autonomy, including manipulation of flexible objects [3,4], non-visual tactile sensing [1,6,7,8], motion planning [2,9,10], and robotic software engineering and programming [11,13]. These results have broadly contributed to the state of the art in robotics research, as reflected by a series of publications in top robotics conferences (ISER, ISRR, ICRA, IROS) and high-impact journals. Many of our results are directly applicable to current space systems, including Robonaut [2,5,11,13], and to dynamical systems in general [9,10], whereas others might create the foundation for next-generation autonomous systems [1,3,4,6,7,8] that tightly integrate sensing and actuation into the materials they are made of and are agile and flexible.