Gesture commands allow a human operator to interact directly with a robot without intermediary hand controllers. There are two main types of hand gesture interfaces: data glove-based devices and computer vision techniques. Data glove-based devices are worn on the hand and capture its movements through embedded sensors. Computer vision techniques interpret hand movements from camera video feeds. We will assess the feasibility of both approaches when the person commanding a robot is wearing EVA gloves, because EVA gloves can restrict hand movement and degrade gesture recognition accuracy and speed. We plan to program a small robot to accept inputs both from a data glove inserted into an EVA glove and from computer vision software that can recognize gestures when the user wears an EVA glove. We already have a robot, a data glove, and computer vision software, so the effort will focus on integrating these elements for the assessment. If the assessment shows an advantage of one technique over the other for commanding a robot while wearing EVA gloves, in-depth studies can be proposed to refine and evaluate that technique. Gesture commanding can then be applied and evaluated with NASA robot systems. This input modality could improve the way crewmembers interact with robots during EVA.