Though we dream of the day when humans will first walk on Mars, these dreams remain in the distance. For now, we explore vicariously by sending robotic agents like the Curiosity rover in our stead. Though our current robotic systems are extremely capable, they lack perceptual common sense. This capability will be increasingly needed as we create robotic extensions of humanity to reach across the stars, for several reasons. First, robots can go places that humans cannot: even if we manage to get a human on Mars by 2035, as predicted by the current NASA timeline, this will still represent a roughly 60-year lag from the time of the first robotic lander. Second, while it is possible to replace common sense in robots with human teleoperated control to some extent, this becomes infeasible as the distance to the base planet, and the associated radio signal delay, increases. Finally, as we pack more and more sensors on board, the fraction of data that can be sent back to Earth decreases, so data triage (finding the few frames containing a curious object on a planet's surface out of terabytes of data) becomes increasingly important.

In the last few years, research into a class of scalable unsupervised algorithms, also called deep learning algorithms, has blossomed, in part due to state-of-the-art performance in a number of areas. A common thread among many recent deep learning algorithms is that they tend to represent the world in ways similar to how our brains do. For example, thanks to decades of work by neuroscientists, we now know that in the V1 area of the visual cortex, the first region that visual information passes through after the retina, neurons tune themselves to respond to oriented edges and do so in a way that groups them together based on similarity. With this behavior as a goal, researchers set out to devise simple algorithms that reproduce this effect. It turns out that there are several.
One, known as Topographic Independent Component Analysis, has each neuron start with random connections and then look for patterns that are statistically out of the ordinary. When a neuron finds one, it locks onto that pattern, discouraging other neurons from duplicating its findings while simultaneously trying to group itself with other neurons that have learned patterns which are similar, but not identical.

My proposed research plan is to extend existing unsupervised learning algorithms of this type, develop new ones, and apply them to a robotic system. Specifically, I will demonstrate a prototype system capable of (1) learning about itself and its environment and (2) actively carrying out experiments to learn more about both. Research will be kept focused by developing a system aimed at eventual deployment on an unmanned space mission. Key components of the project will include experiments on synthetic data, experiments on data recorded from a real robot, and finally experiments with learning in the loop as the robot explores its environment and learns actively.

The unsupervised algorithms in question are applicable not to a single domain but to creating models for a wide range of applications, so advances are likely to have far-reaching implications for many areas of autonomous space exploration. Tantalizing though this prospect is, it is equally exciting that unsupervised learning is already delivering surprisingly impressive performance today, indicating great promise for near-term use in unmanned space exploration.
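The Topographic ICA behavior described above (decorrelated filters whose responses are pooled with their neighbors so that similar patterns cluster together) can be illustrated with a minimal NumPy sketch. The 1-D ring topography, the toy Laplacian sources, and all parameter values here are illustrative assumptions chosen for brevity, not a definitive implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-Gaussian data: sparse (Laplacian) sources, randomly mixed.
n, d = 4000, 16
X = rng.laplace(size=(n, d)) @ rng.standard_normal((d, d))

# Whiten: TICA is typically run on zero-mean, unit-covariance data.
X -= X.mean(axis=0)
vals, vecs = np.linalg.eigh(X.T @ X / n)
X = X @ vecs @ np.diag(vals ** -0.5) @ vecs.T

# 1-D ring topography: each unit pools its squared response with its
# two neighbors, encouraging similar filters to sit side by side.
H = np.zeros((d, d))
for i in range(d):
    for j in (-1, 0, 1):
        H[i, (i + j) % d] = 1.0

def sym_orth(W):
    # Symmetric orthogonalization keeps the filters decorrelated,
    # so no two units can lock onto the same pattern.
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

W = sym_orth(rng.standard_normal((d, d)))
eps, lr = 1e-4, 0.05
energies = []
for _ in range(200):
    S = X @ W.T                 # unit responses, shape (n, d)
    pooled = S ** 2 @ H         # neighborhood energy per unit
    energies.append(np.sqrt(eps + pooled).sum(axis=1).mean())
    # Gradient of the pooled sparsity energy; descending it drives
    # units toward rare, "statistically out of the ordinary" patterns.
    grad = (S * ((eps + pooled) ** -0.5 @ H)).T @ X / n
    W = sym_orth(W - lr * grad)
```

After training, the rows of W are the learned filters; run on whitened natural image patches instead of this toy data, filters of this kind are known to come out as oriented edge detectors ordered by similarity around the ring, mirroring the V1 organization described earlier.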