Improvements in processor clock speed have ceased while silicon density continues to grow; as a result, multi-core processors have become the mainstay of computing. Unfortunately, although most of the world's computers are parallel, most of the world's software remains serial. Running multiple independent tasks in parallel is easy; improving the run-time of a single task via parallelism remains one of the greatest challenges in computing today. Serial programs can be rewritten by hand in an explicitly parallel manner to take advantage of multiple cores, but doing so is extraordinarily time-intensive and expensive, especially considering the vast repository of serial code worldwide, developed at enormous expense over the last several decades. An alternative is a parallelizing compiler, which automatically transforms input serial code into output parallel code. Its advantage, avoiding the need to rewrite programs by hand, is so compelling that if a practical, robust, and efficient parallelizer were available, it would undoubtedly be widely used. Yet no production-quality parallelizing compiler exists today: despite much research, existing research prototypes have proven inadequate. Even when parallel code is generated, the increasing complexity of modern computing systems makes it difficult to achieve even a reasonable fraction of a system's peak performance.
We are developing AESOP, an automatically parallelizing compiler that advances the state of the art in two ways: (i) a goal-directed transformation decision algorithm that is far more effective at finding, for each loop, the sequence of program transformations yielding the lowest run-time; and (ii) extensive system characterization that automatically learns about the underlying hardware, so that compiler transformations can be applied in a customized way to extract maximum performance from the specific target computing platform. AESOP is built on top of the production-quality open-source LLVM serial compiler. The problem of obtaining parallel software, important everywhere, is even more acute for NASA. NASA relies disproportionately on time-intensive, performance-critical scientific codes in both terrestrial and flight computing, most of which are serial today. Increasingly sophisticated sensors aboard space probes generate digital data at a faster rate than ever before, while communication bandwidth to Earth is growing much more slowly; on-board flight processing has therefore become necessary to reduce raw data to a much smaller set of processed results that are feasible to transmit to Earth. Demands for high-performance parallel code abound in NASA's terrestrial computing as well. We have recently begun extensive discussions with NASA officials at the Goddard Space Flight Center (GSFC). After exchanged visits between UMD and GSFC, specific teams at GSFC (identified inside) have expressed interest in adapting AESOP for their needs. NASA has recently started using parallel computing platforms, in particular the Tilera and SpaceCube radiation-hardened processors, but most of its existing programs are serial. One group at GSFC is interested in having us apply the AESOP compiler to parallelize their serial code for both platforms.
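The goal-directed decision algorithm of (i) can be illustrated with a minimal sketch: a greedy search that, for each loop, repeatedly applies whichever candidate transformation most reduces a run-time estimate, stopping when none helps. All names below (the feature-dict loop model, `goal_directed_search`, the `cost` function, and the toy transformations) are illustrative assumptions, not AESOP's actual API; a deterministic cost model stands in for measured run-time.

```python
from typing import Callable, Dict, List, Optional, Tuple

# Hypothetical stand-ins, not AESOP's actual API: a loop is summarized
# by a feature dict, a transformation rewrites those features, and
# cost() is a deterministic stand-in for measured run-time.
Loop = Dict[str, float]
Transformation = Tuple[str, Callable[[Loop], Loop]]

def goal_directed_search(loop: Loop,
                         candidates: List[Transformation],
                         cost: Callable[[Loop], float]) -> Tuple[List[str], float]:
    """Greedy goal-directed search: at each step, apply whichever
    candidate transformation lowers the loop's cost the most; stop
    when no candidate improves it further."""
    chosen: List[str] = []
    best = cost(loop)
    improved = True
    while improved:
        improved = False
        best_step: Optional[Tuple[str, Loop]] = None
        for name, apply in candidates:
            trial = apply(loop)
            c = cost(trial)
            if c < best:
                best, best_step, improved = c, (name, trial), True
        if best_step is not None:
            name, loop = best_step
            chosen.append(name)
    return chosen, best

# Toy example: a 1e6-iteration serial loop; "parallelize" models running
# on 4 cores, "unroll" models a one-time 20% per-iteration speedup.
loop = {"iters": 1e6, "cost_per_iter": 1.0, "cores": 1}
candidates = [
    ("parallelize", lambda l: {**l, "cores": 4}),
    ("unroll", lambda l: l if l.get("unrolled")
     else {**l, "cost_per_iter": l["cost_per_iter"] * 0.8, "unrolled": True}),
]
cost = lambda l: l["iters"] * l["cost_per_iter"] / l["cores"]
seq, t = goal_directed_search(loop, candidates, cost)
# seq == ["parallelize", "unroll"]; t is roughly 200000.0
```

In a real compiler the cost function would be a measured or modeled run-time on the characterized target hardware (point (ii)), which is what makes the chosen transformation sequence specific to each loop and platform.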
Another group at NASA has expressed interest in our using AESOP to enhance the parallelism of the weather-forecasting code that runs on the 8000-core Linux cluster at GSFC. If funded via NSTRF, I plan to explore as many such projects as possible by dividing my time each week between UMD and GSFC over the next several years, which is practical because the two are only 4 miles apart. Regardless of funding, I have been offered a summer 2011 internship at GSFC by three different NASA groups. I will accept one of the offers soon, and look forward to a productive collaboration.