Exploration of other planets will require teams of robotic rovers to first precede and later assist human explorers. A critical challenge will then be to generate plans for these rovers so as to maximize their productivity while minimizing the time and energy spent to complete a set of tasks. The tasks assigned to an agent may feature temporal constraints as well as complex interdependencies with other tasks, including those assigned to another agent. Numerous techniques have been developed to efficiently compute policies for a single agent using continuous resource Markov Decision Processes (CR-MDP). However, expanding such algorithms to situations involving multiple agents is difficult, given that the complexity of solving decentralized Markov Decision Processes (DEC-MDPs) has been shown to be NEXP-complete. Even when using approximate algorithms, it is difficult to achieve the scale-up necessary to model the size and complexity of real-world domains using distributed CR-MDPs (CR-DEC-MDP). I intend to explore methods for solving CR-DEC-MDPs more efficiently, including (1) fast, locally optimal methods that exploit domain structure; and (2) efficient methods of convolution using fast Fourier transform (FFT).
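As a rough, purely illustrative sketch (the class name, distributions, and values below are hypothetical, not from the project), the following Java snippet shows the kind of convolution a CR-MDP solver performs when composing discretized resource-usage distributions of sequential tasks; the FFT-based methods proposed above would compute the same result in O(n log n) rather than the O(n^2) of this direct loop.

```java
import java.util.Arrays;

// Hypothetical sketch: composing two independent, discretized resource-usage
// (e.g. time or energy) distributions by direct convolution. A CR-MDP solver
// performs many such convolutions; an FFT-based approach would compute the
// same result in O(n log n) instead of O(n^2).
public class ConvolutionSketch {

    // Direct convolution: result[k] = sum over i of a[i] * b[k - i].
    static double[] convolve(double[] a, double[] b) {
        double[] result = new double[a.length + b.length - 1];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < b.length; j++) {
                result[i + j] += a[i] * b[j];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Probability that task 1 / task 2 consumes 0, 1, 2, ... units (made-up numbers).
        double[] task1 = {0.0, 0.2, 0.5, 0.3};
        double[] task2 = {0.0, 0.1, 0.6, 0.3};
        // Distribution over the total consumption of running task 1 then task 2.
        System.out.println(Arrays.toString(convolve(task1, task2)));
    }
}
```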
| Organizations Performing Work | Role | Type | Location |
| --- | --- | --- | --- |
| University of Southern California (USC) | Lead Organization | Academia | Los Angeles, California |
| Jet Propulsion Laboratory (JPL) | Supporting Organization | FFRDC/UARC | Pasadena, California |
The goal of this technical work is to establish metrics for evaluating multi-objective approaches and the solutions they generate. These metrics will encompass aspects of multi-objective optimization techniques such as runtime, solution quality (exact versus approximate), solution diversity (coverage of the Pareto frontier), etc. By creating these metrics, we will be able to assess the strengths and weaknesses of both evolutionary approaches and the epsilon-constraint method for solving multi-objective optimization problems. These approaches will be compared using NASA domains such as scheduling for the Deep Space Network. A secondary goal is to investigate issues related to representing and visualizing the Pareto frontier, which is a critical step in the decision-making process. When the number of objectives is small (two or three), visualization is mostly straightforward because the Pareto frontier can be represented in Euclidean space. When the number of objectives exceeds three, novel visualization techniques are required. For these situations, we will examine which approaches, or combinations of approaches, present the Pareto frontier in a comprehensible manner.
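As a small illustration of the dominance test that underlies diversity and coverage metrics for the Pareto frontier (a hedged sketch only; the class name and sample objective values are hypothetical and not part of the project code), the following Java snippet extracts the non-dominated subset of a set of objective vectors under minimization.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: extracting the non-dominated (Pareto-optimal) subset of a
// set of objective vectors, assuming all objectives are minimized. Coverage and
// diversity metrics for the Pareto frontier build on this dominance test.
public class ParetoFilter {

    // Returns true if a dominates b: a is no worse in every objective and
    // strictly better in at least one.
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    // Returns the subset of points not dominated by any other point.
    static List<double[]> nonDominated(List<double[]> points) {
        List<double[]> front = new ArrayList<>();
        for (double[] p : points) {
            boolean dominated = false;
            for (double[] q : points) {
                if (q != p && dominates(q, p)) { dominated = true; break; }
            }
            if (!dominated) front.add(p);
        }
        return front;
    }

    public static void main(String[] args) {
        List<double[]> points = new ArrayList<>();
        points.add(new double[] {1.0, 4.0});   // non-dominated
        points.add(new double[] {2.0, 3.0});   // non-dominated
        points.add(new double[] {3.0, 3.5});   // dominated by (2.0, 3.0)
        System.out.println("Front size: " + nonDominated(points).size());  // prints 2
    }
}
```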
We have conducted a series of experiments to analyze the effects of parallelization on multi-objective evolutionary algorithms using three different multi-core computers, described in the table below. Two of these are end-user systems of the kind found in a typical high-end office environment. The third, a Linux server, is a more expensive ($25K) rack-mounted machine of the kind found in a typical data center.
For an initial set of experiments, we modified the serial GDE3 algorithm to make use of the Java 7 ForkJoin framework and compared its run time against the original serial implementation when ForkJoin was constrained to use just a single core. The results were virtually identical, so we use the “1-core” results as the baseline for measuring speedup.
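A minimal sketch of this pattern is shown below, assuming population evaluation expressed as a RecursiveAction submitted to a ForkJoinPool; it is not the project's actual GDE3 modification, and the class names and placeholder objective are hypothetical. Fixing the pool's parallelism (e.g. new ForkJoinPool(1)) yields the single-core baseline described above.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Hypothetical sketch: parallel fitness evaluation of a population with the
// Java 7 Fork/Join framework. The pool's parallelism level is set from the
// command line so the same code can be run with 1, 2, ... N worker threads.
public class ParallelEvaluationSketch {

    // Placeholder objective; in the real system this would be an expensive evaluation.
    static double evaluate(double[] x) {
        double s = 0.0;
        for (double v : x) s += v * v;
        return s;
    }

    static class EvaluateTask extends RecursiveAction {
        static final int THRESHOLD = 8;      // evaluate small slices sequentially
        final double[][] population;         // decision vectors
        final double[] fitness;              // output objective values
        final int lo, hi;

        EvaluateTask(double[][] population, double[] fitness, int lo, int hi) {
            this.population = population;
            this.fitness = fitness;
            this.lo = lo;
            this.hi = hi;
        }

        @Override
        protected void compute() {
            if (hi - lo <= THRESHOLD) {
                for (int i = lo; i < hi; i++) {
                    fitness[i] = evaluate(population[i]);
                }
            } else {
                int mid = (lo + hi) >>> 1;    // split the index range in half
                invokeAll(new EvaluateTask(population, fitness, lo, mid),
                          new EvaluateTask(population, fitness, mid, hi));
            }
        }
    }

    public static void main(String[] args) {
        int cores = args.length > 0 ? Integer.parseInt(args[0]) : 1;
        ForkJoinPool pool = new ForkJoinPool(cores);    // constrain worker threads

        double[][] population = new double[200][10];    // placeholder population
        double[] fitness = new double[population.length];

        pool.invoke(new EvaluateTask(population, fitness, 0, population.length));
        pool.shutdown();
    }
}
```

Splitting the index range recursively, rather than handing out fixed chunks, lets the framework's work-stealing scheduler balance uneven evaluation costs across cores.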
The effect of adding cores on system A (the laptop) is shown in Figure 2A. For these runs, the maximum heap size was set to 12 GB. Going from one to two cores led to a speedup factor of 1.8, but the improvement leveled off quickly, and the maximum speedup (with 8 cores) was a factor of 3.0. In contrast, the desktop system (Figure 2B), with a total of 24 cores, continued to show noticeable improvement until about half of its cores were utilized; the overall speedup topped out at a factor of about 7.3.
This pattern of improvement with increasing core count on systems (A) and (B) did not hold on the third system, however. Figure 2C shows the results of executing the test on the Linux server (system C). This server has somewhat older and slower processors, so each generation took roughly 2 to 3 times longer than on the other two systems. Increasing the number of cores to 12 produced a proportional improvement similar to system (B), about a 7.3x speedup, but beyond 12 cores adding more at first made little difference and then degraded performance dramatically: using 24 cores performed about the same as using 2. This effect, while unexpected, has been reported as a consequence of memory bandwidth limitations, which is consistent with its appearance on the oldest of our test systems.