Space Technology Research Grants

Task Allocation Using Continuous Resource Distributed Markov Decision Processes

Completed Technology Project

Project Description

Exploration of other planets will require teams of robotic rovers, first to precede and later to assist human explorers. A critical challenge will be generating plans for these rovers that maximize their productivity while minimizing the time and energy spent completing a set of tasks. The tasks assigned to an agent may carry temporal constraints as well as complex interdependencies with other tasks, including tasks assigned to other agents. Numerous techniques have been developed to efficiently compute policies for a single agent using continuous resource Markov Decision Processes (CR-MDPs). However, extending such algorithms to multiple agents is difficult, because solving decentralized Markov Decision Processes (DEC-MDPs) is NEXP-complete. Even with approximate algorithms, it is difficult to achieve the scale-up needed to model the size and complexity of real-world domains using continuous resource DEC-MDPs (CR-DEC-MDPs). I intend to explore methods for solving CR-DEC-MDPs more efficiently, including (1) fast, locally optimal methods that exploit domain structure, and (2) efficient convolution using the fast Fourier transform (FFT).
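
To make the FFT idea concrete, the sketch below (not from the project; the function name, grid spacing, and numbers are illustrative assumptions) convolves two discretized resource-consumption distributions, the kind of operation a continuous-resource value-iteration backup repeats many times. Zero-padded FFT multiplication reduces each convolution from O(n^2) to O(n log n).

# Minimal sketch, assuming resource-use distributions have been discretized
# onto a uniform grid; not the project's actual code.
import numpy as np

def convolve_fft(p, q):
    """Convolve two discretized probability mass functions via the FFT.

    Direct convolution costs O(n^2); zero-padding to avoid wrap-around and
    multiplying in the frequency domain costs O(n log n).
    """
    n = len(p) + len(q) - 1          # length of the full linear convolution
    m = 1 << (n - 1).bit_length()    # next power of two for efficient FFTs
    P = np.fft.rfft(p, m)
    Q = np.fft.rfft(q, m)
    out = np.fft.irfft(P * Q, m)[:n]
    return np.clip(out, 0.0, None)   # clamp tiny negative round-off values

# Hypothetical example: energy used by two sequential tasks, on a grid of
# fixed-size energy steps.
task_a = np.array([0.0, 0.2, 0.5, 0.3])   # P(task A uses 0..3 grid steps)
task_b = np.array([0.1, 0.6, 0.3])        # P(task B uses 0..2 grid steps)
combined = convolve_fft(task_a, task_b)   # distribution of total energy used
print(combined.round(3), combined.sum())  # probabilities sum to ~1.0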
