For the decision-support scenarios that are particularly relevant to NASA, such as planning for human space missions, human operators will need a system that can (i) predict the explicability of plans from their characterizations in the current context and synthesize explicable plans based on this prediction; generate excuses when no explicable plan exists; and produce explanations when parts of a plan are expected to be difficult to understand; and (ii) replan when the current solution must be updated to incorporate human feedback as the situation changes, while keeping track of commitments previously made to the operators. These activities ensure that both planning and replanning are explicable to the operators in the loop, thus facilitating more effective decision support.

In this project, we propose to develop a framework that realizes explicable automation for working with humans. The proposed framework supports explicable planning and replanning by Recommending Explicable plans that are easy to understand, explaining plan recommendations via excuse and explanation generation, and replanning to Accommodate previous commitments to the human for Decision Support (READS). The system is expected to facilitate natural human-machine interaction, which has broader impacts in a variety of other applications, such as command and control. Hence, the proposed system represents an important step toward realizing automated systems with humans in the loop.
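The decision loop described above — recommend an explicable plan when one exists, fall back to an excuse or explanation when one does not, and replan while honoring prior commitments — can be sketched as follows. This is a minimal illustrative sketch, not the proposed system: every name (`Plan`, `explicability_score`, `recommend`, `replan`) and the step-count proxy for explicability are assumptions made for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    steps: list
    commitments: set = field(default_factory=set)  # steps promised to the operator

def explicability_score(plan):
    # Assumption for illustration: shorter plans are easier
    # for a human operator to follow.
    return 1.0 / (1 + len(plan.steps))

def recommend(candidates, threshold=0.2):
    """Return (plan, message): an explicable plan when one exists,
    otherwise the best available plan plus an excuse."""
    best = max(candidates, key=explicability_score)
    if explicability_score(best) >= threshold:
        return best, "plan is explicable"
    # No explicable plan exists: attach an excuse explaining why.
    return best, "excuse: no plan meets the explicability threshold"

def replan(plan, new_steps):
    """Incorporate human feedback while preserving steps that were
    previously committed to the operator."""
    kept = [s for s in plan.steps if s in plan.commitments]
    added = [s for s in new_steps if s not in kept]
    return Plan(steps=kept + added, commitments=plan.commitments)
```

Under this sketch, `recommend` realizes activity (i) and `replan` realizes activity (ii); a real planner would replace the step-count proxy with a learned model of operator expectations.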