6.231 Dynamic Programming and Stochastic Control
Sequential decision-making via dynamic programming. Unified approach to optimal control of stochastic dynamic systems and Markovian decision problems. Applications in linear-quadratic control, inventory control, resource allocation, scheduling, and planning. Optimal decision-making under perfect and imperfect state information. Certainty equivalent, open-loop feedback control, rollout, model predictive control, aggregation, and other suboptimal control methods. Infinite horizon problems: discounted, stochastic shortest path, average cost, and semi-Markov models. Value and policy iteration. Abstract models in dynamic programming. Approximate/neuro-dynamic programming. Simulation-based methods. Discussion of current research on the solution of large-scale problems.
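To illustrate one of the core methods listed above, here is a minimal value-iteration sketch for a toy discounted Markov decision problem. The two-state, two-action MDP below is a hypothetical example for illustration only, not taken from the course materials.

```python
GAMMA = 0.9  # discount factor (assumed for this example)

# Transition model: P[s][a] = list of (next_state, probability, reward) triples.
# Hypothetical 2-state MDP: action 1 in state 0 is a risky move toward state 1.
P = {
    0: {0: [(0, 1.0, 0.0)],                   # stay in state 0, no reward
        1: [(1, 0.8, 5.0), (0, 0.2, 0.0)]},   # 80% chance of reaching state 1
    1: {0: [(1, 1.0, 1.0)],                   # stay in state 1, small reward
        1: [(0, 1.0, 0.0)]},                  # return to state 0
}

def value_iteration(P, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator until the values change
    by less than tol (in-place, Gauss-Seidel-style updates)."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                sum(p * (r + gamma * V[s2]) for s2, p, r in outcomes)
                for outcomes in P[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(P, GAMMA)
```

Because the Bellman operator is a contraction in the discounted case, the iteration converges to the unique optimal value function regardless of the initial guess; policy iteration, also covered in the course, instead alternates policy evaluation with greedy policy improvement.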
6.231 will be offered this semester (Spring 2019). It is taught by J. N. Tsitsiklis.
This graduate-level class counts for a total of 12 credits.
© Copyright 2015 Yasyf Mohamedali