6.231 Dynamic Programming and Stochastic Control

Class Info

Sequential decision-making via dynamic programming. Unified approach to optimal control of stochastic dynamic systems and Markovian decision problems. Applications in linear-quadratic control, inventory control, resource allocation, scheduling, and planning. Optimal decision making under perfect and imperfect state information. Certainty equivalent, open-loop feedback control, rollout, model predictive control, aggregation, and other suboptimal control methods. Infinite horizon problems: discounted, stochastic shortest path, average cost, and semi-Markov models. Value and policy iteration. Abstract models in dynamic programming. Approximate/neuro-dynamic programming. Simulation-based methods. Discussion of current research on the solution of large-scale problems.
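To give a flavor of the material, here is a minimal sketch of value iteration for a discounted infinite-horizon problem, one of the topics listed above. The two-state, two-action MDP below is purely illustrative (not taken from course materials): `P[a][s][t]` is the transition probability from state `s` to state `t` under action `a`, and `R[a][s]` is the expected one-stage reward.

```python
# Illustrative value iteration for a tiny discounted MDP.
# The transition/reward numbers are made up for demonstration only.

P = [
    [[0.9, 0.1], [0.4, 0.6]],  # transition probabilities under action 0
    [[0.2, 0.8], [0.5, 0.5]],  # transition probabilities under action 1
]
R = [
    [1.0, 0.0],  # one-stage rewards under action 0
    [0.0, 2.0],  # one-stage rewards under action 1
]
gamma = 0.9  # discount factor


def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman operator until successive value
    functions differ by less than tol in the sup norm."""
    n = len(P[0])            # number of states
    V = [0.0] * n            # initial value function
    while True:
        V_new = [
            max(
                R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
                for a in range(len(P))
            )
            for s in range(n)
        ]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new


V = value_iteration(P, R, gamma)
print([round(v, 3) for v in V])
```

Because the discount factor is less than one, the Bellman operator is a contraction, so the iteration converges to the unique optimal value function regardless of the starting point.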

This class has 6.041B, 18.600, 18.100A, 18.100B, and 18.100Q as prerequisites.

6.231 will be offered this semester (Spring 2019). It is taught by J. N. Tsitsiklis.

This class counts for a total of 12 credits. This is a graduate-level class.

You can find more information on the 6.231 Stellar site.

