6.231 Dynamic Programming and Stochastic Control


Class Info

Sequential decision-making via dynamic programming. Unified approach to optimal control of stochastic dynamic systems and Markovian decision problems. Applications in linear-quadratic control, inventory control, resource allocation, scheduling, and planning. Optimal decision making under perfect and imperfect state information. Certainty equivalent, open-loop feedback control, rollout, model predictive control, aggregation, and other suboptimal control methods. Infinite horizon problems: discounted, stochastic shortest path, average cost, and semi-Markov models. Value and policy iteration. Abstract models in dynamic programming. Approximate/neuro-dynamic programming. Simulation-based methods. Discussion of current research on the solution of large-scale problems.
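As an illustration of one topic above, value iteration for a discounted Markov decision problem repeatedly applies the Bellman operator until the value function stops changing. The sketch below uses a small hypothetical two-state, two-action MDP (the transition matrices, rewards, and discount factor are made up for illustration; they are not from the course):

```python
import numpy as np

# Hypothetical 2-state, 2-action discounted MDP (illustrative only).
# P[a] is the transition matrix under action a; R[a, s] is the reward
# for taking action a in state s.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.4, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman operator until the sup-norm change is below tol."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)          # Bellman optimality update
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # values and a greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
print("optimal values:", V)
print("greedy policy:", policy)
```

Because the Bellman operator is a contraction in the sup norm (with modulus gamma), the iteration converges geometrically to the unique optimal value function, which is the basic fact underlying both value and policy iteration for discounted problems.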

This class has 6.041B, 18.600, 18.100A, 18.100B, and 18.100Q as prerequisites.

6.231 will not be offered this semester. It will be offered in the Spring semester.

This is a 12-unit graduate-level class.



© Copyright 2015