6.231 Dynamic Programming and Stochastic Control

Class Info

Sequential decision-making via dynamic programming. Unified approach to optimal control of stochastic dynamic systems and Markovian decision problems. Applications in linear-quadratic control, inventory control, resource allocation, scheduling, and planning. Optimal decision making under perfect and imperfect state information. Certainty equivalent, open-loop feedback control, rollout, model predictive control, aggregation, and other suboptimal control methods. Infinite horizon problems: discounted, stochastic shortest path, average cost, and semi-Markov models. Value and policy iteration. Abstract models in dynamic programming. Approximate/neurodynamic programming. Simulation-based methods. Discussion of current research on the solution of large-scale problems.
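To give a flavor of the value iteration method mentioned above, here is a minimal sketch for a discounted Markov decision problem. This is not from the course materials; the MDP below (transition matrices `P`, rewards `R`, discount `gamma`) is a made-up two-state example for illustration.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a discounted MDP.

    P[a] is an (S, S) transition matrix for action a; R is an (S, A)
    reward array. Returns the optimal values and a greedy policy.
    """
    S, A = R.shape
    V = np.zeros(S)
    while True:
        # Bellman optimality update:
        # Q(s, a) = R(s, a) + gamma * sum_{s'} P(s' | s, a) * V(s')
        Q = R + gamma * np.stack([P[a] @ V for a in range(A)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy example: action 0 stays in place, action 1 swaps states;
# reward 1 only for staying in state 0.
P = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
R = np.array([[1.0, 0.0], [0.0, 0.0]])
V, policy = value_iteration(P, R, gamma=0.9)
```

Because the Bellman update is a contraction with modulus `gamma`, the iteration converges geometrically; here the optimal policy is to stay in state 0 and to move there from state 1.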

This class has 6.041B, 18.204, 18.100A, 18.100B, and 18.100Q as prerequisites.

6.231 will not be offered this semester. It will be offered in the spring semester and taught by J. N. Tsitsiklis.

Lectures meet on Tuesdays and Thursdays from 2:30 PM to 4:00 PM in room 56-114.

This class counts for 12 credits.

You can find more information on the 6.231 Stellar site.


© Copyright 2015