6.231 Dynamic Programming and Stochastic Control
Sequential decision-making via dynamic programming. Unified approach to optimal control of stochastic dynamic systems and Markovian decision problems. Applications in linear-quadratic control, inventory control, resource allocation, scheduling, and planning. Optimal decision-making under perfect and imperfect state information. Certainty equivalent, open-loop feedback control, rollout, model predictive control, aggregation, and other suboptimal control methods. Infinite horizon problems: discounted, stochastic shortest path, average cost, and semi-Markov models. Value and policy iteration. Abstract models in dynamic programming. Approximate/neuro-dynamic programming. Simulation-based methods. Discussion of current research on the solution of large-scale problems.
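As a small illustration of the value iteration method mentioned above (this is an informal sketch, not course material: the two-state MDP, its transition probabilities, rewards, and discount factor are all made up for demonstration):

```python
# Value iteration on a hypothetical 2-state, 2-action discounted MDP.
import numpy as np

# P[a][s][s'] = probability of moving from state s to s' under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.7, 0.3]],   # transitions under action 1
])
# R[a][s] = immediate reward for taking action a in state s.
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman update: Q(s,a) = R(s,a) + gamma * sum_{s'} P(s'|s,a) V(s')
    Q = R.T + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)          # V(s) = max_a Q(s,a)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break                      # converged to (near) fixed point
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy w.r.t. converged V
print("V =", V, "policy =", policy)
```

For gamma < 1 the Bellman operator is a contraction, so the iteration converges geometrically to the unique optimal value function; policy iteration, also covered in the course, instead alternates policy evaluation with greedy policy improvement.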
6.231 will not be offered this semester. It will be offered in the Spring semester, taught by J. N. Tsitsiklis.
Lectures meet Tuesdays and Thursdays, 2:30 PM to 4:00 PM, in room 56-114.
This class carries 12 units of credit.
More information is available via a Google search for MIT 6.231 (http://www.google.com/search?&q=MIT+%2B+6.231&btnG=Google+Search&inurl=https) or on the 6.231 Stellar site.
© Copyright 2015 Yasyf Mohamedali