Question:
Chapter 17 defined a proper policy for an MDP as one that is guaranteed to reach a terminal state. Show that it is possible for a passive ADP agent to learn a transition model for which its policy π is improper even if π is proper for the true MDP; with such models, the value determination step may fail if γ = 1. Show that this problem cannot arise if value determination is applied to the learned model only at the end of a trial.
Step by Step Answer:
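The key observation: mid-trial, the agent's estimated transition model contains only the transitions observed so far, and those can form a loop with no path to a terminal state, even though the true MDP under π always terminates. Value determination solves the linear equations V = R + γTV; with γ = 1 and a looping (improper) learned model, the matrix (I − T) is singular, so the values are undefined. The sketch below is a minimal, hypothetical illustration (a two-state loop and placeholder reward values, not taken from the book) of that failure.

import numpy as np

# Hypothetical example (state names and numbers are illustrative).
# Non-terminal states A and B; the true MDP under pi eventually reaches a
# terminal state, but mid-trial the agent has only observed A -> B and
# B -> A, so its learned model loops forever under pi.
R = np.array([-0.04, -0.04])           # estimated rewards for A and B
T = np.array([[0.0, 1.0],              # learned P(next | A, pi(A)): always B
              [1.0, 0.0]])             # learned P(next | B, pi(B)): always A

def value_determination(T, R, gamma):
    # Solve the value-determination equations V = R + gamma * T V,
    # i.e. (I - gamma * T) V = R.
    return np.linalg.solve(np.eye(len(R)) - gamma * T, R)

print(value_determination(T, R, 0.9))  # works: discounting keeps values finite
try:
    print(value_determination(T, R, 1.0))
except np.linalg.LinAlgError as err:
    # With gamma = 1 and an improper learned policy, (I - T) is singular:
    # the loop A -> B -> A -> ... makes the undiscounted values undefined.
    print("Value determination failed:", err)

If value determination is applied only at the end of a trial, the problem cannot arise: every state the agent has visited lies on an observed trajectory that ends at the trial's terminal state, so in the learned model each visited state has a positive-probability path to a terminal state under π. The learned policy is therefore proper in the learned model, and (I − T) restricted to the non-terminal states is invertible even when γ = 1.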
Related Book:
Artificial Intelligence: A Modern Approach, 2nd Edition
Stuart Russell and Peter Norvig
ISBN: 9780137903955