Question:

Chapter 17 defined a proper policy for an MDP as one that is guaranteed to reach a terminal state. Show that it is possible for a passive ADP agent to learn a transition model for which its policy π is improper even if π is proper for the true MDP; with such models, the value determination step may fail if γ = 1. Show that this problem cannot arise if value determination is applied to the learned model only at the end of a trial.
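A minimal sketch of the failure mode, under assumed illustrative numbers: a two-state MDP in which the true policy always moves from the nonterminal state to the terminal state (proper), but the agent's early observations happen to contain only a self-loop, so the learned model is improper. Value determination solves the linear system (I − γP)U = R over nonterminal states; with γ = 1 and the improper learned model, I − γP is singular and the solve fails. The state layout, reward of −0.04, and transition counts here are hypothetical, not from the original exercise.

```python
import numpy as np

# Hypothetical two-state MDP: state 0 is nonterminal, state 1 is terminal.
# True model under policy pi: P(0 -> 1) = 1, so pi is proper.
# Suppose the passive ADP agent has so far observed only a (rare) 0 -> 0
# transition, so its estimated model is P_hat(0 -> 0) = 1 -- improper.

R = np.array([-0.04])       # assumed reward in the nonterminal state
P_hat = np.array([[1.0]])   # learned transition matrix over nonterminal states
gamma = 1.0

# Value determination solves (I - gamma * P_hat) U = R.
A = np.eye(1) - gamma * P_hat   # = [[0.0]]: singular, so no solution exists
solve_failed = False
try:
    U = np.linalg.solve(A, R)
except np.linalg.LinAlgError:
    solve_failed = True        # the improper model makes the system singular

# With the proper true model P(0 -> 1) = 1, the same system is solvable:
A_true = np.eye(1) - gamma * np.array([[0.0]])
U_true = np.linalg.solve(A_true, R)   # U(0) = R(0) = -0.04
```

If value determination is run only at the end of a trial, every observed trajectory has by then reached a terminal state, so every nonterminal state in the learned model lies on an observed path to a terminal state and the singularity above cannot occur.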

