Question:
Let us define a gridworld MDP, depicted in Figure 2. The states are grid squares, identified
by their row and column number (row first). The agent always starts in state (1,1), marked
with the letter S. There are two terminal goal states, (2,3) with reward +5 and (1,3) with
reward -5. Rewards are 0 in non-terminal states. (The reward for a state is received as
the agent moves into the state.) The transition function is such that the intended agent
movement (Up, Down, Left, or Right) happens with a probability of 0.8. With a probability
of 0.1 each, the agent ends up in one of the states perpendicular to the intended direction.
If a collision with a wall happens, the agent stays in the same state.
Figure 2: Left: Gridworld MDP, Right: Transition function
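For concreteness, the dynamics described above can be written out in code. The following is a minimal Python sketch, assuming a 2x3 grid (rows 1-2, columns 1-3, matching the states named in the problem) and assuming that "Up" decreases the row index; since the figure is not reproduced here, these layout conventions are assumptions, not part of the original statement.

from collections import defaultdict

ROWS, COLS = 2, 3
MOVES = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}
# Perpendicular directions for each intended move (each taken with probability 0.1).
PERP = {"Up": ("Left", "Right"), "Down": ("Left", "Right"),
        "Left": ("Up", "Down"), "Right": ("Up", "Down")}

def step(state, direction):
    # Deterministic one-step move; a collision with a wall leaves the state unchanged.
    r, c = state
    dr, dc = MOVES[direction]
    nr, nc = r + dr, c + dc
    return (nr, nc) if 1 <= nr <= ROWS and 1 <= nc <= COLS else state

def transition(state, action):
    # Distribution over next states: 0.8 intended move, 0.1 for each perpendicular slip.
    dist = defaultdict(float)
    dist[step(state, action)] += 0.8
    for slip in PERP[action]:
        dist[step(state, slip)] += 0.1
    return dict(dist)

# Going Right from the start state (1,1): the "Up" slip hits a wall, so the
# agent stays in (1,1) with probability 0.1.
print(transition((1, 1), "Right"))  # {(1, 2): 0.8, (1, 1): 0.1, (2, 1): 0.1}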
(a) Define the optimal policy for this gridworld MDP.
(b) Suppose the agent does not know the transition probabilities. What does the agent need to do in order to learn the optimal policy?
(c) The agent starts with the policy that always chooses to go right, and executes the following three trajectories: 1) (1,1)-(1,2)-(1,3), 2) …, and 3) (1,1)-(2,1)-(2,2)-(2,3). What are the First-Visit Monte Carlo estimates for states (1,1) and (2,2), given these trajectories? Suppose γ = 1. (A sketch of the First-Visit Monte Carlo estimator follows part (d) below.)
(d) Using a learning rate of α = 0.1 and assuming initial values of 0, what updates does the TD-learning agent make after trials 1 and 2, above? For this part, suppose γ = 0.9. (A TD(0) sketch also follows below.)
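Part (c) asks for First-Visit Monte Carlo estimates. The sketch below illustrates that estimator under the stated reward-on-entry convention (+5 into (2,3), -5 into (1,3), 0 otherwise) with γ = 1. Because trajectory 2 is not legible above, only trajectories 1 and 3 are replayed, so the printed numbers illustrate the method rather than answer the exam question in full.

REWARD_ON_ENTRY = {(2, 3): 5, (1, 3): -5}

def first_visit_mc(trajectories, gamma=1.0):
    # Average, over episodes, the return following the *first* visit to each state.
    totals, counts = {}, {}
    for traj in trajectories:
        # The reward for step i is received on entering traj[i + 1].
        rewards = [REWARD_ON_ENTRY.get(s, 0) for s in traj[1:]]
        G, returns = 0.0, {}
        # Walk backwards so G accumulates the discounted return from each state;
        # overwriting keeps the earliest (first) visit's return.
        for i in reversed(range(len(rewards))):
            G = rewards[i] + gamma * G
            returns[traj[i]] = G
        for s, g in returns.items():
            totals[s] = totals.get(s, 0.0) + g
            counts[s] = counts.get(s, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

traj1 = [(1, 1), (1, 2), (1, 3)]
traj3 = [(1, 1), (2, 1), (2, 2), (2, 3)]
print(first_visit_mc([traj1, traj3]))
# From these two trajectories alone: V(1,1) averages -5 and +5 -> 0.0,
# and V(2,2) is seen once with return +5.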
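Part (d) uses the tabular TD(0) update V(s) ← V(s) + α [r + γ V(s') − V(s)]. The sketch below applies it with α = 0.1, γ = 0.9, and all values initialised to 0. Again, only trajectory 1 is replayed here, since trajectory 2 is not legible above; the point is the update rule, not the final answer.

ALPHA, GAMMA = 0.1, 0.9
REWARD_ON_ENTRY = {(2, 3): 5, (1, 3): -5}

def td0_episode(V, traj, alpha=ALPHA, gamma=GAMMA):
    # Apply one TD(0) update per transition of the episode, in order.
    for s, s_next in zip(traj, traj[1:]):
        r = REWARD_ON_ENTRY.get(s_next, 0)   # reward is received on entering s_next
        v_s = V.get(s, 0.0)
        v_next = V.get(s_next, 0.0)          # terminal states are never updated, so they stay 0
        V[s] = v_s + alpha * (r + gamma * v_next - v_s)
    return V

V = {}
td0_episode(V, [(1, 1), (1, 2), (1, 3)])   # trial 1
print(V)  # {(1, 1): 0.0, (1, 2): -0.5}
# Only V(1,2) changes after trial 1: its TD error is -5 + 0.9 * 0 - 0 = -5,
# scaled by alpha = 0.1 to give -0.5.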