Question:
as an example.
a. Repeat Exercise 3.16 using hill climbing. Does your agent ever get stuck in a local minimum? Is it possible for it to get stuck with convex obstacles?
b. Construct a nonconvex polygonal environment in which the agent gets stuck.
c. Modify the hill-climbing algorithm so that, instead of doing a depth-1 search to decide where to go next, it does a depth-k search. It should find the best k-step path and do one step along it, and then repeat the process.
d. Is there some k for which the new algorithm is guaranteed to escape from local minima?
e. Explain how LRTA* enables the agent to escape from local minima in this case.
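For part c, the depth-k lookahead can be sketched as follows. This is a minimal illustration, not the book's reference implementation: the `neighbors` successor function and the heuristic `h` (lower is better, 0 at the goal) are assumed to be supplied by the environment.

```python
def depth_k_hill_climb(state, neighbors, h, k, max_steps=1000):
    """Hill climbing with depth-k lookahead (part c): find the best
    k-step path, take one step along it, and repeat."""

    def best_leaf(s, depth):
        # Return the best h-value reachable within `depth` more steps,
        # together with the first move on a path achieving it.
        if depth == 0:
            return h(s), None
        best_val, best_move = h(s), None
        for n in neighbors(s):
            val, _ = best_leaf(n, depth - 1)
            if val < best_val:
                best_val, best_move = val, n
        return best_val, best_move

    for _ in range(max_steps):
        if h(state) == 0:          # goal reached
            return state
        _, move = best_leaf(state, k)
        if move is None:           # no k-step path improves on h(state):
            return state           # stuck in a local minimum wider than k
        state = move
    return state
```

On a toy one-dimensional environment with a local minimum one step wide, depth-1 search gets stuck while depth-2 escapes, which is the behavior parts b and d ask about: the agent escapes whenever k exceeds the width of the basin around the local minimum.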
Exercise 4.18 Compare the performance of A* and RBFS on a set of randomly generated problems in the 8-puzzle (with Manhattan distance) and TSP (with the MST heuristic; see Exercise 4.8) domains.
Discuss your results. What happens to the performance of RBFS when a small random number is added to the heuristic values in the 8-puzzle domain?
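A minimal sketch of the A* half of the 8-puzzle comparison, counting node expansions so runs can be compared against RBFS. The state encoding (a 9-tuple with 0 as the blank) and the `astar` helper are illustrative choices, not prescribed by the exercise.

```python
import heapq

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # 0 is the blank; tile t belongs at index t

def manhattan(state):
    # Sum of Manhattan distances of tiles 1..8 from their goal squares.
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(state) if t != 0)

def neighbors(state):
    # Yield the states reachable by sliding one tile into the blank.
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start, h=manhattan):
    """Return (solution cost, nodes expanded), or (None, expanded)."""
    frontier = [(h(start), 0, start)]      # (f, g, state)
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if g > best_g.get(s, float('inf')):
            continue                        # stale queue entry
        if s == GOAL:
            return g, expanded
        expanded += 1
        for n in neighbors(s):
            if g + 1 < best_g.get(n, float('inf')):
                best_g[n] = g + 1
                heapq.heappush(frontier, (g + 1 + h(n), g + 1, n))
    return None, expanded
```

For the noise question, one can rerun with `h = lambda s: manhattan(s) + random.uniform(0, eps)` for a small `eps`: A* is largely unaffected (the perturbation mostly breaks ties), whereas RBFS, whose backtracking is driven by comparisons of stored f-values, tends to re-expand far more nodes when those values keep shifting.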
Source: Artificial Intelligence: A Modern Approach, 2nd Edition. Stuart Russell, Peter Norvig. ISBN: 9780137903955.