Question
Problem statement: There are two agents named R1 and G1. Both are searching for a "heart", shown in the configurations below as H, that gives everlasting power. Both agents are trying to reach the heart, and many obstacles may be encountered along the way. Help them find the best path to the heart from any arbitrary start position. [Dynamically fetch the start position while executing the code.]

For the agent R1 the obstacle is the green room: if R1 enters the green room it incurs a penalty of +10 cost, and if it uses the red room it incurs a penalty of -10 points. For the agent G1 the obstacle is the red room: if G1 enters the red room it incurs a penalty of +10 cost, and if it uses the green room it incurs a penalty of -10 points. In addition to these penalties, every transition an agent makes incurs a path cost of 1.

For any node n, the heuristic to reach the heart is h(n) = Manhattan distance + color penalty, where the color penalty is +5 if node n and the goal node are in differently colored rooms, and -5 if they are in the same colored room.

Use the A* algorithm for both of the configurations below and interpret which agent works well in which environment. Justify your interpretation with relevant performance metrics. Note: the agents are not competing with each other. You need to run the simulation for both agents in each scenario separately and submit the results of all 4 runs.
1. Use the above-mentioned algorithm for all the scenarios and implement it in Python.
2. Print the simulation results.
3. Include code in your implementation to calculate the space complexity and time complexity of the informed search, and print the same. For local search, interpret the significance of the hyperparameters, if any are applicable.
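(A quick worked check of the heuristic, with positions assumed for illustration: if n = (1, 2) lies in a green room and the heart is at (3, 3) in a red room, then h(n) = (|1 - 3| + |2 - 3|) + 5 = 3 + 5 = 8; if both cells were green, h(n) = 3 - 5 = -2. Because the -5 bonus can make h(n) negative or larger than the true remaining cost, this heuristic is not guaranteed admissible, which is worth mentioning when interpreting the results.)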
======================================================
For the above question I received the answer below, but the code does not run in Python.
Could you please help me get the Python code running and display the output?
How do I initialise the graph? Initialising it in terms of A, B, C, D nodes is not giving me any results.
How do I set up a 6x6 maze in terms of (x, y) coordinates, taking the maze structure and the associated penalty points into account?
Also, if the agents have to move in all 8 directions, how do I adapt the code? (A sketch addressing all three questions follows.)
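A minimal sketch addressing all three questions, using an assumed room layout of my own (the real layout should come from the assignment's two configurations): nodes are (row, col) tuples on a 6x6 grid, a character map drives the +-10 penalties, and appending the four diagonals to the direction list gives 8-way movement.

# Hypothetical 6x6 layout: ' ' = plain cell, 'G' = green room,
# 'R' = red room, 'H' = heart (goal)
maze = [[' ', ' ', 'G', ' ', ' ', ' '],
        [' ', 'R', ' ', ' ', 'G', ' '],
        [' ', ' ', ' ', 'R', ' ', ' '],
        [' ', 'G', ' ', 'H', ' ', ' '],
        [' ', ' ', 'R', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', 'G', ' ']]

# 8-way movement: 4 orthogonal plus 4 diagonal steps
DIRECTIONS_8 = [(-1, 0), (1, 0), (0, -1), (0, 1),
                (-1, -1), (-1, 1), (1, -1), (1, 1)]

def step_cost(cell_color, agent):
    # Every move costs 1; entering the obstacle room adds 10 and
    # entering the favorable room subtracts 10, per the problem statement
    penalty = {'R1': {'G': 10, 'R': -10}, 'G1': {'R': 10, 'G': -10}}
    return 1 + penalty[agent].get(cell_color, 0)

def neighbors(pos):
    # Yield every in-bounds 8-way neighbour of a (row, col) position
    for dr, dc in DIRECTIONS_8:
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < len(maze) and 0 <= c < len(maze[0]):
            yield (r, c)

With this representation the start position can be read at runtime, e.g. start = tuple(int(v) for v in input('start row,col: ').split(',')), which covers the "dynamically fetch the start position" requirement.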
===============================================
The answer that was provided to me is as follows:
import heapq

# Graph from the provided answer. Note that the letter nodes carry no
# coordinates, and some adjacency entries are inconsistent (e.g. 'Q' lists
# itself as a neighbour), so the coordinate-based heuristic below cannot
# work with it (see the note after this listing).
graph = {
    'A': ['B', 'C', 'D'], 'B': ['A', 'E', 'F'], 'C': ['A', 'G', 'H'],
    'D': ['A', 'I', 'J'], 'E': ['B', 'K', 'L'], 'F': ['B', 'M', 'N'],
    'G': ['C', 'O', 'P'], 'H': ['C', 'Q', 'R'], 'I': ['D', 'S', 'T'],
    'J': ['D', 'U', 'V'], 'K': ['E', 'W', 'X'], 'L': ['E', 'Y', 'Z'],
    'M': ['F', 'H', 'I'], 'N': ['F', 'J', 'K'], 'O': ['G', 'L', 'M'],
    'P': ['G', 'N', 'O'], 'Q': ['H', 'P', 'Q'], 'R': ['H', 'R', 'S'],
    'S': ['I', 'T', 'U'], 'T': ['I', 'V', 'W'], 'U': ['J', 'X', 'Y'],
    'V': ['J', 'Z', 'A'], 'W': ['K', 'B', 'C'], 'X': ['K', 'D', 'E'],
    'Y': ['L', 'F', 'G'], 'Z': ['L', 'H', 'I']
}

def heuristic(node, goal):
    # Expects (x, y) coordinate tuples; unpacking a one-letter node name
    # such as 'A' raises a ValueError, so this fails on the graph above
    x1, y1 = node
    x2, y2 = goal
    manhattan = abs(x1 - x2) + abs(y1 - y2)
    color_penalty = 5 if node[0] != goal[0] else -5
    return manhattan + color_penalty

def cost(current, neighbor, agent):
    # Per the problem statement, every transition costs 1 plus the room
    # penalty (the bare penalty alone would drop the path cost of 1)
    if agent == 'R1':
        if neighbor[0] == 'G':
            return 1 + 10
        elif neighbor[0] == 'R':
            return 1 - 10
    elif agent == 'G1':
        if neighbor[0] == 'R':
            return 1 + 10
        elif neighbor[0] == 'G':
            return 1 - 10
    return 1

def a_star_search(start, goal, graph, heuristic):
    # Priority queue of unexplored nodes, ordered by f = g + h
    frontier = []
    heapq.heappush(frontier, (0, start))
    # Best-known cost g(n) and parent pointers for path reconstruction
    g_costs = {start: 0}
    parents = {start: None}
    path = []
    while frontier:
        current_cost, current_node = heapq.heappop(frontier)
        # Goal reached: walk the parent pointers back to the start
        if current_node == goal:
            while current_node is not None:
                path.append(current_node)
                current_node = parents[current_node]
            return path[::-1]
        for neighbor in graph[current_node]:
            # Note: this search uses a flat step cost of 1 and never calls
            # the cost() helper above, so the room penalties are ignored here
            new_cost = g_costs[current_node] + 1
            # Relax the edge if this route to the neighbor is cheaper
            if neighbor not in g_costs or new_cost < g_costs[neighbor]:
                g_costs[neighbor] = new_cost
                parents[neighbor] = current_node
                heapq.heappush(frontier, (new_cost + heuristic(neighbor, goal), neighbor))
    return None  # no path found
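This explains why initialising the graph with letter nodes gives no results: heuristic does x1, y1 = node, and unpacking a one-character string like 'A' raises a ValueError before any path can be found. A minimal workaround, assuming a hypothetical coordinate layout of my own (not part of the provided answer), is to attach coordinates to each letter and measure the Manhattan distance through that map:

# Hypothetical letter-to-coordinate map; extend it to every node the
# search can reach, using the real maze layout from the assignment
coords = {'A': (0, 0), 'B': (0, 1), 'C': (1, 0), 'D': (1, 1),
          'G': (2, 0), 'H': (2, 2)}

def letter_heuristic(node, goal):
    # Manhattan distance measured through the coordinate map
    x1, y1 = coords[node]
    x2, y2 = coords[goal]
    return abs(x1 - x2) + abs(y1 - y2)

With the map completed, a_star_search(start, goal, graph, letter_heuristic) runs without the unpacking error; the cleaner route, used in the grid version further below, is to drop letter names entirely and make the (x, y) tuples themselves the nodes.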
# Define the start and goal nodes
start = 'A'
goal = 'H'

# Define the graph as a dictionary
graph = {
    'A': ['B', 'C'], 'B': ['A', 'D', 'E'], 'C': ['A', 'F'],
    'D': ['B', 'G'], 'E': ['B', 'G'], 'F': ['C', 'G'],
    'G': ['D', 'E', 'F', 'H'], 'H': ['G']
}

# Define the heuristic function. This version expects (x, y, color) triples
# (node[2] is the room color), so it too fails for the one-letter nodes above.
def heuristic(node, goal):
    # Manhattan distance between the node and the goal
    manhattan_distance = abs(node[0] - goal[0]) + abs(node[1] - goal[1])
    # Color penalty: +5 for different room colors, -5 for the same color
    if node[2] != goal[2]:
        color_penalty = 5
    else:
        color_penalty = -5
    return manhattan_distance + color_penalty

# Run the A* search and print the optimal path
path = a_star_search(start, goal, graph, heuristic)
print(f"Optimal path: {path}")
# The room sets were never defined in the provided answer; placeholder
# examples (an assumption) so the function can run:
green_rooms = {'B', 'E'}
red_rooms = {'C', 'F'}

# Calculate the cost of a path for a given agent
def calculate_path_cost(path, graph, agent):
    cost = 0
    for i in range(len(path) - 1):
        # Each transition costs 1; the adjacency lists above carry no
        # edge weights to look up
        cost += 1
        # Add the +10 penalty if the agent steps into its obstacle room
        if (agent == 'R1' and path[i + 1] in green_rooms) or \
           (agent == 'G1' and path[i + 1] in red_rooms):
            cost += 10
    return cost

# Example usage (with the placeholder room sets above)
path = ['A', 'B', 'C', 'H']
cost = calculate_path_cost(path, graph, 'R1')
print(cost)  # 13: three transitions of 1 each, plus 10 for green room 'B'
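The function above only charges the +10 obstacle penalty; the problem statement also grants -10 for the favorable room. A sketch of the fuller rule (my extension, reusing the placeholder room sets):

def calculate_path_cost_full(path, agent):
    # Transition cost of 1 per step, +10 for the obstacle room and
    # -10 for the favorable room, matching the assignment's cost model
    obstacle = green_rooms if agent == 'R1' else red_rooms
    favorable = red_rooms if agent == 'R1' else green_rooms
    cost = 0
    for node in path[1:]:
        cost += 1
        if node in obstacle:
            cost += 10
        elif node in favorable:
            cost -= 10
    return cost

print(calculate_path_cost_full(['A', 'B', 'C', 'H'], 'R1'))
# 11 (green 'B') - 9 (red 'C') + 1 = 3 with the placeholder sets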
import heapq

# 6x6 search grid: 'G' marks a green room, 'R' a red room, 'H' the heart (goal)
grid = [[' ', ' ', ' ', ' ', ' ', ' '],
        [' ', 'G', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', 'H', ' ', ' '],
        [' ', ' ', ' ', ' ', ' ', ' '],
        [' ', ' ', ' ', ' ', ' ', ' ']]

# Start and goal positions as (row, column)
start_pos = (2, 2)
goal_pos = (3, 3)

# 4-connected movement directions and their step costs
directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
step_costs = [1, 1, 1, 1]

def heuristic(pos):
    # Manhattan distance to the goal
    manhattan_distance = abs(pos[0] - goal_pos[0]) + abs(pos[1] - goal_pos[1])
    # Color penalty: +5 if the cell and the goal cell differ, else -5
    color_penalty = 5 if grid[pos[0]][pos[1]] != grid[goal_pos[0]][goal_pos[1]] else -5
    return manhattan_distance + color_penalty

def a_star(start, goal, agent):
    # Priority queue of unexplored nodes, ordered by f = g + h
    heap = []
    heapq.heappush(heap, (0, start))
    # Best-known cost of each node and parent pointers for path reconstruction
    node_costs = {start: 0}
    parents = {start: None}
    explored = set()
    while heap:
        current = heapq.heappop(heap)[1]
        # Goal reached: walk the parent pointers back to the start
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = parents[current]
            return path[::-1]
        if current in explored:
            continue
        explored.add(current)
        # Expand the current node
        for i, (dx, dy) in enumerate(directions):
            next_pos = (current[0] + dx, current[1] + dy)
            # Skip positions outside the grid; note this version treats the
            # colored rooms as impassable rather than as penalty cells
            if (next_pos[0] < 0 or next_pos[0] >= len(grid) or
                    next_pos[1] < 0 or next_pos[1] >= len(grid[0]) or
                    grid[next_pos[0]][next_pos[1]] in ('G', 'R')):
                continue
            next_cost = node_costs[current] + step_costs[i]
            # Room penalties, applied only at the goal cell in this version
            if next_pos == goal and agent == 'R1':
                next_cost += 10
            if next_pos == goal and agent == 'G1':
                next_cost -= 10
            # Relax the edge if this route to next_pos is cheaper
            if next_pos not in node_costs or next_cost < node_costs[next_pos]:
                node_costs[next_pos] = next_cost
                parents[next_pos] = current
                heapq.heappush(heap, (next_cost + heuristic(next_pos), next_pos))
    return None  # no path found
# Define the main function
def main():
    # Run the A* search for R1
    path_r1 = a_star(start_pos, goal_pos, 'R1')
    print(f'Path for R1: {path_r1}')
    # Run the A* search for G1
    path_g1 = a_star(start_pos, goal_pos, 'G1')
    print(f'Path for G1: {path_g1}')

# Call the main function
main()
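The assignment also asks for time- and space-complexity figures to be printed. A minimal sketch of one way to do that (my addition, not part of the provided answer): wrap the search above, timing it with time.perf_counter and using tracemalloc's peak allocation as a practical space proxy.

import time
import tracemalloc

def run_with_metrics(agent):
    # Measure wall-clock time and peak memory of one search as practical
    # proxies for time/space complexity, then print them with the path
    tracemalloc.start()
    t0 = time.perf_counter()
    path = a_star(start_pos, goal_pos, agent)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f'{agent}: path = {path}')
    print(f'{agent}: time = {elapsed:.6f} s, peak memory = {peak} bytes')
    # For reference, worst-case A* bounds are O(b^d) time and space,
    # where b is the branching factor and d the solution depth
    return path

run_with_metrics('R1')
run_with_metrics('G1')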
Please help me execute the Python code to run the simulation and produce the required output.