
Question

Here's the code that needs to be completed:

import random

# Environment: [room_num, room 1 status, room 2 status]
# Percept: [room_num, room_status]

NUM_DAYS = 40

STAY = 'STAY'
VACUUM = 'VACUUM'
RIGHT = 'RIGHT'
LEFT = 'LEFT'
DIRTY = 'DIRTY'
CLEAN = 'CLEAN'

ROOM_NUM = 0  # index into percept and environment to get Room #
STATUS = 1    # index into percept to get status (dirty or clean)

def get_dirt(p):
    # Returns True with probability p, i.e. dirt appears in the room.
    return random.random() < p

###############################################################
class Agent:

    def __init__(self, name, p, vacP, moveP, oneP, twoP):
        self.name = name
        self.p = p
        self.vacP = vacP
        self.moveP = moveP
        self.oneP = oneP
        self.twoP = twoP
        self.perceptHistory = []

    def getName(self):
        return self.name

    def getAction(self, percept):
        # Simple reflex agent: vacuum if the current room is dirty,
        # otherwise move to the other room.
        self.perceptHistory.append(percept)
        curr = self.perceptHistory[-1]
        if curr[STATUS] == DIRTY:
            return VACUUM
        elif curr[ROOM_NUM] == 1:
            return RIGHT
        elif curr[ROOM_NUM] == 2:
            return LEFT
#### End Class Agent #############################################

#### rumba_simulate ##############################################
def rumba_simulation(f, agent, p, vacP, moveP, oneP, twoP):

    score = 0
    f.write(f'Agent: {agent.getName()}\n')
    f.write(f'probability of dirt: {p}\n')
    f.write(f'points for moving: {moveP}\n')
    f.write(f'points for vacuuming: {vacP}\n')
    f.write(f'points for exactly one room clean: {oneP}\n')
    f.write(f'points for exactly two rooms clean: {twoP}\n')
    # COMPLETE THIS FUNCTION HERE (a possible sketch appears after the problem statement below):

    return score
#### end rumba_simulate ##########################################

####################################
# DO NOT ALTER ANYTHING BELOW THIS
####################################

print('Example calls to agent:')

print(myAgent.getAction([1, CLEAN]))   # -> RIGHT
print(myAgent.getAction([2, CLEAN]))   # -> LEFT
print(myAgent.getAction([1, DIRTY]))   # -> VACUUM
print(myAgent.getAction([2, DIRTY]))   # -> VACUUM

f = open('A2P1_RumbaSim_' + name + '_output.txt', 'w')

f.write(f'{name}\n')
f.write(f'{header1}\n')
f.write(f'{header2}\n')

p = .1
vacP = -1
moveP = -1
oneP = 1
twoP = 2

myAgent = Agent(name, p, vacP, moveP, oneP, twoP)
rumba_simulation(f, myAgent, p, vacP, moveP, oneP, twoP)

p = .2
vacP = -.4
moveP = -.3
oneP = 1
twoP = 3

myAgent = Agent(name, p, vacP, moveP, oneP, twoP)
rumba_simulation(f, myAgent, p, vacP, moveP, oneP, twoP)

p = .4
vacP = -.4
moveP = -.3
oneP = 1
twoP = 3

myAgent = Agent(name, p, vacP, moveP, oneP, twoP)
rumba_simulation(f, myAgent, p, vacP, moveP, oneP, twoP)

p = .05
vacP = -.5
moveP = -.3
oneP = 2
twoP = 5

myAgent = Agent(name, p, vacP, moveP, oneP, twoP)
rumba_simulation(f, myAgent, p, vacP, moveP, oneP, twoP)

f.close()

Problem 1 (20 pts): Rumba-World Simulation

First, look at the sample output for this problem. Then, write the method

rumba_simulation(f, agent, p, vacP, moveP, oneP, twoP)

This simulates the Rumba World with the following parameters: f is the file to write to, p is the probability of dirt appearing, vacP and moveP are the penalties for vacuuming and moving (these are negative values, so they should be added to the score), and oneP and twoP are the (positive) points added to the score each morning for having exactly one room or exactly two rooms clean, respectively.

The simulation works as follows for each day:

1. Display (write to the file) the morning number, the current environment (in the form [Room #, Room 1 Status, Room 2 Status]), the current percept (in the form [Room #, that room's Status]), and the total score so far. The simulation starts at [1, CLEAN, CLEAN].
2. Pass the percept to the agent and get its action by calling the agent's getAction function.
3. For each room, use the get_dirt function to determine whether dirt should appear. If the function returns True, set that room's status to DIRTY; otherwise, do not change its status.
4. Add the bonuses and penalties to the total score.

The simulation runs for NUM_DAYS days, which means actions are requested on Morning 0 through Morning NUM_DAYS - 1. The last thing displayed is Morning # NUM_DAYS, the environment, and the final score; no action is requested on the last morning.
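As a starting point, here is a minimal sketch of the loop that could go where the "COMPLETE THIS FUNCTION HERE" comment sits inside rumba_simulation. It follows the four steps above; the exact output wording and the order in which the clean-room bonus is applied relative to the overnight dirt are assumptions (the sample output is not reproduced here), so both should be checked against the assignment's sample output.

    # Sketch only: output wording and the bonus-vs-dirt ordering are assumptions.
    env = [1, CLEAN, CLEAN]  # [current room #, room 1 status, room 2 status]
    for day in range(NUM_DAYS):
        # 1. Display the morning number, environment, percept, and score so far.
        percept = [env[ROOM_NUM], env[env[ROOM_NUM]]]
        f.write(f'Morning {day}: environment: {env} percept: {percept} score: {score}\n')

        # 2. Ask the agent for its action and apply it.
        action = agent.getAction(percept)
        if action == VACUUM:
            env[env[ROOM_NUM]] = CLEAN  # clean the current room
            score += vacP               # vacP is negative, so this is a penalty
        elif action == RIGHT:
            env[ROOM_NUM] = 2
            score += moveP
        elif action == LEFT:
            env[ROOM_NUM] = 1
            score += moveP
        # STAY is assumed to change nothing and cost nothing.

        # 3. Dirt may appear in each room independently, with probability p.
        for room in (1, 2):
            if get_dirt(p):
                env[room] = DIRTY

        # 4. Bonus for having exactly one or exactly two clean rooms.
        num_clean = [env[1], env[2]].count(CLEAN)
        if num_clean == 1:
            score += oneP
        elif num_clean == 2:
            score += twoP

    # Final morning: environment and final score only; no action is requested.
    f.write(f'Morning {NUM_DAYS}: environment: {env} final score: {score}\n')

The skeleton's "return score" line already follows this point, so only the loop body above needs to be added.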
