
Question



I need to make the Sokoban game using C++, but I get lost while building the assignment, so please finish it (I just gave you two pieces of code to check and finish). Here are some things you need to know:
First, we need to create a 2D array of size 7x7 or 9x9 to represent the initial state. In this array we define the empty spaces, boxes, rocks, and storage locations, but we do not specify the player's location. Then we ask the user to input the player's location by entering the index of a cell. There's no need to write elaborate validation code; for example, if the array size is 7x7 and the user enters the player's location as 5,9, we simply inform them that the location is incorrect.
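A minimal sketch of this first step could look like the following. The cell characters, the sample layout, and the function names (`isValidLocation`, `readPlayerLocation`) are my own assumptions, not part of the assignment:

```cpp
#include <iostream>

const int N = 7;  // board size; use 9 for a 9x9 board

// Hypothetical cell encoding: ' ' = empty, '#' = rock, 'B' = box,
// 'S' = storage location. The player is NOT stored in the array;
// their position is read from the user separately.
char grid[N][N] = {
    {'#','#','#','#','#','#','#'},
    {'#',' ',' ',' ',' ',' ','#'},
    {'#',' ','B',' ','S',' ','#'},
    {'#',' ',' ',' ',' ',' ','#'},
    {'#',' ','S',' ','B',' ','#'},
    {'#',' ',' ',' ',' ',' ','#'},
    {'#','#','#','#','#','#','#'}
};

// True when (row, col) is inside the board and the cell is empty
bool isValidLocation(int row, int col) {
    return row >= 0 && row < N && col >= 0 && col < N
        && grid[row][col] == ' ';
}

// Ask the user for the player's start cell; as the assignment asks,
// an out-of-range index is only reported, not re-prompted
void readPlayerLocation(int& row, int& col) {
    std::cout << "Enter player location (row col): ";
    std::cin >> row >> col;
    if (!isValidLocation(row, col))
        std::cout << "The location is incorrect." << std::endl;
}
```

`readPlayerLocation` would be called once from `main` before training starts.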
After that, we need to call the Q-learning algorithm and create the Q-learning table.
You should ensure that every state has a unique id.
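One way to meet the unique-id requirement is to encode the whole state as a number, assuming a state is fully described by the player's cell plus the box cells. This is a sketch; `cellIndex` and `stateId` are hypothetical names:

```cpp
#include <algorithm>
#include <vector>

const int N = 7;  // board size

// Flatten a (row, col) pair into a single cell index in [0, N*N)
int cellIndex(int row, int col) { return row * N + col; }

// Combine the player cell and the box cells into one unique number by
// treating each position as a digit in base N*N. The box cells are
// sorted first so the same configuration always yields the same id.
long long stateId(int playerCell, std::vector<int> boxCells) {
    std::sort(boxCells.begin(), boxCells.end());
    long long id = playerCell;
    for (int b : boxCells)
        id = id * (N * N) + b;
    return id;
}
```

Because box order is normalized by the sort, two states with the same player and box positions always map to the same id, and any difference in positions changes at least one digit.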
Then, we just need to handle the output by printing all the steps the player needs to take to place all the boxes in the storage locations.
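For the output step, once the Q-table has converged, the step list can be read off by greedily following the best-valued action from each state. This is a sketch (`ExtractPath` is a hypothetical helper); it follows the same convention as the attempt below, where the action index doubles as the next state's id:

```cpp
#include <vector>

// Walk the learned Q-table greedily from startState toward targetState,
// collecting the visited states. The cap on the path length guards
// against cycling forever in a badly trained table.
std::vector<int> ExtractPath(int startState, int targetState,
                             const double QTable[50][50], int stateCount) {
    std::vector<int> path{startState};
    int state = startState;
    while (state != targetState
           && (int)path.size() < stateCount * stateCount) {
        int best = 0;
        double maxQ = -1.0;
        for (int a = 0; a < stateCount; ++a) {
            if (QTable[state][a] > maxQ) {
                maxQ = QTable[state][a];
                best = a;
            }
        }
        state = best;  // action index is also the next-state id
        path.push_back(state);
    }
    return path;
}
```

Printing each consecutive pair in the returned vector then gives the user the move sequence.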
.
.
.
This is my attempt at Q-learning:
#include <iostream>
#include <cstdlib>  // rand, srand
#include <ctime>    // time
using namespace std;
const int STATE_COUNT =6;
const int ACTION_COUNT =6;
const int TARGET_STATE =5;
const double ALPHA =0.8; // Learning rate (if needed for adjustments)
const double GAMMA =0.8; // Discount factor for future rewards
double QTable[50][50]={0};
double RewardTable[50][50]={
{-1,-1,-1,-1,0,-1},
{-1,-1,-1,0,-1,100},
{-1,-1,-1,0,-1,-1},
{-1,0,0,-1,0,-1},
{0,-1,-1,0,-1,100},
{-1,0,-1,-1,0,100}
};
int actionsList[10];
int actionsCount;
void PrintMatrix(double matrix[50][50], int rows, int columns);
void FindActions(double matrix[50][50], int state, int actionsList[10]);
double MaxQValue(double matrix[50][50], int state);
int PerformEpisode(int startState, double QTable[50][50], double RewardTable[50][50]);
int BestAction(int state, double QTable[50][50]);
void TrainModel(int startState, int episodes);
int main(){
cout << "Initial Q-Table:" << endl;
PrintMatrix(QTable, STATE_COUNT, STATE_COUNT);
cout << "Reward Table:" << endl;
PrintMatrix(RewardTable, STATE_COUNT, STATE_COUNT);
cout << "Enter the number of episodes to run: ";
int episodes;
cin >> episodes;
TrainModel(1, episodes);
cout << "Final Q-Table:" << endl;
PrintMatrix(QTable, STATE_COUNT, STATE_COUNT);
return 0;
}
void PrintMatrix(double matrix[50][50], int rows, int columns){
for (int i =0; i < rows; ++i){
for (int j =0; j < columns; ++j){
cout << matrix[i][j]<<"\t";
}
cout << endl;
}
}
void FindActions(double matrix[50][50], int state, int actionsList[10]){
actionsCount =0;
for (int i =0; i < ACTION_COUNT; i++){
if (matrix[state][i]>=0){
actionsList[actionsCount++]= i;
}
}
}
double MaxQValue(double matrix[50][50], int state){
double maxQ =0;
for (int i =0; i < ACTION_COUNT; ++i){
if (matrix[state][i]> maxQ){
maxQ = matrix[state][i];
}
}
return maxQ;
}
int PerformEpisode(int startState, double QTable[50][50], double RewardTable[50][50]){
int currentState = startState;
int stepCount =0;
while (true){
FindActions(RewardTable, currentState, actionsList);
if (actionsCount ==0){
break;
}
int action = actionsList[rand()% actionsCount];
double maxQ = MaxQValue(QTable, action); // in this model the action index is also the next state's id
QTable[currentState][action]= RewardTable[currentState][action]+ GAMMA * maxQ;
if (action == TARGET_STATE){
currentState = rand()% STATE_COUNT;
break;
} else {
currentState = action;
}
stepCount++;
}
return currentState;
}
int BestAction(int state, double QTable[50][50]){
int best =0;
double maxQ =0;
for (int i =0; i < ACTION_COUNT; ++i){
if (QTable[state][i]> maxQ){
maxQ = QTable[state][i];
best = i;
}
}
return best;
}
void TrainModel(int startState, int episodes){
srand(static_cast<unsigned>(time(nullptr)));
cout << "Training started..." << endl;
for (int i =0; i < episodes; ++i){
cout << "Episode "<< i <<":"<< endl;
startState = PerformEpisode(startState, QTable, RewardTable);
PrintMatrix(QTable, STATE_COUNT, STATE_COUNT);
}
}
.
.
And this is the pseudocode for the Sokoban game using Q-learning:
Initialize Q-table
Initialize R-table
For each episode:
    Initialize Environment
    While not isGoalState and not isInDeadlock:
        Select one possible action (x)
        Execute action (x), observe reward, and next state (Next_State)
        # Get the maximum Q-value for the next state across all possible actions
        max_Q = max(Q[Next_State][action] for action in possible actions)
        # Update Q-value
        Q[State][x] = Q[State][x] + ALPHA * (reward + GAMMA * max_Q - Q[State][x])
        State = Next_State
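The `isGoalState` and `isInDeadlock` conditions in the pseudocode could be checked like this. It is a sketch that assumes a character-grid encoding ('B' = box off storage, '#' = rock/wall) and a deliberately simple corner-deadlock rule:

```cpp
const int N = 7;  // board size

// Goal reached when no box ('B') remains off a storage location
// (a box already on storage would be marked differently, e.g. '*')
bool isGoalState(char grid[N][N]) {
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c)
            if (grid[r][c] == 'B')
                return false;
    return true;
}

// Minimal deadlock rule: a box wedged into a corner formed by two
// perpendicular walls can never be pushed again
bool isInDeadlock(char grid[N][N]) {
    for (int r = 1; r < N - 1; ++r)
        for (int c = 1; c < N - 1; ++c)
            if (grid[r][c] == 'B') {
                bool vertWall  = grid[r-1][c] == '#' || grid[r+1][c] == '#';
                bool horizWall = grid[r][c-1] == '#' || grid[r][c+1] == '#';
                if (vertWall && horizWall)
                    return true;
            }
    return false;
}
```

Real Sokoban solvers use stronger deadlock detection (e.g. boxes frozen against walls), but the corner rule is enough to stop an episode that can no longer succeed.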
