
Question


Answer in C++ please, in one .cpp file. I'd prefer a detailed step-by-step explanation if possible. Thanks.
(I tried to solve it on my own but I'm stuck here, so your solution should build on my code at the end of the question.)
First, we need to create a 2D array with a size of 7x7 or 9x9 to represent the initial state. In this array we define the empty spaces, boxes, rocks (walls), and storage locations, but we do not place the player. Then we ask the user to input the player's location by entering its index. There is no need to write code that checks whether the user entered a valid index: for example, if the array size is 7x7 and the user enters the player's location as 5,9, we do not have to detect this or tell them the location is invalid.
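A minimal sketch of such a board, assuming a simple character encoding (the symbols, the level layout, and the prompt are my own illustration, not part of the assignment):

// Sketch only: one possible 7x7 level encoding.
// '#' = rock/wall, '.' = empty, 'B' = box, 'S' = storage, '@' = player (added later).
#include <iostream>
using namespace std;

const int N = 7;
char board[N][N] = {
    {'#','#','#','#','#','#','#'},
    {'#','.','.','.','.','.','#'},
    {'#','.','B','.','S','.','#'},
    {'#','.','.','.','.','.','#'},
    {'#','.','S','.','B','.','#'},
    {'#','.','.','.','.','.','#'},
    {'#','#','#','#','#','#','#'}};

int main(){
    int row, col;
    cout << "Enter player row and column: ";
    cin >> row >> col;        // no validation, as the assignment allows
    board[row][col] = '@';    // place the player on the chosen cell
    for (int i = 0; i < N; ++i){
        for (int j = 0; j < N; ++j) cout << board[i][j];
        cout << endl;
    }
    return 0;
}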
After that, we need to call the Q-learning algorithm and create the Q-learning table.
You should ensure that every state has a unique id
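One possible way to get unique ids, as a sketch only (the encodeState name and the map-based numbering are my own assumptions): serialize the board contents plus the player position into a string and assign a fresh id the first time each configuration is seen.

#include <string>
#include <map>
using namespace std;

// Assumption: a state is fully described by the board contents plus the player cell.
map<string, int> stateIds;   // serialized state -> unique id
int nextId = 0;

int encodeState(char board[7][7], int playerRow, int playerCol){
    string key;
    for (int i = 0; i < 7; ++i)
        for (int j = 0; j < 7; ++j)
            key += board[i][j];
    key += char('0' + playerRow);
    key += char('0' + playerCol);
    if (stateIds.find(key) == stateIds.end())
        stateIds[key] = nextId++;   // first time we see this configuration
    return stateIds[key];
}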
Then, we just need to handle the output, by printing for the user all the steps the player needs to take to place all the boxes in the storage locations.
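A sketch of how that output step might look, assuming the Q-table has already been trained: repeatedly pick the best action and print it until the goal state is reached. BestAction is the function from my code below; isGoalState and applyAction are hypothetical helpers standing in for the Sokoban rules.

#include <iostream>
using namespace std;

// Hypothetical helpers: in the full program these would wrap the Sokoban rules
// and the trained Q-table (BestAction is the function from the code below).
bool isGoalState(int state);
int  applyAction(int state, int action);
int  BestAction(int state, double QTable[50][50]);

void PrintSolution(int startState, double QTable[50][50]){
    int state = startState;
    int step = 0;
    while (!isGoalState(state) && step < 100){   // step cap so a bad policy cannot loop forever
        int action = BestAction(state, QTable);
        cout << "Step " << ++step << ": take action " << action << endl;
        state = applyAction(state, action);      // state id reached after the move
    }
}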
Here is my code.

Pseudocode for the Sokoban game using Q-learning (homework):
Initialize Q-table
Initialize R-table
For each episode:
Initialize Environment
While not isGoalState and not isInDeadlock:
Select one possible action (x)
Execute action (x), observe reward, and next state (Next_State)
max_Q = max(Q[Next_State][action] for action in possible actions)
Q[s][x] = R[s][x] + gamma * max_Q
s = Next_State
End While
End For
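For reference, the standard Q-learning update also uses a learning rate (the ALPHA constant defined in the code below); the simplified rule above is the special case alpha = 1:

Q[s][x] = (1 - alpha) * Q[s][x] + alpha * (R[s][x] + gamma * max_Q)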
and here is my Q-learning code so far:
#include <iostream>   // cout, cin, endl
#include <cstdlib>    // rand, srand
#include <ctime>      // time
using namespace std;
const int STATE_COUNT =6;
const int ACTION_COUNT =6;
const int TARGET_STATE =5;
const double ALPHA =0.8; // Learning rate (if needed for adjustments)
const double GAMMA =0.8; // Discount factor for future rewards
double QTable[50][50]={0};
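// Reward matrix for the small 6-state example: -1 = move not allowed,
// 0 = allowed move, 100 = move that reaches the target state (state 5).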
double RewardTable[50][50]={
{-1,-1,-1,-1,0,-1},
{-1,-1,-1,0,-1,100},
{-1,-1,-1,0,-1,-1},
{-1,0,0,-1,0,-1},
{0,-1,-1,0,-1,100},
{-1,0,-1,-1,0,100}};
int actionsList[10];
int actionsCount;
void PrintMatrix(double matrix[50][50], int rows, int columns);
void FindActions(double matrix[50][50], int state, int actionsList[10]);
double MaxQValue(double matrix[50][50], int state);
int PerformEpisode(int startState, double QTable[50][50], double RewardTable[50][50]);
int BestAction(int state, double QTable[50][50]);
void TrainModel(int startState, int episodes);
int main(){
cout << "Initial Q-Table:" << endl;
PrintMatrix(QTable, STATE_COUNT, STATE_COUNT);
cout << "Reward Table:" << endl;
PrintMatrix(RewardTable, STATE_COUNT, STATE_COUNT);
cout << "Enter the number of episodes to run: ";
int episodes;
cin >> episodes;
TrainModel(1, episodes);
cout << "Final Q-Table:" << endl;
PrintMatrix(QTable, STATE_COUNT, STATE_COUNT);
return 0;}
void PrintMatrix(double matrix[50][50], int rows, int columns){
for (int i =0; i < rows; ++i){
for (int j =0; j < columns; ++j){
cout << matrix[i][j]<<"\t"; }
cout << endl;}}
void FindActions(double matrix[50][50], int state, int actionsList[10]){
actionsCount =0;
for (int i =0; i < ACTION_COUNT; i++){
if (matrix[state][i]>=0){
actionsList[actionsCount++]= i;}}}
double MaxQValue(double matrix[50][50], int state){
double maxQ =0;
for (int i =0; i < ACTION_COUNT; ++i){
if (matrix[state][i]> maxQ){
maxQ = matrix[state][i];}}
return maxQ;}
int PerformEpisode(int startState, double QTable[50][50], double RewardTable[50][50]){
int currentState = startState;
int stepCount =0;
while (true){
FindActions(RewardTable, currentState, actionsList);
if (actionsCount ==0){
break;}
int action = actionsList[rand()% actionsCount];
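// In this small example the chosen action index doubles as the next-state index,
// so the lookahead below reads the best Q-value of that next state.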
double maxQ = MaxQValue(QTable, action);
QTable[currentState][action]= RewardTable[currentState][action]+ GAMMA * maxQ;
if (action == TARGET_STATE){
currentState = rand()% STATE_COUNT;
break; } else {
currentState = action;}
stepCount++;}
return currentState;}
int BestAction(int state, double QTable[50][50]){
int best =0;
double maxQ =0;
for (int i =0; i < ACTION_COUNT; ++i){
if (QTable[state][i]> maxQ){
maxQ = QTable[state][i];
best = i;}}
return best;}
void TrainModel(int startState, int episodes){
srand(static_cast<unsigned>(time(nullptr)));
cout << "Training started..." << endl;
for (int i =0; i < episodes; ++i){
cout << "Episode "<< i <<":"<< endl;
startState = PerformEpisode(startState, QTable, RewardTable);
PrintMatrix(QTable, STATE_COUNT, STATE_COUNT); }}
