Question
Hello, I need C++ code for a discrete event simulator.
Discrete Event Simulator

General Idea
- Jobs are passed into a CPU queue.
- After a job completes at the CPU, it either exits, goes to a disk, or goes to the network.
- For the disk1 and disk2 queues, the job goes to the shorter queue (chosen at random if both queues are the same length).
- If the queue for a component (disk1, disk2, network) is empty, the job is handled immediately.
- Job durations are randomly generated depending on the job type; e.g., a CPU job has a duration between CPU_MIN and CPU_MAX.

Fifo Queue
The FIFO queues all work similarly: they accept jobs and provide dequeue and enqueue methods.
- Dequeue returns the job that has been in the queue the longest, i.e., the first one inserted.
- Enqueue adds a job to the queue; be sure to realloc if the queue is full.
- isEmpty returns 1 if the queue is empty and 0 otherwise (used to know when to start a job immediately).
- Size returns the current size of the queue (useful to know if one queue is bigger than another).

Priority Queue
Similar to a regular queue, but implemented as a min-heap: the job with the smallest time is always at the root. It can be implemented as an array list or a linked list.
- When a job is pushed into the priority queue, its time is compared to its parent's; if it is smaller, the two jobs are swapped (repeating up the heap).
- When a job is popped, the last element that was pushed is swapped into the root; then both children are checked, the root is swapped with whichever child has the shorter time, and the check repeats from the swapped child's position.

Log File
The log file records the config file, the calculated statistics of the program after it terminates, and the execution and termination of each job, e.g.:
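The two queue types described above could be sketched in C++ roughly as follows. This is a minimal sketch: the Job and Event fields are placeholders, std::deque stands in for the manual realloc mentioned for a raw-array FIFO, and pickDisk illustrates the shortest-queue rule with a random tie-break.

```cpp
#include <cassert>
#include <cstdlib>
#include <deque>
#include <utility>
#include <vector>

// Illustrative records; the real program would carry more fields
// (per-component service times, statistics, etc.).
struct Job   { int id; };
struct Event { long time; int type; int jobId; };

// FIFO queue for a component (CPU, disk1, disk2, network).
// std::deque grows on demand, replacing the manual realloc a
// raw-array implementation would need.
class FifoQueue {
    std::deque<Job> q;
public:
    void   enqueue(const Job& j) { q.push_back(j); }
    Job    dequeue()             { Job j = q.front(); q.pop_front(); return j; }
    int    isEmpty() const       { return q.empty() ? 1 : 0; }
    size_t size() const          { return q.size(); }
};

// Shortest-queue rule from the General Idea: send the job to the
// disk with the shorter queue, choosing at random on a tie.
int pickDisk(const FifoQueue& d1, const FifoQueue& d2) {
    if (d1.size() < d2.size()) return 1;
    if (d2.size() < d1.size()) return 2;
    return (std::rand() % 2) ? 1 : 2;
}

// Event priority queue: a min-heap on Event::time, backed by a vector.
class EventQueue {
    std::vector<Event> heap;
public:
    bool empty() const { return heap.empty(); }

    // Push: append at the end, then swap with the parent while smaller.
    void push(const Event& e) {
        heap.push_back(e);
        size_t i = heap.size() - 1;
        while (i > 0 && heap[i].time < heap[(i - 1) / 2].time) {
            std::swap(heap[i], heap[(i - 1) / 2]);
            i = (i - 1) / 2;
        }
    }

    // Pop: move the last element to the root, then repeatedly swap it
    // with whichever child has the shorter time.
    Event pop() {
        Event root = heap.front();
        heap.front() = heap.back();
        heap.pop_back();
        for (size_t i = 0;;) {
            size_t l = 2 * i + 1, r = 2 * i + 2, s = i;
            if (l < heap.size() && heap[l].time < heap[s].time) s = l;
            if (r < heap.size() && heap[r].time < heap[s].time) s = r;
            if (s == i) break;
            std::swap(heap[i], heap[s]);
            i = s;
        }
        return root;
    }
};
```

In practice std::priority_queue with a greater-than comparator does the same job, but the hand-rolled heap matches the push/pop description in the assignment.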
- Job 1 arrives at time 10
- Job 2 arrives at time 15
- Job 1 finishes at time 25

The statistics include:
- the average size and max size of each queue,
- the utilization of each component,
- the throughput (jobs completed per unit of time).

Implementation
This might all seem to make sense in theory, but how is it implemented? Your program will have 5 non-trivial data structures: one FIFO queue for each component (CPU, disk1, disk2, network), and a priority queue used to store events, where an event might be something like "a new job entered the system", "a disk read finished", or "a job finished at the CPU". Events are removed from the priority queue and processed in order of the time at which they occur. The priority queue can be implemented as a sorted list or a min-heap.

When your program begins, after some initialization steps, it adds to the priority queue the arrival time of the first job and the time the simulation should finish. Suppose these are times 0 and 100000 respectively. Priority in the queue is determined by the time of the event, so the priority queue would look like:

    when      what
    0         job1 arrives
    100000    simulation finished

Until we're finished, we remove and process events from the priority event queue. First, we remove the event "job1 arrives" because it has the lowest time (and, therefore, the highest priority). In order to process this JOB_ARRIVAL_EVENT we:
1. set the current time to 0, i.e., the time of the event we just removed from the queue;
2. determine the arrival time of the next job to enter the system and add it as an event to the priority queue;
3. send job1 to the CPU.

To determine the time of the next arrival, we generate a random integer between ARRIVE_MIN and ARRIVE_MAX and add it to the current time; this is the time of the 2nd arrival. Suppose we end up with 15 (units). We also need to hand job1 (which just arrived) to the CPU. Because the CPU is idle, it can start work on job1 right away, so we add another event to the event queue.
This new event is the time at which job1 will finish at the CPU. To compute it, we generate a random integer between CPU_MIN and CPU_MAX and add it to the current time. Suppose we end up with a CPU time of 25 (units). The event queue is now:

    when      what
    15        job2 arrives
    25        job1 finishes at CPU
    100000    simulation finished

We proceed the same way until we remove a SIMULATION_FINISHED event from the queue. The main loop of your program will then be something like:

    while the priority queue is not empty (it shouldn't ever be) and the simulation isn't finished:
        read an event from the queue
        handle the event

Each event in the event queue will be of a given type. It seems simplest to store these as named constants, e.g., something like CPU_FINISHED=1, DISK1_FINISHED=2, etc., and to write a handler function for each possible event type. We've just described what the handler for a job arrival event might do. What should a "disk finished" handler do? It removes the job from the disk and returns it to the CPU (determining how much CPU time the next CPU execution will need, checking whether the CPU is free, and if not, adding the job to the CPU's queue just as we did previously). We should also look at the disk's queue: if it isn't empty, we remove a job from it, determine how long that job will use the disk, create a new "disk finished" event, and add it to the event queue. The "network finished" handler behaves in a similar way when a NETWORK_FINISHED event occurs.

[Diagram: new jobs enter the CPU's FIFO queue; after its CPU "time to use", a job either terminates or moves to the FIFO queue of disk 1, disk 2, or the network, each component with its own "time to use".]
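The main loop and the arrival handler described above could be sketched as follows. This is a minimal sketch under stated assumptions: the event-type values, the ARRIVE_MIN/ARRIVE_MAX/CPU_MIN/CPU_MAX numbers, and the Sim structure are illustrative (the real program would read the constants from the config file), std::priority_queue with a reversed comparator is used for brevity in place of a hand-rolled min-heap, and the CPU/disk/network "finished" handlers are left as stubs in the dispatch switch.

```cpp
#include <cassert>
#include <cstdlib>
#include <queue>
#include <vector>

// Event types as named constants, as suggested above; values are illustrative.
enum EventType {
    JOB_ARRIVAL = 0, CPU_FINISHED = 1, DISK1_FINISHED = 2,
    DISK2_FINISHED = 3, NETWORK_FINISHED = 4, SIMULATION_FINISHED = 5
};

struct Event { long time; int type; int jobId; };
struct LaterFirst {  // makes std::priority_queue a min-heap on time
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

// Assumed config values; the real program reads them from the config file.
const long ARRIVE_MIN = 1, ARRIVE_MAX = 30;
const long CPU_MIN = 5, CPU_MAX = 40;

// Uniform random integer in [lo, hi].
long randBetween(long lo, long hi) { return lo + std::rand() % (hi - lo + 1); }

// Minimal simulator state; statistics and the disk/network queues are omitted.
struct Sim {
    std::priority_queue<Event, std::vector<Event>, LaterFirst> events;
    std::queue<int> cpuQueue;   // the CPU's FIFO queue
    bool cpuBusy = false;
    long now = 0;               // current simulation time
    int nextJobId = 2;          // job1 is seeded in run()

    void handleArrival(const Event& e) {
        // 1. schedule the NEXT job's arrival relative to the current time
        events.push({now + randBetween(ARRIVE_MIN, ARRIVE_MAX), JOB_ARRIVAL, nextJobId++});
        // 2. send this job to the CPU: start it immediately if the CPU is
        //    idle, otherwise enqueue it behind earlier jobs
        if (!cpuBusy) {
            cpuBusy = true;
            events.push({now + randBetween(CPU_MIN, CPU_MAX), CPU_FINISHED, e.jobId});
        } else {
            cpuQueue.push(e.jobId);
        }
    }

    void run(long endTime) {
        events.push({0, JOB_ARRIVAL, 1});
        events.push({endTime, SIMULATION_FINISHED, 0});
        while (!events.empty()) {
            Event e = events.top();
            events.pop();
            now = e.time;                        // clock jumps to the event
            if (e.type == SIMULATION_FINISHED) break;
            switch (e.type) {
                case JOB_ARRIVAL: handleArrival(e); break;
                // CPU_FINISHED, DISK*_FINISHED and NETWORK_FINISHED
                // handlers would be dispatched here in the full program.
                default: break;
            }
        }
    }
};
```

Because the stubbed handlers never free the CPU, running this sketch just demonstrates the time-ordered dispatch: the clock only ever jumps forward to each popped event's time, and the loop ends when SIMULATION_FINISHED comes off the heap.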