Question
Scheduling Program Simulation. Write a program in C that simulates a process scheduling system, and explain how the simulated system operates.
1) Your program must use some form of "visual presentation" to show, at least, the following four components (a minimal text-based sketch appears after this list):
- A CPU
- A ready queue showing all the processes waiting to be dispatched to use the CPU
- An I/O device
- An I/O queue showing all the processes waiting to use the I/O device
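For example, one very simple form of "visual presentation" is a one-line text snapshot of the four components printed at each reporting time. The sketch below is only an illustration of that idea; the Queue structure, the field names, and the output layout are assumptions, not something the assignment prescribes.

/* A minimal sketch of a text-based display of the CPU, the ready queue,
 * the I/O device and the I/O queue at one instant. Names are illustrative. */
#include <stdio.h>

#define QUEUE_CAP 64

typedef struct {
    int ids[QUEUE_CAP];   /* process IDs waiting in FIFO order */
    int count;
} Queue;

/* Print one snapshot of the four required components.
 * A job ID of -1 means the device is idle. */
static void show_system(int clock, int cpu_job, const Queue *ready,
                        int io_job, const Queue *io_queue)
{
    printf("t=%d  CPU:", clock);
    if (cpu_job >= 0) printf(" [P%d]", cpu_job); else printf(" [idle]");
    printf("  Ready:");
    for (int i = 0; i < ready->count; i++) printf(" P%d", ready->ids[i]);
    printf("  I/O:");
    if (io_job >= 0) printf(" [P%d]", io_job); else printf(" [idle]");
    printf("  I/O queue:");
    for (int i = 0; i < io_queue->count; i++) printf(" P%d", io_queue->ids[i]);
    printf("\n");
}

int main(void)
{
    Queue ready = { {2, 5, 7}, 3 };
    Queue io    = { {4}, 1 };
    show_system(15, 1, &ready, 3, &io);   /* P1 on the CPU, P3 on the I/O device */
    return 0;
}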
2) The description of a simulated process includes the following information:
- The system runs from INIT_TIME (usually 0) to FIN_TIME.
- Jobs enter the system with an interarrival time that is uniformly distributed between ARRIVE_MIN and ARRIVE_MAX.
- Once a job has finished a round of processing at the CPU, the probability that it completes and exits the system (instead of doing a disk read or network send, and then further computation) is QUIT_PROB.
- Once a job has been determined to continue executing, a probability function is used to determine whether it will do disk I/O or use the network. This probability is NETWORK_PROB.
- When a job needs to do disk I/O, it uses the disk that's the least busy, i.e., the disk whose queue is the shortest. (This might seem a bit silly, but we can pretend that each disk has the same information.) If the disk queues are of equal length, choose one of the disks at random.
- When jobs reach some component (CPU, disk1, disk2, or network), if that component is free, the job begins service there immediately. If, on the other hand, that component is busy servicing someone else, the job must wait in that component's queue.
- The queue for each system component is FIFO.
- When a job reaches a component (either the component is free, or another job leaves the component and this job is first in its queue), how much time does it spend using the component? This is determined randomly, at run time, for every arrival at a component: a job is serviced by a component for an interval of time that is uniformly distributed between some minimum and maximum defined for that component. For example, you'll define CPU_MIN, CPU_MAX, DISK1_MIN, DISK1_MAX, etc. (A sketch of this sampling appears after this list.)
- At time FIN_TIME, the entire job simulation terminates. We can ignore the jobs that might be left receiving service or waiting in queue when FIN_TIME is reached.
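A minimal sketch of how these random quantities can be drawn with the standard C library PRNG follows. The constant values, the helper names uniform_int() and flip(), and the hard-coded seed are illustrative assumptions; in the real program the constants and SEED come from the config file.

/* Drawing uniformly distributed intervals and probability-based decisions. */
#include <stdio.h>
#include <stdlib.h>

/* Uniformly distributed integer in [lo, hi], inclusive. */
static int uniform_int(int lo, int hi)
{
    return lo + rand() % (hi - lo + 1);
}

/* Returns 1 with probability p, 0 otherwise. */
static int flip(double p)
{
    return ((double)rand() / ((double)RAND_MAX + 1.0)) < p;
}

int main(void)
{
    int    ARRIVE_MIN = 2, ARRIVE_MAX = 10;    /* example values only */
    int    CPU_MIN = 5,    CPU_MAX = 20;
    double QUIT_PROB = 0.2, NETWORK_PROB = 0.3;

    srand(12345);                              /* SEED would come from the config file */

    int interarrival = uniform_int(ARRIVE_MIN, ARRIVE_MAX);
    int cpu_burst    = uniform_int(CPU_MIN, CPU_MAX);
    int quits        = flip(QUIT_PROB);        /* job exits after this CPU burst? */
    int uses_network = !quits && flip(NETWORK_PROB);   /* else it does disk I/O */

    printf("interarrival=%d burst=%d quits=%d network=%d\n",
           interarrival, cpu_burst, quits, uses_network);
    return 0;
}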
3) An integer counter will simulate the system clock, and there will be an output log file (the name of the log file will be entered by the user as the third command-line parameter). Every time unit in your simulation will be listed in this file, along with what event(s) took place at that time. Every 5 time units (5, 10, 15, ...), also display the current contents of the Ready Queue and the I/O Queue (before events at that time begin).
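A minimal sketch of the clock loop and the log file handling is shown below. It assumes the log file name arrives as the third command-line argument (argv[3]); what the first two arguments are is left unspecified here, and INIT_TIME/FIN_TIME are hard-coded only for illustration.

/* Opening the user-supplied log file and ticking the simulated clock. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 4) {
        fprintf(stderr, "usage: %s <param1> <param2> <logfile>\n", argv[0]);
        return EXIT_FAILURE;
    }

    FILE *log = fopen(argv[3], "w");   /* third command-line parameter */
    if (!log) { perror("fopen"); return EXIT_FAILURE; }

    int INIT_TIME = 0, FIN_TIME = 20;  /* example values only */
    for (int clock = INIT_TIME; clock <= FIN_TIME; clock++) {
        if (clock > INIT_TIME && clock % 5 == 0)
            fprintf(log, "%d: Ready Queue: (contents)  I/O Queue: (contents)\n", clock);
        /* handle this tick's events here, each logged as
           fprintf(log, "%d: <event description>\n", clock);            */
    }

    fclose(log);
    return 0;
}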
4) The simulator shall print an appropriate message when a simulated process changes its state; that is, it shall print a message, prefixed with the current simulation time, when it performs one of the following actions:
- Starts a new process
- Schedules a process to run
- Moves a process to the I/O Waiting (Blocked) State
- Preempts a process from the Running State
- Moves a process back into the Ready State (due to I/O completion)
- Starts servicing a process' I/O request
- When a simulated process is interrupted (because its current CPU burst is longer than the quantum), the process is preempted and re-enters the ready queue. When a simulated process completes its current CPU burst and then needs its I/O burst, the simulator changes the process' state to Blocked. At this point, the CPU becomes idle and the dispatcher may select another process from the ready queue.
- The simulated system has only one CPU and one I/O device. The I/O request of a process will be performed only if the I/O device is available; otherwise, the process requesting the I/O operation will have to wait until the device is available. I/O is handled by the simulated device on a first-come, first-served basis. Upon completion of its I/O burst, a process will change from the Blocked state to Ready and join the Ready Queue again.
- When a simulated process terminates, the simulator then outputs a statement of the form: Job %d terminated: Turn Around Time = %d, Wait time = %d, I/O wait = %d, where "Turn Around Time" is the total elapsed time, "Wait time" is the total time the process spent in the Ready Queue, and "I/O wait" is the total amount of time the process had to wait for the I/O device.
- At the end of the simulation, the simulator shall display the percentage of CPU utilization, the average Turn Around Time, the average wait time, and the average I/O wait time. (A minimal bookkeeping sketch of the per-process counters appears after this list.)
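The sketch below shows one way to keep the per-process state and counters needed for the termination message above. The Process struct layout and field names are assumptions, not something the assignment prescribes.

/* Process states and the per-process counters behind the termination line. */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } State;

typedef struct {
    int   id;
    State state;
    int   arrival_time;   /* time the job entered the system               */
    int   ready_wait;     /* total time spent in the Ready Queue           */
    int   io_wait;        /* total time spent waiting for the I/O device   */
} Process;

/* Print the required termination message, prefixed with the clock. */
static void report_termination(int clock, const Process *p)
{
    int turnaround = clock - p->arrival_time;
    printf("%d: Job %d terminated: Turn Around Time = %d, Wait time = %d, "
           "I/O wait = %d\n",
           clock, p->id, turnaround, p->ready_wait, p->io_wait);
}

int main(void)
{
    Process p = { 7, TERMINATED, 100, 34, 12 };   /* example numbers only */
    report_termination(250, &p);
    return 0;
}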
We proceed the same way until we remove a SIMULATION_FINISHED event from the queue. The main loop of your program will then be something like:
while the priority queue is not empty (it shouldn't ever be)
      and the simulation isn't finished:
    read an event from the queue
    handle the event
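A sketch of this loop in C is shown below. It assumes an array-backed event queue with a linear scan for the earliest event (a real submission might prefer a binary heap); the type and function names are illustrative, and the handlers are stubs.

/* The main event loop: repeatedly pull the earliest event and handle it. */
#include <stdio.h>

#define MAX_EVENTS 1024

typedef enum { JOB_ARRIVAL, CPU_FINISHED, DISK1_FINISHED,
               DISK2_FINISHED, NETWORK_FINISHED, SIMULATION_FINISHED } EventType;

typedef struct { int time; EventType type; int job_id; } Event;

static Event event_queue[MAX_EVENTS];
static int   event_count = 0;

static void push_event(Event e) { event_queue[event_count++] = e; }

/* Remove and return the event with the smallest time stamp. */
static Event pop_earliest(void)
{
    int best = 0;
    for (int i = 1; i < event_count; i++)
        if (event_queue[i].time < event_queue[best].time) best = i;
    Event e = event_queue[best];
    event_queue[best] = event_queue[--event_count];
    return e;
}

static void handle_event(const Event *e) { (void)e; /* dispatch on e->type here */ }

int main(void)
{
    int FIN_TIME = 10000;                           /* example value only */
    push_event((Event){ 0, JOB_ARRIVAL, 1 });
    push_event((Event){ FIN_TIME, SIMULATION_FINISHED, -1 });

    while (event_count > 0) {                       /* it shouldn't ever be empty */
        Event e = pop_earliest();
        if (e.type == SIMULATION_FINISHED) break;   /* simulation is finished */
        handle_event(&e);
    }
    return 0;
}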
Each event in the event queue will be of a given type. It seems simplest to store these as named constants, e.g., something like CPU_FINISHED=1, DISK1_FINISHED=2, etc., and to write a handler function for each possible event type. We've just described what the handler for a job arrival event might do. As an example, what should a disk finished handler do? Remove the job from the disk and return the job to the CPU (determining how much CPU time it will need for its next CPU burst; if the CPU is free, it starts executing there, and if not, the job is added to the CPU's queue just as we did previously). We should also look at the disk's queue, because the disk is now free. If the disk's queue isn't empty, we need to remove a job from that queue, determine how long the job will require the disk, create a new "disk finished" event, and add it to the event queue. (A sketch of such a handler appears below.)
The network finished handler will behave in a similar way to the disk finished handler when the NETWORK_FINISHED event occurs.
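The following is a self-contained sketch of such a disk finished handler. The Device and Fifo structures, uniform_int(), and the schedule() placeholder are illustrative assumptions standing in for your own event-queue machinery.

/* Sketch of a DISK1_FINISHED handler: job returns to the CPU, disk serves next. */
#include <stdio.h>
#include <stdlib.h>

#define QCAP 128

typedef struct { int jobs[QCAP]; int head, tail, len; } Fifo;
typedef struct { int busy_job; Fifo q; } Device;     /* busy_job < 0 means idle */

static void fifo_push(Fifo *f, int job) { f->jobs[f->tail] = job; f->tail = (f->tail + 1) % QCAP; f->len++; }
static int  fifo_pop(Fifo *f)           { int j = f->jobs[f->head]; f->head = (f->head + 1) % QCAP; f->len--; return j; }

static int uniform_int(int lo, int hi) { return lo + rand() % (hi - lo + 1); }

/* Placeholder for adding a future event to the event queue. */
static void schedule(int time, const char *type, int job)
{
    printf("schedule: t=%d %s job=%d\n", time, type, job);
}

/* Handle DISK1_FINISHED at time `now` for job `job`. */
static void disk1_finished(int now, int job, Device *disk1, Device *cpu,
                           int CPU_MIN, int CPU_MAX, int DISK1_MIN, int DISK1_MAX)
{
    /* 1) The job leaves the disk and goes back to the CPU. */
    if (cpu->busy_job < 0) {
        cpu->busy_job = job;
        schedule(now + uniform_int(CPU_MIN, CPU_MAX), "CPU_FINISHED", job);
    } else {
        fifo_push(&cpu->q, job);               /* CPU busy: wait in its queue */
    }

    /* 2) The disk is now free: start the next waiting job, if any. */
    if (disk1->q.len > 0) {
        int next = fifo_pop(&disk1->q);
        disk1->busy_job = next;
        schedule(now + uniform_int(DISK1_MIN, DISK1_MAX), "DISK1_FINISHED", next);
    } else {
        disk1->busy_job = -1;
    }
}

int main(void)
{
    Device disk1 = { 3,  { {0}, 0, 0, 0 } };
    Device cpu   = { -1, { {0}, 0, 0, 0 } };
    fifo_push(&disk1.q, 8);                    /* one job waiting for the disk */
    disk1_finished(42, 3, &disk1, &cpu, 5, 20, 10, 30);
    return 0;
}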
Running Your Simulator
The program will read from a text config file the following values:
- a SEED for the random number generator
- INIT_TIME
- FIN_TIME
- ARRIVE_MIN
- ARRIVE_MAX
- QUIT_PROB
- NETWORK_PROB
- CPU_MIN
- CPU_MAX
- DISK1_MIN
- DISK1_MAX
- DISK2_MIN
- DISK2_MAX
- NETWORK_MIN
- NETWORK_MAX
The format of the file is up to you, but it could be something as simple as:
INIT_TIME 0
FIN_TIME 10000
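A minimal sketch of reading such a file in C is shown below. It assumes one "NAME value" pair per line and a hard-coded file name; both of those are choices left to you, and only some of the constants are shown.

/* Reading NAME/value pairs from the config file. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *cfg = fopen("config.txt", "r");      /* file name is an assumption */
    if (!cfg) { perror("fopen"); return EXIT_FAILURE; }

    long SEED = 1;
    int  INIT_TIME = 0, FIN_TIME = 0, ARRIVE_MIN = 0, ARRIVE_MAX = 0;
    double QUIT_PROB = 0.0, NETWORK_PROB = 0.0;
    /* CPU_MIN/MAX, DISK1_MIN/MAX, DISK2_MIN/MAX, NETWORK_MIN/MAX likewise */

    char   name[64];
    double value;
    while (fscanf(cfg, "%63s %lf", name, &value) == 2) {
        if      (strcmp(name, "SEED") == 0)         SEED = (long)value;
        else if (strcmp(name, "INIT_TIME") == 0)    INIT_TIME = (int)value;
        else if (strcmp(name, "FIN_TIME") == 0)     FIN_TIME = (int)value;
        else if (strcmp(name, "ARRIVE_MIN") == 0)   ARRIVE_MIN = (int)value;
        else if (strcmp(name, "ARRIVE_MAX") == 0)   ARRIVE_MAX = (int)value;
        else if (strcmp(name, "QUIT_PROB") == 0)    QUIT_PROB = value;
        else if (strcmp(name, "NETWORK_PROB") == 0) NETWORK_PROB = value;
        /* remaining constants handled the same way */
    }
    fclose(cfg);

    srand((unsigned)SEED);
    printf("FIN_TIME=%d QUIT_PROB=%.2f\n", FIN_TIME, QUIT_PROB);
    (void)INIT_TIME; (void)ARRIVE_MIN; (void)ARRIVE_MAX; (void)NETWORK_PROB;
    return 0;
}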
Results
Log
Your program should write to a log file the values of each of the constants listed above, as well as each significant event (e.g., the arrival of a new job into the system, the completion of a job at a component, and the termination of the simulation), along with the time of the event.
Statistics
Calculate and print:
- The average and the maximum size of each queue.
- The utilization of each server (component). This would be: time_the_server_is_busy / total_time, where total_time = FIN_TIME - INIT_TIME.
- The average and maximum response time of each server (response time will be the difference in time between the job arrival at a server and the completion of the job at the server)
- The throughput (number of jobs completed per unit of time) of each server. (A sketch of per-server accumulators for these statistics follows this list.)
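The sketch below shows per-server accumulators that are sufficient to compute all four statistics. The ServerStats fields and the example numbers in main() are purely illustrative.

/* Per-server accumulators and the end-of-run statistics computed from them. */
#include <stdio.h>

typedef struct {
    const char *name;
    long busy_time;        /* total time the server was busy                 */
    long queue_area;       /* sum over time of the queue length (for the avg)*/
    int  max_queue;
    long response_sum;     /* sum of (completion - arrival) over jobs served */
    int  max_response;
    int  jobs_completed;
} ServerStats;

static void print_stats(const ServerStats *s, int total_time)
{
    printf("%s: utilization=%.3f  avg queue=%.2f  max queue=%d  "
           "avg response=%.2f  max response=%d  throughput=%.4f jobs/unit\n",
           s->name,
           (double)s->busy_time / total_time,
           (double)s->queue_area / total_time,
           s->max_queue,
           s->jobs_completed ? (double)s->response_sum / s->jobs_completed : 0.0,
           s->max_response,
           (double)s->jobs_completed / total_time);
}

int main(void)
{
    int INIT_TIME = 0, FIN_TIME = 10000;                 /* example values only */
    ServerStats cpu = { "CPU", 7200, 15000, 9, 31000, 95, 1200 };
    print_stats(&cpu, FIN_TIME - INIT_TIME);
    return 0;
}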
Run the program a number of times with different values for the parameters and the random seed. Examine how the utilizations relate to queue sizes. If, for a given choice of parameters, changing the random seed yields utilization and queue-size values that are stable (i.e., they don't change much, maybe within 10%), then we have a good simulation.
Your program should process a reasonable number of jobs, at least one thousand.
As part of the homework, submit a document, README.txt, of two to three double-spaced pages (plus, optionally, diagrams) describing your program. Your description should be addressed to a technical manager who needs to understand what you have done, the design alternatives you had to choose among, why you made the choices you made, and how you tested your program.
Include also a second document, RUNS.txt, describing the data you have used to test your program and what you have learned from executing and testing it. You should choose reasonable values for the inter-arrival times and for the CPU service times. For simplicity, use a QUIT_PROB defaulted to 0.2, and use a service time at the disks equal to the service time of real disks (i.e., what is the average time to access and read or write a block on disk?). You can use a value of 0.3 for NETWORK_PROB, indicating that 70% of the time a job leaving the CPU will do disk I/O and 30% of the time it will perform a network send. If you can, determine the smallest reasonable inter-arrival time (and, for that matter, how would we even go about deciding what "reasonable" means?).