Scheduling in an operating system is the process of selecting a process from the ready queue and allotting the CPU to it for execution. The operating system schedules processes in such a way that the CPU does not sit idle and is always executing some process or the other.
Scheduling is an important part of an operating system, as good scheduling increases the efficiency of the system. In this content, we will discuss scheduling in more detail.
Content: Scheduling in Operating System
- What is Scheduling?
- What is its Need?
- Scheduling Queue
- Types of Scheduler
- Scheduling Criteria
- Scheduling Algorithm
What is Scheduling in Operating System?
Scheduling is the process of allotting the CPU to the processes present in the ready queue. We also refer to this procedure as process scheduling.
The operating system schedules the process so that the CPU always has one process to execute. This reduces the CPU’s idle time and increases its utilization.
The part of the OS that allots the computer resources to the processes is termed the scheduler. It uses scheduling algorithms to decide which process to allot the CPU to.
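To make this concrete, here is a minimal sketch in Python (with made-up process names and a hypothetical run_on_cpu helper) of what the scheduler's core job boils down to: repeatedly pick a process from the ready queue and hand it the CPU.

```python
from collections import deque

# Minimal, illustrative sketch of a scheduler's core loop.
# A real operating system does this inside the kernel with far more detail.

ready_queue = deque(["P1", "P2", "P3"])   # processes waiting for the CPU

def run_on_cpu(process):
    # Placeholder for "give the CPU to this process for a while".
    print(f"CPU is executing {process}")

while ready_queue:
    process = ready_queue.popleft()       # the scheduler selects a process
    run_on_cpu(process)                   # the CPU executes it; it may later
                                          # rejoin the queue or terminate
```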
Why is Scheduling Important in an Operating System?
In your day-to-day life, you schedule your work to get:
- better efficiency.
- reduced delays.
- even waiting times.
For example, in the morning you wake up and have breakfast, work for some hours, take lunch, resume your work, and at night have dinner and go to bed. By doing this, you make your day more logical and efficient.
Similarly, the operating system schedules processes to achieve maximum CPU utilization. In a multiprogramming environment, several processes can be kept in main memory, and some process should be running at all times.
As we know, a typical process has both I/O time and CPU time. When a process finishes its CPU burst and needs to perform an I/O operation, the operating system takes the CPU away from it and gives it to another process waiting for the CPU. In this way, the operating system manages the CPU efficiently.
The operating system schedules every computer resource before providing it to the process.
Scheduling Queue
The operating system maintains the processes in the following scheduling queues:
- Job Queue
As soon as a process enters the system, it is placed in the job queue. The system keeps the job queue on the mass storage device, i.e. secondary memory.
- Ready Queue
When a process is ready for execution, the OS brings it into the ready queue, i.e. into main memory. The processes in the ready queue are those waiting to get a CPU cycle.
- Device Queue
The operating system maintains a separate device queue for each I/O device. This device queue holds the processes that are waiting to perform I/O on that device.
For each process in these queues, the operating system maintains a Process Control Block (PCB) that stores information such as:
- Process_ID
- Process State
- Process Priority
- Program counter, etc.
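The sketch below (Python, hypothetical field values and device names) models these queues and the PCB fields listed above; it only illustrates the idea, not how a real kernel stores them.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class PCB:
    # A simplified Process Control Block with the fields listed above.
    process_id: int
    state: str               # e.g. "new", "ready", "running", "waiting"
    priority: int
    program_counter: int = 0

# One job queue, one ready queue, and one device queue per I/O device.
job_queue = deque()
ready_queue = deque()
device_queues = {"disk": deque(), "printer": deque()}

# A new process first enters the job queue on secondary storage, then
# moves to the ready queue once it is brought into main memory.
job_queue.append(PCB(process_id=1, state="new", priority=5))

pcb = job_queue.popleft()
pcb.state = "ready"
ready_queue.append(pcb)
```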
Scheduling Queue Sequences
- During execution, a process may create a new process. The system then suspends the parent process until the child process terminates; once the child terminates, the parent is put back into the ready queue.
- If the time allotted to a process expires before its execution is complete, the scheduler puts it back in the ready queue.
- If an interrupt occurs during process execution, the processor suspends the process, handles the interrupt first, and then puts the process back in the ready queue.
- During execution, if the process wants to perform an I/O operation, the system moves it to the device queue. After the I/O completes, the system puts the process back into the ready queue.
- If the process completes its execution, the system terminates it, deallocates its PCB, and releases the resources allotted to it.
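The following sketch (Python, made-up process names) walks processes through some of these sequences: one process is dispatched and then moves to a device queue for I/O, another runs to completion, and the first returns to the ready queue.

```python
from collections import deque

ready_queue = deque(["P1", "P2"])
disk_queue = deque()

# 1. The scheduler dispatches a process from the ready queue.
running = ready_queue.popleft()

# 2. The process requests disk I/O, so it moves to the device queue
#    and the CPU is given to the next process in the ready queue.
disk_queue.append(running)
running = ready_queue.popleft()

# 3. The I/O completes, so the waiting process rejoins the ready queue.
ready_queue.append(disk_queue.popleft())

# 4. The running process finishes; its PCB and resources are released.
print(f"{running} terminated; ready queue is now {list(ready_queue)}")
```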
Types of Scheduler
Above we have discussed three scheduling queues. A process migrates among these queues before it terminates. The scheduler picks an appropriate process from these queues to maximize CPU utilization.
This scheduler can be further distinguished as:
- Long-Term Scheduler
- Short-Term Scheduler
- Medium-Term Scheduler
Long-Term Scheduler / Job Scheduler
The long-term scheduler picks processes from the mass storage device, such as a hard disk, and places them in the ready queue in main memory.
Why do we term it Long-Term Scheduling?
Because this scheduler selects a new process from mass storage relatively infrequently; successive process creations may be separated by minutes.
The long-term scheduler decides the degree of multiprogramming. If the degree of multiprogramming is stable, the average rate of process creation equals the average rate of process departure.
In this condition, the long-term scheduler has to select a new process from mass storage only when a process leaves the system.
Short-Term Scheduler / CPU Scheduler
The short-term scheduler picks one process from the ready queue and allots the CPU to it.
Why do we term it Short-Term Scheduler?
Because this scheduler selects a process from the ready queue and allots it the CPU very frequently. Since processes include both I/O time and CPU time, they keep switching between the CPU and I/O devices.
Thus, the CPU scheduler schedules processes far more often than the long-term scheduler. CPU scheduling is further classified into two types:
- Preemptive Scheduling
In this scheduling, the scheduler allots the CPU to a process for a limited amount of time. The processor can also preempt a running process when a process with higher priority arrives.
- Non-Preemptive Scheduling
In this scheduling, once the scheduler allots the CPU to a process, the process does not release the CPU until it terminates or switches to the waiting state.
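As an illustration, the small simulation below (Python, hypothetical CPU bursts) contrasts the two: the non-preemptive version lets each process run to completion, while the preemptive version uses a time quantum as the preemption trigger and puts an unfinished process back in the ready queue. Preemption on the arrival of a higher-priority process works on the same principle.

```python
from collections import deque

# Hypothetical remaining CPU bursts, in time units.
bursts = {"P1": 5, "P2": 3}

# Non-preemptive: once dispatched, a process keeps the CPU until it finishes.
for name, burst in bursts.items():
    print(f"{name} runs for {burst} units and terminates")

# Preemptive (time-sliced): a process loses the CPU when the quantum expires
# and goes to the back of the ready queue.
quantum = 2
ready = deque(bursts.items())
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)
    print(f"{name} runs for {run} units")
    if remaining > run:
        ready.append((name, remaining - run))   # preempted, back to ready queue
    else:
        print(f"{name} terminates")
```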
Medium-Term Scheduler
The medium-term scheduler removes a partially executed process from the ready queue, which reduces the degree of multiprogramming. The process can later be reintroduced into the ready queue and resume its execution from where it left off. This procedure is also referred to as swapping.
Scheduling Criteria
Scheduling criteria help to compare scheduling algorithms.
- CPU Utilization
CPU utilization should be high; this is possible only if the CPU is kept busy executing one process or another.
- Throughput
Throughput is the number of processes the CPU completes per unit of time.
- Turnaround Time
Turnaround time is the time a process takes to execute completely, counted from the moment it enters the ready queue until it finishes execution.
- Waiting Time
Waiting time is the time a process spends waiting in the ready queue.
- Response Time
Response time is counted from the moment a process enters the ready queue until it produces its first response.
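As a worked example, the snippet below (Python, hypothetical arrival times and CPU bursts, served in first-come first-served order) computes the turnaround time and waiting time for each process using the definitions above.

```python
# Hypothetical processes: (name, arrival_time, cpu_burst), served in FCFS order.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

clock = 0
for name, arrival, burst in processes:
    start = max(clock, arrival)
    completion = start + burst
    turnaround = completion - arrival   # from entering the ready queue to finishing
    waiting = turnaround - burst        # time spent only waiting in the ready queue
    print(f"{name}: turnaround = {turnaround}, waiting = {waiting}")
    clock = completion
```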
Scheduling Algorithm
The scheduling algorithm helps the scheduler in deciding which process it must pick from the ready queue. There are several scheduling algorithms as discussed below:
- First-Come, First-Served Scheduling
The processor services the processes in the sequence in which they arrive in the ready queue.
- Shortest-Job-First Scheduling
The processor services the process with the shortest CPU burst in the ready queue first.
- Priority Scheduling
The processor services the process with the highest priority in the ready queue first.
- Round-Robin Scheduling
This is similar to first-come, first-served scheduling, as the process arriving first in the ready queue is serviced first. But the processor preempts the process after a fixed time quantum and places it back in the ready queue, and the next process in the ready queue is given a chance to execute.
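To see how these policies differ in practice, here is a small round-robin simulation in Python (hypothetical burst times, all processes assumed to arrive at time 0). Running it with a very large time quantum makes it behave like first-come, first-served, since no process is ever preempted.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling and return each process's completion time."""
    ready = deque(bursts.items())
    clock, completion = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))   # preempt and requeue
        else:
            completion[name] = clock                # process terminates
    return completion

bursts = {"P1": 5, "P2": 3, "P3": 1}
print(round_robin(bursts, quantum=2))      # round robin with a time quantum of 2
print(round_robin(bursts, quantum=10**9))  # a huge quantum behaves like FCFS
```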
There are a few more algorithms for scheduling processes; we will discuss them in later content.
So, scheduling is an important part of the operating system that controls the degree of multiprogramming. Good scheduling increases CPU utilization, making the system more efficient.