What are process scheduling algorithms?

Process scheduling plays a vital role in operating systems, ensuring the efficient utilization of system resources and enhancing overall system performance.

Before diving into the role of scheduling algorithms, let's first discuss the concept of process states.

Process states

A typical process goes through these states in its lifecycle:

  • New

  • Ready

  • Running

  • Waiting

  • Terminated


In operating systems, a process is simply a program in execution.
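
To make these states concrete, here is a minimal sketch in Python (purely illustrative; the ProcessState enum and transition table are assumptions for this example, not part of any specific operating system) showing the lifecycle and the legal transitions between states:

```python
from enum import Enum, auto

class ProcessState(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Typical legal transitions in a process lifecycle (illustrative).
TRANSITIONS = {
    ProcessState.NEW: {ProcessState.READY},          # admitted by the OS
    ProcessState.READY: {ProcessState.RUNNING},      # dispatched by the scheduler
    ProcessState.RUNNING: {
        ProcessState.READY,                          # preempted (e.g., quantum expired)
        ProcessState.WAITING,                        # blocked on I/O or an event
        ProcessState.TERMINATED,                     # finished execution
    },
    ProcessState.WAITING: {ProcessState.READY},      # I/O or event completed
    ProcessState.TERMINATED: set(),
}

def can_transition(src: ProcessState, dst: ProcessState) -> bool:
    return dst in TRANSITIONS[src]

print(can_transition(ProcessState.NEW, ProcessState.RUNNING))  # False: a new process must become ready first
```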

Process scheduling algorithms

Process scheduling algorithms determine which processes are selected from the ready queue and granted CPU time for execution. They ensure that each process receives an appropriate amount of CPU time and prevent starvation. They also play a critical role in managing the transitions between process states, ensuring efficient resource utilization, fair allocation, and optimal system performance.

Process scheduling algorithms are divided into two categories:

  1. Preemptive scheduling.

  2. Non-preemptive scheduling.


Preemptive scheduling algorithms

In preemptive scheduling, a running process can be forcefully interrupted and moved out of the CPU before its completion if a higher-priority process becomes ready or if its time slice (quantum) expires. Common preemptive scheduling algorithms include:

  1. Round robin (RR).

  2. Multi-level queue scheduling (MLQS).

Let's study these preemptive scheduling algorithms in detail.

Round robin (RR)

In RR scheduling, each process is assigned a fixed time slice (quantum). If a process does not complete within its quantum, it is preempted and moved to the end of the ready queue, allowing the next process to execute.

Let's have a look at an example of the RR scheduling algorithm with 5 processes: P1, P2, P3, P4, and P5, arriving at different times, with a fixed time quantum of 4 seconds.

RR example

Process | Arrival Time (s) | Execution Time (s)
P1      | 0                | 8
P2      | 1                | 6
P3      | 3                | 3
P4      | 5                | 2
P5      | 6                | 4

RR Gantt Chart

Note: RR scheduling is useful in time-sharing systems or environments with multiple interactive users.
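
To see how the schedule above is produced, here is a minimal round-robin simulation in Python (illustrative only; the Process class and round_robin function are assumptions for this sketch, not a real scheduler). It replays the example workload with a 4-second quantum and prints each CPU slice:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    arrival: int
    burst: int

def round_robin(processes, quantum):
    """Simulate round-robin scheduling; return (name, start, end) CPU slices."""
    pending = sorted(processes, key=lambda p: p.arrival)   # not yet arrived
    remaining = {p.name: p.burst for p in processes}
    ready = deque()
    schedule, time = [], 0

    while pending or ready:
        # Admit every process that has arrived by the current time.
        while pending and pending[0].arrival <= time:
            ready.append(pending.pop(0))
        if not ready:                        # CPU is idle until the next arrival
            time = pending[0].arrival
            continue
        p = ready.popleft()
        run = min(quantum, remaining[p.name])
        schedule.append((p.name, time, time + run))
        time += run
        remaining[p.name] -= run
        # New arrivals during this slice join the queue before the
        # preempted process is placed at the back (if it still has work left).
        while pending and pending[0].arrival <= time:
            ready.append(pending.pop(0))
        if remaining[p.name] > 0:
            ready.append(p)
    return schedule

workload = [Process("P1", 0, 8), Process("P2", 1, 6), Process("P3", 3, 3),
            Process("P4", 5, 2), Process("P5", 6, 4)]
for name, start, end in round_robin(workload, quantum=4):
    print(f"{name}: runs from {start}s to {end}s")
```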

Multi-level queue scheduling (MLQS)

Multi-level queue scheduling (MLQS) is a process scheduling algorithm that divides processes into multiple priority-based queues. Each queue has its own scheduling policy, such as First Come, First Served (FCFS), round robin (RR), or another suitable algorithm. Processes are initially assigned to a specific queue based on their priority or specific characteristics. Higher-priority queues are serviced first, and if there are no processes in the higher-priority queues, lower-priority queues are given CPU time.

In the diagram below, let's visualize the structure of a multi-level queue scheduler.

MLQS Structure

Note: MLQS is suitable when processes have different priority levels or require differentiated service levels. It allows for efficient resource allocation by dividing processes into priority-based queues.
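
As a rough sketch of the selection rule (Python, illustrative; the two-level setup and the enqueue and pick_next helpers are assumptions for this example): the scheduler always serves the highest-priority non-empty queue, and each queue can apply its own policy internally.

```python
from collections import deque

# Two priority levels, each with its own ready queue (illustrative).
# Level 0 (highest priority): interactive jobs, e.g., scheduled with round robin.
# Level 1 (lowest priority): batch jobs, e.g., scheduled FCFS.
queues = [deque(), deque()]

def enqueue(process, level):
    queues[level].append(process)

def pick_next():
    """Return (level, process) from the highest-priority non-empty queue."""
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft()
    return None  # all queues are empty: the CPU stays idle

# A batch job is already waiting, then an interactive job arrives.
enqueue("batch_report", level=1)
enqueue("shell_command", level=0)
print(pick_next())  # (0, 'shell_command') -- the higher-priority queue is served first
```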

Non-preemptive scheduling algorithms

Non-preemptive scheduling algorithms are process scheduling techniques where a running process cannot be interrupted until it voluntarily releases the CPU.

Here are some examples of non-preemptive scheduling algorithms:

  1. First Come, First Served (FCFS)

  2. Shortest Job First (SJF)

Let's study these non-preemptive scheduling algorithms in depth.

First Come, First Served (FCFS)

FCFS scheduling is a non-preemptive algorithm where processes are executed in the order they arrive. Once a process starts executing, it continues until it completes or voluntarily enters a waiting state. The FCFS scheduling algorithm behaves like a simple queue: the first process to arrive is the first to be served.

Let's have a look at an example of an FCFS scheduling algorithm with 4 processes: P0, P1, P2, and P3, arriving at different times.

FCFS example

Process | Arrival Time | Execution Time | Service Time
P0      | 0            | 5              | 0
P1      | 1            | 3              | 5
P2      | 2            | 8              | 8
P3      | 3            | 6              | 16

FCFS Gantt Chart

Note: Use FCFS scheduling when the order of process arrival is important or needs to be preserved.
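
Since FCFS never preempts, each process starts as soon as the previous one finishes (or as soon as it arrives, if the CPU is idle). The short Python sketch below (illustrative; the fcfs function is an assumption for this example, not a standard API) reproduces the service times from the table above:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), assumed sorted by arrival time.
    Returns (name, service_time, completion_time) for each process."""
    time, result = 0, []
    for name, arrival, burst in processes:
        start = max(time, arrival)   # wait for the CPU, or for the process to arrive
        time = start + burst         # runs to completion without preemption
        result.append((name, start, time))
    return result

workload = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
for name, start, end in fcfs(workload):
    print(f"{name}: service time = {start}, completes at {end}")
# Matches the table: service times 0, 5, 8, and 16.
```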

Shortest Job First (SJF)

Shortest Job First (SJF) is a process scheduling algorithm that selects the process with the smallest burst time (execution time) to run next. It aims to minimize the waiting time of processes and optimize system performance. In SJF scheduling, when multiple processes are ready to run, the scheduler chooses the process with the shortest burst time for execution.

Let's have a look at an example of the SJF scheduling algorithm with 4 processes: P0, P1, P2, and P3, arriving at different times.

Note: SJF scheduling is beneficial when accurate burst time estimations are available. It minimizes the average waiting time and is suitable for scenarios where response time or turnaround time is critical.

SJF example

Process | Arrival Time | Burst Time | Service Time
P0      | 3            | 5          | 10
P1      | 0            | 4          | 0
P2      | 4            | 2          | 4
P3      | 5            | 4          | 6

SJF Gantt Chart
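
The non-preemptive SJF rule can be expressed compactly: among the processes that have already arrived, run the one with the smallest burst time to completion. The Python sketch below (illustrative; the sjf function is an assumption for this example) reproduces the service times from the table above:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns (name, service_time, completion_time) for each process."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival time
    time, result = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                  # CPU is idle until the next arrival
            time = pending[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst first
        pending.remove((name, arrival, burst))
        result.append((name, time, time + burst))
        time += burst
    return result

workload = [("P0", 3, 5), ("P1", 0, 4), ("P2", 4, 2), ("P3", 5, 4)]
for name, start, end in sjf(workload):
    print(f"{name}: service time = {start}, completes at {end}")
# Service times: P1 = 0, P2 = 4, P3 = 6, P0 = 10, matching the table.
```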

Summary

To sum up, process scheduling algorithms are essential for efficient resource management and optimizing system performance in operating systems. First Come, First Served (FCFS) prioritizes process order and is suitable for maintaining fairness. Shortest Job First (SJF) minimizes waiting time by selecting the process with the shortest burst time. Round robin (RR) ensures fair CPU allocation and responsiveness by using time slices. Multi-level queue scheduling (MLQS) offers prioritization and differentiated service levels. The choice of scheduling algorithm depends on the system's requirements, workload characteristics, and desired performance objectives, and hybrid approaches can be used to meet specific needs.

