
🧩 Topic: Process Concepts

🧠 What is a Process?

  • A process is a program in execution.
  • It is an active entity, unlike a program (which is just a passive code file).
  • A process includes:
      • Program code
      • Current activity (program counter)
      • Data, stack, heap, and registers

📦 Process Components

  • Text Section – the program code (instructions)
  • Data Section – global and static variables
  • Heap – dynamically allocated memory (e.g., malloc)
  • Stack – function calls, return addresses, local variables
  • Program Counter – address of the next instruction to execute
  • Registers – hold intermediate data during execution

🔄 Process vs Program

  • Nature: a program is passive (code on disk); a process is active (code in execution)
  • Stored in: a program lives in secondary memory (disk); a process lives in main memory (RAM)
  • Lifespan: a program exists until deleted; a process exists only during execution

🔁 Process States

  • New – the process is being created
  • Ready – the process is waiting to be assigned to the CPU
  • Running – the process is currently being executed by the CPU
  • Waiting/Blocked – the process is waiting for an event (e.g., I/O completion)
  • Terminated – the process has finished execution

👉 The OS uses a state transition diagram to manage these changes.
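
As a rough illustration only, the five states and their allowed transitions can be written as a tiny state machine in C; the names below are made up for this sketch and are not how any particular OS encodes them.

/* Illustrative sketch: the classic five-state model and a helper that
 * checks whether a given transition is allowed. */
#include <stdbool.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

bool transition_allowed(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;                   /* admitted by the long-term scheduler */
    case READY:   return to == RUNNING;                 /* dispatched onto the CPU */
    case RUNNING: return to == READY || to == WAITING || to == TERMINATED;
    case WAITING: return to == READY;                   /* I/O or event completed */
    default:      return false;                         /* TERMINATED is final */
    }
}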


🔍 Process Control Block (PCB)

  • A data structure used by the OS to store information about a process. Typical fields:
      • Process ID (PID) – unique identifier for the process
      • Process State – current state (Running, Waiting, etc.)
      • Program Counter – address of the next instruction to execute
      • CPU Registers – register contents saved when the process is switched out
      • Memory Info – information about the memory assigned to the process
      • I/O Info – list of open files and devices
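
A minimal sketch of how these fields might be grouped into a C structure; the layout, names, and sizes are illustrative assumptions only (a real kernel, e.g. Linux with its task_struct, stores far more).

/* A highly simplified, hypothetical PCB layout for illustration only. */
#include <stdint.h>

struct pcb {
    int       pid;              /* Process ID (PID) */
    int       state;            /* process state: New, Ready, Running, ... */
    uint64_t  program_counter;  /* address of the next instruction */
    uint64_t  registers[16];    /* CPU registers saved at a context switch */
    void     *memory_info;      /* memory-management data (e.g. a page table) */
    int       open_files[16];   /* I/O info: descriptors of open files/devices */
    int       priority;         /* scheduling information */
};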

👨‍👩‍👧‍👦 Types of Processes

  • User Process – initiated by a user (e.g., running a program)
  • System Process – created and managed by the OS (e.g., background services)

👯 Process Creation

A process can create child processes using system calls such as fork() (UNIX/Linux), as in the sketch below.

Parent and child processes can:

  • Execute concurrently
  • Share resources or have separate memory
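
A minimal UNIX/Linux example of process creation with fork(); after the call, the parent and the child run concurrently, each taking its own branch.

/* Minimal sketch of process creation with fork(). */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {
        perror("fork");                 /* creation failed */
        return 1;
    } else if (pid == 0) {
        printf("child:  pid=%d\n", getpid());                  /* runs in the child */
    } else {
        printf("parent: pid=%d, child=%d\n", getpid(), pid);   /* runs in the parent */
    }
    return 0;
}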

🧩 Process Termination

  • A process normally ends by calling exit() (see the sketch below).
  • Termination can be:
      • Normal (the process completed successfully)
      • Abnormal (error, killed by the parent, failed I/O, etc.)
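
A minimal sketch of normal termination: the child calls exit() with a status code, and the parent collects it with wait().

/* Sketch: child exits, parent reaps the exit status. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(42);                       /* child: normal termination */

    int status;
    wait(&status);                      /* parent: wait for the child to finish */
    if (WIFEXITED(status))
        printf("child exited normally with status %d\n", WEXITSTATUS(status));
    return 0;
}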

📌 Summary

  • A process is a program in execution, with code, data, and system resources.
  • It moves through states (New, Ready, Running, Waiting, Terminated).
  • It is managed by the OS through its PCB.
  • Understanding processes is essential for multitasking, scheduling, and resource management.

🧩 Topic: Concurrent Processes

🧠 What are Concurrent Processes?

  • Concurrent processes are two or more processes that execute during overlapping time periods.
  • They may run:
      • Truly simultaneously, on multi-core systems
      • Interleaved on a single-core CPU, where they only appear to run in parallel

🔄 Why Concurrency is Important

  • Allows multiple tasks to be active at the same time
  • Increases CPU utilization and system efficiency
  • Enables responsive systems (e.g., web servers, OS multitasking)

🧩 Examples of Concurrent Processes

  • Running a music player while browsing – both applications are active concurrently
  • Downloading a file while editing text – one process uses I/O while the other uses the CPU
  • An OS handling multiple users – user processes are handled concurrently by the OS

🔍 Types of Concurrency

  • Process-based – multiple independent programs running simultaneously
  • Thread-based – multiple threads within the same process sharing memory

⚙️ Challenges in Concurrency

  • Race Condition – two or more processes access shared data at the same time, leading to inconsistent results
  • Critical Section Problem – a section of code that accesses shared data must not be executed by more than one process at a time
  • Deadlock – two or more processes each wait for resources held by the others, so none can proceed
  • Starvation – a process waits indefinitely for a resource

🔍 OS Responsibilities in Concurrency

  • Process synchronization using:
      • Semaphores
      • Mutexes (see the sketch below)
      • Monitors
  • Scheduling to ensure fair CPU access
  • Deadlock detection and prevention
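
As a concrete illustration, here is a minimal sketch (using POSIX threads rather than separate processes, for brevity) of a mutex protecting a shared counter. Without the lock, the two workers' read-modify-write steps could interleave and increments would be lost, which is exactly the race condition described above.

/* Mutex-protected critical section around a shared counter. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter the critical section */
        counter++;                      /* access to shared data */
        pthread_mutex_unlock(&lock);    /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 with the lock in place */
    return 0;
}

Compile with the -pthread flag; a counting semaphore (sem_wait/sem_post) could serve the same purpose here.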

📌 Summary

  • Concurrent processes run during overlapping times to improve system performance.
  • May share resources and need synchronization to avoid errors.
  • The OS uses scheduling and synchronization mechanisms to manage concurrency safely and efficiently.

🧩 Topic: Scheduling Concepts

🧠 What is Scheduling in OS?

  • Scheduling is the process of deciding which process runs next on the CPU when there are multiple ready processes.
  • The component that makes this decision is called the CPU scheduler.

🎯 Goals of CPU Scheduling

  • Maximize CPU utilization
  • Maximize throughput (number of processes completed in a time unit)
  • Minimize waiting time, turnaround time, and response time
  • Ensure fairness among processes

🧩 Types of Schedulers

  • Long-Term Scheduler (job scheduler) – selects which processes are admitted into the ready queue
  • Short-Term Scheduler (CPU scheduler) – selects which ready process gets the CPU next
  • Medium-Term Scheduler (swapper) – temporarily removes and swaps processes to control memory load

🔍 Process Scheduling Queues

  • Job Queue – contains all processes in the system
  • Ready Queue – processes waiting to be assigned to the CPU
  • Waiting (I/O) Queue – processes waiting for I/O devices

🔄 Dispatcher

  • The dispatcher is the module that gives control of the CPU to the selected process.
  • Its responsibilities include:
      • Switching context
      • Switching to user mode
      • Jumping to the process's program counter

🕐 Scheduling Criteria

  • CPU Utilization – keep the CPU as busy as possible
  • Throughput – number of processes completed per unit time
  • Turnaround Time – time from submission to completion of a process
  • Waiting Time – total time a process spends in the ready queue
  • Response Time – time from submission to the first response (important in interactive systems)
  • Fairness – all processes get an equal share of the CPU

🧠 Preemptive vs Non-Preemptive Scheduling

  • Preemptive – the CPU can be taken away from a running process (e.g., time slicing)
  • Non-Preemptive – once a process gets the CPU, it runs until it finishes or blocks

📌 Summary

  • CPU Scheduling determines the order in which processes are executed.
  • It aims to improve performance by reducing waiting time, increasing throughput, and ensuring fairness.
  • The OS uses schedulers and queues to manage the execution of multiple processes.

🧩 Topic: CPU Scheduling

🧠 What is CPU Scheduling?

  • CPU Scheduling is the process of selecting a process from the ready queue and assigning it to the CPU for execution.
  • It's a core function of the short-term scheduler (also called CPU scheduler).

🎯 Objectives of CPU Scheduling

  • Maximize CPU utilization
  • Minimize waiting time and turnaround time
  • Increase system throughput
  • Ensure fairness and responsiveness (especially in interactive systems)

🧩 Key CPU Scheduling Algorithms

1. First-Come, First-Served (FCFS)

  • Non-preemptive
  • Processes are executed in the order they arrive

✅ Simple to implement
❌ May cause long waiting times (convoy effect)


2. Shortest Job First (SJF)

  • Non-preemptive or preemptive (the preemptive variant is called SRTF)
  • The CPU is given to the process with the shortest burst time

✅ Minimum average waiting time
❌ Requires knowledge of burst times in advance
❌ May cause starvation of long jobs


3. Priority Scheduling

  • Each process is assigned a priority number
  • CPU is allocated to the highest priority process

✅ Suitable for critical tasks
❌ Low-priority processes may starve

👉 Aging can be used to prevent starvation (gradually increase a waiting process's priority over time).
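
A minimal sketch of priority scheduling with aging; the structure, field names, and the aging step are illustrative assumptions for this example, not a specific OS's policy.

/* Every tick, the processes that did not run get a small priority boost,
 * so a low-priority process cannot wait forever. */
#include <stdio.h>

#define NPROC      3
#define AGING_STEP 1

struct proc { int pid; int priority; };

/* pick the process with the highest priority value
 * (this sketch treats a larger number as higher priority) */
int pick_next(struct proc p[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (p[i].priority > p[best].priority)
            best = i;
    return best;
}

/* age every process that was left waiting this tick */
void age(struct proc p[], int n, int running) {
    for (int i = 0; i < n; i++)
        if (i != running)
            p[i].priority += AGING_STEP;
}

int main(void) {
    struct proc procs[NPROC] = { {1, 5}, {2, 1}, {3, 3} };
    for (int tick = 0; tick < 8; tick++) {
        int next = pick_next(procs, NPROC);
        printf("tick %d: run P%d (priority %d)\n",
               tick, procs[next].pid, procs[next].priority);
        age(procs, NPROC, next);
    }
    /* in this toy trace, P2 (lowest initial priority) is selected by tick 7 */
    return 0;
}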


4. Round Robin (RR)

  • Preemptive
  • Each process gets a fixed time quantum
  • After the quantum expires, the process is moved to the end of the ready queue

✅ Good for time-sharing systems
❌ Performance depends on the time quantum size


5. Multilevel Queue Scheduling

  • Ready queue is divided into multiple queues, each for different types of processes
  • Each queue may have its own scheduling algorithm

✅ Separates processes based on priority/type
❌ Rigid – no movement between queues unless combined with aging


6. Multilevel Feedback Queue

  • Processes can move between queues based on behavior
  • Interactive processes get higher priority

✅ Flexible and adaptive
✅ Good for general-purpose systems


🧪 Common Scheduling Criteria

  • CPU Utilization – percentage of time the CPU is busy
  • Throughput – number of processes completed per unit time
  • Waiting Time – time spent in the ready queue
  • Turnaround Time – time from submission to completion
  • Response Time – time from submission to the first CPU response
  • Fairness – equal chance for all processes

📊 Comparison of Scheduling Algorithms

  • FCFS – non-preemptive; no starvation; good for simple batch systems
  • SJF – preemptive or non-preemptive; starvation possible; good for short tasks and minimum waiting time
  • Priority – preemptive or non-preemptive; starvation possible; good for time-critical applications
  • RR – preemptive; no starvation; good for interactive users
  • MLQ – preemptive or non-preemptive; starvation possible; good for separating processes by system type
  • MLFQ – preemptive; no starvation (uses aging); good for a general OS with mixed load

📌 Summary

  • CPU scheduling improves performance by efficiently allocating the CPU to processes.
  • Several algorithms exist, each suited to different system goals.
  • It is important to balance efficiency, fairness, and responsiveness.

🧩 Topic: Scheduling Algorithms

🧠 What are Scheduling Algorithms?

Scheduling algorithms determine how the CPU selects processes from the ready queue for execution. Each algorithm has its own way of improving CPU utilization, response time, turnaround time, and fairness.


📊 Major CPU Scheduling Algorithms


1. First-Come, First-Served (FCFS)

  • Non-preemptive
  • Executes processes in the order they arrive

📌 Characteristics:

  • Simple and easy to implement
  • Can cause the convoy effect (a slow process delays the faster ones behind it)

🧮 Example:

Processes:  P1  P2  P3
Arrival:    0   1   2
Burst:      5   3   1

Gantt Chart: | P1 (0–5) | P2 (5–8) | P3 (8–9) |
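
A minimal sketch that computes the completion, turnaround, and waiting times for this FCFS example (assuming the processes are already listed in arrival order).

/* FCFS metrics for the three processes above. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};
    int burst[]   = {5, 3, 1};
    int n = 3, time = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];   /* CPU may sit idle until arrival */
        time += burst[i];                           /* completion time of P(i+1) */
        int turnaround = time - arrival[i];
        int waiting    = turnaround - burst[i];
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               i + 1, time, turnaround, waiting);
    }
    return 0;
}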

2. Shortest Job First (SJF)

  • Selects the process with the shortest CPU burst
  • Can be:
      • Non-preemptive (waits until the current job finishes)
      • Preemptive (Shortest Remaining Time First – SRTF)

📌 Pros and cons:

  • ✅ Best average waiting time
  • ❌ Needs exact burst times in advance
  • ❌ Starvation possible for long jobs

3. Round Robin (RR)

  • Preemptive
  • Each process gets a fixed time quantum (e.g., 2ms)
  • After its quantum, the process goes to the back of the ready queue

📌 Pros and cons:

  • ✅ Fair to all processes
  • ✅ Good for time-sharing systems
  • ❌ Too small a quantum → high context-switching overhead
  • ❌ Too large a quantum → behaves like FCFS

🧮 Example:

Processes: P1(4), P2(3), P3(5)
Quantum: 2
Gantt: | P1 (0–2) | P2 (2–4) | P3 (4–6) | P1 (6–8) | P2 (8–9) | P3 (9–11) | P3 (11–12) |
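
A minimal sketch that replays this Round Robin example. Because all three processes arrive at time 0 and no new ones appear, a simple cyclic scan over the array stands in for the real ready queue here.

/* Round Robin replay with quantum = 2. */
#include <stdio.h>

int main(void) {
    int burst[] = {4, 3, 5};            /* remaining burst time of P1..P3 */
    int n = 3, quantum = 2, time = 0, remaining = 3;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;                 /* already finished */
            int run = burst[i] < quantum ? burst[i] : quantum;
            printf("t=%2d..%2d  P%d\n", time, time + run, i + 1);
            time += run;
            burst[i] -= run;
            if (burst[i] == 0) remaining--;
        }
    }
    return 0;
}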

4. Priority Scheduling

  • Each process is assigned a priority number
  • CPU is given to the highest-priority process

🟡 Can be:

  • Preemptive (interrupts a running lower-priority process)
  • Non-preemptive

📌 Pros and cons:

  • ✅ Useful for time-critical tasks
  • ❌ Starvation possible for low-priority processes
  • ✔️ Aging can be used to avoid starvation

5. Multilevel Queue (MLQ)

  • Processes are divided into multiple queues (e.g., system, interactive, batch)
  • Each queue has its own scheduling algorithm

📌 Rigid – no movement between queues

  • Example: OS uses FCFS for one queue, RR for another

6. Multilevel Feedback Queue (MLFQ)

  • Like MLQ, but processes can move between queues
  • Prioritizes interactive tasks with short bursts

📌 Pros:

  • Adaptive, prevents starvation
  • Complex but effective for mixed systems (real-world OS)

📈 Performance Metrics (Used to Compare Algorithms)

  • CPU Utilization – percentage of time the CPU is active
  • Throughput – number of processes completed per unit time
  • Turnaround Time – total time from submission to completion
  • Waiting Time – time spent in the ready queue
  • Response Time – time from submission to the first response

📌 Summary of Scheduling Algorithms

  • FCFS – non-preemptive; no starvation; ideal for simple batch jobs
  • SJF – preemptive or non-preemptive; starvation possible; ideal for short tasks
  • RR – preemptive; no starvation; ideal for time-sharing
  • Priority – preemptive or non-preemptive; starvation possible; ideal for critical processes
  • MLQ – preemptive or non-preemptive; starvation possible; ideal for grouped scheduling
  • MLFQ – preemptive; no starvation (adaptive); ideal for a general-purpose OS

🧮 Bonus: Formulas to Remember

  • Turnaround Time = Completion Time – Arrival Time
  • Waiting Time = Turnaround Time – Burst Time
  • Response Time = First Execution Time – Arrival Time
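Applied to the FCFS example above: P2 arrives at time 1, first gets the CPU at time 5, and completes at time 8 after a burst of 3, so Turnaround = 8 – 1 = 7, Waiting = 7 – 3 = 4, and Response = 5 – 1 = 4.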

🧩 Topic: Multiple Processor Scheduling

🧠 What is Multiple Processor Scheduling?

  • It refers to CPU scheduling in systems with more than one processor (also called multiprocessor systems).
  • The goal is to efficiently use all processors while ensuring fairness, responsiveness, and load balancing.

⚙️ Types of Multiprocessor Systems

  1. Symmetric Multiprocessing (SMP)
      • All processors are identical and run the same OS.
      • Each processor may schedule its own tasks.
      • Most modern OSes (like Linux and Windows) use SMP.

  2. Asymmetric Multiprocessing (AMP)
      • One processor is the master; the others are slaves.
      • The master handles all scheduling and task assignment.
      • Simpler, but less efficient than SMP.

📊 Key Concepts in Multiple Processor Scheduling

  • Processor Affinity – a process prefers to run on the same CPU it used before (its cached data stays warm)
  • Load Balancing – distribute processes evenly across all CPUs
  • Scheduling Algorithms – the same algorithms (e.g., RR, Priority) can be applied in a multiprocessor setup

🔍 Types of Scheduling Approaches

  • Common Ready Queue – one global ready queue shared by all processors; any CPU can pick a process
  • Separate Ready Queues – each processor has its own queue, avoiding contention on a shared queue
  • Dynamic Scheduling – tasks are reassigned at runtime to balance the load

🎯 Goals of Multiprocessor Scheduling

  • Maximize CPU utilization
  • Balance workload across processors
  • Minimize process migration (unless needed)
  • Improve scalability and responsiveness

🧩 Processor Affinity Types

  • Soft Affinity – the OS tries to keep a process on the same CPU but may move it if needed
  • Hard Affinity – the process is strictly bound to a specific CPU (or set of CPUs)
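
On Linux, hard affinity can be set with sched_setaffinity(2); here is a minimal sketch that pins the calling process to CPU 0 only.

/* Restrict the calling process to CPU 0. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                              /* allow only CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("process is now pinned to CPU 0\n");
    return 0;
}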

📌 Summary

  • Multiple processor scheduling manages how processes are assigned to multiple CPUs.
  • Uses concepts like SMP/AMP, affinity, and load balancing.
  • Aims for better performance, efficiency, and parallelism in modern multicore systems.