Topic: Process Concepts
What is a Process?
- A process is a program in execution.
- It is an active entity, unlike a program (which is just a passive code file).
- A process includes:
  - Program code
  - Current activity (Program Counter)
  - Data, stack, heap, and registers
Process Components
| Component | Description |
|---|---|
| Text Section | The program code (instructions) |
| Data Section | Global/static variables |
| Heap | Dynamic memory allocation (e.g., malloc) |
| Stack | Function calls, return addresses, local variables |
| Program Counter | Holds the address of the next instruction to execute |
| Registers | Hold intermediate data during execution |
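These regions can be seen from inside a running program. A minimal C sketch (the exact addresses and their relative order vary by OS, compiler, and architecture):

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                   /* data section: global/static variable */

int main(void) {
    int local = 42;                       /* stack: local variable of main()      */
    int *dynamic = malloc(sizeof(int));   /* heap: allocated at run time          */
    if (dynamic == NULL)
        return 1;

    /* Printing the addresses shows that each item lives in a different
       region of the process's address space. */
    printf("text  (main)   : %p\n", (void *)main);
    printf("data  (global) : %p\n", (void *)&global_counter);
    printf("heap  (malloc) : %p\n", (void *)dynamic);
    printf("stack (local)  : %p\n", (void *)&local);

    free(dynamic);
    return 0;
}
```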
Process vs Program
| Aspect | Program | Process |
|---|---|---|
| Nature | Passive (code on disk) | Active (code in execution) |
| Stored In | Secondary memory (disk) | Main memory (RAM) |
| Lifespan | Exists until deleted | Exists during execution |
Process States
| State | Description |
|---|---|
| New | Process is being created |
| Ready | Process is waiting to be assigned to CPU |
| Running | Process is currently being executed by CPU |
| Waiting/Blocked | Process is waiting for an event (e.g., I/O) |
| Terminated | Process has finished execution |
The OS uses a state transition diagram to manage these changes.
Process Control Block (PCB)
- A data structure used by the OS to store information about a process.
| Field | Description |
|---|---|
| Process ID (PID) | Unique identifier for the process |
| Process State | Current state (Running, Waiting, etc.) |
| Program Counter | Next instruction to execute |
| CPU Registers | Contents of registers when switched out |
| Memory Info | Memory-management information (e.g., base/limit registers or page tables) |
| I/O Info | List of open files/devices |
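As an illustration only, the PCB can be pictured as a C structure. The fields and names below are a simplified, hypothetical sketch; real kernels (for example, Linux's task_struct) keep far more state:

```c
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Hypothetical, simplified PCB for illustration only. */
struct pcb {
    int             pid;              /* Process ID                          */
    enum proc_state state;            /* current process state               */
    unsigned long   program_counter;  /* address of the next instruction     */
    unsigned long   registers[16];    /* CPU registers saved on a switch     */
    void           *memory_info;      /* e.g. pointer to page tables/limits  */
    int             open_files[16];   /* I/O info: open file descriptors     */
};

int main(void) {
    struct pcb p = { .pid = 1234, .state = READY };
    printf("process %d is in state %d\n", p.pid, p.state);
    return 0;
}
```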
Types of Processes
| Type | Description |
|---|---|
| User Process | Initiated by a user (e.g., running a program) |
| System Process | Created and managed by OS (e.g., background services) |
Process Creation
A process can create child processes using system calls like fork() (UNIX/Linux).
Parent and child processes can:
- Execute concurrently
- Share resources or have separate memory
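A minimal sketch of process creation with fork() on UNIX/Linux; after the call the parent and child run concurrently, each with its own copy of the address space:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                  /* create a child process */

    if (pid < 0) {
        perror("fork");                  /* creation failed */
        return 1;
    } else if (pid == 0) {
        /* child: gets its own copy of the parent's address space */
        printf("child  pid=%d, parent=%d\n", getpid(), getppid());
    } else {
        /* parent: runs concurrently with the child */
        printf("parent pid=%d, child=%d\n", getpid(), pid);
        wait(NULL);                      /* wait for the child to finish */
    }
    return 0;
}
```

fork() returns 0 in the child and the child's PID in the parent, which is how the two execution paths are told apart.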
Process Termination
- A process ends by calling exit().
- Termination can be:
  - Normal termination (completed successfully)
  - Abnormal termination (error, killed by parent, failed I/O, etc.)
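A small sketch showing how a parent distinguishes normal from abnormal termination of its child using waitpid():

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0)
        return 1;
    if (pid == 0)
        exit(3);                          /* child terminates normally with status 3 */

    int status;
    waitpid(pid, &status, 0);             /* parent collects the exit status */
    if (WIFEXITED(status))
        printf("normal termination, status %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("abnormal termination by signal %d\n", WTERMSIG(status));
    return 0;
}
```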
Summary
- A process is a program in execution with code, data, and system resources.
- It moves through states (New, Ready, Running, Waiting, Terminated).
- It is managed by the OS through the PCB.
- Understanding processes is essential for multitasking, scheduling, and resource management.
Topic: Concurrent Processes
What are Concurrent Processes?
- Concurrent processes are two or more processes that are executed during overlapping time periods.
- They may run:
  - Truly simultaneously on multi-core systems
  - Interleaved on a single-core CPU, where they only appear to run in parallel
Why Concurrency is Important
- Allows multiple tasks to be active at the same time
- Increases CPU utilization and system efficiency
- Enables responsive systems (e.g., web servers, OS multitasking)
Examples of Concurrent Processes
| Example | Description |
|---|---|
| Running a music player while browsing | Both applications are active concurrently |
| Downloading a file while editing text | One process uses I/O, the other uses CPU |
| OS handling multiple users | User processes handled concurrently by the OS |
Types of Concurrency
| Type | Description |
|---|---|
| Process-based | Multiple independent programs running simultaneously |
| Thread-based | Multiple threads within the same process sharing memory |
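A minimal sketch of thread-based concurrency with POSIX threads: both threads belong to the same process and share the same global variable, unlike separate processes, which get their own copies. The unsynchronized update deliberately sets up the race-condition problem discussed next (compile with -pthread):

```c
#include <stdio.h>
#include <pthread.h>

int shared = 0;                           /* both threads see this same variable */

void *worker(void *arg) {
    shared += 1;                          /* unsynchronized update (see challenges below) */
    printf("thread %ld sees shared = %d\n", (long)arg, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final shared = %d\n", shared);
    return 0;
}
```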
Challenges in Concurrency
| Problem | Description |
|---|---|
| Race Condition | Two or more processes access shared data simultaneously, leading to inconsistent results |
| Critical Section Problem | Ensuring that a section of code accessing shared data is executed by at most one process at a time |
| Deadlock | Two or more processes wait for resources held by each other, blocking all |
| Starvation | A process waits indefinitely for a resource |
OS Responsibilities in Concurrency
- Process synchronization using:
  - Semaphores
  - Mutexes
  - Monitors
- Scheduling to ensure fair CPU access
- Deadlock detection and prevention
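A minimal sketch of one of these mechanisms, a POSIX mutex, protecting the critical section around a shared counter; without the lock the two threads could interleave their updates and lose increments (compile with -pthread):

```c
#include <stdio.h>
#include <pthread.h>

long counter = 0;                                 /* shared data */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* enter critical section */
        counter++;                                /* at most one thread here at a time */
        pthread_mutex_unlock(&lock);              /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 200000 with the mutex)\n", counter);
    return 0;
}
```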
Summary
- Concurrent processes run during overlapping times to improve system performance.
- May share resources and need synchronization to avoid errors.
- The OS uses scheduling and synchronization mechanisms to manage concurrency safely and efficiently.
Topic: Scheduling Concepts
What is Scheduling in an OS?
- Scheduling is the process of deciding which process runs next on the CPU when there are multiple ready processes.
- The component that makes this decision is called the CPU scheduler.
Goals of CPU Scheduling
- Maximize CPU utilization
- Maximize throughput (number of processes completed in a time unit)
- Minimize waiting time, turnaround time, and response time
- Ensure fairness among processes
Types of Schedulers
| Scheduler Type | Description |
|---|---|
| Long-Term Scheduler | Selects which processes are admitted into the ready queue (job scheduler) |
| Short-Term Scheduler | Selects which ready process gets the CPU next (CPU scheduler) |
| Medium-Term Scheduler | Temporarily removes and swaps processes to control memory load (swapper) |
Process Scheduling Queues
| Queue | Description |
|---|---|
| Job Queue | Contains all processes in the system |
| Ready Queue | Processes waiting to be assigned to CPU |
| Waiting (I/O) Queue | Processes waiting for I/O devices |
Dispatcher
- The dispatcher is a module that gives CPU control to the selected process.
- Responsibilities:
  - Switching context
  - Switching to user mode
  - Jumping to the process's program counter
Scheduling Criteria
| Criterion | Meaning |
|---|---|
| CPU Utilization | Keep CPU busy as much as possible |
| Throughput | Number of processes completed per unit time |
| Turnaround Time | Time from submission to completion of a process |
| Waiting Time | Total time a process spends in the ready queue |
| Response Time | Time from submission to the first response (especially in interactive systems) |
| Fairness | All processes get a fair share of the CPU |
Preemptive vs Non-Preemptive Scheduling
| Type | Description |
|---|---|
| Preemptive | CPU can be taken away from a running process (e.g., time slicing) |
| Non-Preemptive | Once a process gets the CPU, it runs until it finishes or blocks |
Summary
- CPU Scheduling determines the order in which processes are executed.
- It aims to improve performance by reducing waiting time, increasing throughput, and ensuring fairness.
- The OS uses schedulers and queues to manage the execution of multiple processes.
Topic: CPU Scheduling
What is CPU Scheduling?
- CPU Scheduling is the process of selecting a process from the ready queue and assigning it to the CPU for execution.
- It's a core function of the short-term scheduler (also called CPU scheduler).
Objectives of CPU Scheduling
- Maximize CPU utilization
- Minimize waiting time and turnaround time
- Increase system throughput
- Ensure fairness and responsiveness (especially in interactive systems)
Key CPU Scheduling Algorithms
1. First-Come, First-Served (FCFS)
- Non-preemptive
- Processes are executed in the order they arrive
- Pros: simple to implement
- Cons: may cause long waiting times (convoy effect)
2. Shortest Job First (SJF)
- Non-preemptive or preemptive (the preemptive form is called SRTF)
- CPU is given to the process with the shortest burst time
- Pros: minimum average waiting time
- Cons: requires knowledge of burst times in advance; may cause starvation
3. Priority Scheduling
- Each process is assigned a priority number
- CPU is allocated to the highest priority process
- Pros: suitable for critical tasks
- Cons: low-priority processes may starve
- Aging can be used to prevent starvation (gradually increase priority over time)
4. Round Robin (RR)
- Preemptive
- Each process gets a fixed time quantum
- After the quantum expires, the process is moved to the end of the ready queue
- Pros: good for time-sharing systems
- Cons: performance depends on the time quantum size
5. Multilevel Queue Scheduling
- Ready queue is divided into multiple queues, each for different types of processes
- Each queue may have its own scheduling algorithm
- Pros: separates processes based on priority/type
- Cons: rigid → no movement between queues unless combined with aging
6. Multilevel Feedback Queue
- Processes can move between queues based on behavior
- Interactive processes get higher priority
- Pros: flexible and adaptive; good for general-purpose systems
Common Scheduling Criteria
| Metric | Description |
|---|---|
| CPU Utilization | % of time CPU is busy |
| Throughput | # of processes completed per unit time |
| Waiting Time | Time spent in ready queue |
| Turnaround Time | Time from submission to completion |
| Response Time | Time from submission to first CPU response |
| Fairness | Equal chance for all processes |
Comparison Table of Scheduling Algorithms
| Algorithm | Preemptive | Starvation | Good For |
|---|---|---|---|
| FCFS | No | No | Simple batch systems |
| SJF | Optional | Yes | Short tasks, minimum wait |
| Priority | Optional | Yes | Time-critical applications |
| RR | Yes | No | Interactive users |
| MLQ | Optional | Yes | System type separation |
| MLFQ | Yes | No (uses aging) | General OS with mixed load |
Summary
- CPU Scheduling improves performance by efficiently allocating CPU to processes.
- Several algorithms exist, each suited to different system goals.
- It is important to balance efficiency, fairness, and responsiveness.
Topic: Scheduling Algorithms
What are Scheduling Algorithms?
Scheduling algorithms determine how the CPU selects processes from the ready queue for execution. Each algorithm has its own way of improving CPU utilization, response time, turnaround time, and fairness.
Major CPU Scheduling Algorithms
1. First-Come, First-Served (FCFS)
- Non-preemptive
- Executes processes in the order they arrive
Characteristics:
- Simple and easy to implement
- Can cause convoy effect (slow process blocks faster ones)
Example:
| Process | Arrival | Burst |
|---|---|---|
| P1 | 0 | 5 |
| P2 | 1 | 3 |
| P3 | 2 | 1 |
Gantt Chart: | P1 | P2 | P3 |
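A small sketch that computes completion, turnaround, and waiting times for this FCFS example (processes assumed to be listed in arrival order):

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};
    int burst[]   = {5, 3, 1};
    int n = 3, time = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])
            time = arrival[i];              /* CPU stays idle until the process arrives */
        time += burst[i];                   /* completion time of process i */
        int turnaround = time - arrival[i];
        int waiting    = turnaround - burst[i];
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               i + 1, time, turnaround, waiting);
    }
    return 0;
}
```

For this input it reports waiting times of 0, 4, and 6, an average of about 3.33.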
2. Shortest Job First (SJF)
- Selects the process with the shortest CPU burst
- Can be:
  - Non-preemptive (waits until the current job finishes)
  - Preemptive (Shortest Remaining Time First, SRTF)
Pros:
- Best average waiting time
Cons:
- Needs exact burst times in advance
- Starvation possible for long jobs
3. Round Robin (RR)
- Preemptive
- Each process gets a fixed time quantum (e.g., 2ms)
- After quantum, process goes to back of ready queue
Pros:
- Fair for all processes
- Good for time-sharing systems
Cons:
- Too small a quantum → too much context switching
- Too large a quantum → behaves like FCFS
Example:
Processes: P1(4), P2(3), P3(5)
Quantum: 2
Gantt: | P1 | P2 | P3 | P1 | P2 | P3 | P3 |
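A small sketch that simulates this Round Robin example (quantum 2, all three processes assumed to arrive at time 0) and prints the same Gantt sequence; a '*' marks the segment in which a process completes:

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {4, 3, 5};                 /* burst times of P1, P2, P3 */
    int queue[64], head = 0, tail = 0;
    int quantum = 2, time = 0, n = 3;

    for (int i = 0; i < n; i++)
        queue[tail++] = i;                       /* initial ready queue: P1, P2, P3 */

    printf("Gantt:");
    while (head < tail) {
        int p = queue[head++];                   /* next ready process */
        int run = remaining[p] < quantum ? remaining[p] : quantum;
        printf(" | P%d(%d-%d)", p + 1, time, time + run);
        time += run;
        remaining[p] -= run;
        if (remaining[p] > 0)
            queue[tail++] = p;                   /* not finished: back of the queue */
        else
            printf("*");                         /* finished in this segment */
    }
    printf(" |\n");
    return 0;
}
```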
4. Priority Scheduling
- Each process is assigned a priority number
- CPU is given to the highest-priority process
Can be:
- Preemptive (interrupts running lower-priority process)
- Non-preemptive
Pros:
- Useful for time-critical tasks
Cons:
- Starvation possible for low-priority processes
Aging can be used to avoid starvation.
5. Multilevel Queue (MLQ)
- Processes are divided into multiple queues (e.g., system, interactive, batch)
- Each queue has its own scheduling algorithm
- Rigid → no movement between queues
- Example: OS uses FCFS for one queue, RR for another
6. Multilevel Feedback Queue (MLFQ)
- Like MLQ, but processes can move between queues
- Prioritizes interactive tasks with short bursts
Pros:
- Adaptive, prevents starvation
- Effective for mixed workloads (used in real-world OSes)
Cons:
- More complex to implement
Performance Metrics (Used to Compare Algorithms)
| Metric | Description |
|---|---|
| CPU Utilization | % of time CPU is active |
| Throughput | # of processes completed per unit time |
| Turnaround Time | Total time from submission to completion |
| Waiting Time | Time spent in ready queue |
| Response Time | Time from submission to first response |
Summary Table of Scheduling Algorithms
| Algorithm | Preemptive | Starvation | Ideal For |
|---|---|---|---|
| FCFS | No | No | Simple, batch jobs |
| SJF | Optional | Yes | Short tasks |
| RR | Yes | No | Time-sharing |
| Priority | Optional | Yes | Critical processes |
| MLQ | Optional | Yes | Grouped scheduling |
| MLFQ | Yes | No | General OS (adaptive) |
Bonus: Formulas to Remember
- Turnaround Time = Completion Time − Arrival Time
- Waiting Time = Turnaround Time − Burst Time
- Response Time = First Execution Time − Arrival Time
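Applying these formulas to the FCFS example above (P1: arrival 0, burst 5; P2: arrival 1, burst 3; P3: arrival 2, burst 1), the completion times are 5, 8, and 9, so:
- Turnaround Time: P1 = 5 − 0 = 5, P2 = 8 − 1 = 7, P3 = 9 − 2 = 7
- Waiting Time: P1 = 5 − 5 = 0, P2 = 7 − 3 = 4, P3 = 7 − 1 = 6
- Average Waiting Time = (0 + 4 + 6) / 3 ≈ 3.33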
Topic: Multiple Processor Scheduling
What is Multiple Processor Scheduling?
- It refers to CPU scheduling in systems with more than one processor (also called multiprocessor systems).
- The goal is to efficiently use all processors while ensuring fairness, responsiveness, and load balancing.
Types of Multiprocessor Systems
- Symmetric Multiprocessing (SMP)
  - All processors are identical and run the same OS.
  - Each processor may schedule its own tasks.
  - Most modern OSes (like Linux and Windows) use SMP.
- Asymmetric Multiprocessing (AMP)
  - One processor is the master; the others are slaves.
  - The master handles all scheduling and task assignment.
  - Simpler, but less efficient than SMP.
Key Concepts in Multiple Processor Scheduling
| Concept | Description |
|---|---|
| Processor Affinity | A process prefers to run on the same CPU it used before (caches stay warm) |
| Load Balancing | Distribute processes evenly across all CPUs |
| Scheduling Algorithms | The same algorithms (e.g., RR, Priority) can be applied in a multiprocessor setup |
Types of Scheduling Approaches
| Approach | Description |
|---|---|
| Common Ready Queue | One global ready queue for all processors → any CPU can pick a process |
| Separate Ready Queues | Each processor has its own queue → faster context switching |
| Dynamic Scheduling | Tasks are reassigned during runtime to balance load |
Goals of Multiprocessor Scheduling
- Maximize CPU utilization
- Balance workload across processors
- Minimize process migration (unless needed)
- Improve scalability and responsiveness
Processor Affinity Types
| Type | Meaning |
|---|---|
| Soft Affinity | OS tries to keep a process on the same CPU but may move if needed |
| Hard Affinity | Process is strictly bound to a specific CPU |
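On Linux specifically, hard affinity can be requested from user space with sched_setaffinity(); a minimal sketch (Linux-only API, other systems expose different calls):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                              /* allow this process to run only on CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d is now pinned to CPU 0\n", getpid());
    return 0;
}
```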
Summary
- Multiple processor scheduling manages how processes are assigned to multiple CPUs.
- Uses concepts like SMP/AMP, affinity, and load balancing.
- Aims for better performance, efficiency, and parallelism in modern multicore systems.