🧩 Topic: Process Synchronization
🧠 What is Process Synchronization?
- Process Synchronization is the coordination of processes that share resources (like memory, files, variables) so they don’t interfere with each other and cause inconsistent data or system errors.
- It is especially important in concurrent systems, where multiple processes or threads run simultaneously.
⚠️ Why Synchronization is Needed
- In a multitasking environment, multiple processes may access shared data at the same time.
- Without control, this can cause:
  - Race conditions
  - Deadlocks
  - Data inconsistency
🧠 Example: Two processes updating a shared variable x = x + 1
If both processes read x = 5 before either writes back, each computes 6 and the final value is 6 instead of 7.
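A minimal C sketch of this lost-update race using two POSIX threads (threads share memory, which makes the race easy to show; the iteration count is arbitrary; compile with `-pthread`). Without synchronization, the final total is often less than 200000:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;                    // shared variable, plays the role of x

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter = counter + 1;       // read-modify-write: not atomic
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    // Expected 200000, but lost updates usually make it smaller
    printf("counter = %ld\n", counter);
    return 0;
}
```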
🧩 Race Condition
- A race condition occurs when the output depends on the sequence or timing of processes' execution.
- It happens when:
  - Multiple processes access shared data
  - At least one writes to it
  - No proper synchronization mechanism is used
🎯 Goals of Synchronization
- Mutual Exclusion: Only one process should access the critical section at a time.
- Progress: If no process is in the critical section, one of the waiting processes must be allowed to enter.
- Bounded Waiting: A process should not be indefinitely delayed from entering its critical section.
🔒 Critical Section Problem
- A critical section is a part of the code where a process accesses shared resources.
- Problem: How to ensure that only one process executes its critical section at a time?
🧪 Solutions to Critical Section Problem
1. Software Solutions (for 2 or more processes)
| Method | Description |
|---|---|
| Peterson’s Algorithm | Classical software solution for two processes; uses turn and flag variables |
| Bakery Algorithm | Generalized solution for n processes using ticket system |
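A C-style sketch of Peterson's algorithm for two processes (IDs 0 and 1), using the turn and flag variables mentioned in the table. This is the classic textbook form; on modern hardware the shared variables would additionally need atomic types or memory barriers to work reliably.

```c
// Shared between process 0 and process 1
int flag[2] = {0, 0};    // flag[i] = 1 means process i wants to enter
int turn = 0;            // whose turn it is to yield

void enter_critical_section(int i) {
    int j = 1 - i;            // the other process
    flag[i] = 1;              // announce intent to enter
    turn = j;                 // give priority to the other process
    while (flag[j] && turn == j)
        ;                     // busy-wait while the other wants in and has priority
}

void exit_critical_section(int i) {
    flag[i] = 0;              // no longer interested
}
```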
2. Hardware Solutions
| Method | Description |
|---|---|
| Disabling interrupts | Prevents context switch during critical section |
| Test-and-Set, Compare-and-Swap | Atomic instructions used to create locks |
3. OS-Level Synchronization Tools
| Tool | Description |
|---|---|
| Semaphore | Integer variable that controls access to resources |
| Mutex | Lock used to enforce mutual exclusion between threads/processes |
| Monitor | High-level construct that encapsulates variables + procedures |
| Spinlock | A process waits in a loop (busy wait) until a lock is released |
🧮 Semaphore Example
```
Semaphore S = 1;   // 1 = available

// Process A
wait(S);           // decreases S (enter critical section)
// critical section
signal(S);         // increases S (leave critical section)
```
- wait() / P(): Decrements the semaphore. If the result is negative, the calling process is blocked.
- signal() / V(): Increments the semaphore. If any processes are blocked on it, one of them is unblocked.
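For comparison, a minimal real-world counterpart using POSIX semaphores on Linux (sem_wait plays the role of wait/P and sem_post the role of signal/V; compile with `-pthread`; error checks omitted):

```c
#include <semaphore.h>
#include <stdio.h>

sem_t S;

int main(void) {
    sem_init(&S, 0, 1);   // 0 = shared between threads of this process, initial value 1

    sem_wait(&S);         // wait(S) / P(S): enter the critical section
    printf("in critical section\n");
    sem_post(&S);         // signal(S) / V(S): leave the critical section

    sem_destroy(&S);
    return 0;
}
```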
⚠️ Common Synchronization Problems
| Problem | Description |
|---|---|
| Deadlock | Two or more processes waiting on each other forever |
| Starvation | A process never gets the resource it needs |
| Busy Waiting | Process keeps checking for a condition, wasting CPU time |
📌 Summary
- Process synchronization ensures safe access to shared resources in concurrent execution.
- It prevents race conditions, inconsistent data, and deadlocks.
- Solutions include critical section protocols, semaphores, mutexes, and monitors.
- Synchronization is critical for multi-user, multi-tasking, and real-time systems.
🧩 Topic: Critical Section
🧠 What is a Critical Section?
- A critical section is the part of a program where a process accesses shared resources such as:
  - Variables
  - Files
  - Printers
  - Memory
- Since shared resources can be used by only one process at a time, access to the critical section must be controlled to prevent:
  - Race conditions
  - Data inconsistency
  - System crashes
🔄 Structure of a Process with a Critical Section
Entry Section → Critical Section → Exit Section → Remainder Section
- Entry Section: Code that requests permission to enter the critical section
- Critical Section: Code that accesses/modifies shared data
- Exit Section: Code that signals exit from the critical section
- Remainder Section: Code outside the critical section
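The same structure written as a C-style loop (the function names are placeholders for whatever mechanism implements each section, not a specific API):

```c
while (true) {
    request_entry();          // Entry Section: ask permission to enter
    access_shared_data();     // Critical Section: read/write shared resources
    signal_exit();            // Exit Section: announce that the section is free
    do_independent_work();    // Remainder Section: code outside the critical section
}
```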
🎯 Goals of Critical Section Handling (3 Requirements)
- Mutual Exclusion: Only one process at a time can be in the critical section.
- Progress: If no process is in the critical section, others should not be unnecessarily delayed.
- Bounded Waiting: A process must not wait indefinitely to enter the critical section.
⚙️ Solutions to the Critical Section Problem
🔹 1. Software Solutions
- Work well for two or a small number of processes
| Algorithm | Description |
|---|---|
| Peterson's Algorithm | Uses turn and flag variables to alternate access |
| Dekker's Algorithm | First known software solution for 2 processes |
| Bakery Algorithm | Generalized solution for n processes (like taking a token number) |
🔹 2. Hardware Solutions
- Use atomic instructions that can't be interrupted
| Instruction | Description |
|---|---|
| Test-and-Set | Checks and sets a lock in one step |
| Compare-and-Swap | Compares memory value and swaps if equal |
🔹 3. Operating System Solutions
- OS provides synchronization tools:
| Tool | Description |
|---|---|
| Semaphore | Integer variable; wait() and signal() control access |
| Mutex | Lock used to allow only one thread/process in the section |
| Monitor | High-level structure with internal mutual exclusion |
⚠️ Problems Without Proper Critical Section Handling
- Race Conditions: Output depends on unpredictable process timing
- Deadlock: Two or more processes wait forever for each other
- Starvation: A process waits indefinitely to enter its critical section
📌 Summary
- A critical section is where a process accesses shared data.
- Only one process should be in a critical section at any time (mutual exclusion).
- OS or programmer must use synchronization techniques to enforce safe access.
- Proper handling prevents race conditions and deadlocks, and ensures correct execution in concurrent systems.
🧩 Topic: Synchronization Hardware
🧠 What is Synchronization Hardware?
- Synchronization hardware refers to special CPU instructions that help implement mutual exclusion and prevent race conditions at the hardware level.
- These instructions are atomic — they execute completely without interruption.
- Used to build locks like mutexes and semaphores without relying on software-only solutions.
🔒 Why Hardware Support is Needed?
- In multiprocessor or preemptive environments, process execution can be interrupted anytime.
- Critical sections must not be interrupted while accessing shared data.
- Hardware instructions help guarantee atomicity and avoid race conditions.
⚙️ Key Hardware Instructions for Synchronization
1. Test-and-Set (TAS) Instruction
- Atomic operation: Tests and modifies a lock variable in a single step.
```
// Pseudocode
boolean TestAndSet(boolean *target) {
    boolean old = *target;   // read the current value of the lock
    *target = true;          // set the lock in the same atomic step
    return old;              // old == false means the caller acquired the lock
}
```
🟢 Used like this:
```
do {
    while (TestAndSet(&lock));   // Wait (busy-wait) until the lock was free
    // Critical Section
    lock = false;                // Release lock
    // Remainder Section
} while (true);
```
✅ Pros:
- Simple and fast

❌ Cons:
- Causes busy waiting (CPU waste)
- Doesn’t ensure bounded waiting
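For reference, C11 exposes test-and-set directly through atomic_flag in <stdatomic.h>; a minimal spinlock sketch built on it:

```c
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   // clear = unlocked

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              // busy-wait until the old value was false
}

void release(void) {
    atomic_flag_clear(&lock);          // set the flag back to false (unlocked)
}
```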
2. Compare-and-Swap (CAS) Instruction
- Compares a memory value to an expected value and swaps it only if they match.
```
// Pseudocode
int CompareAndSwap(int *value, int expected, int new) {
    int temp = *value;          // remember the old value
    if (*value == expected)     // swap only if it still holds the expected value
        *value = new;
    return temp;                // return the old value to the caller
}
```
🟢 Usage:
do {
while (CompareAndSwap(&lock, 0, 1) != 0); // Lock
// Critical Section
lock = 0; // Unlock
// Remainder Section
} while (true);
✅ Pros:
- No need for disabling interrupts
- Works on multiprocessor systems

❌ Cons:
- May still use busy waiting
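Compare-and-swap also enables lock-free updates, where a thread retries instead of blocking. A C11 sketch using atomic_compare_exchange_weak (the shared counter is illustrative):

```c
#include <stdatomic.h>

atomic_int counter = 0;

void lock_free_increment(void) {
    int expected = atomic_load(&counter);
    // Retry until no other thread changed counter between our load and our swap.
    while (!atomic_compare_exchange_weak(&counter, &expected, expected + 1))
        ;   // on failure, expected is reloaded with the current value
}
```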
3. Exchange (XCHG) Instruction
- Swaps content of register and memory location atomically
- Used for simple spinlocks
🧩 Disabling Interrupts (Outdated Method)
- Temporarily disables interrupts to prevent context switch
```
disableInterrupts();
// critical section
enableInterrupts();
```
❌ Not practical in multiprocessor systems
❌ Can affect system responsiveness
✅ Useful in uniprocessor systems only
📌 Summary Table
| Technique | Type | Use Case | Drawbacks |
|---|---|---|---|
| Test-and-Set | Atomic Op | Simple lock control | Busy-waiting, no fairness |
| Compare-and-Swap | Atomic Op | Lock-free synchronization | Busy-waiting |
| Exchange (XCHG) | Atomic Op | Low-level locking mechanism | Limited flexibility |
| Disable Interrupts | CPU Flag | Uniprocessor critical sections | Unsafe for multiprocessors |
✅ Summary
- Synchronization hardware provides low-level support for atomic operations.
- Instructions like Test-and-Set, Compare-and-Swap, and Exchange help implement mutex locks and semaphores.
- Modern operating systems combine these with software techniques for better performance and fairness.
🧩 Topic: Semaphores
🧠 What is a Semaphore?
- A semaphore is a synchronization tool used to control access to shared resources in concurrent systems.
- It helps prevent race conditions and deadlocks and is a standard way to enforce mutual exclusion.
🧩 Types of Semaphores
| Type | Description |
|---|---|
| Counting Semaphore | Can take any non-negative integer value (used for resource counting) |
| Binary Semaphore / Mutex | Takes only 0 or 1 value (used for mutual exclusion) |
📟 Basic Semaphore Operations
Semaphores use two atomic operations:
```
// wait(S) or P(S)
wait(S):
    while (S <= 0);   // wait
    S--;

// signal(S) or V(S)
signal(S):
    S++;
```
These operations must be atomic (executed without interruption).
🔄 How Semaphores Work
Let’s say S is a semaphore initialized to 1:
```
wait(S);     // enter critical section
// critical section code
signal(S);   // exit critical section
```
- wait(S) decreases the semaphore value. If the value becomes negative, the calling process is blocked.
- signal(S) increments the value and wakes up a blocked process, if any.
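The blocking behaviour described above is usually implemented by pairing the integer with a queue of blocked processes. A textbook-style pseudocode sketch, assuming Queue, Process, enqueue(), dequeue(), block(), and wakeup() are provided by the scheduler:

```
typedef struct {
    int value;        // may go negative: |value| = number of waiting processes
    Queue waiting;    // processes blocked on this semaphore
} Semaphore;

void wait(Semaphore *S) {        // must execute atomically
    S->value--;
    if (S->value < 0) {
        enqueue(&S->waiting, current_process);
        block();                 // suspend the calling process
    }
}

void signal(Semaphore *S) {      // must execute atomically
    S->value++;
    if (S->value <= 0) {
        Process *P = dequeue(&S->waiting);
        wakeup(P);               // move P back to the ready queue
    }
}
```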
🔐 Example: Mutual Exclusion Using Semaphore
```
Semaphore mutex = 1;
```
Process A:
```
wait(mutex);
// critical section
signal(mutex);
```
Process B:
```
wait(mutex);
// critical section
signal(mutex);
```
✅ Only one process can be in the critical section at a time.
🧰 Example: Producer-Consumer Problem Using Counting Semaphores
- Shared buffer size = n
- Semaphores: mutex = 1, empty = n, full = 0
Producer:
```
wait(empty);     // wait for a free slot
wait(mutex);     // lock the buffer
// Add item to buffer
signal(mutex);   // unlock the buffer
signal(full);    // one more filled slot
```
Consumer:
```
wait(full);      // wait for a filled slot
wait(mutex);     // lock the buffer
// Remove item from buffer
signal(mutex);   // unlock the buffer
signal(empty);   // one more free slot
```
⚠️ Common Problems with Semaphores
| Problem | Description |
|---|---|
| Deadlock | Processes wait forever due to improper semaphore order |
| Starvation | A process never gets a chance to execute |
| Busy Waiting | Waiting process uses CPU cycles (solved by blocking) |
🧩 Semaphore vs Mutex
| Feature | Semaphore | Mutex |
|---|---|---|
| Value Range | Integer (0 or more) | Binary (0 or 1) |
| Multiple Processes | Can allow multiple access | Allows only one at a time |
| Signaling | Can be signaled by any process | Only the owner can unlock |
| Used For | Resource counting or mutual exclusion | Strict mutual exclusion only |
📌 Summary
- Semaphores are essential tools for process synchronization.
- They help in mutual exclusion, resource sharing, and inter-process communication.
- Proper use prevents race conditions, but incorrect use can lead to deadlocks and starvation.
🧩 Topic: Classical Problems of Synchronization
🧠 What are Classical Synchronization Problems?
- These are standard problems used to test and design synchronization mechanisms like semaphores and mutexes.
- They help in understanding process coordination, deadlocks, and mutual exclusion in concurrent systems.
🧩 1. The Bounded Buffer Problem (Producer–Consumer Problem)
🔹 Problem:
- One or more producers generate data and put it in a shared buffer.
- One or more consumers take data from the buffer.
- The buffer has finite size → Can be full or empty.
🧪 Goal:
- Ensure that:
  - Producers don’t write when the buffer is full.
  - Consumers don’t read when the buffer is empty.
  - Access to the buffer is mutually exclusive.
🔐 Solution Using Semaphores:
```
Semaphore mutex = 1;
Semaphore empty = n;   // Buffer size
Semaphore full = 0;
```
Producer:
```
wait(empty);
wait(mutex);
// Add item to buffer
signal(mutex);
signal(full);
```
Consumer:
```
wait(full);
wait(mutex);
// Remove item from buffer
signal(mutex);
signal(empty);
```
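A runnable C version of the same scheme, using POSIX semaphores and a small circular buffer (buffer size and item counts are illustrative; compile with `-pthread` on Linux; error checks omitted):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                          // buffer size

int buffer[N], in = 0, out = 0;      // circular buffer with insert/remove positions
sem_t mutex, empty, full;

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);            // wait for a free slot
        sem_wait(&mutex);            // lock the buffer
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);            // unlock the buffer
        sem_post(&full);             // one more filled slot
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);             // wait for a filled slot
        sem_wait(&mutex);            // lock the buffer
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);            // unlock the buffer
        sem_post(&empty);            // one more free slot
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);          // mutex = 1
    sem_init(&empty, 0, N);          // empty = n
    sem_init(&full, 0, 0);           // full = 0

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```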
🧩 2. The Readers–Writers Problem
🔹 Problem:
- Multiple reader processes and writer processes share data.
- Readers can read simultaneously.
- Writers need exclusive access (no other readers or writers allowed).
🧪 Goal:
- Allow multiple readers at a time.
- Only one writer allowed at a time.
- Avoid reader or writer starvation.
🔐 Basic Solution:
```
Semaphore mutex = 1, wrt = 1;
int readCount = 0;
```
Reader:
```
wait(mutex);
readCount++;
if (readCount == 1)
    wait(wrt);       // first reader locks out writers
signal(mutex);

// Reading section

wait(mutex);
readCount--;
if (readCount == 0)
    signal(wrt);     // last reader lets writers back in
signal(mutex);
```
Writer:
```
wait(wrt);
// Writing section
signal(wrt);
```
🧠 Note: this is the readers-preference version; a continuous stream of readers can starve writers, so writer-preference or fair variants are used when that matters.
🧩 3. The Dining Philosophers Problem
🔹 Problem:
- Five philosophers sit around a table, with one fork placed between each pair of neighbours (five forks in total).
- Each philosopher alternates between thinking and eating.
- To eat, a philosopher must pick up both left and right forks.
🧪 Goal:
- Prevent deadlock (where all philosophers hold one fork and wait forever).
- Prevent starvation (where a philosopher never gets to eat).
🔐 Solution (Basic with Semaphores):
```
Semaphore forks[5] = {1, 1, 1, 1, 1};
```
Philosopher i:
```
wait(forks[i]);            // Pick up left fork
wait(forks[(i+1) % 5]);    // Pick up right fork
// Eat
signal(forks[i]);          // Put down left fork
signal(forks[(i+1) % 5]);  // Put down right fork
```
🧠 This basic version can deadlock: if all five philosophers pick up their left fork at the same time, each waits forever for the right one.

✅ Improved solutions:
- Limit the number of philosophers who may try to eat at the same time (e.g., at most four).
- Use asymmetric fork picking (e.g., odd-numbered philosophers pick up the right fork first), as sketched below.
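A sketch of the asymmetric variant in the same pseudocode style: odd-numbered philosophers pick up the right fork first, which breaks the circular wait and removes the deadlock.

```
Semaphore forks[5] = {1, 1, 1, 1, 1};

// Philosopher i
if (i % 2 == 0) {              // even: left fork first
    wait(forks[i]);
    wait(forks[(i + 1) % 5]);
} else {                       // odd: right fork first
    wait(forks[(i + 1) % 5]);
    wait(forks[i]);
}
// Eat
signal(forks[i]);
signal(forks[(i + 1) % 5]);
```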
🧩 4. The Sleeping Barber Problem
🔹 Problem:
- A barber sleeps if no customer is in the shop.
- When a customer arrives:
  - If all chairs are full → the customer leaves.
  - If a seat is available → the customer sits and waits.
🧪 Goal:
- Synchronize barber and customers.
- Manage the waiting room (bounded buffer) and barber’s chair (mutex), as in the sketch below.
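A common semaphore-based sketch (the names customers, barberReady, mutex, and the constant N for the number of waiting chairs follow the usual textbook presentation):

```
Semaphore customers = 0;     // number of waiting customers; barber sleeps on this
Semaphore barberReady = 0;   // barber signals readiness to cut hair
Semaphore mutex = 1;         // protects the 'waiting' counter
int waiting = 0;             // customers sitting in the N waiting chairs

// Barber
while (true) {
    wait(customers);         // sleep until a customer arrives
    wait(mutex);
    waiting--;               // take one customer from the waiting room
    signal(mutex);
    signal(barberReady);     // invite the customer to the barber's chair
    // Cut hair
}

// Customer
wait(mutex);
if (waiting < N) {
    waiting++;               // take a seat
    signal(mutex);
    signal(customers);       // wake the barber if asleep
    wait(barberReady);       // wait until the barber is ready
    // Get haircut
} else {
    signal(mutex);           // shop full: leave
}
```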
📌 Summary Table
| Problem | Resource Type | Goal | Common Tools |
|---|---|---|---|
| Bounded Buffer | Shared buffer | Mutual exclusion, full/empty | Semaphore |
| Readers–Writers | Shared database | Readers parallel, writers alone | Semaphore, mutex |
| Dining Philosophers | Forks (shared resources) | Deadlock-free eating | Semaphore, monitor |
| Sleeping Barber | Chairs and barber | Customer/barber sync | Semaphore, queue |
✅ Summary
- These classical problems help test and understand:
  - Critical sections
  - Mutual exclusion
  - Deadlocks & starvation
- They are solved using semaphores, mutexes, and monitors.
🧩 Topic: Interprocess Communication (IPC)
🧠 What is Interprocess Communication (IPC)?
- Interprocess Communication (IPC) allows multiple processes to communicate with each other.
- Used for data sharing, event signaling, and coordination in multitasking or distributed systems.
🎯 Why IPC is Needed?
- Processes are independent and often run in isolation.
- IPC allows:
  - Sharing information between processes
  - Synchronizing process actions
  - Avoiding conflicts over shared resources
  - Communication between user applications or between kernel and user space
🧩 Types of IPC Models
| Model | Description |
|---|---|
| Message Passing | Processes communicate by sending and receiving messages |
| Shared Memory | Processes share a common memory segment and read/write data there |
🔁 1. Message Passing IPC
📌 Features:
- No shared memory
- Communication via send() and receive()
- Good for distributed systems
💡 Operations:
```
send(P, message);      // Send message to process P
receive(Q, message);   // Receive message from process Q
```
➕ Advantages:
- Easy to implement
- Well-suited for remote communication
➖ Disadvantages:
- Slower due to kernel involvement
- Overhead in copying messages
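As a concrete example of message passing, a minimal sketch using POSIX message queues on Linux (the queue name /demo_queue is a placeholder; link with `-lrt`; error checks omitted):

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);

    char out[] = "hello";
    mq_send(mq, out, strlen(out) + 1, 0);    // send(P, message)

    char in[64];                             // must be at least mq_msgsize bytes
    mq_receive(mq, in, sizeof(in), NULL);    // receive(Q, message)
    printf("received: %s\n", in);

    mq_close(mq);
    mq_unlink("/demo_queue");                // remove the queue name
    return 0;
}
```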
🔗 2. Shared Memory IPC
📌 Features:
- Memory segment is mapped into address space of two or more processes
- Fast communication as data is directly accessed
💡 Steps:
- Process A creates a shared memory segment.
- Process B attaches to that segment.
- Both read/write to it using synchronization mechanisms (like semaphores).
➕ Advantages:
- Faster than message passing (no system calls after setup)
- Better for large data transfers
➖ Disadvantages:
- Synchronization must be handled manually
- Security risks if not handled properly
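A minimal sketch of shared-memory IPC using POSIX shm_open and mmap on Linux (the segment name /demo_shm is a placeholder; link with `-lrt` on older systems; error checks and the semaphore-based synchronization mentioned above are omitted for brevity):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);  // create/open the segment
    ftruncate(fd, 4096);                                      // set its size
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(buf, "shared data");    // another process mapping "/demo_shm" sees this
    printf("%s\n", buf);

    munmap(buf, 4096);
    close(fd);
    shm_unlink("/demo_shm");       // remove the segment name
    return 0;
}
```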
🧃 Example IPC Mechanisms in OS
| Mechanism | OS Tool / Function |
|---|---|
| Pipes | One-way data channel for parent-child communication |
| Named Pipes (FIFOs) | Pipe with a name; can be used by unrelated processes |
| Message Queues | Kernel-managed message storage |
| Shared Memory | Memory segment mapped into multiple processes |
| Semaphores | For synchronizing access to shared memory |
| Sockets | Network-based IPC (local and remote) |
| Signals | Notify a process of an event (lightweight communication) |
🧪 Example: Using Pipes in UNIX/Linux
```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int pipefd[2];
    char buffer[16];
    pipe(pipefd);                         // pipefd[0]: read end, pipefd[1]: write end
    if (fork() == 0) {                    // Child
        close(pipefd[1]);                 // Close unused write end
        read(pipefd[0], buffer, sizeof(buffer));
        printf("Child read: %s\n", buffer);
    } else {                              // Parent
        close(pipefd[0]);                 // Close unused read end
        write(pipefd[1], "Hello", 6);     // 6 bytes: "Hello" plus the terminating '\0'
    }
    return 0;
}
```
📌 Comparison: Message Passing vs Shared Memory
| Feature | Message Passing | Shared Memory |
|---|---|---|
| Speed | Slower | Faster |
| Ease of Use | Easier | Requires manual sync |
| Synchronization | Built-in | Must be done by developer |
| Scalability | Good for distributed systems | Best for local systems |
✅ Summary
- IPC allows processes to exchange data and coordinate actions.
- Two main methods: Message Passing and Shared Memory
- Operating systems provide tools like pipes, semaphores, message queues, and sockets for IPC.
- Proper synchronization is essential to avoid data inconsistency or deadlocks.