📘 Operating System – Unit 3: Memory Management
🧩 Topic: Single Contiguous Memory Management
🧠 What is Memory Management?
Memory management refers to the way an Operating System (OS) handles the allocation and deallocation of main memory (RAM) to various processes during execution.
🔹 What is Single Contiguous Memory Management?
- It is the simplest memory management technique.
- The entire RAM is treated as one single block (contiguous space).
- The OS divides memory into two parts:
  - One part for the Operating System
  - One part for a single user process
🧱 Memory Layout:
+-----------------------------+
| Operating System            |  ← Low memory address
+-----------------------------+
| User Program (Process)      |
+-----------------------------+  ← High memory address
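To make the allocation rule concrete, here is a minimal sketch in C of the check an OS using single contiguous allocation might perform before loading a program. The sizes, names, and function are illustrative assumptions, not taken from any specific OS.

```c
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_RAM_KB   64   /* assumed total RAM              */
#define OS_RESERVED_KB 16   /* assumed space taken by the OS  */

/* The single user area is everything above the OS region. */
static const int user_area_kb = TOTAL_RAM_KB - OS_RESERVED_KB;

/* Returns true if the single user partition can hold the program. */
bool can_load(int program_kb) {
    return program_kb <= user_area_kb;
}

int main(void) {
    int program_kb = 40;                      /* example program size */
    if (can_load(program_kb))
        printf("Load program at base of user area (%d KB available)\n", user_area_kb);
    else
        printf("Program too large: %d KB needed, %d KB available\n",
               program_kb, user_area_kb);
    return 0;
}
```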
📌 Key Features
| Feature | Description |
|---|---|
| Single user process | Only one program runs at a time |
| Fixed memory size | Memory is allocated as one large block |
| No external fragmentation | With only one process, free memory is never broken into scattered blocks; any unused space in the user area simply sits idle |
| No multitasking | System cannot run multiple processes together |
✅ Advantages
- Simple to implement
- Low overhead
- Useful for embedded systems or early OSes such as MS-DOS
❌ Disadvantages
| Limitation | Impact |
|---|---|
| No multiprogramming | Only one process runs at a time |
| Poor resource utilization | CPU and memory may be underused |
| No process protection | The process can access any part of memory, including the OS area |
| No dynamic resizing | The memory division is fixed and cannot adjust automatically |
🧠 Where is it used?
- Early computers and simple embedded devices
- Educational simulators
- Systems where simplicity and predictability matter more than performance
✅ Summary
- Single contiguous memory management is the simplest form of memory management.
- The OS and one user process share memory as a contiguous (unbroken) block.
- Easy to implement, but it supports only one process at a time, with no protection or multitasking.
🧩 Topic: Fixed Partitioned Memory Management
🧠 What is Fixed Partitioning?
- Fixed Partitioned Memory Management divides main memory into fixed-sized partitions at system startup.
- Each partition can hold only one process at a time.
- Partition sizes are predefined and do not change dynamically.
🧱 Memory Layout Example
Let's say total RAM = 16 KB, and it is divided like this:
+-----------------------------------------------+
| OS (4 KB)                                     |
+-----------------------------------------------+
| Partition 1 (4 KB) ← P1 (2 KB) → 2 KB wasted  |
+-----------------------------------------------+
| Partition 2 (4 KB) ← P2 (4 KB)                |
+-----------------------------------------------+
| Partition 3 (4 KB) ← P3 (3 KB) → 1 KB wasted  |
+-----------------------------------------------+
🔹 Key Characteristics
| Feature | Description |
|---|---|
| Predefined partitions | Set at boot time |
| One process per partition | No sharing; no overlap |
| Simple allocation | OS looks for the first free partition large enough |
| Internal Fragmentation | Space inside partition goes unused if process is smaller |
📂 Types of Fixed Partitioning
- Equal-size Partitions
  - All partitions are of the same size
  - Easy to manage
  - High internal fragmentation
- Unequal-size Partitions
  - Partitions are of different sizes
  - Reduces internal fragmentation
  - OS must match each process to a suitable partition
🧮 Advantages
✅ Simple and fast
✅ Easy to implement
✅ No overhead of dynamic allocation
⚠️ Disadvantages
| Issue | Description |
|---|---|
| Internal fragmentation | Wasted space within an allocated partition |
| Limited multitasking | Number of partitions = maximum number of processes |
| Rigid structure | Can't adapt to process size at runtime |
| Memory wastage | Severe when there are many small processes, or no partition large enough for a big one |
📌 Example Problem
- RAM = 16 KB
- 4 partitions of 4 KB each
- P1 = 2 KB, P2 = 5 KB, P3 = 4 KB
🔸 P2 cannot be allocated (no partition is large enough)
🔸 P1 and P3 are assigned; P1's partition wastes 2 KB, P3 fits exactly
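The allocation attempt in this example can be sketched in C as a "first free partition that is large enough" search. The structures and output are illustrative, assuming the 4 × 4 KB layout above.

```c
#include <stdio.h>

#define NUM_PARTITIONS 4
#define PARTITION_KB   4

/* 0 means the partition is free; otherwise it stores the resident process size. */
static int partition_used_kb[NUM_PARTITIONS] = {0};

/* Place a process in the first free partition that is large enough.
   Returns the partition index, or -1 if nothing fits. */
int allocate(int process_kb) {
    if (process_kb > PARTITION_KB)
        return -1;                          /* no partition can ever hold it */
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (partition_used_kb[i] == 0) {
            partition_used_kb[i] = process_kb;
            return i;
        }
    }
    return -1;
}

int main(void) {
    int sizes[] = {2, 5, 4};                /* P1, P2, P3 from the example */
    for (int p = 0; p < 3; p++) {
        int idx = allocate(sizes[p]);
        if (idx >= 0)
            printf("P%d (%d KB) -> partition %d, %d KB wasted\n",
                   p + 1, sizes[p], idx, PARTITION_KB - sizes[p]);
        else
            printf("P%d (%d KB) cannot be allocated\n", p + 1, sizes[p]);
    }
    return 0;
}
```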
✅ Summary
- Fixed partitioning divides memory into blocks of fixed size.
- It is simple but causes internal fragmentation and limits the number of processes.
- Useful for simple batch systems, but not used in modern multitasking OSes.
🧩 Topic: Swapping
🧠 What is Swapping?
- Swapping is a memory management technique where processes are moved between main memory (RAM) and secondary storage (disk).
- It allows the OS to temporarily remove a process from memory when it is not needed and bring it back later when required.
👉 This helps support multiprogramming, even with limited RAM.
🧱 How Swapping Works
- A process is executing in memory.
- The OS decides it needs to free memory.
- The process is swapped out → its memory image is copied to disk (swap space).
- When needed again, it is swapped in → copied back from disk to memory.
🔁 Swapping Diagram
+----------------------+        +-----------------------+
|     Main Memory      |        |   Disk (Swap Area)    |
+----------------------+        +-----------------------+
| OS                   |        |                       |
| P1 (Running)         |        | P2 (Swapped Out)      |
| P3 (Running)         |        |                       |
+----------------------+        +-----------------------+
           ←  Swap P2  →
🧩 Key Terms
| Term | Meaning |
|---|---|
| Swap Space | Area on disk used to store swapped-out processes |
| Swap In | Bring process back from disk to memory |
| Swap Out | Move process from memory to disk |
| Context Switch | Switching the CPU from one process to another, which may involve swapping a process in or out |
🎯 When is Swapping Used?
- When RAM is full
- For background or low-priority processes
- In virtual memory systems
- To increase multiprogramming (more processes than available RAM)
🧮 Advantages of Swapping
✅ Allows more processes to be managed
✅ Improves CPU utilization
✅ Helps in load balancing and memory reuse
⚠️ Disadvantages of Swapping
| Issue | Description |
|---|---|
| High Disk I/O | Swapping uses the disk, which is slower than RAM |
| Process Delay | Swapping in/out causes waiting and latency |
| Thrashing | Too much swapping causes the system to slow down |
💡 Example:
Assume:
- RAM size = 4 GB
- 5 processes need 1 GB each
- Only 4 can stay in RAM at a time
👉 The OS swaps one process out when a new one comes in.
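As a rough illustration of the bookkeeping behind this example, the C sketch below uses invented structures and a deliberately naive "swap out the first resident process" victim policy; it is not how any real OS picks victims.

```c
#include <stdio.h>

#define RAM_GB 4

typedef struct {
    char name[8];
    int  size_gb;
    int  in_ram;      /* 1 = resident in RAM, 0 = swapped out to disk */
} Process;

static Process procs[5] = {
    {"P1", 1, 1}, {"P2", 1, 1}, {"P3", 1, 1}, {"P4", 1, 1}, {"P5", 1, 0}
};

static int ram_used(void) {
    int used = 0;
    for (int i = 0; i < 5; i++)
        if (procs[i].in_ram) used += procs[i].size_gb;
    return used;
}

/* Toy victim policy: swap out the first resident process found. */
static void swap_out_one(void) {
    for (int i = 0; i < 5; i++) {
        if (procs[i].in_ram) {
            procs[i].in_ram = 0;
            printf("Swapped out %s to disk\n", procs[i].name);
            return;
        }
    }
}

int main(void) {
    Process *newcomer = &procs[4];            /* P5 wants to run */
    while (ram_used() + newcomer->size_gb > RAM_GB)
        swap_out_one();                       /* free space on demand */
    newcomer->in_ram = 1;
    printf("Swapped in %s (RAM used: %d GB)\n", newcomer->name, ram_used());
    return 0;
}
```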
🧠 Swapping in Modern OS
- Used together with paging and virtual memory
- Often called "paging to disk" or "virtual memory swapping"
- Linux: swap area is /swapfile or a separate swap partition
- Windows: pagefile.sys
✅ Summary
- Swapping moves processes between main memory and disk to optimize memory usage.
- It helps the OS support more processes than can physically fit in RAM.
- Swapping increases flexibility but may cause performance overhead if overused.
🧩 Topic: Single and Multiple Partition Allocation
🧠 What is Partition Allocation?
- Partition allocation is a memory management technique where main memory is divided into partitions, and each partition is allocated to a process.
- It can be done using either:
  - Single Partition Allocation
  - Multiple Partition Allocation
🔹 1. Single Partition Allocation
🧱 Description:
- Memory is split into two parts:
  - One part for the Operating System
  - The remaining memory for one user process
🖼️ Memory Layout:
+------------------+  ← Low Address
| Operating System |
+------------------+
|   User Process   |
+------------------+  ← High Address
✅ Advantages:
- Simple and easy to implement
❌ Disadvantages:
- Only one user process at a time
- No multitasking
- Poor CPU and memory utilization
🔹 2. Multiple Partition Allocation
- Memory is divided into multiple partitions, and multiple processes can be loaded into memory simultaneously.
🔸 Types:
🧩 a) Fixed Partition Allocation
- Memory is divided into fixed-size blocks (partitions) during system startup.
- Each partition can hold exactly one process.
✅ Advantages:
- Simple implementation
- No need for dynamic partition management
❌ Disadvantages:
- Internal fragmentation (unused space inside a partition)
- The number of concurrent processes is limited to the number of partitions
🧩 b) Variable Partition Allocation
- Memory is allocated dynamically based on the exact size of each process.
- More flexible than fixed partitioning.
✅ Advantages:
- No internal fragmentation (less wasted space)
- More efficient memory utilization
❌ Disadvantages:
- Can cause external fragmentation (free memory scattered in small chunks)
- Requires compaction to reduce fragmentation
🧠 Key Terms
| Term | Meaning |
|---|---|
| Internal Fragmentation | Wasted memory within an allocated partition |
| External Fragmentation | Free memory exists but not contiguous, cannot be used |
| Compaction | Rearranging memory contents to eliminate fragmentation |
📊 Comparison Table: Single vs Multiple Partition Allocation
| Feature | Single Partition | Multiple Partition |
|---|---|---|
| No. of processes | Only 1 | Multiple |
| Utilization | Poor | Better |
| Fragmentation | None (unused space is simply wasted) | Internal or external possible |
| Flexibility | Rigid | More flexible |
| Complexity | Very simple | Slightly more complex |
✅ Summary
- Single partition allocation is simple but supports only one process.
- Multiple partition allocation allows multiprogramming:
  - Fixed: easier to manage but causes internal fragmentation
  - Variable: more efficient but causes external fragmentation
🧩 Topic: Relocation and Address Translation
🧠 1. What is Relocation?
Relocation refers to the process of moving a program from one memory location to another at load time or during execution.
🧾 Why Relocation is Needed:
- Programs are compiled assuming they start at address 0.
- But in real systems, many programs run simultaneously, so they may be loaded anywhere in memory.
- Thus, the logical addresses used in programs must be relocated (translated) to physical addresses.
🧩 2. Address Binding
Address binding is the process of mapping logical addresses to physical addresses.
| Binding Time | Description |
|---|---|
| Compile Time | If memory location is known in advance |
| Load Time | Address calculated when program is loaded into memory |
| Execution Time | Binding done during execution using hardware support |
🧠 Relocation is usually handled at load time or execution time.
🔁 3. Logical vs Physical Address
| Type | Description |
|---|---|
| Logical Address | Address generated by the CPU/program (also called Virtual Address) |
| Physical Address | Actual location in main memory (RAM) |
👉 A logical address must be translated to a physical address using an address translation mechanism.
🧮 4. Address Translation Formula
When using a base register (relocation register):
Physical Address = Logical Address + Base Register
- The Base Register contains the starting physical address of the process.
- All addresses used by the program are relative to 0.
🖼️ Example:
- Base Register = 1000
- Logical Address = 120
👉 Physical Address = 1000 + 120 = 1120
🛠️ 5. Hardware Support for Relocation
- The MMU (Memory Management Unit) is responsible for run-time address translation.
- The CPU generates a logical address → the MMU translates it to a physical address using base (and sometimes limit) registers.
🧾 6. Limit Register (Optional)
- Used to define the size of the memory block allocated to a process.
- Ensures that a process cannot access memory beyond its limit.
if (logical_address < limit)
    physical_address = logical_address + base;   /* valid access */
else
    trap();   /* memory violation error */
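A self-contained version of this check, as a small C sketch: the base value follows the example above, the limit of 500 is an assumed process size, and the trap is simplified to an error message.

```c
#include <stdio.h>
#include <stdlib.h>

/* Relocation (base) and limit registers for the running process.
   A real MMU performs this comparison and addition in hardware. */
static const unsigned int base_reg  = 1000;
static const unsigned int limit_reg = 500;    /* assumed process size */

/* Translate a logical address, trapping on a limit violation. */
unsigned int translate(unsigned int logical) {
    if (logical >= limit_reg) {
        fprintf(stderr, "Trap: memory violation at logical address %u\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + base_reg;
}

int main(void) {
    printf("Logical 120 -> Physical %u\n", translate(120));   /* prints 1120 */
    return 0;
}
```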
✅ Summary
- Relocation allows a program to be placed anywhere in memory.
- Address translation maps logical (virtual) addresses to physical addresses.
- Achieved using a base register or MMU.
- Helps support multiprogramming and process isolation.
🧩 Topic: Variable Partitioning (Dynamic Partitioning)
🧠 What is Variable Partitioning?
- In variable partitioning, memory is divided dynamically based on the size of incoming processes.
- Unlike fixed partitioning, there are no predefined partition sizes.
- Partitions are created at runtime, according to the memory needed by each process.
🖼️ Example:
+------------------------+  ← Initially
| OS (Fixed Size)        |
+------------------------+
| Free Memory (Dynamic)  |
+------------------------+
After allocating P1 (4 KB) and P2 (6 KB):
+------------------------+
| OS                     |
+------------------------+
| P1 (4 KB)              |
+------------------------+
| P2 (6 KB)              |
+------------------------+
| Free (Remaining RAM)   |
+------------------------+
📌 Key Characteristics
| Feature | Description |
|---|---|
| No fixed partitions | Partitions created dynamically |
| Better utilization | Matches process size exactly |
| External fragmentation | Can occur as small free spaces get scattered |
| Requires compaction | To merge small free blocks |
⚙️ Allocation Methods
🔸 1. First Fit
- Allocate the first free block that is large enough.
- Fast, but may leave small gaps.
🔸 2. Best Fit
- Allocate the smallest free block that is just large enough.
- Reduces wastage, but is slower and may create many tiny holes.
🔸 3. Worst Fit
- Allocate the largest available block.
- Leaves larger leftover holes, in the hope that they remain useful for future requests.
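A minimal first-fit sketch over an array-based free list; the hole sizes are invented, and best fit or worst fit would differ only in how the candidate hole is chosen.

```c
#include <stdio.h>

#define NUM_HOLES 4

/* Sizes (KB) of free holes in memory, in address order. */
static int hole_kb[NUM_HOLES] = {3, 8, 2, 6};

/* First fit: return the index of the first hole large enough, or -1. */
int first_fit(int request_kb) {
    for (int i = 0; i < NUM_HOLES; i++)
        if (hole_kb[i] >= request_kb)
            return i;
    return -1;
}

int main(void) {
    int request_kb = 5;
    int i = first_fit(request_kb);
    if (i >= 0) {
        printf("Allocate %d KB from hole %d (%d KB); %d KB left over\n",
               request_kb, i, hole_kb[i], hole_kb[i] - request_kb);
        hole_kb[i] -= request_kb;          /* shrink the chosen hole */
    } else {
        printf("No hole large enough for %d KB\n", request_kb);
    }
    return 0;
}
```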
❌ Problems in Variable Partitioning
🔍 External Fragmentation
- Occurs when free memory is scattered in small chunks.
- The total free memory may be enough for a process, but it is not contiguous.
🔁 Compaction
- The process of rearranging memory contents to combine the free spaces into one block.
- Example:
```
Before Compaction: P1 | Hole(2KB) | P2 | Hole(3KB) | P3
After Compaction:  P1 | P2 | P3 | Hole(5KB)
```
✅ Advantages
| Benefit | Description |
|---|---|
| Better memory utilization | Less internal fragmentation |
| Flexible | Adapts to process sizes dynamically |
❌ Disadvantages
| Limitation | Description |
|---|---|
| External fragmentation | Small gaps lead to wasted memory |
| Compaction overhead | Time-consuming rearrangement |
| Slower allocation | Search for suitable partition can take time |
📊 Comparison: Fixed vs Variable Partitioning
| Feature | Fixed Partitioning | Variable Partitioning |
|---|---|---|
| Partition Size | Fixed, predefined | Dynamic, varies with process |
| Flexibility | Low | High |
| Fragmentation Type | Internal | External |
| Compaction Needed | No | Yes |
| Memory Utilization | Poor (if process is small) | Better |
✅ Summary
- Variable Partitioning improves memory utilization by allocating memory blocks exactly as needed.
- It suffers from external fragmentation, which is managed by compaction.
- Memory is allocated using First Fit, Best Fit, or Worst Fit strategies.
🧩 Topic: Non-Contiguous Memory Allocation
🧠 What is Non-Contiguous Memory Allocation?
- In non-contiguous allocation, a process is allowed to occupy memory in separate (non-adjacent) blocks.
- This approach improves memory utilization and, in the case of paging, eliminates external fragmentation.
- Logical memory is divided into sections and mapped to free spaces in RAM.
📌 Why Use Non-Contiguous Allocation?
- To support multiprogramming with limited memory.
- To avoid the problem of finding large contiguous memory blocks.
- To allow dynamic growth of processes.
🔁 Key Techniques of Non-Contiguous Allocation
| Technique | Description |
|---|---|
| Paging | Memory is divided into fixed-size pages |
| Segmentation | Memory is divided into logical segments |
| Paging + Segmentation | Hybrid model combining both techniques |
🔸 1. Paging
✅ Key Points:
- Physical memory is divided into fixed-size frames
- Logical memory is divided into pages of the same size
- The OS maintains a page table to map pages → frames
📌 Example:
| Page (logical) | Frame (physical) |
|---|---|
| Page 0 | Frame 5 |
| Page 1 | Frame 2 |
| Page 2 | Frame 9 |
No need for contiguous memory!
✅ Solves external fragmentation
❌ May cause internal fragmentation
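To show how the page table is used, here is a small C sketch of paging address translation. The page size and the "resolve a logical address" flow are illustrative assumptions; the frame numbers match the example table above.

```c
#include <stdio.h>

#define PAGE_SIZE 1024                 /* assumed page size in bytes */

/* Page table from the example: page 0 -> frame 5, 1 -> 2, 2 -> 9. */
static const int page_table[] = {5, 2, 9};

/* Translate a logical address into a physical address. */
unsigned int translate(unsigned int logical) {
    unsigned int page   = logical / PAGE_SIZE;   /* page number     */
    unsigned int offset = logical % PAGE_SIZE;   /* offset in page  */
    unsigned int frame  = page_table[page];
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    unsigned int logical = 2 * PAGE_SIZE + 100;  /* page 2, offset 100 */
    printf("Logical %u -> Physical %u\n", logical, translate(logical));
    return 0;                                    /* 9*1024 + 100 = 9316 */
}
```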
🔸 2. Segmentation
✅ Key Points:
- Memory is divided into logical segments (e.g., code, data, stack)
- Each segment has its own base and limit value
- Logical address = <segment number, offset>
📌 Example:
| Segment | Purpose | Base | Limit |
|---|---|---|---|
| 0 | Code | 1000 | 400 |
| 1 | Data | 1400 | 300 |
| 2 | Stack | 1700 | 200 |
✅ Matches the logical structure of a program
❌ Can suffer from external fragmentation
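A corresponding C sketch for segmentation, using the base/limit values from the table above; the trap path is simplified to an error message.

```c
#include <stdio.h>
#include <stdlib.h>

/* Segment table from the example: {base, limit} for code, data, stack. */
static const struct { unsigned int base, limit; } seg_table[] = {
    {1000, 400}, {1400, 300}, {1700, 200}
};

/* Translate <segment, offset> into a physical address. */
unsigned int translate(unsigned int seg, unsigned int offset) {
    if (offset >= seg_table[seg].limit) {        /* beyond the segment? */
        fprintf(stderr, "Trap: offset %u exceeds limit of segment %u\n",
                offset, seg);
        exit(EXIT_FAILURE);
    }
    return seg_table[seg].base + offset;
}

int main(void) {
    printf("<1, 50> -> Physical %u\n", translate(1, 50));   /* 1400 + 50 = 1450 */
    return 0;
}
```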
🔸 3. Combined Paging and Segmentation
- Each segment is divided into pages
- Logical Address = <segment number, page number, offset>
- This gives the logical benefits of segmentation and the physical efficiency of paging
📊 Comparison: Contiguous vs Non-Contiguous Allocation
| Feature | Contiguous Allocation | Non-Contiguous Allocation |
|---|---|---|
| Memory layout | Adjacent memory blocks | Separate memory blocks |
| Fragmentation | External (variable) | Paging: internal, Segmentation: external |
| Flexibility | Low | High |
| Overhead | Less | More (needs page/segment tables) |
| Performance | Fast (simple mapping) | Slightly slower (due to address translation) |
✅ Advantages of Non-Contiguous Allocation
- ✅ Efficient memory utilization
- ✅ Solves external fragmentation (especially in paging)
- ✅ Supports multiprogramming
❌ Disadvantages
- ❌ Needs complex hardware support (MMU)
- ❌ Extra time for address translation
- ❌ May cause internal fragmentation (in paging)
✅ Summary
- Non-contiguous memory allocation allows a process to be placed in different memory locations.
- Uses techniques like paging and segmentation.
- Increases flexibility and reduces fragmentation, but requires more OS management and hardware support.
🧩 Topic: Combined Paging and Segmentation (Page Segmentation)
🧠 What is Page Segmentation?
- Page segmentation is a hybrid memory management technique that combines the features of:
  - Segmentation: divide memory based on logical sections (code, data, stack, etc.)
  - Paging: divide each segment into fixed-size pages and store them in non-contiguous physical memory
👉 It provides both:
- Logical organization (via segmentation)
- Efficient memory use (via paging)
🧱 Structure
- The logical address is divided into three parts:
<Segment Number, Page Number, Offset>
🗂️ Components:
| Part of Address | Meaning |
|---|---|
| Segment Number (s) | Identifies the segment (code, data, stack, etc.) |
| Page Number (p) | Identifies the page within that segment |
| Offset (d) | Offset within the page |
🧰 Memory Structures Used
- Segment Table
  - Each entry points to the page table of that segment
- Page Table (one per segment)
  - Maps the pages of the segment to physical frames
- Frame Table (RAM)
  - Shows where the pages are stored physically
🔁 Address Translation Steps
- The Segment Number is used to find the base address of that segment's page table
- The Page Number is used to find the frame number
- The Offset is added to the frame base to get the physical address
🖼️ Diagram:
Logical Address: <s, p, d>
        ↓
Segment Table → Page Table of segment s
        ↓
Page Table Entry p → Frame Number
        ↓
Physical Address = Frame Base + d
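The two-level lookup can be sketched in C as follows; the page size, table sizes, and contents are invented for illustration, and a real MMU would cache these lookups in a TLB.

```c
#include <stdio.h>

#define PAGE_SIZE 256                  /* assumed page size in bytes */

/* One page table per segment: page number -> frame number (toy values). */
static const int pt_code[] = {7, 3};
static const int pt_data[] = {1, 8, 4};
static const int *segment_table[] = {pt_code, pt_data};   /* s -> page table */

/* Translate <s, p, d> into a physical address. */
unsigned int translate(unsigned int s, unsigned int p, unsigned int d) {
    const int *page_table = segment_table[s];   /* step 1: segment table  */
    unsigned int frame    = page_table[p];      /* step 2: page table     */
    return frame * PAGE_SIZE + d;               /* step 3: add the offset */
}

int main(void) {
    /* Segment 1 (data), page 2, offset 20 -> frame 4 -> 4*256 + 20 = 1044 */
    printf("<1, 2, 20> -> Physical %u\n", translate(1, 2, 20));
    return 0;
}
```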
✅ Advantages
| Benefit | Description |
|---|---|
| Logical program view | Program is divided into logical segments |
| Efficient memory use | Segments are paged → no external fragmentation |
| Large address space support | Combines benefits of paging & segmentation |
❌ Disadvantages
| Limitation | Description |
|---|---|
| Complex address translation | Needs 2-level lookup (segment table + page table) |
| High overhead | Extra memory needed for storing tables |
| Slower access | More time to compute physical address |
🌍 Real-World Usage
- Used in systems that need:
  - A logical code structure
  - Efficient use of memory
- Examples: Multics OS, and the Intel x86 architecture (protected mode)
✅ Summary
- Page segmentation combines the logical structure of segmentation with the efficiency of paging.
- Logical address = <Segment Number, Page Number, Offset>
- Requires both segment and page tables for address translation.
- Useful in modern systems with complex memory requirements.
🧩 Topic: Paged Segmentation
🧠 What is Paged Segmentation?
Paged segmentation is a hybrid memory management technique that combines:
- Segmentation for dividing memory into logical sections (like code, data, stack), and
- Paging for dividing those segments into equal-sized pages, allowing them to be stored in non-contiguous frames in RAM.
🔁 Logical Address Structure
A logical address in paged segmentation is divided into three parts:
< Segment Number, Page Number within Segment, Offset within Page >
| Component | Description |
|---|---|
| Segment Number | Identifies the logical segment (e.g., code/data) |
| Page Number | Identifies the page within that segment |
| Offset | Displacement within that page |
🛠️ How It Works
- Segment Table: each entry points to the base address of the page table for that segment.
- Page Table (per segment): maps the logical pages of the segment to physical memory frames.
- Offset: added to the frame's base physical address to find the exact memory location.
🧭 Address Translation Steps
- Use the segment number to access the segment table
- From the segment table, get the base address of the page table
- Use the page number to find the frame number in that page table
- Combine the frame base with the offset to get the final physical address
🖼️ Diagram
Logical Address: <Segment, Page, Offset>
        ↓
Segment Table → Base Address of Page Table for that Segment
        ↓
Page Table Entry (Page No.) → Frame No.
        ↓
Physical Address = Frame Base + Offset
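Hardware typically encodes <segment, page, offset> as bit fields of a single logical address. The C sketch below splits a 32-bit logical address using assumed field widths (8-bit segment, 12-bit page, 12-bit offset); real architectures choose different widths.

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed layout: | 8-bit segment | 12-bit page | 12-bit offset | */
#define OFFSET_BITS 12
#define PAGE_BITS   12

int main(void) {
    uint32_t logical = 0x02345678;                       /* example address */
    uint32_t offset  = logical & ((1u << OFFSET_BITS) - 1);
    uint32_t page    = (logical >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t segment = logical >> (OFFSET_BITS + PAGE_BITS);

    printf("segment = %u, page = %u, offset = %u\n", segment, page, offset);
    return 0;    /* 0x02345678 -> segment 0x02, page 0x345, offset 0x678 */
}
```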
✅ Advantages of Paged Segmentation
| Advantage | Description |
|---|---|
| Logical structure | Maintains the program's logical division |
| No external fragmentation | Paging removes the need for contiguous memory |
| Efficient memory use | Pages stored in available free frames |
| Supports large programs | Segments + pages allow a large virtual address space |
❌ Disadvantages
| Limitation | Description |
|---|---|
| Complex translation | Requires both segment and page tables |
| Memory overhead | Tables consume additional memory |
| Slower access | Multiple lookups (segment table → page table → frame) |
📊 Comparison: Paging vs Segmentation vs Paged Segmentation
| Feature | Paging | Segmentation | Paged Segmentation |
|---|---|---|---|
| Division Basis | Fixed-size pages | Logical segments | Segments, then paged |
| Fragmentation | Internal | External | Internal (within pages) |
| Logical View | None | Maintained | Maintained |
| Address Form | Page + Offset | Segment + Offset | Segment + Page + Offset |
| Flexibility | Medium | High | Very High |
✅ Summary
- Paged segmentation gives the OS the logical clarity of segmentation and the memory efficiency of paging.
- Each segment is paged, and memory is allocated page-wise within each segment.
- Logical address: Segment + Page + Offset
- Requires hardware support for segment and page tables.
🧩 Topic: Virtual Memory
🧠 What is Virtual Memory?
Virtual memory is a memory management technique that allows a process to execute even if not all of its pages are loaded into physical RAM.
It creates the illusion of a very large memory space for each process by using secondary storage (usually a hard disk or SSD) as a temporary extension of RAM.
🎯 Purpose of Virtual Memory
- Run large programs that may not fit entirely in RAM
- Allow more processes to be loaded than physical memory alone could hold
- Improve CPU utilization and multiprogramming
🧱 How It Works
- The logical address space is divided into pages
- Only the required pages are loaded into physical memory
- The rest remain on disk (in the backing store or swap area)
- When a program accesses a page that is not in memory, a page fault occurs
- The OS loads the required page into RAM from disk
📌 Components Involved
| Component | Description |
|---|---|
| Page Table | Maintains mapping between virtual pages and physical frames |
| MMU (Memory Management Unit) | Hardware that translates virtual addresses to physical addresses |
| Swap Space | Area on disk used for storing parts of virtual memory |
🔁 Page Fault
- A page fault occurs when a process tries to access a page that is not in main memory.
- The OS handles it by:
  - Suspending the process
  - Bringing the page from swap space (disk) into RAM
  - Updating the page table
  - Resuming the process
🖼️ Diagram: Virtual Memory Overview
+-----------------------+
|     Process View      |  ← Virtual Address Space
|  +-----------------+  |
|  | Page 0          |  |
|  | Page 1          |  |
|  | Page 2          |  |
|  +-----------------+  |
+-----------------------+
        ↓ Address Translation (via MMU)
+--------------------------+      +----------------------+
|  Physical Memory (RAM)   |      |   Disk (Swap Area)   |
| +--------+  +--------+   |      | +--------+           |
| | Frame 0|  | Frame 2|   |      | | Page 1 | (swapped) |
| +--------+  +--------+   |      | +--------+           |
+--------------------------+      +----------------------+
📌 Key Features of Virtual Memory
| Feature | Description |
|---|---|
| Demand Paging | Load pages only when needed |
| Page Replacement | If memory is full, old pages are replaced using algorithms (e.g., LRU) |
| Logical View | Each process gets its own virtual address space |
| Isolation | Processes are isolated and can't access each other's memory |
✅ Advantages of Virtual Memory
| Advantage | Explanation |
|---|---|
| ✔️ Runs large applications | Even if RAM is smaller than the program size |
| ✔️ Increases multiprogramming | More programs can be kept in memory |
| ✔️ Better memory utilization | Only needed pages are loaded |
| ✔️ Protection and isolation | Each process gets its own virtual address space |
❌ Disadvantages of Virtual Memory
| Disadvantage | Explanation |
|---|---|
| ❌ Page fault overhead | Too many page faults can slow down the system |
| ❌ Disk I/O is slow | Disk access is much slower than RAM |
| ❌ Thrashing | Excessive paging can severely reduce performance |
| ❌ More hardware needed | Requires an MMU and extra storage (swap space) |
📊 Comparison: Physical Memory vs Virtual Memory
| Feature | Physical Memory (RAM) | Virtual Memory |
|---|---|---|
| Size | Limited | Large (uses disk space) |
| Speed | Fast | Slower (uses disk I/O) |
| Visibility | Actual location of data | Illusion for each process |
| Fault Handling | No page fault | Page faults possible |
🔁 Virtual Memory Techniques
- Demand Paging: load pages into memory only when they are needed.
- Page Replacement Algorithms: decide which page to remove when RAM is full (e.g., FIFO, LRU, Optimal).
- Thrashing Control: mechanisms to detect and prevent excessive page faults.
- Working Set Model: keep only the set of pages that a process is currently using.
✅ Summary
- Virtual memory allows execution of large programs by using disk space as a temporary extension of RAM.
- Only the required parts are loaded into physical memory using demand paging.
- It provides process isolation, flexibility, and efficient memory use, but can suffer from performance problems due to page faults and disk I/O.
🧩 Topic: Demand Paging
🧠 What is Demand Paging?
Demand paging is a memory management technique where the pages of a process are not loaded into RAM until they are actually needed during execution.
🧾 Only the pages that are demanded (accessed) are loaded; the others remain in secondary storage (disk).
🎯 Key Idea: Lazy Loading
- Load pages on demand, not in advance.
- If a program accesses a page that is not in RAM → a page fault occurs.
🔁 How Demand Paging Works
- The CPU generates a memory access request (logical address).
- The MMU checks whether the required page is in physical memory.
- If it is not found → page fault.
- The OS:
  - Suspends the process
  - Loads the required page from disk into RAM
  - Updates the page table
- Execution resumes from the faulting instruction.
🖼️ Diagram: Demand Paging Process
Logical Address
      ↓
+------------------+
|    Page Table    |
+------------------+
      ↓ (Page not found → Page Fault)
+---------------+        +------------------+
|      RAM      | ←───── |       Disk       |
|  Page loaded  |        | Remaining pages  |
+---------------+        +------------------+
🔁 What is a Page Fault?
A page fault occurs when a process tries to access a page that is not currently in memory.
The OS handles it by:
- Fetching the page from secondary storage (swap space or disk)
- Placing it in a free frame in RAM
- Resuming the process
📌 Valid / Invalid Bit in the Page Table
- Valid bit = 1 → the page is in memory
- Valid bit = 0 → the page is on disk (not yet loaded)
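A sketch of how the valid bit might sit in a page table entry, in C; the table contents and the fault handler are invented for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

/* One page table entry: a frame number plus a valid bit. */
typedef struct {
    unsigned int frame;   /* meaningful only when valid == true            */
    bool         valid;   /* true: page is in RAM; false: page is on disk  */
} PageTableEntry;

static PageTableEntry page_table[4] = {
    {5, true}, {0, false}, {9, true}, {0, false}
};

/* Simulated page fault handler: load the page and mark it valid. */
static void handle_page_fault(unsigned int page) {
    printf("Page fault on page %u: loading from disk...\n", page);
    page_table[page].frame = 3;        /* pretend frame 3 was free */
    page_table[page].valid = true;
}

int main(void) {
    unsigned int page = 1;
    if (!page_table[page].valid)       /* valid bit = 0 -> page fault */
        handle_page_fault(page);
    printf("Page %u is now in frame %u\n", page, page_table[page].frame);
    return 0;
}
```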
✅ Advantages of Demand Paging
| Advantage | Description |
|---|---|
| ✔️ Efficient memory use | Only needed pages are loaded |
| ✔️ Large programs can run | Even if the full program doesn't fit in RAM |
| ✔️ Supports multiprogramming | More processes can be kept in memory |
❌ Disadvantages of Demand Paging
| Disadvantage | Description |
|---|---|
| ❌ High page fault rate | Initially, many faults can occur (cold start) |
| ❌ Slow access time | Disk I/O is much slower than RAM |
| ❌ Thrashing | Excessive page faults reduce system performance |
📈 Performance of Demand Paging
- Performance is mainly determined by the page fault rate.
Let:
- ma = memory access time
- pf = page fault rate (0 ≤ pf ≤ 1)
- pf_time = time to handle a page fault (usually very high)
Effective Access Time (EAT):
EAT = (1 - pf) × ma + pf × pf_time
❗ If pf is high, EAT increases drastically.
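A quick worked example with assumed, purely illustrative numbers: ma = 100 ns, pf_time = 8 ms (8,000,000 ns), pf = 0.001.
EAT = (1 - 0.001) × 100 ns + 0.001 × 8,000,000 ns
    = 99.9 ns + 8,000 ns
    ≈ 8,100 ns
So even one fault per thousand accesses makes the effective access time roughly 80 times slower than a pure RAM access.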
📊 Difference: Pure vs Partial Demand Paging
| Type | Description |
|---|---|
| Pure Demand Paging | No page is loaded initially โ all loaded on demand |
| Partial Demand Paging | Some pages pre-loaded, rest loaded as needed |
✅ Summary
- Demand paging loads memory pages only when they are accessed.
- It helps run larger programs and improves memory efficiency.
- It may cause many page faults initially, and if too many occur → the system thrashes.
🧩 Topic: Page Replacement and Algorithms
🧠 What is Page Replacement?
When a page fault occurs and RAM is full, the operating system must decide which page to remove from memory to make space for the new page.
👉 This process is called Page Replacement.
🔁 Why is Page Replacement Needed?
- Limited RAM: not all pages of all processes can fit in memory at once.
- To bring in a new page, an existing one must be replaced.
- Goal: minimize page faults and maintain efficient access.
🔧 Basic Page Replacement Mechanism
- A page fault occurs.
- If there is no free frame, choose a victim page to evict.
- If the victim page is dirty (modified), write it back to disk.
- Load the new page into the freed frame.
- Update the page table.
📚 Page Replacement Algorithms
🔹 1. FIFO (First-In, First-Out)
- Replace the oldest page in memory (the one that entered first).
Example:
Pages: 1, 2, 3, 4, 5 with 3 frames
1 → 2 → 3 [3 page faults while the frames fill up]
4 → (replace 1) → 5 → (replace 2) [2 more faults]
✅ Simple
❌ Poor performance; it does not consider how pages are being used
🔹 2. LRU (Least Recently Used)
- Replace the page that was least recently used.
Example:
Pages: 1, 2, 3, 1, 4 (3 frames)
1 → 2 → 3 → 1 (hit, no replacement) → 4 (replace 2, the least recently used page)
✅ Good performance
❌ Complex to implement (needs usage history or timestamps)
🔹 3. Optimal Page Replacement
- Replace the page that will not be used for the longest time in the future.
✅ Best theoretical performance
❌ Cannot be implemented (the future is unknown); used as a benchmark for comparison
🔹 4. Clock (Second-Chance) Algorithm
- Pages are kept in a circular queue, each with a use (reference) bit.
- If the use bit is 0 → replace the page
- If the use bit is 1 → give it a second chance (clear the bit and move on)
✅ Efficient and practical
❌ Slightly more complex than FIFO
📊 Example: FIFO vs LRU
Pages: 1 2 3 1 4 1 5 (3 frames)
FIFO:
1 → 2 → 3 → 1 (no fault) → 4 (replace 1) → 1 (replace 2) → 5 (replace 3)
Page Faults = 6
LRU:
1 → 2 → 3 → 1 (no fault) → 4 (replace 2) → 1 (no fault) → 5 (replace 3)
Page Faults = 5
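To check traces like these mechanically, here is a small C sketch that counts page faults for FIFO and LRU on the reference string above with 3 frames. It is a teaching aid with invented helper names, not an OS implementation.

```c
#include <stdio.h>
#include <string.h>

#define FRAMES 3

/* Count FIFO page faults: frames are replaced in insertion order. */
int fifo_faults(const int *refs, int n) {
    int frames[FRAMES], next = 0, faults = 0;
    memset(frames, -1, sizeof frames);              /* -1 = empty frame */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {
            frames[next] = refs[i];
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    return faults;
}

/* Count LRU page faults: last_used[] records the time of last access. */
int lru_faults(const int *refs, int n) {
    int frames[FRAMES], last_used[FRAMES], faults = 0;
    memset(frames, -1, sizeof frames);
    for (int i = 0; i < n; i++) {
        int hit = -1, victim = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) hit = f;
        if (hit >= 0) {
            last_used[hit] = i;                     /* refresh recency */
            continue;
        }
        for (int f = 0; f < FRAMES; f++) {          /* empty or least recent */
            if (frames[f] == -1) { victim = f; break; }
            if (last_used[f] < last_used[victim]) victim = f;
        }
        frames[victim] = refs[i];
        last_used[victim] = i;
        faults++;
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 1, 5};
    int n = sizeof refs / sizeof refs[0];
    printf("FIFO faults: %d\n", fifo_faults(refs, n));   /* prints 6 */
    printf("LRU  faults: %d\n", lru_faults(refs, n));    /* prints 5 */
    return 0;
}
```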
📊 Comparison Table
| Algorithm | Concept | Page Faults | Complexity | Used In Practice |
|---|---|---|---|---|
| FIFO | Oldest page is replaced | High | Low | Sometimes |
| LRU | Least recently used page | Low | High | Yes |
| Optimal | Page not needed for longest | Lowest | Very High | No (theoretical) |
| Clock | Use bit for second chance | Medium | Medium | Yes |
📈 Effective Access Time (EAT)
EAT = (1 - p) × ma + p × page_fault_time
where:
- p = page fault rate (0 to 1)
- ma = memory access time
👉 A lower page fault rate means better system performance.
✅ Summary
- Page Replacement is needed when RAM is full and a new page must be loaded.
- Algorithms decide which page to replace to minimize faults.
- LRU and Clock are widely used in real systems.
🧩 Topic: Thrashing
🧠 What is Thrashing?
Thrashing is a condition in a virtual memory system where the CPU spends more time swapping pages in and out of memory than executing actual processes.
👉 It happens when there are too many page faults and too few frames to hold the working sets of the active processes.
🔥 Example Scenario
- Too many processes are loaded into memory.
- Each process actively uses more pages than the frames it has been given.
- Frequent page faults occur → the OS constantly swaps pages.
- The CPU sits idle waiting for pages → performance drops sharply.
📉 Key Symptoms of Thrashing
| Symptom | Description |
|---|---|
| High page fault rate | Frequent loading of pages from disk |
| Low CPU utilization | CPU waits on disk I/O for page fetches |
| System slowdown | Dramatic drop in overall system performance |
🎯 Causes of Thrashing
| Cause | Explanation |
|---|---|
| High degree of multiprogramming | Too many processes competing for limited memory |
| Insufficient RAM | Not enough memory to hold the working sets |
| Frequent context switching | Each process brings in its own pages, evicting other processes' pages |
| Ignoring working set size | Allocating fewer frames than a process actually needs |
📦 Working Set Model (Concept)
- The working set of a process is the set of pages it actively uses in a given time interval.
- Thrashing occurs when the total of all working sets exceeds the available memory.
🧮 Working Set = { pages referenced in the last Δ time units }
✅ How to Prevent Thrashing
| Method | Description |
|---|---|
| ✔️ Limit the degree of multiprogramming | Reduce the number of active processes |
| ✔️ Use the Working Set Model | Allocate memory based on actual page usage |
| ✔️ Use Page Fault Frequency (PFF) | Monitor the page fault rate and adjust memory allocation |
| ✔️ Provide more RAM | Increase the number of frames available |
📈 Page Fault Frequency (PFF) Approach
- Track how often page faults occur, then adjust each process's frame allocation (see the sketch below):
  - If the PFF is too high → allocate more frames
  - If the PFF is too low → frames can be reclaimed, or a new process can be started
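A minimal sketch of the PFF idea in C; the thresholds and the frame-adjustment logic are invented for illustration, and real kernels implement this very differently.

```c
#include <stdio.h>

/* Assumed thresholds, in page faults per second. */
#define PFF_UPPER 20.0
#define PFF_LOWER 2.0

/* Decide how to adjust a process's frame allocation from its fault rate. */
void adjust_frames(double faults_per_sec, int *frames) {
    if (faults_per_sec > PFF_UPPER) {
        (*frames)++;                       /* faulting too often: give a frame  */
        printf("PFF %.1f too high -> frames = %d\n", faults_per_sec, *frames);
    } else if (faults_per_sec < PFF_LOWER) {
        if (*frames > 1) (*frames)--;      /* plenty of room: reclaim a frame   */
        printf("PFF %.1f low -> frames = %d\n", faults_per_sec, *frames);
    } else {
        printf("PFF %.1f acceptable -> frames = %d\n", faults_per_sec, *frames);
    }
}

int main(void) {
    int frames = 8;
    adjust_frames(35.0, &frames);   /* thrashing-like rate */
    adjust_frames(0.5, &frames);    /* very low rate       */
    return 0;
}
```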
✅ Summary
| Concept | Details |
|---|---|
| Definition | Excessive paging causing CPU to stall |
| Cause | Too many processes with too few memory frames |
| Result | High page fault rate, low CPU utilization, system slowdown |
| Solution | Working set model, limit processes, increase RAM |
💡 Easy Mnemonic to Remember THRASHING:
T → Too many processes
H → High page faults
R → RAM not enough
A → Active sets too big
S → Swapping constantly
H → Hurt performance
I → Idle CPU
N → Need working set control
G → Get more memory!