
📘 Operating System – Unit 3: Memory Management

🧩 Topic: Single Contiguous Memory Management


🧠 What is Memory Management?

Memory management refers to the way an Operating System (OS) handles the allocation and deallocation of main memory (RAM) to various processes during execution.


🔹 What is Single Contiguous Memory Management?

  • It is the simplest memory management technique.
  • The entire RAM is treated as one single block (contiguous space).
  • The OS divides memory into two parts:
      • One part for the Operating System
      • One part for a single user process

🧱 Memory Layout:

+-----------------------------+
|     Operating System        |  ← Low memory address
+-----------------------------+
|     User Program (Process)  |
+-----------------------------+  ← High memory address

📌 Key Features

| Feature | Description |
|---|---|
| Single user process | Only one program runs at a time |
| Fixed memory split | Memory is allocated as one large block, shared between the OS and the user program |
| No fragmentation | With only one process, memory is not broken into scattered free blocks; unused space in the user area is simply left idle |
| No multitasking | The system cannot run multiple processes together |

✅ Advantages

  • Simple to implement
  • Low overhead
  • Useful for embedded systems or early OS like MS-DOS

โŒ Disadvantages

Limitation Impact
No Multiprogramming Only one process runs at a time
Poor resource utilization CPU and memory may be underused
No process protection Process can access any part of memory
No dynamic resizing Fixed memory division, can't adjust automatically

🧠 Where is it used?

  • Early computers and simple embedded devices
  • Educational simulators
  • Systems where simplicity and predictability are more important than performance

✅ Summary

  • Single contiguous memory management is the simplest form of memory management.
  • The OS and one user process share memory in a contiguous (unbroken) block.
  • Easy to implement but supports only one process at a time, with no protection or multitasking.

🧩 Topic: Fixed Partitioned Memory Management

🧠 What is Fixed Partitioning?

  • Fixed Partitioned Memory Management divides main memory into fixed-sized partitions at system startup.
  • Each partition can hold only one process at a time.
  • Partition sizes are predefined and do not change dynamically.

🧱 Memory Layout Example

Let's say total RAM = 16 KB, and it's divided like this:

+---------------------+
| OS (4 KB)           |
+---------------------+
| Partition 1 (4 KB)  | ← P1 (2 KB) → 2 KB wasted
+---------------------+
| Partition 2 (4 KB)  | ← P2 (4 KB)
+---------------------+
| Partition 3 (4 KB)  | ← P3 (3 KB) → 1 KB wasted
+---------------------+

🔹 Key Characteristics

| Feature | Description |
|---|---|
| Predefined partitions | Set at boot time |
| One process per partition | No sharing; no overlap |
| Simple allocation | The OS looks for the first free partition large enough |
| Internal fragmentation | Space inside a partition goes unused if the process is smaller |

📉 Types of Fixed Partitioning

  1. Equal-size Partitions
      • All partitions are of the same size
      • Easy to manage
      • High internal fragmentation

  2. Unequal-size Partitions
      • Partitions are of different sizes
      • Reduces internal fragmentation
      • OS must match each process to a suitable partition

🧮 Advantages

✅ Simple and fast
✅ Easy to implement
✅ No overhead of dynamic allocation


⚠️ Disadvantages

| Issue | Description |
|---|---|
| Internal fragmentation | Wasted space within an allocated partition |
| Limited multitasking | Number of partitions = maximum number of processes |
| Rigid structure | Cannot adapt to process size at runtime |
| Memory wastage | Small processes leave partitions largely empty; processes larger than every partition cannot run |

📌 Example Problem

  • RAM = 16 KB
  • 4 partitions of 4 KB each
  • P1 = 2 KB, P2 = 5 KB, P3 = 4 KB

🔸 P2 cannot be allocated (no partition is large enough)
🔸 P1 and P3 are assigned; P1 leaves 2 KB of its partition unused, while P3 fits exactly
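As a rough illustration of the allocation rule above (take the first free partition that is large enough), here is a minimal C sketch; the partition and process sizes are the made-up values from this example, not a real allocator:

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_PARTITIONS 4

int main(void) {
    int partition_size[NUM_PARTITIONS] = {4, 4, 4, 4};   /* KB, fixed at "boot" */
    bool occupied[NUM_PARTITIONS] = {false};

    int process_size[] = {2, 5, 4};                      /* P1, P2, P3 in KB */
    int n = sizeof(process_size) / sizeof(process_size[0]);

    for (int p = 0; p < n; p++) {
        int chosen = -1;
        /* first free partition large enough */
        for (int i = 0; i < NUM_PARTITIONS; i++) {
            if (!occupied[i] && partition_size[i] >= process_size[p]) {
                chosen = i;
                break;
            }
        }
        if (chosen >= 0) {
            occupied[chosen] = true;
            printf("P%d (%d KB) -> Partition %d (%d KB), internal fragmentation = %d KB\n",
                   p + 1, process_size[p], chosen + 1, partition_size[chosen],
                   partition_size[chosen] - process_size[p]);
        } else {
            printf("P%d (%d KB) cannot be allocated (no partition large enough)\n",
                   p + 1, process_size[p]);
        }
    }
    return 0;
}
```

Running it reports P1 placed with 2 KB of internal fragmentation, P2 rejected, and P3 placed with no waste.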


✅ Summary

  • Fixed Partitioning divides memory into blocks of fixed size.
  • Simple but causes internal fragmentation and limits number of processes.
  • Useful for simple batch systems, but not used in modern multitasking OS.

🧩 Topic: Swapping

🧠 What is Swapping?

  • Swapping is a memory management technique where processes are moved between main memory (RAM) and secondary storage (disk).
  • It allows the OS to temporarily remove a process from memory when it's not needed and bring it back later when required.

🌀 This helps in multiprogramming, even with limited RAM.


🧱 How Swapping Works

  1. A process is executing in memory.
  2. The OS decides it needs to free memory.
  3. The process is swapped out – its memory is copied to disk (swap space).
  4. When needed again, it is swapped in – copied back from disk to memory.

📌 Swapping Diagram

+----------------------+           +----------------------+
|     Main Memory      |           |   Disk (Swap Area)   |
+----------------------+           +----------------------+
|  OS                  |           |                      |
|  P1 (Running)        |           |  P2 (Swapped Out)    |
|  P3 (Running)        |           |                      |
+----------------------+           +----------------------+

          ⇄ Swap P2 ⇄

🧩 Key Terms

| Term | Meaning |
|---|---|
| Swap Space | Area on disk used to store swapped-out processes |
| Swap Out | Move a process from memory to disk |
| Swap In | Bring a process back from disk to memory |
| Context Switch | Switching the CPU from one process to another, which may involve swapping when memory is scarce |

🎯 When is Swapping Used?

  • When RAM is full
  • For background or low-priority processes
  • In virtual memory systems
  • To increase multiprogramming (more processes than available RAM)

🧮 Advantages of Swapping

✅ Allows more processes to be managed
✅ Improves CPU utilization
✅ Helps in load balancing and memory reuse


⚠️ Disadvantages of Swapping

| Issue | Description |
|---|---|
| High disk I/O | Swapping uses the disk, which is far slower than RAM |
| Process delay | Swapping in/out adds waiting time and latency |
| Thrashing | Excessive swapping causes the system to slow down |

💡 Example:

Assume:

  • RAM size = 4 GB
  • 5 processes need 1 GB each
  • Only 4 can stay in RAM at a time

๐Ÿ” The OS swaps one process out when a new one comes in.


🧠 Swapping in Modern OS

  • Used with paging and virtual memory
  • Often called "paging to disk" or "virtual memory swapping"
  • Linux swap area: /swapfile or separate partition
  • Windows: pagefile.sys

✅ Summary

  • Swapping is the process of moving processes between main memory and disk to optimize memory usage.
  • It helps the OS support more processes than physically possible in RAM.
  • Swapping increases flexibility but may lead to performance overhead if overused.

🧩 Topic: Single and Multiple Partition Allocation

🧠 What is Partition Allocation?

  • Partition allocation is a memory management technique where main memory is divided into partitions, and each partition is allocated to a process.
  • It can be done using either:
      • Single Partition Allocation
      • Multiple Partition Allocation

🔹 1. Single Partition Allocation

🧱 Description:

  • Memory is split into two parts:
      • One part for the Operating System
      • Remaining memory for one user process

🖼️ Memory Layout:

+------------------+ ← Low Address
| Operating System |
+------------------+
| User Process     |
+------------------+ ← High Address

✅ Advantages:

  • Simple and easy to implement

โŒ Disadvantages:

  • Only one user process at a time
  • No multitasking
  • Poor CPU and memory utilization

🔹 2. Multiple Partition Allocation

  • Memory is divided into multiple partitions, and multiple processes can be loaded into memory simultaneously.

🔸 Types:

🧩 a) Fixed Partition Allocation

  • Memory is divided into fixed-size blocks (partitions) during system startup.
  • Each partition can hold exactly one process.

✅ Advantages:
  • Simple implementation
  • No need for dynamic partition management

❌ Disadvantages:
  • Internal fragmentation (unused space inside a partition)
  • Maximum number of processes = number of partitions

🧩 b) Variable Partition Allocation

  • Memory is allocated dynamically based on the exact size of each process.
  • More flexible than fixed partitioning.

✅ Advantages:
  • No internal fragmentation (less wasted space)
  • More efficient memory utilization

❌ Disadvantages:
  • Can cause external fragmentation (free memory scattered in small chunks)
  • Requires compaction to reduce fragmentation

🧠 Key Terms

| Term | Meaning |
|---|---|
| Internal Fragmentation | Wasted memory within an allocated partition |
| External Fragmentation | Free memory exists but is not contiguous, so it cannot be used |
| Compaction | Rearranging memory contents to eliminate fragmentation |

📌 Comparison Table: Single vs Multiple Partition Allocation

| Feature | Single Partition | Multiple Partition |
|---|---|---|
| No. of processes | Only 1 | Multiple |
| Utilization | Poor | Better |
| Fragmentation | None | Internal or external possible |
| Flexibility | Rigid | More flexible |
| Complexity | Very simple | Slightly more complex |

✅ Summary

  • Single partition allocation is simple but supports only one process.
  • Multiple partition allocation allows multiprogramming:
      • Fixed: easier, but causes internal fragmentation
      • Variable: more efficient, but causes external fragmentation

🧩 Topic: Relocation and Address Translation

🧠 1. What is Relocation?

Relocation refers to the process of moving a program from one memory location to another during its execution or loading time.

🧾 Why Relocation is Needed:

  • Programs are compiled assuming they start at address 0.
  • But in real systems, many programs run simultaneously, so they are loaded anywhere in memory.
  • Thus, the logical addresses used in programs must be relocated to physical addresses.

🧩 2. Address Binding

Address binding is the process of mapping logical addresses to physical addresses.

| Binding Time | Description |
|---|---|
| Compile Time | Used when the memory location is known in advance |
| Load Time | Addresses are calculated when the program is loaded into memory |
| Execution Time | Binding is done during execution, using hardware support |

🧠 Relocation is usually handled during Load Time or Execution Time.


📌 3. Logical vs Physical Address

| Type | Description |
|---|---|
| Logical Address | Address generated by the CPU/program (also called a Virtual Address) |
| Physical Address | Actual location in main memory (RAM) |

๐Ÿ” Logical address must be translated to physical address using address translation mechanism.


🧮 4. Address Translation Formula

When using a base register (relocation register):

Physical Address = Logical Address + Base Register

  • The base register contains the starting physical address of the process.
  • All addresses used by the program are relative to 0.

🖼️ Example:

  • Base Register = 1000
  • Logical Address = 120
    👉 Physical Address = 1000 + 120 = 1120

🛠️ 5. Hardware Support for Relocation

  • MMU (Memory Management Unit) is responsible for run-time address translation.
  • The CPU generates a logical address → the MMU translates it to a physical address using base (and sometimes limit) registers.

🧾 6. Limit Register (Optional)

  • Used to define the size of the memory block allocated to a process.
  • Ensures that a process cannot access memory beyond its limit.

if (Logical Address < Limit)
    Physical Address = Logical Address + Base;
else
    Trap → Memory Violation Error
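A minimal C sketch of this base/limit check, using the register values from the example above (all values are illustrative; a real MMU does this in hardware):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical relocation registers for one process */
static const unsigned long BASE  = 1000;  /* start of the process in RAM */
static const unsigned long LIMIT = 400;   /* size of its allocated block */

/* Translate a logical address the way a base/limit MMU would */
unsigned long translate(unsigned long logical) {
    if (logical < LIMIT) {
        return logical + BASE;            /* physical = logical + base */
    }
    fprintf(stderr, "Trap: memory violation at logical address %lu\n", logical);
    exit(EXIT_FAILURE);                   /* the OS would abort or signal the process */
}

int main(void) {
    printf("Logical 120 -> Physical %lu\n", translate(120));   /* prints 1120 */
    printf("Logical 500 -> Physical %lu\n", translate(500));   /* traps: beyond limit */
    return 0;
}
```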

✅ Summary

  • Relocation allows a program to be placed anywhere in memory.
  • Address translation maps logical (virtual) addresses to physical addresses.
  • Achieved using a base register or MMU.
  • Helps support multiprogramming and process isolation.

🧩 Topic: Variable Partitioning (Dynamic Partitioning)

🧠 What is Variable Partitioning?

  • In variable partitioning, memory is divided dynamically based on the size of incoming processes.
  • Unlike fixed partitioning, there are no predefined partition sizes.
  • Partitions are created at runtime, according to the memory needed by each process.

🖼️ Example:

+------------------------+    ← Initially
| OS (Fixed Size)        |
+------------------------+
| Free Memory (Dynamic)  |
+------------------------+

After allocating P1 (4 KB), P2 (6 KB):
+------------------------+
| OS                     |
+------------------------+
| P1 (4 KB)              |
+------------------------+
| P2 (6 KB)              |
+------------------------+
| Free (Remaining RAM)   |
+------------------------+

📌 Key Characteristics

| Feature | Description |
|---|---|
| No fixed partitions | Partitions are created dynamically at runtime |
| Better utilization | Allocation matches each process's size exactly |
| External fragmentation | Can occur as small free spaces become scattered |
| Requires compaction | To merge small free blocks into usable space |

⚙️ Allocation Methods

🔸 1. First Fit

  • Allocate the first free block that is large enough.
  • Fast, but may leave small gaps.

🔸 2. Best Fit

  • Allocate the smallest free block that is just large enough.
  • Reduces wastage but slower and may create many small holes.

🔸 3. Worst Fit

  • Allocate the largest available block.
  • Leaves big gaps, hoping to reduce future fragmentation.
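As a rough sketch of how these three strategies differ in code (not tied to any particular OS; the hole sizes are made-up values), the function below picks a free hole for a request under each policy:

```c
#include <stdio.h>

typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT } strategy_t;

/* Return the index of the chosen free hole, or -1 if none is large enough. */
int choose_hole(const int hole_kb[], int n_holes, int request_kb, strategy_t s) {
    int chosen = -1;
    for (int i = 0; i < n_holes; i++) {
        if (hole_kb[i] < request_kb)
            continue;                                             /* hole too small */
        if (s == FIRST_FIT)
            return i;                                             /* first one that fits */
        if (chosen == -1 ||
            (s == BEST_FIT  && hole_kb[i] < hole_kb[chosen]) ||   /* smallest that fits */
            (s == WORST_FIT && hole_kb[i] > hole_kb[chosen]))     /* largest available  */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    int holes[] = {8, 12, 4, 20, 6};                   /* free hole sizes in KB (made up) */
    int n = sizeof(holes) / sizeof(holes[0]);
    printf("First fit for 5 KB : hole %d\n", choose_hole(holes, n, 5, FIRST_FIT));  /* 0 (8 KB)  */
    printf("Best fit  for 5 KB : hole %d\n", choose_hole(holes, n, 5, BEST_FIT));   /* 4 (6 KB)  */
    printf("Worst fit for 5 KB : hole %d\n", choose_hole(holes, n, 5, WORST_FIT));  /* 3 (20 KB) */
    return 0;
}
```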

โŒ Problems in Variable Partitioning

๐Ÿ“ External Fragmentation

  • Occurs when free memory is scattered in small chunks.
  • Total free memory may be enough for a process, but not contiguous.

๐Ÿ“ Compaction

  • The process of rearranging memory contents to combine free spaces.
  • Example:

``` Before Compaction: P1 | Hole(2KB) | P2 | Hole(3KB) | P3

After Compaction: P1 | P2 | P3 | Hole(5KB) ```


✅ Advantages

| Benefit | Description |
|---|---|
| Better memory utilization | No internal fragmentation; partitions match process sizes |
| Flexible | Adapts to process sizes dynamically |

โŒ Disadvantages

Limitation Description
External fragmentation Small gaps lead to wasted memory
Compaction overhead Time-consuming rearrangement
Slower allocation Search for suitable partition can take time

📊 Comparison: Fixed vs Variable Partitioning

| Feature | Fixed Partitioning | Variable Partitioning |
|---|---|---|
| Partition size | Fixed, predefined | Dynamic, varies with the process |
| Flexibility | Low | High |
| Fragmentation type | Internal | External |
| Compaction needed | No | Yes |
| Memory utilization | Poor (if processes are small) | Better |

✅ Summary

  • Variable Partitioning improves memory utilization by allocating memory blocks exactly as needed.
  • It suffers from external fragmentation, which is managed by compaction.
  • Memory is allocated using First Fit, Best Fit, or Worst Fit strategies.

🧩 Topic: Non-Contiguous Memory Allocation

🧠 What is Non-Contiguous Memory Allocation?

  • In non-contiguous allocation, a process is allowed to occupy memory in separate (non-adjacent) blocks.
  • This approach improves memory utilization and, when paging is used, eliminates external fragmentation.
  • Logical memory is divided into sections and mapped to free spaces in RAM.

📌 Why Use Non-Contiguous Allocation?

  • To support multiprogramming with limited memory.
  • To avoid the problem of finding large contiguous memory blocks.
  • To allow dynamic growth of processes.

🔗 Key Techniques of Non-Contiguous Allocation

| Technique | Description |
|---|---|
| Paging | Memory is divided into fixed-size pages |
| Segmentation | Memory is divided into logical segments |
| Paging + Segmentation | Hybrid model combining both techniques |

🔸 1. Paging

✅ Key Points:

  • Physical memory is divided into fixed-size frames
  • Logical memory is divided into pages of the same size
  • The OS maintains a page table to map pages → frames

🔁 Example:

| Logical Page | Physical Frame |
|---|---|
| Page 0 | Frame 5 |
| Page 1 | Frame 2 |
| Page 2 | Frame 9 |

No need for contiguous memory!

✅ Solves external fragmentation
❌ May cause internal fragmentation
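A minimal sketch of how a paged address could be translated, assuming a 1 KB page size and the example page table above (values are illustrative only):

```c
#include <stdio.h>

#define PAGE_SIZE 1024                     /* 1 KB pages (assumption for illustration) */

/* Example page table: page number -> frame number (from the table above) */
static const int page_table[] = {5, 2, 9};
static const int num_pages = sizeof(page_table) / sizeof(page_table[0]);

/* Translate a logical address to a physical address. Returns -1 on an invalid page. */
long translate(long logical) {
    long page   = logical / PAGE_SIZE;     /* which page the address falls in  */
    long offset = logical % PAGE_SIZE;     /* displacement inside that page    */
    if (page < 0 || page >= num_pages)
        return -1;                         /* would raise a fault in a real OS */
    return (long)page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    long logical = 2 * PAGE_SIZE + 100;    /* page 2, offset 100 */
    printf("Logical %ld -> Physical %ld\n", logical, translate(logical));
    /* page 2 maps to frame 9, so physical = 9*1024 + 100 = 9316 */
    return 0;
}
```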


🔸 2. Segmentation

✅ Key Points:

  • Memory is divided into logical segments (e.g., code, data, stack)
  • Each segment has its own base and limit value
  • Logical address = <segment number, offset>

🛠 Example:

| Segment | Purpose | Base | Limit |
|---|---|---|---|
| 0 | Code | 1000 | 400 |
| 1 | Data | 1400 | 300 |
| 2 | Stack | 1700 | 200 |

✅ Matches the logical program structure
❌ Can suffer from external fragmentation


🔸 3. Combined Paging and Segmentation

  • Each segment is divided into pages
  • Logical Address = <segment number, page number, offset>
  • This gives the logical benefits of segmentation and the physical efficiency of paging

📊 Comparison: Contiguous vs Non-Contiguous Allocation

| Feature | Contiguous Allocation | Non-Contiguous Allocation |
|---|---|---|
| Memory layout | Adjacent memory blocks | Separate, scattered memory blocks |
| Fragmentation | External (with variable partitions) | Paging: internal; Segmentation: external |
| Flexibility | Low | High |
| Overhead | Less | More (needs page/segment tables) |
| Performance | Fast (simple mapping) | Slightly slower (due to address translation) |

✅ Advantages of Non-Contiguous Allocation

  • ✅ Efficient memory utilization
  • ✅ Solves external fragmentation (especially in paging)
  • ✅ Supports multiprogramming

โŒ Disadvantages

  • โŒ Needs complex hardware support (MMU)
  • โŒ Extra time for address translation
  • โŒ May cause internal fragmentation (in paging)

✅ Summary

  • Non-contiguous memory allocation allows a process to be placed in different memory locations.
  • Uses techniques like paging and segmentation.
  • Increases flexibility and reduces fragmentation, but requires more OS management and hardware support.

🧩 Topic: Combined Paging and Segmentation (Page Segmentation)

🧠 What is Page Segmentation?

  • Page segmentation is a hybrid memory management technique that combines the features of:
      • Segmentation: divide memory based on logical sections (code, data, stack, etc.)
      • Paging: divide each segment into fixed-size pages and store them in non-contiguous physical memory

📌 It provides both:

  • Logical organization (via segmentation)
  • Efficient memory use (via paging)

🧱 Structure

  • Logical address is divided into three parts:

<Segment Number, Page Number, Offset>

🗂️ Components:

| Part of Address | Meaning |
|---|---|
| Segment Number (s) | Identifies the segment (code, data, stack, etc.) |
| Page Number (p) | Identifies the page within that segment |
| Offset (d) | Displacement within the page |

🧰 Memory Structures Used

  1. Segment Table: each entry points to the page table of that segment
  2. Page Table: maps the pages of the segment to physical frames
  3. Frame Table (RAM): shows where the pages are stored physically


๐Ÿ” Address Translation Steps

  1. Segment Number is used to find the page table base address
  2. Page Number is used to find the frame number
  3. Offset is added to frame base to get the physical address

๐Ÿ–ผ๏ธ Diagram:

Logical Address: <s, p, d>
          โ†“
Segment Table โ†’ Page Table of segment s
                          โ†“
                Page Table Entry p โ†’ Frame Number
                                          โ†“
                           Physical Address = Frame + d
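A minimal C sketch of this two-level lookup; the segment table, page tables, and page size below are invented solely to make the steps concrete:

```c
#include <stdio.h>

#define PAGE_SIZE 1024                       /* assumed page size for illustration */

/* Page tables for two hypothetical segments (page -> frame) */
static const int code_pages[] = {5, 2};      /* segment 0 */
static const int data_pages[] = {9, 7, 3};   /* segment 1 */

typedef struct {
    const int *page_table;                   /* base of this segment's page table */
    int        num_pages;                    /* length of the segment, in pages   */
} segment_entry;

static const segment_entry segment_table[] = {
    { code_pages, 2 },
    { data_pages, 3 },
};

/* Translate <segment, page, offset> to a physical address; -1 means a fault. */
long translate(int s, int p, int d) {
    int num_segments = (int)(sizeof(segment_table) / sizeof(segment_table[0]));
    if (s < 0 || s >= num_segments ||
        p < 0 || p >= segment_table[s].num_pages ||
        d < 0 || d >= PAGE_SIZE)
        return -1;                                        /* invalid reference -> trap */
    int frame = segment_table[s].page_table[p];           /* second-level lookup */
    return (long)frame * PAGE_SIZE + d;                   /* frame base + offset */
}

int main(void) {
    /* segment 1, page 2, offset 50 -> frame 3 -> 3*1024 + 50 = 3122 */
    printf("Physical address: %ld\n", translate(1, 2, 50));
    return 0;
}
```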

✅ Advantages

| Benefit | Description |
|---|---|
| Logical program view | The program is divided into logical segments |
| Efficient memory use | Segments are paged → no external fragmentation |
| Large address space support | Combines the benefits of paging and segmentation |

โŒ Disadvantages

Limitation Description
Complex address translation Needs 2-level lookup (segment table + page table)
High overhead Extra memory needed for storing tables
Slower access More time to compute physical address

📌 Real-World Usage

  • Used in systems that need:
      • A logical code structure
      • Efficient use of memory
  • Examples: Multics, and the Intel x86 architecture (protected mode)

✅ Summary

  • Page segmentation combines the logical structure of segmentation with the efficiency of paging.
  • Logical address = <segment number, page number, offset>
  • Requires both segment and page tables for address translation.
  • Useful in modern systems with complex memory requirements.

🧩 Topic: Paged Segmentation

🧠 What is Paged Segmentation?

Paged Segmentation is a hybrid memory management technique that combines:

  • Segmentation for dividing memory into logical sections (like code, data, stack), and
  • Paging for dividing those segments into equal-sized pages, allowing them to be stored in non-contiguous frames in RAM.

🔗 Logical Address Structure

A logical address in paged segmentation is divided into three parts:

< Segment Number, Page Number within Segment, Offset within Page >

| Component | Description |
|---|---|
| Segment Number | Identifies the logical segment (e.g., code/data) |
| Page Number | Identifies the page within that segment |
| Offset | Displacement within that page |

🛠️ How It Works

  1. Segment Table: Each entry points to the base address of the page table for that segment.

  2. Page Table (per segment): Maps the logical pages of the segment to physical memory frames.

  3. Offset: Added to the base physical address to find the exact memory location.


🧭 Address Translation Steps

  1. Use segment number to access the segment table
  2. From the segment table, get the base address of the page table
  3. Use the page number to find the frame number in that page table
  4. Combine the frame base with the offset to get the final physical address

🖼️ Diagram

Logical Address: <Segment, Page, Offset>
          ↓
Segment Table → Base Address of Page Table for that Segment
                          ↓
Page Table Entry (Page No.) → Frame No.
                                          ↓
Physical Address = Frame Base + Offset

✅ Advantages of Paged Segmentation

| Advantage | Description |
|---|---|
| Logical structure | Maintains the program's logical division |
| No external fragmentation | Paging removes the need for contiguous memory |
| Efficient memory use | Pages are stored in any available free frames |
| Supports large programs | Segments + pages allow a large virtual address space |

โŒ Disadvantages

Limitation Description
Complex translation Requires both segment and page tables
Memory overhead Tables consume additional memory
Slower access Multiple lookups (segment table โ†’ page table โ†’ frame)

📌 Comparison: Paging vs Segmentation vs Paged Segmentation

| Feature | Paging | Segmentation | Paged Segmentation |
|---|---|---|---|
| Division basis | Fixed-size pages | Logical segments | Segments, then paged |
| Fragmentation | Internal | External | Internal (within pages) |
| Logical view | None | Maintained | Maintained |
| Address form | Page + Offset | Segment + Offset | Segment + Page + Offset |
| Flexibility | Medium | High | Very high |

✅ Summary

  • Paged segmentation gives the OS the logical clarity of segmentation and the memory efficiency of paging.
  • Each segment is paged, and memory is allocated page-wise within each segment.
  • Logical address: Segment + Page + Offset
  • Requires hardware support for segment and page tables.

🧩 Topic: Virtual Memory

🧠 What is Virtual Memory?

Virtual Memory is a memory management technique that allows a process to execute even if not all of its memory pages are loaded into physical RAM.

It creates the illusion of a very large memory space for each process by using secondary storage (usually a hard disk or SSD) as a temporary extension of RAM.


🎯 Purpose of Virtual Memory

  • Run large programs that may not fit entirely in RAM
  • Allow more processes to be loaded than the actual physical memory
  • Improve CPU utilization and multiprogramming

🧱 How It Works

  1. Logical address space is divided into pages
  2. Only required pages are loaded into physical memory
  3. The rest remain on disk (called the backing store or swap area)
  4. When a program accesses a page not in memory, a page fault occurs
  5. The OS loads the required page into RAM from disk

🔄 Components Involved

| Component | Description |
|---|---|
| Page Table | Maintains the mapping between virtual pages and physical frames |
| MMU (Memory Management Unit) | Hardware that translates virtual addresses to physical addresses |
| Swap Space | Area on disk used for storing parts of virtual memory |

🔔 Page Fault

  • A page fault occurs when a process tries to access a page that is not in main memory.
  • The OS handles it by:
      • Suspending the process
      • Bringing the page from swap (disk) into RAM
      • Updating the page table
      • Resuming the process

🖼️ Diagram: Virtual Memory Overview

+-----------------------+
|   Process View        |  ← Virtual Address Space
|  +-----------------+  |
|  |  Page 0         |  |
|  |  Page 1         |  |
|  |  Page 2         |  |
|  +-----------------+  |
+-----------------------+

          ⇓ Address Translation (via MMU)

+--------------------------+     +-----------------------+
| Physical Memory (RAM)    |     | Disk (Swap Area)      |
| +--------+ +--------+    |     | +--------+            |
| | Frame 0| | Frame 2|    |     | | Page 1 |  (swapped) |
| +--------+ +--------+    |     | +--------+            |
+--------------------------+     +-----------------------+

📌 Key Features of Virtual Memory

| Feature | Description |
|---|---|
| Demand paging | Pages are loaded only when needed |
| Page replacement | If memory is full, old pages are replaced using algorithms (e.g., LRU) |
| Logical view | Each process gets its own virtual address space |
| Isolation | Processes are isolated and cannot access each other's memory |

✅ Advantages of Virtual Memory

| Advantage | Explanation |
|---|---|
| ✔️ Runs large applications | Even if RAM is smaller than the program size |
| ✔️ Increases multiprogramming | Multiple programs can be in memory at once |
| ✔️ Better memory utilization | Only the needed pages are loaded |
| ✔️ Protection and isolation | Each process gets its own virtual address space |

โŒ Disadvantages of Virtual Memory

Disadvantage Explanation
โŒ Page Fault Overhead Too many page faults can slow down the system
โŒ Disk I/O is slow Disk access is much slower than RAM
โŒ Thrashing Excessive paging can reduce performance badly
โŒ More hardware needed Requires MMU and larger storage (swap space)

📊 Comparison: Physical Memory vs Virtual Memory

| Feature | Physical Memory (RAM) | Virtual Memory |
|---|---|---|
| Size | Limited | Large (uses disk space) |
| Speed | Fast | Slower (involves disk I/O) |
| Visibility | Actual location of data | Illusion presented to each process |
| Fault handling | No page faults | Page faults possible |

๐Ÿ” Virtual Memory Techniques

  1. Demand Paging Load pages into memory only when they are needed.

  2. Page Replacement Algorithms Decide which page to remove when RAM is full (e.g., FIFO, LRU, Optimal).

  3. Thrashing Control Mechanisms to detect and prevent too many page faults.

  4. Working Set Model Only keep the set of pages that a process is currently using.


✅ Summary

  • Virtual Memory allows execution of large programs by using disk as temporary RAM.
  • Only required parts are loaded into physical memory using demand paging.
  • It provides process isolation, flexibility, and efficient memory use, but can suffer from performance issues due to page faults and disk I/O.

🧩 Topic: Demand Paging

🧠 What is Demand Paging?

Demand Paging is a memory management technique where pages of a process are not loaded into RAM until they are actually needed during execution.

🧾 Only the pages that are demanded (accessed) are loaded; the others remain in secondary storage (disk).


🎯 Key Idea: Lazy Loading

  • Load pages on demand, not in advance.
  • If a program accesses a page that is not in RAM ⇒ a Page Fault occurs.

๐Ÿ” How Demand Paging Works

  1. CPU generates a memory access request (logical address).
  2. MMU checks if the required page is in physical memory.
  3. If not found โ‡’ Page Fault
  4. OS:

  5. Suspends the process

  6. Loads the required page from disk to RAM
  7. Updates the page table
  8. Resumes execution from the faulting instruction.

🖼️ Diagram: Demand Paging Process

           Logical Address
                ↓
        +------------------+
        |   Page Table     |
        +------------------+
               ↓  (Page not found → Page Fault)
       +-------------+        +-----------------+
       |     RAM     | ←←←←←← |      Disk       |
       | Page loaded |        | Remaining pages |
       +-------------+        +-----------------+

🔔 What is a Page Fault?

A Page Fault occurs when a process tries to access a page not currently in memory.

OS handles it by:

  • Fetching the page from secondary storage (swap or disk)
  • Placing it in a free frame in RAM
  • Resuming the process

📌 Valid / Invalid Bit in Page Table

  • Valid bit = 1 → the page is in memory
  • Valid bit = 0 → the page is on disk (not yet loaded)
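A minimal sketch of the idea in C, using a toy page-table entry that carries a valid bit (the structure, sizes, and free-frame handling are assumptions for illustration, not a real OS data structure):

```c
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 1024
#define NUM_PAGES 4

typedef struct {
    bool valid;      /* 1 = page is in RAM, 0 = page is still on disk */
    int  frame;      /* frame number, meaningful only when valid      */
} pte_t;

static pte_t page_table[NUM_PAGES] = {
    { true, 3 }, { false, 0 }, { true, 7 }, { false, 0 }
};

/* Pretend to fetch the page from disk and place it in a free frame. */
static void handle_page_fault(int page) {
    static int next_free_frame = 10;               /* made-up free-frame supply */
    printf("Page fault on page %d: loading from disk...\n", page);
    page_table[page].frame = next_free_frame++;
    page_table[page].valid = true;
}

long access(long logical) {
    int page = logical / PAGE_SIZE, offset = logical % PAGE_SIZE;
    if (!page_table[page].valid)
        handle_page_fault(page);                   /* demand paging: load on first use */
    return (long)page_table[page].frame * PAGE_SIZE + offset;
}

int main(void) {
    printf("Physical: %ld\n", access(0 * PAGE_SIZE + 10));  /* hit, frame 3   */
    printf("Physical: %ld\n", access(1 * PAGE_SIZE + 20));  /* fault, then ok */
    return 0;
}
```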

✅ Advantages of Demand Paging

| Advantage | Description |
|---|---|
| ✔️ Efficient memory use | Only the needed pages are loaded |
| ✔️ Large programs can run | Even if the full program does not fit in RAM |
| ✔️ Supports multiprogramming | More processes can be kept in memory |

โŒ Disadvantages of Demand Paging

Disadvantage Description
โŒ High page fault rate Initially, many faults can occur (cold start)
โŒ Slow access time Disk I/O is much slower than RAM
โŒ Thrashing Excessive page faults reduce system performance

📊 Performance of Demand Paging

  • Performance is mainly affected by the page fault rate.

Let:

  • ma = memory access time
  • pf = page fault rate (0 ≤ pf ≤ 1)
  • pf_time = time to handle a page fault (usually very high)

Effective Access Time (EAT):

EAT = (1 - pf) × ma + pf × pf_time

⛔ If pf is high, EAT increases drastically.
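For a feel of the numbers (illustrative values only): if ma = 200 ns, pf_time = 8 ms = 8,000,000 ns, and pf = 0.001, then

EAT = 0.999 × 200 + 0.001 × 8,000,000
    = 199.8 + 8,000
    ≈ 8,200 ns

so even one fault per thousand accesses makes the average access roughly 40 times slower than a pure RAM access.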


๐Ÿ“ Difference: Pure vs Partial Demand Paging

Type Description
Pure Demand Paging No page is loaded initially โ€” all loaded on demand
Partial Demand Paging Some pages pre-loaded, rest loaded as needed

✅ Summary

  • Demand Paging loads memory pages only when they are accessed.
  • It helps run larger programs and improves memory efficiency.
  • It may cause page faults initially, and if too many occur ⇒ the system thrashes.

🧩 Topic: Page Replacement and Algorithms

🧠 What is Page Replacement?

When a page fault occurs and RAM is full, the operating system must decide which page to remove from memory to make space for the new page.

📌 This process is called Page Replacement.


๐Ÿ” Why is Page Replacement Needed?

  • Limited RAM: Not all pages of all processes can fit in memory at once.
  • To bring in a new page, one must be replaced.
  • Goal: Minimize page faults and maintain efficient access.

🔧 Basic Page Replacement Mechanism

  1. A page fault occurs.
  2. If there is no free frame, choose a page to evict.
  3. If the page is dirty (modified), write it back to disk.
  4. Load the new page into the freed frame.
  5. Update the page table.

🔄 Page Replacement Algorithms


🔹 1. FIFO (First-In, First-Out)

  • Replace the oldest page in memory (the one that entered first).

Example:

Pages: 1, 2, 3, 4, 5 with 3 frames

1 → 2 → 3 → [Page Faults: 3]
4 → (replace 1) → 5 → (replace 2)

✅ Simple
❌ Poor performance – does not consider how recently pages were used


🔹 2. LRU (Least Recently Used)

  • Replace the page that was least recently used.

Example:

Pages: 1, 2, 3, 1, 4 (3 frames)

1 → 2 → 3 → 1 (hit, no replacement) → 4 (replace 2, the least recently used page)

✅ Good performance
❌ Complex to implement (needs usage history or timestamps)


🔹 3. Optimal Page Replacement

  • Replace the page that will not be used for the longest time in the future.

✅ Best theoretical performance
❌ Cannot be implemented (the future is unknown) – used only as a benchmark for comparison


🔹 4. Clock (Second-Chance Algorithm)

  • Pages sit in a circular queue, each with a use (reference) bit.
  • If the use bit is 0 → replace the page
  • If the use bit is 1 → give it a second chance (clear the bit to 0 and move on)

✅ Efficient and practical
❌ Slightly more complex than FIFO
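A compact C sketch of the second-chance idea (frame count and reference string are arbitrary; real kernels combine this with many other bookkeeping details):

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_FRAMES 3

static int  frame_page[NUM_FRAMES] = {-1, -1, -1};  /* which page each frame holds */
static bool use_bit[NUM_FRAMES]    = {false};
static int  hand = 0;                               /* the clock hand */

/* Pick a victim frame: skip frames whose use bit is set, clearing the bit as we pass. */
static int clock_victim(void) {
    for (;;) {
        if (!use_bit[hand]) {
            int victim = hand;
            hand = (hand + 1) % NUM_FRAMES;
            return victim;
        }
        use_bit[hand] = false;                       /* second chance */
        hand = (hand + 1) % NUM_FRAMES;
    }
}

void reference(int page) {
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (frame_page[i] == page) {                 /* hit: just set the use bit */
            use_bit[i] = true;
            return;
        }
    }
    int v = clock_victim();                          /* miss: evict and load */
    printf("Page fault on %d: loaded into frame %d (was page %d)\n", page, v, frame_page[v]);
    frame_page[v] = page;
    use_bit[v] = true;
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 5};
    for (int i = 0; i < 6; i++) reference(refs[i]);
    return 0;
}
```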


๐Ÿ” Example: FIFO vs LRU

Pages: 1 2 3 1 4 5 (3 frames)

FIFO:

1 โ†’ 2 โ†’ 3 โ†’ 1 (no fault) โ†’ 4 (replace 1) โ†’ 5 (replace 2)
Page Faults = 5

LRU:

1 โ†’ 2 โ†’ 3 โ†’ 1 (no fault) โ†’ 4 (replace 2) โ†’ 5 (replace 3)
Page Faults = 4
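If you want to check such traces mechanically, a small fault counter for FIFO might look like this sketch (the reference string and frame count are the ones from the example above):

```c
#include <stdio.h>

/* Count FIFO page faults for a reference string with a given number of frames. */
int fifo_faults(const int refs[], int n, int num_frames) {
    int frames[16];                      /* assumes num_frames <= 16 */
    int next = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < num_frames) {
            frames[used++] = refs[i];    /* free frame still available */
        } else {
            frames[next] = refs[i];      /* evict the oldest (round-robin order) */
            next = (next + 1) % num_frames;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 5};
    printf("FIFO faults: %d\n", fifo_faults(refs, 6, 3));   /* prints 5 */
    return 0;
}
```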

📌 Comparison Table

| Algorithm | Concept | Page Faults | Complexity | Used in Practice |
|---|---|---|---|---|
| FIFO | Oldest page is replaced | High | Low | Sometimes |
| LRU | Least recently used page is replaced | Low | High | Yes |
| Optimal | Page not needed for the longest time | Lowest | Very high | No (theoretical) |
| Clock | Use bit gives a second chance | Medium | Medium | Yes |

🔄 Effective Access Time (EAT)

EAT = (1 - p) × ma + p × page_fault_time

where:
p = page fault rate (0 to 1)
ma = memory access time

📝 A lower page fault rate means better system performance.


✅ Summary

  • Page Replacement is needed when RAM is full and a new page must be loaded.
  • Algorithms decide which page to replace to minimize faults.
  • LRU and Clock are widely used in real systems.

🧩 Topic: Thrashing

🧠 What is Thrashing?

Thrashing is a condition in a virtual memory system where the CPU spends more time swapping pages in and out of memory than executing actual processes.

📌 It happens when there are too many page faults and insufficient frames to hold the working set of active processes.


🔥 Example Scenario

  • Too many processes are loaded into memory.
  • Each process is actively using more pages than the available frames.
  • Frequent page faults occur → the OS constantly swaps pages.
  • The CPU remains idle, waiting for pages → performance drops sharply.

🪓 Key Symptoms of Thrashing

| Symptom | Description |
|---|---|
| 🔁 High page fault rate | Pages are constantly being loaded from disk |
| 🐌 Low CPU utilization | The CPU waits on disk I/O for page fetches |
| 📉 System slowdown | Dramatic drop in overall system performance |

🎯 Causes of Thrashing

| Cause | Explanation |
|---|---|
| 🔺 High degree of multiprogramming | Too many processes competing for limited memory |
| 📉 Insufficient RAM | Not enough memory to hold the working sets |
| 🔄 Frequent context switching | Each process brings in its own pages, evicting others' pages |
| 🧠 Ignoring working-set size | Allocating fewer frames than a process actually needs |

📦 Working Set Model (Concept)

  • The working set of a process is the set of pages it actively uses in a given time interval.
  • Thrashing occurs when the total working sets of all processes exceed the available memory.

🧮 Working Set = { pages accessed in the last ∆ time units }
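A tiny C sketch of this definition, printing the working set over the last ∆ references of a made-up reference string (both ∆ and the string are arbitrary choices):

```c
#include <stdio.h>

#define DELTA 4   /* working-set window: the last DELTA references (assumed value) */

/* Print the working set after each reference: the distinct pages seen
   in the most recent DELTA references. */
int main(void) {
    int refs[] = {1, 2, 1, 3, 4, 4, 4, 2};
    int n = sizeof(refs) / sizeof(refs[0]);

    for (int t = 0; t < n; t++) {
        int start = (t - DELTA + 1 > 0) ? t - DELTA + 1 : 0;
        printf("t=%d working set = {", t);
        for (int i = start; i <= t; i++) {
            int seen = 0;                       /* skip pages already printed */
            for (int j = start; j < i; j++)
                if (refs[j] == refs[i]) { seen = 1; break; }
            if (!seen) printf(" %d", refs[i]);
        }
        printf(" }\n");
    }
    return 0;
}
```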


✅ How to Prevent Thrashing

| Method | Description |
|---|---|
| ✔️ Limit the degree of multiprogramming | Reduce the number of active processes |
| ✔️ Use the Working Set Model | Allocate memory based on actual page usage |
| ✔️ Use Page Fault Frequency (PFF) | Monitor the page fault rate and adjust memory allocation |
| ✔️ Provide more RAM | Increase the number of frames available |

📌 Page Fault Frequency (PFF) Approach

  • Track how often page faults occur:
      • If PFF is too high → allocate more frames to the process
      • If PFF is too low → frames can be reclaimed or a new process started
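As a rough sketch of such a control rule (the thresholds below are invented, not taken from any specific OS):

```c
#include <stdio.h>

/* Hypothetical PFF thresholds (faults per second); real values are system-tuned. */
#define PFF_UPPER 20.0
#define PFF_LOWER 2.0

/* Return the adjusted frame allocation for a process, given its fault rate. */
int adjust_frames(int current_frames, double faults_per_sec) {
    if (faults_per_sec > PFF_UPPER)
        return current_frames + 1;       /* thrashing risk: give it more frames  */
    if (faults_per_sec < PFF_LOWER && current_frames > 1)
        return current_frames - 1;       /* plenty of headroom: reclaim a frame  */
    return current_frames;               /* fault rate is in the acceptable band */
}

int main(void) {
    printf("%d\n", adjust_frames(8, 35.0));  /* 9: too many faults            */
    printf("%d\n", adjust_frames(8, 0.5));   /* 7: very few faults            */
    printf("%d\n", adjust_frames(8, 10.0));  /* 8: within the acceptable band */
    return 0;
}
```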

✅ Summary

| Concept | Details |
|---|---|
| Definition | Excessive paging that leaves the CPU stalled on page I/O |
| Cause | Too many processes with too few memory frames |
| Result | High page fault rate, low CPU utilization, system slowdown |
| Solution | Working set model, limit the number of processes, increase RAM |

💡 Easy Mnemonic to Remember THRASHING:

T – Too many processes
H – High page faults
R – RAM not enough
A – Active sets too big
S – Swapping constantly
H – Hurt performance
I – Idle CPU
N – Need working-set control
G – Get more memory!