
Gate Study Material OS-3

Printed From: One Stop GATE
Category: GATE AT A GLANCE
Forum Name: Other Discussions on GATE
Forum Description: If you want to discuss some aspects of GATE which are not covered by any of the threads above, you are free to use this forum for those discussions.
URL: http://forum.onestopgate.com/forum_posts.asp?TID=959
Printed Date: 11Jan2025 at 6:40pm


Topic: Gate Study Material OS-3
Posted By: Anamica
Subject: Gate Study Material OS-3
Date Posted: 04Apr2007 at 10:06pm

File System of Operating System


  • When does a process need to access OS functionality? Here are several examples:
  • Reading a file. The OS must perform the file system operations required to read the data off of disk.
  • Creating a child process. The OS must set up the address space and bookkeeping state for the child process.
  • Sending a packet out onto the network. The OS typically handles the network interface.
Why have the OS do these things? Why doesn't the process just do them directly?
  • Convenience. Implement the functionality once in the OS and encapsulate it behind an interface that everyone uses. So, processes just deal with the simple interface, and don't have to write complicated low-level code to deal with devices.
  • Portability. OS exports a common interface typically available on many hardware platforms. Applications do not contain hardware-specific code.
  • Protection. If we give applications complete access to the disk or network, they can corrupt data from other applications, either maliciously or because of bugs. Having the OS do it eliminates security problems between applications. Of course, applications still have to trust the OS.
  • How do processes invoke OS functionality? By making a system call. Conceptually, processes call a subroutine that goes off and performs the required functionality. But OS must execute in a different protection domain than the application. Typically, OS executes in supervisor mode, which allows it to do things like manipulate the disk directly.
  • To switch from normal user mode to supervisor mode, most machines provide a system call instruction. This instruction causes an exception to take place. The hardware switches from user mode to supervisor mode and invokes the exception handler inside the operating system. There is typically some kind of convention that the process uses to interact with the OS.
  • Let's do an example - the Open system call. System calls typically start out with a normal subroutine call. In this case, when the process wants to open a file, it just calls the Open routine in a system library someplace.
    /* Open the Nachos file "name", and return an "OpenFileId" that can
     * be used to read and write to the file.
     */
    OpenFileId Open(char *name);
  • Inside the library, the Open subroutine executes a syscall instruction, which generates a system call exception.
    Open:
        addiu   $2,$0,SC_Open
        syscall
        j       $31
        .end    Open
    By convention, the Open subroutine puts a number (in this case SC_Open) into register 2. Inside the exception handler the OS looks at register 2 to figure out what system call it should perform.
  • The Open system call also takes a parameter - the address of the character string giving the name of the file to open. By convention, the compiler puts this parameter into register 4 when it generates the code that calls the Open routine in the library. So, the OS looks in that register to find the address of the name of the file to open.
  • More conventions: succeeding parameters are put into register 5, register 6, etc. Any return values from the system call are put into register 2.
  • Inside the exception handler, the OS figures out what action to take, performs the action, then returns back to the user program.
  • There are other kinds of exceptions. For example, if the program attempts to dereference a NULL pointer, the hardware will generate an exception. The OS will have to figure out what kind of exception took place and handle it accordingly. Another kind of exception is a divide by 0 fault.
  • Similar things happen on an interrupt. When an interrupt occurs, the hardware puts the OS into supervisor mode and invokes an interrupt handler. The difference between interrupts and exceptions is that interrupts are generated by external events (the disk IO completes, a new character is typed at the console, etc.) while exceptions are generated by a running program.
  • Object file formats. To run a process, the OS must load in an executable file from the disk into memory. What does this file contain? The code to run, any initialized data, and a specification for how much space the uninitialized data takes up. It may also contain other information, such as symbol tables to help debuggers.
  • The compiler, linker and OS must agree on a format for the executable file. For example, Nachos uses the following format for executables:
    #define NOFFMAGIC 0xbadfad      /* magic number denoting Nachos
                                     * object code file
                                     */
    typedef struct segment {
        int virtualAddr;    /* location of segment in virt addr space */
        int inFileAddr;     /* location of segment in this file */
        int size;           /* size of segment */
    } Segment;

    typedef struct noffHeader {
        int noffMagic;      /* should be NOFFMAGIC */
        Segment code;       /* executable code segment */
        Segment initData;   /* initialized data segment */
        Segment uninitData; /* uninitialized data segment --
                             * should be zero'ed before use
                             */
    } NoffHeader;
  • What does the OS do when it loads an executable in?
    • Reads in the header part of the executable.
    • Checks to see if the magic number matches.
    • Figures out how much space it needs to hold the process. This includes space for the stack, the code, the initialized data and the uninitialized data.
    • If the entire process must be held in physical memory, it finds enough free physical memory to hold the process.
    • It then reads the code segment in from the file to physical memory.
    • It then reads the initialized data segment in from the file to physical memory.
    • It zeros the stack and uninitialized memory.
  • How does the operating system do IO? First, we give an overview of how the hardware does IO.
  • There are two basic ways to do IO - memory mapped IO and programmed IO.
    • Memory mapped IO - the control registers on the IO device are mapped into the memory space of the processor. The processor controls the device by performing reads and writes to the addresses that the IO device is mapped into.
    • Programmed IO - the processor has special IO instructions like IN and OUT. These control the IO device directly.
  • Writing the low level, complex code to control devices can be a very tricky business. So, the OS encapsulates this code inside things called device drivers. There are several standard interfaces that device drivers present to the kernel. It is the job of the device driver to implement its standard interface for its device. The rest of the OS can then use this interface and doesn't have to deal with complex IO code.
  • For example, Unix has a block device driver interface. All block device drivers support a standard set of calls like open, close, read and write. The disk device driver, for example, translates these calls into operations that read and write sectors on the disk.
  • Typically, IO takes place asynchronously with respect to the processor. So, the processor will start an IO operation (like writing a disk sector), then go off and do some other processing. When the IO operation completes, it interrupts the processor. The processor is typically vectored off to an interrupt handler, which takes whatever action needs to take place.
  • Here is how Nachos does IO. Each device presents an interface. For example, the disk interface is in disk.h, and has operations to start a read and write request. When the request completes, the "hardware" invokes the HandleInterrupt method.
  • Only one thread can use each device at a time. Also, threads typically want to use devices synchronously. So, for example, a thread will perform a disk operation then wait until the disk operation completes. Nachos therefore encapsulates the device interface inside a higher level interface that provides synchronous, synchronized access to the device. For the disk device, this interface is in synchdisk.h. This provides operations to read and write sectors, for example.
  • Each method in the synchronous interface ensures exclusive access to the IO device by acquiring a lock before it performs any operation on the device.
  • When the synchronous method gets exclusive access to the device, it performs the operation to start the IO. It then uses a semaphore (P operation) to block until the IO operation completes. When the IO operation completes, it invokes an interrupt handler. This handler performs a V operation on the semaphore to unblock the synchronous method. The synchronous method then releases the lock and returns back to the calling thread.

  • Memory Management


  • Point of memory management algorithms - support sharing of main memory. We will focus on having multiple processes sharing the same physical memory. Key issues:
    • Protection. Must allow one process to protect its memory from access by other processes.
    • Naming. How do processes identify shared pieces of memory?
    • Transparency. How transparent is sharing? Does the user program have to manage anything explicitly?
    • Efficiency. Any memory management strategy should not impose too much of a performance burden.
  • Why share memory between processes? Because we want to multiprogram the processor: to time-share the system, and to overlap computation and I/O. So, we must provide for multiple processes to be resident in physical memory at the same time. Processes must share the physical memory.
  • Historical Development.
    • For the first computers, one program was loaded onto the machine and executed to completion. No sharing required. The OS was just a subroutine library, and there was no protection. What addresses does the program generate? Answer: physical addresses, directly.
    • Desire to increase processor utilization in the face of long I/O delays drove the adoption of multiprogramming. So, one process runs until it does I/O, then the OS lets another process run. How do processes share memory? Alternatives:
      • Load both processes into memory, then switch between them under OS control. Must relocate the program when loading it. Big problem: protection. A bug in one process can kill the other process. MS-DOS and MS-Windows used this strategy.
      • Copy the entire memory of a process to disk when it does I/O, then copy it back when it restarts. No need to relocate at load time. Obvious performance problems. Early versions of Unix did this.
      • Do access checking on each memory reference. Give each program a piece of memory that it can access, and on every memory reference check that it stays within its address space. Typical mechanism: base and bounds registers. Where is check done? Answer: in hardware for speed. When OS runs process, loads the base and bounds registers for that process. Cray-1 did this. Note: there is now a translation process. Program generates virtual addresses that get translated into physical addresses. But, no longer have a protection problem: one process cannot access another's memory, because it is outside its address space. If it tries to access it, the hardware will generate an exception.
  • End up with a model where physical memory of machine is dynamically allocated to processes as they enter and exit the system. Variety of allocation strategies: best fit, first fit, etc. All suffer from external fragmentation. In worst case, may have enough memory free to load a process, but can't use it because it is fragmented into little pieces.
  • What if cannot find a space big enough to run a process? Either because of fragmentation or because physical memory is too small to hold all address spaces. Can compact and relocate processes (easy with base and bounds hardware, not so easy for direct physical address machines). Or, can swap a process out to disk then restore when space becomes available. In both cases incur copying overhead. When move process within memory, must copy between memory locations. When move to disk, must copy back and forth to disk.
  • One way to avoid external fragmentation: allocate physical memory to processes in fixed size chunks called page frames. Present abstraction to application of a single linear address space. Inside machine, break address space of application up into fixed size chunks called pages. Pages and page frames are same size. Store pages in page frames. When process generates an address, dynamically translate to the physical page frame which holds data for that page.
  • With paging, still have protection. One process cannot access a piece of physical memory unless its page table points to that physical page. So, if the page tables of two processes point to different physical pages, the processes cannot access each other's physical memory.

  • Introduction to Paging


  • Basic idea: allocate physical memory to processes in fixed size chunks called page frames. Present abstraction to application of a single linear address space. Inside machine, break address space of application up into fixed size chunks called pages. Pages and page frames are same size. Store pages in page frames. When process generates an address, dynamically translate to the physical page frame which holds data for that page.
  • So, a virtual address now consists of two pieces: a page number and an offset within that page. Page sizes are typically powers of 2; this simplifies extraction of page numbers and offsets. To access a piece of data at a given address, system automatically does the following:
    • Extracts page number.
    • Extracts offset.
    • Translate page number to physical page frame id.
    • Accesses data at offset in physical page frame.
  • How does system perform translation? Simplest solution: use a page table. Page table is a linear array indexed by virtual page number that gives the physical page frame that contains that page. What is lookup process?
    • Extract page number.
    • Extract offset.
    • Check that page number is within address space of process.
    • Look up page number in page table.
    • Add the offset to the base address of the resulting physical page frame.
    • Access the memory location.
    Problem: for each memory access that the processor generates, the system must now perform two physical memory accesses: one for the page table entry and one for the data itself.
  • Speed up the lookup with a cache: store the most recent page lookup values in a TLB (translation lookaside buffer). TLB design options: fully associative, direct mapped, set associative, etc. A direct-mapped TLB can be made larger for a given amount of circuit space.
  • How does lookup work now?
    • Extract page number.
    • Extract offset.
    • Look up page number in TLB.
    • If there, add offset to physical page number and access memory location.
    • Otherwise, trap to OS. OS performs check, looks up physical page number, and loads translation into TLB. Restarts the instruction.
  • Like any cache, TLB can work well, or it can work poorly. What is a good and bad case for a direct mapped TLB? What about fully associative TLBs, or set associative TLB?
  • Fixed size allocation of physical memory in page frames dramatically simplifies allocation algorithm. OS can just keep track of free and used pages and allocate free pages when a process needs memory. There is no fragmentation of physical memory into smaller and smaller allocatable chunks.
  • But, are still pieces of memory that are unused. What happens if a program's address space does not end on a page boundary? Rest of page goes unused. Book calls this internal fragmentation.
  • How do processes share memory? The OS makes their page tables point to the same physical page frames. Useful for fast interprocess communication mechanisms. This is very nice because it allows transparent sharing at speed.
  • What about protection? There are a variety of protections:
    • Preventing one process from reading or writing another process' memory.
    • Preventing one process from reading another process' memory.
    • Preventing a process from reading or writing some of its own memory.
    • Preventing a process from reading some of its own memory.
    How is this protection integrated into the above scheme?
  • Preventing a process from reading or writing memory: the OS refuses to establish a mapping from the virtual address space to the physical page frame containing the protected memory. When the program attempts to access this memory, the OS will typically generate a fault. If the user process catches the fault, it can take action to fix things up.
  • Preventing a process from writing memory, but allowing a process to read memory. OS sets a write protect bit in the TLB entry. If process attempts to write the memory, OS generates a fault. But, reads go through just fine.
  • Virtual Memory Introduction.
  • When a segmented system needed more memory, it swapped segments out to disk and then swapped them back in again when necessary. Page based systems can do something similar on a page basis.
  • Basic idea: when the OS needs a physical page frame to store a page and there are none free, it can select one page and store it out to disk. It can then use the newly freed page frame for the new page. Some pragmatic considerations:
    • In practice, it makes sense to keep a few free page frames. When number of free pages drops below this threshold, choose a page and store it out. This way, can overlap I/O required to store out a page with computation that uses the newly allocated page frame.
    • In practice the page frame size usually equals the disk block size. Why?
    • Do you need to allocate disk space for a virtual page before you swap it out? (Not if you always keep one page frame free.) Why did BSD do this? At some point the OS must refuse to allocate a process more memory because it has no swap space. When can this happen? (malloc, stack extension, new process creation).
  • When a process tries to access paged-out memory, the OS must run off to the disk, find a free page frame, read the page back off of disk into the page frame, and restart the process.
  • What is advantage of virtual memory/paging?
    • Can run programs whose virtual address space is larger than physical memory. In effect, one process shares physical memory with itself.
    • Can also flexibly share machine between processes whose total address space sizes exceed the physical memory size.
    • Supports a wide range of user-level stuff - See Li and Appel paper.
  • Disadvantages of VM/paging: extra resource consumption.
    • Memory overhead for storing page tables. In extreme cases, page table may take up a significant portion of virtual memory. One Solution: page the page table. Others: go to a more complicated data structure for storing virtual to physical translations.
    • Translation overhead.
