4. Cooperating Processes

  1. An independent process cannot affect or be affected by the execution of another process.
  2. A cooperating process can affect or be affected by the execution of another process.
  3. Advantages of process cooperation
  • Information sharing
  • Computation speed-up
  • Modularity
  • Convenience

5. Interprocess Communication (IPC)

Mechanism for processes to communicate and to synchronize their actions.
  1. Message system – processes communicate with each other without resorting to shared variables.
  2. IPC facility provides two operations:
  • send(message) – message size fixed or variable
  • receive(message)

3. If P and Q wish to communicate, they need to:

  • establish a communication link between them
  • exchange messages via send/receive (a minimal sketch follows this list)

4. Implementation of communication link:

  • physical (e.g., shared memory, hardware bus) – considered later
  • logical (e.g., logical properties) – considered now
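
A minimal message-passing sketch in C, assuming a POSIX pipe between a parent and child; the buffer size and message text are illustrative assumptions, not part of the notes:

/* Message passing between parent and child via a POSIX pipe (illustrative only). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {               /* child: plays the role of receive(message) */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }

    /* parent: plays the role of send(message) */
    close(fd[0]);
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}

No shared variables are used: the two processes exchange data only through the kernel-managed link, which is the point of a message system.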

1. Concept of a Process
An operating system executes a variety of programs:


  • Batch system – jobs
  • Time-shared systems – user programs or tasks

Textbook uses the terms job and process almost interchangeably.

Process – a program in execution; process execution must progress in sequential fashion.

A process includes:

  • program counter
  • stack

  • data section

Process State

As a process executes, it changes state:

  • new: The process is being created.
  • running: Instructions are being executed.

  • waiting: The process is waiting for some event to occur.

  • ready: The process is waiting to be assigned to a processor.

  • terminated: The process has finished execution.


Process Control Block


Information associated with each process.


  • Process ID
  • Process state
  • Program counter
  • CPU registers
  • CPU scheduling information
  • Memory-management information
  • Accounting information
  • I/O status information
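
A rough C sketch of the kind of record a PCB might be; the field names, sizes, and types are illustrative assumptions (a real kernel's PCB, e.g. Linux's task_struct, is far larger):

/* Illustrative PCB layout; field names and types are assumptions, not a real kernel's. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;              /* process ID                          */
    enum proc_state  state;            /* new, ready, running, waiting, ...   */
    unsigned long    program_counter;  /* address of the next instruction     */
    unsigned long    registers[16];    /* saved CPU registers                 */
    int              priority;         /* CPU-scheduling information          */
    void            *page_table;       /* memory-management information       */
    unsigned long    cpu_time_used;    /* accounting information              */
    int              open_files[20];   /* I/O status information              */
    struct pcb      *next;             /* link for ready/device queues        */
};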





Threads


In computer science, a thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each processor or core running a particular thread or task. Support for threads in programming languages varies: a number of languages simply do not support having more than one execution context inside the same program executing at the same time.
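
A small sketch, assuming POSIX threads, of two threads sharing the same process's data section; the shared counter and iteration count are illustrative (compile with -pthread):

/* Two threads sharing one process's data section (POSIX threads). */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                 /* shared: lives in the process's data section */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* threads share memory, so access is synchronized */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000: both threads update the same variable */
    return 0;
}

Two separate processes running the same loop would each have their own copy of counter; the shared result here is what distinguishes threads from processes.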

2. Process Scheduling

Scheduling Queues
Job queue – set of all processes in the system.
Ready queue – set of all processes residing in main memory, ready and waiting to execute.
Ready Queue and Various I/O Device Queues (figure omitted)

Device queues – set of processes waiting for an I/O device.
Processes migrate between the various queues.
Schedulers
Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.





Short-term scheduler is invoked very frequently (milliseconds) => must be fast.
Long-term scheduler is invoked very infrequently (seconds, minutes) => may be slow.
The long-term scheduler controls the degree of multiprogramming.
Processes can be described as either:

  • I/O-bound process – spends more time doing I/O than computations, many short CPU bursts.
  • CPU-bound process – spends more time doing computations; few very long CPU bursts.
Context Switch

  • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process.

  • Context-switch time is overhead; the system does no useful work while switching.

  • Time dependent on hardware support.

3. Operations on Processes

--Process Creation

  1. Parent process creates child processes, which in turn create other processes, forming a tree of processes.
  2. Resource sharing
  • Parent and children share all resources.
  • Children share subset of parent’s resources.
  • Parent and child share no resources.


3. Execution

  • Parent and children execute concurrently.
  • Parent waits until children terminate.

4. Address space

  • Child duplicate of parent.
  • Child has a program loaded into it.
UNIX examples (a minimal sketch follows this list)

  • fork system call creates a new process
  • fork returns 0 to the child and the child's process ID to the parent
  • exec system call used after a fork to replace the process's memory space with a new program.
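
A minimal sketch of the fork/exec/wait pattern just listed; the program the child runs (/bin/ls) is only an example:

/* fork(), exec(), wait(): parent creates a child that runs a new program. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* returns 0 in the child, the child's pid in the parent */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* child: replace its memory space with a new program */
        execlp("/bin/ls", "ls", "-l", (char *)NULL);
        perror("exec");              /* reached only if exec fails */
        exit(1);
    } else {
        /* parent: wait for the child and collect its exit status */
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}

The wait/waitpid call is also how the "output data from child to parent (via wait)" step of process termination, described below, is realized.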

--Process Termination

Process executes its last statement and asks the operating system to delete it (exit).

  • Output data from child to parent (via wait).
  • Process's resources are deallocated by the operating system.

Parent may terminate execution of child processes (abort) when:

  • Child has exceeded allocated resources.
  • Task assigned to child is no longer required.
  • Parent is exiting.
    ----------- Some operating systems do not allow a child to continue if its parent terminates.
    ----------- Cascading termination: all of that process's children are terminated as well.
  • In UNIX, if the parent exits, children are assigned init as their new parent.

1. What are the major activities of an Operating System with regards to Process Management?

The operating system is responsible for the following activities in connection with process management.
  • Process creation and deletion.
  • Process suspension (process is in I/O wait queue, or “swapped” out to disk, …) and resumption (move to ready queue or execution) – manage the state of the process.

  • Provision of mechanisms for:
  • Process synchronization – concurrent processing is supported, hence the need for synchronization of processes or threads.
  • Process communication
  • Deadlock handling

2. What are the major activities of an Operating System with regards to Memory Management?

-Keep track of which parts of memory are currently being used and by whom.
-Decide which processes to load when memory space becomes available - long term or medium term scheduler.
-Mapping addresses in a process to absolute memory addresses - at load time or run time.
-Allocate and deallocate memory space as needed.
-Memory partitioning, allocation, paging (virtual memory), address translation, defragmentation.
-Memory protection.

3. What are the major activities of an Operating System with regards to Secondary-Storage Management?

--Free space management
--Storage allocation
--Disk scheduling – minimize seeks (arm movement … very slow operation)
--Disk as the media for mapping virtual memory space
--Disk caching for performance
--Disk utilities: defrag, recovery of lost clusters, etc.

4. What are the major activities of an Operating System with regards to File Management?

  • File creation and deletion - system calls or commands.
  • Directory creation and deletion - system calls or commands.
  • Support of primitives for manipulating files and directories in an efficient manner - system calls or commands.
  • Mapping files onto secondary storage.
  • File backup on stable (nonvolatile) storage media.

EX: File Allocation Table (FAT) for Windows/PC systems

5. What is the purpose of the command-interpreter?

The program that reads and interprets control statements is called variously:

·command-line interpreter (Control card interpreter in the “old batch days”)

·shell (in UNIX); COMMAND.COM (for external commands) in DOS

Its function is to get and execute the next command statement (a toy sketch of this loop follows).
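
A toy sketch of that get-and-execute loop, assuming POSIX fork/exec/wait; the prompt string and the lack of argument parsing are simplifications:

/* Toy command interpreter: read a command, fork, exec, wait (no argument parsing). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];

    for (;;) {
        printf("osh> ");                         /* illustrative prompt */
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                               /* EOF: leave the shell */
        line[strcspn(line, "\n")] = '\0';        /* strip the trailing newline */
        if (line[0] == '\0')
            continue;

        if (fork() == 0) {
            execlp(line, line, (char *)NULL);    /* run the command */
            perror("exec");
            _exit(1);
        }
        wait(NULL);                              /* shell waits for the command to finish */
    }
    return 0;
}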

(0) System Boot

Operating system must be made available to hardware so hardware can start it

  • Small piece of code – bootstrap loader, locates the kernel, loads it into memory, and starts it
  • Sometimes two-step process where boot block at fixed location loads bootstrap loader
  • When power initialized on system, execution starts at a fixed memory location
    -------Firmware used to hold initial boot code

(0) System Generation

(computer science) A process that creates a particular and uniquely specified operating system; it combines user-specified options and parameters with manufacturer-supplied general-purpose or nonspecialized program subsections to produce an operating system (or other complex software) of the desired form and capacity. Abbreviated sysgen.

(0) Virtual Machine

A virtual machine was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine". Current use includes virtual machines which have no direct correspondence to any real hardware.[1] Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine -- it cannot break out of its virtual world.
A virtual machine, simply put, is a virtual computer running on a physical computer. The virtual machine emulates a physical machine in software. This includes not only the processor but the instruction set, the memory bus, any BIOS commands and critical machine hardware such as the system clock and DMA hardware. Depending upon the machine, peripheral devices are generally virtualized, including storage devices like floppy drives, hard drives and CD drives. Video, keyboard and mouse support are also common. A virtual machine must look and act just like the real thing so standard software, like operating systems and applications, can run without modification.


  • Implementation


Modes:
>>virtual user mode and virtual monitor mode,
>>Actual user mode and actual monitor mode
Time
>>Whereas the real I/O might have taken 100 milliseconds, the virtual I/O might take less time (because it is spooled) or more time (because it is interpreted.)
>>The CPU is being multi-programmed among many virtual machines, further slowing down the virtual machines in unpredictable ways.

  • Benefits

For testers it is important that they test software against the various supported operating systems that an application runs against. The traditional approach is to run multiple physical machines, each with a different operating system. This is bad for several reasons. Space, maintenance, power and feasibility come to mind. Deployment of the software to these various machines can also be an issue. Instead a tester can run multiple virtual machines on one physical machine. Each virtual machine could have a different operating system. The application can be deployed to the virtual machines and tested.

Another advantage of virtual machines is reproducibility. Build and test environments generally need to be well controlled. It would be undue work to have to wipe out a machine and rebuild it after each build or test run. A virtual machine allows the environment to be set up once. The environment is then captured. Any changes made after the capture can then be thrown away after the build or test run. Most emulation software packages offer this in some form or another.


Two advantages
>>Provides a robust level of security, since each virtual machine is isolated from the others; the trade-off is that there is no direct sharing of resources.
>>Allows system development to be done easily: the virtual machine is a perfect vehicle for OS research and development.
One drawback
>>Difficult to implement, due to the effort required to provide an exact duplicate of the underlying machine.
Related example
>>Wine for Linux


  • Examples

A program written in Java receives services from the Java Runtime Environment (JRE) software by issuing commands to, and receiving the expected results from, the Java software. By providing these services to the program, the Java software is acting as a "virtual machine", taking the place of the operating system or hardware for which the program would ordinarily be tailored.

(0) System Structure


  • Simple Structure

-any part of the system may use the functionality of the rest of the system

-MS-DOS (user programs can call low-level I/O routines)

View the OS as a series of levels:

–Each level performs a related subset of functions

–Each level relies on the next lower level to perform more primitive functions

–This decomposes a problem into a number of more manageable subproblems

  • Layered Approach

    The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.

    With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers

---Properties

 Simplicity of construction
 Simplicity of debugging

---Problems

 Precise definition of the layers
   Example: the memory manager requires the device driver of the backing store (due to virtual memory)
   The device driver requires the CPU scheduler (since if the driver waits for I/O, another task should be scheduled)
   The CPU scheduler may require virtual memory to hold the large amount of information about some processes
 Less efficiency, due to the number of layers a request must pass through

(0) System Calls

System calls provide the interface between a running program and the operating system
Generally available as assembly-language instructions
Languages defined to replace assembly language for systems programming allow system calls to be made directly (e.g., C, C++)
Types of System Calls
  • Process control

create/terminate a process (including self)

  • File management
    Also referred to as simply a file system or filesystem. The system that an operating system or program uses to organize and keep track of files. For example, a hierarchical file system is one that uses directories to organize files into a tree structure. Although the operating system provides its own file management system, you can buy separate file management systems. These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.
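
A small sketch of typical file-manipulation system calls on a POSIX system; the file name and the text written are illustrative assumptions:

/* Creating, writing, reading, and deleting a file via POSIX system calls. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[32];
    const char *text = "hello, file system\n";

    int fd = open("example.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);  /* create/open */
    if (fd < 0) { perror("open"); return 1; }
    write(fd, text, strlen(text));                                     /* write */
    close(fd);

    fd = open("example.txt", O_RDONLY);                                /* reopen for reading */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);                        /* read */
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
    close(fd);

    unlink("example.txt");                                             /* delete */
    return 0;
}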

  • Device management

Device Management is a set of technologies, protocols and standards used to allow the remote management of mobile devices, often involving updates of firmware over the air (FOTA). The network operator, handset OEM or in some cases even the end-user (usually via a web portal) can use Device Management, also known as Mobile Device Management, or MDM, to update the handset firmware/OS, install applications and fix bugs, all over the air. Thus, large numbers of devices can be managed with single commands and the end-user is freed from the requirement to take the phone to a shop or service center to refresh or update.

  • Information maintenance

– get time

– set system data (OS parameters)

– get process information (id, time used)

  • Communications

– establish a connection

– send, receive messages

– terminate a connection

(0) Operating System Services

  1. Program execution – system capability to load a program into memory and to run it
  2. I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O
  3. File-system manipulation – program capability to read, write, create, and delete files
  4. Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network; implemented via shared memory or message passing
  5. Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs

(0) System Components

  • Operating System Process Management

A process is a program in execution

A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.

The operating system is responsible for the following activities in connection with process management

  1. Process creation and deletion
  2. Process suspension and resumption
  3. Provision of mechanisms for:
  • process synchronization
  • process communication

  • Main Memory Management

1. Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices.

2. Main memory is a volatile storage device. It loses its contents in the case of system failure.

3. The operating system is responsible for the following activities in connection with memory management:

  • Keep track of which parts of memory are currently being used and by whom
  • Decide which processes to load when memory space becomes available
  • Allocate and deallocate memory space as needed

Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system.

  • File management

A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.

The operating system is responsible for the following activities in connections with file management:

  • File creation and deletion
  • Directory creation and deletion
  • Support of primitives for manipulating files and directories
  • Mapping files onto secondary storage
  • File backup on stable (nonvolatile) storage media

A file object provides a representation of a resource (either a physical device or a resource located on a physical device) that can be managed by the I/O system. Like other objects, they enable sharing of the resource, they have names, they are protected by object-based security, and they support synchronization. The I/O system also enables reading from or writing to the resource.

  • I/O System management

The I/O system consists of:

>A buffer-caching system

>A general device-driver interface

>Drivers for specific hardware devices

  • Secondary Storage Management

Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory

Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.

The operating system is responsible for the following activities in connection with disk management:

>> Free space management

>> Storage allocation

>> Disk scheduling

The I/O management subsystem controls all the input and output of the computer system. For the enforcement of security, the most important things that the I/O management subsystem does are:

· Managing the transfer of data.
· Enforcing access controls (the DAC mechanisms) on data while it is being transferred. See "Discretionary access control (DAC)" for more information on DAC.

During the transfer of blocks or streams of data, and during character I/O operation, each I/O transaction is completely separate from all others. It follows a well-known and well-defined path; therefore, the integrity of all data is maintained during data transactions.

  • Protection System

Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.

The protection mechanism must:

  1. distinguish between authorized and unauthorized usage
  2. specify the controls to be imposed
  3. provide a means of enforcement
  • Command-Interpreter system

Many commands are given to the operating system by control statements which deal with:

  1. Process creation and management
  2. I/O handling
  3. Secondary-storage management
  4. Main-memory management
  5. File-system access
  6. Protection
  7. Networking

A command interpreter is the part of a computer operating system that understands and executes commands that are entered interactively by a human being or from a program. In some operating systems, the command interpreter is called the shell.

(3) Hardware Protection


  • Dual Mode Operation

• Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.

• Provide hardware support to differentiate between at least two modes of operation.

1. User mode – execution done on behalf of a user.

2. Monitor mode (also supervisor mode or system mode) –execution done on behalf of operating system.

  • I/O Protection
• All I/O instructions are privileged instructions.
• Must ensure that a user program could never gain control of the computer in monitor mode (i.e., a user program that, as part of its execution, stores a new address in the interrupt vector).
  • Memory Protection
• Must provide memory protection at least for the interrupt vector and the interrupt service routines.
• In order to have memory protection, add two registers that determine the range of legal addresses a program may access:

– base register – holds the smallest legal physical memory address.
– limit register – contains the size of the range.

• Memory outside the defined range is protected (a conceptual sketch follows).
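
The base/limit check can be pictured as the following conceptual C sketch; the register values are made-up examples, and real hardware performs this comparison itself rather than in software:

/* Conceptual base/limit protection check (done by hardware on every memory access). */
unsigned long base  = 0x300000;   /* smallest legal physical address (example value) */
unsigned long limit = 0x120000;   /* size of the legal range (example value)         */

int address_is_legal(unsigned long addr)
{
    /* legal iff base <= addr < base + limit; otherwise the hardware traps to the OS */
    return addr >= base && addr < base + limit;
}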

  • CPU Protection

-to prevent a user program from getting stuck in an infinite loop and never returning control to the OS, a timer interrupts the program after a set period




(2) Storage Hierarchy

  • caching


--->>In computer science, a cache (pronounced /kæʃ/) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data.
A cache has proven to be extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality; the most relevant here is data that are accessed close together in time (temporal locality). The data might or might not be located physically close to each other (spatial locality).
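
A toy direct-mapped software cache in C, illustrating why repeated (temporal-locality) accesses become cheap; the cache size and the "expensive" backing computation are illustrative assumptions:

/* Toy direct-mapped cache: keep recently used values so repeated lookups
   avoid the expensive recompute/fetch. */
#include <stdbool.h>

#define CACHE_LINES 64

static struct cache_line { bool valid; unsigned key; long value; } cache[CACHE_LINES];

static long slow_fetch(unsigned key)          /* stands in for an expensive fetch or computation */
{
    return (long)key * key;
}

long cached_fetch(unsigned key)
{
    unsigned idx = key % CACHE_LINES;         /* direct mapping: each key maps to one slot */
    if (cache[idx].valid && cache[idx].key == key)
        return cache[idx].value;              /* hit: served from the cached copy */
    long v = slow_fetch(key);                 /* miss: fetch/compute the original data */
    cache[idx].valid = true;                  /* keep it for future accesses (temporal locality) */
    cache[idx].key   = key;
    cache[idx].value = v;
    return v;
}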



  • coherency and consistency


In computing, cache coherence (also cache coherency) refers to the integrity of data stored in local caches of a shared resource. Cache coherence is a special case of memory coherence.
When clients in a system maintain caches of a common memory resource, problems may arise with inconsistent data. This is particularly true of CPUs in a multiprocessing system. If one client has a copy of a memory block from a previous read and another client changes that memory block, the first client could be left with an invalid cache of memory without any notification of the change. Cache coherence is intended to manage such conflicts and maintain consistency between cache and memory.

cache consistency

-->>The synchronisation of data in multiple caches such that reading a memory location via any cache will return the most recent data written to that location via any (other) cache. Some parallel processors do not cache accesses to shared memory to avoid the issue of cache coherency. If caches are used with shared memory then some system is required to detect when data in one processor's cache should be discarded or replaced because another processor has updated that memory location. Several such schemes have been devised.


--Cache Consistency Mechanisms

There are several ways to maintain cache consistency of files placed in a cache: time-to-live fields, active invalidation protocols, and client polling. Time-to-live fields are efficient to set up, but are inaccurate.





(1) Storage Structure

main memory

--->>>Primary storage, presently known as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. Undoubtedly, a revolution was started with the invention of the transistor, which soon enabled then-unbelievable miniaturization of electronic memory via solid-state silicon chip technology. This led to modern random-access memory (RAM). It is small-sized and light, but quite expensive at the same time. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.)

  • Magnetic Disk
--->>>A memory device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small, magnetized needles.










--Moving Head Disk Mechanism





  • Magnetic tapes

--->>Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and playback audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
Magnetic tape revolutionized the broadcast and recording industries. In an age when all
radio (and later television) was live, it allowed programming to be prerecorded. In a time when gramophone records were recorded in one take, it allowed recordings to be created in multiple stages and easily mixed and edited with a minimal loss in quality between generations. It is also one of the key enabling technologies in the development of modern computers. Magnetic tape allowed massive amounts of data to be stored in computers for long periods of time and rapidly accessed when needed.

5. Device Status Table.

--->>Device-status table contains an entry for each I/O device indicating its type, address, and state.
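
A sketch of what one entry of such a table might look like in C; the field and type names are assumptions for illustration:

/* Illustrative device-status table entry; names and types are assumptions. */
enum dev_type  { DEV_DISK, DEV_TAPE, DEV_TERMINAL, DEV_PRINTER };
enum dev_state { DEV_IDLE, DEV_BUSY };

struct io_request;                    /* forward declaration: queued I/O requests */

struct device_entry {
    enum dev_type      type;          /* kind of device                          */
    unsigned long      address;       /* device address                          */
    enum dev_state     state;         /* idle or busy                            */
    struct io_request *wait_queue;    /* requests waiting for this device        */
};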



















1. Bootstrap Program

---->>In computing, bootstrapping (from an old expression "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. In the start-up process of a computer system, a small program such as the BIOS initializes and tests the hardware, checks that peripherals and external memory devices are connected, then loads a program from one of them and passes control to it, thus allowing the loading of larger programs, such as an operating system.
A different use of the term bootstrapping is to use a
compiler to compile itself, by first writing a small part of a compiler of a new programming language in an existing language to compile more programs of the new compiler written in the new language. This solves the "chicken and egg" causality dilemma.

2. Difference between an interrupt and a trap, and their uses.

---->>In computing, an interrupt is an asynchronous signal indicating the need for attention or a synchronous event in software indicating the need for a change in execution.
A hardware interrupt causes the
processor to save its state of execution via a context switch, and begin execution of an interrupt handler.
Software interrupts are usually implemented as
instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt.
Interrupts are a commonly used technique for
computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven. An act of interrupting is referred to as an interrupt request (IRQ).

---->>In computing and operating systems, a trap is a type of synchronous interrupt typically caused by an exceptional condition (e.g. division by zero or invalid memory access) in a user process. A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program or debugger.
In SNMP, a trap is a type of PDU used to report an alert or other asynchronous event about a managed subsystem.

4. User Mode

---->>In User mode, the executing code has no ability to directly access hardware or reference memory. Code running in user mode must delegate to system APIs to access hardware or memory. Due to the protection afforded by this sort of isolation, crashes in user mode are always recoverable. Most of the code running on your computer will execute in user mode.

3. Monitor Mode

--->>In the operating-system context, monitor mode (also called supervisor or system mode) is the privileged CPU mode in which execution is done on behalf of the operating system (see Dual Mode Operation above). The term is also used in networking: Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

6. Direct Memory Access


---->>Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor system-on-chips, where its processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time and allowing computation and data transfer concurrency.
Without DMA, using
programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU would initiate the transfer, do other operations while the transfer is in progress, and receive an interrupt from the DMA controller once the operation has been done. This is especially useful in real-time computing applications where not stalling behind concurrent operations is critical. Another and related application area is various forms of stream processing where it is essential to have data processing and transfer in parallel, in order to achieve sufficient throughput.


7. Difference between RAM and DRAM.


--->>Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1]
By contrast, storage devices such as tapes, magnetic discs and optical discs rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than data transfer, and the retrieval time varies based on the physical location of the next item.
The word RAM is often associated with
volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. Many other types of memory are RAM, too, including most types of ROM and a type of flash memory called NOR-Flash.
---->>Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to four transistors in SRAM. This allows DRAM to reach very high
density. Unlike Flash memory, it is volatile memory (cf. non-volatile memory), since it loses its data when the power supply is removed.


8. Main memory


--->>>Primary storage, presently known as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. Undoubtedly, a revolution was started with the invention of the transistor, which soon enabled then-unbelievable miniaturization of electronic memory via solid-state silicon chip technology.
This led to modern random-access memory (RAM). It is small-sized and light, but quite expensive at the same time. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.)

9. Magnetic Disk

--->>Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2009, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.

10. Storage Hierarchy

---->>>The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy has the properties of higher bandwidth, smaller size, and lower latency than lower levels.
Most modern
CPUs are so fast that for most program workloads, the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy are the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small/fast level and require use of a larger/slower level.