
Tuesday, June 23, 2009

1. Bootstrap program

A "bootstrap" most commonly refers to the simple program itself that actually begins the initialization of the computer's operating system, like GRUB, LILO or NTLDR. Modern personal computers have the ability of using their network interface card (NIC) for bootstrapping; on IA-32 (x86) and IA-64 (Itanium) this method is implemented by PXE and Etherboot.

The computer is regarded as starting in a "blank slate" condition: either its main memory is blank, or its contents are suspect due to a prior crash. Although magnetic-core memory retains its state with the power off, there would still be the problem of loading the very first program into it. A complete start-up program could be very large and, given the modern affordability of read-only memory chips, could even constitute the entire program to be run (as in embedded systems), but such an arrangement is inflexible. The bootstrap proper is therefore a short, simple piece of code that loads the main code.
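To make that division of labor concrete, here is a minimal sketch in C of what a second-stage loader does. Everything in it is hypothetical - read_sector(), the sector numbers and the load address are invented for the example, and real boot code is firmware- and architecture-specific (often assembly):

    #include <stdint.h>

    /* Hypothetical firmware service: read one 512-byte sector from the
       boot device into dest. Invented for this sketch. */
    extern void read_sector(uint32_t lba, void *dest);

    #define KERNEL_LBA      1         /* assumed: kernel image starts at sector 1 */
    #define KERNEL_SECTORS  128       /* assumed size of the kernel image */
    #define KERNEL_LOAD     ((uint8_t *)0x100000)  /* assumed load address */

    void boot(void)
    {
        /* The bootstrap stays tiny: it only copies the main code into memory... */
        for (uint32_t i = 0; i < KERNEL_SECTORS; i++)
            read_sector(KERNEL_LBA + i, KERNEL_LOAD + i * 512);

        /* ...and then jumps to its entry point; the OS takes over from here. */
        void (*kernel_entry)(void) = (void (*)(void))KERNEL_LOAD;
        kernel_entry();
    }

The point is only the shape: a short loop that copies the main code into memory, then a jump that hands over control.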


2. Difference between interrupt and trap, and their use.

The difference between an interrupt and a trap is that, in computing and operating systems, a trap is a type of synchronous interrupt typically caused by an exceptional condition (e.g., division by zero or invalid memory access) in a user process. A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program or debugger.[1] In SNMP, a trap is a type of PDU used to report an alert or other asynchronous event about a managed subsystem.

An interrupt, by contrast, is an asynchronous signal indicating the need for attention, or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution via a context switch and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing; such a system is said to be interrupt-driven.[1] An act of interrupting is referred to as an interrupt request (IRQ).
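The distinction can be observed from ordinary C on a POSIX system, because the kernel reflects many traps back to the offending process as signals: the division below executes synchronously, traps, and is delivered as SIGFPE. A minimal sketch (integer division by zero is formally undefined behavior in C, so this is illustrative only):

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf env;

    /* The kernel delivers the divide-by-zero trap to us as SIGFPE. */
    static void on_fpe(int sig)
    {
        (void)sig;
        siglongjmp(env, 1);        /* skip past the faulting instruction */
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = on_fpe;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGFPE, &sa, NULL);

        volatile int zero = 0;
        if (sigsetjmp(env, 1) == 0)
            printf("%d\n", 1 / zero);        /* synchronous trap, same instruction */
        else
            puts("trapped: division by zero");
        return 0;
    }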

3. Monitor mode

Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.


4. User mode. The usermode package contains the userhelper program, which can be used to allow configured programs to be run with superuser privileges by ordinary users, and several graphical tools for users:

  • userinfo allows users to change their finger information.

  • usermount lets users mount, unmount, and format filesystems.

  • userpasswd allows users to change their passwords.


5. Device Status Table

A device-status table is a kernel data structure with one entry per I/O device, recording the device's type, address, and state (idle or busy). For a busy device the operating system also keeps a queue of pending requests, so that when a completion interrupt arrives it can update the entry and start the next request (a sketch in C follows).


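As a rough sketch of the structure this heading refers to, modeled on the textbook description rather than any real kernel (all field names are invented for illustration):

    /* One entry per I/O device; the OS consults and updates this table
       when requests are issued and when completion interrupts arrive. */
    enum dev_state { DEV_IDLE, DEV_BUSY };

    struct io_request {
        void              *buffer;    /* where to read into / write from */
        unsigned long      length;    /* byte count */
        struct io_request *next;      /* FIFO queue of waiting requests */
    };

    struct device_entry {
        const char        *name;      /* e.g. "disk0", "printer0" */
        unsigned int       address;   /* device/controller address */
        enum dev_state     state;     /* idle or busy */
        struct io_request *queue;     /* pending requests for this device */
    };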

6. Direct memory access (DMA) IO

  • requires a DMA-enhanced controller
  • the controller is given a command: (read or write, the memory address of the file manager's buffer, a byte count) - see the register sketch after this list
  • for a read, the controller copies a block of count bytes from its internal buffer to the memory address in the file manager
  • for a write, the controller copies a block of count bytes from the memory address in the file manager to its internal buffer
  • the controller then generates an interrupt, now that it has copied all requested bytes from or to the file manager's buffer
  • the only downside is that the device controller and the CPU must share the system bus to RAM while the controller copies bytes to or from the file manager's buffer in RAM and the CPU reads and writes RAM
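The register sketch promised above: a C fragment showing how a driver might program such a controller. The register layout and MMIO address are invented for illustration, and a real driver would block until the completion interrupt rather than poll:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers. */
    struct dma_regs {
        volatile uint32_t command;    /* 1 = read, 2 = write */
        volatile uint32_t address;    /* buffer address in main memory */
        volatile uint32_t count;      /* number of bytes to transfer */
        volatile uint32_t status;     /* bit 0 set when transfer complete */
    };

    #define DMA ((struct dma_regs *)0xFFFF0000u)   /* assumed MMIO address */
    enum { DMA_READ = 1, DMA_WRITE = 2 };

    /* Read nbytes from the device into the file manager's buffer. The CPU
       only issues the command; the controller moves the bytes itself. */
    void dma_read(void *buf, uint32_t nbytes)
    {
        DMA->address = (uint32_t)(uintptr_t)buf;
        DMA->count   = nbytes;
        DMA->command = DMA_READ;
        while (!(DMA->status & 1u))   /* poll; an interrupt would replace this */
            ;
    }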

7. Difference between RAM and DRAM

RAM - Random-access memory is a second memory section. This is where the computer stores information that you enter through the keyboard. The amount, or capacity, of memory depends on the selected system. RAM is referred to as the workspace: you can write to it or erase it with just a few keystrokes.

By flashing letters and numbers on the screen, the computer communicates with the operator, telling him, for example, that it is ready to receive the next program. The operator uses the keyboard to give the computer working directions or, in some cases, program material or data.

DRAM - Dynamic random-access memory is a type of random-access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory, as opposed to SRAM and other static memory.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to four transistors in SRAM. This allows DRAM to reach very high density. Unlike flash memory, it is volatile memory (cf. non-volatile memory), since it loses its data when the power supply is removed.

8. Storage Structure

  • Main memory. All program execution and data processing take place in memory, often called "main memory" to differentiate it from memory chips on other circuit boards in the machine. A program's instructions are copied into memory from disk, tape or the network and then fetched from memory into the control unit for analysis and execution. The instructions direct the computer to input data into memory from a keyboard, disk, tape, modem or network; as data are entered into memory, the previous contents of that space are lost. Once the data are in memory, they can be processed (calculated, compared and copied), and the results are sent to a screen, printer, disk, tape, modem or network.
    Memory is like an electronic checkerboard, with each square holding one byte of data or instruction. Each square has a separate address, like a post-office box, and can be manipulated independently. As a result, the computer can break programs apart into instructions for execution and data records into fields for processing.
    Memory is an important resource that cannot be wasted. It must be allocated by the operating system as well as by applications and then released when no longer needed. Errant programs can grab memory and not let go of it, which results in less and less memory being available as you load and use more programs. Restarting the computer gives memory a clean slate, which is why rebooting clears up so many problems with applications.
    In addition, if the operating system has bugs, a malfunctioning application can write into memory used by another program, causing all kinds of unspecified behavior; you discover it when the system freezes or something weird suddenly happens. If you could watch how fast data and instructions move into and out of memory over even ten minutes, you would know it is a small miracle that it works at all.
    Other terms for the computer's main memory are RAM, primary storage and read/write memory; earlier terms were core and core storage.
  • Magnetic Disk. Provides direct-access storage. Data are stored as magnetic spots on the disks that make up the pack. As the pack spins, a comb-like assembly of read/write heads constantly moves in and out of the spaces between the individual disks, "reading" the data they contain or adding new data. A magnetic disk is a memory device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small, magnetized needles.

[Figure: moving-head disk mechanism]

  • Magnetic Tape - An early secondary-storage medium of choice. It is persistent, inexpensive, and has a large data capacity, but access is very slow due to its sequential nature, so it is used for backup and for storing infrequently used data. Tapes are kept on spools; transfer rates are comparable to disk if the read/write head is already positioned at the data. 20-200 GB are typical storage capacities.

9. Storage Hierarchy

The range of memory and storage devices within the computer system, running from small, fast, expensive storage (registers and cache) at the top to large, slow, inexpensive storage (disk and tape) at the bottom. Two concepts tied to this hierarchy:

  • Caching - A cache is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data are expensive to fetch (owing to longer access time) or to compute, compared with the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data are stored in the cache, future use can access the cached copy rather than re-fetching or recomputing the original data. (A toy sketch in C follows this list.)

  • Coherency and Consistency - (Coherency) The integrity of data stored in the local caches of a shared resource; cache coherence is a special case of memory coherence. When clients in a system maintain caches of a common memory resource, problems may arise with inconsistent data. This is particularly true of CPUs in a multiprocessing system: if one client has a copy of a memory block from a previous read and another client changes that memory block, the first client could be left with an invalid cache of memory without any notification of the change. Cache coherence is intended to manage such conflicts and maintain consistency between cache and memory. (Consistency) A set of rules governing how the memory system will process memory operations from multiple processors; it is a contract between the programmer and the system, and it determines what optimizations can be performed for correct programs.
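The caching idea in miniature, as promised above: a direct-mapped software cache in C placed in front of an "expensive" computation, with a tag per slot so a collision recomputes instead of returning stale data. All names are invented for illustration:

    #include <stdio.h>

    #define CACHE_SLOTS 64

    static long cache_val[CACHE_SLOTS];
    static int  cache_tag[CACHE_SLOTS];
    static int  cache_full[CACHE_SLOTS];

    /* Stand-in for data that is expensive to fetch or compute. */
    static long expensive(int n)
    {
        long x = 0;
        for (int i = 0; i <= n; i++)
            x += (long)i * i;
        return x;
    }

    long cached(int n)
    {
        int slot = n % CACHE_SLOTS;                 /* direct-mapped placement */
        if (!cache_full[slot] || cache_tag[slot] != n) {
            cache_val[slot]  = expensive(n);        /* miss: fetch and remember */
            cache_tag[slot]  = n;
            cache_full[slot] = 1;
        }
        return cache_val[slot];                     /* hit: cheap cached copy */
    }

    int main(void)
    {
        printf("%ld\n", cached(1000));   /* miss: computed */
        printf("%ld\n", cached(1000));   /* hit: read from the cache */
        return 0;
    }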

10. Hardware Protection

  • Dual-mode Operation - Hardware support for (at least) two modes of execution: user mode and monitor (kernel) mode, distinguished by a mode bit. The machine starts, and handles interrupts and traps, in monitor mode; user programs run in user mode. Privileged instructions, such as those that perform I/O or set the timer, can be executed only in monitor mode, so a user program that attempts one traps to the operating system instead of executing it.

  • I/O Protection - To prevent illegal I/O, or simultaneous I/O requests from multiple processes, all I/O is performed via privileged instructions; user programs must make a system call to the OS to perform I/O. When a user process makes a system call, a trap (software-generated interrupt) occurs, which causes the appropriate trap handler to be invoked via the trap vector and kernel mode to be set. The trap handler saves state, performs the requested I/O (if appropriate), then restores state, sets user mode, and returns to the calling program. (A small example follows this list.)

  • Memory Protection - A technique that prohibits one program from accidentally clobbering another active program. Using various techniques, a protective boundary is created around the program, and instructions within the program are prohibited from referencing data outside of that boundary.
    When a program does go outside of its boundary, DOS, Windows 3.x, Windows 95/98 and earlier personal computer operating systems simply lock up (crash, bomb, abend, etc.). Operating systems such as Unix, OS/2 and Windows NT, 2000 and XP are more robust and generally allow the errant program to be closed without affecting the remaining active programs.

  • CPU Protection - A timer is used to prevent a user program from getting stuck in an infinite loop and never returning control to the OS: after a set interval the timer interrupts, and control returns to the operating system.
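To make the I/O-protection point concrete: on a POSIX system even printing a string is a request to the kernel, not direct device access. The write() call below traps into kernel mode, where the OS validates the descriptor and buffer before touching any hardware:

    #include <unistd.h>

    int main(void)
    {
        /* No user-mode code touches the terminal hardware; write() traps
           into the kernel, which performs the privileged I/O on our behalf. */
        const char msg[] = "hello via system call\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }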

Saturday, June 20, 2009

Differences between user's view and system's view

Operating System can be explored from two viewpoints: the user and the system.

User view

The user view of the computer varies by the interface being used. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse and system unit. Such a system is designed for one user to monopolize its resources, to maximize the work that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance, and none paid to resource utilization.

Some users sit at a terminal connected to a mainframe or minicomputer. Other users are accessing the same computer through other terminals. These users share resources and may exchange information. The operating system in this case is designed to maximize resource utilization.

Other users sit at workstations, connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers.

Recently, many varieties of handheld computers have come into fashion. These devices are mostly standalone, used singly by individual users. Some are connected to networks, either directly by wire or through wireless modems. Due to power and interface limitations they perform relatively few remote operations. Their operating systems are designed mostly for individual usability, but performance per unit of battery life is important as well.

Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have a numeric keypad, and may turn indicator lights on or off to show status, but mostly they and their operating systems are designed to run without user intervention.

System view

We can view an operating system as a resource allocator. A computer system has many resources - hardware and software - that may be required to solve a problem. The operating system acts as the manager of these resources.

An operating system can also be viewed as a control program that manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

We have no universally accepted definition of what is part of the operating system. A simple viewpoint is that it includes everything a vendor ships when you order "the operating system".

A more common definition is that the operating system is defined by what it does rather than by what it is, but even this can be tricky. The primary goal of some operating systems is convenience for the user; the primary goal of others is efficient operation of the computer system. Operating systems and computer architecture have influenced each other a great deal. To facilitate the use of the hardware, researchers developed operating systems. Users of the operating systems then proposed changes in hardware design to simplify them. In this short historical review, notice how identification of operating-system problems led to the introduction of new hardware features.

Thursday, June 18, 2009

Difference between a stand alone PC and a workstation connected to a network

  • stand-alone PC - A desktop or laptop computer that is used on its own without requiring a connection to a local area network (LAN) or wide area network (WAN). Although it may be connected to a network, it is still a stand-alone PC as long as the network connection is not mandatory for its general use. In offices throughout the 1990s, millions of stand-alone PCs were hooked up to the local network for file sharing and mainframe access. Today, computers are commonly networked in the home so that family members can share an Internet connection as well as printers, scanners and other peripherals. When the computer is running local applications without Internet access, the machine is technically a stand-alone PC.
  • Workstation in the computer network - In the days of mainframe computers, workstations were keyboards and printers, or monitors, which were linked to the central processing unit, which performed all the functions. As memory technology became available, workstations were given limited storage capacity, so that characters are stored in a memory inside the workstation until the CPU polls the workstation to dump its data into a buffer, or holding area, in the CPU. These workstations resembled stand-alone computers, but did not have the ability to run programs. Workstations have disappeared, for the most part, as stand-alone computers have become much less expensive.
    Some applications still use the term 'workstation', where there is a central computer, called a server, which handles storing and retrieving data for the stand-alone computers linked to it, thus reducing the amount of data stored on each computer. However, desktop computers have become so powerful that it is rare to have a central computer do the processing, with only input and output occurring at the workstation.

Difference between Symmetric Multiprocessing and Asymmetric Multiprocessing

  • Symmetric multiprocessing or SMP involves a multiprocessor computer architecture where two or more identical processors connect to a single shared main memory. Most common multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors.
    SMP systems allow any processor to work on any task no matter where the data for that task are located in memory; with proper operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.
  • Asymmetric multiprocessing or ASMP is a type of multiprocessing supported in DEC's VMS V.3 as well as a number of older systems including TOPS-10 and OS/360. It varies greatly from the standard processing model that we see in personal computers today. Due to the complexity and unique nature of this architecture, it was not adopted by many vendors or programmers during its brief stint between 1970 and 1980.
    Whereas a symmetric multiprocessor or SMP treats all of the processing elements in the system identically, an ASMP system assigns certain tasks only to certain processors. In particular, only one processor may be responsible for fielding all of the interrupts in the system, or perhaps even for performing all of the I/O in the system. This makes the design of the I/O system much simpler, although it tends to limit the ultimate performance of the system. Graphics cards, physics cards and cryptographic accelerators which are subordinate to a CPU in modern computers can be considered a form of asymmetric multiprocessing. SMP is extremely common in the modern computing world; when people refer to "multi-core" or "multiprocessing" they are most commonly referring to SMP.
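A small illustration of the symmetric model using POSIX threads: the program creates several identical workers and leaves it entirely to the kernel to decide which processor runs which one (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    /* Identical workers: under SMP any of them may run on any core. */
    static void *work(void *arg)
    {
        long id = (long)arg;
        printf("task %ld: scheduled on whichever processor was free\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, work, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }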

Goals of Operating Systems

Operating systems generally have the following three major goals, which they accomplish by running processes in low privilege and providing service calls that invoke the operating system kernel in high-privilege state.

  • To hide details of hardware by creating abstraction. An abstraction is software that hides lower-level details and provides a set of higher-level functions. An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction. First, the code needed to control peripheral devices is not standardized; operating systems provide subroutines called device drivers that perform operations on behalf of programs, for example input/output operations. Second, the operating system introduces new functions as it abstracts the hardware; for instance, it introduces the file abstraction so that programs do not have to deal with disks (a small example follows this list). Third, the operating system transforms the computer hardware into multiple virtual computers, each belonging to a different program; each program that is running is called a process, and each process views the hardware through the lens of abstraction. Fourth, the operating system can enforce security through abstraction.
  • To allocate resources to processes (manage resources). An operating system controls how processes (the active agents) may access resources (passive entities).

  • To provide a pleasant and effective user interface. The user interacts with the operating system through the user interface and is usually interested in the "look and feel" of the operating system. The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.
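As a small example of the first goal, the file abstraction in C: the program names a file and writes bytes, while the operating system's drivers and file system deal with sectors, heads and controllers. The file name is just an example:

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("notes.txt", "w");   /* example file name */
        if (f == NULL)
            return 1;
        /* No disk geometry in sight: the OS hides the device details. */
        fputs("written through the file abstraction\n", f);
        fclose(f);
        return 0;
    }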

Difference between client-server system and peer-to-peer system

Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server is a high-performance host that shares its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.

Client-server describes the relationship between two computer programs in which one program, the client, makes a service request to another, the server. Standard networked functions such as email exchange, web access and database access are based on the client-server model. The model has become one of the central ideas of network computing: many business applications being written today use it, as do the Internet's main application protocols, such as HTTP, SMTP, Telnet and DNS. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers, but this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing.

Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same. The most basic type of client-server architecture employs only two types of hosts, clients and servers, and is sometimes referred to as two-tier: the client acts as one tier, and the application in combination with the server acts as the other. The interaction between client and server is often described using sequence diagrams, which are standardized in the Unified Modeling Language.

Another type of network architecture is known as peer-to-peer (P2P), because each host or instance of the program can simultaneously act as both a client and a server, and because each has equivalent responsibilities and status. Both client-server and P2P architectures are in wide use today. Peer-to-peer networking is a method of delivering computer network services in which the participants share a portion of their own resources, such as processing power, disk storage, network bandwidth and printing facilities. Such resources are provided directly to other participants without intermediary network hosts or servers. Peer-to-peer network participants are providers and consumers of network services simultaneously, which contrasts with other service models, such as traditional client-server computing.
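A minimal sketch of the pattern in C with POSIX sockets: the server is passive and awaits requests on a port, and any client that connects initiates the session and receives the service. Port 5000 is an arbitrary choice, and error handling is omitted for brevity:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);          /* arbitrary example port */

        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 8);                   /* server: listen and await requests */

        for (;;) {
            int client = accept(srv, NULL, NULL);    /* a client initiated contact */
            const char reply[] = "hello from the server\n";
            write(client, reply, sizeof reply - 1);  /* provide the service */
            close(client);
        }
    }

Any generic client (for example, a netcat connection to port 5000) plays the requesting role; in a peer-to-peer design each participant would run both halves.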

Advantages of Parallel System

In terms of performance, parallel (multiprocessor) systems offer three main advantages. First, increased throughput: by dividing work among several processors, more work gets done in the same time. Second, economy of scale: multiprocessor systems can cost less than equivalent multiple single-processor systems, because the processors share peripherals, storage, and power supplies. Third, increased reliability: if functions can be distributed properly among several processors, the failure of one processor need not halt the system but only slows it down (graceful degradation).

Essential properties of the following types of OS

a. Batch - one of the earlier OS concepts developed. A batch operating system takes and performs tasks from a job pool, or batch of jobs, now known as batch jobs. The pure batch operating system may no longer exist, but it certainly isn't extinct, because no OS today runs without batch jobs being a part of it; batch jobs remain a fundamental part of modern operating systems and their working principles.



b. Time-sharing - the sharing of a computing resource among many users by multitasking. Its introduction in the 1960s, and its emergence as the prominent model of computing in the 1970s, represented a major shift in the history of computing. By allowing a large number of users to interact simultaneously with a single computer, time-sharing dramatically lowered the cost of providing computing while at the same time making the computing experience much more interactive.
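The mechanism underneath is time slicing. A toy round-robin simulation in C (the quantum and workloads are invented for the example) shows how every program makes steady progress even though only one runs at a time:

    #include <stdio.h>

    #define NPROCS  3
    #define QUANTUM 2   /* time units each program gets per turn */

    int main(void)
    {
        int remaining[NPROCS] = {5, 3, 7};   /* work left per user program */
        int left = 5 + 3 + 7;

        while (left > 0) {
            for (int p = 0; p < NPROCS; p++) {      /* round-robin order */
                if (remaining[p] == 0)
                    continue;
                int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
                remaining[p] -= run;                /* this slice's progress */
                left -= run;
                printf("program %d ran %d unit(s), %d left\n",
                       p, run, remaining[p]);
            }
        }
        return 0;
    }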



c. Real time - a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have either an event-driven or a time-sharing design: an event-driven system switches between tasks based on their priorities, while a time-sharing operating system switches tasks based on clock interrupts.
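On a POSIX system a process can ask for such deterministic, priority-driven scheduling explicitly. A minimal sketch - the priority value is an arbitrary example, and the call typically requires root privileges:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask for SCHED_FIFO: fixed priorities, no time slicing; only a
           higher-priority task preempts this one. */
        struct sched_param sp = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");   /* typically needs root */
            return 1;
        }
        puts("running under a real-time policy");
        return 0;
    }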



d. Network - a computer operating system that is designed primarily to support workstations, personal computers, and, in some instances, older terminals that are connected on a local area network (LAN). Artisoft's LANtastic, Banyan VINES, Novell's NetWare, and Microsoft's LAN Manager are examples of network operating systems. In addition, some multi-purpose operating systems, such as Windows NT and Digital's OpenVMS, come with capabilities that enable them to be described as network operating systems.
A network operating system provides printer sharing, common file system and database sharing, application sharing, and the ability to manage a network name directory, security, and other housekeeping aspects of a network.



e. Distributed - an operating system that manages a group of independent computers and makes them appear to be a single computer is known as a distributed operating system. The development of networked computers that could be linked and could communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they make up a distributed system.



f. Handheld - the handheld OS determines not only what you see onscreen, but also how you interact with the device and what kind of services you can get from it. The two dominant handheld OSes are Palm and Pocket PC, but Symbian and Linux are both up-and-coming. To help you decide which OS you want on your next handheld, here's a breakdown of these four operating systems, plus a few hardware picks to get you started.