Thursday, August 27, 2009

Resource Allocation Graph

How would you know if there is a deadlock based on a resource allocation graph?

  • A deadlock detection method based on the resource allocation graph is presented. The method differs from existing deadlock avoidance techniques in that the original directed resource allocation graph is first transformed into an undirected (0,1)-labelled graph, in which a deadlock occurs only if a cycle has been labelled alternately with 0s and 1s. The algorithm is applicable to both centralised and distributed systems; in the distributed case, deadlock detection is carried out through interprocess communication, which is essentially the exchange of 0 and 1 bits among the processes. The worst-case cost of the algorithm is O(e), which is low enough to run it in the background of the operating system.
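The labelled-graph method summarized above is specific to that paper, but the underlying principle is simpler to see directly: with single-instance resources, a cycle in the directed resource-allocation graph implies deadlock. The sketch below (a plain depth-first cycle check, not the (0,1)-labelling algorithm) illustrates that principle; the node names are made up for the example.

```python
# Toy sketch (not the labelled-graph algorithm above): with single-instance
# resources, a cycle in the directed resource-allocation graph implies deadlock.
def has_cycle(graph):
    """graph: dict mapping each node (process or resource) to the list of
    nodes its outgoing request/assignment edges point to."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:   # back edge -> cycle found
                return True
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> cycle -> deadlock.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# Same graph, but P2 requests nothing -> no cycle -> no deadlock.
safe       = {"P1": ["R2"], "R2": ["P2"], "P2": [],     "R1": ["P1"]}
```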

Thursday, August 20, 2009

Deadlock

DEADLOCK CHARACTERIZATION

A deadlock situation can arise if the following four conditions hold simultaneously in a system:

Mutual exclusion - at least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

Hold and wait - there must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.

No preemption - resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.

Circular wait - there must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Deadlock: Deadlock Detection and Recovery

If a system does not employ a protocol to ensure that deadlocks will never occur, then a detection-and-recovery scheme must be employed.

A deadlock-detection algorithm must be invoked to determine whether a deadlock has occurred.

If a deadlock is detected, the system must recover, either by terminating some of the deadlocked processes or by preempting resources from some of the deadlocked processes.

In a system that selects victims for rollback primarily on the basis of cost factors, starvation may occur: the same process may be picked as a victim again and again, so the selected process never completes its designated task.

Deadlock: Deadlock Prevention

A deadlock situation can arise if and only if four necessary conditions hold simultaneously in the system: mutual exclusion, hold and wait, no preemption, and circular wait. To prevent deadlocks, we ensure that at least one of these necessary conditions never holds.
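A standard way to make one of these conditions never hold is to attack circular wait by imposing a total order on all locks and always acquiring them in that order. The sketch below (an illustrative example, not from the text; the helper names are made up) shows two threads that request the same two locks in opposite textual orders, yet cannot deadlock because acquisition is normalized to the global order.

```python
import threading

# Breaking the circular-wait condition: impose a global order on locks and
# always acquire them in that order. The ordering by id() is illustrative.
lock_a = threading.Lock()
lock_b = threading.Lock()
LOCK_ORDER = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    """Acquire locks sorted by their global rank, so no cycle of waits can form."""
    for lock in sorted(locks, key=lambda l: LOCK_ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

results = []

def worker(name, first, second):
    # The two workers pass the locks in different orders, but
    # acquire_in_order normalizes the order, so deadlock is impossible.
    acquire_in_order(first, second)
    results.append(name)
    release_all(first, second)

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
```

Without the normalization, t1 holding lock_a while t2 holds lock_b would satisfy all four conditions at once.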

Another method for avoiding deadlocks that is less stringent than the prevention algorithms is to require a priori information about how each process will utilize the resources. The banker’s algorithm, for example, needs to know the maximum number of instances of each resource class that may be requested by each process. Using this information, we can define a deadlock-avoidance algorithm.
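The core of the banker’s algorithm is its safety check: given current allocations and each process’s declared maximum demand, is there some order in which every process can still finish? A minimal sketch of that check (the sample numbers are a common textbook example, not from the text):

```python
# Banker's algorithm safety check: a state is safe if some sequence lets
# every process acquire its remaining need, finish, and return its resources.
def is_safe(available, allocation, maximum):
    n = len(allocation)                       # number of processes
    # need[i] = maximum demand of process i minus what it already holds
    need = [[m - a for m, a in zip(maximum[i], allocation[i])]
            for i in range(n)]
    work = list(available)                    # resources currently free
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Process i can run to completion and return its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# Example: 5 processes, 3 resource classes.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
```

On this state the check succeeds (one safe sequence is P1, P3, P4, P0, P2); with `available = [0, 0, 0]` no process could proceed and the state would be unsafe.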

Deadlock: Methods for handling deadlocks

Principally, there are three methods for dealing with deadlocks:

1. Use some protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.

2. Allow the system to enter a deadlock state, detect it, and then recover.

3. Ignore the problem altogether, and pretend that deadlocks never occur in the system. This solution is the one used by most operating systems, including UNIX.

Thursday, August 13, 2009

Thread Scheduling

Threads can be scheduled, and the threads library provides several facilities to handle and control the scheduling of threads. It also provides facilities to control the scheduling of threads during synchronization operations such as locking a mutex. Each thread has its own set of scheduling parameters. These parameters can be set using the thread attributes object before the thread is created. The parameters can also be dynamically set during the thread's execution.
Controlling the scheduling of a thread can be a complicated task. Because the scheduler handles all threads systemwide, the scheduling parameters of a thread interact with those of all other threads in the process and in other processes. The following facilities are the first to be used if you want to control the scheduling of a thread.

The threads library allows the programmer to control the execution scheduling of the threads in the following ways:
By setting scheduling attributes when creating a thread
By dynamically changing the scheduling attributes of a created thread
By defining the effect of a mutex on the thread's scheduling when creating a mutex (known as synchronization scheduling)
By dynamically changing the scheduling of a thread during synchronization operations (known as synchronization scheduling)
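The facilities listed above come from POSIX-style threads libraries; as a language-neutral illustration, the toy model below (all names hypothetical, not a real threads API) shows the first two modes: scheduling attributes fixed when the thread is created, and attributes changed dynamically afterward, with a priority scheduler choosing among ready threads.

```python
from dataclasses import dataclass

# Toy model (hypothetical names) of two ways to control thread scheduling:
# attributes set at creation time, or changed dynamically later.
@dataclass
class ToyThread:
    name: str
    priority: int                 # set via a creation-time attributes object

    def set_priority(self, new_priority):
        """Dynamically change the scheduling attributes of a created thread."""
        self.priority = new_priority

def pick_next(ready_queue):
    """A priority scheduler: run the ready thread with the highest priority."""
    return max(ready_queue, key=lambda t: t.priority)

a = ToyThread("a", priority=10)
b = ToyThread("b", priority=20)
first = pick_next([a, b]).name        # "b" wins under creation-time attributes
a.set_priority(30)                    # dynamic change to a created thread
second = pick_next([a, b]).name       # now "a" wins
```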

Monday, August 10, 2009

CPU Scheduling Algorithms

CPU-I/O Burst Cycle
The success of CPU scheduling depends on the following observed property of processes: Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst. That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the last CPU burst will end with a system request to terminate execution, rather than with another I/O burst. The durations of these CPU bursts have been extensively measured. Although they vary greatly by process and by computer, they tend to have a similar frequency curve.
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler or CPU scheduler. The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
Preemptive Scheduling
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.
In circumstances 1 and 4, there is no choice in terms of scheduling. A new process must be selected for execution. There is a choice, however, in circumstances 2 and 3. When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is nonpreemptive; otherwise, the scheduling scheme is preemptive.
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, given that it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
Scheduling Criteria
1. CPU Utilization - keep the CPU as busy as possible
2. Throughput - number of processes completed per time unit
3. Turnaround time - interval between time of submission to time of completion.
4. Waiting time - sum of the periods spent waiting in the ready queue.
5. Response time - time of submission of the request until the first response is produced.
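Two of these criteria, waiting time and turnaround time, are easy to compute for a nonpreemptive FCFS schedule where all jobs arrive at time 0. A small sketch (the burst times are illustrative):

```python
# Computing two of the criteria above for a nonpreemptive FCFS schedule
# in which every process arrives at time 0.
def fcfs_metrics(bursts):
    """Return (average waiting time, average turnaround time)."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)             # time spent waiting in the ready queue
        clock += burst
        turnaround.append(clock)          # arrival at t=0, so turnaround = completion
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

avg_wait, avg_turnaround = fcfs_metrics([24, 3, 3])
# Waits are 0, 24, 27 -> average 17; turnarounds are 24, 27, 30 -> average 27.
```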
CPU scheduling is the task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher.
First-come, First-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes. Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority-scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.
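The contrast between FCFS and SJF can be seen by running the same nonpreemptive schedule on the same bursts in two orders: arrival order, and sorted by burst length. The numbers below are illustrative.

```python
# SJF is just the nonpreemptive schedule run on jobs sorted by burst length;
# on the bursts below it beats arrival (FCFS) order.
def average_waiting_time(bursts):
    clock, total_wait = 0, 0
    for burst in bursts:
        total_wait += clock              # each job waits for all jobs before it
        clock += burst
    return total_wait / len(bursts)

bursts = [6, 8, 7, 3]
fcfs_wait = average_waiting_time(bursts)          # arrival order: 10.25
sjf_wait  = average_waiting_time(sorted(bursts))  # shortest job first: 7.0
```

Moving the short jobs forward shortens everyone else’s wait more than it lengthens the long jobs’, which is the intuition behind SJF’s optimality.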
Round-robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
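The RR mechanics described above can be simulated directly with a queue; preemption is the step that puts an unfinished process back at the tail. A sketch (burst times and quantum are illustrative):

```python
from collections import deque

# Round-robin simulation: each process runs for at most `quantum` time units,
# then is preempted and moved to the tail of the ready queue.
def round_robin_waiting(bursts, quantum):
    """Average waiting time for processes that all arrive at t=0."""
    queue = deque((i, burst) for i, burst in enumerate(bursts))
    clock = 0
    completion = {}
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))    # preempted, back to the tail
        else:
            completion[i] = clock                 # finished within its slice
    waits = [completion[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(bursts)

avg = round_robin_waiting([24, 3, 3], quantum=4)
# Waits are 6, 4, 7 -> average 17/3, about 5.67.
```

Rerunning with a very large quantum reproduces the FCFS average, illustrating the degeneration the paragraph above describes.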
The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.
Multilevel queue algorithms allow different algorithms to be used for various classes of processes. A common configuration is a foreground (interactive) queue, which uses RR scheduling, and a background (batch) queue, which uses FCFS scheduling. Multilevel feedback queues allow processes to move from one queue to another.
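The foreground/background configuration can be sketched as a scheduler that serves the RR queue whenever it is nonempty and lets the FCFS batch queue run only when it is not (fixed priority between the queues; job names and numbers are illustrative):

```python
from collections import deque

# Multilevel queue sketch: foreground (interactive) jobs run round-robin;
# the background (batch) queue runs FCFS, and only when the foreground is empty.
def multilevel_schedule(foreground, background, quantum):
    """Return the order in which (name, burst) jobs receive the CPU."""
    fg = deque(foreground)
    bg = deque(background)
    trace = []
    while fg or bg:
        if fg:                                   # foreground has strict priority
            name, remaining = fg.popleft()
            run = min(quantum, remaining)
            trace.append(name)
            if remaining > run:
                fg.append((name, remaining - run))   # RR: back to the tail
        else:                                    # background runs FCFS
            name, _ = bg.popleft()
            trace.append(name)                   # batch job runs to completion
    return trace

trace = multilevel_schedule([("i1", 4), ("i2", 2)], [("b1", 9)], quantum=2)
# The interactive jobs interleave under RR; b1 runs only after both finish.
```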
Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.
Operating systems supporting threads at the kernel level must schedule threads, not processes, for execution. This is the case with Solaris 2 and Windows 2000, both of which schedule threads using a preemptive, priority-based scheduling algorithm, including support for real-time threads. The Linux process scheduler also uses a priority-based algorithm with real-time support. The scheduling algorithms for these three operating systems typically favor interactive processes over batch and CPU-bound processes.