COMP 3400 Lecture 7: Deadlock
additional resource: Modern Operating Systems (2nd Ed), Tanenbaum, Prentice Hall, 2001.
Note: All the problems, solutions, and algorithms in this lecture apply equally to both processes and threads.
Follow the time sequence in this scenario: process P1 acquires resource R1; process P2 acquires resource R2; P1 then requests R2 and blocks; P2 then requests R1 and blocks. Neither process can ever run again.
Tanenbaum's definition is a good one: A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause.
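To make the scenario concrete, here is a small pthreads sketch (the thread and mutex names are arbitrary, chosen for illustration): each thread takes one lock and then requests the other, so with unlucky timing each ends up waiting for an event only the other can cause. Compile with cc -pthread.

    /* Illustration: two threads acquire the same two locks in opposite
     * order.  If thread_a gets lock_1 and thread_b gets lock_2 before
     * either requests its second lock, both block forever: deadlock. */
    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock_1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_2 = PTHREAD_MUTEX_INITIALIZER;

    void *thread_a(void *arg) {
        pthread_mutex_lock(&lock_1);   /* holds R1                        */
        pthread_mutex_lock(&lock_2);   /* requests R2 -- may block forever */
        puts("thread_a got both locks");
        pthread_mutex_unlock(&lock_2);
        pthread_mutex_unlock(&lock_1);
        return NULL;
    }

    void *thread_b(void *arg) {
        pthread_mutex_lock(&lock_2);   /* holds R2                        */
        pthread_mutex_lock(&lock_1);   /* requests R1 -- may block forever */
        puts("thread_b got both locks");
        pthread_mutex_unlock(&lock_1);
        pthread_mutex_unlock(&lock_2);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }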
One approach is simply to ignore deadlocks. This is justifiable based on risk analysis: if the costs outweigh the benefits, there is no reason to do it. We'll see shortly that the costs are high, and for a general-purpose OS the benefits are low. Responsibility then shifts to those who implement software development tools (e.g. Oracle) as well as the programmers who use them.
If OS designers decide to tackle deadlocks, the major approaches are:
1. deadlock prevention
2. deadlock avoidance
3. deadlock detection and recovery
The last two approaches both require the OS to "know" what a deadlock "looks like." Deadlocks can be reasonably modeled using a system-wide Resource Allocation Graph (RAG).
Note: I'll restrict the system to have only nonsharable resources of which there is a single instance each (e.g. a system with one printer and one tape drive). This illustrates the concepts with a minimum of detail. There are algorithms that do not require these restrictions.
A resource allocation graph actually includes both allocations and requests. The graph has the following components:
- process nodes, one per process
- resource nodes, one per resource
- request edges, directed from a process to the resource it is waiting for
- allocation edges, directed from a resource to the process that currently holds it
"Draw" a graph representing the global system state, where Pi are all system processes and Ri are all the system resource. "Draw" the current allocations and requests.
The system contains a deadlock if the graph contains any cycles. All processes in a cycle are said to be deadlocked. The figure above shows two processes and two resources, but the cycle could involve many more than two processes.
The graph is obviously not really drawn, but appropriate graph data structures and algorithms can easily be developed to detect deadlocks or potential deadlocks.
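One possible representation (a sketch, not the only way to do it): number the processes and the single-instance resources as nodes of a directed graph, keep request and allocation edges in an adjacency matrix, and run a depth-first search looking for a cycle. The sizes and edges below are just the two-process, two-resource example.

    /* Sketch of deadlock detection on a single-instance RAG.
     * Nodes 0..NP-1 are processes, NP..NP+NR-1 are resources.
     * edge[u][v] = 1 means a request edge (process u -> resource v)
     * or an allocation edge (resource u -> process v).
     * Any directed cycle means the processes on it are deadlocked. */
    #include <stdio.h>
    #include <string.h>

    #define NP 2              /* processes P0, P1 (example sizes)        */
    #define NR 2              /* resources R0, R1                        */
    #define N  (NP + NR)

    static int edge[N][N];
    static int state[N];      /* 0 = unvisited, 1 = on DFS stack, 2 = done */

    static int dfs(int u) {
        state[u] = 1;
        for (int v = 0; v < N; v++) {
            if (!edge[u][v]) continue;
            if (state[v] == 1) return 1;            /* back edge: cycle */
            if (state[v] == 0 && dfs(v)) return 1;
        }
        state[u] = 2;
        return 0;
    }

    static int has_cycle(void) {
        memset(state, 0, sizeof state);
        for (int u = 0; u < N; u++)
            if (state[u] == 0 && dfs(u)) return 1;
        return 0;
    }

    int main(void) {
        /* The two-process example: P0 holds R0 and requests R1,
         * P1 holds R1 and requests R0. */
        edge[0][NP + 1] = 1;  /* P0 -> R1 (request)    */
        edge[NP + 0][0] = 1;  /* R0 -> P0 (allocation) */
        edge[1][NP + 0] = 1;  /* P1 -> R0 (request)    */
        edge[NP + 1][1] = 1;  /* R1 -> P1 (allocation) */
        printf("deadlock: %s\n", has_cycle() ? "yes" : "no");
        return 0;
    }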
As stated above, this requires denying one or more of the four conditions for deadlock (mutual exclusion, hold and wait, no preemption, circular wait).
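For example, the circular-wait condition can be denied by imposing a single global ordering on resources and having every process acquire them in that order. A small sketch of the idea for two pthread mutexes (ordering by address is an arbitrary but consistent choice):

    /* Sketch: deny circular wait by always acquiring mutexes in one
     * global order (here, lowest address first).  If every thread obeys
     * the order, no cycle of waiting threads can form. */
    #include <pthread.h>
    #include <stdint.h>

    void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
        if ((uintptr_t)a > (uintptr_t)b) {      /* put them in global order */
            pthread_mutex_t *t = a; a = b; b = t;
        }
        pthread_mutex_lock(a);                  /* "lower" lock first ...   */
        pthread_mutex_lock(b);                  /* ... then the "higher" one */
    }

    void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
        pthread_mutex_unlock(a);                /* unlock order is irrelevant */
        pthread_mutex_unlock(b);                /* to deadlock                */
    }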
The strategy here is for the OS to deny a resource request that could lead to deadlock. There are a number of strategies and algorithms for doing this. They focus on keeping the system in a safe state: a state in which there is at least one deadlock-free scheduling sequence to completion, even if all processes request all their resources at once.
One is to maintain a RAG supplemented with an additional edge type: the claim edge. A claim edge represents a future request by a process for a resource. If these are added to the RAG, then a request can be evaluated for its safety: if granting it would result in a cycle, it should not be granted. The example below shows P2's claim on R2 and what would occur were it allocated.
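In code, the check could look something like the sketch below, which assumes the edge[][] matrix and has_cycle() routine from the detection sketch above (p is a process node, r is a resource node index such as NP + 1 for R1): tentatively convert the request/claim into an allocation edge, test for a cycle, and roll the change back if granting would be unsafe.

    /* Sketch: would granting resource node r to process node p create a
     * cycle?  Temporarily replace the request/claim edge p -> r with an
     * allocation edge r -> p, test, and undo if unsafe. */
    static int safe_to_grant(int p, int r) {
        edge[p][r] = 0;          /* drop the request/claim edge p -> r  */
        edge[r][p] = 1;          /* tentatively allocate: r -> p        */
        int unsafe = has_cycle();
        if (unsafe) {            /* roll back: keep it as a request     */
            edge[r][p] = 0;
            edge[p][r] = 1;
        }
        return !unsafe;
    }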
Dijkstra's Banker's algorithm could be employed. A banker (the OS) has several customers (processes) which ask for loans (resources) from their lines of credit (maximum resources required for completion). Once a customer has depleted its line of credit, it pays off the entire loan (releases all resources). The combined lines of credit are greater than the amount of money on hand in the bank (available resources). The banker can grant a loan request only if the remaining money on hand is enough to cover all the possible future loan requests in some sequence. This is illustrated below in an example for one resource type of which multiple units are available.
[table: three snapshots of per-process loans (has/max) -- start: 8 units free | safe: 4 units free | unsafe: 3 units free]
This diagram shows three system snapshots: the starting state (8 units free), a safe state (4 units free) from which some order of grants lets every process run to completion, and an unsafe state (3 units free) from which no order of grants can guarantee that every process finishes. Note that an unsafe state is not yet a deadlock; it only means the banker can no longer guarantee completion.
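The banker's safety test itself is easy to sketch for a single resource type. The function names, array layout, and has/max numbers below are made up for illustration (they are not the values from the table above): a state is safe if the processes can be finished one at a time, each one's remaining need being met from the units currently free plus those released by earlier finishers.

    /* Sketch of the banker's safety test for a single resource type. */
    #include <stdio.h>

    #define NPROC 4

    int is_safe(int has[], int max[], int free_units) {
        int done[NPROC] = {0};
        for (int finished = 0; finished < NPROC; ) {
            int progress = 0;
            for (int i = 0; i < NPROC; i++) {
                if (!done[i] && max[i] - has[i] <= free_units) {
                    free_units += has[i];   /* i finishes, releases all its units */
                    done[i] = 1;
                    finished++;
                    progress = 1;
                }
            }
            if (!progress) return 0;        /* nobody can finish: unsafe */
        }
        return 1;                           /* everyone can finish: safe */
    }

    int main(void) {
        /* Hypothetical snapshot: has/max per process, 2 units free. */
        int has[NPROC] = {1, 1, 2, 4};
        int max[NPROC] = {6, 5, 4, 7};
        printf("state is %s\n", is_safe(has, max, 2) ? "safe" : "unsafe");
        return 0;
    }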
Pretty much covered above: Build/maintain a RAG and look for cycles.
Recovery methods are unsavory and fall into two categories: terminating (killing) one or more processes in the cycle, or preempting resources from one or more processes in the cycle.
The major policy decision is which process(es) to terminate or preempt. This decision is based on process properties (which are stored in the PCB and other OS data structures).
One possibility is rollback: "undo" the actions of a process until it reaches a pre-deadlock state, then resume it later (after the potential for deadlock has passed) from that state.