C SC 340 Lecture 6: Process Synchronization
additional resource: Modern Operating Systems (2nd Ed), Tanenbaum, Prentice Hall, 2001.
Note: All the problems, solutions, and algorithms in this lecture apply equally to both processes and threads.
Illustrated with print spooler problem
A print spooler is software that permits "background" printing. Since printers are slower than just about everything, a user should be able to submit a print request, then continue working. The term spooler is from the acronym SPOOL: Simultaneous Peripheral Operations On-Line.
Suppose the print spooler uses a circular queue of file names as its data structure. The queue has front and rear indexes (to indicate the first and last file names in the queue). These data structures are shared by all processes that call the spooler's addToPrintQueue() method - a partial version is shown here:
void addToPrintQueue(String fileName) {
    int nextSlot = (rear+1) % queue.length;
    queue[nextSlot] = fileName;
    rear = nextSlot;
}

Files are removed from the queue by another process called the print daemon. We will not consider the print daemon here.
Example run 1: beginning configuration of spooler's shared memory is this
front | rear | queue[0]   | queue[1]   | queue[2]   | queue[3] | queue[4]
  0   |  2   | myfile.txt | silly.java | resume.doc |          |
Process 1 (fileName is osh.c) runs to completion, then Process 2 (fileName is index.html) runs; the numbers (1)-(6) give the order in which the statements execute:

Process 1:
    (1) int nextSlot = (rear+1) % queue.length;
    (2) queue[nextSlot] = fileName;
    (3) rear = nextSlot;

Process 2:
    (4) int nextSlot = (rear+1) % queue.length;
    (5) queue[nextSlot] = fileName;
    (6) rear = nextSlot;
Resulting (correct) configuration:

front | rear | queue[0]   | queue[1]   | queue[2]   | queue[3] | queue[4]
  0   |  4   | myfile.txt | silly.java | resume.doc | osh.c    | index.html
Example run 2: beginning configuration of spooler's shared memory is the same:

front | rear | queue[0]   | queue[1]   | queue[2]   | queue[3] | queue[4]
  0   |  2   | myfile.txt | silly.java | resume.doc |          |
This time Process 1 (fileName is osh.c) is preempted after computing nextSlot, and Process 2 (fileName is index.html) runs to completion in between; again the numbers (1)-(6) give the execution order:

Process 1:
    (1) int nextSlot = (rear+1) % queue.length;
    (5) queue[nextSlot] = fileName;
    (6) rear = nextSlot;

Process 2:
    (2) int nextSlot = (rear+1) % queue.length;
    (3) queue[nextSlot] = fileName;
    (4) rear = nextSlot;
Resulting (incorrect) configuration:

front | rear | queue[0]   | queue[1]   | queue[2]   | queue[3] | queue[4]
  0   |  3   | myfile.txt | silly.java | resume.doc | osh.c    |

Both processes computed nextSlot = 3, so Process 1's store of osh.c overwrote index.html, which is now lost.
Moral of the story:
When the outcome of an execution differs depending on the particular sequence in which usage of the shared variable occurs, we call the situation a race condition. Opportunities for these abound in software design (especially database design), and in OS design itself since several OS processes often work concurrently on the same kernel data structure.
Illustrated with producer-consumer problem
For a more subtle example, let's look again at the producer-consumer problem.
Suppose we changed the implementation of MessageQueue to use an array rather than a Vector. The array is of large but unspecified size; assume it is effectively unbounded.
The resulting modified code is:
import java.util.*;

public class MessageQueue {
    public MessageQueue() {
        size = 0;
        queue = new Object[UNBOUNDED];
    }

    public void send(Object item) {
        queue[size] = item;
        size++;
    }

    public Object receive() {
        Object item;
        if (size == 0) {
            return null;
        } else {
            size--;
            item = queue[size];
            return item;
        }
    }

    private Object[] queue;
    private int size;
}

Now, focus on manipulation of the shared variable size: the send() statement size++; and the receive() statement size--;.
In a compiled language like C or C++, the statement size++; would be compiled into an assembly language statement sequence something like this:
LOAD  $1, size
ADD   $1, 1
STORE $1, size

($1 is general purpose register 1).
Similarly, size--; would be compiled into something like:
LOAD     $1, size
SUBTRACT $1, 1
STORE    $1, size
Consider this execution scenario:
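One damaging scenario: the producer is preempted partway through size++ (after its LOAD and ADD but before its STORE), the consumer then runs its entire size-- sequence, and the producer's store finally clobbers the result. The interleaving can be replayed deterministically in Java; the register variables below are stand-ins for $1, and the initial value 5 is an assumption for illustration:

```java
public class LostUpdate {
    static int size = 5;   // shared variable; initial value assumed for illustration

    public static void main(String[] args) {
        // Producer begins size++ ...
        int producerReg = size;           // LOAD     $1, size   (register holds 5)
        producerReg = producerReg + 1;    // ADD      $1, 1      (register holds 6)
        // --- context switch: consumer runs its entire size-- sequence ---
        int consumerReg = size;           // LOAD     $1, size   (register holds 5)
        consumerReg = consumerReg - 1;    // SUBTRACT $1, 1      (register holds 4)
        size = consumerReg;               // STORE    $1, size   (size is now 4)
        // --- context switch back: producer finishes ---
        size = producerReg;               // STORE    $1, size   (size is now 6!)
        System.out.println(size);         // prints 6; the correct value is 5
    }
}
```

One send() and one receive() should leave size unchanged at 5, but this interleaving leaves it at 6: the consumer's update is lost.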
We'll look at several possible means of assuring a process/thread mutually exclusive access to its critical section. A good solution must meet these criteria:

Mutual exclusion: at most one process may be in its critical section at any instant.
Progress: a process that is not in (or trying to enter) its critical section must not block another process from entering.
Bounded waiting (no starvation): a process that wants to enter its critical section must eventually be able to do so.
In the sections to follow, we will describe and evaluate several possible solutions to the mutual exclusion problem. They include: disabling interrupts, strict alternation, Peterson's solution, the hardware TSL instruction, semaphores, and monitors.
One way to assure that a process cannot be interrupted while in its critical section is to disable system interrupts upon entry and re-enable them upon exit. This is appealing in its simplicity but is not viable because it gives user processes too much power -- suppose the critical section gets caught in an infinite loop?
An effect similar to this is achieved through nonpreemptive kernels. This is where a process cannot be preempted while running in kernel mode. Windows XP does this, as did Linux until kernel version 2.6.
Assures mutual exclusion between two processes, but does not work for more than two.
Define a shared int (or boolean) variable called turn, initialized to either 0 or 1. This keeps track of whose turn it is to enter the critical section. A process awaits its turn using the technique of busy waiting -- testing the value of turn in a continuous loop.
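The turn-based scheme can be sketched as a runnable Java program, with two threads standing in for the two processes. The turn variable is declared volatile (an assumption Java needs so each thread sees the other's update):

```java
public class StrictAlternation {
    static volatile int turn = 0;  // whose turn it is; volatile so updates are visible
    static int counter = 0;        // shared variable updated in the critical section

    static void process(int i) {
        for (int k = 0; k < 10000; k++) {
            while (turn != i)
                ;                  // busy wait until it is my turn
            counter++;             // critical section
            turn = 1 - i;          // hand the turn to the other process
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread p0 = new Thread(() -> process(0));
        Thread p1 = new Thread(() -> process(1));
        p0.start(); p1.start();
        p0.join(); p1.join();
        System.out.println(counter); // 20000: no lost updates
    }
}
```

Note the cost: the two threads are forced into strict lock-step, so neither can enter its critical section twice in a row even if the other is busy elsewhere.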
Define a shared data structure: a boolean array called, say, ready, with two elements (one per process). ready[i] indicates whether or not process i is ready to enter its critical section.
/* Process i code.  Assume j (= 1-i) is the other process */
while (true) {
    ready[i] = true;
    turn = j;
    while (ready[j] && turn == j)
        ;                       // busy wait
    criticalSection();
    ready[i] = false;
    nonCritical();
}
This works in the crucial scenario that both processes attempt to enter their critical section at about the same time: each sets turn to the other's ID, but only the second such assignment will "stick"; the first is overwritten. Thus turn ends up holding the ID of the process that assigned first, and that process gets to enter its critical section. Think about it...
Consider the while condition from its inverse: process i will spin in the while
loop until either it is i's turn or j is not ready
(e.g. j is in its nonCritical section).
Briefly addressing the criteria for critical sections:
Mutual exclusion: Since turn is not modified
inside the critical section and turn cannot simultaneously have the
values i and j, the turn==j condition alone would ensure mutually
exclusive access to the critical section (in strict alternation).
Progress: The ready array is needed
to assure the progress requirement. In other words, process i should be able
to enter its critical section if it is ready and process j is not ready,
regardless of whose turn it is. The ready[j] term in the while condition assures this.
Starvation: If i is in the critical section and j wants to be, then j is assured
of getting in the critical section before i can have it again. Explanation: In this situation, j is spinning
in its while loop having set ready[j] = true;. In the worst case, i will continue to run after finishing its critical section
then perform its non-critical section and become ready to use its critical section again. Just before
its while loop, i will set turn = j;. Then i will
begin spinning in its while loop, because j is ready and it is j's turn. Eventually, j gets to run again
and in the next spin of its while loop the condition will be false because it is j's turn. Then j
can enter its critical section.
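The whole algorithm can be exercised as a runnable Java sketch. The ready flags and turn are volatile (an assumption the Java memory model requires so the threads actually see each other's writes), and the final count shows that no updates to the shared counter are lost:

```java
public class Peterson {
    // shared coordination variables; volatile so each thread sees the other's writes
    static volatile boolean ready0 = false, ready1 = false;
    static volatile int turn = 0;
    static int counter = 0;        // shared; protected by the algorithm

    static void enter(int i) {
        int j = 1 - i;
        if (i == 0) ready0 = true; else ready1 = true;   // ready[i] = true
        turn = j;
        while ((i == 0 ? ready1 : ready0) && turn == j)
            ;                                            // busy wait
    }

    static void exit(int i) {
        if (i == 0) ready0 = false; else ready1 = false; // ready[i] = false
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable r0 = () -> { for (int k = 0; k < 20000; k++) { enter(0); counter++; exit(0); } };
        Runnable r1 = () -> { for (int k = 0; k < 20000; k++) { enter(1); counter++; exit(1); } };
        Thread t0 = new Thread(r0), t1 = new Thread(r1);
        t0.start(); t1.start();
        t0.join(); t1.join();
        System.out.println(counter); // 40000: mutual exclusion held
    }
}
```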
Nice solution, but it works only for two processes, and it uses busy-waiting, a nuisance and CPU-time waster that we'll deal with later.
This solution requires the hardware to have a machine instruction TSL (Test-and-Set-Lock). This machine instruction has the format:
TSL RX, LOCK
The shared variable acts as a lock:
TRY:  TSL  REGISTER, LOCK   # copy lock value into register and set LOCK to 1
      CMP  REGISTER, ZERO   # compare value in register to 0
      BNE  TRY              # if not equal to 0, loop back to try again
      # Critical section goes here
      MOV  LOCK, ZERO       # sets LOCK to 0
      # Non-critical section goes here
      B    TRY              # go back jack and do it again

Note that if LOCK was initially 1, the TSL just sets it to its same value.
Expressing it in Java
class Lock {
    private boolean lockVar = false;
    public boolean get() { return lockVar; }
    public void set(boolean val) { lockVar = val; }
}

// This is an indivisible operation
boolean testAndSet(Lock lok) {
    boolean result = lok.get();
    lok.set(true);
    return result;
}

// code that uses shared Lock variable lock
while (true) {
    while (testAndSet(lock))
        ;
    criticalSection();
    lock.set(false);
    nonCritical();
}
A fast solution that works for any number of processes, but starvation is possible, it uses busy-waiting, and it requires machine support. It is also complicated to use.
The starvation issue can be addressed using, in addition to the lock, a boolean array with one element per process. The array element indicates whether or not the process is waiting for the lock. This does not affect the TestAndSet() operation but makes the client code for obtaining and releasing the lock even more complex.
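Modern Java exposes an indivisible test-and-set through java.util.concurrent.atomic.AtomicBoolean.getAndSet(), so the lock code above can be rendered as a runnable spinlock sketch (the demo in main is an illustration, not part of the lock itself):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean lockVar = new AtomicBoolean(false);
    static int counter = 0;   // shared variable the lock protects in the demo

    public void acquire() {
        while (lockVar.getAndSet(true))
            ;                 // spin: getAndSet is the indivisible test-and-set
    }

    public void release() {
        lockVar.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        final SpinLock lock = new SpinLock();
        Runnable r = () -> {
            for (int k = 0; k < 50000; k++) {
                lock.acquire();
                counter++;    // critical section
                lock.release();
            }
        };
        Thread a = new Thread(r), b = new Thread(r);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // 100000
    }
}
```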
This technique for mutual exclusion was developed in the 1960s by Edsger Dijkstra (who passed away in summer 2002). A semaphore is a shared variable that, once initialized, can be accessed only through operations called P() and V(). These are initials of Dutch words, so most call them by different names: P() is also known as acquire() or down() and V() is also known as release() or up().
We will use acquire() and release() terminology.
Semaphores come in two flavors: binary and counting. The former is used to assure mutual exclusion, the latter to permit up to a given fixed number of processes into a code section simultaneously.
The basic usage of a binary semaphore is as follows:
Semaphore mutex;  // shared among all processes
. . .
while (true) {
    mutex.acquire();
    criticalSection();
    mutex.release();
    nonCritical();
}
A newly-created semaphore is normally initialized to the maximum number of processes which should be simultaneously allowed into the code it protects. For a binary semaphore, this is 1.
A semaphore can be used to coordinate any number of processes, and semaphores do not use busy-waiting. In order for semaphores to work, the acquire() and release() operations must be indivisible.
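The java.util.concurrent.Semaphore class provides exactly these acquire() and release() operations (acquire() can throw InterruptedException). A minimal mutual-exclusion sketch, with two threads hammering a shared counter under a binary semaphore:

```java
import java.util.concurrent.Semaphore;

public class MutexDemo {
    static final Semaphore mutex = new Semaphore(1); // binary semaphore
    static int counter = 0;                          // shared variable

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int k = 0; k < 50000; k++) {
                try {
                    mutex.acquire();   // enter critical section
                    counter++;         // safe: at most one thread here at a time
                    mutex.release();   // leave critical section
                } catch (InterruptedException e) {
                    return;
                }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // 100000: no lost updates
    }
}
```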
The components of a semaphore could be expressed using Java class notation:
public class Semaphore {
    private int value;
    private PCBList blockList;

    public Semaphore(int init) {
        value = init;
        blockList = new PCBList();
    }

    public void acquire() {
        value--;
        if (value < 0) {
            blockList.add(thisProcess);
            thisProcess.block();
        }
    }

    public void release() {
        value++;
        if (value <= 0) {
            Process p = blockList.remove();
            p.wakeup();
        }
    }
}
In acquire(), variable thisProcess refers to the currently running process. Assume PCBList is a collection structure for holding a list of Process Control Blocks.
The normal operation of a binary semaphore is as follows: it is initialized to 1; the first process to call acquire() decrements value to 0 and proceeds; any process that calls acquire() while value is 0 or less drives value negative and is blocked; each release() increments value and, if any processes are blocked, wakes one of them.
The discussion below on classical problems includes a producer-consumer solution that uses both a binary semaphore and two counting semaphores. The counting semaphores are used to maintain the empty/full status of the buffer.
Semaphores meet all the criteria for good mutual exclusion.
The knock against semaphores is that they are primitive and unstructured, and have to be used very carefully. A single misplaced call to acquire() or release(), particularly when more than one semaphore is in use (see producer-consumer solution below), results in erroneous operation or complete deadlock!
In the 1970s, Tony Hoare and Per Brinch Hansen developed a structured synchronization mechanism called monitors.
A monitor is a language structure that resembles a class. It is a module that consists of variables, procedures and special constructs called conditions. It is tightly encapsulated: procedures may access only their local variables, monitor variables and conditions; monitor clients may access only its procedures.
Every procedure defined in a monitor inherently defines a critical section. In other words, only one process can be active in a monitor at a given instant.
Here is an outlined pseudocode example (keywords in bold):
monitor BoundBuffer {
    int i;
    condition x, y;

    procedure insert() { . . . }
    procedure remove() { . . . }

    BoundBuffer() { . . . }
}

Every condition variable has two associated operations, wait() and signal().
As an example, the bounded (blocking) producer-consumer could be implemented using two condition variables, one for a full buffer and another for an empty buffer. Insert and remove could be implemented as monitor procedures something like this:
monitor BoundBuffer {
    final int CAPACITY;
    Buffer buffer;
    condition empty, full;

    BoundBuffer(int capacity) {
        CAPACITY = capacity;
        buffer = new Buffer(CAPACITY);
    }

    // called only by producer
    procedure insert(Object item) {
        if (buffer.size() == CAPACITY) {
            full.wait();
        }
        buffer.add(item);
        if (buffer.size() == 1) {
            empty.signal();
        }
    }

    // called only by consumer
    Object procedure remove() {
        if (buffer.size() == 0) {
            empty.wait();
        }
        Object item = buffer.remove();
        if (buffer.size() == CAPACITY-1) {
            full.signal();
        }
        return item;
    }
} // end of monitor
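Java's synchronized methods with wait()/notifyAll() behave much like a monitor: every Java object has one implicit lock and one implicit condition queue. Because the two separate empty and full conditions collapse into that single queue (and Java permits spurious wakeups), this sketch re-checks its wait condition in a while loop and uses notifyAll():

```java
public class MonitorBuffer {
    private final Object[] slots;
    private int size = 0, in = 0, out = 0;

    public MonitorBuffer(int capacity) {
        slots = new Object[capacity];
    }

    // synchronized: only one thread can be active in the "monitor" at a time
    public synchronized void insert(Object item) throws InterruptedException {
        while (size == slots.length)
            wait();                       // buffer full: wait on the implicit condition
        slots[in] = item;
        in = (in + 1) % slots.length;
        size++;
        notifyAll();                      // wake any waiting consumer
    }

    public synchronized Object remove() throws InterruptedException {
        while (size == 0)
            wait();                       // buffer empty
        Object item = slots[out];
        out = (out + 1) % slots.length;
        size--;
        notifyAll();                      // wake any waiting producer
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorBuffer b = new MonitorBuffer(2);
        b.insert("a");
        b.insert("b");
        System.out.println(b.remove()); // a
        System.out.println(b.remove()); // b
    }
}
```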
You are familiar with the problem as well as a couple solutions. Here I present the essential parts of a blocking solution that uses semaphores:
// Create semaphores in a place global to both producer and consumer...
Semaphore mutex = new Semaphore(1);
Semaphore empty = new Semaphore(CAPACITY);
Semaphore full  = new Semaphore(0);
Buffer buffer = new Buffer(CAPACITY);

mutex initial value is the number of processes/threads allowed in the critical section simultaneously. empty initial value is the number of empty buffer slots. full initial value is the number of filled buffer slots.
// Called only by producer.
public void insert(Object item) {
    empty.acquire();
    mutex.acquire();
    buffer.add(item);
    mutex.release();
    full.release();
}

// Called only by consumer.
public Object remove() {
    full.acquire();
    mutex.acquire();
    Object item = buffer.remove();
    mutex.release();
    empty.release();
    return item;
}

Note: the producer call to empty.acquire(); will block the producer if the buffer is full! Study the initial value of empty and the code for acquire() to convince yourself of this. Its subsequent call to full.release() will awaken a consumer blocked for an empty buffer, if any.
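The same structure runs directly on java.util.concurrent.Semaphore; in this sketch an ArrayDeque stands in for the Buffer class (an assumption for illustration), and the consumer sums what it receives so the result is checkable:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

public class BoundedBuffer {
    static final int CAPACITY = 5;
    static final Deque<Integer> buffer = new ArrayDeque<>(); // stands in for Buffer
    static final Semaphore mutex = new Semaphore(1);         // guards buffer
    static final Semaphore empty = new Semaphore(CAPACITY);  // counts empty slots
    static final Semaphore full  = new Semaphore(0);         // counts filled slots
    static int consumedTotal = 0;

    static void insert(int item) throws InterruptedException {
        empty.acquire();              // blocks producer if buffer is full
        mutex.acquire();
        buffer.addLast(item);
        mutex.release();
        full.release();               // one more filled slot; may wake consumer
    }

    static int remove() throws InterruptedException {
        full.acquire();               // blocks consumer if buffer is empty
        mutex.acquire();
        int item = buffer.removeFirst();
        mutex.release();
        empty.release();              // one more empty slot; may wake producer
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 100; i++) insert(i); }
            catch (InterruptedException e) { }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 100; i++) consumedTotal += remove(); }
            catch (InterruptedException e) { }
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
        System.out.println(consumedTotal); // 5050 = 1 + 2 + ... + 100
    }
}
```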
Obviously, a chopstick is the resource requiring mutually exclusive access. It is overly restrictive to assign one semaphore to represent all the chopsticks -- this allows only one philosopher to eat at a time. Suppose you define a semaphore for each chopstick. One possible solution is:
while (true) {
    chopStick[left].acquire();
    chopStick[right].acquire();
    eat();
    chopStick[left].release();
    chopStick[right].release();
    think();
}

Surprisingly, this solution is not guaranteed to work! Suppose all five philosophers decide to eat at about the same time and all are able to get their left chopstick before any can get their right chopstick?!? What happens?
This can be solved using semaphores, and the textbook shows a solution using monitors. Both solutions are somewhat complex and will not be considered here. They involve recording each philosopher's state: thinking, hungry, or eating. This introduces the "hungry" state to describe the period between wanting to eat and having control of both chopsticks.
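A different, simpler way to break the deadlock (not the state-based solution just mentioned) is to number the chopsticks and have every philosopher acquire the lower-numbered one first. This global acquisition order makes the circular wait impossible. A runnable sketch with one semaphore per chopstick:

```java
import java.util.concurrent.Semaphore;

public class Philosophers {
    static final int N = 5;
    static final Semaphore[] chopStick = new Semaphore[N];
    static int mealsServed = 0;

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < N; i++) chopStick[i] = new Semaphore(1);
        Thread[] phil = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int left = i, right = (i + 1) % N;
            // always acquire the lower-numbered chopstick first
            final int first = Math.min(left, right), second = Math.max(left, right);
            phil[i] = new Thread(() -> {
                try {
                    for (int meal = 0; meal < 10; meal++) {
                        chopStick[first].acquire();
                        chopStick[second].acquire();
                        synchronized (Philosophers.class) { mealsServed++; } // eat()
                        chopStick[second].release();
                        chopStick[first].release();
                    }
                } catch (InterruptedException e) { }
            });
        }
        for (Thread t : phil) t.start();
        for (Thread t : phil) t.join();
        System.out.println(mealsServed); // 50: all philosophers ate, no deadlock
    }
}
```

Note the asymmetry this creates: philosopher 4's "first" chopstick is chopstick 0, i.e. the right one, which is exactly what prevents all five from holding a left chopstick at once.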
We will not cover solutions in detail. One solution strategy is this: any number of readers may access the database simultaneously, but a writer needs exclusive access; new readers continue to be admitted even while a writer is waiting, so a writer can starve if readers keep arriving.
An alternate solution does not permit new readers into the database while a writer is waiting. What happens if writers consistently come along more frequently than readers?
The weakness in both solutions is they give one category of processes priority over the other and starvation can occur. There are better solutions.
Java provides some language support for implementing mutual exclusion. Here is the lowdown: declaring a method synchronized makes the entire method body a critical section, with the object itself serving as the lock -- every Java object has a built-in lock, and at most one thread can hold it at a time.
Additional notes:
If only a portion of a method needs to be a critical section, the synchronized keyword can instead be placed in front of a code block, e.g. synchronized(this) { . . . }
I have read that use of volatile can in limited cases substitute for small critical sections. For example:

public class VolDemo {
    private volatile int count = 0;
    public void yin() { count++; }
    public void yang() { count--; }
}

A caution on this tidbit: volatile guarantees only visibility -- every read of count sees the most recent write. The operations count++ and count-- are still read-modify-write sequences, so they are NOT made atomic by volatile; two threads can still interleave and lose updates. volatile alone suffices only when one thread writes the variable and others merely read it; otherwise synchronized (or an atomic class) is still needed. My information resource for this tidbit is http://www.javaperformancetuning.com/tips/volatile.shtml
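If genuinely atomic increments and decrements are what's wanted without a synchronized block, java.util.concurrent.atomic.AtomicInteger provides them directly; a minimal sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        // two threads each perform 50000 atomic increments
        Runnable r = () -> {
            for (int k = 0; k < 50000; k++)
                count.incrementAndGet();   // atomic read-modify-write
        };
        Thread a = new Thread(r), b = new Thread(r);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count.get()); // 100000: no lost updates
    }
}
```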