C SC 340 Lecture 5: Processor Scheduling


Please note that scheduling concepts and algorithms apply to both processes and kernel threads. They will be presented in the context of processes.

Background

A word on context switching

Two quotes from http://www.bellevuelinux.org/context_switch.html:

"Context switching can be described as the kernel performing the following activities with regard to processes (including threads) on the CPU: (1) suspending the progression of one process and storing the CPU's state (i.e., the context) for that process somewhere in memory, (2) retrieving the context of the next process from memory and restoring it in the CPU's registers and (3) returning to the location indicated by the program counter in order to resume the process."

"Context switching is generally computationally intensive. That is, it requires considerable processor time, which can be on the order of nanoseconds for each of the tens or hundreds of switches per second. Thus, context switching represents a substantial cost to the system in terms of CPU time and can, in fact, be the most costly operation on an operating system."

Context switching may be implemented in hardware or in software. Beware: I have read articles where context switching was defined to include scheduling time.

Scheduling goals

There are a variety of possible scheduling algorithms. Each is written to support a set of goals. Not all goals can be fully achieved, because some contradict others (minimizing response time, for instance, works against maximizing throughput). Here are the major goals:

Maximize CPU utilization: keep the CPU as busy as possible.
Maximize throughput: complete as many processes as possible per unit of time.
Minimize turnaround time: the interval from submission of a process to its completion.
Minimize waiting time: the total time a process spends in the ready queue.
Minimize response time: the time from a request's submission until the first response is produced; this is the measure that matters most to interactive users.
Fairness: give every process a reasonable share of the CPU, with no starvation.

Performance evaluation of scheduling algorithms

Scheduling goals are given above. There are several techniques for determining how well a given scheduling algorithm meets those goals, and for comparing different algorithms: deterministic modeling (compute exact figures for a predetermined workload), queueing models (mathematical analysis based on arrival and service rates), simulation (drive a software model of the system with randomly generated or recorded workloads), and implementation (code the algorithm, put it in a real OS, and measure).

Scheduling concepts

Ready queue. Linked list (or similar structure) of processes in "ready" state. Each process is represented by its PCB. The generic queue data structure implies FIFO arrangement of processes, but scheduling policies may alter this based on priority or other factors.
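
Here is a minimal Java sketch of the idea. The PCB class is a hypothetical stand-in; a real PCB carries registers, the program counter, state, accounting information, and more.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical, stripped-down PCB: just a process id here.
    class PCB {
        final int pid;
        PCB(int pid) { this.pid = pid; }
    }

    class ReadyQueue {
        private final Deque<PCB> queue = new ArrayDeque<>();
        void enqueue(PCB p) { queue.addLast(p); }         // process becomes ready
        PCB dispatch()      { return queue.pollFirst(); } // scheduler takes the head
    }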

CPU bursts. If you're into musicals at all (I have a fondness for them, having been a chorus member in our high school production of Bye Bye Birdie), there is a song in The Music Man with the chorus "Pick a little, talk a little, pick a little, talk a little, cheep cheep cheep, talk a lot, pick a little more." Processes are like this: they pick (process) a little, then talk (I/O) a little, then pick some more and talk some more, finally picking a little more before terminating. They alternate between processing and I/O, beginning and ending with processing. The processing periods are called CPU bursts and the I/O periods are called I/O bursts. Most processes over their lifetime have a lot of short CPU bursts and a few long ones. Processes can be classified as CPU-bound or I/O-bound based on the number and lengths of their CPU bursts; this is an important scheduling factor.

Preemptive versus nonpreemptive scheduling. Preemptive scheduling simply means that the scheduler can force a process to give up the CPU when it has not yet reached a waiting (I/O or wait) or terminating operation. (This happened to me my senior year at BGSU, when a football recruit appeared at the door during my conversation with the placement director.) Nonpreemptive scheduling means the scheduler cannot; the running process keeps the CPU until it waits or terminates. Preemption can lead to better scheduler performance but is more complicated to implement, because the interruption can occur while the process is in the middle of a critical operation (it could be a kernel process updating an OS table).

First Come, First Served (FCFS)

Simple scheduling algorithm which allows a running process to keep the CPU until it waits or terminates (nonpreemptive), at which time the scheduler selects the process at the head of a FIFO queue.

The tradeoff for simplicity is poor performance (see the goals above), especially in an interactive environment with CPU-bound processes thrown into the mix.

Suppose the mix consists only of 15 I/O-bound processes (which interactive processes tend to be -- keyboard input, monitor output) each of which has CPU bursts of 2 milliseconds. Processes turn over rapidly, and response times are good: about 15*2 = 30 milliseconds for all users. Throw in one CPU-bound process with CPU bursts of 5 seconds. Now the response time degenerates to over 5 seconds for all users, not just the one with the CPU-bound process. Not acceptable.
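
To make that arithmetic concrete, here is a back-of-the-envelope calculation in Java using the burst lengths assumed above:

    public class FcfsMix {
        public static void main(String[] args) {
            int ioBound = 15;        // interactive, I/O-bound processes
            double ioBurst = 2;      // ms per CPU burst
            double cpuBurst = 5000;  // ms: the one CPU-bound process's burst

            // I/O-bound mix only: each user waits one trip around the queue.
            System.out.println("I/O-bound only: " + ioBound * ioBurst + " ms");

            // Add the CPU-bound process: everyone queued behind it waits out
            // its entire 5-second burst before seeing the CPU again.
            System.out.println("With CPU-bound: " + (ioBound * ioBurst + cpuBurst) + " ms");
        }
    }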

Round Robin scheduling (RR)

This is a preemptive version of FCFS. Preemption is based on how long the running process has held the CPU. The OS defines a system variable called the time quantum (or time slice), which is the maximum length of time a process can hold the CPU before being preempted. Typical length is 10 to 100 milliseconds.

When a process is context switched to the CPU a quantum timer is set. If the timer goes off before the process voluntarily gives up the CPU, the process is preempted and is placed at the tail of the ready queue. It is replaced by the process at the head of the ready queue.

This addresses the major shortcomings of FCFS. However, it may result in a longer average waiting time (averaged over all processes) due to the significantly longer waiting times of CPU-bound processes.

The choice of time quantum length can seriously affect performance. If too long, then scheduling becomes de facto FCFS. If too short, then many processes will be preempted, resulting in many context switches. Context switches are nonproductive overhead that take a small but measurable amount of time. For instance, if the quantum is twice the amount of time as a context switch, the system will spend 1/3 or more of its time just context switching! Quantum length guideline: adjust so that about 80% of CPU bursts are shorter.
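
Here is a quick sketch of that quantum-versus-overhead arithmetic; the 1 ms context switch time is an assumption chosen purely for illustration.

    public class QuantumOverhead {
        public static void main(String[] args) {
            double switchTime = 1.0;  // ms per context switch (assumed)
            for (double quantum : new double[] {2, 10, 100}) {
                // Worst case: the process uses its full quantum, then is
                // switched out, so overhead = switch / (quantum + switch).
                double overhead = switchTime / (quantum + switchTime);
                System.out.printf("quantum %5.0f ms -> %4.1f%% overhead%n",
                                  quantum, 100 * overhead);
            }
        }
    }

With a quantum of twice the switch time (2 ms here), the output confirms the 1/3 figure above.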

Shortest Job First scheduling (SJF)

Note the term job is synonymous with process.

A better name for this algorithm would be shortest "next CPU burst" first. It may be either preemptive or nonpreemptive. The preemptive version is sometimes called shortest "remaining time" first.

The idea is this: when scheduling the next process, schedule the one which will use the CPU for the least amount of time before wait/terminate. Break ties using FCFS.

Do you see an implementation problem here, Mr. Wizard? Yes: we do not know what the length of the next CPU burst will be! The best we can do is predict it based on past behavior. Predictions rely on the high probability that the next CPU burst will be similar in length to previous ones. There is a simple formula, and lots of empirical evidence, to support prediction.

The predicted CPU burst length is a weighted sum of the previous actual burst length and the previous predicted burst length. The weight on each term reflects its relative significance. The weight is a value between 0 (previous prediction only) and 1 (previous actual only), and the two weights must sum to 1.

prediction[n+1] = weight * actual[n] + (1 - weight) * prediction[n]

At the extremes, if the weight is 1, then the prediction is equal to the length of the last burst (recent history only); if the weight is 0, the prediction is equal to the previous prediction (past history only). Typical weight is 0.5.
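
Here is a minimal Java sketch of the predictor. The initial guess of 10 and the sample burst lengths are illustrative assumptions, not measurements.

    public class BurstPredictor {
        private final double weight;  // 0..1: how much recent history counts
        private double prediction;    // predicted length of the next CPU burst

        BurstPredictor(double weight, double initialGuess) {
            this.weight = weight;
            this.prediction = initialGuess;
        }

        // Call after each CPU burst completes, with its measured length.
        void record(double actual) {
            prediction = weight * actual + (1 - weight) * prediction;
        }

        double predict() { return prediction; }

        public static void main(String[] args) {
            BurstPredictor p = new BurstPredictor(0.5, 10);  // typical weight 0.5
            for (double actual : new double[] {6, 4, 6, 4, 13, 13, 13}) {
                System.out.printf("predicted %.2f, actual %.1f%n", p.predict(), actual);
                p.record(actual);
            }
        }
    }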

Preemption under SJF can occur whenever a process becomes ready. If the newly ready process has a shorter predicted CPU burst than the remaining time (predicted minus "actual so far") of the currently running process, then preemption occurs.

SJF in theory results in minimal wait times, yet it is not used much for short-term scheduling. Its main uses are long-term scheduling (determining which processes should be memory-resident) and serving as a benchmark for comparing other algorithms.

Priority scheduling

Each process has a priority value, and the scheduler selects the one with the highest priority. Ties are broken using FCFS.

Priorities are small integer values, and generally a lower value means a higher priority (we'll see that Java threads go the opposite way).

Priorities can be established either externally, by the process "owner", or internally by the OS using an algorithm, or a combination. The SJF algorithm, for example, can be used to establish process priority. The weight given to externally assigned priorities depends on how "honest" the process's owner is.

Priority scheduling can be either preemptive or nonpreemptive. For preemptive, the priority of the newly-ready process is compared to that of the running process. If the former has higher priority, preemption occurs.

Preemptive priority scheduling can lead to starvation: a process with low priority may not be able to run for a long, long time! There are ways of dealing with this, for instance gradually raising the priority of a process as it waits longer and longer, a technique known as aging (sketched below).
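
Here is one possible sketch of aging in Java; the one-second threshold and one-step boost are assumptions, and a real scheduler would tune both.

    import java.util.List;

    class Proc {
        int priority;       // lower value = higher priority, as above
        int waitedMs = 0;   // time spent waiting in the ready queue
        Proc(int priority) { this.priority = priority; }
    }

    class Aging {
        static final int THRESHOLD_MS = 1000;  // assumed: boost after 1 s waiting

        // Called periodically from the scheduler's timer tick.
        static void age(List<Proc> ready, int elapsedMs) {
            for (Proc p : ready) {
                p.waitedMs += elapsedMs;
                if (p.waitedMs >= THRESHOLD_MS && p.priority > 0) {
                    p.priority--;    // one step toward higher priority
                    p.waitedMs = 0;  // restart the aging clock
                }
            }
        }
    }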

Multilevel queueing

In this technique the ready queue is split into multiple queues, each queue representing a given priority level. A newly created process is assigned a priority value and keeps that value throughout its life. Thus every time it becomes ready, it is placed in the queue for its priority level.

Priority can be based on a number of factors, such as whether the process is a kernel or user process, whether the process is a foreground or background process, whether the process is for an interactive user or a batch user, whether the process is expected to be I/O-bound or CPU-bound, and so forth.

Each queue can be maintained using a separate scheduling algorithm if desired. For example, one queue can be RR while another is FCFS, or all can be RR but with different time quantum lengths.

The overall scheduler must select from among the "first" processes in all the queues. In general, no process will be selected from a queue until all queues with higher priority are empty. This can be either preemptive or nonpreemptive. Preemption can be based on the time quantum, on a higher-priority process entering the ready state, or both.
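
Here is a Java sketch of that selection rule, reusing the hypothetical PCB class from the ready queue sketch above; index 0 is the highest-priority queue.

    import java.util.ArrayDeque;
    import java.util.Deque;

    class MultilevelQueue {
        private final Deque<PCB>[] levels;  // index 0 = highest priority

        @SuppressWarnings("unchecked")
        MultilevelQueue(int nLevels) {
            levels = new Deque[nLevels];
            for (int i = 0; i < nLevels; i++) levels[i] = new ArrayDeque<>();
        }

        void makeReady(PCB p, int level) { levels[level].addLast(p); }

        // Select from a queue only when all higher-priority queues are empty.
        PCB dispatch() {
            for (Deque<PCB> q : levels)
                if (!q.isEmpty()) return q.pollFirst();
            return null;  // nothing is ready
        }
    }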

Multilevel feedback queueing

This is a multilevel queueing technique in which a process can change priority over time, and thus be placed into different ready queues at different points in its life. The term feedback refers to changing a process' priority based on feedback from its recent CPU usage performance.

This technique is typically applied to migrate CPU-bound processes into lower priority queues, and I/O-bound processes into higher priority queues. To avoid starvation, a low priority process should be migrated upward if stuck in the queue for a very long time.

This is a very complex scheduling technique, since its parameters include: number of queues, scheduling algorithm for each queue, and algorithms for raising and lowering priority.
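
Here is a sketch of one plausible set of feedback rules in Java; the quanta and the promotion/demotion choices are assumed parameters, not a standard.

    class FeedbackPolicy {
        // Assumed: the quantum doubles at each lower level, so CPU-bound
        // work gets longer, less frequent turns.
        static int quantumFor(int level) { return 10 << level; }  // ms

        // Used the whole quantum: looks CPU-bound, demote one level.
        static int onQuantumExpired(int level, int lowest) {
            return Math.min(level + 1, lowest);
        }

        // Blocked for I/O before the quantum expired: looks I/O-bound, promote.
        static int onVoluntaryYield(int level) {
            return Math.max(level - 1, 0);
        }

        // Stuck too long at a low level: migrate upward to avoid starvation.
        static int onAgingTimeout(int level) {
            return Math.max(level - 1, 0);
        }
    }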

Thread scheduling in Java Virtual Machine

The JVM implements user-level threads and manages them through its own scheduling algorithm.

Every thread has a priority. By default a new thread is given the same priority as its creator, but a thread can change its own priority or that of another thread (provided it has access) using the setPriority() method. The JVM itself does not modify a thread's priority.

The Thread class defines static constants MIN_PRIORITY, MAX_PRIORITY, and NORM_PRIORITY to represent the range of possible priority values and the default. Their values are 1, 10, and 5, respectively. Thus, contrary to most OSs, the JVM uses a higher value to indicate higher priority.

When a thread voluntarily releases the CPU (completes, blocks for I/O, suspends itself), the scheduler replaces it with the highest-priority runnable thread, using FCFS as the tie-breaker. One interesting twist: a thread can yield to another thread of equal priority by calling the yield() method.

A thread is preempted by a higher priority thread entering the runnable state. The JVM specification does not address time slicing so JVM implementations may or may not include it. Keep in mind this is independent of time slicing performed by the OS on which the JVM is running.
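
Here is a small demonstration of the Thread priority API just described:

    public class PriorityDemo {
        public static void main(String[] args) {
            Runnable task = () -> {
                Thread self = Thread.currentThread();
                System.out.println(self.getName() + " priority " + self.getPriority());
                Thread.yield();  // offer the CPU to a runnable thread of equal priority
            };

            Thread t1 = new Thread(task, "worker-1");  // inherits creator's priority (5)
            Thread t2 = new Thread(task, "worker-2");
            t2.setPriority(Thread.MAX_PRIORITY);       // raise to 10

            t1.start();
            t2.start();
        }
    }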

Given this information, how would you classify JVM scheduling (from among the categories above)?

Wild fact: Using the methods available in the Thread class, it is possible to define your own custom thread scheduler to be used with an application! The textbook shows a simple user-defined RR scheduler called Scheduler that implements time slicing. The scheduler itself is implemented as a thread having higher priority than any of the threads it manages.

