CSC 150 Chapter 5: Computer Organization and Architecture
primary information resource: An Invitation to Computer Science (Java), Third Edition, G. Michael Schneider and Judith L. Gersting, Course Technology, 2007.


Topic Index
[ Overview | Memory | ALU | Control Unit | I/O ]


Overview

Computer organization: study of the functional units which comprise the physical computer and their relationships.

This is a higher level of abstraction: a computer is a collection of functional units, which are themselves collections of circuits, which are collections of gates, which are built from transistors.

Software analogy: an OO software application is a collection of classes, which are themselves collections of methods, which are collections of statements, which are collections of machine instructions.

Advances in computer organization are marked by generations:


Nearly all modern digital computers are organized according to a design known as the Von Neumann architecture.  Pronounce it "NOI-man"  (just like Freud is pronounced "froid")


Basic Von Neumann architecture



Organization of Memory

Holds data and instructions; a.k.a. RAM.
Logically it consists of a linear sequence of addressable units, known generically as cells but almost universally implemented as bytes (8-bit chunks).

Addresses are numbered sequentially starting at 0 (0, 1, 2, 3, ...).  The maximum number of addresses, and therefore bytes, is determined by the number of bits available to hold an address.  Nearly all machines have 32-bit addresses, which means there are 2^32 possible addresses.  How many is that?
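The answer to that question can be computed directly.  A quick sketch in Java (the class and method names here are just for illustration):

```java
// Number of distinct addresses with n address bits is 2^n.
public class AddressSpace {
    public static long addresses(int bits) {
        return 1L << bits;  // 2^bits, computed with a 64-bit shift
    }
    public static void main(String[] args) {
        System.out.println(addresses(32)); // prints 4294967296
    }
}
```

So a 32-bit address reaches 4,294,967,296 bytes: about 4 billion, i.e. 4 GB.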

Memory capacity usually specified in kilobytes, megabytes, gigabytes, terabytes.


The two operations associated with memory are:
Fetch: retrieve the contents of a memory cell given its address
Store: modify the contents of a memory cell given its address and the new contents

Memory unit has several basic components (see textbook Figure "Overall RAM Organization"):


Additional notes about those components:

 

Cache Memory

Look at a motherboard: what can you say about the location of the memory cards in relation to the location of the processor (CPU)?  They are physically separated.

As processors became faster, an instruction could be executed faster than the next instruction could be fetched – memory became a bottleneck.

 
The last one is the practical solution.  Cache is pronounced “cash”.  You are probably familiar with this concept as applied by web browsers.  Describe what browser caching is.

If you read CPU specifications, you’ve probably also seen this (e.g. L1 or level 1 cache, L2 or level 2 cache).

Memory access from cache is several times faster than access from memory unit.

How this modifies fetch/store: when a memory fetch is requested, first look in the cache.  If the value is there (called a "cache hit" or just "hit"), we're done.  If not, read from the memory unit as usual, but place an extra copy into the cache.  For a memory store, store into the cache as well.
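These fetch/store rules can be sketched in a few lines of Java.  This is a simplified model, not real hardware: the cache is a `HashMap` of unlimited size in front of a small byte-array "memory unit", and all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of cached fetch/store: check the cache first on a fetch, fall back
// to the (slow) memory unit on a miss, and write through on a store.
public class CachedMemory {
    private final byte[] memory = new byte[1024];       // the memory unit
    private final Map<Integer, Byte> cache = new HashMap<>();
    public int hits = 0, misses = 0;

    public byte fetch(int address) {
        Byte cached = cache.get(address);
        if (cached != null) { hits++; return cached; }  // cache hit: done
        misses++;                                       // cache miss:
        byte value = memory[address];                   //   read from memory unit
        cache.put(address, value);                      //   and keep an extra copy
        return value;
    }

    public void store(int address, byte value) {
        memory[address] = value;                        // write to memory unit
        cache.put(address, value);                      // and to the cache as well
    }
}
```

Note that fetching the same address twice makes the second fetch a hit, which is exactly the temporal locality discussed below.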

We’ll skip the gory details (such as: what if we read from the memory unit into the cache but the cache is already full?  Or, what could happen when data is written to the cache but not to the memory unit?)

Cache works because of the principle of temporal locality: when a given memory location is used, it is likely to be used again very soon (temporal refers to time).  Example: updating a loop counter.

To a lesser extent, spatial locality applies too: if memory cell k is used, then cell k+1 will likely be used very soon.  Examples are sequential instructions, or array elements processed in a loop.

The higher the cache hit rate, the better.  This is the percentage of total memory access requests satisfied by the cache without going to the memory unit.

Example: suppose access time from memory unit is 60 ns (nanoseconds) and access time from cache is 10 ns.  If hit rate is 50%, then the average access time is 10 ns * .5 + (10+60) ns * .5 = 40 ns.  If hit rate is 90%, then the average access time is 10 ns * .9 + (10+60) ns * .1 = 16 ns!
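The formula used in that example can be written as a one-line method.  A sketch (class and method names are just for illustration); the key point is that a miss pays for the failed cache lookup plus the memory access:

```java
// Average access time = hitRate * cacheNs + (1 - hitRate) * (cacheNs + memoryNs),
// since on a miss we pay for the failed cache lookup AND the memory access.
public class AccessTime {
    public static double average(double hitRate, double cacheNs, double memoryNs) {
        return hitRate * cacheNs + (1 - hitRate) * (cacheNs + memoryNs);
    }
}
```

With the numbers from the example above, `average(0.5, 10, 60)` gives 40 ns and `average(0.9, 10, 60)` gives 16 ns.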
 


Organization of I/O units
 


Disk as example of Direct Access.


Sequential Access storage


Organization of ALU

Arithmetic/Logic unit, surprisingly, performs arithmetic (+,-,*,/) and logical (=,<,>, bitwise AND/OR/NOT) operations.

The ALU is located on the processor chip because it works so closely with the control unit and needs to be very fast (and because it does the processing!)

Operations are performed on two (or in the case of NOT, one) input values with one resulting output value.

Where are the input and output values stored?  In registers.  A register is a small, fixed-length circuit that holds a single value.

It is not economical for the ALU to reach out to the memory unit to find its inputs and store its output, so prior to the operation the input values are fetched from memory and copied into two registers within the ALU.  The result is placed into a third register, and only later stored to memory.

For added speed and flexibility, most ALUs have 16 to 32 registers (a.k.a. general purpose registers).

To carry out an operation, the ALU needs to know which registers hold its operands and which register to put the result into.  Control lines handle this.

For simplicity and speed, the ALU actually performs all of its operations in parallel on the same inputs!  All the outputs then go into a multiplexor circuit.  Selector lines also go into the multiplexor, to tell it which operation was the desired one.  The multiplexor produces a single output: the result of the selected operation.

For instance, suppose we want to add two values.  The ALU performs addition, equality, AND, and OR.  The input values are loaded into registers then all four operations are performed simultaneously.  The four results all go into the multiplexor.  The selector lines are set to allow only the results of the addition to be output from the multiplexor.
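This all-operations-then-select design can be mimicked in software.  A minimal sketch, assuming a four-operation ALU as in the example above; the selector encoding (0=add, 1=equals, 2=AND, 3=OR) is made up for illustration:

```java
// All four results are computed every time; the selector lines decide which
// one leaves the multiplexor.
public class AluSketch {
    public static int operate(int a, int b, int selector) {
        int sum = a + b;                 // all four circuits compute in parallel
        int eq  = (a == b) ? 1 : 0;      // equality test (1 = true, 0 = false)
        int and = a & b;                 // bitwise AND
        int or  = a | b;                 // bitwise OR
        switch (selector) {              // the multiplexor: pass one result through
            case 0:  return sum;
            case 1:  return eq;
            case 2:  return and;
            default: return or;
        }
    }
}
```

For example, `operate(6, 3, 0)` selects the addition result, 9, even though the equality, AND, and OR results were computed too.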
 


Organization of Control Unit

The function of the control unit is to fetch an instruction from memory, decode it, and execute it, over and over.

Such instructions are called machine instructions.

Format of instruction: operation code followed by 0 or more operand values.

Operation codes


Operand values (address fields)


Registers required by Control Unit


Simplified fetch/decode/execute cycle (no cache)
1. Address in Program Counter is copied into the Memory Address Register.
2. Instruction is fetched and placed in Memory Data Register.
3. Instruction in Memory Data Register is copied into the Instruction Register.
4. Address in Program Counter is incremented.
5. The operation code from the Instruction Register goes into the instruction decoder.
6. The instruction decoder activates the circuit for that operation code.
7. The circuit executes the instruction.
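The seven steps above can be sketched as a loop over the registers they name.  This toy machine is not the textbook's Von Neumann machine: its two opcodes (1 = add a memory cell into an accumulator, 0 = halt) and its decimal instruction format are invented purely for illustration.

```java
// A toy fetch/decode/execute loop.  Each memory cell holds one instruction
// or one data value; an instruction is opcode*100 + operand address.
public class ToyMachine {
    int[] memory;
    int pc = 0;            // Program Counter
    int mar, mdr, ir;      // Memory Address, Memory Data, Instruction Registers
    int accumulator = 0;

    public ToyMachine(int[] memory) { this.memory = memory; }

    public int run() {
        while (true) {
            mar = pc;                  // 1. PC copied into MAR
            mdr = memory[mar];         // 2. instruction fetched into MDR
            ir = mdr;                  // 3. MDR copied into IR
            pc++;                      // 4. PC incremented
            int opcode  = ir / 100;    // 5. decoder separates the op code
            int operand = ir % 100;    //    from the operand address
            switch (opcode) {          // 6-7. activate and run the circuit
                case 1: accumulator += memory[operand]; break; // ADD cell
                case 0: return accumulator;                    // HALT
            }
        }
    }
}
```

Running `new ToyMachine(new int[]{103, 104, 0, 5, 7}).run()` adds cells 3 and 4 (values 5 and 7) and halts with 12, after three trips through the cycle.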

Instruction Set used in textbook and lab manual.
See textbook figure "Instruction Set for Our Von Neumann Machine".
Four classes of instructions: data transfer, arithmetic, comparison, branching.


END OF MATERIAL FOR EXAM 1.


Peter Sanderson (PSanderson@otterbein.edu)