Textbook

Computer Architecture

Science and Technology

Top-Level View of Computer Organization

Author: Hoang Lan Nguyen

Computer Component

Contemporary computer designs are based on concepts developed by John von Neumann at the Institute for Advanced Study, Princeton. Such a design is referred to as the von Neumann architecture and is based on three key concepts:

  • Data and instructions are stored in a single read-write memory.
  • The contents of this memory are addressable by location, without regard to the type of data contained there.
  • Execution occurs in a sequential fashion (unless explicitly modified) from one instruction to the next.

Suppose that there is a small set of basic logic components that can be combined in various ways to store binary data and to perform arithmetic and logical operations on that data. If there is a particular computation to be performed, a configuration of logic components designed specifically for that computation could be constructed. We can think of the process of connecting the various components in the desired configuration as a form of programming. The resulting "program" is in the form of hardware and is termed a hardwired program.

Now consider an alternative. Suppose we construct a general-purpose configuration of arithmetic and logic functions; this set of hardware will perform various functions on data depending on control signals applied to the hardware. In the original case of customized hardware, the system accepts data and produces results (Figure 1a). With general-purpose hardware, the system accepts data and control signals and produces results. Thus, instead of rewiring the hardware for each new program, the programmer merely needs to supply a new set of control signals.

How shall control signals be supplied? The answer is simple but subtle. The entire program is actually a sequence of steps. At each step, some arithmetic or logical operation is performed on some data. For each step, a new set of control signals is needed. Let us provide a unique code for each possible set of control signals, and let us add to the general-purpose hardware a segment that can accept a code and generate control signals (Figure 1b).

Programming is now much easier. Instead of rewiring the hardware for each new program, all we need to do is provide a new sequence of codes. Each code is, in effect, an instruction, and part of the hardware interprets each instruction and generates control signals. To distinguish this new method of programming, a sequence of codes or instructions is called software.

Hardware and software approaches

Figure 1b indicates two major components of the system: an instruction interpreter and a module of general-purpose arithmetic and logic functions. These two constitute the CPU.

Several other components are needed to yield a functioning computer. Data and instructions must be put into the system. For this we need some sort of input module. This module contains basic components for accepting data and instructions in some form and converting them into an internal form of signals usable by the system. A means of reporting results is needed, and this is in the form of an output module. Taken together, these are referred to as I/O components.

One more component is needed. An input device will bring instructions and data in sequentially, but a program is not invariably executed sequentially: it may jump around. Similarly, operations on data may require access to more than just one element at a time in a predetermined sequence. Thus, there must be a place to store temporarily both instructions and data. That module is called memory, or main memory to distinguish it from external storage or peripheral devices. Von Neumann pointed out that the same memory could be used to store both instructions and data.

Figure 2 illustrates these top-level components and suggests the interactions among them. The CPU exchanges data with memory. For this purpose, it typically makes use of two internal (to the CPU) registers: a memory address register (MAR), which specifies the address in memory for the next read or write, and a memory buffer register (MBR), which contains the data to be written into memory or receives the data read from memory. Similarly, an I/O address register (I/OAR) specifies a particular I/O device. An I/O buffer register (I/OBR) is used for the exchange of data between an I/O module and the CPU.

A memory module consists of a set of locations, defined by sequentially numbered addresses. Each location contains a binary number that can be interpreted as either an instruction or data. An I/O module transfers data from external devices to CPU and memory, and vice versa. It contains internal buffers for temporarily holding these data until they can be sent on.

Having looked briefly at these major components, we now turn to an overview of how these components function together to execute programs.

Computer components: Top-level view

Computer Function

The basic function performed by a computer is execution of a program, which consists of a set of instructions stored in memory. The processor does the actual work by executing instructions specified in the program. In its simplest form, instruction processing consists of two steps: the processor reads (fetches) instructions from memory one at a time and executes each instruction. Program execution consists of repeating the process of instruction fetch and instruction execution. Instruction execution may involve several operations and depends on the nature of the instruction.

The processing required for a single instruction is called an instruction cycle. Using the simplified two-step description given previously, the instruction cycle is depicted in Figure 3.

Basic instruction cycle

The two steps are referred to as the fetch cycle and the execute cycle. Program execution halts only if the machine is turned off, some sort of unrecoverable error occurs, or a program instruction that halts the computer is encountered.

Instruction Fetch and Execute

Fetch Cycle:

  • Program Counter (PC) holds address of next instruction to fetch
  • Processor fetches instruction from memory location pointed to by PC
  • Increment PC (unless told otherwise)
  • Instruction loaded into Instruction Register (IR)
  • Processor interprets instruction and performs required actions

At the beginning of each instruction cycle, the processor fetches an instruction from memory. In a typical processor, a register called the program counter (PC) holds the address of the instruction to be fetched next. Unless told otherwise, the processor always increments the PC after each instruction fetch so that it will fetch the next instruction in sequence. The fetched instruction is loaded into a register in the processor known as the instruction register (IR). The instruction contains bits that specify the action the processor is to take. The processor interprets the instruction and performs the required action.
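
The fetch cycle above can be sketched in a few lines of Python; the memory contents and the one-word instruction size are illustrative assumptions, not any real machine's layout.

```python
# A minimal sketch of the fetch cycle: read the memory word the PC points
# at into the IR, then increment the PC so it points at the next
# instruction in sequence.

def fetch(memory, pc):
    """Fetch one instruction; return the IR contents and the updated PC."""
    ir = memory[pc]   # fetched instruction is loaded into the IR
    pc = pc + 1       # PC incremented (unless a branch later overrides it)
    return ir, pc

memory = [0x1940, 0x5941, 0x2941]  # three made-up instruction words
pc = 0
ir, pc = fetch(memory, pc)
# IR now holds the first instruction word; PC points at the second.
```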

Execute Cycle:

In general, the required actions fall into four categories:

  • Processor-memory: Data may be transferred from processor to memory or from memory to processor.
  • Processor-I/O: Data may be transferred to or from a peripheral device by transferring between the processor and an I/O module.
  • Data processing: The processor may perform some arithmetic or logic operation on data.
  • Control: An instruction may specify that the sequence of execution be altered.
An instruction's execution may involve a combination of these actions.

Instruction Cycle State Diagram:

Figure 4 provides a more detailed look at the basic instruction cycle. The figure is in the form of a state diagram. For any given instruction cycle, some states may be null and others may be visited more than once. The states can be described as follows:

Instruction cycle state diagram
  • Instruction address calculation (iac): Determine the address of the next instruction to be executed. Usually, this involves adding a fixed number to the address of the previous instruction. For example, if each instruction is 16 bits long and memory is organized into 16-bit words, then add 1 to the previous address. If, instead, memory is organized as individually addressable 8-bit bytes, then add 2 to the previous address.
  • Instruction fetch (if): Read instruction from its memory location into the processor.
  • Instruction operation decoding (iod): Analyze instruction to determine the type of operation to be performed and the operand(s) to be used.
  • Operand address calculation (oac): If the operation involves reference to an operand in memory or available via I/O, then determine the address of the operand.
  • Operand fetch (of): Fetch the operand from memory or read it in from I/O.
  • Data operation (do): Perform the operation indicated in the instruction.
  • Operand store (os): Write the result into memory or out to I/O.
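
The instruction address calculation rule above amounts to a small helper; the parameter names are illustrative, and the example values come from the text (16-bit instructions over word-addressable versus byte-addressable memory).

```python
# Sketch of the instruction-address-calculation (iac) step: the PC
# increment is the instruction length divided by the size of one
# addressable unit.

def next_instruction_address(pc, instruction_bits, addressable_unit_bits):
    """Address of the next sequential instruction."""
    increment = instruction_bits // addressable_unit_bits
    return pc + increment

word_next = next_instruction_address(300, 16, 16)  # word-addressable: add 1
byte_next = next_instruction_address(300, 16, 8)   # byte-addressable: add 2
```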


Virtually all computers provide a mechanism called an interrupt, by which other modules (I/O, memory) may interrupt the normal processing of the processor. Interrupts are provided primarily as a way to improve processing efficiency.

For example, most external devices are much slower than the processor. Suppose that the processor is transferring data to a printer using the instruction cycle scheme of Figure 3. After each write operation, the processor must pause and remain idle until the printer catches up. The length of this pause may be on the order of many hundreds or even thousands of instruction cycles that do not involve memory. Clearly, this is a very wasteful use of the processor. Figure 5a illustrates this state of affairs.

Program flow control without and with interrupts

The user program (depicted in Figure 5a) performs a series of WRITE calls interleaved with processing. Code segments 1, 2, and 3 refer to sequences of instructions that do not involve I/O. The WRITE calls are to an I/O program that is a system utility and that will perform the actual I/O operation. The I/O program consists of three sections:

  • A sequence of instructions, labeled 4 in the figure, to prepare for the actual I/O operation. This may include copying the data to be output into a special buffer and preparing the parameters for a device command.
  • The actual I/O command. Without the use of interrupts, once this command is issued, the program must wait for the I/O device to perform the requested function (or periodically poll the device). The program might wait by simply repeatedly performing a test operation to determine if the I/O operation is done.
  • A sequence of instructions, labeled 5 in the figure, to complete the operation. This may include setting a flag indicating the success or failure of the operation.

Because the I/O operation may take a relatively long time to complete, the I/O program is hung up waiting for the operation to complete; hence, the user program is stopped at the point of the WRITE call for some considerable period of time.
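
The wasteful busy-wait just described can be sketched as follows; the printer model is a hypothetical stand-in for a slow device, not a real driver.

```python
# Programmed I/O without interrupts: after issuing the command, the
# processor repeatedly tests a status flag and does no useful work
# until the device reports completion.

class PrinterDevice:
    """Toy device that needs a fixed number of status polls to finish."""
    def __init__(self, cycles_needed):
        self.cycles_remaining = cycles_needed

    def busy(self):
        # Each status poll, the device gets one step closer to done.
        if self.cycles_remaining > 0:
            self.cycles_remaining -= 1
        return self.cycles_remaining > 0

def write_and_wait(device):
    """Issue the command, then busy-wait; return the wasted poll count."""
    wasted_polls = 0
    while device.busy():       # repeated test operation
        wasted_polls += 1      # processor cycles doing no useful work
    return wasted_polls

polls = write_and_wait(PrinterDevice(cycles_needed=1000))
# Nearly a thousand processor "cycles" are spent just polling.
```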

Interrupts and the Instruction Cycle

With interrupts, the processor can be engaged in executing other instructions while an I/O operation is in progress. Consider the flow of control in Figure 5b. As before, the user program reaches a point at which it makes a system call in the form of a WRITE call. The I/O program that is invoked in this case consists only of the preparation code and the actual I/O command. After these few instructions have been executed, control returns to the user program. Meanwhile, the external device is busy accepting data from computer memory and printing it. This I/O operation is conducted concurrently with the execution of instructions in the user program.

When the external device becomes ready to be serviced, that is, when it is ready to accept more data from the processor, the I/O module for that external device sends an interrupt request signal to the processor. The processor responds by suspending operation of the current program, branching off to a program to service that particular I/O device, known as an interrupt handler, and resuming the original execution after the device is serviced. The points at which such interrupts occur are indicated by an asterisk in Figure 5b.

From the point of view of the user program, an interrupt is just that: an interruption of the normal sequence of execution. When the interrupt processing is completed, execution resumes (Figure 6). Thus, the user program does not have to contain any special code to accommodate interrupts; the processor and the operating system are responsible for suspending the user program and then resuming it at the same point.

The transfer of control via interrupt

To accommodate interrupts, an interrupt cycle is added to the instruction cycle, as shown in Figure 7. In the interrupt cycle, the processor checks to see if any interrupts have occurred, indicated by the presence of an interrupt signal. If no interrupts are pending, the processor proceeds to the fetch cycle and fetches the next instruction of the current program. If an interrupt is pending, the processor does the following:

  • It suspends execution of the current program being executed and saves its context. This means saving the address of the next instruction to be executed (current contents of the program counter) and any other data relevant to the processor's current activity.
  • It sets the program counter to the starting address of an interrupt handler routine.
Instruction Cycle with Interrupts.

The processor now proceeds to the fetch cycle and fetches the first instruction in the interrupt handler program, which will service the interrupt. The interrupt handler program is generally part of the operating system. Typically, this program determines the nature of the interrupt and performs whatever actions are needed. In the example we have been using, the handler determines which I/O module generated the interrupt, and may branch to a program that will write more data out to that I/O module. When the interrupt handler routine is completed, the processor can resume execution of the user program at the point of interruption. Figure 8 shows a revised instruction cycle state diagram that includes interrupt cycle processing.

Instruction cycle state diagram with interrupt
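
The interrupt cycle described above can be sketched in a few lines of Python; the handler start address and the context list are illustrative assumptions.

```python
# Simplified interrupt cycle: after each instruction, check for a pending
# interrupt; if one is present, save the PC (the context) and redirect
# fetching to the start of the interrupt handler.

HANDLER_START = 0x100   # hypothetical start address of the handler routine

def interrupt_cycle(pc, interrupt_pending, saved_context):
    """Return the address to fetch from next, saving context if needed."""
    if interrupt_pending:
        saved_context.append(pc)   # save address of next user instruction
        return HANDLER_START       # fetch next from the handler
    return pc                      # no interrupt: continue in sequence

saved = []
pc = interrupt_cycle(0x42, interrupt_pending=True, saved_context=saved)
# Handler runs here; on completion the processor restores the saved PC.
resumed_pc = saved.pop()
```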

Multiple Interrupts

In some cases, multiple interrupts can occur. For example, a program may be receiving data from a communications line and printing results. The printer will generate an interrupt every time that it completes a print operation. The communication line controller will generate an interrupt every time a unit of data arrives. The unit could either be a single character or a block, depending on the nature of the communications discipline. In any case, it is possible for a communications interrupt to occur while a printer interrupt is being processed. Two approaches can be taken to dealing with multiple interrupts:

  • Disabling interrupts while an interrupt is being processed.
  • Defining priorities for interrupts.

The first is to disable interrupts while an interrupt is being processed. A disabled interrupt simply means that the processor can and will ignore that interrupt request signal. If an interrupt occurs during this time, it generally remains pending and will be checked by the processor after the processor has enabled interrupts. Thus, when a user program is executing and an interrupt occurs, interrupts are disabled immediately. After the interrupt handler routine completes, interrupts are enabled before resuming the user program, and the processor checks to see if additional interrupts have occurred. This approach is nice and simple, as interrupts are handled in strict sequential order (Figure 9).

Sequential interrupt processing
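
The disable-and-queue behavior above can be modeled as below; the queue-based processor object is an illustrative assumption, not a hardware description.

```python
# Sequential interrupt processing: interrupts arriving while a handler
# runs are not lost but remain pending, and are serviced in strict
# arrival order once interrupts are re-enabled.

from collections import deque

class SequentialInterruptProcessor:
    def __init__(self):
        self.pending = deque()   # interrupt requests not yet serviced
        self.enabled = True
        self.serviced = []       # order in which handlers actually ran

    def raise_interrupt(self, source):
        self.pending.append(source)   # stays pending even while disabled

    def service_all(self):
        while self.pending:
            self.enabled = False                      # disable on entry
            self.serviced.append(self.pending.popleft())
            self.enabled = True                       # re-enable, re-check

cpu = SequentialInterruptProcessor()
cpu.raise_interrupt("printer")
cpu.raise_interrupt("comms")   # arrives while the first is being handled
cpu.service_all()
# Interrupts are handled strictly in the order they arrived.
```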

The second approach is to define priorities for interrupts and to allow an interrupt of higher priority to cause a lower-priority interrupt handler to be itself interrupted (Figure 10).

Transfer of Control with Multiple Interrupts

Interconnection Structures

A computer consists of a set of components or modules of three basic types (processor, memory, I/O) that communicate with each other. In effect, a computer is a network of basic modules. Thus, there must be paths for connecting the modules.

The collection of paths connecting the various modules is called the interconnection structure. The design of this structure will depend on the exchanges that must be made between modules.

Figure 11 suggests the types of exchanges that are needed by indicating the major forms of input and output for each module type:

  • Memory
  • Input/Output
  • CPU

The interconnection structure must support the following types of transfers:

  • Memory to processor: The processor reads an instruction or a unit of data from memory.
  • Processor to memory: The processor writes a unit of data to memory.
  • I/O to processor: The processor reads data from an I/O device via an I/O module.
  • Processor to I/O: The processor sends data to the I/O device.
  • I/O to or from memory: For these two cases, an I/O module is allowed to exchange data directly with memory, without going through the processor, using direct memory access (DMA).

Over the years, a number of interconnection structures have been tried. By far the most common is the bus and various multiple-bus structures.

Computer Modules

Bus Interconnection

A bus is a communication pathway connecting two or more devices. A key characteristic of a bus is that it is a shared transmission medium. Multiple devices connect to the bus, and a signal transmitted by any one device is available for reception by all other devices attached to the bus (broadcast). If two devices transmit during the same time period, their signals will overlap and become garbled. Thus, only one device at a time can successfully transmit.

Typically, a bus consists of multiple communication pathways, or lines. Each line is capable of transmitting signals representing binary 1 and binary 0. Over time, a sequence of binary digits can be transmitted across a single line. Taken together, several lines of a bus can be used to transmit binary digits simultaneously (in parallel). For example, an 8-bit unit of data can be transmitted over eight bus lines.

Computer systems contain a number of different buses that provide pathways between components at various levels of the computer system hierarchy. A bus that connects major computer components (processor, memory, I/O) is called a system bus. The most common computer interconnection structures are based on the use of one or more system buses.

Bus Structure

A system bus consists, typically, of from about 50 to hundreds of separate lines. Each line is assigned a particular meaning or function. Although there are many different bus designs, on any bus the lines can be classified into three functional groups (Figure 12): data, address, and control lines. In addition, there may be power distribution lines that supply power to the attached modules.

The data lines (data bus):

  • Provide a path for moving data between system modules. These lines, collectively, are called the data bus.
  • The width of the data bus: The data bus may consist of from 32 to hundreds of separate lines, the number of lines being referred to as the width of the data bus. Because each line can carry only 1 bit at a time, the number of lines determines how many bits can be transferred at a time. The width of the data bus is a key factor in determining overall system performance. For example, if the data bus is 8 bits wide and each instruction is 16 bits long, then the processor must access the memory module twice during each instruction cycle.
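
The 8-bit-bus example above amounts to a ceiling division, sketched here:

```python
# Number of memory accesses per instruction fetch: the instruction length
# divided by the data bus width, rounded up, since each transfer moves at
# most one bus-width of bits.

import math

def accesses_per_fetch(instruction_bits, data_bus_bits):
    return math.ceil(instruction_bits / data_bus_bits)

narrow = accesses_per_fetch(16, 8)    # 8-bit bus, 16-bit instruction
wide = accesses_per_fetch(16, 16)     # bus as wide as the instruction
```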

The address lines ( address bus):

  • Address lines are used to designate the source or destination of the data on the data bus. For example, if the processor wishes to read a word (8, 16, or 32 bits) of data from memory, it puts the address of the desired word on the address lines.
  • The width of the address bus determines the maximum possible memory capacity of the system. Furthermore, the address lines are generally also used to address I/O ports.
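
The capacity limit can be made concrete: n address lines can name 2^n distinct locations. A minimal sketch, with bus widths chosen for illustration:

```python
# Each address line carries one bit of the address, so n lines can
# distinguish 2**n locations; this caps the memory the system can address.

def max_addressable_locations(address_lines):
    return 2 ** address_lines

small = max_addressable_locations(16)   # a 16-bit address bus
large = max_addressable_locations(32)   # a 32-bit address bus
```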

The control lines (control bus):

  • The control lines are used to control access to and use of the data and address lines. Because the data and address lines are shared by all components, there must be a means of controlling their use. Control signals transmit both command and timing information between system modules. Timing signals indicate the validity of data and address information.
  • Command signals specify operations to be performed. Typical control lines include the following:
    • Memory write: Causes data on the bus to be written into the addressed location.
    • Memory read: Causes data from the addressed location to be placed on the bus.
    • I/O write: Causes data on the bus to be output to the addressed I/O port.
    • I/O read: Causes data from the addressed I/O port to be placed on the bus.
    • Transfer ACK: Indicates that data have been accepted from or placed on the bus.
    • Bus request: Indicates that a module needs to gain control of the bus.
    • Bus grant: Indicates that a requesting module has been granted control of the bus.
    • Interrupt request: Indicates that an interrupt is pending.
    • Interrupt ACK: Acknowledges that the pending interrupt has been recognized.
    • Clock: Used to synchronize operations.
    • Reset: Initializes all modules.
Bus Interconnection Scheme
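
A memory read using these control lines might be modeled as below; the dictionary-based bus and the two-signal handshake are purely illustrative, not a description of any real bus protocol.

```python
# Toy model of a memory-read transaction: the processor has placed an
# address on the address lines and asserted "memory read"; the memory
# module responds by placing the data on the data lines and asserting
# "transfer ACK" to indicate the data are valid.

def memory_read_transaction(bus, memory):
    """Memory module's side of a read: respond to the control signal."""
    if bus["control"] == "memory read":
        bus["data"] = memory[bus["address"]]
        bus["control"] = "transfer ACK"   # data now valid on the bus
    return bus

memory = {0x20: 0xBEEF}
bus = {"address": 0x20, "data": None, "control": "memory read"}
bus = memory_read_transaction(bus, memory)
```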

Multiple-Bus Hierarchies

If a great number of devices are connected to the bus, performance will suffer. There are two main causes:

  • In general, the more devices attached to the bus, the greater the bus length and hence the greater the propagation delay. This delay determines the time it takes for devices to coordinate the use of the bus. When control of the bus passes from one device to another frequently, these propagation delays can noticeably affect performance.
  • The bus may become a bottleneck as the aggregate data transfer demand approaches the capacity of the bus. This problem can be countered to some extent by increasing the data rate that the bus can carry and by using wider buses (e.g., increasing the data bus from 32 to 64 bits). However, because the data rates generated by attached devices (e.g., graphics and video controllers, network interfaces) are growing rapidly, this is a race that a single bus is ultimately destined to lose.

Accordingly, most computer systems use multiple buses, generally laid out in a hierarchy. A typical traditional structure is shown in Figure 13. There is a local bus that connects the processor to a cache memory and that may support one or more local devices. The cache memory controller connects the cache not only to this local bus, but to a system bus to which are attached all of the main memory modules.

It is possible to connect I/O controllers directly onto the system bus. A more efficient solution is to make use of one or more expansion buses for this purpose. An expansion bus interface buffers data transfers between the system bus and the I/O controllers on the expansion bus. This arrangement allows the system to support a wide variety of I/O devices and at the same time insulate memory-to-processor traffic from I/O traffic.

Traditional (ISA) (with cache):

Traditional bus architecture

Elements of Bus Design




Method of Arbitration:






Bus Width:



Data Transfer Type: