Components of a Processor
Lesson 3
Lesson Objectives
- Describe von Neumann, Harvard and contemporary processor architectures
- Describe the differences between, and uses of, CISC and RISC processors
- Describe GPUs and their uses
- Describe multicore and parallel systems
Multipurpose Machines
- Early computers calculated an output using fixed instructions
- They could perform only one set of instructions
- In the 1940s, John von Neumann and Alan Turing both proposed the stored program concept
Stored Program Concept
- A program must be loaded into main memory to be executed by the processor
- The instructions are fetched one at a time, decoded and executed sequentially by the processor
- The sequence of instructions can only be changed by a conditional or unconditional jump instruction
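The fetch-decode-execute cycle above can be illustrated with a minimal sketch of a stored-program machine. The instruction set here (LOAD, ADD, STORE, JUMP, HALT) is hypothetical, invented for illustration; the key point is that instructions and data sit in the same memory and are fetched one at a time.

```python
# A toy stored-program machine (hypothetical instruction set).
# Program and data share one memory; instructions are fetched,
# decoded and executed sequentially by the loop below.

def run(memory):
    acc = 0          # a single accumulator register
    pc = 0           # program counter: address of the next instruction
    while True:
        opcode, operand = memory[pc]   # fetch
        pc += 1
        if opcode == "LOAD":           # decode and execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "JUMP":         # a jump changes the sequence of execution
            pc = operand
        elif opcode == "HALT":
            return memory

# Instructions and data in the SAME memory (von Neumann style):
# addresses 0-3 hold instructions, addresses 5-7 hold data.
program = [
    ("LOAD", 5),     # acc = memory[5]
    ("ADD", 6),      # acc = acc + memory[6]
    ("STORE", 7),    # memory[7] = acc
    ("HALT", None),
    None,            # unused slot
    2, 3, 0,         # data: 2, 3, and a slot for the result
]
result = run(program)
print(result[7])     # 5
```

Because program and data share one memory, a program could in principle even overwrite its own instructions, which is exactly why the shared memory (and shared bus) is both flexible and a potential bottleneck.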
John von Neumann
- The most common implementation of this concept is the von Neumann architecture
- Instructions and data are stored in a common main memory and transferred using a single shared bus
- What compromises might there be with a shared bus?
Harvard Architecture
- An alternative model separates data and instructions into separate memories accessed over different buses
- Program instructions and data are no longer competing for the same bus
Uses of Harvard Architecture
- Different sized memories and word lengths can be used for data and for instructions
- Harvard principles are used in specialist embedded systems and digital signal processing (DSP), where speed takes priority over the complexity of the design
- Can you think of any other examples of where this is used?
Advantages of Von Neumann
- Owing primarily to cost and programming complexity, almost all general purpose computers are based on von Neumann's principles
- It simplifies the design of the Control Unit
- Data from memory and data from devices are accessed in the same way
Von Neumann vs Harvard
Fill in the blanks:

| Von Neumann | Harvard |
|---|---|
| Used in PCs, laptops, servers and high performance computers | |
| Data and instructions share the same memory; both use the same word length | |
| One bus for data and instructions is a bottleneck | |
| One bus is simpler for control unit design | |
Contemporary Processor Architectures
- Modern CPU chips often incorporate aspects of both von Neumann and Harvard architecture
- In desktop computers, there is one main memory for holding both data and instructions, but cache memory is divided into an instruction cache and a data cache so data and instructions are retrieved using Harvard architecture
- Some digital signal processors have multiple parallel data buses (two write, three read) and one instruction bus
CISC and RISC
- In Complex Instruction Set Computers (CISC), a large instruction set is used to accomplish tasks in as few lines of assembly language as possible
- A CISC instruction combines a "load/store" instruction with the instruction that carries out the actual calculation
- A single assembly language instruction such as:
  MULT A, B
  could be used to multiply A by B and store the result back in A
RISC
- Reduced Instruction Set Computers (RISC) take an opposite approach
- A minimum number of very simple instructions, each taking one clock cycle, are used to accomplish all the required operations in multiple general purpose registers
- How would the multiplication operation be carried out with the following operations?
LDA (LOAD)
STO (STORE)
MULT (MULTIPLY)
Coding in RISC
- The CISC instruction:
  MULT A, B
  might be written in RISC assembly code as:
  LDA R1, A
  LDA R2, B
  MULT R1, R2
  STO R1, A
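The load/store separation in the four instructions above can be sketched in Python. This is a hypothetical model, not a real assembler: each instruction is a small function, and the point is that MULT only ever touches registers, so values must be loaded from memory first and the result stored back explicitly.

```python
# A sketch of the RISC load/store idea (hypothetical machine):
# arithmetic operates only on registers, never directly on memory.

memory = {"A": 6, "B": 7}
registers = {}

def lda(reg, addr):
    """LDA: copy a value from memory into a register."""
    registers[reg] = memory[addr]

def sto(reg, addr):
    """STO: copy a register's value back out to memory."""
    memory[addr] = registers[reg]

def mult(dest, src):
    """MULT: register-to-register multiply (one simple step)."""
    registers[dest] = registers[dest] * registers[src]

# The RISC equivalent of the single CISC instruction MULT A, B:
lda("R1", "A")
lda("R2", "B")
mult("R1", "R2")
sto("R1", "A")
print(memory["A"])    # 42
```

Four simple single-cycle steps replace one complex CISC instruction; this regularity is what makes RISC pipelines easy to build.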
Advantages of CISC and RISC
Task
Write a report on the differences between CISC and RISC.
It should include:
- A brief history of both architectures.
- The primary design philosophies for each: What is the main idea behind their respective designs?
- Advantages and disadvantages of each architecture.
- Practical examples: Identify and describe some real-world processors that are designed based on CISC and RISC architectures.
- How have these architectures influenced the development and evolution of modern computing?
Multi-core and Parallel Systems
- Multi-core processors distribute their workload across multiple processor cores, achieving significantly higher performance by performing several tasks in parallel
- They are therefore known as parallel systems
- Many personal computers and mobile devices are dual-core or quad-core, meaning they have two or four processor cores on a single chip
- Supercomputers have thousands of cores
Using Parallel Processing
- The software has to be written to take advantage of multiple cores
- For example, browsers such as Google Chrome and Mozilla Firefox can run several concurrent processes
- Using tabbed browsing, different cores can work simultaneously, processing requests, showing videos or running software in different windows
Co-Processing Systems
- A co-processor is an extra processor used to supplement the functions of the primary processor (the CPU)
- It may be used to perform floating point arithmetic, graphics processing, digital signal processing and other functions
- It generally carries out only a limited range of functions
GPU
- A Graphics Processing Unit (GPU) is a specialised electronic circuit which is very efficient at manipulating computer graphics and performing image processing
- It consists of thousands of small, efficient cores designed for parallel processing
- It can process large blocks of visual data simultaneously
- In a PC, a GPU may be present on a graphics card
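The data-parallel style a GPU is built for can be sketched in plain Python (this runs on the CPU; it only illustrates the pattern). The same simple operation is applied independently to every element of a block of data, here brightening every pixel of a tiny greyscale "image"; on a GPU, each core would handle its own slice of pixels simultaneously.

```python
# A sketch of the data-parallel pattern GPUs accelerate: one simple
# rule applied independently to every pixel in a block of image data.

def brighten(pixels, amount):
    # Each pixel is processed by the same rule, with no dependency on
    # its neighbours -- exactly the pattern that maps well onto
    # thousands of small GPU cores working at once.
    return [min(p + amount, 255) for p in pixels]

image = [10, 120, 250, 0]      # a tiny "image" of greyscale values 0-255
print(brighten(image, 20))     # [30, 140, 255, 20]
```

Because no pixel depends on any other, the work divides perfectly across cores; tasks with this shape (graphics, matrix arithmetic, many scientific simulations) are the ones GPUs excel at.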
GPU
- A GPU can act together with a CPU to accelerate scientific, engineering and other applications
- They are used in numerous devices ranging from mobile phones and tablets to cars, drones and robots
Activity
Complete Tasks 1 and 2 in the worksheet set on Moodle
Components of a processor - Student
By CJackson