Question: (Getting your feet wet with some background info; valuable info regarding the project starts on page 5)
A CPU can be thought of as a complex, programmable state machine that performs operations according to a set of instructions stored in some memory. Modern CPUs are *very* difficult to understand or reverse-engineer because of the layered design needed for backwards compatibility with older generations, as well as the protection that manufacturers such as AMD, Intel, Qualcomm, and ARM place around their intellectual property (IP).
Although this incremental development over the years has caused the hardware architecture to become cluttered and confusing, at their heart, processors still go through the same fetch-decode-execute cycle.
The processor retrieves an instruction from memory, decodes the instruction to determine what actions need to be performed, performs the necessary actions, and then begins retrieving the next instruction from the next memory location. On their own, individual instructions perform simple tasks (move values between registers, fetch values from a specific memory address, etc.), but when instructions are arranged in special sequences, extremely complex tasks can be performed. When instructions are arranged in this manner, they are collectively referred to as a program.
Each instruction in this program goes through a sequence like the following:
The instruction is fetched from memory so that execution can begin
The instruction is decoded (look up Instruction Set Architecture if you are interested in how this happens)
Operands/values are collected and stored in internal registers inside the CPU
After the instruction is fetched and decoded, the execution stage begins: operations are performed on the operands that were brought in. These operations can be anything from adding numbers to logic manipulations and more. Once the results are obtained, they are either kept in registers for later use (for branch prediction in more complex CPUs, for example) or written back to memory, overwriting other locations. When this is done, the CPU moves on to the next instruction residing in memory. Completing a single instruction typically takes several clock cycles (sometimes 50 or more), depending on where the values are being loaded from (L1-L3 cache, RAM, SSD/HDD).
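To make the cycle concrete, below is a minimal software model of a fetch-decode-execute loop, written in C. The three-instruction ISA, the 16-bit encoding, the register count, and the memory size are illustrative assumptions for this sketch only; they are not part of the CPU you are asked to design.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 16-bit encoding: [4-bit opcode][4-bit dst][4-bit src][4-bit imm] */
enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint16_t mem[256] = {
        /* Tiny program: r1 = 5; r2 = 7; r1 = r1 + r2; mem[15] = r1;
           the remaining words are zero, which decodes as OP_HALT. */
        (OP_LOADI << 12) | (1 << 8) | 5,
        (OP_LOADI << 12) | (2 << 8) | 7,
        (OP_ADD   << 12) | (1 << 8) | (2 << 4),
        (OP_STORE << 12) | (1 << 8) | 15,
    };
    uint16_t r[16] = {0};   /* internal register file */
    uint16_t pc = 0;        /* program counter */

    for (;;) {
        uint16_t instr = mem[pc++];            /* fetch */
        uint16_t op  = instr >> 12;            /* decode */
        uint16_t dst = (instr >> 8) & 0xF;
        uint16_t src = (instr >> 4) & 0xF;
        uint16_t imm = instr & 0xF;
        if (op == OP_HALT) break;
        switch (op) {                          /* execute and write back */
            case OP_LOADI: r[dst] = imm;              break;
            case OP_ADD:   r[dst] = r[dst] + r[src];  break;
            case OP_STORE: mem[imm] = r[dst];         break;
        }
    }
    printf("r1 = %u, mem[15] = %u\n", r[1], mem[15]);  /* prints 12 twice */
    return 0;
}

Each pass through the loop corresponds to one trip through the fetch-decode-execute cycle described above; real hardware overlaps these stages across several instructions at once via pipelining.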
This whole process is a highly sophisticated, choreographed procedure that thousands of engineers work on together to make it as efficient as possible, with the tightest margins of error and wasted time. This is where pipelining, parallelism, out-of-order execution, and other techniques become very important; these are the details that differentiate one company's designs and performance from another's. One of these technologies is executing two instructions at the same time using pipelining when they have no dependencies on each other, with an implementation called Simultaneous Multithreading (SMT), or Hyper-Threading. Intel and AMD support two-way SMT, whereas IBM supports up to eight-way.
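As a toy illustration of the kind of register-dependency check that hardware must make before letting two instructions proceed in the same cycle, here is a small, deliberately conservative sketch in C. It reuses the decoded fields from the simulator sketch above; real issue logic is far more elaborate than this.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Decoded form of the toy instruction used in the simulator sketch above. */
typedef struct {
    uint16_t op;
    uint16_t dst;   /* register written by the instruction */
    uint16_t src;   /* register read by the instruction */
} decoded_t;

/* Allow two instructions to issue together only if neither one writes a
   register that the other reads or writes (no RAW/WAR/WAW hazards). */
static bool can_issue_together(decoded_t a, decoded_t b) {
    if (a.dst == b.dst) return false;                   /* write-after-write */
    if (a.dst == b.src || b.dst == a.src) return false; /* read/write overlap */
    return true;
}

int main(void) {
    decoded_t i0 = { 2, 1, 2 };  /* r1 = r1 + r2 */
    decoded_t i1 = { 2, 3, 4 };  /* r3 = r3 + r4: independent, can pair with i0 */
    decoded_t i2 = { 2, 4, 1 };  /* r4 = r4 + r1: reads r1, cannot pair with i0 */
    printf("%d %d\n", can_issue_together(i0, i1), can_issue_together(i0, i2)); /* 1 0 */
    return 0;
}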
Figure 1: Microprocessor - High Level View
The architecture above follows the Von Neumann model, in which program instructions and data are stored in the same memory space. This means one must pay careful attention when creating a program to avoid accidentally overwriting instructions or important data. If you have ever worked with a high-level language such as C, you may have encountered a mitigation for this in the form of a segmentation fault error: it occurs when you try to access a memory location in RAM that holds an instruction or important data you are not allowed to overwrite. When you work with low-level tools such as VHDL or assembly, you have to keep track of which data you want to preserve and not overwrite yourself.
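For example, the following C program typically dies with exactly such a segmentation fault on a desktop OS, because the string literal is placed in a read-only section of the loaded program; formally the write is undefined behavior, so the exact outcome can vary.

#include <stdio.h>

int main(void) {
    char *msg = "hello";  /* points into a read-only (constant) segment of the program */
    msg[0] = 'H';         /* attempted overwrite: undefined behavior, usually SIGSEGV */
    puts(msg);            /* never reached on most systems */
    return 0;
}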
Zooming into the CPU, one will typically find the following components residing within. The CPU block in Figure 2 refers to a single core; nowadays you hear about quad-, hex-, and eight-core CPUs in standard non-server PCs.
Figure 2: CPU Internal Components Structure
This is as simple as a CPU can get, but modern CPUs tend to cram more components into one chip, looking something like this:
Figures 3&4: Exposure of CPU Die (colorful silicon wafer)
Where each core might have an architecture like the following:
Figure 5: A CPU's Single-Core Architecture
You are tasked with designing a CPU that has the following architecture:
CPU Architecture
Figure 6: Simplified CPU Architecture
Figure 6 shows the simplified internal architecture of the CPU, as well as the bus lines connecting the components together. Keep in mind that each bus/wire might have a different width associated with it (a varying number of bits can be transferred at a time). Figure 6 does not show/include the memory module/RAM used to feed the CPU with instructions and data.
