Question

In this exercise, we look at how software techniques can extract instruction-level parallelism (ILP) in a common vector loop. The following loop is the so-called DAXPY loop (double-precision aX plus Y) and is the central operation in Gaussian elimination. The following code implements the DAXPY operation, Y = aX + Y, for a vector length of 100. Initially, R1 is set to the base address of array X and R2 is set to the base address of Y. Assume the functional unit latencies as shown in the table below. Assume a one-cycle delayed branch that resolves in the ID stage. Assume that results are fully bypassed.

[The DAXPY assembly code and the functional-unit latency table were supplied as an image that was not transcribed.]

a. Assume a single-issue pipeline. Show how the loop would look both unscheduled by the compiler and after compiler scheduling for both floating-point operation and branch delays, including any stalls or idle clock cycles. What is the execution time (in cycles) per element of the result vector, Y, unscheduled and scheduled? How much faster must the clock be for processor hardware alone to match the performance improvement achieved by the scheduling compiler? (Neglect any possible effects of increased clock speed on memory system performance.)

b. Assume a single-issue pipeline. Unroll the loop as many times as necessary to schedule it without any stalls, collapsing the loop overhead instructions. How many times must the loop be unrolled? Show the instruction schedule. What is the execution time per element of the result?

c. Assume a VLIW processor with instructions that contain five operations, as shown in Figure 3.16. We will compare two degrees of loop unrolling. First, unroll the loop 6 times to extract ILP and schedule it without any stalls (i.e., completely empty issue cycles), collapsing the loop overhead instructions, and then repeat the process but unroll the loop 10 times. Ignore the branch delay slot. Show the two schedules. What is the execution time per element of the result vector for each schedule? What percent of the operation slots are used in each schedule? How much does the size of the code differ between the two schedules? What is the total register demand for the two schedules?
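The parts above have to be answered at the assembly level against the latency table in the untranscribed figure, but the operation itself and the loop-unrolling technique in part (b) are easy to see in a high-level form. Below is a minimal C sketch, assuming nothing beyond the definition Y = aX + Y and the vector length of 100 given in the problem statement; the unroll factor of 4 in the second function is chosen purely for illustration and is not the answer to part (b).

    #include <stdio.h>

    #define N 100  /* vector length stated in the exercise */

    /* Straightforward DAXPY: Y = a*X + Y. This is the high-level view of
     * the loop whose assembly the exercise asks you to schedule. */
    void daxpy(double a, const double *x, double *y, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* The same computation unrolled by 4 (an arbitrary factor, for
     * illustration only). Unrolling exposes independent multiply-add
     * chains that can be interleaved to hide floating-point latency, and
     * it amortizes the loop overhead (index update, compare, branch) over
     * several elements -- the "collapsing the loop overhead instructions"
     * referred to in part (b). */
    void daxpy_unrolled(double a, const double *x, double *y, int n)
    {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            y[i]     = a * x[i]     + y[i];
            y[i + 1] = a * x[i + 1] + y[i + 1];
            y[i + 2] = a * x[i + 2] + y[i + 2];
            y[i + 3] = a * x[i + 3] + y[i + 3];
        }
        for (; i < n; i++)          /* clean-up loop for leftover elements */
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }
        daxpy(2.0, x, y, N);
        printf("y[99] = %f\n", y[99]);  /* expect 2*99 + 1 = 199 */
        return 0;
    }

The point of the unrolled form on a single-issue pipeline is that the independent multiply-add chains can be interleaved by the compiler so that loads and multiplies for later elements fill the floating-point stall cycles of earlier ones, while a single index update, compare, and branch now serve four elements instead of one.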

