
Question

3. The switched interconnect increases the performance of a snooping cache-coherent multiprocessor by allowing multiple requests to be overlapped. Because the controllers and the networks are pipelined, there is a difference between an operation's latency (i.e., cycles to complete the operation) and overhead (i.e., cycles until the next operation can begin). For the multiprocessor illustrated in Figure 4.39, assume the following latencies and overheads:
* CPU read and write hits generate no stall cycles.
* A CPU read or write that generates a replacement event issues the corresponding GetShared or GetModified message before the PutModified message (e.g., using a writeback buffer).
* A cache controller event that sends a request message (e.g., GetShared) has latency L_send_req and blocks the controller from processing other events for O_send_req cycles.
* A cache controller event that reads the cache and sends a data message has latency L_send_data and overhead O_send_data cycles.
* A cache controller event that receives a data message and updates the cache has latency L_rcv_data and overhead O_rcv_data cycles.
* A memory controller has latency L_read_memory and overhead O_read_memory cycles to read memory and send a data message.
* A memory controller has latency L_write_memory and overhead O_write_memory cycles to write a data message to memory.
* In the absence of contention, a request message has network latency L_req_msg and overhead O_req_msg cycles.
* In the absence of contention, a data message has network latency L_data_msg and overhead O_data_msg cycles.
Consider an implementation with the performance characteristics summarized in Figure 4.41. For the following sequences of operations, using the cache contents from Figure 4.37 and the implementation parameters in Figure 4.41, how many stall cycles does each processor incur for each memory request? Similarly, for how many cycles are the different controllers occupied?
For simplicity, assume (1) each processor can have only one memory operation outstanding at a time; (2) if two nodes make requests in the same cycle, the one listed first "wins," and the later node must stall for the request message overhead; and (3) all requests map to the same memory controller.
a. P0: read 120
b. P0: write 120 <-- 80
c. P15: write 120 <-- 80
d. P1: read 110
e. P0: read 120
   P15: read 128
f. P0: read 100
   P1: write 110 <-- 78
g. P0: write 100 <-- 28
   P1: write 100 <-- 48
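To see how these parameters combine, the following sketch computes the stall cycles for the simplest case: an uncontended read miss serviced entirely by the memory controller. On the critical path the latencies add (request send, request message, memory read, data message, cache fill), while the overheads determine only how long each controller stays occupied. The numeric values below are placeholders for illustration, not the actual Figure 4.41 parameters.

```python
# Placeholder parameter values -- NOT the Figure 4.41 numbers.
params = {
    "L_send_req":    4,    # cache controller sends request
    "L_req_msg":     8,    # request message network latency
    "L_read_memory": 100,  # memory controller reads memory, sends data
    "L_data_msg":    30,   # data message network latency
    "L_rcv_data":    3,    # cache controller receives data, fills cache
}

def read_miss_stall(p):
    """Stall cycles for an uncontended read miss serviced by memory.

    Latencies on the critical path are serial, so they sum; the
    overheads (O_*) matter only for controller occupancy and for
    contention between overlapping requests.
    """
    return (p["L_send_req"] + p["L_req_msg"] + p["L_read_memory"]
            + p["L_data_msg"] + p["L_rcv_data"])

print(read_miss_stall(params))  # 145 with these placeholder values
```

For misses serviced by another cache (e.g., part c, where address 120 is held modified elsewhere), the memory-read term would be replaced by the owner's L_send_data plus a second data-message hop, following the same additive pattern.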
