Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency (1st Edition)

Author: Kunle Olukotun

4 ratings
Cover Type: Hardcover
Condition: Used

In Stock

Shipment time: Expected shipping within 2 days

Total Price: $0
List Price: $25.50 | Savings: $25.50 (100%)

Book details

ISBN-10: 159829122X
ISBN-13: 978-1598291223

Publisher: Morgan & Claypool Publishers

Get your hands on Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency, 1st Edition, for free. Browse SolutionInn to discover a wide range of fiction and non-fiction titles available at no cost, and enjoy free shipping of these complimentary books to your door.

Book Summary: Chip multiprocessors - also called multi-core microprocessors, or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that, with the immense number of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two.

CMPs avoid these problems by filling up a processor die with multiple, relatively simpler processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected, the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core.

While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems.
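The summary above is conceptual, but its central idea, spreading multiple threads of execution across a CMP's cores to raise performance, can be illustrated with a minimal sketch. The C++ example below is not from the book; it is an assumed, illustrative workload that divides a data-parallel sum across one software thread per hardware core (as reported by std::thread::hardware_concurrency), the kind of code whose throughput scales as more cores are added to the die.

// Illustrative sketch only (not from the book): a data-parallel sum spread
// across one software thread per hardware core of a CMP.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1u << 24;            // problem size, chosen arbitrarily
    std::vector<double> data(n, 1.0);

    // Ask the runtime how many hardware threads (cores) are available.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;                 // fall back if the count is unknown

    std::vector<double> partial(cores, 0.0);   // one partial sum per thread
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < cores; ++t) {
        workers.emplace_back([&, t] {
            // Each thread sums its own contiguous slice of the array.
            const std::size_t begin = static_cast<std::size_t>(t) * n / cores;
            const std::size_t end   = static_cast<std::size_t>(t + 1) * n / cores;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();          // wait for every slice to finish

    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << " using " << cores << " threads\n";
    return 0;
}

Compiled with any C++11 (or later) compiler and the threads library enabled (for example, g++ -std=c++11 -pthread), the worker threads run concurrently on separate cores, so this embarrassingly parallel loop illustrates how per-core threads can deliver the higher aggregate performance the summary describes.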