Authors: David B. Kirk, Wen-mei W. Hwu
Programming Massively Parallel Processors: A Hands-on Approach shows both student and professional alike the basic concepts of parallel programming and GPU architecture. Various techniques for constructing parallel programs are explored in detail. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth.
This best-selling guide to CUDA and GPU parallel programming has been revised with more parallel programming examples, commonly-used libraries such as Thrust, and explanations of the latest tools. With these improvements, the book retains its concise, intuitive, practical approach based on years of road-testing in the authors' own parallel computing courses.
Updates in this new edition include:
- New coverage of CUDA 5.0, improved performance, enhanced development tools, increased hardware support, and more
- Increased coverage of related technologies such as OpenCL, and new material on algorithm patterns, GPU clusters, host programming, and data parallelism
- Two new case studies (on MRI reconstruction and molecular visualization) explore the latest applications of CUDA and GPUs for scientific research and high-performance computing
- File Size: 6485 KB
- Print Length: 514 pages
- Publisher: Morgan Kaufmann; 2nd edition (December 31, 2012)
- Sold by: Amazon Digital Services, Inc.
- Language: English
- ASIN: B00AQEXYS0
- Text-to-Speech: Enabled
- Lending: Not Enabled
- Amazon Best Sellers Rank: #191,018 Paid in Kindle Store
- #17 in Books > Computers & Technology > Hardware > Microprocessors & System Design > Microprocessor Design
- #52 in Books > Computers & Technology > Hardware > Parallel Processing Computers
This second edition of PMPP extends the table of contents of the first, almost doubling the number of pages (the 2nd ed. runs ~500 pages; I have the paper version).
The book can be divided roughly into 4 parts. The first, and most important, deals with parallel programming using Nvidia's CUDA technology; this covers roughly the first 10 chapters plus Ch. 20. The second slice presents a couple of important examples (MRI image reconstruction, and molecular simulation and visualization, chapters 11 and 12). The 3rd important block (chapters 14 up to 19) deals with other parallel programming technologies and CUDA extensions: OpenCL, OpenACC, CUDA Fortran, Thrust, C++ AMP, MPI. Finally, spread over the book, there are several "outlier", but nevertheless important, chapters: Ch. 7 discusses floating-point issues and their impact on the accuracy of calculations; Ch. 13, "PP and Computational Thinking", discusses broadly how to think when converting sequential algorithms to parallel ones; and Ch. 21 discusses the future of PP (through CUDA goggles :-).
I've read about half of the book (I attended Coursera's MOOC, "Heterogeneous Parallel Computing", taught by one of the authors, Prof. W. Hwu, and waited until the 2nd edition was out to buy it), and carefully browsed the other half. Here are my...
Comments
----------
(+++) Pluses:
# There are just a few typos here and there, but they are easy to spot (the funniest is in line 5 of Ch. 1 (!), where Giga corresponds to 10^12 and Tera to 10^15 according to the authors; of course, Giga is 10^9 and Tera is 10^12. This bug is browseable with Amazon's "look inside" feature...).
"Programming Massively Parallel Processors (second edition)" by Kirk and Hwu is a very good second book for those interested in getting started with CUDA. A first must-read is "CUDA by Example: An Introduction to General-Purpose GPU Programming" by Jason Sanders. After reading all of Sanders work, feel free to jump right to chapters 8 and 9 of this Kirk and Hwu publication.
In chapter 8, the authors do a nice job of explaining how to write an efficient convolution algorithm that is useful for smoothing and sharpening data sets. Their explanation of how shared memory can play a key role in improving performance is well written. They also handle the issue of "halo" data very well. Benchmark data would have served as a nice conclusion to this chapter.
In chapter 9, the authors provide the best description of the Prefix Sum algorithm I have seen to date. They describe the problem being solved in terms that I can easily relate to - food. They write, "We can illustrate the applications of inclusive scan operations using an example of cutting sausage for a group of people." They first describe a simple algorithm, then a "work-efficient" algorithm, and then an extension for larger data sets. What puzzles me here is that the authors seem fixated on minimizing the total number of operations (across all threads) rather than the number of operations per thread. They do not mention that the "work-efficient" algorithm requires almost twice as many operations on the longest-path thread as the simple algorithm does. Actual performance benchmarks showing a net throughput gain would be needed to convince a skeptical reader.
Now before moving forward, let's back up a bit. Even though we have already read CUDA by Example, it is worth reading chapter 6...