Chapter 6
Parallel Processors from Client to Cloud


Introduction
Goal: connecting multiple computers
to get higher performance
Multiprocessors
Scalability, availability, power efficiency
Task-level (process-level) parallelism
High throughput for independent jobs
Parallel processing program
Single program run on multiple processors
Multicore microprocessors
Chips with multiple processors (cores)

§6.1 Introduction

Hardware and Software
Hardware
Serial: e.g., Pentium 4
Parallel: e.g., quad-core Xeon e5345
Software
Sequential: e.g., matrix multiplication
Concurrent: e.g., operating system
Sequential/concurrent software can run on serial/parallel hardware
Challenge: making effective use of parallel hardware


What We’ve Already Covered
§2.11: Parallelism and Instructions
Synchronization
§3.6: Parallelism and Computer Arithmetic
Subword Parallelism
§4.10: Parallelism and Advanced Instruction-Level Parallelism
§5.10: Parallelism and Memory Hierarchies
Cache Coherence


Parallel Programming
Parallel software is the problem
Need to get significant performance improvement
Otherwise, just use a faster uniprocessor, since it’s easier!
Difficulties
Partitioning
Coordination
Communications overhead

§6.2 The Difficulty of Creating Parallel Processing Programs

Amdahl’s Law
Sequential part can limit speedup
Example: 100 processors, 90× speedup?
Tnew = Tparallelizable/100 + Tsequential
Speedup = Told/Tnew = 1 / ((1 − Fparallelizable) + Fparallelizable/100) = 90
Solving: Fparallelizable = 0.999
Need sequential part to be 0.1% of original time
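A minimal C sketch of this calculation (function and variable names are illustrative, not from the slides):

#include <stdio.h>

/* Amdahl's Law: speedup = 1 / ((1 - f) + f/p),
   where f is the parallelizable fraction and p the processor count. */
double amdahl_speedup(double f, int p) {
    return 1.0 / ((1.0 - f) + f / p);
}

int main(void) {
    /* f = 0.999 with 100 processors gives roughly 91x;
       solving for exactly 90x gives f of about 0.9989. */
    printf("speedup = %.1f\n", amdahl_speedup(0.999, 100));
    return 0;
}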


Scaling Example
Workload: sum of 10 scalars, and 10 × 10 matrix sum
Speed up from 10 to 100 processors
Single processor: Time = (10 + 100) × tadd
10 processors
Time = 10 × tadd + 100/10 × tadd = 20 × tadd
Speedup = 110/20 = 5.5 (55% of potential)
100 processors
Time = 10 × tadd + 100/100 × tadd = 11 × tadd
Speedup = 110/11 = 10 (10% of potential)
Assumes load can be balanced across processors
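A small C sketch of the arithmetic above (names are illustrative); changing matrix_adds to 10000 reproduces the 100 × 100 case on the next slide:

#include <stdio.h>

/* Time for the example workload, in units of t_add: the 10 scalar adds stay
   sequential, the matrix-sum adds are split across p processors. */
double time_in_tadd(int matrix_adds, int p) {
    return 10.0 + (double)matrix_adds / p;
}

int main(void) {
    double t1 = time_in_tadd(100, 1);                                        /* 110 */
    printf("10 processors:  speedup = %.1f\n", t1 / time_in_tadd(100, 10));  /* 5.5  */
    printf("100 processors: speedup = %.1f\n", t1 / time_in_tadd(100, 100)); /* 10.0 */
    return 0;
}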


Scaling Example (cont)
What if matrix size is 100 × 100?
Single processor: Time = (10 + 10000) × tadd
10 processors
Time = 10 × tadd + 10000/10 × tadd = 1010 × tadd
Speedup = 10010/1010 = 9.9 (99% of potential)
100 processors
Time = 10 × tadd + 10000/100 × tadd = 110 × tadd
Speedup = 10010/110 = 91 (91% of potential)
Assuming load balanced


Strong vs Weak Scaling
Strong scaling: problem size fixed
As in example
Weak scaling: problem size proportional to number of processors
10 processors, 10 × 10 matrix
Time = 20 × tadd
100 processors, 32 × 32 matrix
Time = 10 × tadd + 1000/100 × tadd = 20 × tadd
Constant performance in this example


Instruction and Data Streams
An alternate classification

SPMD: Single Program Multiple Data
A parallel program on a MIMD computer
Conditional code for different processors
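To make the SPMD point concrete, a hedged C sketch in the style of the later sum-reduction examples (get_processor_id() is a hypothetical runtime helper; Pn is the processor ID used on the following slides):

/* SPMD: every processor runs this same program; behavior is
   conditioned on the processor ID. */
int get_processor_id(void);   /* hypothetical: provided by the runtime */

void spmd_partial_sum(double *A, double *partial, int n_per_proc) {
    int Pn = get_processor_id();
    double sum = 0.0;
    for (int i = n_per_proc * Pn; i < n_per_proc * (Pn + 1); i++)
        sum += A[i];
    partial[Pn] = sum;   /* e.g., only processor 0 might later combine results */
}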

§6.3 SISD, MIMD, SIMD, SPMD, and Vector

                                   Data Streams
                                   Single                     Multiple
Instruction Streams   Single       SISD: Intel Pentium 4      SIMD: SSE instructions of x86
                      Multiple     MISD: No examples today    MIMD: Intel Xeon e5345


Example: DAXPY (Y = a × X + Y)
Conventional LEGv8 code:

LDURD D0,[X28,a]        //load scalar a
ADDI  X0,X19,#512       //upper bound of what to load
loop: LDURD D2,[X19,#0] //load x(i)
FMULD D2,D2,D0          //a × x(i)
LDURD D4,[X20,#0]       //load y(i)
FADDD D4,D4,D2          //a × x(i) + y(i)
STURD D4,[X20,#0]       //store into y(i)
ADDI  X19,X19,#8        //increment index to x
ADDI  X20,X20,#8        //increment index to y
CMP   X0,X19            //check against bound
B.NE  loop              //loop if not done
Vector LEGv8 code:
LDURD D0,[X28,a] //load scalar a
LDURDV V1,[X19,#0] //load vector x
FMULDVS V2,V1,D0 //vector-scalar multiply
LDURDV V3,[X20,#0] //load vector y
FADDDV V4,V2,V3 //add y to product
STURDV V4,[X20,#0] //store the result
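For reference, both sequences implement the same loop; in C it is simply (64 elements, matching the #512-byte bound, i.e., 64 × 8 bytes):

/* DAXPY: Y = a*X + Y over 64 double-precision elements. */
void daxpy(double a, const double *x, double *y) {
    for (int i = 0; i < 64; i++)
        y[i] = a * x[i] + y[i];
}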


Vector Processors
Highly pipelined function units
Stream data from/to vector registers to units
Data collected from memory into registers
Results stored from registers to memory
Example: Vector extension to LEGv8
32 × 64-element registers (64-bit elements)
Vector instructions
LDURDV, STURDV: load/store vector
FADDDV: add vectors of double
FADDDVS: add scalar to each element of vector of double
Significantly reduces instruction-fetch bandwidth


Vector vs. Scalar
Vector architectures and compilers
Simplify data-parallel programming
Explicit statement of absence of loop-carried dependences
Reduced checking in hardware
Regular access patterns benefit from interleaved and burst memory
Avoid control hazards by avoiding loops
More general than ad-hoc media extensions (such as MMX, SSE)
Better match with compiler technology


SIMD
Operate elementwise on vectors of data
E.g., MMX and SSE instructions in x86
Multiple data elements in 128-bit wide registers
All processors execute the same instruction at the same time
Each with different data address, etc.
Simplifies synchronization
Reduced instruction control hardware
Works best for highly data-parallel applications
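As a concrete illustration of the x86 SSE case, a hedged sketch using the standard intrinsics (four single-precision floats per 128-bit register; assumes n is a multiple of 4 and 16-byte-aligned pointers):

#include <xmmintrin.h>   /* SSE intrinsics */

/* Elementwise add of two float arrays, four lanes at a time. */
void add_sse(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);           /* load 4 floats */
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(c + i, _mm_add_ps(va, vb));  /* one instruction, 4 adds */
    }
}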


Vector vs. Multimedia Extensions
Vector instructions have a variable vector width; multimedia extensions have a fixed width
Vector instructions support strided access; multimedia extensions do not
Vector units can be a combination of pipelined and arrayed functional units


Multithreading
Performing multiple threads of execution in parallel
Replicate registers, PC, etc.
Fast switching between threads
Fine-grain multithreading
Switch threads after each cycle
Interleave instruction execution
If one thread stalls, others are executed
Coarse-grain multithreading
Only switch on long stall (e.g., L2-cache miss)
Simplifies hardware, but doesn’t hide short stalls (e.g., data hazards)

§6.4 Hardware Multithreading

Simultaneous Multithreading
In multiple-issue dynamically scheduled processor
Schedule instructions from multiple threads
Instructions from independent threads execute when function units are available
Within threads, dependencies handled by scheduling and register renaming
Example: Intel Pentium-4 HT
Two threads: duplicated registers, shared function units and caches


Multithreading Example

Future of Multithreading
Will it survive? In what form?
Power considerations → simplified microarchitectures
Simpler forms of multithreading
Tolerating cache-miss latency
Thread switch may be most effective
Multiple simple cores might share resources more effectively


Shared Memory
SMP: shared memory multiprocessor
Hardware provides single physical address space for all processors
Synchronize shared variables using locks
Memory access time
UMA (uniform) vs. NUMA (nonuniform)
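A minimal sketch of the lock-based synchronization point, using POSIX threads (illustrative only, not from the slides):

#include <pthread.h>

/* Shared variable and the lock that protects it. */
static double shared_sum = 0.0;
static pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread adds its private partial result into the shared total. */
void add_partial(double partial) {
    pthread_mutex_lock(&sum_lock);    /* acquire the lock */
    shared_sum += partial;            /* critical section */
    pthread_mutex_unlock(&sum_lock);  /* release the lock */
}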

§6.5 Multicore and Other Shared Memory Multiprocessors


Example: Sum Reduction
Sum 100,000 numbers on 100 processor UMA
Each processor has ID: 0 ≤ Pn ≤ 99
Partition 1000 numbers per processor
Initial summation on each processor

sum[Pn] = 0;
for (i = 1000*Pn; i < 1000*(Pn+1); i = i + 1)
    sum[Pn] = sum[Pn] + A[i];

Now need to add these partial sums
Reduction: divide and conquer
Half the processors add pairs, then quarter, …
Need to synchronize between reduction steps

Example: Sum Reduction
half = 100;
repeat
    synch();
    if (half%2 != 0 && Pn == 0)
        sum[0] = sum[0] + sum[half-1];
        /* Conditional sum needed when half is odd;
           Processor0 gets missing element */
    half = half/2;   /* dividing line on who sums */
    if (Pn < half)
        sum[Pn] = sum[Pn] + sum[Pn+half];
until (half == 1);

History of GPUs
Early video cards
Frame buffer memory with address generation for video output
3D graphics processing
Originally high-end computers (e.g., SGI)
Moore’s Law → lower cost, higher density
3D graphics cards for PCs and game consoles
Graphics Processing Units
Processors oriented to 3D graphics tasks
Vertex/pixel processing, shading, texture mapping, rasterization
§6.6 Introduction to Graphics Processing Units

Graphics in the System

GPU Architectures
Processing is highly data-parallel
GPUs are highly multithreaded
Use thread switching to hide memory latency
Less reliance on multi-level caches
Graphics memory is wide and high-bandwidth
Trend toward general purpose GPUs
Heterogeneous CPU/GPU systems
CPU for sequential code, GPU for parallel code
Programming languages/APIs
DirectX, OpenGL
C for Graphics (Cg), High Level Shader Language (HLSL)
Compute Unified Device Architecture (CUDA)

Example: NVIDIA Tesla
Streaming multiprocessor
8 × Streaming processors

Example: NVIDIA Tesla
Streaming Processors
Single-precision FP and integer units
Each SP is fine-grained multithreaded
Warp: group of 32 threads
Executed in parallel, SIMD style
8 SPs × 4 clock cycles
Hardware contexts for 24 warps
Registers, PCs, …
Classifying GPUs
Don’t fit nicely into SIMD/MIMD model
Conditional execution in a thread allows an illusion of MIMD
But with performance degradation
Need to write general purpose code with care

                                 Static: Discovered at Compile Time   Dynamic: Discovered at Runtime
Instruction-Level Parallelism    VLIW                                 Superscalar
Data-Level Parallelism           SIMD or Vector                       Tesla Multiprocessor

GPU Memory Structures

Putting GPUs into Perspective
Feature                                                    Multicore with SIMD   GPU
SIMD processors                                            4 to 8                8 to 16
SIMD lanes/processor                                       2 to 4                8 to 16
Multithreading hardware support for SIMD threads           2 to 4                16 to 32
Typical ratio of single- to double-precision performance   2:1                   2:1
Largest cache size                                         8 MB                  0.75 MB
Size of memory address                                     64-bit                64-bit
Size of main memory                                        8 GB to 256 GB        4 GB to 6 GB
Memory protection at level of page                         Yes                   Yes
Demand paging                                              Yes                   No
Integrated scalar processor/SIMD processor                 Yes                   No
Cache coherent                                             Yes                   No

Guide to GPU Terms

Message Passing
Each processor has private physical address space
Hardware sends/receives messages between processors
§6.7 Clusters, WSC, and Other Message-Passing MPs

Loosely Coupled Clusters
Network of independent computers
Each has private memory and OS
Connected using I/O system
E.g., Ethernet/switch, Internet
Suitable for applications with independent tasks
Web servers, databases, simulations, …
High availability, scalable, affordable
Problems
Administration cost (prefer virtual machines)
Low interconnect bandwidth
cf. processor/memory bandwidth on an SMP

Sum Reduction (Again)
Sum 100,000 numbers on 100 processors
First distribute 1000 numbers to each
Then do partial sums
sum = 0;
for (i = 0; i < 1000; i = i + 1)
    sum = sum + AN[i];
Reduction
Half the processors send, other half receive and add
Then a quarter send, a quarter receive and add, …

Sum Reduction (Again)
Given send() and receive() operations
limit = 100; half = 100;  /* 100 processors */
repeat
    half = (half+1)/2;    /* send vs. receive dividing line */
    if (Pn >= half && Pn < limit)
        send(Pn - half, sum);
    if (Pn < (limit/2))
        sum = sum + receive();
    limit = half;         /* upper limit of senders */
until (half == 1);        /* exit with final sum */
Send/receive also provide synchronization
Assumes send/receive take similar time to addition
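A hedged sketch of the same tree reduction using MPI point-to-point calls (assumes MPI is initialized elsewhere and each rank already holds its partial sum; not part of the original slides):

#include <mpi.h>

double reduce_sum(double sum) {
    int Pn, limit, half;
    MPI_Comm_rank(MPI_COMM_WORLD, &Pn);        /* processor ID */
    MPI_Comm_size(MPI_COMM_WORLD, &limit);     /* e.g., 100 processors */
    half = limit;
    do {
        half = (half + 1) / 2;                 /* send vs. receive dividing line */
        if (Pn >= half && Pn < limit) {
            MPI_Send(&sum, 1, MPI_DOUBLE, Pn - half, 0, MPI_COMM_WORLD);
        } else if (Pn < limit - half) {
            double other;
            MPI_Recv(&other, 1, MPI_DOUBLE, Pn + half, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            sum += other;                      /* receive and add */
        }
        limit = half;                          /* upper limit of senders */
    } while (half > 1);
    return sum;                                /* final total ends up on rank 0 */
}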
Grid Computing
Separate computers interconnected by long-haul networks
E.g., Internet connections
Work units farmed out, results sent back
Can make use of idle time on PCs
E.g., SETI@home, World Community Grid

Interconnection Networks
Network topologies
Arrangements of processors, switches, and links
§6.8 Introduction to Multiprocessor Network Topologies
Bus
Ring
2D Mesh
N-cube (N = 3)
Fully connected

Multistage Networks

Network Characteristics
Performance
Latency per message (unloaded network)
Throughput
Link bandwidth
Total network bandwidth
Bisection bandwidth
Congestion delays (depending on traffic)
Cost
Power
Routability in silicon

Parallel Benchmarks
Linpack: matrix linear algebra
SPECrate: parallel run of SPEC CPU programs
Job-level parallelism
SPLASH: Stanford Parallel Applications for Shared Memory
Mix of kernels and applications, strong scaling
NAS (NASA Advanced Supercomputing) suite
Computational fluid dynamics kernels
PARSEC (Princeton Application Repository for Shared Memory Computers) suite
Multithreaded applications using Pthreads and OpenMP
§6.10 Multiprocessor Benchmarks and Performance Models

Code or Applications?
Traditional benchmarks
Fixed code and data sets
Parallel programming is evolving
Should algorithms, programming languages, and tools be part of the system?
Compare systems, provided they implement a given application
E.g., Linpack, Berkeley Design Patterns
Would foster innovation in approaches to parallelism

Modeling Performance
Assume performance metric of interest is achievable GFLOPs/sec
Measured using computational kernels from Berkeley Design Patterns
Arithmetic intensity of a kernel
FLOPs per byte of memory accessed
For a given computer, determine
Peak GFLOPS (from data sheet)
Peak memory bytes/sec (using Stream benchmark)

Roofline Diagram
Attainable GFLOPs/sec = Min ( Peak Memory BW × Arithmetic Intensity, Peak FP Performance )

Comparing Systems
Example: Opteron X2 vs. Opteron X4
2-core vs. 4-core, 2× FP performance/core, 2.2 GHz vs. 2.3 GHz
Same memory system
To get higher performance on X4 than X2
Need high arithmetic intensity
Or working set must fit in X4’s 2 MB L3 cache

Optimizing Performance
Optimize FP performance
Balance adds & multiplies
Improve superscalar ILP and use of SIMD instructions
Optimize memory usage
Software prefetch
Avoid load stalls
Memory affinity
Avoid non-local data accesses

Optimizing Performance
Choice of optimization depends on arithmetic intensity of code
Arithmetic intensity is not always fixed
May scale with problem size
Caching reduces memory accesses
Increases arithmetic intensity
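A small C sketch of the roofline formula above (the numbers in main are illustrative, not taken from the slides):

#include <stdio.h>

/* Attainable GFLOP/s is capped either by memory bandwidth x arithmetic
   intensity or by peak floating-point throughput, whichever is lower. */
double attainable_gflops(double peak_gflops, double peak_bw_gb_per_s,
                         double arithmetic_intensity /* FLOPs per byte */) {
    double bw_bound = peak_bw_gb_per_s * arithmetic_intensity;
    return bw_bound < peak_gflops ? bw_bound : peak_gflops;
}

int main(void) {
    printf("%.1f GFLOP/s\n", attainable_gflops(85.0, 25.0, 0.5)); /* memory-bound: 12.5 */
    printf("%.1f GFLOP/s\n", attainable_gflops(85.0, 25.0, 8.0)); /* compute-bound: 85.0 */
    return 0;
}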
i7-960 vs. NVIDIA Tesla 280/480
§6.11 Real Stuff: Benchmarking and Rooflines
i7 vs. Tesla

Rooflines

Benchmarks

Performance Summary
GPU (480) has 4.4× the memory bandwidth
Benefits memory-bound kernels
GPU has 13.1× the single-precision throughput, 2.5× the double-precision throughput
Benefits FP compute-bound kernels
CPU cache prevents some kernels from becoming memory bound when they otherwise would on GPU
GPUs offer scatter-gather, which assists with kernels with strided data
Lack of synchronization and memory consistency support on GPU limits performance for some kernels

Multithreading DGEMM
§6.12 Going Faster: Multiple Processors and Matrix Multiply
Use OpenMP:
void dgemm (int n, double* A, double* B, double* C)
{
    #pragma omp parallel for
    for ( int sj = 0; sj < n; sj += BLOCKSIZE )
        for ( int si = 0; si < n; si += BLOCKSIZE )
            for ( int sk = 0; sk < n; sk += BLOCKSIZE )
                do_block(n, si, sj, sk, A, B, C);
}
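The do_block call above is the blocked (cache-friendly) inner multiply developed in the book’s earlier DGEMM versions; a sketch consistent with that code (column-major indexing, BLOCKSIZE as before) might be:

#define BLOCKSIZE 32

/* Multiply one BLOCKSIZE x BLOCKSIZE block:
   C[si..][sj..] += A[si..][sk..] * B[sk..][sj..], matrices stored column-major. */
static void do_block(int n, int si, int sj, int sk,
                     double *A, double *B, double *C) {
    for (int i = si; i < si + BLOCKSIZE; ++i)
        for (int j = sj; j < sj + BLOCKSIZE; ++j) {
            double cij = C[i + j * n];
            for (int k = sk; k < sk + BLOCKSIZE; ++k)
                cij += A[i + k * n] * B[k + j * n];
            C[i + j * n] = cij;
        }
}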
Multithreaded DGEMM

Fallacies
Amdahl’s Law doesn’t apply to parallel computers
Since we can achieve linear speedup
But only on applications with weak scaling
Peak performance tracks observed performance
Marketers like this approach!
But compare Xeon with others in example
Need to be aware of bottlenecks
§6.13 Fallacies and Pitfalls

Pitfalls
Not developing the software to take account of a multiprocessor architecture
Example: using a single lock for a shared composite resource
Serializes accesses, even if they could be done in parallel
Use finer-granularity locking

Concluding Remarks
Goal: higher performance by using multiple processors
Difficulties
Developing parallel software
Devising appropriate architectures
SaaS importance is growing and clusters are a good match
Performance per dollar and performance per Joule drive both mobile and WSC
§6.14 Concluding Remarks

Concluding Remarks (cont.)
SIMD and vector operations match multimedia applications and are easy to program