
Lecture 11: Multigrid Methods
Kirk M. Soodhalter
Trinity College Dublin The University of Dublin Ireland http://math.soodhalter.com
Soodhalter Parallel Numerics 5636 Multi-grid

Class Business
• Any questions?

Previously in Parallel Numerics…
• Afterthoughts from last lecture
• Jacobi vs. Gauss-Seidel: parallelizability vs. information communication
• Compromise between parallel computations and communication
• Synchronous versus asynchronous methods
• Graph colorings
• Red-black Gauss-Seidel
• Polynomial Preconditioning

Overview for today
• Details on stagnation behavior of stationary methods (Jacobi, Gauss-Seidel)
• Stationary methods as error “smoothers”
• Representation of the error on different grids
• Grid transfer by restriction and projection
• The multigrid method
• Multigrid cycles
• Multigrid as a solver and as a preconditioner
• The multigrid operator

Recall Jacobi/Gauss-Seidel/etc slow convergence
• For, e.g., the discretized 2d-Laplacian, convergence is guaranteed, but only in the limit
• In practice, near stagnation occurs

An illustrative experiment
• We use Gauss-Seidel to solve Ax = 0
• True solution: x = 0
• Thus the error ek = xk
• We can monitor the effect of Gauss-Seidel on the error
• Let x0 = s + noise, where s is the vectorization of sin x + sin y evaluated on a discretization of [0, π] × [0, π]

Gauss-Seidel as a smoother
Figure: s, vectorized sin x + sin y on [0, π] × [0, π]
Figure: x0 = s + noise
Figure: Gauss-Seidel iterations 1, 2, 3, 4, 10, and 100

Gauss-Seidel as a smoother II
What did this experiment tell us?
Gauss-Seidel was able to reduce (within 4 iterations) the quickly oscillating components of the error. The slowly oscillating smooth error (i.e., s) is reduced quite slowly.
Conjecture
Near stagnation occurs when oscillatory components of the error have been eliminated.
Non-rigorous explanation
The stationary iterations we studied make spatially local corrections by eliminating component(s) of the residual. For discretized differential operators, slowly oscillating error tends to produce smaller residual components than highly oscillating error.
=⇒ Local residual elimination more effective on highly oscillatory error

Error component elimination in 1D
• Let A ∈ R^{50×50} be the 1D central difference (2nd derivative) operator
• Let vk be the discretization of sin kπx on [0, π] with 50 points
• x0 = v1 + v5 + v10 + v20
Figure: v1
Figure: v1, v2
Figure: v1, v2, v3
Figure: v1, v2, v3, v4
Figure: v1, v2, v3, v4, x0
Figure: Gauss-Seidel iterations 1, 2, 3, 9, and 99
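The 1D experiment can be sketched as follows. Since the discrete sine modes are the eigenvectors of tridiag(-1, 2, -1), we can track each component of the error directly; the code uses sin(kx) on [0, π], a runnable reading of the setup above:

```python
import math

# Sketch of the 1D experiment: Gauss-Seidel on the 50-point
# second-difference operator with b = 0 and x0 = v1 + v5 + v10 + v20.
n = 50
h = math.pi / (n + 1)

def v(k):
    """Discretization of sin(k x) on the interior points of [0, pi]."""
    return [math.sin(k * (i + 1) * h) for i in range(n)]

# x0 = v1 + v5 + v10 + v20; since b = 0, the iterate is the error
x = [a + b + c + d for a, b, c, d in zip(v(1), v(5), v(10), v(20))]

def gs_sweep(x):
    """One Gauss-Seidel sweep for A x = 0 with A = tridiag(-1, 2, -1)."""
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        x[i] = 0.5 * (left + right)

def coeff(x, k):
    """Component of x along v_k (the v_k are orthogonal on the grid)."""
    vk = v(k)
    return sum(a * b for a, b in zip(x, vk)) / sum(b * b for b in vk)

for _ in range(9):
    gs_sweep(x)
# after 9 sweeps the oscillatory v20 component is essentially gone,
# while the smooth v1 component has barely been reduced
```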

Smooth components less smooth on coarser grids
Observation
A discretization of a function which is smooth on a grid with n points will be less smooth on a grid with fewer points (e.g., n/2).
Figure: Discretization of sin πx on a 50-point grid
Figure: Discretization of sin πx on a 25-point grid
Figure: Discretization of sin πx on a 10-point grid
Figure: Discretization of sin πx on a 5-point grid

Multigrid – the basic idea
Use the smoothing ability of stationary iterations
We can restrict the problem onto a sequence of coarser grids and eliminate components of the error which are oscillatory on specific grids. We then prolongate the result back up to the refined grid. We can perform multigrid cycles of restriction and prolongation until convergence.

Making Multigrid a method
To turn this idea into a method, we need various tools
• Restriction of the whole problem to coarser grids
• Prolongation to transfer coarse grid results back to finer grids
Two strategies
• Nested iterations
→ Restrict problem to coarsest grid
→ Apply smoothing iterations
→ Prolong approximation to get x0 for the next grid
→ Repeat on each finer grid
→ Apply smoothing on the finest grid
• Coarse Grid Corrections
→ Apply smoother iterations on fine grid, calculate residual
→ Restrict residual to coarser grid, apply smoother iterations
→ Keep repeating on each coarser grid


Prolongation (Intergrid Transfer)
Let $\Omega_h, \Omega_{2h}, \Omega_{4h}, \dots, \Omega_{2^k h}$ be a series of increasingly coarse grids
Definition (Interpolation Operator)
A matrix $I_{2h}^{h}$ that maps a vector from grid $\Omega_{2^i h}$ to grid $\Omega_{2^{i-1} h}$ via interpolation is called an interpolation operator.
Example (1D grid with 3 points to a grid with 7 points)
A grid with 3 points can be interpolated to a grid with 7 points via
$$
\frac{1}{2}
\begin{pmatrix}
1 & & \\
2 & & \\
1 & 1 & \\
 & 2 & \\
 & 1 & 1 \\
 & & 2 \\
 & & 1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix}
\tfrac{1}{2} x_1 \\ x_1 \\ \tfrac{1}{2}(x_1 + x_2) \\ x_2 \\ \tfrac{1}{2}(x_2 + x_3) \\ x_3 \\ \tfrac{1}{2} x_3
\end{pmatrix}
$$
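A minimal sketch of this interpolation operator (the zero boundary treatment is assumed to match the matrix above):

```python
# Linear interpolation from nc coarse points to nf = 2*nc + 1 fine
# points, as in the 3 -> 7 example (zero Dirichlet boundaries assumed).
def interpolation_matrix(nc):
    """Return the nf x nc interpolation matrix as nested lists."""
    nf = 2 * nc + 1
    I = [[0.0] * nc for _ in range(nf)]
    for j in range(nc):              # coarse point j sits at fine index 2j+1
        I[2 * j][j] = 0.5            # midpoint to the left
        I[2 * j + 1][j] = 1.0        # coincident fine point
        I[2 * j + 2][j] = 0.5        # midpoint to the right
    return I

def apply(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

fine = apply(interpolation_matrix(3), [1.0, 2.0, 3.0])
# fine = [x1/2, x1, (x1+x2)/2, x2, (x2+x3)/2, x3, x3/2]
#      = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 1.5]
```

Applying it to (1, 2, 3) reproduces the right-hand side of the example above.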

Restriction (Intergrid Transfer)
Let $\Omega_h, \Omega_{2h}, \Omega_{4h}, \dots, \Omega_{2^k h}$ be a series of increasingly coarse grids
Definition (Restriction Operator)
A matrix $I_{h}^{2h}$ that maps a vector from grid $\Omega_{2^{i-1} h}$ to grid $\Omega_{2^i h}$ via restriction is called a restriction operator.
• Injection: coarse grid gets corresponding nodal values from fine grid
• Full weighting: centrally weighted sum with neighboring points
Example (1D grid with 7 points to a grid with 3 points with full weighting)
$$
\frac{1}{4}
\begin{pmatrix}
1 & 2 & 1 & & & & \\
 & & 1 & 2 & 1 & & \\
 & & & & 1 & 2 & 1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}
=
\frac{1}{4}
\begin{pmatrix}
x_1 + 2x_2 + x_3 \\ x_3 + 2x_4 + x_5 \\ x_5 + 2x_6 + x_7
\end{pmatrix}
$$

Some comments on restriction and prolongation
• Interpolation using $I_{2h}^{h}$ is most accurate for "smooth" vectors
=⇒ Best to apply after smoothing iterations
• Restriction by injection is simpler and sometimes the better choice, but we focus on full weighting
→ Full weighting can have certain optimality properties
→ $I_{h}^{2h} = c \, (I_{2h}^{h})^T$
• What is the coarse grid equation? Let $A^{2^i h}$ be the operator on grid $\Omega_{2^i h}$. Then on $\Omega_{2^{i+1} h}$, we have
$$ A^{2^{i+1} h} = I_{2^i h}^{2^{i+1} h} \cdot A^{2^i h} \cdot I_{2^{i+1} h}^{2^i h} $$
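Both claims can be checked numerically for the 1D example operators. The sketch below (sizes are illustrative) verifies that full weighting is half the transpose of linear interpolation, and that the Galerkin product R·A·P reproduces the expected coarse stencil:

```python
# Check the two claims above for the 1D operators (nc = 3, nf = 7 assumed).
def interpolation_matrix(nc):
    nf = 2 * nc + 1
    P = [[0.0] * nc for _ in range(nf)]
    for j in range(nc):
        P[2 * j][j], P[2 * j + 1][j], P[2 * j + 2][j] = 0.5, 1.0, 0.5
    return P

def restriction_matrix(nc):
    nf = 2 * nc + 1
    R = [[0.0] * nf for _ in range(nc)]
    for i in range(nc):
        R[i][2 * i], R[i][2 * i + 1], R[i][2 * i + 2] = 0.25, 0.5, 0.25
    return R

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

nc, nf = 3, 7
P = interpolation_matrix(nc)
R = restriction_matrix(nc)

# full weighting is a scaled transpose of interpolation: R = (1/2) P^T
half_Pt = [[0.5 * P[k][i] for k in range(nf)] for i in range(nc)]
assert R == half_Pt

# Galerkin coarse-grid operator: R A P reproduces the coarse stencil,
# here (1/4) * tridiag(-1, 2, -1)
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(nf)] for i in range(nf)]
Ac = matmul(matmul(R, A), P)
```

The factor 1/4 is exactly the h² → (2h)² rescaling that the coarse 1D second-difference stencil requires.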

A two-grid correction scheme
Consider this procedure on $\Omega_h$ and $\Omega_{2h}$
Algorithm: Two-grid correction
1. Smooth until stagnation on $A^{(h)} x^{(h)} = b^{(h)}$ with $x^{(h)}_0 = 0$ ($k$ iterations)
2. $r^{(h)}_k \leftarrow b^{(h)} - A^{(h)} x^{(h)}_k$
3. Restrict $r^{(2h)} = I_h^{2h} r^{(h)}_k$
4. Solve $A^{(2h)} e^{(2h)} = r^{(2h)}$
5. Interpolate $e^{(h)} = I_{2h}^{h} e^{(2h)}$ and correct $x^{(h)} \leftarrow x^{(h)}_k + e^{(h)}$
6. Apply smoothing until stagnation to $A^{(h)} x^{(h)} = b^{(h)}$, starting from the corrected $x^{(h)}$

A two-grid correction scheme
Solving the coarse grid error problem
How do we solve the coarse grid error equation?
• Recursively apply multigrid!
• When the grid is coarse enough, we can switch to a direct method
• Can also apply a Krylov method to solve the coarsest problem
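The steps above can be sketched end to end for the 1D model problem. Grid size, sweep counts, and the right-hand side are assumptions for illustration, and the exact coarse solve uses the Thomas algorithm (a choice made here, not prescribed by the lecture):

```python
# A runnable sketch of the two-grid correction scheme for the 1D model
# problem A x = b with A = tridiag(-1, 2, -1).

def A_apply(x):
    n = len(x)
    return [2 * x[i] - (x[i - 1] if i > 0 else 0.0)
                     - (x[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def gs_sweeps(b, x, sweeps):
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            x[i] = 0.5 * (b[i] + left + right)

def restrict(r):                     # full weighting: 2*nc+1 -> nc points
    return [0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2]
            for i in range((len(r) - 1) // 2)]

def prolong(e, nf):                  # linear interpolation: nc -> nf points
    x = [0.0] * nf
    for j, ej in enumerate(e):
        x[2 * j] += 0.5 * ej
        x[2 * j + 1] += ej
        x[2 * j + 2] += 0.5 * ej
    return x

def tridiag_solve(rhs):              # Thomas algorithm for tridiag(-1, 2, -1)
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]
        cp[i], dp[i] = -1.0 / m, (rhs[i] + dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def two_grid_cycle(b, x):
    nf = len(x)
    gs_sweeps(b, x, 3)                                   # pre-smooth
    r = [bi - ai for bi, ai in zip(b, A_apply(x))]       # residual
    r2h = restrict(r)                                    # restrict
    # exact coarse solve; the Galerkin operator is (1/4)*tridiag(-1,2,-1),
    # so we rescale the right-hand side by 4
    e2h = tridiag_solve([4.0 * v for v in r2h])
    x = [xi + ei for xi, ei in zip(x, prolong(e2h, nf))] # prolong + correct
    gs_sweeps(b, x, 3)                                   # post-smooth
    return x

nf = 31
b = [1.0] * nf
x = [0.0] * nf
for _ in range(12):
    x = two_grid_cycle(b, x)
# the smooth error is handled by the coarse solve and the oscillatory
# error by the Gauss-Seidel sweeps; the exact discrete solution is
# x_i = i*(n+1-i)/2 (1-based), so the error can be checked directly
exact = [(i + 1) * (nf - i) / 2.0 for i in range(nf)]
err = max(abs(a - u) for a, u in zip(x, exact))
```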

Multigrid cycles
• V-cycle: goes from the finest grid down through successive levels to the coarsest grid and then back up to the finest grid
• W-cycle: zig-zags among lower level (and cheaper) grids before going back up to the finest grid
• Full multigrid: bootstraps the coarse solution up through the grid levels, ultimately reaching the finest grid
Figure: cycle diagrams, fine to coarse and back (from Michael T. Heath, Parallel Numerical Algorithms)
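The V-cycle is what you get by letting the two-grid scheme call itself on the coarse problem. A recursive sketch for the 1D model problem (all sizes, sweep counts, and the coarsest-grid treatment below are assumptions):

```python
# A recursive V-cycle sketch for A = tridiag(-1, 2, -1) in 1D.

def gs_sweeps(b, x, sweeps):
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = 0.5 * (b[i] + (x[i - 1] if i > 0 else 0.0)
                                + (x[i + 1] if i < n - 1 else 0.0))

def residual(b, x):
    n = len(x)
    return [b[i] - 2 * x[i] + (x[i - 1] if i > 0 else 0.0)
                            + (x[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

def restrict(r):
    return [0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2]
            for i in range((len(r) - 1) // 2)]

def prolong(e, nf):
    x = [0.0] * nf
    for j, ej in enumerate(e):
        x[2 * j] += 0.5 * ej
        x[2 * j + 1] += ej
        x[2 * j + 2] += 0.5 * ej
    return x

def v_cycle(b, x):
    n = len(x)
    if n <= 3:                       # coarsest grid: "solve" by many sweeps
        gs_sweeps(b, x, 50)
        return x
    gs_sweeps(b, x, 2)               # pre-smooth on the way down
    r2h = restrict(residual(b, x))
    # the Galerkin coarse operator is (1/4) * tridiag(-1, 2, -1) at every
    # level, so we rescale the coarse right-hand side by 4 and recurse
    e2h = v_cycle([4.0 * v for v in r2h], [0.0] * len(r2h))
    x = [xi + ei for xi, ei in zip(x, prolong(e2h, n))]
    gs_sweeps(b, x, 2)               # post-smooth on the way back up
    return x

nf = 63                              # levels: 63 -> 31 -> 15 -> 7 -> 3
b = [1.0] * nf
x = [0.0] * nf
for _ in range(12):
    x = v_cycle(b, x)
# compare against the exact discrete solution x_i = i*(n+1-i)/2 (1-based)
exact = [(i + 1) * (nf - i) / 2.0 for i in range(nf)]
err = max(abs(a - u) for a, u in zip(x, exact))
```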

Coarse Grid Correction Revisited
Coarse grid correction is a projection!
• We have $A^{(h)} x^{(h)} = b^{(h)}$ with initial approximation $x^{(h)}_0$
• Set $r^{(h)}_0 = b^{(h)} - A^{(h)} x^{(h)}_0$ and restrict $r^{(2h)} = I_h^{2h} r^{(h)}_0$
• Solve exactly $e^{(2h)} = \big(I_h^{2h} A^{(h)} I_{2h}^{h}\big)^{-1} r^{(2h)}$
• Prolong $e^{(h)} = I_{2h}^{h} e^{(2h)}$
• Correct; using $I_h^{2h} = c\,(I_{2h}^{h})^T$, the constants $c$ cancel, giving
$$ x^{(h)} \leftarrow x^{(h)}_0 + I_{2h}^{h} \Big[ (I_{2h}^{h})^T A^{(h)} I_{2h}^{h} \Big]^{-1} (I_{2h}^{h})^T r^{(h)}_0 $$
• The residual is updated as
$$ r^{(h)} \leftarrow r^{(h)}_0 - A^{(h)} I_{2h}^{h} \Big[ (I_{2h}^{h})^T A^{(h)} I_{2h}^{h} \Big]^{-1} (I_{2h}^{h})^T r^{(h)}_0 $$
• $A$ SPD =⇒ this is an A-orthogonal (i.e., A-optimal) projection of the residual!
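The projection property is easy to verify numerically: after an exact coarse-grid correction, the restricted residual vanishes, i.e., the new residual is orthogonal to the coarse space. The 1D operators and sizes below are assumptions:

```python
# Verify the projection property for the 1D example operators
# (nc = 3 coarse points, nf = 7 fine points assumed).
nc, nf = 3, 7

def P_apply(e):                  # linear interpolation: nc -> nf
    x = [0.0] * nf
    for j, ej in enumerate(e):
        x[2 * j] += 0.5 * ej
        x[2 * j + 1] += ej
        x[2 * j + 2] += 0.5 * ej
    return x

def Pt_apply(r):                 # P^T: nf -> nc (full weighting up to c = 1/2)
    return [0.5 * r[2 * i] + r[2 * i + 1] + 0.5 * r[2 * i + 2]
            for i in range(nc)]

def A_apply(x):                  # A = tridiag(-1, 2, -1), SPD
    return [2 * x[i] - (x[i - 1] if i > 0 else 0.0)
                     - (x[i + 1] if i < nf - 1 else 0.0) for i in range(nf)]

def tridiag_solve(rhs):          # Thomas algorithm for tridiag(-1, 2, -1)
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]
        cp[i], dp[i] = -1.0 / m, (rhs[i] + dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

r0 = [1.0, -2.0, 3.0, 0.5, -1.0, 2.0, 4.0]   # an arbitrary residual
# solve (P^T A P) e2h = P^T r0; here P^T A P = (1/2) * tridiag(-1, 2, -1)
e2h = tridiag_solve([2.0 * v for v in Pt_apply(r0)])
r1 = [a - c for a, c in zip(r0, A_apply(P_apply(e2h)))]
# P^T r1 = 0: the corrected residual is orthogonal to the coarse space
```

This is exactly the algebra on the slide: $P^T r_1 = P^T r_0 - (P^T A P)(P^T A P)^{-1} P^T r_0 = 0$.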

Exploiting specific strengths of a generally slow method
• Multigrid is quite effective because it exploits the strengths of stationary iterative solvers
• Proper coarse grid selection allows us to target specific parts of the error for smoothing
• Smoothing iterations target only oscillatory modes and require few iterations per grid level
=⇒ Convergence rate independent of grid level mesh size $2^i h$
• All error components are oscillatory on some "coarse enough" grid
=⇒ Under proper assumptions, overall convergence should be rapid and independent of the mesh parameter h


Multigrid costs
• Cost of entire cycle is usually a not-too-large multiple of the cost of a full sweep with, e.g., GS on the finest grid
• One of the most powerful methods for solving sparse linear systems arising from PDEs
• Capable of converging to within truncation error of discretization at cost proportional to number of grid points
→ faster than most other methods


Parallel Multigrid
We have built multigrid out of many pieces which have been covered in the parallel setting
• Jacobi and red-black Gauss-Seidel/SOR
• Residual computation
• Restriction (fine-to-coarse)
• Interpolation (coarse-to-fine)
We still must discuss
• Sequential cycling through the grids
• Parallel efficiency for coarse grids

Parallel Multigrid II
Compared with fine grids, coarse grids
• Inherently allow less parallelism
• Incur higher communication cost relative to computation
• Risk poor load balance (some processors may even be idle for coarsest grids)
• Do not necessarily grow as overall problem grows
How to mitigate this problem
Parallel implementations of multigrid try to minimize time spent on coarse grids

Parallel Multigrid III
• For coarse grids, perhaps improve communication/computation ratio by using fewer processors
• Load balance could be improved by redistributing work across processors
• However, such measures can negatively affect other aspects of the algorithm
→ E.g., restriction and interpolation would no longer be local operations within individual processors

Parallel Multigrid IV
• Alternatives have been proposed to enhance parallelism in multigrid
• Additive multigrid performs smoothing on all grid levels simultaneously
→ Convergence is not guaranteed, so it is used as a preconditioner
• Parallel superconvergent multigrid performs smoothing on multiple grids at each level simultaneously, thereby (hopefully) accelerating convergence

Parallel Multigrid V
• These MG variants are not equivalent to serial multigrid
→ They sacrifice some serial efficiency to gain greater parallelism
• Whether such strategies actually reduce overall time to solution depends on specific problem and parallel system
• Even with “classical” multigrid, serial superiority of FMG over V-cycle may outweigh parallel superiority of V-cycle (log n complexity) over FMG (log2 n complexity)

Parallel Multigrid Example I
Begin by assuming we have as many processors as we want
• Suppose we do k grid levels Ωh, Ω2h, …, Ωkh
• We have k arrays of processors called P1, P2, …, Pk, each of size pi × pi
• Each grid Ωih is distributed across the processor grid Pi
• Pi only needs to communicate with Pi−1 and Pi+1 for restrictions and communication of coarse grid corrections


Parallel Multigrid Example I
Figure: a processor network for multigrid, K = 3 connected 2-dimensional grids of processors (Fig. 3.1 from "Parallel Networks for Multi-Grid Algorithms", SIAM)

Parallel Multigrid Example II
What if the number of processors is limited?
• We can store more than one level on a processor array.
• Which array contains which grids: neighboring grids should still be on directly connected arrays (for communication)
• We can also consider putting coarsest grids on one processor, solving in serial, and broadcasting
• We could also consider putting coarsest grids on all processors and solving redundantly to avoid communication associated with coarsest grid
• Communication costs and specific hardware should help decide what is the best choice for a given application


Multigrid as a preconditioner
• By reducing the number of smoothing iterations and solving only inexactly on the coarse grid, we can use multigrid as a preconditioner
• Question: Is this a fixed preconditioner?
• Consider a one-level V-cycle with one iteration of forward Gauss-Seidel, $M_f = D - E$ and $N_f = F$, as the presmoother
• One iteration of forward Gauss-Seidel with $x_0^{(h)} = 0$:
$$x^{(h)} \leftarrow M_f^{-1} b^{(h)} \iff r^{(h)} \leftarrow b^{(h)} - A^{(h)} x^{(h)} = \left(I - A^{(h)} M_f^{-1}\right) b^{(h)}$$
• Coarse-grid correction:
$$x^{(h)} \leftarrow x^{(h)} + I_{2h}^{h} \left(I_h^{2h} A^{(h)} I_{2h}^{h}\right)^{-1} I_h^{2h} r^{(h)},$$
which, after substituting the presmoothed iterate and residual, becomes
$$x^{(h)} \leftarrow \left[ M_f^{-1} + I_{2h}^{h} \left(I_h^{2h} A^{(h)} I_{2h}^{h}\right)^{-1} I_h^{2h} \left(I - A^{(h)} M_f^{-1}\right) \right] b^{(h)}$$
• One iteration of backward Gauss-Seidel with $M_b = D - F$ and $N_b = E$:
$$x^{(h)} \leftarrow M_b^{-1} N_b x^{(h)} + M_b^{-1} b^{(h)},$$
so the full cycle applies one fixed operator to $b^{(h)}$:
$$x^{(h)} \leftarrow \left[ M_b^{-1} N_b \left( M_f^{-1} + I_{2h}^{h} \left(I_h^{2h} A^{(h)} I_{2h}^{h}\right)^{-1} I_h^{2h} \left(I - A^{(h)} M_f^{-1}\right) \right) + M_b^{-1} \right] b^{(h)}$$
• The answer is therefore yes: with a zero initial guess and a fixed number of smoothing steps, the V-cycle is a fixed linear preconditioner
Soodhalter Parallel Numerics 5636 Multi-grid
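The claim that the cycle applies a single fixed matrix to $b^{(h)}$ can be checked numerically. The sketch below (my own construction, not code from the lecture) builds a 1D Poisson model problem with linear-interpolation grid transfers, runs the three steps of the V-cycle, and compares against the explicit closed-form operator:

```python
import numpy as np

# Model problem: 1D Poisson, n interior fine-grid points (n = 2m + 1).
n = 7
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

D = np.diag(np.diag(A))
E = -np.tril(A, -1)                      # splitting A = D - E - F
F = -np.triu(A, 1)
Mf = D - E                               # forward Gauss-Seidel
Mb, Nb = D - F, E                        # backward Gauss-Seidel

# Linear interpolation P = I_{2h}^h and restriction R = I_h^{2h} = P^T / 2
m = (n - 1) // 2
P = np.zeros((n, m))
for j in range(m):
    P[2 * j : 2 * j + 3, j] = [0.5, 1.0, 0.5]
R = 0.5 * P.T
Ac = R @ A @ P                           # Galerkin coarse-grid operator

def vcycle(b):
    x = np.linalg.solve(Mf, b)           # presmooth (one forward GS, x0 = 0)
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, R @ r)   # coarse-grid correction
    return np.linalg.solve(Mb, Nb @ x + b)   # postsmooth (one backward GS)

# Explicit preconditioner matrix from the closed form on the slide
Mfi, Mbi = np.linalg.inv(Mf), np.linalg.inv(Mb)
C = P @ np.linalg.inv(Ac) @ R
B = Mbi @ Nb @ (Mfi + C @ (np.eye(n) - A @ Mfi)) + Mbi

b = np.random.default_rng(0).standard_normal(n)
assert np.allclose(vcycle(b), B @ b)     # the cycle really is x = B b
```

Since the same matrix B acts on every input, the cycle can be handed to a fixed-preconditioner Krylov method such as standard (non-flexible) GMRES or CG.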

What did we talk about today?
• Details on stagnation behavior of stationary methods (Jacobi, Gauss-Seidel)
• Stationary methods as error “smoothers”
• Representation of the error on different grids
• Grid transfer by restriction and projection
• The multigrid method
• Multigrid cycles
• A parallel multigrid on a network of processor grids
• The “multigrid operator” as a preconditioner
Further reading
• Chapter 13 in Y. Saad. Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM, 2003.
• W. L. Briggs, V. E. Henson, and S. F. McCormick. A Multigrid Tutorial. 2nd ed. SIAM, 2000.
• T. F. Chan and R. Schreiber. Parallel networks for multigrid algorithms: architecture and complexity. SIAM J. Sci. Stat. Comput., 6:698–711, 1985.
Soodhalter Parallel Numerics 5636 Multi-grid