
XJCO3221 Coursework 2, School of Computing, University of Leeds
Parallel computation
Coursework 2: Distributed memory parallelism with MPI
Deadline: 10am, Tuesday 19th April 2022


If you have any queries about this coursework please visit the MPI Discussion Forum on Minerva (found in the Learning Resources folder for this module). If your query is not resolved by previous answers, post a new message.
This piece of work is worth 20% of the final module grade.

Learning objectives
• Use collective communication to distribute the problem and accumulate the answer.
• Implement a binary tree using point-to-point communication.
• Perform timing runs for parallel scaling and interpret your findings.
Your task is to implement an MPI-C program that performs matrix–vector multiplication in parallel. That is, given the N × N matrix A and the length-N vector x defined on rank 0, you need to calculate b = Ax, i.e.

    b_i = sum_{j=0}^{N-1} A_ij * x_j,   for i = 0, 1, ..., N-1.
For convenience, rather than store the matrix A as a two-dimensional float array, it is instead stored as a one-dimensional array of size N^2. This ensures the rows of the matrix are stored contiguously in memory, which makes the use of collective communication easier. With this choice, the matrix element at row row and column col is A[row*N+col], and the serial code that performs the multiplication is
int row, col;
for( row=0; row<N; row++ )
{
    b[row] = 0.0f;
    for( col=0; col<N; col++ )
        b[row] += A[row*N+col] * x[col];
}