COMP528-JAN21 University of Liverpool

COMP528-JAN21 – Lab 3

Coordinator: Fabio Papacchini

Message Passing Interface (MPI)
This lab looks at compiling and running a couple of simple MPI code examples.

Login to Barkla (as per LAB01 and LAB02) and obtain today’s lab file/s:
cd

tar -xzf /users/papacchf/COMP528-JAN21/labs/lab03.tgz

cd intro-mpi

If you now list the directory (using “ls”) you should see:
comms.c  mpi_hello.c

You are encouraged to follow all the steps in this document. You can email me your solutions
(please write “COMP528 Lab3” in the subject), and I will provide you with feedback on them.

1. This week’s lab work begins our focus on MPI (“message passing interface”). You have already seen how
to compile and run MPI codes in the previous lab (you may need to refer back to those instructions!)

2. Copy the "run-mpi.sh" batch script from last week's lab directory ($HOME/intro-par) into your working directory

cp $HOME/intro-par/run-mpi.sh ./

(the "./" means "the current directory" [as the destination of the copy])

3. Look at the script and determine:

(a) What module/s are needed for compiling MPI with an Intel compiler?

(b) How does the script determine the number of cores to run on?

(c) How is the compiled executable initiated to run on this number of cores?
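If you get stuck on these questions, the sketch below shows the general shape such a batch script often takes. The module name and file names here are assumptions for illustration only; your actual run-mpi.sh from LAB02 is the authoritative version.

#!/bin/bash
# number of tasks and time limit are usually supplied on the sbatch command line
module load mpi/intel-mpi                  # hypothetical module name; check the real script
mpiicc mpi_hello.c -o mpi_hello.exe        # compile with the Intel MPI wrapper
mpirun -np $SLURM_NTASKS ./mpi_hello.exe   # SLURM_NTASKS = number of tasks SLURM allocated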

4. Note that we prefer to use the Intel compiler since we found in LAB01 that it gave better performance than the GNU compiler. To compile MPI code, which involves MPI constants and calls to MPI functions, we could use "icc" with a long list of include and library flags, or we can simply use the system-provided "mpiicc" (note it has two "i"s, as opposed to "mpicc", which would call a different MPI wrapper around a different base compiler).
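For example, once the relevant modules are loaded, compiling is a one-liner (the output name here is illustrative):

mpiicc mpi_hello.c -o mpi_hello.exe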

5. From LAB02, determine how to submit this MPI batch script to the SLURM batch system for a given
number of cores, and how to set the maximum wall clock time for the batch job.
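As a reminder of the general syntax from LAB02 (the exact flags used there may differ), a submission typically looks like

sbatch -n 3 -t 2 run-mpi.sh

where "-n" requests the number of tasks (cores) and "-t" sets the maximum wall clock time in minutes.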

6. You can see more details of what happens when you use the MPI compiler wrapper. You will need to load the relevant modules into your login shell, and then run

mpiicc -show

which should give the following (one command spread over several lines of output), showing the include and link libraries and the base compiler "icc":

icc
-I/opt/apps/alces/intel/psxe/2019u5/compilers_and_libraries_2019.5.281/linux/mpi/intel64/include
-L/opt/apps/alces/intel/psxe/2019u5/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release
-L/opt/apps/alces/intel/psxe/2019u5/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib
-Xlinker --enable-new-dtags
-Xlinker -rpath -Xlinker /opt/apps/alces/intel/psxe/2019u5/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib/release
-Xlinker -rpath -Xlinker /opt/apps/alces/intel/psxe/2019u5/compilers_and_libraries_2019.5.281/linux/mpi/intel64/lib
-lmpifort -lmpi -ldl -lrt -lpthread

(NB default compiler versions may change over time.)

7. Look at the "mpi_hello.c" code and read the comments to try to determine what it does.
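For orientation before you open the file: the sketch below is not the lab code itself, but a minimal MPI "hello" program generally has this shape (variable names are illustrative, and the lab version additionally does something with a variable "X" that you are asked to explain in step 9):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* must precede all other MPI calls */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id: 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI processes */
    MPI_Get_processor_name(name, &len);      /* name of the node this rank runs on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                          /* must follow all other MPI calls */
    return 0;
}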

8. Compile and run this by amending your copy of the "run-mpi.sh" script (see step 2). Ensure you only run the code once. Submit this to the batch system to run on 3 processes with the maximum wall clock time set to 2 minutes. Record what command you use to submit this job.

9. Observe the output and explain:

(a) What “processor name” and “rank” refer to

(b) What is happening with the variable “X”

10. Run again on 13 processes (and again ensure you set the max wall clock time to 2 minutes). What can you say about the ordering of the output? (Do this a few times to confirm your answer!)

11. Look at the code "comms.c", which is a more complicated version of the "mpi_hello.c" code. Ensuring that you have the Intel MPI module loaded, you can examine MPI calls with (for example)

man MPI_Send

Look carefully at the logic in the if/else-if/else construct and sketch out what you expect to happen when there are 3 MPI processes. You should not compile and run the code at this point: can you understand it from the source code alone?

What value do you expect each MPI process to have for the variable “buffer”? Write this for each rank
of the 3 MPI processes.
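Working out the actual logic of comms.c is the point of this step, so the sketch below is only a hypothetical illustration of the if/else-if/else pattern for point-to-point transfers; the rank roles, message, tag, and buffer size are all assumptions, not the contents of comms.c:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    char buffer[32] = "unset";    /* every rank starts with "unset" */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(buffer, "hello");  /* rank 0 fills the buffer... */
        MPI_Send(buffer, 32, MPI_CHAR, 1, 0, MPI_COMM_WORLD);   /* ...and sends it to rank 1 */
    } else if (rank == 1) {
        MPI_Recv(buffer, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);   /* rank 1 overwrites its buffer with the message */
    } else {
        /* all other ranks never touch buffer, so it stays "unset" */
    }

    printf("rank %d: buffer = \"%s\"\n", rank, buffer);
    MPI_Finalize();
    return 0;
}

In a pattern like this one, only the receiving rank's buffer changes; comparing this against the real comms.c should sharpen your prediction before step 12.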

12. Copy and amend your batch script from above (step 8) to compile and run the "comms.c" code in batch.
Submit a job that will run on 3 processes for a maximum wall clock time of 3 minutes. Look at the
output and compare to your answer for step 11.

13. What do you expect to happen when you run on 2 processes? Submit a job that will run on 2 processes
for a maximum wall clock time of 3 minutes and see if you are correct.

Today you have experienced and learned about the necessary calls for MPI and how to do "point-to-point" MPI communications.
