MPI INTRODUCTION
1. Introduction
The Message Passing Interface, MPI, is a specification for a set of functions and data types that support a particular model of parallel programming. All data is exchanged using explicit messages; that is to say, there is no shared memory space. Each process can send to any other process, and messages must be explicitly received. If we want to synchronise our processes, we must use a barrier. Some MPI operations have the side effect of creating a barrier.
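To make this model concrete, the short sketch below (not one of the lab files) shows an explicit send/receive pair and a barrier; it assumes it is launched with at least two processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Nothing is shared implicitly: rank 0 must explicitly send the value to rank 1. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ...and rank 1 must explicitly receive it. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    /* Explicit synchronisation point: every process waits here. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}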
This lab looks at getting jobs running on a cluster; it will take you through compiling code on the cluster and submitting jobs. Clusters usually have a job scheduler and a queueing system so that a fair-usage policy can be applied and the resources are shared between users.
Once again, as with the previous lab sessions, the cluster you’ll be running on is called Kudu. The compute nodes in the cs402 partition have dual-socket, 6-core processors, so each node has 12 cores in total. By the end of this lab exercise you will have written the famous “Hello, World!” MPI example and looked at parallelising an integral approximation algorithm with MPI.
This lab makes use of the OpenMPI library [1] and the Slurm workload manager.
3. Lab Exercise
3.1. Logging in to the cluster. The cluster can be accessed in the same way as described in the “Remote Working” section of lab sheet 2 (OpenMP). For those of you that haven’t yet accessed the cluster, it’s recommended to read through this section of lab 2. This section explains how to get started with the DCS-managed cluster¹ for those who are unfamiliar with it, and how you can use it to run the MPI programs you write while completing this lab sheet.
First, connect to one of the DCS Login/Remote nodes using one of the following two commands (where uXXXXXXX is replaced by your username and nn in remote-nn is replaced by the last two digits of your username):
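For example, a command of the following form should work; the exact hostname pattern is an assumption based on the usual DCS naming, so substitute your own details:
$ ssh uXXXXXXX@remote-nn.dcs.warwick.ac.uk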
¹ Further information on the Batch Compute System can be found here: https://warwick.ac.uk/fac/sci/dcs/intranet/user_guide/batch_compute
Then, connect to the login node of the cluster, kudu:
$ ssh kudu
Once you have connected to kudu you will see that you can access all of the files that you have saved in your DCS file system. It’s important that you do not run your C programs directly on the login node, but instead submit them using a batch submission script. A number of sample submission scripts will be provided to you throughout this lab sheet.
In order to compile and run any MPI programs on the cluster, you must load the necessary environment module for the compilers using the following command:
$ module load cs402-mpi
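You can confirm that the module has been loaded with:
$ module list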
3.2. Hello World. The first program we will look at is the ubiquitous “Hello, world!” program, this time as an MPI version. This program has each process print out a message that includes its rank. You can find this program as “helloWorldMPI.c”.
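The file is provided for you, but a minimal version of it might look something like the following (the provided file may differ in its details):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello, world! from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut the MPI runtime down */
    return 0;
}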
This can then be compiled like so:
$ mpicc helloWorldMPI.c -o helloWorld
To run this on the cluster, a submission script must be submitted to Slurm. An example submission script can be seen below:
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --partition=cs402
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:01:00

# Reset the environment and load the MPI module on the compute node
module purge
module load cs402-mpi

# Launch the MPI program (one process per task)
mpirun ./helloWorld
This script requests four tasks (MPI processes) on a single node in the cs402 partition, with a one-minute time limit. To submit the script to the scheduler, do the following:
$ sbatch <name of your submission script>
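Once the job has been submitted, you can check on it and, when it finishes, look at its output. By default Slurm writes a job’s output to a file named slurm-<jobid>.out in the directory it was submitted from:
$ squeue -u $USER
$ cat slurm-<jobid>.out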