FIT3143 Lab Week 9

MPI VIRTUAL TOPOLOGY AND MASTER & SLAVE
OBJECTIVES
● The purpose of this lab is to explore MPI Virtual Topologies and Master/Slave programs by splitting a communicator.
INSTRUCTIONS


● Download and set up software applications used in this unit [Refer to Lab Week 1]
● Setup eFolio (including Git) and share with tutor and partner [Refer to Lab Week 1]
DESCRIPTION:
● Practice inter-process communication using MPI virtual topology functions.
● Design and implement a simple virtual topology.
● Split a communicator for master and slave operations.
WHAT TO SUBMIT:
1. E-folio document containing the algorithm or code description, analysis of results, screenshots of the running programs, and the Git repository URL. The E-folio template for this lab can be found in Week 06 of Moodle.
2. Code and supporting files in the Git repository.
3. This is an assessed lab. Therefore, you are required to submit the E-folio document, code file(s), and text files to Moodle. The submission link is available in Week 09 of Moodle. Each student makes a submission: although you are working in a team of two (or three) members and your submitted files will be the same within a team, each team member is required to make a submission independently in Moodle.

EVALUATION CRITERIA
This lab work is assessed, with a maximum of 12 marks, which are then scaled to 3 percentage points of the overall unit marks. Marks are allocated as follows:
Task 2: 1 + 1 + 2 + 1 marks
Task 4: 0.5 + 0.5 marks
Task 5: 0.5 + 0.5 marks
LAB ACTIVITIES (12 MARKS)
Task 1 – Creating a 2D Cartesian grid using MPI (worked example; no marks for this activity)
Figure 1: Cartesian grid layout
With reference to Figure 1, create a 2D grid using MPI Cartesian topology functions. Each MPI process in the grid is to print out the following:
a) Current rank
b) Cartesian rank
c) Coordinates
d) List of immediate adjacent processes (left, right, top and bottom)
The user has the option to specify the grid size (number of rows and columns) as command line arguments.
Marking criteria for the assessed tasks:
● Code compiles without errors and executes correctly (2 marks)
● Sufficient code comments
● Questions or instructions fully answered
● Proper presentation of results and analysis (2 marks)

Sample solution:
/* Gets the neighbors in a cartesian communicator
 * Originally written by
 * - Updated Mar, 2015
 * Link: https://edoras.sdsu.edu/~mthomas/sp17.605/lectures/MPI-Cart-Comms-and-Topos.pdf
 * Minor modifications to fix bugs and to revise print output
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <mpi.h>
#define SHIFT_ROW 0
#define SHIFT_COL 1
#define DISP 1
int main(int argc, char *argv[]) {
int ndims=2, size, my_rank, reorder, my_cart_rank, ierr;
int nrows, ncols;
int nbr_i_lo, nbr_i_hi;
int nbr_j_lo, nbr_j_hi;
MPI_Comm comm2D;
int dims[ndims],coord[ndims];
int wrap_around[ndims];
/* start up initial MPI environment */
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
/* process command line arguments*/
if (argc == 3) {
    nrows = atoi (argv[1]);
    ncols = atoi (argv[2]);
    dims[0] = nrows; /* number of rows */
    dims[1] = ncols; /* number of columns */
    if( (nrows*ncols) != size) {
        if( my_rank == 0) printf("ERROR: nrows*ncols = %d * %d = %d != %d\n", nrows, ncols, nrows*ncols, size);
        MPI_Finalize();
        return 0;
    }
} else {
    /* no grid size given: let MPI_Dims_create choose the dimensions */
    nrows = ncols = (int)sqrt(size);
    dims[0] = dims[1] = 0;
}

/*************************************************************/
/* create cartesian topology for processes                   */
/*************************************************************/
MPI_Dims_create(size, ndims, dims);
if(my_rank==0)
    printf("Root Rank: %d. Comm Size: %d: Grid Dimension = [%d x %d] \n", my_rank, size, dims[0], dims[1]);
/* create cartesian mapping */
wrap_around[0] = wrap_around[1] = 0; /* periodic shift is .false. */
reorder = 1;
ierr = MPI_Cart_create(MPI_COMM_WORLD, ndims, dims, wrap_around, reorder, &comm2D);
if(ierr != 0) printf("ERROR[%d] creating CART\n", ierr);
/* find my coordinates in the cartesian communicator group */
MPI_Cart_coords(comm2D, my_rank, ndims, coord);
/* use my cartesian coordinates to find my rank in cartesian group */
MPI_Cart_rank(comm2D, coord, &my_cart_rank);
/* get my neighbors; axis is coordinate dimension of shift */
/* axis=0 ==> shift along the rows: P[my_row-1] : P[me] : P[my_row+1] */
/* axis=1 ==> shift along the columns: P[my_col-1] : P[me] : P[my_col+1] */
MPI_Cart_shift( comm2D, SHIFT_ROW, DISP, &nbr_i_lo, &nbr_i_hi );
MPI_Cart_shift( comm2D, SHIFT_COL, DISP, &nbr_j_lo, &nbr_j_hi );
printf("Global rank: %d. Cart rank: %d. Coord: (%d, %d). Left: %d. Right: %d. Top: %d. Bottom: %d\n", my_rank, my_cart_rank, coord[0], coord[1], nbr_j_lo, nbr_j_hi, nbr_i_lo, nbr_i_hi);
fflush(stdout);
MPI_Comm_free( &comm2D );
MPI_Finalize();
return 0;
}
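Assuming the sample is saved as cart.c (an illustrative file name), it can be compiled and run as follows; note that the product of the two grid arguments must equal the number of MPI processes:

mpicc cart.c -o cart -lm
mpirun -np 12 ./cart 3 4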

Task 2 – Inter process communication between adjacent processes in a Cartesian grid topology
Using the aforementioned sample solution and with reference to Figure 1, implement the following:
a) Each MPI process in the grid generates a random prime number and exchanges it with its adjacent processes. You can use any type of MPI send and receive function (asynchronous or synchronous) or other MPI collective communication functions.
b) Upon exchanging the random numbers, each process compares the received prime numbers with its own prime number. If any one of the received prime numbers matches its own, the process logs this information into a text file. This means that if you are running an MPI topology using a 4 × 5 grid, you will have 20 unique log files. Each log file will either contain entries for matching prime numbers or no content (depending on the randomness of the generated prime numbers).
c) Repeat part (b) above using a loop based on a fixed number of iterations.
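A minimal sketch for this task is given below, building on the Task 1 grid code. The helpers is_prime and gen_random_prime, the log file naming, and the constants ITERATIONS and PRIME_CEILING are illustrative assumptions rather than part of the specification; any correct prime generator and exchange pattern (e.g. non-blocking sends and receives) would also be acceptable.

/* Sketch for Task 2: each process in a non-periodic 2D grid draws a random
 * prime, swaps it with its four neighbours, and logs matches to its own file.
 * ITERATIONS, PRIME_CEILING and the helper names are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

#define SHIFT_ROW 0
#define SHIFT_COL 1
#define DISP 1
#define ITERATIONS 10     /* part (c): fixed number of iterations */
#define PRIME_CEILING 50  /* small range so matches actually occur */

static int is_prime(int n) {
    if (n < 2) return 0;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

static int gen_random_prime(void) {
    int n;
    do { n = rand() % PRIME_CEILING; } while (!is_prime(n));
    return n;
}

int main(int argc, char *argv[]) {
    int size, my_cart_rank, ndims = 2;
    int dims[2] = {0, 0}, coord[2], wrap_around[2] = {0, 0};
    int nbr[4]; /* left, right, top, bottom (MPI_PROC_NULL at grid edges) */
    MPI_Comm comm2D;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, ndims, dims);
    MPI_Cart_create(MPI_COMM_WORLD, ndims, dims, wrap_around, 1, &comm2D);
    MPI_Comm_rank(comm2D, &my_cart_rank);
    MPI_Cart_coords(comm2D, my_cart_rank, ndims, coord);
    MPI_Cart_shift(comm2D, SHIFT_COL, DISP, &nbr[0], &nbr[1]); /* left, right */
    MPI_Cart_shift(comm2D, SHIFT_ROW, DISP, &nbr[2], &nbr[3]); /* top, bottom */

    srand((unsigned)time(NULL) + my_cart_rank); /* distinct seed per process */

    char fname[32];
    sprintf(fname, "rank_%d.log", my_cart_rank); /* one log file per process */
    FILE *fp = fopen(fname, "w");

    for (int iter = 0; iter < ITERATIONS; iter++) {
        int my_prime = gen_random_prime();
        int recvd[4];
        /* pairwise exchange with each neighbour; sends/receives addressed
         * to MPI_PROC_NULL complete immediately as no-ops */
        for (int k = 0; k < 4; k++)
            MPI_Sendrecv(&my_prime, 1, MPI_INT, nbr[k], 0,
                         &recvd[k], 1, MPI_INT, nbr[k], 0,
                         comm2D, MPI_STATUS_IGNORE);
        for (int k = 0; k < 4; k++)
            if (nbr[k] != MPI_PROC_NULL && recvd[k] == my_prime)
                fprintf(fp, "Iteration %d: prime %d matches neighbour rank %d\n",
                        iter, my_prime, nbr[k]);
    }

    fclose(fp);
    MPI_Comm_free(&comm2D);
    MPI_Finalize();
    return 0;
}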
Task 3 – Master/Slave program in MPI (worked example; no marks for this activity)
Message passing is well-suited to handling computations where a task is divided up into subtasks, with most of the processes used to compute the subtasks and a few processes (often just one process) managing the tasks.
The manager is called the “master” and the others the “workers” or the “slaves”.
In this task, you will begin to build an Input/Output master/slave system. This will allow you to relatively easily arrange for different kinds of input and output from your program, including:
• Ordered output (process 2 after process 1)
• Duplicate removal (a single instance of “Hello world” instead of one from each process)
• Input to all processes from a terminal
This will be accomplished by dividing the processes in MPI_COMM_WORLD into two sets – the master (who will do all of the I/O) and the slaves (who will do all of their I/O by contacting the master). The slaves will also do any other computation that they might desire.
For this task, divide the processes into two communicators, with one process the master and the others the slaves. The master should accept messages from the slaves (of type MPI_CHAR) and print them in rank order (that is, first from slave 0, then from slave 1, etc.). The slaves should each send 2 messages to the master. For simplicity, have the slaves send the messages:
Hello from slave 3
Goodbye from slave 3
(with appropriate values for each slave). You may assume a maximum message length of 256 characters.

For this first task, keep the code simple. Do not use intercommunicators. Also, you’ll find that you use the new communicator for the slaves only to get the rank of the slave in its communicator. Note that if the slaves were also computing, they would use that new communicator instead of MPI_COMM_WORLD. You may want to use these MPI routines in your solution:
MPI_Comm_split, MPI_Send, MPI_Recv
Sample solution:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
int master_io(MPI_Comm master_comm, MPI_Comm comm);
int slave_io(MPI_Comm master_comm, MPI_Comm comm);
int main(int argc, char **argv)
{
    int rank;
    MPI_Comm new_comm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* colour 1 for the master (world rank 0), colour 0 for the slaves */
    MPI_Comm_split( MPI_COMM_WORLD, rank == 0, 0, &new_comm);
    if (rank == 0)
        master_io( MPI_COMM_WORLD, new_comm );
    else
        slave_io( MPI_COMM_WORLD, new_comm );
    MPI_Finalize();
    return 0;
}
/* This is the master */
int master_io(MPI_Comm master_comm, MPI_Comm comm)
{
    int i, j, size;
    char buf[256];
    MPI_Status status;
    MPI_Comm_size( master_comm, &size );
    /* two rounds of messages; within each round, receive from the
       slaves in rank order so that the output is ordered */
    for (j=1; j<=2; j++) {
        for (i=1; i<size; i++) {
            MPI_Recv( buf, 256, MPI_CHAR, i, 0, master_comm, &status );
            fputs( buf, stdout );
        }
    }
    return 0;
}

/* This is the slave */
int slave_io(MPI_Comm master_comm, MPI_Comm comm)
{
    char buf[256];
    int rank;
    /* rank within the slaves' communicator */
    MPI_Comm_rank( comm, &rank );
    sprintf( buf, "Hello from slave %d\n", rank );
    MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, 0, master_comm );
    sprintf( buf, "Goodbye from slave %d\n", rank );
    MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, 0, master_comm );
    return 0;
}
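Assuming the completed sample is saved as master_slave.c (an illustrative name), a possible build and run:

mpicc master_slave.c -o master_slave
mpirun -np 5 ./master_slave

With 5 processes, the master (world rank 0) prints the Hello messages from slaves 0 to 3 in rank order, followed by the Goodbye messages in rank order. Note that each slave's printed number is its rank in the slaves' communicator, which is one less than its rank in MPI_COMM_WORLD.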