Virtual Topologies
Adapted from the Virtual Topologies slides from the Ohio Supercomputer Center
CMPSC 450
Virtual Topologies
• Virtual topologies
• Topology types
• Creating a Cartesian virtual topology
• Cartesian example
• Cartesian mapping functions
• MPI_CART_RANK
• MPI_CART_COORDS
• MPI_CART_SHIFT
• Cartesian partitioning
Virtual Topologies
• Convenient process naming
• Naming scheme to fit the communication pattern
• Simplifies writing of code
• Can allow MPI to optimize communications
• Rationale: access to useful topology routines
How to Use a Virtual Topology
• Creating a topology produces a new communicator
• MPI provides "mapping functions"
• Mapping functions compute process ranks based on the topology naming scheme
Example – 2D torus
Topology Types
• Cartesian topologies
• Each process is connected to its neighbors in a virtual grid
• Boundaries can be cyclic
• Processes can be identified by Cartesian coordinates
• Graph topologies
• General graphs
• Will not be covered here
Creating a Cartesian Virtual Topology
int MPI_Cart_create (MPI_Comm comm_old,
int ndims,
int *dims,
int *periods,
int reorder,
MPI_Comm *comm_cart)
Arguments
comm_old     existing communicator
ndims        number of dimensions
dims         array specifying the number of processes in each dimension
periods      logical array indicating whether a dimension is cyclic
             (TRUE => cyclic boundary conditions)
reorder      logical
             (FALSE => ranks preserved, TRUE => possible rank reordering)
comm_cart    new Cartesian communicator
Cartesian Example
MPI_Comm vu;
int dim[2], period[2], reorder;
dim[0] = 4; dim[1] = 3;   /* 4 x 3 process grid */
period[0] = TRUE;         /* dimension 0 is cyclic */
period[1] = FALSE;
reorder = TRUE;
MPI_Cart_create(MPI_COMM_WORLD, 2, dim, period, reorder, &vu);
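For orientation, here is a sketch of the resulting layout, assuming MPI's row-major ordering of Cartesian coordinates and that reorder left the ranks unchanged:

/* 4 x 3 grid created above: rank = coord[0]*3 + coord[1]
           dim1=0  dim1=1  dim1=2
   dim0=0     0       1       2
   dim0=1     3       4       5
   dim0=2     6       7       8
   dim0=3     9      10      11
   Dimension 0 wraps around (period[0]=TRUE); dimension 1 does not. */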
Cartesian Mapping Functions
Mapping process grid coordinates to ranks
int MPI_Cart_rank (MPI_Comm comm,
int *coords,
int *rank)
Cartesian Mapping Functions
Mapping ranks to process grid coordinates
int MPI_Cart_coords (MPI_Comm comm, int rank,
int maxdims,
int *coords)
Sample Program
#include <stdio.h>
#include <mpi.h>
#define TRUE 1
#define FALSE 0

/* Run with 12 processes */
int main(int argc, char *argv[])
{
    int rank;
    MPI_Comm vu;
    int dim[2], period[2], reorder;
    int coord[2], id;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    dim[0] = 4; dim[1] = 3;
    period[0] = TRUE;
    period[1] = FALSE;
    reorder = TRUE;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dim, period, reorder, &vu);

    if (rank == 5) {
        /* Map my rank to grid coordinates */
        MPI_Cart_coords(vu, rank, 2, coord);
        printf("P:%d My coordinates are %d %d\n", rank, coord[0], coord[1]);
    }
    if (rank == 0) {
        /* Map grid coordinates to a rank */
        coord[0] = 3; coord[1] = 1;
        MPI_Cart_rank(vu, coord, &id);
        printf("The processor at position (%d, %d) has rank %d\n",
               coord[0], coord[1], id);
    }
    MPI_Finalize();
    return 0;
}
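Since reorder=TRUE allows MPI to renumber the processes, the exact output can vary; assuming the ranks remained in row-major order, the program would print:

P:5 My coordinates are 1 2
The processor at position (3, 1) has rank 10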
Cartesian Mapping Functions
Computing ranks of neighboring processes
int MPI_Cart_shift (MPI_Comm comm,
int direction,
int disp,
int *rank_source,
int *rank_dest)
MPI_Cart_shift
• Does not actually shift data: it returns the ranks for a shift, which can then be used in subsequent communication calls (see the sketch after this list)
• Arguments:
• direction (dimension in which the shift should be made)
• disp (length of the shift in process coordinates [+ or -])
• rank_source (the rank from which the calling process receives a message during the shift)
• rank_dest (the rank to which the calling process sends a message during the shift)
• If the shift crosses a non-periodic boundary of the topology, MPI_PROC_NULL is returned
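A minimal usage sketch, assuming the 4x3 communicator vu from the earlier example (the buffer and variable names are illustrative):

int src, dest;
double send_val = (double)rank, recv_val;

/* Ranks of my neighbors one step away along dimension 0 */
MPI_Cart_shift(vu, 0, 1, &src, &dest);

/* Send toward dest, receive from src; at a non-periodic edge
   MPI_PROC_NULL turns the corresponding transfer into a no-op. */
MPI_Sendrecv(&send_val, 1, MPI_DOUBLE, dest, 0,
             &recv_val, 1, MPI_DOUBLE, src, 0,
             vu, MPI_STATUS_IGNORE);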
Sample Program
#include <stdio.h>
#include <mpi.h>
#define TRUE 1
#define FALSE 0

int main(int argc, char *argv[])
{
    int rank;
    MPI_Comm vu;
    int dim[2], period[2], reorder;
    int up, down, right, left;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    dim[0] = 4; dim[1] = 3;
    period[0] = TRUE; period[1] = FALSE;
    reorder = TRUE;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dim, period, reorder, &vu);

    if (rank == 9) {
        /* Neighbor ranks one step away in each dimension */
        MPI_Cart_shift(vu, 0, 1, &left, &right);
        MPI_Cart_shift(vu, 1, 1, &up, &down);
        printf("P:%d My neighbors are r:%d d:%d l:%d u:%d\n",
               rank, right, down, left, up);
    }
    MPI_Finalize();
    return 0;
}
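With 12 processes, rank 9 sits at coordinates (3, 0). Assuming the ranks stayed in row-major order, the shift along the cyclic dimension 0 wraps around to rank 0, while the shift along the non-cyclic dimension 1 yields MPI_PROC_NULL upward:

/* Expected output, assuming row-major ranks and an
   implementation where MPI_PROC_NULL prints as -1:
     P:9 My neighbors are r:0 d:10 l:6 u:-1 */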
Cartesian Partitioning
• Often we want to do an operation on only part of an existing Cartesian topology
• Cut a grid up into ‘slices’
• A new communicator is produced for each slice
• Each slice can then perform its own collective communications
• MPI_Cart_sub (C) / MPI_CART_SUB (Fortran) generates the new communicator for each slice
MPI_Cart_sub
int MPI_Cart_sub (MPI_Comm comm,
int *remain_dims,
MPI_Comm *newcomm)
• If comm describes a 2x3x4 grid and remain_dims = {TRUE, FALSE, TRUE}, then three new communicators are created, each being a 2x4 grid
• Each calling process receives only its own new communicator
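A minimal sketch, assuming the 4x3 communicator vu from the earlier examples, that slices the grid into rows and runs a collective within each row:

/* Drop dimension 0, keep dimension 1: each of the 4 rows of
   the 4x3 grid becomes its own 3-process communicator. */
int remain_dims[2] = {FALSE, TRUE};
MPI_Comm row_comm;
MPI_Cart_sub(vu, remain_dims, &row_comm);

/* Each row can now run its own collectives independently. */
double my_val = (double)rank, row_sum;
MPI_Allreduce(&my_val, &row_sum, 1, MPI_DOUBLE, MPI_SUM, row_comm);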
Send Receive Demo (send_recv.c)
// Find out rank, size
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);

int number;
if (world_rank == 0) {
    number = -1;
    MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
} else if (world_rank == 1) {
    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    printf("Process 1 received number %d from process 0\n", number);
}
Ping Pong Demo (ping_pong.c)
int ping_pong_count = 0;
int partner_rank = (world_rank + 1) % 2;
// PING_PONG_LIMIT is assumed to be #define'd elsewhere in ping_pong.c
while (ping_pong_count < PING_PONG_LIMIT) {
    if (world_rank == ping_pong_count % 2) {
        // Increment the ping pong count before you send it
        ping_pong_count++;
        MPI_Send(&ping_pong_count, 1, MPI_INT, partner_rank, 0,
                 MPI_COMM_WORLD);
        printf("%d sent and incremented ping_pong_count %d to %d\n",
               world_rank, ping_pong_count, partner_rank);
    } else {
        MPI_Recv(&ping_pong_count, 1, MPI_INT, partner_rank, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("%d received ping_pong_count %d from %d\n",
               world_rank, ping_pong_count, partner_rank);
    }
}