
CS 563 Concurrent Programming
Lecture 8: Message Passing Interface (MPI)

Processes vs. Threads


The Message Passing Model
MPI is for communication among processes (Inter-Process Communication), which have separate address spaces
Interprocess communication consists of
Synchronization
Movement of data from one process’s address space to another’s

MPI Library
One program; a copy is loaded onto every node
SPMD programming style (single program, multiple data)

MPI Library
Today, MPI runs on virtually any hardware platform
Distributed memory, shared memory, and hybrid systems
The programming model clearly remains a distributed memory model, regardless of the underlying physical architecture
All parallelism is explicit

Cooperative Operations for Communication
The message-passing approach makes the exchange of data cooperative. Data is explicitly sent by one process and received by another.
An advantage is that any change in the receiver process’s memory is made with the receiver’s explicit participation.
Communication and synchronization are combined.
Send(data)
Receive(data)

What is MPI
A message-passing library specification
extended message-passing model
not a language or compiler specification
not a specific implementation or product
For parallel computers, clusters, and heterogeneous networks
Designed to provide access to advanced parallel hardware

MPI Resources on Web
The Standard itself:
at http://www.mpi-forum.org

Using MPI: Portable Parallel Programming with the Message-Passing Interface, by Gropp, Lusk, and Skjellum, MIT Press, 1994.
MPI: The Complete Reference, by Snir, Otto, Huss-Lederman, Walker, and Dongarra, MIT Press, 1996.
Designing and Building Parallel Programs, by Ian Foster, Addison-Wesley, 1995.
Parallel Programming with MPI, by Pacheco, Morgan Kaufmann, 1997.
MPI: The Complete Reference, Vol. 1 and 2, MIT Press, 1998 (Fall).

A Minimal MPI Program
#include "mpi.h"
#include <stdio.h>
int main( int argc, char *argv[] )
{
    MPI_Init( &argc, &argv );
    printf( "Hello, world!\n" );
    MPI_Finalize();
    return 0;
}

mpi.h must be #included
MPI functions return error codes or MPI_SUCCESS
By default, an error causes all processes to abort.

Running MPI Programs
In general, starting an MPI program is dependent on the implementation of MPI you are using, and might require various scripts, program arguments, and/or environment variables.
Common launchers are mpiexec and mpirun, e.g. mpiexec -n 4 ./a.out

Finding Out About the Environment
Two important questions that arise early in a parallel program are:
How many processes are participating in this computation?
Which one am I?

Finding Out About the Environment
MPI provides functions to answer these questions:
MPI_Comm_size
MPI_Comm_rank

Better Hello
#include "mpi.h"
#include <stdio.h>
int main( int argc, char *argv[] )
{
    int rank, size;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    printf( "I am %d of %d\n", rank, size );
    MPI_Finalize();
    return 0;
}

Communicators
MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
Most MPI routines require you to specify a communicator as an argument
For now, simply use MPI_COMM_WORLD wherever a communicator is required

MPI Basic Send/Receive
We need to fill in the details in
Send(data)
Receive(data)
Things that need specifying
how will data be described?
how are processes identified?
how does the receiver recognize/screen messages?
what does it mean for these operations to complete?

What is Message Passing
Data transfer plus synchronization
[Diagram: Process 0 holds the data and asks Process 1 "May I send?" before the transfer; time runs along the axis]
Requires cooperation of sender and receiver
Cooperation is not always apparent in the code

MPI Datatypes
The data in a message to be sent or received is described by a triple (address, count, datatype)
Predefined, corresponding to a data type from the language (e.g., MPI_INT, MPI_DOUBLE_PRECISION)
Datatype can also be an array of the above primitive types

Messages are sent with an accompanying user-defined integer tag, to assist the receiving process in identifying the message.
Messages can be screened at the receiving end by specifying a specific tag, or not screened by specifying MPI_ANY_TAG as the tag in a receive.
Some non-MPI message-passing systems have called tags “message types”. MPI calls them tags to avoid confusion with datatypes.

MPI Basic (Blocking) Send
MPI_SEND (start, count, datatype, dest, tag, comm)
The message buffer is described by (start, count, datatype).
The target process is specified by dest, which is the rank of the target process in the communicator specified by comm.
When this function returns, the data has been delivered to the system and the buffer can be reused.

MPI Basic (Blocking) Receive
MPI_RECV(start, count, datatype, source, tag, comm, status)
Waits until a matching (on source and tag) message is received from the system, and the buffer can be used.
source is rank in communicator specified by comm, or MPI_ANY_SOURCE.
status contains further information
Receiving fewer than count occurrences of datatype is OK, but receiving more is an error.
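To tie the send and receive pieces together, here is a minimal sketch (not from the slides; the buffer size, ranks, and tag value 99 are illustrative). Rank 0 sends four doubles to rank 1, which receives with MPI_ANY_TAG and then queries the status; it needs at least two processes to run.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, received;
    double buf[4] = {1.0, 2.0, 3.0, 4.0};  /* (address, count, datatype) = (buf, 4, MPI_DOUBLE) */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* blocking send to rank 1 with an illustrative tag of 99 */
        MPI_Send(buf, 4, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* accept any tag; status reports who sent the message and with which tag */
        MPI_Recv(buf, 4, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_DOUBLE, &received);
        printf("rank 1 got %d doubles from rank %d with tag %d\n",
               received, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}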

Even Better Hello
#include <stdio.h>
#include <string.h>
#include "mpi.h"
#define BUFSIZE 128
#define TAG 0
int main(int argc, char *argv[]){
  char idstr[32];
  char buff[BUFSIZE];
  int numprocs, myid, i;
  MPI_Status stat;
  /* all MPI programs start with MPI_Init; all 'N' processes exist thereafter */
  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&numprocs); /* find out how big the SPMD world is */
  MPI_Comm_rank(MPI_COMM_WORLD,&myid);     /* and what this process's rank is */
  /* At this point, all the programs are running equivalently; the rank is used to
     distinguish the roles of the programs in the SPMD model, with rank 0 often
     used specially... */
  if(myid == 0){
    printf("%d: We have %d processors\n", myid, numprocs);
    for(i=1;i<numprocs;i++){
      sprintf(buff, "Hello %d! ", i);
      MPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);
    }
    for(i=1;i<numprocs;i++){
      MPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);
      printf("%d: %s\n", myid, buff);
    }
  } else {
    /* every other rank receives a greeting, appends its reply, and sends it back */
    MPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);
    sprintf(idstr, "Processor %d ", myid);
    strncat(buff, idstr, BUFSIZE - strlen(buff) - 1);
    strncat(buff, "reporting for duty", BUFSIZE - strlen(buff) - 1);
    MPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);
  }
  MPI_Finalize();
  return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "math.h"
#include "mpi.h"
int main(int argc, char** argv)
{
    int taskid, ntasks;
    int ierr, i, j, itask;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
    if ( taskid == 0 ){
        printf("\n\n\n\n\n");
    }
    /* a single barrier around the whole greeting */
    ierr = MPI_Barrier(MPI_COMM_WORLD);
    if ( taskid == 0 ) printf("Hel");
    if ( taskid == 1 ) printf("lo ");
    if ( taskid == 2 ) printf("Wor");
    if ( taskid == 3 ) printf("ld!");
    ierr = MPI_Barrier(MPI_COMM_WORLD);
    if ( taskid == 0 ){
        printf(" (Ordered)\n\n\n\n\n");
    }
    MPI_Finalize();
    return 0;
}

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include "math.h"
#include "mpi.h"
int main(int argc, char** argv)
{
    int taskid, ntasks;
    int ierr, i, j, itask;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
    if ( taskid == 0 ){
        printf("\n\n\n\n\n");
    }
    /* a barrier between each print, so the ranks take turns */
    ierr = MPI_Barrier(MPI_COMM_WORLD);
    if ( taskid == 0 ) printf("Hel");
    ierr = MPI_Barrier(MPI_COMM_WORLD);
    if ( taskid == 1 ) printf("lo ");
    ierr = MPI_Barrier(MPI_COMM_WORLD);
    if ( taskid == 2 ) printf("Wor");
    ierr = MPI_Barrier(MPI_COMM_WORLD);
    if ( taskid == 3 ) printf("ld!");
    ierr = MPI_Barrier(MPI_COMM_WORLD);
    if ( taskid == 0 ){
        printf(" (Ordered)\n\n\n\n\n");
    }
    MPI_Finalize();
    return 0;
}

MPI_Bcast distributes data from one process (the root) to all others in a communicator.
MPI_Bcast(start, count, datatype, source, comm)
[Diagram: the root's value is copied to every process P0–P3]
count = 1;
source = 1;
MPI_Bcast(buffer, count, MPI_INT, source, MPI_COMM_WORLD);
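A self-contained sketch of the same idea (the value 42 and rank 0 as the root are illustrative, not from the slide): only the root initializes the variable, and after MPI_Bcast every process holds a copy.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;                 /* only the root has the value initially */

    /* after the call, every rank's copy of value is 42 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}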

MPI_Scatter distributes distinct data from one process (the root) to each process in a communicator.
MPI_Scatter(sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, source, comm)
[Diagram: distinct pieces of the root's buffer go to P0–P3]
sendcnt = 1;
recvcnt = 1;
source = 1;
MPI_Scatter(sendbuf, sendcnt, MPI_INT, recvbuf, recvcnt,
MPI_INT, source, MPI_COMM_WORLD);
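A hedged, complete sketch of a scatter (the buffer contents and rank 0 as the root are illustrative): the root builds one int per process, and each process, including the root, receives exactly one.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, i, mine;
    int *sendbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* only the root needs the send buffer */
        sendbuf = malloc(size * sizeof(int));
        for (i = 0; i < size; i++)
            sendbuf[i] = 100 + i;          /* one element destined for each rank */
    }

    /* every process, including the root, receives exactly one int */
    MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, mine);

    if (rank == 0) free(sendbuf);
    MPI_Finalize();
    return 0;
}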

MPI_Gather gathers distinct data from each process in a communicator to a single destination task.
Reverse operation of MPI_Scatter
MPI_Gather(sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, dest, comm)
[Diagram: one piece from each of P0–P3 is collected at the root]
sendcnt = 1;
recvcnt = 1;
MPI_Gather(sendbuf, sendcnt, MPI_INT, recvbuf, recvcnt,
MPI_INT, dest, MPI_COMM_WORLD);
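A complementary sketch for gather (the contributed values and rank 0 as the destination are illustrative): each process computes one int and the root collects them in rank order.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, i, mine;
    int *recvbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = rank * rank;                    /* each process contributes one value */

    if (rank == 0)                         /* only the root needs the receive buffer */
        recvbuf = malloc(size * sizeof(int));

    MPI_Gather(&mine, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (i = 0; i < size; i++)
            printf("from rank %d: %d\n", i, recvbuf[i]);
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}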

MPI_Reduce applies a reduction operation on all processes in the communicator and places the result in one process.
MPI_Reduce(sendbuf, recvbuf, count, datatype, op, dest, comm)
[Diagram: values from P0–P3 are combined into a single result at the root]
count = 1;
MPI_Reduce(sendbuf, recvbuf, count, MPI_INT, MPI_SUM, dest,
MPI_COMM_WORLD);
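A small sketch of a sum reduction (the contributed values and rank 0 as the destination are illustrative): each rank contributes rank+1 and only the root ends up with the total.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, local, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = rank + 1;                      /* each rank contributes rank+1 */

    /* the sum of 1..size lands only on rank 0 */
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d (expected %d)\n", sum, size * (size + 1) / 2);

    MPI_Finalize();
    return 0;
}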

MPI Reduce Operations
MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND, MPI_LOR, MPI_BAND, MPI_BOR, MPI_MAXLOC, MPI_MINLOC

Example: Pi
Sequential code
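The slide's listing is not reproduced above; the following is a minimal sketch of the usual sequential approach, assuming the standard midpoint-rule integration of 4/(1+x^2) over [0,1] (the interval count n is illustrative).

#include <stdio.h>

int main(void)
{
    int i, n = 1000000;                 /* number of intervals (illustrative) */
    double h, x, sum = 0.0, pi;

    h = 1.0 / (double)n;
    for (i = 0; i < n; i++) {
        x = h * ((double)i + 0.5);      /* midpoint of interval i */
        sum += 4.0 / (1.0 + x * x);     /* integrand 4/(1+x^2) integrates to pi on [0,1] */
    }
    pi = h * sum;
    printf("pi is approximately %.16f\n", pi);
    return 0;
}

A typical MPI parallelization broadcasts n with MPI_Bcast, lets each rank sum every ntasks-th interval, and combines the partial sums on the root with MPI_Reduce using MPI_SUM.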
