
Demonstrating MPI (Message Passing Interface) Using Java
This tutorial will teach you how to develop MPI programs using Java. We use the MPJ Express library (http://mpj-express.org/), a wrapper around the original MPI library. The library has already been integrated with the Eclipse installation inside your KIT318 VM on your desktop.
Hello World program
To write any MPI program you first need to import the mpi package. Next, initialise MPI's global and internal variables by calling the Init() method of the MPI static class. Init() takes one argument, the same args array passed to main. The initialisation step creates a communicator and assigns a rank to each process, starting from 0.


After MPI.Init, two methods are called that appear in almost every MPI program you will write: Rank() returns the id of the calling process, and Size() returns the number of processes in MPI.COMM_WORLD. Example: int me = MPI.COMM_WORLD.Rank();
Print "Hello World" together with the process id using System.out.println() and the me variable. Finally, call MPI.Finalize() to clean up the MPI environment; it is the last MPI call in the program.
Listing 1: Hello World Example.

import mpi.*;

public class HelloWorld {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);
        int me = MPI.COMM_WORLD.Rank();
        System.out.println("Hello World from <" + me + ">");
        MPI.Finalize();
    }
}

Running Application
Figure 1: Running Application
Initialising Parameters
Figure 2: Initialising Parameters
Figure 3: Output

Blocking Send and Receive Example
Communication is performed in two ways: synchronous (blocking) and asynchronous (non-blocking). A blocking send and receive in MPI transfers a message from one process to another synchronously (the call does not return until the communication has finished). In MPI, blocking communication is done using the MPI.COMM_WORLD.Send() method and the MPI.COMM_WORLD.Recv() method.
Note: Rank = 0 is the master process and Rank ≥ 1 are the slaves.
Listing 2: Blocking Send and Receive Hello World Example.

import mpi.*;

class Hello {
    static public void main(String[] args) throws MPIException {
        MPI.Init(args);
        int myrank = MPI.COMM_WORLD.Rank();
        if (myrank == 0) {
            char[] message = "Hello, there".toCharArray();
            MPI.COMM_WORLD.Send(message, 0, message.length, MPI.CHAR, 1, 99);
        } else {
            char[] message = new char[20];
            MPI.COMM_WORLD.Recv(message, 0, 20, MPI.CHAR, 0, 99);
            System.out.println("received:" + new String(message) + ":");
        }
        MPI.Finalize();
    }
}
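The rendezvous behaviour of a blocking transfer can be sketched in plain Java, without MPI, using a SynchronousQueue, whose put() and take() each block until the other side arrives. This is only a loose model (a real MPI Send may also complete via internal buffering), and the class and method names below are illustrative, not part of MPJ Express:

```java
import java.util.concurrent.SynchronousQueue;

public class BlockingDemo {
    // Simulates a blocking Send/Recv pair: put() blocks until take() runs
    // on the other thread, mirroring how a blocking Send does not return
    // until the communication has finished.
    static String sendAndReceive(String message) throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();
        StringBuilder received = new StringBuilder();
        Thread receiver = new Thread(() -> {
            try {
                received.append(channel.take());   // blocks until the sender's put()
            } catch (InterruptedException ignored) { }
        });
        receiver.start();
        channel.put(message);                      // blocks until the receiver's take()
        receiver.join();                           // join() also makes 'received' visible here
        return received.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("received:" + sendAndReceive("Hello, there") + ":");
    }
}
```

If either side never shows up, the other blocks forever, which is exactly the deadlock risk of mismatched blocking calls in MPI.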
Following is the syntax of blocking Send.
void Comm.Send(Object buf, int offset, int count, Datatype datatype, int dest, int tag)

buf: send buffer array
offset: initial offset in send buffer
count: number of items to send
datatype: datatype of each item in send buffer
dest: rank of destination
tag: message tag
Blocking send operation; the Java binding of the MPI operation MPI_SEND. The data part of the message consists of a sequence of count values, each of the type indicated by datatype. The actual argument associated with buf must be an array. The value offset is a subscript in this array, defining the position of the first item of the message. The elements of buf may have primitive type or class type. If the elements are objects, they must be serializable. If the datatype argument represents an MPI basic type, its value must agree with the element type of buf. The basic MPI datatypes supported, and their correspondence to Java types, are as follows:
MPI datatype     Java datatype
MPI.BYTE         byte
MPI.CHAR         char
MPI.SHORT        short
MPI.BOOLEAN      boolean
MPI.INT          int
MPI.LONG         long
MPI.FLOAT        float
MPI.DOUBLE       double
MPI.OBJECT       Object
Status Comm.Recv(Object buf, int offset, int count, Datatype datatype, int source, int tag)

buf: receive buffer array
offset: initial offset in receive buffer
count: number of items in receive buffer
datatype: datatype of each item in receive buffer
source: rank of source
tag: message tag
returns: status object
Blocking receive operation; the Java binding of the MPI operation MPI_RECV. The actual argument associated with buf must be an array. The value offset is a subscript in this array, defining the position into which the first item of the incoming message will be copied. The elements of buf may have primitive type or class type. If the datatype argument represents an MPI basic type, its value must agree with the element type of buf. The MPI constants MPI_ANY_SOURCE and MPI_ANY_TAG are available as MPI.ANY_SOURCE and MPI.ANY_TAG; they allow a receive to match a message from any source or with any tag. The source and tag of the received message are available in the publicly accessible source and tag fields of the returned Status object, which can be used to further interrogate the result of a receive operation.
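The matching rule for a receive can be sketched in plain Java, without MPI: a message is delivered to a receive only if both its source and its tag agree with what the receive asked for, where the ANY_* wildcards match anything. The class, method, and constant values below are illustrative, not the real MPJ Express constants:

```java
public class MatchDemo {
    // Illustrative wildcard values; the real MPI.ANY_SOURCE / MPI.ANY_TAG
    // constants are library-defined and may differ.
    static final int ANY_SOURCE = -1, ANY_TAG = -1;

    // Simplified model of receive matching: a message with envelope
    // (msgSource, msgTag) satisfies a receive asking for (wantSource, wantTag)
    // if each field either matches exactly or is the wildcard.
    static boolean matches(int msgSource, int msgTag, int wantSource, int wantTag) {
        return (wantSource == ANY_SOURCE || wantSource == msgSource)
            && (wantTag == ANY_TAG || wantTag == msgTag);
    }

    public static void main(String[] args) {
        System.out.println(matches(0, 99, ANY_SOURCE, 99)); // true: any source, tag agrees
        System.out.println(matches(1, 99, 0, ANY_TAG));     // false: source 1 is not 0
    }
}
```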
Blocking Send and Receive Example
In the program below, the master sends an array of elements to the slave using the MPI.COMM_WORLD.Send() method. It takes 6 arguments. The first argument is the data to send, here an array of integers. The second argument is the starting index; 0 means send from the beginning of the array. The third argument is the number of items; 2 means send only 2 numbers. The fourth argument is the datatype; MPI.INT indicates integers. The fifth argument is the destination rank; 1 indicates the slave. The last argument is the message tag; 0 here.
Listing 3: Blocking Send and Receive Example.

import java.util.Arrays;
import mpi.*;

public class SendREcv {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);
        int me = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        int[] number = new int[10];
        if (me == 0) {
            number[0] = 100;
            number[1] = 101;
            number[2] = 102;
            MPI.COMM_WORLD.Send(number, 0, 2, MPI.INT, 1, 0);
        } else if (me == 1) {
            MPI.COMM_WORLD.Recv(number, 0, 2, MPI.INT, 0, MPI.ANY_TAG);
            System.out.println("Process 1 received number from process " + me + Arrays.toString(number));
        }
        MPI.Finalize();
    }
}
The slave process receives the array of elements from the master using the MPI.COMM_WORLD.Recv() method. It also takes 6 arguments. The first argument is the buffer that receives the data, here an array of integers. The second argument is the starting index; 0 means write from the beginning of the buffer. The third argument is the number of items; 2 means read only 2 numbers. The fourth argument is the datatype; MPI.INT indicates integers. The fifth argument is the source rank; 0 indicates the master. The last argument is the message tag; here a wildcard is passed so a message with any tag is accepted.
Finally, print the received data in the slave using the System.out.println() method, converting the array to a String with Arrays.toString().
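Note that the receive buffer is declared with length 10 but only 2 values arrive, so Java's default int initialisation leaves the remaining slots at 0. The trailing zeros in the printed output come from that, as this small MPI-free sketch shows (the class and method names are illustrative):

```java
import java.util.Arrays;

public class PrintDemo {
    // Builds the same message the listing prints: rank concatenated with
    // the stringified buffer, trailing zeros included.
    static String format(int rank, int[] buffer) {
        return "Process 1 received number from process " + rank + Arrays.toString(buffer);
    }

    public static void main(String[] args) {
        int[] number = new int[10];  // all slots start at 0
        number[0] = 100;             // only the first two slots are filled,
        number[1] = 101;             // as if Recv copied in 2 items
        System.out.println(format(0, number));
        // → Process 1 received number from process 0[100, 101, 0, 0, 0, 0, 0, 0, 0, 0]
    }
}
```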

Running Application
Figure 4: Running Application
Initialising Parameters
Figure 5: Initialising Parameters

Figure 6: Output

Blocking Send and Receive From Both Master and Slave Example
A blocking send and receive in MPI transfers a message from one process to another synchronously (the call does not return until the communication has finished). In MPI, blocking communication is done using the MPI.COMM_WORLD.Send() method and the MPI.COMM_WORLD.Recv() method.
Note: Rank = 0 is the master process and Rank ≥ 1 are the slaves.
Listing 4: Blocking Send and Receive Example from both master and slave.

import java.util.Arrays;
import mpi.*;

public class SendREcv2 {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);
        int me = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        int[] number = new int[10];
        if (me == 0) {
            number[0] = 100;
            number[1] = 101;
            number[2] = 102;
            MPI.COMM_WORLD.Send(number, 0, 2, MPI.INT, 1, 0);
            int[] number1 = new int[2];
            MPI.COMM_WORLD.Recv(number1, 0, 2, MPI.INT, 1, MPI.ANY_TAG);
            System.out.println("Slave Replied with " + me + Arrays.toString(number1));
        } else if (me == 1) {
            MPI.COMM_WORLD.Recv(number, 0, 2, MPI.INT, 0, MPI.ANY_TAG);
            System.out.println("Master Sent " + me + Arrays.toString(number));
            MPI.COMM_WORLD.Send(number, 0, 2, MPI.INT, 0, 0);
        }
        MPI.Finalize();
    }
}
In this program, the master sends an array of elements to the slave using the MPI.COMM_WORLD.Send() method with the same 6 arguments as before: the data (an array of integers), the starting index (0), the number of items (2), the datatype (MPI.INT), the destination rank (1, the slave), and the message tag (0).
The master then waits until the slave process sends data back.
The slave process receives the array from the master using MPI.COMM_WORLD.Recv(): the receive buffer, the starting index (0), the number of items (2), the datatype (MPI.INT), the source rank (0, the master), and the tag (a wildcard that accepts any tag).
After receiving, the slave prints the data using System.out.println() with Arrays.toString(), and sends the received data back to the master using the MPI.COMM_WORLD.Send(number, 0, 2, MPI.INT, 0, 0) method.
The master process then continues its execution by receiving the data from the slave and printing it using System.out.println() with Arrays.toString().
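The full round trip of Listing 4 can be sketched in plain Java with two threads standing in for the master and slave processes, and SynchronousQueues standing in for the blocking Send/Recv pairs. This is only an analogy, and all names below are illustrative, not part of MPJ Express:

```java
import java.util.Arrays;
import java.util.concurrent.SynchronousQueue;

public class RoundTripDemo {
    // Master thread sends an array to the slave thread, which echoes it back.
    // Each put()/take() pair blocks until both sides arrive, mirroring the
    // blocking Send/Recv ordering in the listing.
    static int[] roundTrip(int[] data) throws InterruptedException {
        SynchronousQueue<int[]> toSlave = new SynchronousQueue<>();
        SynchronousQueue<int[]> toMaster = new SynchronousQueue<>();
        Thread slave = new Thread(() -> {
            try {
                int[] received = toSlave.take();   // slave's blocking Recv
                toMaster.put(received);            // slave's blocking Send
            } catch (InterruptedException ignored) { }
        });
        slave.start();
        toSlave.put(data);                         // master's blocking Send
        int[] reply = toMaster.take();             // master's blocking Recv
        slave.join();
        return reply;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] reply = roundTrip(new int[] {100, 101});
        System.out.println("Slave Replied with " + Arrays.toString(reply));
    }
}
```

If the master did its Recv before its Send while the slave did the same, both sides would block forever; the listing avoids this by pairing each Send with a Recv in the opposite order on the other rank.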
Figure 7: Output

Sum of Array Elements with One Master and Slave
The master (root) process sends the array of elements using the Send() method. The slave process receives the data, calculates the sum of the elements, and prints the total sum.

Input: 100, 101, 102, 103, 104, 105, 106, 107, 108, 109
Master sends: 100, 101
Slave receives: 100, 101
Slave sums up and prints 201.

Listing 5: Sum of Array Elements with One Master and Slave.

import java.util.Arrays;
import mpi.*;

public class MPISendRecvSum {
    public static void main(String args[]) throws Exception {
        MPI.Init(args);
        int me = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        int[] number = new int[10];
        int[] number1 = new int[2];
        if (me == 0) {
            number[0] = 100;
            number[1] = 101;
            MPI.COMM_WORLD.Send(number, 0, 2, MPI.INT, 1, 0);
        } else if (me == 1) {
            MPI.COMM_WORLD.Recv(number1, 0, 2, MPI.INT, 0, MPI.ANY_TAG);
            System.out.println("Process 1 received number from process " + me + Arrays.toString(number1));
            int sum = 0;
            for (int i = 0; i < number1.length; i++) {
                sum += number1[i];
            }
            System.out.println("Total sum " + sum);
        }
        MPI.Finalize();
    }
}

Running Application
Figure 8: Running Application
Initialising Parameters
Figure 9: Initialising Parameters
Figure 10: Output

Sum of Array Elements with One Master and Multiple Slaves

Input: {100, 101, 102, 103, 104, 105, 106, 107, 108, 109}
The master splits the data into 2 chunks of 4 elements each. Each chunk is sent to a slave using the MPI.COMM_WORLD.Send(original, 0, chunk, MPI.INT, i + 1, 0) method.
Master sends to Slave 1: {100, 101, 102, 103}
Master sends to Slave 2: {104, 105, 106, 107}
The remaining data after splitting is processed by the master itself.
Master keeps: {108, 109}
The master adds all the elements in its chunk and stores the result in the sum variable: sum = 217.
Slave 1 receives the data {100, 101, 102, 103}, stores it in number1, adds all the elements, and gets sum = 406.
Slave 2 receives {104, 105, 106, 107}, stores it in number1, and gets sum = 422.
Slave 1 sends its sum (406) to the master; Slave 2 sends its sum (422) to the master.
The master receives Slave 1's sum (406) and adds it to its own sum (217), giving 623; it then receives Slave 2's sum (422), and the total sum 1045 is printed using the System.out.println() method.

Listing 6: Sum of Array Elements One Master and Multiple Slaves.
(Parts of this listing were lost in extraction; the missing fragments below are reconstructed to follow the walkthrough above.)

import java.util.Arrays;
import mpi.*;

public class MPISendRecvSum {
    public static void main(String args[]) {
        MPI.Init(args);
        int chunk = 4;
        int me = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        int[] number = new int[10];
        int[] number1 = new int[chunk];

        if (me == 0) {
            number[0] = 100;
            number[1] = 101;
            number[2] = 102;
            number[3] = 103;
            number[4] = 104;
            number[5] = 105;
            number[6] = 106;
            number[7] = 107;
            number[8] = 108;
            number[9] = 109;

            int sub = number.length;
            int counter = 0;
            int iterations = 0;
            int sum = 0;
            int totaldivisions = sub / chunk;

            // send one chunk of 4 elements to each slave
            for (int i = 0; i < totaldivisions; i++) {
                int[] original = new int[chunk];
                for (int j = 0; j < chunk; j++) {
                    original[j] = number[counter];
                    ++counter;
                }
                MPI.COMM_WORLD.Send(original, 0, chunk, MPI.INT, i + 1, 0);
            }
            // MPI.COMM_WORLD.Barrier();

            // the master sums the remaining elements itself
            int remaining = number.length - (totaldivisions * chunk);
            int[] original = new int[remaining];
            for (int k = 0; k < remaining; k++) {
                original[k] = number[counter];
                counter++;
            }
            for (int i = 0; i < original.length; i++) {
                sum += original[i];
            }
            iterations = totaldivisions;

            // receive the partial sums from the slaves and accumulate them
            for (int i = 0; i < iterations; i++) {
                int[] slaveSum = new int[1];
                MPI.COMM_WORLD.Recv(slaveSum, 0, 1, MPI.INT, i + 1, MPI.ANY_TAG);
                sum += slaveSum[0];
            }
            System.out.println("Total sum " + sum);
        } else {
            // each slave receives a chunk, sums it, and sends the sum back
            MPI.COMM_WORLD.Recv(number1, 0, chunk, MPI.INT, 0, MPI.ANY_TAG);
            int sum = 0;
            for (int i = 0; i < number1.length; i++) {
                sum += number1[i];
            }
            MPI.COMM_WORLD.Send(new int[] {sum}, 0, 1, MPI.INT, 0, 0);
        }
        MPI.Finalize();
    }
}
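The chunking arithmetic in the walkthrough (2 full chunks of 4 for the slaves, a 2-element remainder for the master, total 1045) can be checked without MPI in plain Java. ChunkSumDemo and totalSum are illustrative names, not part of the listing:

```java
public class ChunkSumDemo {
    // Reproduces the splitting arithmetic of the multi-slave example:
    // full chunks play the role of the slaves' work, the remainder is
    // the master's own share.
    static int totalSum(int[] number, int chunk) {
        int totaldivisions = number.length / chunk;   // number of full chunks
        int counter = 0;
        int sum = 0;
        for (int i = 0; i < totaldivisions; i++) {    // each "slave" sums one chunk
            int chunkSum = 0;
            for (int j = 0; j < chunk; j++) {
                chunkSum += number[counter++];
            }
            sum += chunkSum;                          // "master" accumulates partial sums
        }
        int remaining = number.length - totaldivisions * chunk;
        for (int k = 0; k < remaining; k++) {         // "master" sums the leftover elements
            sum += number[counter++];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] number = {100, 101, 102, 103, 104, 105, 106, 107, 108, 109};
        System.out.println("Total sum " + totalSum(number, 4));  // → Total sum 1045
    }
}
```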