
FIT3143 Tutorial Week 5
Lecturers: ABM Russel (MU Australia) and (MU Malaysia)
DISTRIBUTED MEMORY
OBJECTIVES


• The purpose of this tutorial is to introduce Parallel Computing on distributed memory.
• Understand the basic concepts of the Message Passing Interface (MPI).
Note: Tutorials are not assessed. Nevertheless, please attempt the questions to improve
your unit comprehension in preparation for the labs, assignments, and final assessments.
QUESTIONS/TOPICS
1. Compare and contrast Remote Procedure Call (RPC) and Inter-Process Communication (IPC).
Sample solution from: https://medium.com/codex/a-peek-at-ipc-and-rpc-d7af6db29b99
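To make the contrast concrete, here is a minimal sketch (not from the linked article): IPC is the raw exchange of messages between local processes, while RPC layers ordinary function-call semantics on top of that messaging. The sketch uses Python's `multiprocessing.Pipe` for the IPC channel and a hypothetical stub `rpc_add` for the RPC side.

```python
# Sketch: raw IPC (send/recv of messages) vs an RPC-style stub that
# hides the messaging behind a normal function call.
from multiprocessing import Process, Pipe

def server(conn):
    """Serves 'add' requests until it receives the sentinel None."""
    while True:
        request = conn.recv()          # IPC: receive a raw message
        if request is None:
            break
        op, a, b = request
        if op == "add":
            conn.send(a + b)           # IPC: send a raw reply

def rpc_add(conn, a, b):
    """RPC-style stub: looks like a local call, but messages the server."""
    conn.send(("add", a, b))
    return conn.recv()                 # blocks until the reply arrives

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=server, args=(child,))
    p.start()
    print(rpc_add(parent, 2, 3))       # prints 5
    parent.send(None)                  # shut the server down
    p.join()
```

Note how the caller of `rpc_add` never sees the message format; that encapsulation, plus marshalling and failure handling, is what distinguishes RPC from bare IPC.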
2. Discuss the pros and cons of one of the three techniques for implementing concurrent access to multiple servers (threads, early reply, call buffering). You can approach this question with a literature search using Google Scholar (https://scholar-google-com-au.ezproxy.lib.monash.edu.au/).
a) Threads – Very flexible and allows programmers to obtain optimal performance on the client side. However, it is much more difficult to program and requires significant effort from the developer on the client side.
b) Early reply – Allows computation on the client and the server to overlap; however, it doubles the number of messages that must be sent and can leave the server machine underutilized.
c) Call buffering – A robust and fault-tolerant technique that frees the client of responsibility for a request once it has been buffered. As above, it allows concurrent computation on the client and server, and the buffer service can better utilize and manage the server's time. The drawback is that more messages are required for a single operation.
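The call-buffering idea from (c) can be sketched as follows. This is an illustrative single-machine model, not a networked implementation: a thread-safe queue stands in for the buffer service, a worker thread stands in for the server, and `buffered_call` is a hypothetical client-side helper that deposits a request and returns immediately.

```python
# Sketch: call buffering. The client enqueues a request and is free at
# once; the buffer service feeds requests to the server one at a time.
import queue
import threading

call_buffer = queue.Queue()   # the buffer service's request store
results = {}                  # completed results, keyed by request id

def server_worker():
    """Drains the buffer, executing one call at a time on the 'server'."""
    while True:
        item = call_buffer.get()
        if item is None:                  # sentinel: no more requests
            break
        req_id, func, args = item
        results[req_id] = func(*args)     # the actual server computation

def buffered_call(req_id, func, *args):
    """Client side: enqueue the call and return at once (non-blocking)."""
    call_buffer.put((req_id, func, args))

worker = threading.Thread(target=server_worker)
worker.start()

buffered_call("r1", pow, 2, 10)    # client continues immediately
buffered_call("r2", max, 3, 7)
call_buffer.put(None)              # signal shutdown
worker.join()
print(results)                     # {'r1': 1024, 'r2': 7}
```

Because the buffer outlives any individual exchange, a real buffer service can also retry or reorder requests, which is where the fault tolerance mentioned above comes from.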

3. How is the Message Passing Interface (MPI) different from multithreading using OpenMP? Discuss this in relation to intra- and inter-process communication.
Solution from: http://www.umsl.edu/~siegelj/CS4740_5740/MPIandOpenMP/MPI_OMP.html
The Message Passing Interface (MPI) is designed to enable parallel programming through communication between processes, and runs on distributed-memory machines such as networked clusters, as well as on shared-memory high-performance machines and hybrid clusters.
OpenMP is an implementation of multithreading, a method of parallelization implemented using compiler pragmas, whereby a master thread and its slave threads jointly perform a task in parallel within a single process.
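The distinction can be illustrated without MPI or OpenMP themselves. The sketch below uses Python stand-ins: threads share one address space and can update a variable directly (intra-process communication, the OpenMP model), while processes have separate address spaces and must exchange messages (inter-process communication, the MPI model).

```python
# Sketch (illustrative, not MPI/OpenMP): shared memory across threads vs
# message passing between processes.
import threading
from multiprocessing import Process, Queue

counter = [0]                     # shared state: visible to all threads
lock = threading.Lock()           # like an OpenMP 'critical' section

def thread_work():
    with lock:
        counter[0] += 1           # intra-process: direct shared-memory update

def process_work(q):
    q.put(1)                      # inter-process: no shared state, send a message

threads = [threading.Thread(target=thread_work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])                 # 4: every thread updated the same list

q = Queue()
procs = [Process(target=process_work, args=(q,)) for _ in range(4)]
for p in procs:
    p.start()
total = sum(q.get() for _ in range(4))   # combine messages, akin to a reduce
for p in procs:
    p.join()
print(total)                      # 4: result assembled from messages
```

Note that the shared-memory path needs a lock to avoid a race, whereas the message-passing path needs explicit communication to assemble a result; these are exactly the trade-offs OpenMP and MPI programmers face.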
4. Message Passing can be synchronous or asynchronous. Discuss some common benefits and issues of asynchronous communication.
+ Benefits: non-blocking sends, greater concurrency, and speed (the sender does not wait for the receiver).
– Issues: GUI freezes when callbacks are mishandled, callback complexity, message ordering, buffer management, reaching consensus, race conditions, and deadlock.
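A minimal sketch of asynchronous message passing (names are illustrative): the sender posts messages into a buffer and continues immediately; the receiver drains the buffer later. FIFO order is preserved for a single sender, but with several senders the interleaving, and hence the global order, is nondeterministic, which is the ordering issue listed above.

```python
# Sketch: asynchronous (non-blocking) send through an unbounded buffer.
import queue
import threading

mailbox = queue.Queue()           # unbounded buffer: puts never block

def sender():
    for i in range(3):
        mailbox.put(i)            # returns immediately (asynchronous send)
    mailbox.put(None)             # end-of-stream marker

def receiver(out):
    while True:
        msg = mailbox.get()       # blocks only while the buffer is empty
        if msg is None:
            break
        out.append(msg)

received = []
s = threading.Thread(target=sender)
r = threading.Thread(target=receiver, args=(received,))
s.start(); r.start()
s.join(); r.join()
print(received)                   # [0, 1, 2]: FIFO preserved for one sender
```

A synchronous version would instead block each `put` until the matching `get` occurred (a rendezvous), trading concurrency for the guarantee that the sender knows the message was delivered.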
