
NWEN303 Concurrent Programming
Lecture 10
- Model solutions Ass1
- Starvation and semaphores
- More streams features
Marco Servetto, VUW

Model solutions for Ass1
public class MSequentialSorter implements Sorter {//actual merge sort
  @Override
  public <T extends Comparable<? super T>> List<T> sort(List<T> list) {
    if (list.size() <= 20) { return new ISequentialSorter().sort(list); }
    int half = list.size() / 2;
    List<T> left = sort(list.subList(0, half));
    List<T> right = sort(list.subList(half, list.size()));
    return merge(left, right);
  }
  //merge 'subroutine' that can be reused in the other solutions
  public static <T extends Comparable<? super T>> List<T> merge(List<T> a, List<T> b) {
    if (a.isEmpty()) { return b; }
    List<T> res = new ArrayList<>(a.size() + b.size());
    Iterator<T> ai = a.iterator();
    Iterator<T> bi = b.iterator();
    T currA = ai.next();//one element pending
    while (bi.hasNext()) {
      T currB = bi.next();//two elements pending
      if (currB.compareTo(currA) <= 0) {
        res.add(currB);//only one element pending
        continue;
      }
      res.add(currA);//only one element pending
      Iterator<T> tmp = ai; ai = bi; bi = tmp;//swap ai/bi
      currA = currB;//that is now called currA
    }
    res.add(currA);//zero elements pending
    ai.forEachRemaining(res::add);
    return res;
  }
}

Model solutions for Ass1
public class MParallelSorter1 implements Sorter {
  private static final ExecutorService pool = Executors.newCachedThreadPool();
  @Override
  public <T extends Comparable<? super T>> List<T> sort(List<T> list) {
    if (list.size() <= 20) { return new ISequentialSorter().sort(list); }
    int half = list.size() / 2;
    Future<List<T>> fLeft = pool.submit(() -> sort(list.subList(0, half)));
    List<T> right = sort(list.subList(half, list.size()));
    List<T> left;
    try { left = fLeft.get(); }
    catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new Error(e);
    }
    catch (ExecutionException e) { throw new Error(e.getCause()); }
    return MSequentialSorter.merge(left, right);
  }
}

Model solutions for Ass1
public class MParallelSorter2 implements Sorter {
  @Override
  public <T extends Comparable<? super T>> List<T> sort(List<T> list) {
    return recSort(list).join();
  }
  public <T extends Comparable<? super T>> CompletableFuture<List<T>> recSort(List<T> list) {
    if (list.size() <= 20) {
      return CompletableFuture.supplyAsync(() -> new ISequentialSorter().sort(list));
    }
    int half = list.size() / 2;
    CompletableFuture<List<T>> fLeft = recSort(list.subList(0, half));
    CompletableFuture<List<T>> fRight = recSort(list.subList(half, list.size()));
    return fLeft.thenCombineAsync(fRight,
      (left, right) -> MSequentialSorter.merge(left, right));
  }
}
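The key pattern here is thenCombineAsync: two independent futures run in parallel and are merged by a combining function once both complete. A minimal self-contained sketch of that pattern, with a toy combiner (Integer::sum) standing in for the course's merge:

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
  public static int sumAsync() {
    //two independent asynchronous computations...
    CompletableFuture<Integer> left = CompletableFuture.supplyAsync(() -> 2);
    CompletableFuture<Integer> right = CompletableFuture.supplyAsync(() -> 3);
    //...merged by a combining function when both are done
    return left.thenCombineAsync(right, Integer::sum).join();
  }
  public static void main(String[] args) {
    System.out.println(sumAsync());//prints 5
  }
}
```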

Model solutions for Ass1
public class MParallelSorter3 implements Sorter {
  @Override
  public <T extends Comparable<? super T>> List<T> sort(List<T> list) {
    return new MTask<>(list).compute();
  }
}
@SuppressWarnings("serial")
class MTask<T extends Comparable<? super T>> extends RecursiveTask<List<T>> {
  MTask(List<T> list) { this.list = list; }
  List<T> list;
  public List<T> compute() {
    if (list.size() <= 20) { return new ISequentialSorter().sort(list); }
    int half = list.size() / 2;
    MTask<T> left = new MTask<>(list.subList(0, half));
    MTask<T> right = new MTask<>(list.subList(half, list.size()));
    invokeAll(left, right);
    return MSequentialSorter.merge(left.join(), right.join());
  }
}

Performance (2019)
On the data type Float
Sequential merge sort takes 1.111 seconds
Parallel merge sort (futures) takes 0.544 seconds
Parallel merge sort (completable futures) takes 0.327 seconds
Parallel merge sort (forkJoin) takes 0.302 seconds
On the data type Point
Sequential merge sort takes 1.239 seconds
Parallel merge sort (futures) takes 0.663 seconds
Parallel merge sort (completable futures) takes 0.448 seconds
Parallel merge sort (forkJoin) takes 0.403 seconds
On the data type BigInteger
Sequential merge sort takes 1.641 seconds
Parallel merge sort (futures) takes 0.922 seconds
Parallel merge sort (completable futures) takes 0.464 seconds
Parallel merge sort (forkJoin) takes 0.46 seconds

Performance (2021)
On the data type Float
Sequential merge sort takes 0.635 seconds
Parallel merge sort (futures) takes 0.475 seconds
Parallel merge sort (completable futures) takes 0.25 seconds
Parallel merge sort (forkJoin) takes 0.253 seconds
On the data type Point
Sequential merge sort takes 0.76 seconds
Parallel merge sort (futures) takes 0.505 seconds
Parallel merge sort (completable futures) takes 0.279 seconds
Parallel merge sort (forkJoin) takes 0.296 seconds
On the data type BigInteger
Sequential merge sort takes 0.871 seconds
Parallel merge sort (futures) takes 0.574 seconds
Parallel merge sort (completable futures) takes 0.354 seconds
Parallel merge sort (forkJoin) takes 0.338 seconds

Model solutions for Ass1
public class MParallelSorter1 implements Sorter {
  private static final ExecutorService pool = Executors.newWorkStealingPool();
  @Override
  public <T extends Comparable<? super T>> List<T> sort(List<T> list) {
    if (list.size() <= 20) { return new ISequentialSorter().sort(list); }
    int half = list.size() / 2;
    Future<List<T>> fLeft = pool.submit(() -> sort(list.subList(0, half)));
    List<T> right = sort(list.subList(half, list.size()));
    List<T> left;
    try { left = fLeft.get(); }
    catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new Error(e);
    }
    catch (ExecutionException e) { throw new Error(e.getCause()); }
    return MSequentialSorter.merge(left, right);
  }
}
A better solution for futures, just changing the ExecutorService:
On the data type Float
Parallel merge sort (futures) takes 0.241 seconds //was 0.475
On the data type Point
Parallel merge sort (futures) takes 0.296 seconds //was 0.505
On the data type BigInteger
Parallel merge sort (futures) takes 0.372 seconds //was 0.574

Starvation

Starvation
● Those machines… 1 minute to make coffee, 20 seconds to make tea.
● So, if you are a responsible coffee-wanting citizen and someone behind you wants tea, you should let them go first:
● if you go first, you wait 60 seconds and they wait 80: (60+80)/2 = 70 seconds average;
● if they go first, they wait 20 and you wait 80: (20+80)/2 = 50 seconds average.
● Now, assume you are at the front of the queue and behind you there are 1000 tea-wanting colleagues… what should you do?

Starvation formally
● A task is starved when it is somehow never able to complete its role, because something else always ends up having precedence.
● It is not deadlock or livelock: at any moment it could happen that no more tea-wanting customers arrive. But we cannot give an estimate of how long that can take.
● Be careful! Some students confuse starvation with deadlock, since in the earlier example (the dining philosophers) the philosophers also end up unable to eat. Please do not fall into this mental trap. The main intuitive difference is that in deadlock nothing can happen, while in starvation too much happens at the same time and some process is left behind.

Starvation
● In most cases starvation is caused by badly designed/implemented priority management. Thus it is the kind of issue that arises out of other optimization attempts!
● It is definitely rarer than deadlocks or livelocks!
● If you encounter problems with starvation, you are probably designing something that looks a lot like a mini operating system.
● To the best of my understanding, from the most common issue to the rarest one, we have:
race condition, deadlock, livelock and starvation.
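One concrete countermeasure, when the waiting entities queue on a semaphore, is fairness: Java's java.util.concurrent.Semaphore (used in the examples later in this lecture) can be constructed with a fairness flag, so permits are granted in FIFO order and no waiting thread is overtaken forever. A minimal sketch (class and method names invented for this illustration):

```java
import java.util.concurrent.Semaphore;

public class FairnessDemo {
  public static boolean isFairByDefault() {
    return new Semaphore(1).isFair();//default construction: no FIFO guarantee
  }
  public static boolean canBeFair() {
    //the two-argument constructor enables FIFO granting of permits,
    //ruling out starvation among the threads waiting on this semaphore
    return new Semaphore(1, true).isFair();
  }
  public static void main(String[] args) {
    System.out.println(isFairByDefault() + " " + canBeFair());//prints false true
  }
}
```

Fairness costs some throughput, which is why it is off by default.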

Semaphores


Semaphores
● Concept invented in 1965.
● A basic abstraction that allows us to define the other abstractions we have seen.
● Widely used in operating systems and other low-level, high-performance devices/dedicated architectures.





Semaphores
● A semaphore encapsulates an integer variable, a queue, a blocking/awaking mechanism, and atomic increment and decrement operations.
● Assume there is a bridge that can withstand the weight of only 2 cars at any given time.
● The semaphore will let only 2 cars in, and a sensor at the end will record when a car is off the bridge.
● If at any moment 2 cars are on the bridge, the semaphore turns red, and a queue of blocked cars starts forming in front of it.
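The bridge above can be sketched with Java's java.util.concurrent.Semaphore; the Bridge class and its method names are invented for this illustration:

```java
import java.util.concurrent.Semaphore;

//Illustrative sketch of the two-car bridge (class/method names invented here).
public class Bridge {
  private final Semaphore capacity = new Semaphore(2);//at most 2 cars at once
  public void enter() throws InterruptedException {
    capacity.acquire();//blocks (queues the car) if 2 cars are already on the bridge
  }
  public void exit() {
    capacity.release();//the sensor at the end: one slot is free again
  }
  public int freeSlots() {
    return capacity.availablePermits();
  }
}
```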

Semaphores
A trace of the bridge semaphore — [queue of waiting cars], permit count, then the road:
[CarA,CarB,CarC], 2   ..............................
[CarA,CarB],      1   ..CarC........................
[CarA],           0   ......CarB..........CarC......
[CarD,CarA],      0   ...........CarB..........CarC.
[CarD,CarA],      0   ................CarB......CarC
[CarD,CarA],      0   ................CarB......CarC
[CarD,CarA],      1   ................CarB..........   (CarC: "Hey, I'm DONE!")

Semaphores
There are conceptually three basic operations on semaphores:
● create a semaphore, with an initial value,
● ask to pass (may block the computation),
● release it (say that you are done!).
Note: the existence of the queue is hidden by the abstraction!
Obviously, semaphores with value 0 or 1 solve the critical section problem. Note that processes are free to say that they are done when they want, if they want, and as many times as they want. That can create problems, of course.





Semaphores and Mutex/Locks
● Mathematically, a lock is a simpler concept, since it is in only one of two states: free and acquired.
● Using a lock, it is possible to implement a semaphore.
● However, a direct implementation of the semaphore concept is faster.
● A lock implemented using a semaphore storing either 0 or 1 is as fast as a lock implemented directly.
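The last bullet can be sketched directly: a binary semaphore (initial value 1) gives a simple, non-reentrant lock. Class and method names here are invented for this example:

```java
import java.util.concurrent.Semaphore;

//A simple non-reentrant lock built from a binary semaphore (sketch).
public class SemaphoreLock {
  private final Semaphore s = new Semaphore(1);//1 = free, 0 = acquired
  public void lock() throws InterruptedException { s.acquire(); }
  public void unlock() { s.release(); }
  public boolean tryLock() { return s.tryAcquire(); }//non-blocking attempt
}
```

Note that nothing stops a thread from calling unlock() without holding the lock — exactly the "processes are free to say they are done" problem mentioned above.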


Semaphore, a mental model
The simplest way to think of a semaphore is as an abstraction that allows n units to be acquired, and offers acquire and release mechanisms. It safely allows us to ensure that only n tasks can access a certain resource at a given time. However, no distinction is made between the different resources; only the number of used resources is counted.
What purpose would this serve?

Java Example
public class ConnectionLimiter {
  private final Semaphore semaphore;
  ConnectionLimiter(int maxConcurrentRequests) {
    semaphore = new Semaphore(maxConcurrentRequests);
  }
  URLConnection acquire(URL url)
      throws InterruptedException, IOException {
    semaphore.acquire();
    return url.openConnection();
  }
  void release(URLConnection conn) {
    try {
      /*
       * ... clean up here
       */
    }
    finally { semaphore.release(); }
  }
}
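One subtlety of the example above: if openConnection() throws after acquire(), the permit is never released. A common remedy (a sketch, not the lecture's code; names are invented) is to hide acquire/release behind a single method with a finally block:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

//Sketch: a limiter that pairs acquire/release internally, so a permit
//cannot leak even when the limited task throws. Names are illustrative.
public class Limiter {
  private final Semaphore semaphore;
  public Limiter(int maxConcurrent) { semaphore = new Semaphore(maxConcurrent); }
  public <T> T run(Callable<T> task) throws Exception {
    semaphore.acquire();
    try { return task.call(); }
    finally { semaphore.release(); }//runs also when task.call() throws
  }
  public int freePermits() { return semaphore.availablePermits(); }
}
```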


Semaphore, use cases
● Some hardware slows down if overloaded.
● Disk, network, sometimes memory.
● Some algorithms just work better for a smaller set of entities.
● You just want no more than 15 players to play together on a given map.
● Semaphores can help with these kinds of very high-level tasks. Further synchronization may be needed to give the specific individual resource to the right entity.
● Paradoxical, since they are also one of the lowest-level concepts ever…

Semaphores, comments
Here, some comments from experts in the field!
● It's hard to come up with a simple example for semaphores – most simple things can be done just as effectively using synchronized blocks.
● They are useful in operating system code, or concurrent code in some sort of embedded device.
● Semaphores will often be used in implementing the kinds of concurrent collections you find in Java.
● A very nice situation where Java synchronized blocks don't work is the hand-over-hand locking used in some linked-list algorithms, where you have to release locks/semaphores in the same order that you acquire them, whereas Java synchronized blocks release in reverse order.
● If you are using Java, and you can use any other abstraction Java and third-party libraries can offer, and you have memory to waste… in this case, why would you want to use semaphores?
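The hand-over-hand pattern from the comments above can be sketched with a binary semaphore per node; the node structure and method names are invented for this illustration. The next node's semaphore is acquired before the current one is released, so locks are released in acquisition order — the ordering synchronized blocks cannot express:

```java
import java.util.concurrent.Semaphore;

//Sketch of hand-over-hand (lock-coupling) traversal; structure invented here.
class Node {
  final int value;
  final Node next;
  final Semaphore lock = new Semaphore(1);//one binary semaphore per node
  Node(int value, Node next) { this.value = value; this.next = next; }
}
class HandOverHand {
  static boolean contains(Node head, int target) throws InterruptedException {
    if (head == null) { return false; }
    head.lock.acquire();
    Node curr = head;
    while (curr != null) {
      if (curr.value == target) { curr.lock.release(); return true; }
      Node next = curr.next;
      if (next != null) { next.lock.acquire(); }//take the NEXT lock first...
      curr.lock.release();//...then release the current one: acquisition order
      curr = next;
    }
    return false;
  }
}
```

Holding two locks across each step means concurrent writers can never overtake a reader mid-traversal.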





Streams: useful operations
anyMatch(…), allMatch(…), noneMatch(…)
These allow checking an arbitrary condition on the elements of a collection.
Very useful inside assertions!
List<String> ss = List.of("a", "aa", "b", "bb");
assert !ss.stream().anyMatch(s -> s.length() == 5);
assert !ss.stream().allMatch(s -> s.length() == 2);
assert ss.stream().noneMatch(s -> s.length() == 5);


Streams: useful operations
flatMap(…)
This allows unfolding nested collections.
List<List<String>> ss = List.of(
  List.of("a"), List.of("aa", "b"), List.of(), List.of("bb"));
long howMany = ss.stream().flatMap(e -> e.stream()).count();//4

Streams: useful operations
flatMap(…)
It can also encode a 'map'+filter by returning an empty stream with 'Stream.of()'.
Convenient when declaring sub-functions.
List<String> ss = List.of("a", "aa", "b", "bb");
List<Integer> res = ss.stream().flatMap(this::operation)
  .collect(Collectors.toUnmodifiableList());
…
Stream<Integer> operation(String s) {
  if (s.length() == 0) { return Stream.of(); }//remove
  if (s.length() > 5) { return Stream.of(5, s.length()); }//twice
  return Stream.of(s.length());//map
}


Streams: useful operations
reduce(…, …, …)
There are various kinds of reduce. The version with 3 arguments is the most flexible and the best for parallel computations. We provide:
- a starting point,
- a way to accumulate the result onto a partial result, given another element,
- a way to accumulate two partial results.
List<String> ss = List.of("a", "aa", "b", "bb");
int sumLength = ss.parallelStream().reduce(
  0,//starting point
  (i, s) -> i + s.length(),//int + string -> int
  (i1, i2) -> i1 + i2);//int + int -> int

Streams: useful operations
reduce may create a grid like the one below:
List<String> ss = List.of("a", "aa", "b", "bb");
int sumLength = ss.parallelStream().reduce(
  0,//starting point
  (i, s) -> i + s.length(),//int + string -> int
  (i1, i2) -> i1 + i2);//int + int -> int

0 + "a".length()  = 1 \
                       + = 3 \
0 + "aa".length() = 2 /       \
                               + = total (6)
0 + "b".length()  = 1 \       /
                       + = 3 /
0 + "bb".length() = 2 /

Streams: useful operations
Jumping back and forth between parallel and sequential:
List<String> ss = List.of("a", "aa", "b", "bb");
List<String> discarded = new ArrayList<>();
List<String> accepted = ss.parallelStream()
  .map(s -> "[" + s + s + "]")//this map is ok in parallel
  .sequential()//the filter below needs to be sequential
  .filter(s -> s.length() > 1 ? true : !discarded.add(s))
  .collect(Collectors.toUnmodifiableList());