
Part I
1) Θ(n²)
2) Θ(n log n)
3) Θ(log² n)
4) Θ(n²)
5) Θ(log n)
Part II
1)
def getMajority(arr):
    if len(arr) == 1:
        return arr[0]
    mid = len(arr) // 2
    a = getMajority(arr[:mid])    # majority candidate of the first half
    b = getMajority(arr[mid:])    # majority candidate of the second half
    if a == b:
        return a
    numa = arr.count(a)
    if numa > mid:
        return a
    numb = arr.count(b)
    if numb > mid:
        return b
    return None                   # no majority element

Correctness: If an array has a majority element x, then x must be the majority element of at least one of the two halves: otherwise x would appear at most n/2 times in total, contradicting that it is a majority. So the algorithm finds the majority element of the first half and of the second half, and tests whether either of them is the majority element of the whole array.
Runtime: T(N) = 2T(N/2) + O(N); by the Master Theorem, T(N) = O(N log N).
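As a quick sanity check, two made-up example calls (the test arrays are my own, not from the assignment):

assert getMajority([2, 2, 1, 2, 3, 2, 2]) == 2   # 2 appears 5 times out of 7
assert getMajority([1, 2, 3, 1]) is None         # no element appears more than len/2 = 2 times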
2)
Suppose the initial price is 1.
From the N gains we can compute the N prices:
def computePrices(gains):
    # prices[0] = 1, prices[i] = prices[i-1] * (1 + gains[i-1])
    prices = [1.0]
    for g in gains:
        prices.append(prices[-1] * (1 + g))
    return prices

def findBestGain(prices, i, j):
    # returns (buy, sell) indices with i <= buy <= sell <= j maximizing prices[sell] / prices[buy]
    if i == j:
        return (i, j)
    mid = (i + j) // 2
    (u, v) = findBestGain(prices, i, mid)        # best pair inside the first half
    (x, y) = findBestGain(prices, mid + 1, j)    # best pair inside the second half
    a = min(range(i, mid + 1), key=lambda k: prices[k])      # index of the minimum price in the first half
    b = max(range(mid + 1, j + 1), key=lambda k: prices[k])  # index of the maximum price in the second half
    return max([(u, v), (x, y), (a, b)], key=lambda p: prices[p[1]] / prices[p[0]])
Correctness: Split the array in half; for the best pair (i, j) there are 3 possibilities: 1) both i and j are in the first half, 2) both are in the second half, 3) i is in the first half and j is in the second half. Cases 1) and 2) are computed recursively. For case 3), the price at i must be the minimum price in the first half and the price at j must be the maximum price in the second half.
Running time: computing the prices takes O(N). The recursion satisfies T(N) = 2T(N/2) + O(N); by the Master Theorem, T(N) = O(N log N).
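A small made-up example to show how the two functions fit together (the gains below are invented, not from the assignment):

gains = [0.05, -0.02, 0.10, -0.01, 0.03]
prices = computePrices(gains)
buy, sell = findBestGain(prices, 0, len(prices) - 1)
print(buy, sell, prices[sell] / prices[buy])   # best day to buy, best day to sell, and the resulting gain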
3)
a) The probability that a pair of calls produces either 01 or 10 is 2p(1-p). So the expected number of pairs needed to produce N fair bits is N / (2p(1-p)), and since each pair takes two calls to the biased generator, the expected number of calls is 2 · N / (2p(1-p)) = N / (p(1-p)).
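A minimal sketch of this pairing trick in Python, assuming a hypothetical biased_bit(p) that returns 1 with probability p (the function names and the default p are my own):

import random

def biased_bit(p):
    # hypothetical biased generator: 1 with probability p, 0 otherwise
    return 1 if random.random() < p else 0

def fair_bit(p):
    # draw pairs until the two bits differ; 10 -> 1, 01 -> 0,
    # each pair succeeds with probability 2p(1-p)
    while True:
        a, b = biased_bit(p), biased_bit(p)
        if a != b:
            return a

def fair_bits(n, p=0.7):
    return [fair_bit(p) for _ in range(n)]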
b) Divide the array of bits into two halves and recursively extract fair bits from the first half and the second half.
4)
The running time of insertion sort is proportional to the number of inversions in the array.
Each swap of two adjacent elements creates at most one inversion, so the expected number of inversions is n*p*k.
So the expected running time is O(n*p*k).
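To illustrate the first claim, here is a small check (my own code, with a brute-force inversion counter and random test data) that the number of element shifts done by insertion sort equals the number of inversions:

import random

def insertion_sort_shifts(arr):
    # run insertion sort and count how many element shifts it performs
    a, shifts = list(arr), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
            shifts += 1
        a[j + 1] = key
    return shifts

def count_inversions(arr):
    # brute-force O(n^2) count of pairs that are out of order
    n = len(arr)
    return sum(1 for i in range(n) for j in range(i + 1, n) if arr[i] > arr[j])

data = [random.randint(0, 100) for _ in range(50)]
assert insertion_sort_shifts(data) == count_inversions(data)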
Part III
N          size (bytes)
1          158
250000     1153
500000     2122
1000000    4062

1) The zip file for N=1 is larger than the uncompressed file. The extra bytes likely hold metadata such as the file name and the file creation and modification times.
2)
The run-length encoding would be "1000000foo", which is only 10 bytes. My zip utility uses 4062 bytes, so it certainly does not use this algorithm.
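For concreteness, a sketch of the count-plus-pattern encoding I have in mind (the function names and the format are my own illustration, not what zip actually does):

def encode_repeat(count, unit):
    # store the repetition count followed by the repeated unit, e.g. "1000000foo"
    return f"{count}{unit}"

def decode_repeat(encoded):
    # split the leading digits (the count) from the repeated unit
    i = 0
    while encoded[i].isdigit():
        i += 1
    return encoded[i:] * int(encoded[:i])

assert decode_repeat(encode_repeat(1000000, "foo")) == "foo" * 1000000
assert len(encode_repeat(1000000, "foo")) == 10   # 7 digits + 3 characters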
3)
No, that is not the case. The zip file size for 500000 is roughly two times the size for 250000, and the size for 1000000 is roughly two times the size for 500000. If compression behaved as described in the question, the ratio would be much less than 2.
Part IV
Time (microseconds)            10     1000    100000    1000000
Quicksort (normal)             17     852     28553     244053
Quicksort (lazy)               6      256     23299     175964
Quicksort (median of three)    25     176     17060     263960
Quicksort (in place)           7      314     8166      112060
Bucket sort                    533    2679    54901     483447
Of all the quicksort variations, lazy, median of three and in place all perform better than the normal quicksort, and the in-place variation gives the largest saving in time. The reason is that creating additional arrays is expensive; the in-place version avoids allocating many temporary arrays.
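A minimal sketch of the two partitioning styles, in Python rather than the code that produced the timings above, and assuming a simple Lomuto-style partition:

def quicksort_copying(a):
    # allocates two new lists at every level of the recursion
    if len(a) <= 1:
        return a
    pivot = a[-1]
    left = [x for x in a[:-1] if x < pivot]
    right = [x for x in a[:-1] if x >= pivot]
    return quicksort_copying(left) + [pivot] + quicksort_copying(right)

def quicksort_inplace(a, lo=0, hi=None):
    # partitions within the same array, so no temporary arrays are created
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot, i = a[hi], lo               # Lomuto partition around the last element
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    quicksort_inplace(a, lo, i - 1)
    quicksort_inplace(a, i + 1, hi)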
Bucket sort performs worse than all the quicksort variations. I suppose the reason is that bucket sort involves ArrayList operations, which are more expensive than plain array operations.