
MATH3411 Information, Codes And Ciphers 2022 T3
Problems and Answers

MATH3411 Problems Chapter 1: Introduction


1 a. Explain why, if n is not a prime, then n has a prime factor less than or equal to √n.
b. Explain why, for 18 ≤ n ≤ 400, n ≠ 361, n is prime if and only if n does not have 2 or 5 as a proper factor and gcd(n, 51051) = 1.
c. Hence test n = 323 and n = 373 for primeness.
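The shortcut in part b can be checked numerically (a Python sketch; the function names are my own, and the range test below simply compares the shortcut against trial division):

```python
from math import gcd, isqrt

def is_prime_trial(n: int) -> bool:
    """Primality by trial division up to sqrt(n) (part a justifies this bound)."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def is_prime_51051(n: int) -> bool:
    """The shortcut of part b, valid for 18 <= n <= 400, n != 361:
    51051 = 3*7*11*13*17, so the gcd test covers those primes,
    and 2, 5 are checked directly (19 only matters for 361 = 19^2)."""
    return n % 2 != 0 and n % 5 != 0 and gcd(n, 51051) == 1

# Part c: 323 = 17*19 is composite, 373 is prime.
print(is_prime_51051(323), is_prime_trial(323))   # False False
print(is_prime_51051(373), is_prime_trial(373))   # True True
```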
2 Prove that if 2^n − 1 is prime, then n is prime, and that if 2^n + 1 is prime, then n is a power of 2.
This problem is a nice but hard challenge; feel free to skip!
3 a. What is the probability of picking a Queen from a standard pack of cards?
b. Suppose you are told that I have picked a face card. Now what is the probability it is a Queen?
c. Suppose instead you are told the card I have is black. Now what is the probability it is a Queen?
d. What do the above tell you about the random events “pick a Queen”, “pick a face card” and “pick a black card”?
4 A certain binary information channel is more likely to transmit a 0 as an error for 1 than a 1 as an error for 0. The probability that a 1 is received in error is 0.1 (that is, P(1 received | 0 sent) = 0.1) and the probability that a 0 is received in error is 0.2. Write down the conditional probabilities for all four possible situations. If 1s and 0s are sent with equal probability, find the probability of receiving a zero. What is the probability that a zero was sent, given that a zero was received?
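A quick check of the arithmetic (a Python sketch; the dictionary encoding of the conditional probabilities is my own):

```python
# Conditional probabilities for this channel, keyed (received, sent):
# P(1 rec | 0 sent) = 0.1, so P(0 rec | 0 sent) = 0.9
# P(0 rec | 1 sent) = 0.2, so P(1 rec | 1 sent) = 0.8
p_r_given_s = {(1, 0): 0.1, (0, 0): 0.9, (0, 1): 0.2, (1, 1): 0.8}
p_sent = {0: 0.5, 1: 0.5}  # 0s and 1s sent with equal probability

# Total probability of receiving a zero
p_recv0 = sum(p_r_given_s[(0, s)] * p_sent[s] for s in (0, 1))

# Bayes' rule: P(0 sent | 0 received)
p_sent0_given_recv0 = p_r_given_s[(0, 0)] * p_sent[0] / p_recv0

print(p_recv0)              # 0.55
print(p_sent0_given_recv0)  # 0.45/0.55 = 9/11 ≈ 0.818
```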
5 (For discussion, if you’ve not heard of it.) The famous and much debated Monty Hall problem is as follows: On a game show, a contestant is presented with three identical doors, behind one of which is a major prize, the others hiding a minor prize. The contestant gets to choose one door. If the contestant picks a door hiding a minor prize, then the host, Monty Hall, opens the door hiding the other minor prize; if the contestant picks the door with the major prize, Monty randomly picks one of the doors hiding a minor prize and opens that. The contestant then has the choice of changing to the other door or not, and wins whatever is behind the door he or she has finally settled on.
The question is: should the contestant change to the other door? Can you prove your answer?
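A simulation is a good way to test your intuition before attempting a proof (a hypothetical Monte Carlo sketch; the seed and trial count are arbitrary):

```python
import random

def monty_trial(switch: bool, rng: random.Random) -> bool:
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(3411)
n = 100_000
stay = sum(monty_trial(False, rng) for _ in range(n)) / n
switch = sum(monty_trial(True, rng) for _ in range(n)) / n
print(stay, switch)  # ≈ 1/3 and ≈ 2/3: switching doubles the chance of winning
```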

6 Suppose that we send a message of n bits in such a way that the probability of an error in any single bit is p and where the errors are assumed to be independent. Use the binomial probability distribution to write down the probability that:
a. k errors occur,
b. an even number of errors occur (including zero errors),
c. show that the answer in (b) can be expressed as (1 + (1 − 2p)^n)/2.
(Hint: let q = 1 − p and expand the expression ((q + p)^n + (q − p)^n)/2.)
7 Taking p = .001 and n = 100 in Question 6, find the probability that
a. there is no error,
b. there is an undetected error when a simple parity check is used.
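The identity from Question 6c and the two probabilities asked for here can be verified numerically (a sketch; `p_k_errors` and friends are my own helper names):

```python
from math import comb

def p_k_errors(n, p, k):
    """Binomial probability of exactly k errors in n independent bits."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_even_errors(n, p):
    """Probability of an even number of errors (including zero)."""
    return sum(p_k_errors(n, p, k) for k in range(0, n + 1, 2))

n, p = 100, 0.001
p_none = (1 - p)**n                       # part a: no error
# part b: a simple parity check misses exactly the even, nonzero error counts
p_undetected = p_even_errors(n, p) - p_none
# Closed form from Question 6c:
closed = (1 + (1 - 2 * p)**n) / 2
print(p_none, p_undetected)
```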
8 Check whether the following can be ISBNs, and if not, then assuming it is the check digit that is wrong, alter the check digit so that they are valid ISBNs:
(a) 0 − 552 − 08638 − X, (b) 0 − 576 − 08314 − 6.
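A sketch of the ISBN-10 check (weights 10 down to 1 from the left, valid iff the weighted sum is 0 mod 11, with X standing for 10; the function names are my own):

```python
def isbn10_weighted_sum(isbn: str) -> int:
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in '- ']
    # weights 10, 9, ..., 1 from the left; valid iff the sum ≡ 0 (mod 11)
    return sum(w * d for w, d in zip(range(10, 0, -1), digits))

def is_valid_isbn10(isbn: str) -> bool:
    return isbn10_weighted_sum(isbn) % 11 == 0

def fix_check_digit(isbn: str) -> str:
    """Replace the last (check) digit so the weighted sum is 0 mod 11."""
    body = [c for c in isbn if c not in '- '][:9]
    s = sum(w * int(c) for w, c in zip(range(10, 1, -1), body))
    d = (-s) % 11
    return ''.join(body) + ('X' if d == 10 else str(d))

print(is_valid_isbn10('0-552-08638-X'))   # True
print(is_valid_isbn10('0-576-08314-6'))   # False
print(fix_check_digit('0-576-08314-6'))   # the check digit becomes 3
```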
9 (A possible telephone number code)
Let C be the code of all 10 digit (decimal) numbers x = x1x2 · · · x10 that satisfy the two
check equations
x1 + x2 + · · · + x10 ≡ 0 (mod 11),
x1 + 2x2 + 3x3 + · · · + 10x10 ≡ 0 (mod 11).
a. Estimate roughly how many such numbers x there are. (That is, estimate |C|.)
b. Show that this code is single-error correcting and that it can also detect double errors
caused by the transposition of two digits.
c. Correct the number y = 0680271385.
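For part c, the standard syndrome argument is: a single error of size e in position i shifts the first check sum by e and the second by ie (mod 11). A Python sketch of that correction (my own function names, and it assumes exactly one erroneous digit):

```python
def syndromes(y):
    digits = [int(c) for c in y]
    s1 = sum(digits) % 11
    s2 = sum(i * d for i, d in enumerate(digits, start=1)) % 11
    return s1, s2

def correct_single_error(y: str) -> str:
    s1, s2 = syndromes(y)
    if s1 == 0 and s2 == 0:
        return y                        # already a codeword
    # A single error of size e in position i gives S1 = e, S2 = i*e (mod 11)
    e = s1
    i = (s2 * pow(e, -1, 11)) % 11      # i = S2 / S1 (mod 11)
    digits = [int(c) for c in y]
    digits[i - 1] = (digits[i - 1] - e) % 11
    return ''.join(str(d) for d in digits)

print(correct_single_error('0680271385'))  # digit 3 is corrected: 0610271385
```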
10 (Introducing Chapter 4): A bridge deck is a set of 52 distinguishable cards. A bridge hand is any subset containing 13 cards and a bridge deal is any ordered partition of the bridge deck into 4 bridge hands.
a. Describe a simple but inefficient representation of an arbitrary bridge hand by assigning a 6-bit binary number to represent each card. How many bits are needed to describe a deal this way?
b. Show that no binary representation of an arbitrary bridge hand can use fewer than 40 bits. Give a representation using 52 bits.
c. Show that no representation of an arbitrary bridge deal can use fewer than 96 bits. Give a representation using 104 bits. This can be reduced to 101 bits with a little thought; can you see how?
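The counting behind the lower bounds in parts b and c can be sketched directly:

```python
from math import comb, factorial, log2

# A hand is one of C(52,13) subsets; a deal is one of 52!/(13!)^4 ordered partitions.
hands = comb(52, 13)
deals = factorial(52) // factorial(13)**4

print(log2(hands))  # ≈ 39.2, so any representation of a hand needs at least 40 bits
print(log2(deals))  # ≈ 95.4, so any representation of a deal needs at least 96 bits
```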

Chapter 2: Error Correcting Codes
11 Using the 9-character 8-bit even parity ASCII burst code, encode "HDforall".
12 The integers from 0 to 15 inclusive are encoded as four-bit binary numbers with one parity check bit at the front, so that, for example, 4 is 10100 and 15 is 01111. A stream of such integers is then encoded into blocks of 4 with one check integer to give overall even parity, similar to the ASCII burst code.
a. Show that the string of integers 1, 13, 2, 6 has check number 8.
b. The block 10010 11011 01110 00011 00110 is received. Find and correct the error (assuming at most one) and then decode.
13 A code C consists of all solutions x ∈ Z_2^5 (i.e., binary vectors of length 5) of the equation Hx^T = 0, where
H = ( 1 1 1 1 0 )
    ( 0 1 1 0 1 )
a. Find the codewords in C.
b. Find three linearly independent columns of H.
c. Find three linearly dependent columns of H.
d. Show that no two columns of H are linearly dependent.
e. Show that no four columns of H are linearly independent.
14 For the Hamming (7,4)-code described in lectures:
(a) encode 1010, (b) correct and decode 1001111.
15 Write down the parity check matrix of the Hamming (15,11)-code described in lectures, and hence correct and decode 001010101110110.
16 Using the Hamming (15,11)-code described in lectures:
a. encode 10101010101,
b. correct and decode 011100010111110.
17 What is the probability that a random sequence of n = 2^m − 1 ‘0’s and ‘1’s is a codeword
in a Hamming code?
18 You receive a message which has been encoded as a codeword of the Hamming (7,4)-code. The probability of error in any single bit of the codeword while transmitted through a noisy channel is p = .001, and errors in different bits are independent. Find the probability that:
a. there is no error,
b. there are errors and they are correctly corrected,
c. there are errors and they are not correctly corrected,
d. when the code is used to detect but not correct errors (i.e. using a pure error detection
strategy), there are undetected errors.
19 Suppose that linear code C has parity check matrix H.
a. Prove that d(C) = w(C).

b. Prove that d = d(C) is the smallest integer r for which there are r linearly dependent columns in H modulo 2.
20 Suppose a binary linear code C has parity check matrix
H = ( 1 1 1 1 0 0 )
    ( 0 1 1 0 1 0 )
a. Find its minimum distance d.
b. What are its error correction and detection capabilities?
c. Correct and decode the following received words (if possible), assuming that the first 3 bits are the information bits and we are using a single error correcting strategy:
(i) 010110, (ii) 010001, (iii) 100110.
d. Find a standard form parity check matrix H′ for the code C, and the corresponding generator matrix G′. Find a generator matrix G in similar fashion from the original parity check matrix H.
e. Explain how to extend the code C to give a new code C̃ with minimum distance d + 1. (Prove that your new code has minimum distance d + 1.) Write down a parity check matrix for C̃.
21 We wish to code the four directions N, S, E, W using a binary code.
a. Show that, if we require single error correction, then we need to use messages of at
least 5 bits length. Find such a code of length 5.
b. Show that, if we require double error correction, then we need messages of at least 7 bits length. Construct such a code from messages of 8 bits.
22 Let r ≥ 3 be an integer and suppose that x = x1x2 · · · xn is a vector in Z_r^n (so 0 ≤ xi ≤ r − 1 for all i).
a. For each nonnegative integer ρ, calculate the number of vectors in Z_r^n at distance ρ from x.
b. Work out what the sphere-packing bound for t-error correcting radix r codes of length n should be, following the argument from the binary case.
c. Prove the radix r Hamming codes are perfect.
23 a. Construct the check matrix of a radix 5 Hamming code with parameters m = 2, n = 6, using the method given in lectures.
b. Using your check matrix, correct and decode the received word y = 410013.
24 Let C be the code consisting of all vectors x = x1x2x3x4 ∈ Z_5^4 satisfying the check equations
x1 + x2 + 3x3 + 2x4 ≡ 0 (mod 5),
x1 + 2x2 + 4x3 + 3x4 ≡ 0 (mod 5).
a. Assuming that x3 and x4 are the information bits, find the codeword which encodes the message 21.
b. Which of the following is a valid codeword in C?
(1) 1122 (2) 1212 (3) 2323 (4) 4343

Chapter 3: Compression Codes
25 Decide whether the following codes are uniquely decodable, instantaneous or neither.
a. 0, 01, 11, 00; b. 0, 01, 011, 111;
c. 0, 01, 001, 0010, 0011; d. 00, 01, 10, 110, 111.
26 Either prove the following code is uniquely decodable or find an ambiguous concatenated sequence of codewords:
c1 = 101, c2 = 0011, c3 = 1001, c4 = 1110,
c5 = 00001, c6 = 11001, c7 = 11100, c8 = 010100.
(This is more difficult than Q25.)
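One systematic way to settle Question 26 is the Sardinas–Patterson test: a code is uniquely decodable iff no "dangling suffix" generated from it is itself a codeword. A sketch of the test (the test itself is standard; the implementation details are mine), applied to the codes of Questions 25a and 26:

```python
def is_uniquely_decodable(code):
    """Sardinas–Patterson test: UD iff no dangling suffix is a codeword."""
    code = set(code)

    def dangling(x, y):
        # suffix left over when x is a proper prefix of y, else None
        return y[len(x):] if y.startswith(x) and x != y else None

    # initial dangling suffixes come from pairs of codewords
    work = {s for x in code for y in code for s in [dangling(x, y)] if s}
    seen = set()
    while work:
        s = work.pop()
        if s in code:
            return False            # an ambiguous parsing exists
        if s in seen:
            continue
        seen.add(s)
        for c in code:
            for t in (dangling(s, c), dangling(c, s)):
                if t:
                    work.add(t)
    return True                     # suffix set exhausted: UD

q25a = ['0', '01', '11', '00']
q26 = ['101', '0011', '1001', '1110', '00001', '11001', '11100', '010100']
print(is_uniquely_decodable(q25a))  # False: "00" parses as 00 or 0·0
print(is_uniquely_decodable(q26))
```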
27 Construct instantaneous codes, or show they cannot exist for the following:
a. radix 2, codeword lengths 1, 2, 3, 3, 3;
b. radix 2, codeword lengths 2, 2, 3, 3, 4, 4, 4;
c. radix 3, codeword lengths 1, 2, 3, 3, 3, 3, 3, 3, 3;
d. radix 3, codeword lengths 1, 1, 2, 2, 3, 3, 3, 3.
Can any of these be shortened and if so how?
28 What is the minimum radix that would be needed to create a UD-code for the source S = {s1, s2, s3, s4, . . . , s8} with respective codeword lengths
a. 1, 1, 2, 2, 2, 3, 3, 4
b. 2, 2, 2, 4, 4, 4, 4, 5
29 Find binary Huffman codes and their expected codeword lengths for:
a. p1 = 1/2, p2 = 1/3, p3 = 1/6;
b. p1 = 1/3, p2 = 1/4, p3 = 1/5, p4 = 1/6, p5 = 1/20;
c. p1 = 1/2, p2 = 1/4, p3 = 1/8, p4 = 1/16, p5 = 1/16;
d. p1 = 27/40, p2 = 9/40, p3 = 3/40, p4 = 1/40.
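Expected codeword lengths of binary Huffman codes are easy to check by machine, using the fact that the expected length equals the sum of all merged node weights (a sketch with my own function name; it computes lengths only, not the actual codewords):

```python
import heapq

def huffman_expected_length(probs):
    """Expected codeword length of a binary Huffman code, via the standard
    two-smallest merge; each merge adds one bit to every symbol beneath it."""
    if len(probs) == 1:
        return 0.0
    heap = [(p, 0.0) for p in probs]   # (probability, expected length so far)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, e1 = heapq.heappop(heap)
        p2, e2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, e1 + e2 + p1 + p2))
    return heap[0][1]

# Question 29a: merging 1/3 and 1/6 then the two halves gives lengths 1, 2, 2
print(huffman_expected_length([1/2, 1/3, 1/6]))   # 1.5
```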
30 A random experiment has seven outcomes with corresponding probabilities 1/3, 1/3, 1/9, 1/9, 1/27, 1/27, 1/27.
The experiment is to be performed once and the outcome transmitted across the country. The telegraph company provides two services. Service 1 transmits binary digits at $2.00 per digit and service 2 transmits ternary digits ∈ {0,1,2} at $3.25 per digit. You are to select a service and design a code to minimize expected cost.
a. Which service should be selected? What code should be used? What is the expected cost?
b. If the ternary cost is changed, at what new value of the cost would you change your mind?

*31 Prove that the (binary) Huffman code for a 2^n symbol source, n ≥ 1, where each symbol has equal probability is a block code of length n. (Hint: induction.)
*32 Suppose that we have an n symbol source where the ith symbol occurs with frequency fi, where fi is the ith Fibonacci number and f1 = f2 = 1. Describe the standard binary Huffman code for this source. (NOTE: f1 + f2 + · · · + fn = f_{n+2} − 1.)
33 Consider the alphabet s1, s2, · · · , s8 where the symbols occur with probabilities 0.22, 0.20, 0.18, 0.15, 0.10, 0.08, 0.05 and 0.02 respectively.
Code this source with a Huffman code of radix 4 using dummy symbols if necessary. What is the expected codeword length for this coding? Contrast it with the expected codeword length if another Huffman code is constructed by not introducing dummy symbols, but instead combining four symbols at a time as long as possible.
34 Consider the source S = {a, b} with probabilities p1 = 3/4 and p2 = 1/4.
a. Find a binary Huffman code for the third extension S3 of the source S. What is the average codeword length per (original) symbol?
b. Encode the message aababaaaabaa using this code.
35 Suppose we have two symbols which occur with probabilities p1 = 2/3 and p2 = 1/3. Consider the first, second and third extensions. Find Huffman codes for each extension and calculate the corresponding expected codeword lengths and expected codeword lengths per original symbol.
*36 Consider the Markov matrix
M = ( 1/3 1/4 1/4 )
    ( 1/3 1/2 1/4 )
    ( 1/3 1/4 1/2 )
a. Show that M has eigenvalues 1, 1/4, 1/12 and find the equilibrium probabilities.
b. Explain why lim_{n→∞} M^n exists and find the limit. What do you notice about the answer?
37 A Markov source on symbols s1, s2, s3 has transition matrix M and equilibrium vector p given as follows:
M = ( 0.7 0.2 0.1 )
    ( 0.2 0.6 0.4 )
    ( 0.1 0.2 0.5 )
p = (1/17)(6, 7, 4)^T.
a. Find Huffman codes for the equilibrium probability distribution and for the Markov source. Compare the expected codeword lengths in the two cases.
b. Encode the string of symbols s2s2s1s1s2s3s3 using the Markov Huffman code.
c. Decode the code string 010001010 which was encoded using the Markov Huffman code.
38 A source has symbols {a, b, c, •} where • is the stop symbol. The probabilities of these symbols are 2/5, 1/5, 1/5, 1/5 respectively. Use arithmetic coding to encode the message bac • into a code number.
39 Three symbols s1, s2, s3 and the stop symbol s4 = • have probabilities p1 = 0.4, p2 = 0.3, p3 = 0.2 and p4 = 0.1.
a. Use arithmetic coding to encode the message s2s1s3s1•.

b. Decode the codeword 0.12345 which was encoded using this arithmetic code.
40 Use the LZ78 algorithm to
a. encode “banana and bee” (including spaces).
b. decode (0,t)(0,o)(0, )(0,b)(0,e)(3,o)(0,r)(3,n)(2,t)(3,t)(2, )(4,e). Here “ ” denotes a space.
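Both parts can be machine-checked with a small LZ78 implementation (a sketch; the phrase-dictionary layout is my own, and the final-phrase flush follows one common convention):

```python
def lz78_encode(s):
    dictionary = {}          # phrase -> index (1-based); index 0 is the empty prefix
    tokens, phrase = [], ''
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            tokens.append((dictionary.get(phrase, 0), ch))
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ''
    if phrase:  # flush a final phrase that is already in the dictionary
        tokens.append((dictionary.get(phrase[:-1], 0), phrase[-1]))
    return tokens

def lz78_decode(tokens):
    phrases = ['']           # index 0 is the empty phrase
    out = []
    for idx, ch in tokens:
        phrase = phrases[idx] + ch
        phrases.append(phrase)
        out.append(phrase)
    return ''.join(out)

print(lz78_encode('banana and bee'))
# Part b decodes to a familiar phrase:
print(lz78_decode([(0, 't'), (0, 'o'), (0, ' '), (0, 'b'), (0, 'e'), (3, 'o'),
                   (0, 'r'), (3, 'n'), (2, 't'), (3, 't'), (2, ' '), (4, 'e')]))
```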
Chapter 4: Information Theory
41 A source S produces the symbols a, b, c, d, e with probabilities 1/3, 1/3, 1/9, 1/9, 1/9. Calculate the entropy of this source in bits.
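The entropy computation is a one-liner worth checking by machine (a sketch; `entropy_bits` is my own name):

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum p log2 p."""
    return -sum(p * log2(p) for p in probs if p > 0)

H = entropy_bits([1/3, 1/3, 1/9, 1/9, 1/9])
print(H)  # (4/3)·log2(3) ≈ 2.11 bits
```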
42 Find the entropies (in appropriate units) for the sources in Questions 29, 33 and 35. Com- pare your answer with the expected codeword lengths of the Huffman codes you obtained previously.
Calculate the decimal entropy of the source in question 39, and compare to the length of the arithmetically coded message.
43 For the situation in Question 30, suppose the experiment was repeated many times and suitably coded (for long symbol words) before the outcomes were sent. What is the answer in part (a) and (b) now?
44 Find Shannon-Fano codes for the sources in Questions 29, 33 and 35.
45 A source S has 5 symbols s1, s2, . . . , s5 with probabilities 1/3, 1/4, 1/6, 3/20, 1/10 respectively.
a. Calculate the entropy of S in bits to three significant figures.
b. Find a ternary Shannon-Fano code for S and its expected codeword length.
c. A binary Shannon-Fano code is constructed for S4. Find the lengths of the two longest codewords in this code.
46 Let H(p) be the entropy of a binary source with probability of emitting a 1 equal to p and probability of emitting a 0 equal to 1−p. Show that on the interval 0 ≤ p ≤ 1, the function H(p) is nonnegative, concave down and has a unique maximum. Find this maximum and where it occurs and sketch the curve for 0 ≤ p ≤ 1.
47 For the Markov sources with transition matrices as in Questions 36 and 37, find the Markov entropy HM and equilibrium entropy HE.
48 In a certain mathematics course, 3/4 of the students pass, and the rest fail. Of those who pass, 10% own cars while half of the failing students own cars. All of the car owning students live at home, while 40% of those who do not own cars and fail, as well as 40% of those who do not own cars and pass, live at home.
a. Explain why the probability that a student lives at home or not given whether they own a car or not is independent of whether they have failed or not.
b. How much information (in bits) is conveyed about a student’s result in the course if you know whether or not they own a car?
c. How much information (in bits) is conveyed about a student’s result in the course if you know whether or not they live at home?

d. If a student’s result, car owning status and place of residence are transmitted by three binary digits, how much of the total information in the transmission is conveyed by each digit? (Part (a) will be useful for the third digit.)
49 Consider a channel with source symbols A = {a1,a2} and output symbols B = {b1,b2}, where P(b1 | a1) = 0.8 and P(b2 | a2) = 0.6. Suppose that P(a1) = 1/3.
a. Using Bayes’ Law, calculate P(aj | bi) for all i, j. b. Calculate the mutual information of the channel.
50 Find the channel capacity for the channel with source symbols a1 = 0 and a2 = 1, and received symbols b1 = 0, b2 = 1 and b3 = ?, where P(b1|a2) = P(b2|a1) = 0, P(b3|a1) = P(b3|a2) = q and P(b1|a1) = P(b2|a2) = 1 − q. (This is a special case of the Binary Symmetric Erasure Channel.)
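A numerical check of the capacity is a useful companion to the algebra: maximise I(A;B) = H(B) − H(B|A) over the input distribution (a sketch with my own function names; the grid search over P(a1) is for illustration only):

```python
from math import log2

def h2(p):  # binary entropy function
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_information_erasure(p, q):
    """I(A;B) for this channel: input P(a1)=p, erasure probability q.
    Outputs: b1 w.p. p(1-q), b2 w.p. (1-p)(1-q), ? w.p. q."""
    pb = [p * (1 - q), (1 - p) * (1 - q), q]
    h_b = -sum(x * log2(x) for x in pb if x > 0)
    h_b_given_a = h2(q)   # either input yields outputs {1-q, q}
    return h_b - h_b_given_a

q = 0.25
best = max(mutual_information_erasure(p / 1000, q) for p in range(1001))
print(best)  # ≈ 1 - q = 0.75, attained at P(a1) = 1/2
```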
51 A channel has source symbols a1, a2, a3 and received symbols b1, b2, b3 and transition prob- abilities P(bi|ai) = 1/2,P(bj|ai) = 1/4 for all i ̸= j. Find all the entropies, conditional entropies and the mutual information for the cases
a. P(a1) = P(a2) = P(a3) = 1/3,
b. P(a1) = 1/2,P(a2) = 1/3,P(a3) = 1/6,
c. What do you guess the channel capacity is and why?
*d. Prove that your guess in (c) is correct 😉
52 Consider a channel with source symbols A = {a1, a2} and output symbols B = {b1, b2, b3, b4} where for some 0 ≤ x ≤ 1 we have P(a1) = x, P(a2) = 1 − x and
P(b1 | a1) = 5/7, P(b2 | a1) = 2/7, P(b3 | a2) = 1/10, P(b4 | a2) = 9/10.
a. Calculate H(A | B). How do you explain this result?
b. Hence find the capacity of the channel.
53 (For discussion.) Person A thinks of the name of a person who attends UNSW. Person B tries to determine who they are thinking of by asking A questions to which A has to reply simply “yes” or “no”. Show that B can determine the name in 15 questions (assuming that they have access to the UNSW student database).
Chapter 5: Number Theory
54 Use the Euclidean Algorithm to find the gcd d of the following pairs a and b and express it in the form d = xa + yb where x, y ∈ Z:
(a) 324 and 3876, (b) 7412 and 1513, (c) 1024 and 2187.
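The extended Euclidean Algorithm is easy to check by machine (a sketch; `egcd` is my own name):

```python
def egcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) = x*a + y*b."""
    if b == 0:
        return a, 1, 0
    d, x, y = egcd(b, a % b)
    # back-substitute: d = x*b + y*(a mod b) = y*a + (x - (a//b)*y)*b
    return d, y, x - (a // b) * y

for a, b in [(324, 3876), (7412, 1513), (1024, 2187)]:
    d, x, y = egcd(a, b)
    print(f"gcd({a}, {b}) = {d} = {x}*{a} + {y}*{b}")
# The gcds are 12, 17 and 1 respectively
```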
55 List all the elements of U24 and hence evaluate φ(24).
Repeat for U36 and U17.
56 Find φ(72), φ(1224) and φ(561561).
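These totients can be verified with the product formula φ(n) = n · Π(1 − 1/p) over the prime factors of n (a sketch; the function name is mine):

```python
def phi(n: int) -> int:
    """Euler's totient via the product formula over prime factors."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply by (1 - 1/p)
        p += 1
    if m > 1:                       # a remaining prime factor > sqrt
        result -= result // m
    return result

print(phi(72), phi(1224), phi(561561))  # 24 384 253440
```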
57 Draw up addition and multiplication tables for Z5 and Z6 and explain why only the first one is a field.

58 Solve the congruence equations or show there is no solution:
(a) 6x ≡ 7 (mod 17), (b) 6x ≡ 9 (mod 12), (c) 11x ≡ 9 (mod 13).
59 Find, if they exist, the inverse of 6 in (a) Z11, (b) Z10, (c) Z23.
60 Use Euler’s Theorem to find
(a) 2^1001 in Z17, (b) the last two digits of 3^1001.
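Both parts reduce the exponent via Euler's Theorem, and Python's three-argument pow gives an independent check:

```python
# Q60a: a^φ(17) = a^16 ≡ 1 (mod 17), and 1001 = 62*16 + 9, so 2^1001 ≡ 2^9 (mod 17).
# (In fact ord(2) = 8 mod 17 and 1001 ≡ 1 (mod 8), giving 2^1001 ≡ 2.)
print(pow(2, 1001, 17))   # 2

# Q60b: φ(100) = 40 and gcd(3, 100) = 1; 1001 = 25*40 + 1, so 3^1001 ≡ 3 (mod 100).
print(pow(3, 1001, 100))  # 3, i.e. the last two digits are 03
```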
61 Find all the primitive elements for each of Z11 and Z17.
62 Use the polynomial Euclidean Algorithm to find the gcd d(x) of the following pairs of polynomials f(x) and g(x) and express it in the form d(x) = a(x)f(x) + b(x)g(x) where a(x) and b(x) are polynomials:
a. x^3 + 1 and x^2 + 1 in Q[x],
b. x^3 + 1 and x^2 + 1 in Z2[x],
c. x^2 − x + 1 and x^3 − x^2 − 1 in Z3[x].
63 Find:
a. x^5 + x^2 + 1 (mod x^2 + x + 1) in Z2[x],
b. x^5 + x^2 + 1 (mod x^2 + x + 1) in Z3[x].
64 Write down addition and multiplication tables for both Z2[x]/⟨x^2 + x + 1⟩ and Z2[x]/⟨x^2 + 1⟩. Explain why only the first one is a field.
65 Which of the following are fields: Z2[x]/⟨x^4 + x^2 + x + 1⟩, Z3[x]/⟨x^4 + x^2 + x + 1⟩?
66 Construct the following finite fields, giving the table of powers of a primitive element γ (or α if α is primitive) and the corresponding linear combinations of the appropriate powers of the root α of the defining polynomial:
a. Z2[x]/⟨x^3 + x + 1⟩,
b. Z2[x]/⟨x^4 + x^3 + x^2 + x + 1⟩,
c. Z3[x]/⟨x^2 + x − 1⟩.
Also list all the primitive elements in each field.
67 In GF(16) = Z2(α) = Z2[x]/⟨x^4 + x + 1⟩:
a. find the inverse of α^3 + α + 1,
b. evaluate (α^3 + α + 1)(α + 1)/(α^3 + 1),
c. find the minimal polynomial of α^3 + α + 1,
d. list all the minimal polynomials formed from powers of α.
68 List all the irreducible polynomials in Z2[x] of degrees up to 4. Which of these are primitive in the fields they generate?
69 Show that 341 is a pseudo-prime to base 2 but not to base 3.
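The Fermat tests in this question are immediate with modular exponentiation:

```python
n = 341  # = 11 * 31, composite
print(pow(2, n - 1, n))  # 1: passes the base-2 Fermat test, a pseudo-prime to base 2
print(pow(3, n - 1, n))  # ≠ 1: fails the base-3 test, so 341 is revealed composite
```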

Use the Pollard-ρ method to factor n = 8051, using f(x) = x^2 + 1 and x0 = 1. Repeat for n = 201001.
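A sketch of Pollard-ρ with Floyd cycle detection, using the f and x0 of the problem (the retry-with-a-new-seed fallback is my own safeguard, not part of the problem):

```python
from math import gcd

def pollard_rho(n, x0=1, max_steps=10**6):
    """Floyd-cycle Pollard rho with f(x) = x^2 + 1, as in the problem."""
    f = lambda x: (x * x + 1) % n
    slow = fast = x0
    for _ in range(max_steps):
        slow = f(slow)
        fast = f(f(fast))
        d = gcd(abs(slow - fast), n)
        if d == n:
            break                # cycle collapsed without a factor
        if d > 1:
            return d
    return pollard_rho(n, x0 + 1, max_steps)  # retry with a different seed

d = pollard_rho(8051)
print(d, 8051 // d)       # 8051 = 83 * 97
d2 = pollard_rho(201001)
print(d2, 201001 // d2)
```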
Consider the LFSR which implements the recurrence
x_{i+3} = x_{i+1} + x_i over Z2.
Let x0 = 1, x1 = 1, x2 = 0.
a. Generate the next 5 pseudo-random bits produced by the LFSR, namely x3, . . . , x7.
b. What is the period of this LFSR?
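Both parts can be checked by iterating the recurrence directly (a sketch; the function names are my own):

```python
def lfsr_bits(seed, count):
    """Generate count bits of x_{i+3} = x_{i+1} + x_i (mod 2), seeded (x0, x1, x2)."""
    x = list(seed)
    while len(x) < count:
        x.append((x[-2] + x[-3]) % 2)
    return x

def lfsr_period(seed):
    """Steps until the 3-bit internal state repeats."""
    state, seen, steps = tuple(seed), {}, 0
    while state not in seen:
        seen[state] = steps
        state = (state[1], state[2], (state[0] + state[1]) % 2)
        steps += 1
    return steps - seen[state]

print(lfsr_bits([1, 1, 0], 8)[3:8])  # x3..x7 = [0, 1, 0, 1, 1]
print(lfsr_period([1, 1, 0]))        # 7, the maximum for a 3-bit LFSR
```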
Trace the output of the pseudo-random number generators defined by
a. x_{i+1} ≡ 7x_i + 1 (mod 18), where x0 = 1.
b. x_{i+4} ≡ x_{i+3} + x_i (mod 2), where x0 = 1, x1 = x2 = x3 = 0.
*70 Let a be an integer coprime to each of 3, 11 and 17.
a. Using Euler’s theorem, show that a^560 ≡ 1 modulo 3, 11 and 17.
b. Hence show that 561 is a Carmichael number.
71 Use Lu
