COMP2610/COMP6261 – Information Theory Tutorial 3: Coding and Compression
Robert C., 2018
1. Probabilistic inequalities
Suppose a coin is tossed n times. The coin is known to land “heads” with probability p. The number of observed “heads” is recorded as a random variable X.
(a) What is the exact probability of X being n − 1 or more?
(b) Using Markov’s inequality, compute a bound on the same probability as the previous part.
(c) Suppose n = 2. For what values of p will the bound from Markov’s inequality be within 1% of the exact probability?
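Below is a short Python sketch (mine, not part of the original question; the function names are illustrative) comparing the exact tail probability with Markov's bound, which may help when checking part (c):

    from math import comb

    def exact_tail(n: int, p: float) -> float:
        """Exact P(X >= n-1) for X ~ Binomial(n, p)."""
        # Only two outcomes contribute: exactly n-1 heads, or all n heads.
        return comb(n, n - 1) * p ** (n - 1) * (1 - p) + p ** n

    def markov_bound(n: int, p: float) -> float:
        """Markov's inequality: P(X >= a) <= E[X]/a with a = n-1, E[X] = n*p."""
        return n * p / (n - 1)

    # For n = 2 the bound is 2p while the exact value is 2p - p^2, so the
    # gap shrinks as p gets small.
    for p in (0.01, 0.02, 0.05):
        print(p, exact_tail(2, p), markov_bound(2, p))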
2. AEP and source coding (cf. Cover & Thomas, Problem 3.7)
A sequence of bits is generated by i.i.d. draws from an ensemble with probabilities p0 = 0.995 and p1 = 0.005. Sequences are coded in 100-bit blocks. Every 100-bit block with at most three 1s is assigned a codeword. Those blocks with more than three 1s are not assigned codewords.
(a) What is the minimum required length of the assigned codewords if they are all to be of the same length?
(b) Calculate the probability of observing a 100-bit block that has no associated codeword.
(c) (Harder) Use Chebyshev’s inequality to bound the probability of observing a 100-bit block for which no codeword has been assigned. Compare the bound to the probability just calculated.
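The following Python sketch (mine; variable names are illustrative) brute-forces parts (a) and (b) and evaluates the Chebyshev bound for part (c):

    from math import ceil, comb, log2

    N, p1 = 100, 0.005

    # (a) Count the blocks with at most three 1s; a uniform code needs
    # ceil(log2(count)) bits.
    n_blocks = sum(comb(N, k) for k in range(4))
    print("codeword length:", ceil(log2(n_blocks)))

    # (b) Probability that a block has more than three 1s (no codeword).
    p_no_code = 1 - sum(comb(N, k) * p1 ** k * (1 - p1) ** (N - k)
                        for k in range(4))
    print("P(no codeword):", p_no_code)

    # (c) Chebyshev: P(|K - mu| >= t) <= var / t^2, where K is the number of
    # 1s, mu = N*p1, var = N*p1*(1-p1). "More than three 1s" means K >= 4,
    # which is contained in the event |K - mu| >= 4 - mu.
    mu, var = N * p1, N * p1 * (1 - p1)
    print("Chebyshev bound:", var / (4 - mu) ** 2)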
3. Typical Sets and Smallest δ-Sufficient Subsets (cf. Cover & Thomas, Problem 3.13)
Let X^N be an extended ensemble for X with A_X = {0, 1} and P_X = {0.4, 0.6}.
(a) Calculate the entropy H(X).
(b) Let N = 25 and β = 0.1.
i. Which sequences in X^N fall in the typical set T_{Nβ}? (You may find it helpful to refer to Table 1 below.)
ii. Compute P(x ∈ T_{Nβ}), the probability of a sequence from X^N falling in the typical set.
iii. With reference to Table 1 below, how many elements are there in T_{Nβ}?
iv. How many elements are in the smallest δ-sufficient subset S_δ for δ = 0.9?
v. What is the essential bit content H_δ(X^N) for δ = 0.9?
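The membership test for T_{Nβ} is mechanical, so the table can be checked with a few lines of Python. This sketch is mine (not part of the original question) and assumes the definition T_{Nβ} = {x : |−(1/N) log2 p(x) − H(X)| < β}:

    from math import comb, log2

    p0, p1, N, beta = 0.4, 0.6, 25, 0.1
    H = -(p0 * log2(p0) + p1 * log2(p1))   # entropy H(X), about 0.971 bits

    typical_ks, prob, size = [], 0.0, 0
    for k in range(N + 1):                 # k = number of 1s in a block
        # Per-symbol Shannon information of a block with k ones (column 4).
        info = -(k * log2(p1) + (N - k) * log2(p0)) / N
        if abs(info - H) < beta:           # the typical-set membership test
            typical_ks.append(k)
            prob += comb(N, k) * p1 ** k * p0 ** (N - k)
            size += comb(N, k)

    print(typical_ks)   # part i: which counts of 1s are typical
    print(prob, size)   # parts ii and iii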
4. Source Coding Theorem
Recall that the source coding theorem (for uniform codes) says that for any ensemble X:
(∀ε > 0)(∀δ ∈ (0, 1))(∃N0)(∀N > N0)   |(1/N) H_δ(X^N) − H(X)| ≤ ε.

(a) Near, an enthusiastic software developer, has just learned about the source coding theorem. He exclaims: “The theorem allows us to pick any ε > 0. So, if I pick ε = H(X) − ε′, I get that for sufficiently large N,
(1/N) H_δ(X^N) ≥ ε′.
This means that by making ε′ tiny, I can get away with using virtually zero bits per outcome. Great!”.
Is Near’s reasoning correct? Explain why or why not.
(b) Mello, a skeptical econometrician, has also just learned about the source coding theorem. He complains:
“The theorem is not really relevant to me. I am interested in coding blocks of outcomes where each outcome is dependent on the previous outcome, rather than them all being independent of each other. The source coding theorem is not useful in this case.”
Is Mello’s reasoning correct? Explain why or why not.
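One way to see what the theorem does and does not promise is to compute (1/N) H_δ(X^N) for increasing N. The Python sketch below is mine; for concreteness it assumes the Bernoulli ensemble of Question 3 (p0 = 0.4, p1 = 0.6) and δ = 0.1. The per-outcome essential bit content settles near H(X), not near zero:

    from math import ceil, comb, log2

    p0, p1, delta = 0.4, 0.6, 0.1
    H = -(p0 * log2(p0) + p1 * log2(p1))

    for N in (25, 100, 400):
        # All blocks with k ones share the probability p1^k p0^(N-k). Build
        # the smallest delta-sufficient subset by taking the most probable
        # blocks first.
        classes = sorted(range(N + 1), key=lambda k: -(p1 ** k * p0 ** (N - k)))
        mass, count = 0.0, 0
        for k in classes:
            pblock = p1 ** k * p0 ** (N - k)  # probability of one such block
            need = (1 - delta) - mass         # probability still to cover
            if comb(N, k) * pblock >= need:
                count += ceil(need / pblock)  # only part of this class needed
                break
            mass += comb(N, k) * pblock
            count += comb(N, k)
        print(N, log2(count) / N, "vs H(X) =", H)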
5. Coding Card Values
A standard deck of cards contains 4 suits — ♥, ♦, ♣, ♠ (“hearts”, “diamonds”, “clubs”, “spades”) — each with 13 values — A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K (the A, J, Q, K are called “Ace”, “Jack”, “Queen”, “King”).
Twenty cards are removed from a standard deck: All 13 ♠s; the A♥, 2♥, 3♥, 4♥, and 5♥; and the A♦ and K♦. Assume the deck is thoroughly shuffled and that after each draw from the deck the card drawn is replaced and the deck reshuffled. That is, repeated card draws are i.i.d. from a uniform distribution.
We would like to code the value V of randomly drawn cards.
(a) How many bits are required to uniformly code V?
(b) Compute p(V = v) for every possible value of v.
(c) Compute the δ-sufficient subsets S_δ for V using δ = 0, 1/16, 1/2, and their associated essential bit contents H_δ.
(d) Compute a typical set T_{Nβ} for N = 1 and β = 0.3.
(e) For δ = 1/16, is it possible to construct a block code for large choices of N such that only 2 bits per outcome are used on average (i.e., (1/N) H_δ(V^N) = 2)? Explain why or why not.
(f) Is it possible to construct a prefix code such that the face cards are given 3-bit codewords and non-face cards are given 4-bit codewords? If so, construct such a code.
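The arithmetic in this question can be checked with a Python sketch (mine; the dictionary simply encodes the card counts described above, and essential_bit_content is an illustrative helper name):

    from math import ceil, log2

    # Cards remaining per value after the 20 removals described above.
    counts = {"A": 1, "2": 2, "3": 2, "4": 2, "5": 2, "6": 3, "7": 3,
              "8": 3, "9": 3, "10": 3, "J": 3, "Q": 3, "K": 2}
    total = sum(counts.values())              # 32 cards remain
    probs = sorted(n / total for n in counts.values())

    print("uniform code length:", ceil(log2(len(counts))))   # part (a)

    def essential_bit_content(probs, delta):
        """H_delta = log2 |S_delta|, dropping least likely outcomes first."""
        keep, dropped = len(probs), 0.0
        for p in probs:                       # probs sorted ascending
            if dropped + p > delta:
                break
            dropped += p
            keep -= 1
        return log2(keep)

    for delta in (0, 1/16, 1/2):              # part (c)
        print(delta, essential_bit_content(probs, delta))

    # Part (f): Kraft sum for 3 face cards at 3 bits, 10 other values at 4.
    print("Kraft sum:", 3 * 2 ** -3 + 10 * 2 ** -4)   # 1.0, so it is possible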
6. Prefix Codes
Consider the codes C1 = {0, 01, 1101, 10101}, C2 = {00, 01, 100, 101}, and C3 = {0, 1, 00, 11}.
(a) Are C1, C2, and C3 prefix codes? Are they uniquely decodable?
(b) Construct new prefix codes C1′, C2′, and C3′ that have the same lengths as C1, C2, and C3, respectively. If this is not possible, explain why.
(c) Is it possible to construct a uniquely decodable code that has codeword lengths {1, 2, 3, 4, 4, 4}? If so, construct one.
(d) Is it possible to construct a uniquely decodable code with lengths {1, 3, 3, 4, 4, 4}? If so, construct one.
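Parts (c) and (d) reduce to Kraft's inequality: by the Kraft-McMillan theorem, a uniquely decodable code with lengths l_i exists if and only if Σ_i 2^(−l_i) ≤ 1, and in that case a prefix code with the same lengths also exists. A minimal Python check (mine):

    def kraft_sum(lengths):
        """Sum of 2^(-l) over the proposed codeword lengths."""
        return sum(2.0 ** -l for l in lengths)

    print(kraft_sum([1, 2, 3, 4, 4, 4]))  # 17/16 > 1: no such code exists
    print(kraft_sum([1, 3, 3, 4, 4, 4]))  # 15/16 <= 1: a prefix code exists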
7. Optimal Coding and Huffman Coding
Consider the ensemble X with probabilities P_X = p = {1/2, 1/4, 31/128, 1/128} and the code C = {0, 11, 100, 101}.
(a) What is the entropy H(X)?
(b) What is the expected length L(C, X)? Is C an optimal code for X?
(c) What are the code lengths for X? Construct a prefix Shannon code CS for X. Compute the expected code length L(CS, X).
(d) What are the probabilities q = {q1, q2, q3, q4} for the code lengths of C?
(e) Compute the relative entropy D(p∥q). What do you notice about D(p∥q), H(X), L(C, X), and L(CS, X)?
(f) Construct a Huffman code CH for X. How do its code lengths compare to those of C and CS? How do their expected code lengths compare?
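The quantities above, and a Huffman code, can be computed with the Python sketch below (mine, not the official solution; the symbols a-d simply stand in for the four outcomes of X):

    import heapq
    from math import log2

    p = {"a": 1/2, "b": 1/4, "c": 31/128, "d": 1/128}
    C = {"a": "0", "b": "11", "c": "100", "d": "101"}

    H = -sum(pi * log2(pi) for pi in p.values())      # entropy H(X)
    L = sum(p[s] * len(C[s]) for s in p)              # expected length L(C, X)
    q = {s: 2.0 ** -len(C[s]) for s in p}             # implicit probabilities
    D = sum(p[s] * log2(p[s] / q[s]) for s in p)      # relative entropy D(p||q)
    print(H, L, D)                                    # compare, cf. part (e)

    def huffman(p):
        """Return a Huffman code for the distribution p (symbol -> prob)."""
        # Heap entries: (probability, tiebreak counter, {symbol: codeword}).
        heap = [(pi, i, {s: ""}) for i, (s, pi) in enumerate(p.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            q1, _, c1 = heapq.heappop(heap)   # two least probable groups
            q2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (q1 + q2, counter, merged))
            counter += 1
        return heap[0][2]

    print(huffman(p))                                 # part (f)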

 k   (N choose k)   (N choose k) p1^k p0^(N-k)   -(1/N) log2 p(x)
 0          1              0.000000                 1.321928
 1         25              0.000000                 1.298530
 2        300              0.000000                 1.275131
 3       2300              0.000001                 1.251733
 4      12650              0.000007                 1.228334
 5      53130              0.000045                 1.204936
 6     177100              0.000227                 1.181537
 7     480700              0.000925                 1.158139
 8    1081575              0.003121                 1.134740
 9    2042975              0.008843                 1.111342
10    3268760              0.021222                 1.087943
11    4457400              0.043410                 1.064545
12    5200300              0.075967                 1.041146
13    5200300              0.113950                 1.017748
14    4457400              0.146507                 0.994349
15    3268760              0.161158                 0.970951
16    2042975              0.151086                 0.947552
17    1081575              0.119980                 0.924154
18     480700              0.079986                 0.900755
19     177100              0.044203                 0.877357
20      53130              0.019891                 0.853958
21      12650              0.007104                 0.830560
22       2300              0.001937                 0.807161
23        300              0.000379                 0.783763
24         25              0.000047                 0.760364
25          1              0.000003                 0.736966
Table 1: Table for Question 3. Column 1 shows k, the number of 1s in a block of length N = 25. Column 2 shows the number of such blocks. Column 3 shows the probability (N choose k) p1^k p0^(N−k) of drawing a block x with exactly k 1s. Column 4 shows the Shannon information per symbol in x, −(1/N) log2 p(x), where p(x) = p1^k p0^(N−k) is the probability of any particular such block.
