Tutorial 6
Q1. A diminished gradient is an issue when training Generative Adversarial Networks (GANs). In the literature, $\mathbb{E}_{z \sim p_z(z)}[-\log D(G(z))]$ is recommended as an alternative cost function for training the Generator.
a. What is the advantage of using this alternative cost function over the original one, i.e., $\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$?
b. Write the pseudocode for training a GAN with this alternative cost function.
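A minimal sketch of such a training loop is given below, using the non-saturating Generator cost $\mathbb{E}_{z \sim p_z(z)}[-\log D(G(z))]$. The PyTorch networks, the toy 2-D data, and the optimiser settings are illustrative assumptions, not part of the question; only the two loss lines carry the point of the exercise.

# Minimal GAN training sketch with the alternative (non-saturating)
# Generator cost E_z[-log D(G(z))].  Networks, data, and optimiser
# settings are illustrative assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.SGD(G.parameters(), lr=0.01)
opt_d = torch.optim.SGD(D.parameters(), lr=0.01)

m, k, eps = 64, 1, 1e-8   # minibatch size, D steps per iteration, log safety
for step in range(1000):
    for _ in range(k):
        # Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(z)))]
        x = torch.randn(m, 2) + 3.0          # stand-in for real data
        z = torch.randn(m, 8)                # noise from the prior p_z(z)
        loss_d = -(torch.log(D(x) + eps).mean()
                   + torch.log(1 - D(G(z).detach()) + eps).mean())
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: descend E[-log D(G(z))] instead of E[log(1 - D(G(z)))],
    # which keeps the gradient large when D(G(z)) is close to 0 early on.
    z = torch.randn(m, 8)
    loss_g = -torch.log(D(G(z)) + eps).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

With the original cost, loss_g would instead minimise torch.log(1 - D(G(z)) + eps).mean(), whose gradient vanishes precisely when the Generator is still poor and $D(G(z)) \approx 0$; this contrast is what part (a) asks about.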
Q2. The training of Generative Adversarial Networks (GANs) can be formulated as the optimisation problem shown below:
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))].$$
a. Find the optimal Discriminator $D(x)$, denoted as $D^*(x)$. Note that $x$ denotes the input sample to the Discriminator $D(x)$, which may be either a real or a generated sample.
b. Find the optimal $V(D, G)$.
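As a hint for both parts, here is a sketch of the standard derivation (following Goodfellow et al., 2014), written in LaTeX:

% (a) Write V(D,G) over the sample space and maximise the integrand pointwise:
\[
V(D, G) = \int_x \Big[ p_{data}(x) \log D(x) + p_g(x) \log\big(1 - D(x)\big) \Big]\, dx .
\]
% For constants a, b > 0, the map y -> a log y + b log(1 - y)
% attains its maximum on (0, 1) at y = a / (a + b), hence
\[
D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)} .
\]
% (b) Substituting D^* and rewriting via the Jensen--Shannon divergence:
\[
V(D^*, G) = -\log 4 + 2\, \mathrm{JSD}\big(p_{data} \,\|\, p_g\big) \;\ge\; -\log 4 ,
\]
% with equality exactly when p_g = p_{data}.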
Q3. When training a Generative Adversarial Network (GAN), we consider a dataset of real samples denoted as $X_{real} = \{x_1, x_2\}$ and the generated samples as $X_{fake} = \{\tilde{x}_1, \tilde{x}_2\}$, where
$$x_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \quad \tilde{x}_1 = \begin{bmatrix} 5 \\ 6 \end{bmatrix}, \quad \tilde{x}_2 = \begin{bmatrix} 7 \\ 8 \end{bmatrix}.$$
The Discriminator is given as
$$D(x) = \frac{1}{1 + e^{-(\theta_{d1} x_1 - \theta_{d2} x_2 - 2)}},$$
where $\theta_{d1}$ and $\theta_{d2}$ are parameters of the Discriminator, and $x = [x_1, x_2]^T$. Given $\theta_{d1} = 0.1$ and $\theta_{d2} = 0.2$. Each sample from the (real and fake) dataset has equal probability of being selected.
a. Given the datasets $X_{real}$ and $X_{fake}$, compute $V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\ln D(x)] + \mathbb{E}_{z \sim p_z(z)}[\ln(1 - D(G(z)))]$ (a numerical sketch covering both parts follows part b).
b. Assuming all real and fake samples are selected into the minibatch and $k = 1$ for GAN training, compute
$$\nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \Big[ \ln D\big(x^{(i)}\big) + \ln\big(1 - D(G(z^{(i)}))\big) \Big]$$
and determine the updated $\theta_{d1}$ and $\theta_{d2}$ for the next iteration using the learning rate $\eta = 0.02$.
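A numerical sketch for parts (a) and (b) follows. It assumes the exponent in $D(x)$ reads $\theta_{d1} x_1 - \theta_{d2} x_2 - 2$ for $x = [x_1, x_2]^T$, that the minibatch contains $m = 2$ (real, fake) pairs, and that the Discriminator update is the usual gradient-ascent step $\theta_d \leftarrow \theta_d + \eta \nabla_{\theta_d}(\cdot)$; NumPy does the arithmetic.

# Numerical sketch for Q3 parts (a) and (b).
# Assumes D(x) = sigmoid(a) with a = theta_d1*x_1 - theta_d2*x_2 - 2,
# and a gradient-ASCENT Discriminator update theta <- theta + eta*grad.
import numpy as np

X_real = np.array([[1., 2.], [3., 4.]])   # x1, x2
X_fake = np.array([[5., 6.], [7., 8.]])   # x~1 = G(z1), x~2 = G(z2)
theta = np.array([0.1, 0.2])              # [theta_d1, theta_d2]
eta = 0.02

def D(x, theta):
    a = theta[0] * x[0] - theta[1] * x[1] - 2.0
    return 1.0 / (1.0 + np.exp(-a))

# Part (a): V(D,G), each sample equally likely to be selected.
V = (np.mean([np.log(D(x, theta)) for x in X_real])
     + np.mean([np.log(1.0 - D(x, theta)) for x in X_fake]))
print("V(D,G) =", V)

# Part (b): gradient of (1/m) * sum_i [ln D(x^(i)) + ln(1 - D(G(z^(i))))].
# With D = sigmoid(a): d(ln D)/da = 1 - D and d(ln(1-D))/da = -D,
# while da/d(theta_d1) = x_1 and da/d(theta_d2) = -x_2.
m = len(X_real)                            # m = 2 (real, fake) pairs
grad = np.zeros(2)
for x in X_real:
    grad += (1.0 - D(x, theta)) * np.array([x[0], -x[1]]) / m
for xt in X_fake:
    grad += -D(xt, theta) * np.array([xt[0], -xt[1]]) / m
print("gradient =", grad)
print("updated theta_d =", theta + eta * grad)

The printed values can be checked by hand: for $x_1 = [1, 2]^T$ the exponent is $0.1 \cdot 1 - 0.2 \cdot 2 - 2 = -2.3$, so $D(x_1) = \sigma(-2.3)$, and the other three samples follow the same pattern.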