In [139]:
import tensorflow as tf
In [140]:
hello = tf.constant('Hello! Tensorflow')
print(hello)

tf.Tensor(b'Hello! Tensorflow', shape=(), dtype=string)
In [141]:
import numpy as np
In [142]:
mnist = tf.keras.datasets.mnist
In [143]:
(X_train, y_train), (X_test, y_test) = mnist.load_data()
In [144]:
# Q1- Use the same method explained in the lecture to show the first 6 elements of the dataset
# plot the first 6 images as grayscale
import matplotlib.pyplot as plt
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.imshow(X_train[i], cmap='gray')
# show the plot
plt.show()


In [145]:
# Q2- Normalize the data by dividing the values by 255.
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
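As a quick sanity check (a minimal sketch using a synthetic array in place of the MNIST batch), dividing uint8 pixel values by 255 promotes them to floats in the range [0, 1]:

```python
import numpy as np

# Synthetic stand-in for a batch of MNIST pixels (uint8 in [0, 255])
pixels = np.array([[0, 128, 255]], dtype=np.uint8)

# Division promotes the integers to float64 and rescales to [0, 1]
scaled = pixels / 255

print(scaled.min(), scaled.max())  # 0.0 1.0
```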
In [146]:
model1 = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)),
                                     tf.keras.layers.Dense(128, activation='sigmoid'),
                                     tf.keras.layers.Dense(10)])
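The size of this model can be checked by hand; the arithmetic below reproduces what `model1.summary()` would report as trainable parameters:

```python
# Flatten: 28x28 image -> 784-vector, no parameters
inputs = 28 * 28

# Dense(128): one weight per input/unit pair plus one bias per unit
hidden_params = inputs * 128 + 128
print(hidden_params)  # 100480

# Dense(10): output layer on top of the 128 hidden units
output_params = 128 * 10 + 10
print(output_params)  # 1290

print(hidden_params + output_params)  # 101770 trainable parameters
```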
In [147]:
predictions = model1(X_train[:1]).numpy()
predictions
Out[147]:
array([[ 0.46603602, -3.4545794 , 0.42402482, 4.4841733 , -6.55768 ,
5.050018 , -2.1956143 , 0.02335481, -0.24694973, -1.0054747 ]],
dtype=float32)
In [148]:
tf.nn.sigmoid(predictions).numpy()
Out[148]:
array([[0.6144451 , 0.03063259, 0.60444593, 0.9888397 , 0.00141716,
0.99363166, 0.10014502, 0.50583845, 0.43857443, 0.2678664 ]],
dtype=float32)
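Note that an element-wise sigmoid treats the 10 logits as independent scores, so the row above does not sum to 1. A softmax would give a proper distribution over the digits; a small NumPy sketch of the difference, using made-up logits close to the ones above:

```python
import numpy as np

logits = np.array([0.47, -3.45, 0.42, 4.48, -6.56, 5.05, -2.20, 0.02, -0.25, -1.01])

sigmoid = 1 / (1 + np.exp(-logits))              # independent per-class scores
softmax = np.exp(logits) / np.exp(logits).sum()  # normalized class probabilities

print(sigmoid.sum() > 1)        # True: not a distribution
print(round(softmax.sum(), 6))  # 1.0
print(int(softmax.argmax()))    # 5, the largest logit wins either way
```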
In [149]:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
In [150]:
loss_fn(y_train[:1], predictions).numpy()
Out[150]:
0.47170576
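For reference, a model whose logits carry no information (e.g. all zeros) has a cross-entropy of -ln(1/10) ≈ 2.30 per sample; the 0.47 above is just what the random initialization happens to give for this one example. A quick check of that baseline:

```python
import math
import numpy as np

num_classes = 10

# Uniform predictions assign probability 1/10 to the true class
baseline = -math.log(1 / num_classes)
print(round(baseline, 4))  # 2.3026

# Same value obtained by pushing all-zero logits through a softmax
logits = np.zeros(num_classes)
probs = np.exp(logits) / np.exp(logits).sum()
print(round(-math.log(probs[0]), 4))  # 2.3026 for any true class
```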
In [151]:
model1.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
model1.fit(X_train, y_train, epochs=5)

Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] – 3s 42us/sample – loss: 1.4234 – accuracy: 0.6954
Epoch 2/5
60000/60000 [==============================] – 2s 40us/sample – loss: 0.6989 – accuracy: 0.8433
Epoch 3/5
60000/60000 [==============================] – 2s 40us/sample – loss: 0.5227 – accuracy: 0.8713
Epoch 4/5
60000/60000 [==============================] – 2s 41us/sample – loss: 0.4472 – accuracy: 0.8847
Epoch 5/5
60000/60000 [==============================] – 2s 41us/sample – loss: 0.4052 – accuracy: 0.8917
Out[151]:

In [152]:
model2 = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)),
                                     tf.keras.layers.Dense(128, activation='sigmoid'),
                                     tf.keras.layers.Dense(128, activation='sigmoid'),
                                     tf.keras.layers.Dense(10)])
In [153]:
model2.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
model2.fit(X_train, y_train, epochs=10)

Train on 60000 samples
Epoch 1/10
60000/60000 [==============================] – 3s 46us/sample – loss: 2.1902 – accuracy: 0.3605
Epoch 2/10
60000/60000 [==============================] – 3s 43us/sample – loss: 1.6539 – accuracy: 0.6325
Epoch 3/10
60000/60000 [==============================] – 3s 43us/sample – loss: 1.0097 – accuracy: 0.7682
Epoch 4/10
60000/60000 [==============================] – 3s 47us/sample – loss: 0.7204 – accuracy: 0.8226
Epoch 5/10
60000/60000 [==============================] – 3s 43us/sample – loss: 0.5843 – accuracy: 0.8496
Epoch 6/10
60000/60000 [==============================] – 3s 43us/sample – loss: 0.5054 – accuracy: 0.8675
Epoch 7/10
60000/60000 [==============================] – 3s 44us/sample – loss: 0.4541 – accuracy: 0.8781
Epoch 8/10
60000/60000 [==============================] – 3s 43us/sample – loss: 0.4190 – accuracy: 0.8870
Epoch 9/10
60000/60000 [==============================] – 3s 43us/sample – loss: 0.3937 – accuracy: 0.8929
Epoch 10/10
60000/60000 [==============================] – 3s 45us/sample – loss: 0.3747 – accuracy: 0.8961
Out[153]:

In [154]:
hist = model2.fit(X_train, y_train, validation_split=0.2, epochs=10)

Train on 48000 samples, validate on 12000 samples
Epoch 1/10
48000/48000 [==============================] – 3s 55us/sample – loss: 0.3660 – accuracy: 0.8977 – val_loss: 0.3362 – val_accuracy: 0.9044
Epoch 2/10
48000/48000 [==============================] – 3s 55us/sample – loss: 0.3554 – accuracy: 0.8997 – val_loss: 0.3282 – val_accuracy: 0.9066
Epoch 3/10
48000/48000 [==============================] – 3s 56us/sample – loss: 0.3465 – accuracy: 0.9019 – val_loss: 0.3205 – val_accuracy: 0.9068
Epoch 4/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.3386 – accuracy: 0.9027 – val_loss: 0.3141 – val_accuracy: 0.9083
Epoch 5/10
48000/48000 [==============================] – 3s 54us/sample – loss: 0.3314 – accuracy: 0.9046 – val_loss: 0.3081 – val_accuracy: 0.9122
Epoch 6/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.3250 – accuracy: 0.9064 – val_loss: 0.3035 – val_accuracy: 0.9125
Epoch 7/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.3187 – accuracy: 0.9081 – val_loss: 0.2974 – val_accuracy: 0.9147
Epoch 8/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.3133 – accuracy: 0.9096 – val_loss: 0.2922 – val_accuracy: 0.9156
Epoch 9/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.3081 – accuracy: 0.9109 – val_loss: 0.2878 – val_accuracy: 0.9177
Epoch 10/10
48000/48000 [==============================] – 3s 52us/sample – loss: 0.3029 – accuracy: 0.9124 – val_loss: 0.2842 – val_accuracy: 0.9177
In [137]:
plt.subplot(2,1,1)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()


In [155]:
model2.evaluate(X_test, y_test)

10000/10000 [==============================] – 0s 48us/sample – loss: 0.2904 – accuracy: 0.9155
Out[155]:
[0.290443919467926, 0.9155]
In [108]:
#Q3- Add another layer with 128 nodes, increase the epochs to 10 and report the result. How much is the accuracy?
#Ans: Training accuracy reaches 91.24%, and test accuracy is 91.55%.
In [156]:
model3 = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)),
                                     tf.keras.layers.Dense(128, activation='sigmoid'),
                                     tf.keras.layers.Dense(128, activation='sigmoid'),
                                     tf.keras.layers.Dropout(0.2),
                                     tf.keras.layers.Dense(10)])
In [157]:
model3.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
model3.fit(X_train, y_train, validation_split=0.2, epochs=10)

Train on 48000 samples, validate on 12000 samples
Epoch 1/10
48000/48000 [==============================] – 3s 66us/sample – loss: 2.2880 – accuracy: 0.1430 – val_loss: 2.1541 – val_accuracy: 0.4526
Epoch 2/10
48000/48000 [==============================] – 3s 57us/sample – loss: 2.0378 – accuracy: 0.3265 – val_loss: 1.7499 – val_accuracy: 0.6379
Epoch 3/10
48000/48000 [==============================] – 3s 55us/sample – loss: 1.5294 – accuracy: 0.5395 – val_loss: 1.1936 – val_accuracy: 0.7040
Epoch 4/10
48000/48000 [==============================] – 3s 57us/sample – loss: 1.1222 – accuracy: 0.6558 – val_loss: 0.8938 – val_accuracy: 0.7903
Epoch 5/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.9053 – accuracy: 0.7251 – val_loss: 0.7273 – val_accuracy: 0.8191
Epoch 6/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.7702 – accuracy: 0.7694 – val_loss: 0.6169 – val_accuracy: 0.8462
Epoch 7/10
48000/48000 [==============================] – 3s 55us/sample – loss: 0.6750 – accuracy: 0.8013 – val_loss: 0.5432 – val_accuracy: 0.8609
Epoch 8/10
48000/48000 [==============================] – 3s 56us/sample – loss: 0.6075 – accuracy: 0.8212 – val_loss: 0.4910 – val_accuracy: 0.8724
Epoch 9/10
48000/48000 [==============================] – 3s 53us/sample – loss: 0.5591 – accuracy: 0.8366 – val_loss: 0.4535 – val_accuracy: 0.8794
Epoch 10/10
48000/48000 [==============================] – 3s 55us/sample – loss: 0.5240 – accuracy: 0.8484 – val_loss: 0.4247 – val_accuracy: 0.8852
Out[157]:

In [158]:
hist = model3.fit(X_train, y_train, validation_split=0.2, epochs=10)

Train on 48000 samples, validate on 12000 samples
Epoch 1/10
48000/48000 [==============================] – 3s 57us/sample – loss: 0.4939 – accuracy: 0.8549 – val_loss: 0.4025 – val_accuracy: 0.8884
Epoch 2/10
48000/48000 [==============================] – 3s 54us/sample – loss: 0.4700 – accuracy: 0.8650 – val_loss: 0.3855 – val_accuracy: 0.8928
Epoch 3/10
48000/48000 [==============================] – 3s 56us/sample – loss: 0.4491 – accuracy: 0.8686 – val_loss: 0.3708 – val_accuracy: 0.8957
Epoch 4/10
48000/48000 [==============================] – 3s 57us/sample – loss: 0.4353 – accuracy: 0.8736 – val_loss: 0.3592 – val_accuracy: 0.8973
Epoch 5/10
48000/48000 [==============================] – 3s 56us/sample – loss: 0.4218 – accuracy: 0.8785 – val_loss: 0.3494 – val_accuracy: 0.9003
Epoch 6/10
48000/48000 [==============================] – 3s 56us/sample – loss: 0.4101 – accuracy: 0.8787 – val_loss: 0.3409 – val_accuracy: 0.9023
Epoch 7/10
48000/48000 [==============================] – 3s 55us/sample – loss: 0.3997 – accuracy: 0.8834 – val_loss: 0.3339 – val_accuracy: 0.9032
Epoch 8/10
48000/48000 [==============================] – 3s 55us/sample – loss: 0.3923 – accuracy: 0.8844 – val_loss: 0.3265 – val_accuracy: 0.9044
Epoch 9/10
48000/48000 [==============================] – 3s 56us/sample – loss: 0.3845 – accuracy: 0.8877 – val_loss: 0.3215 – val_accuracy: 0.9059
Epoch 10/10
48000/48000 [==============================] – 3s 59us/sample – loss: 0.3764 – accuracy: 0.8903 – val_loss: 0.3155 – val_accuracy: 0.9083
In [159]:
plt.subplot(2,1,1)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()


In [160]:
model3.evaluate(X_test, y_test)

10000/10000 [==============================] – 0s 49us/sample – loss: 0.3204 – accuracy: 0.9093
Out[160]:
[0.32036801151633265, 0.9093]
In [166]:
model4 = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)),
                                     tf.keras.layers.Dense(128, activation='sigmoid'),
                                     tf.keras.layers.Dense(128, activation='sigmoid'),
                                     tf.keras.layers.Dropout(0.2),
                                     tf.keras.layers.Dense(10)])
In [167]:
model4.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
model4.fit(X_train, y_train, validation_split=0.2, epochs=30, batch_size=200)

Train on 48000 samples, validate on 12000 samples
Epoch 1/30
48000/48000 [==============================] – 1s 17us/sample – loss: 2.3670 – accuracy: 0.1060 – val_loss: 2.2804 – val_accuracy: 0.1423
Epoch 2/30
48000/48000 [==============================] – 1s 12us/sample – loss: 2.3174 – accuracy: 0.1213 – val_loss: 2.2584 – val_accuracy: 0.1319
Epoch 3/30
48000/48000 [==============================] – 1s 12us/sample – loss: 2.2949 – accuracy: 0.1340 – val_loss: 2.2349 – val_accuracy: 0.3977
Epoch 4/30
48000/48000 [==============================] – 1s 12us/sample – loss: 2.2668 – accuracy: 0.1595 – val_loss: 2.2094 – val_accuracy: 0.3453
Epoch 5/30
48000/48000 [==============================] – 1s 12us/sample – loss: 2.2414 – accuracy: 0.1785 – val_loss: 2.1807 – val_accuracy: 0.4672
Epoch 6/30
48000/48000 [==============================] – 1s 13us/sample – loss: 2.2073 – accuracy: 0.2120 – val_loss: 2.1453 – val_accuracy: 0.5554
Epoch 7/30
48000/48000 [==============================] – 1s 13us/sample – loss: 2.1695 – accuracy: 0.2442 – val_loss: 2.1047 – val_accuracy: 0.5567
Epoch 8/30
48000/48000 [==============================] – 1s 12us/sample – loss: 2.1256 – accuracy: 0.2796 – val_loss: 2.0544 – val_accuracy: 0.5496
Epoch 9/30
48000/48000 [==============================] – 1s 12us/sample – loss: 2.0725 – accuracy: 0.3214 – val_loss: 1.9948 – val_accuracy: 0.5860
Epoch 10/30
48000/48000 [==============================] – 1s 13us/sample – loss: 2.0112 – accuracy: 0.3565 – val_loss: 1.9253 – val_accuracy: 0.6095
Epoch 11/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.9349 – accuracy: 0.3972 – val_loss: 1.8442 – val_accuracy: 0.6408
Epoch 12/30
48000/48000 [==============================] – 1s 13us/sample – loss: 1.8578 – accuracy: 0.4312 – val_loss: 1.7550 – val_accuracy: 0.6445
Epoch 13/30
48000/48000 [==============================] – 1s 13us/sample – loss: 1.7716 – accuracy: 0.4636 – val_loss: 1.6609 – val_accuracy: 0.6526
Epoch 14/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.6796 – accuracy: 0.4979 – val_loss: 1.5649 – val_accuracy: 0.6539
Epoch 15/30
48000/48000 [==============================] – 1s 13us/sample – loss: 1.5896 – accuracy: 0.5240 – val_loss: 1.4711 – val_accuracy: 0.6662
Epoch 16/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.5041 – accuracy: 0.5451 – val_loss: 1.3830 – val_accuracy: 0.6948
Epoch 17/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.4208 – accuracy: 0.5712 – val_loss: 1.3013 – val_accuracy: 0.7023
Epoch 18/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.3462 – accuracy: 0.5871 – val_loss: 1.2278 – val_accuracy: 0.7153
Epoch 19/30
48000/48000 [==============================] – 1s 13us/sample – loss: 1.2794 – accuracy: 0.6069 – val_loss: 1.1620 – val_accuracy: 0.7249
Epoch 20/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.2212 – accuracy: 0.6244 – val_loss: 1.1034 – val_accuracy: 0.7371
Epoch 21/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.1678 – accuracy: 0.6389 – val_loss: 1.0515 – val_accuracy: 0.7406
Epoch 22/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.1191 – accuracy: 0.6519 – val_loss: 1.0046 – val_accuracy: 0.7597
Epoch 23/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.0797 – accuracy: 0.6647 – val_loss: 0.9633 – val_accuracy: 0.7691
Epoch 24/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.0404 – accuracy: 0.6755 – val_loss: 0.9250 – val_accuracy: 0.7760
Epoch 25/30
48000/48000 [==============================] – 1s 12us/sample – loss: 1.0044 – accuracy: 0.6909 – val_loss: 0.8908 – val_accuracy: 0.7846
Epoch 26/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.9712 – accuracy: 0.7016 – val_loss: 0.8595 – val_accuracy: 0.7896
Epoch 27/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.9422 – accuracy: 0.7092 – val_loss: 0.8302 – val_accuracy: 0.7959
Epoch 28/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.9130 – accuracy: 0.7176 – val_loss: 0.8040 – val_accuracy: 0.8007
Epoch 29/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.8856 – accuracy: 0.7262 – val_loss: 0.7790 – val_accuracy: 0.8057
Epoch 30/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.8667 – accuracy: 0.7344 – val_loss: 0.7560 – val_accuracy: 0.8104
Out[167]:

In [168]:
hist = model4.fit(X_train, y_train, validation_split=0.2, epochs=30, batch_size=200)

Train on 48000 samples, validate on 12000 samples
Epoch 1/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.8409 – accuracy: 0.7436 – val_loss: 0.7344 – val_accuracy: 0.8183
Epoch 2/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.8215 – accuracy: 0.7497 – val_loss: 0.7144 – val_accuracy: 0.8196
Epoch 3/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.8042 – accuracy: 0.7541 – val_loss: 0.6956 – val_accuracy: 0.8243
Epoch 4/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.7828 – accuracy: 0.7618 – val_loss: 0.6779 – val_accuracy: 0.8293
Epoch 5/30
48000/48000 [==============================] – 1s 11us/sample – loss: 0.7642 – accuracy: 0.7682 – val_loss: 0.6610 – val_accuracy: 0.8314
Epoch 6/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.7499 – accuracy: 0.7704 – val_loss: 0.6454 – val_accuracy: 0.8371
Epoch 7/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.7356 – accuracy: 0.7749 – val_loss: 0.6307 – val_accuracy: 0.8404
Epoch 8/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.7221 – accuracy: 0.7811 – val_loss: 0.6167 – val_accuracy: 0.8413
Epoch 9/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.7056 – accuracy: 0.7876 – val_loss: 0.6036 – val_accuracy: 0.8470
Epoch 10/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.6913 – accuracy: 0.7930 – val_loss: 0.5908 – val_accuracy: 0.8484
Epoch 11/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.6800 – accuracy: 0.7950 – val_loss: 0.5790 – val_accuracy: 0.8508
Epoch 12/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.6705 – accuracy: 0.7980 – val_loss: 0.5677 – val_accuracy: 0.8537
Epoch 13/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.6550 – accuracy: 0.8034 – val_loss: 0.5570 – val_accuracy: 0.8556
Epoch 14/30
48000/48000 [==============================] – 1s 11us/sample – loss: 0.6465 – accuracy: 0.8058 – val_loss: 0.5469 – val_accuracy: 0.8572
Epoch 15/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.6364 – accuracy: 0.8086 – val_loss: 0.5373 – val_accuracy: 0.8603
Epoch 16/30
48000/48000 [==============================] – 1s 11us/sample – loss: 0.6271 – accuracy: 0.8121 – val_loss: 0.5285 – val_accuracy: 0.8616
Epoch 17/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.6166 – accuracy: 0.8155 – val_loss: 0.5196 – val_accuracy: 0.8632
Epoch 18/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.6073 – accuracy: 0.8191 – val_loss: 0.5115 – val_accuracy: 0.8648
Epoch 19/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.5982 – accuracy: 0.8218 – val_loss: 0.5037 – val_accuracy: 0.8664
Epoch 20/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.5894 – accuracy: 0.8248 – val_loss: 0.4962 – val_accuracy: 0.8670
Epoch 21/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.5807 – accuracy: 0.8281 – val_loss: 0.4895 – val_accuracy: 0.8686
Epoch 22/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.5780 – accuracy: 0.8272 – val_loss: 0.4827 – val_accuracy: 0.8694
Epoch 23/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.5649 – accuracy: 0.8324 – val_loss: 0.4762 – val_accuracy: 0.8704
Epoch 24/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.5632 – accuracy: 0.8314 – val_loss: 0.4704 – val_accuracy: 0.8720
Epoch 25/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.5555 – accuracy: 0.8348 – val_loss: 0.4645 – val_accuracy: 0.8742
Epoch 26/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.5513 – accuracy: 0.8357 – val_loss: 0.4592 – val_accuracy: 0.8753
Epoch 27/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.5424 – accuracy: 0.8406 – val_loss: 0.4538 – val_accuracy: 0.8763
Epoch 28/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.5364 – accuracy: 0.8417 – val_loss: 0.4489 – val_accuracy: 0.8766
Epoch 29/30
48000/48000 [==============================] – 1s 12us/sample – loss: 0.5322 – accuracy: 0.8413 – val_loss: 0.4441 – val_accuracy: 0.8779
Epoch 30/30
48000/48000 [==============================] – 1s 13us/sample – loss: 0.5271 – accuracy: 0.8449 – val_loss: 0.4396 – val_accuracy: 0.8799
In [169]:
plt.subplot(2,1,1)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()


In [170]:
model4.evaluate(X_test, y_test)

10000/10000 [==============================] – 0s 47us/sample – loss: 0.4465 – accuracy: 0.8761
Out[170]:
[0.4465168588638306, 0.8761]
In [ ]:
#Q4- Now train the model again using stochastic gradient descent with a batch size of 200. What do you expect to be changed? Speed or accuracy?

#Ans: Each epoch runs much faster (about 1s instead of 3s, since there are far fewer weight updates per epoch), but 30 epochs are no longer enough to reach the earlier accuracy: test accuracy drops to about 87.6%.
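The speed difference follows from the number of weight updates per epoch. Assuming the earlier runs used Keras' default batch size of 32:

```python
train_samples = 48_000  # 60,000 images minus the 20% validation split

batch_default = 32   # Keras' default when batch_size is not given
batch_large = 200

updates_default = -(-train_samples // batch_default)  # ceiling division
updates_large = -(-train_samples // batch_large)

print(updates_default)  # 1500 gradient updates per epoch
print(updates_large)    # 240: ~6x fewer, hence the faster epochs
```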
In [173]:
import pandas as pd
In [174]:
file = 'C:/Users/angel/Desktop/pima-indians-diabetes.csv'
dataframe = pd.read_csv(file)
dataframe.head()
Out[174]:

   ID  preg_no  plasma  diastolic  triceps  serum  mass  pedigree  age  diabete
0   1        6     148         72       35      0  33.6     0.627   50        1
1   2        1      85         66       29      0  26.6     0.351   31        0
2   3        8     183         64        0      0  23.3     0.672   32        1
3   4        1      89         66       23     94  28.1     0.167   21        0
4   5        0     137         40       35    168  43.1     2.288   33        1
In [175]:
dataframe.describe()
Out[175]:

               ID     preg_no      plasma   diastolic     triceps       serum        mass    pedigree         age     diabete
count  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000
mean   384.500000    3.845052  120.894531   69.105469   20.536458   79.799479   31.992578    0.471876   33.240885    0.348958
std    221.846794    3.369578   31.972618   19.355807   15.952218  115.244002    7.884160    0.331329   11.760232    0.476951
min      1.000000    0.000000    0.000000    0.000000    0.000000    0.000000    0.000000    0.078000   21.000000    0.000000
25%    192.750000    1.000000   99.000000   62.000000    0.000000    0.000000   27.300000    0.243750   24.000000    0.000000
50%    384.500000    3.000000  117.000000   72.000000   23.000000   30.500000   32.000000    0.372500   29.000000    0.000000
75%    576.250000    6.000000  140.250000   80.000000   32.000000  127.250000   36.600000    0.626250   41.000000    1.000000
max    768.000000   17.000000  199.000000  122.000000   99.000000  846.000000   67.100000    2.420000   81.000000    1.000000
In [176]:
y = dataframe['diabete']
X = dataframe.drop(['ID', 'diabete'], axis=1)
In [177]:
from sklearn.model_selection import train_test_split
In [178]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1234)
In [179]:
# X_test = (X_test - X_test.mean()) / X_test.std()
# Note: this scales only X_test, using the test set's own maxima;
# X_train is fed to the model unscaled.
X_test = X_test / X_test.max()
In [180]:
X_test.describe()
Out[180]:

          preg_no      plasma   diastolic     triceps       serum        mass    pedigree         age
count  154.000000  154.000000  154.000000  154.000000  154.000000  154.000000  154.000000  154.000000
mean     0.211994    0.604157    0.652291    0.214876    0.100099    0.562442    0.220079    0.457978
std      0.185092    0.162984    0.176076    0.169067    0.149095    0.138768    0.163284    0.161106
min      0.000000    0.000000    0.000000    0.000000    0.000000    0.000000    0.041219    0.300000
25%      0.058824    0.502513    0.603774    0.000000    0.000000    0.486038    0.109918    0.342857
50%      0.176471    0.587940    0.660377    0.232323    0.000000    0.567190    0.172177    0.400000
75%      0.294118    0.692211    0.754717    0.333333    0.161290    0.640052    0.277587    0.542857
max      1.000000    1.000000    1.000000    1.000000    1.000000    1.000000    1.000000    1.000000
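One caveat about the scaling above: only X_test is divided by its own column maxima, so the training features reach the network on a different (unscaled) range, and the test transform uses test-set information. A minimal NumPy sketch (with illustrative arrays, not the diabetes data) of the usual alternative, deriving the scale from the training split and applying it to both:

```python
import numpy as np

X_train_demo = np.array([[1.0, 10.0],
                         [2.0, 40.0],
                         [4.0, 20.0]])
X_test_demo = np.array([[3.0, 50.0]])

# Compute column maxima on the TRAINING data only...
train_max = X_train_demo.max(axis=0)

# ...and apply the same transform to both splits
X_train_scaled = X_train_demo / train_max
X_test_scaled = X_test_demo / train_max

print(X_train_scaled.max())    # 1.0 by construction
print(X_test_scaled.tolist())  # [[0.75, 1.25]] -- test values may exceed 1
```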
In [181]:
model_struc = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(8,)),
    tf.keras.layers.Dense(12, activation='sigmoid'),
    tf.keras.layers.Dense(8, activation='sigmoid'),
    tf.keras.layers.Dense(4)])
In [182]:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
In [183]:
model_struc.compile(optimizer='Adam', loss=loss_fn, metrics=['accuracy'])
model_output = model_struc.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10)

Train on 614 samples, validate on 154 samples
Epoch 1/10
614/614 [==============================] – 0s 540us/sample – loss: 1.0688 – accuracy: 0.6531 – val_loss: 1.0455 – val_accuracy: 0.6429
Epoch 2/10
614/614 [==============================] – 0s 78us/sample – loss: 0.9841 – accuracy: 0.6531 – val_loss: 0.9771 – val_accuracy: 0.6429
Epoch 3/10
614/614 [==============================] – 0s 75us/sample – loss: 0.9205 – accuracy: 0.6531 – val_loss: 0.9237 – val_accuracy: 0.6429
Epoch 4/10
614/614 [==============================] – 0s 80us/sample – loss: 0.8703 – accuracy: 0.6531 – val_loss: 0.8841 – val_accuracy: 0.6429
Epoch 5/10
614/614 [==============================] – 0s 76us/sample – loss: 0.8331 – accuracy: 0.6531 – val_loss: 0.8519 – val_accuracy: 0.6429
Epoch 6/10
614/614 [==============================] – 0s 75us/sample – loss: 0.8048 – accuracy: 0.6531 – val_loss: 0.8260 – val_accuracy: 0.6429
Epoch 7/10
614/614 [==============================] – 0s 78us/sample – loss: 0.7824 – accuracy: 0.6531 – val_loss: 0.8041 – val_accuracy: 0.6429
Epoch 8/10
614/614 [==============================] – 0s 76us/sample – loss: 0.7641 – accuracy: 0.6531 – val_loss: 0.7876 – val_accuracy: 0.6429
Epoch 9/10
614/614 [==============================] – 0s 71us/sample – loss: 0.7491 – accuracy: 0.6531 – val_loss: 0.7728 – val_accuracy: 0.6429
Epoch 10/10
614/614 [==============================] – 0s 78us/sample – loss: 0.7367 – accuracy: 0.6531 – val_loss: 0.7600 – val_accuracy: 0.6429
In [184]:
plt.subplot(2,1,1)
plt.plot(model_output.history['accuracy'])
plt.plot(model_output.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(model_output.history['loss'])
plt.plot(model_output.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()


In [185]:
model_struc1 = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(8,)),
    tf.keras.layers.Dense(12, activation='sigmoid'),
    tf.keras.layers.Dense(8, activation='sigmoid'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4)])
In [186]:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
In [187]:
model_struc1.compile(optimizer='Adam', loss=loss_fn, metrics=['accuracy'])
model_output1 = model_struc1.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10)

Train on 614 samples, validate on 154 samples
Epoch 1/10
614/614 [==============================] – 0s 591us/sample – loss: 1.7092 – accuracy: 0.3453 – val_loss: 1.4820 – val_accuracy: 0.3571
Epoch 2/10
614/614 [==============================] – 0s 78us/sample – loss: 1.5449 – accuracy: 0.3453 – val_loss: 1.3598 – val_accuracy: 0.3571
Epoch 3/10
614/614 [==============================] – 0s 81us/sample – loss: 1.3955 – accuracy: 0.3436 – val_loss: 1.2520 – val_accuracy: 0.3571
Epoch 4/10
614/614 [==============================] – 0s 83us/sample – loss: 1.2633 – accuracy: 0.3453 – val_loss: 1.1556 – val_accuracy: 0.3571
Epoch 5/10
614/614 [==============================] – 0s 83us/sample – loss: 1.1473 – accuracy: 0.3550 – val_loss: 1.0729 – val_accuracy: 0.3571
Epoch 6/10
614/614 [==============================] – 0s 83us/sample – loss: 1.0603 – accuracy: 0.3453 – val_loss: 1.0062 – val_accuracy: 0.3571
Epoch 7/10
614/614 [==============================] – 0s 78us/sample – loss: 0.9931 – accuracy: 0.3974 – val_loss: 0.9493 – val_accuracy: 0.3571
Epoch 8/10
614/614 [==============================] – 0s 80us/sample – loss: 0.9203 – accuracy: 0.4251 – val_loss: 0.9027 – val_accuracy: 0.3571
Epoch 9/10
614/614 [==============================] – 0s 78us/sample – loss: 0.8948 – accuracy: 0.4495 – val_loss: 0.8653 – val_accuracy: 0.3571
Epoch 10/10
614/614 [==============================] – 0s 80us/sample – loss: 0.8523 – accuracy: 0.5358 – val_loss: 0.8379 – val_accuracy: 0.6429
In [188]:
plt.subplot(2,1,1)
plt.plot(model_output1.history['accuracy'])
plt.plot(model_output1.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(model_output1.history['loss'])
plt.plot(model_output1.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()


In [68]:
#Ans: Accuracy decreases when dropout is added; the regularization slows convergence, and 10 epochs are not enough for this small network to recover.
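That drop is expected: during training, Dropout(0.2) zeroes a random 20% of activations and (in Keras' inverted-dropout scheme) scales the survivors by 1/(1-rate), so each step trains a noisier sub-network; at inference the layer is an identity. A NumPy sketch of the training-time behaviour:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.2
x = np.ones(10_000)  # stand-in for a layer's activations

# Inverted dropout: zero ~20% of units, scale the rest by 1/(1-rate)
mask = rng.random(x.shape) >= rate
dropped = np.where(mask, x / (1 - rate), 0.0)

print(np.unique(dropped))        # [0.   1.25]: kept units are scaled up
print(round(dropped.mean(), 1))  # ~1.0: expected activation is preserved
```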