In [2]:
import tensorflow as tf
In [4]:
hello = tf.constant('Hello! Tensorflow')
print(hello)

tf.Tensor(b'Hello! Tensorflow', shape=(), dtype=string)
In [15]:
import numpy as np
In [5]:
mnist = tf.keras.datasets.mnist
In [90]:
(X_train, y_train), (X_test, y_test) = mnist.load_data()
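As a quick sanity check (not required by the assignment), the shapes of the loaded arrays can be printed; MNIST ships as 60,000 training and 10,000 test images of 28x28 pixels.

# Optional sanity check on the loaded MNIST arrays
print(X_train.shape, y_train.shape)   # expected: (60000, 28, 28) (60000,)
print(X_test.shape, y_test.shape)     # expected: (10000, 28, 28) (10000,)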

Q1- Use the same method explained in the lecture to show the first 6 elements of the dataset.
In [11]:
# plot 6 images as gray scale
import matplotlib.pyplot as plt
plt.subplot(231)
plt.imshow(X_train[0], cmap=plt.get_cmap('gray'))
plt.subplot(232)
plt.imshow(X_train[1], cmap=plt.get_cmap('gray'))
plt.subplot(233)
plt.imshow(X_train[2], cmap=plt.get_cmap('gray'))
plt.subplot(234)
plt.imshow(X_train[3], cmap=plt.get_cmap('gray'))
plt.subplot(235)
plt.imshow(X_train[4], cmap=plt.get_cmap('gray'))
plt.subplot(236)
plt.imshow(X_train[5], cmap=plt.get_cmap('gray'))
# show the plot
plt.show()
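The same six-panel figure can also be produced with a loop; this is just a more compact, equivalent sketch of the cell above, not a required change.

# Loop-based equivalent of the six subplot calls above
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.imshow(X_train[i], cmap=plt.get_cmap('gray'))
plt.show()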

Q2- Normalize the data by dividing the values by 255.
In [91]:
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
In [13]:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dense(10)
])
In [17]:
predictions = model(X_train[:1]).numpy()
predictions

WARNING:tensorflow:Layer flatten is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

Out[17]:
array([[-0.06442886, -0.9217786 , 0.24707636, -0.36117092, -0.18383479,
-0.14080182, -0.35930985, -0.20997886, 0.39601555, 0.52246016]],
dtype=float32)
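The dtype warning above is harmless. If you want to avoid it, one option (an optional tweak, not something the notebook does) is to cast the inputs to float32 before calling the model:

# Optional: cast the normalized images to float32 so Keras does not down-cast from float64
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')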
In [18]:
tf.nn.sigmoid(predictions).numpy()
Out[18]:
array([[0.48389834, 0.28459564, 0.56145674, 0.41067615, 0.4541703 ,
0.46485758, 0.41112664, 0.44769728, 0.59773 , 0.62772286]],
dtype=float32)
In [19]:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
In [20]:
loss_fn(y_train[:1], predictions).numpy()
Out[20]:
2.4123201
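Because the loss is constructed with from_logits=True, it applies a softmax to the raw model outputs internally before computing the cross-entropy. A rough manual check of the value above (an optional sketch, reusing the predictions array and the numpy import from earlier):

# Softmax the logits, then take the negative log-probability of the true class
probs = tf.nn.softmax(predictions).numpy()
print(-np.log(probs[0, y_train[0]]))   # should be close to the loss value above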
In [21]:
model.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5)

Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] – 8s 140us/sample – loss: 1.4277 – accuracy: 0.7063
Epoch 2/5
60000/60000 [==============================] – 5s 82us/sample – loss: 0.7005 – accuracy: 0.8483
Epoch 3/5
60000/60000 [==============================] – 5s 78us/sample – loss: 0.5216 – accuracy: 0.8724
Epoch 4/5
60000/60000 [==============================] – 5s 84us/sample – loss: 0.4468 – accuracy: 0.8842
Epoch 5/5
60000/60000 [==============================] – 6s 92us/sample – loss: 0.4054 – accuracy: 0.8910
Out[21]:

Q3- Add another layer with 128 nodes, increase the epochs to 10, and report the result. What is the accuracy? (model.evaluate can tell you the accuracy.)
In [22]:
model2 = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dense(10)
])
In [24]:
model2.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
model2.fit(X_train, y_train, epochs=10)

Train on 60000 samples
Epoch 1/10
60000/60000 [==============================] – 6s 101us/sample – loss: 2.1926 – accuracy: 0.3429
Epoch 2/10
60000/60000 [==============================] – 6s 98us/sample – loss: 1.6887 – accuracy: 0.6053
Epoch 3/10
60000/60000 [==============================] – 5s 91us/sample – loss: 1.0752 – accuracy: 0.7591
Epoch 4/10
60000/60000 [==============================] – 5s 87us/sample – loss: 0.7487 – accuracy: 0.8228
Epoch 5/10
60000/60000 [==============================] – 6s 95us/sample – loss: 0.5912 – accuracy: 0.8498
Epoch 6/10
60000/60000 [==============================] – 6s 98us/sample – loss: 0.5052 – accuracy: 0.8677
Epoch 7/10
60000/60000 [==============================] – 5s 91us/sample – loss: 0.4529 – accuracy: 0.8793
Epoch 8/10
60000/60000 [==============================] – 6s 99us/sample – loss: 0.4181 – accuracy: 0.8862
Epoch 9/10
60000/60000 [==============================] – 6s 93us/sample – loss: 0.3936 – accuracy: 0.8912
Epoch 10/10
60000/60000 [==============================] – 7s 124us/sample – loss: 0.3751 – accuracy: 0.8946
Out[24]:

In [31]:
hist = model2.fit(X_train, y_train, validation_split=0.2, epochs=10)

Train on 48000 samples, validate on 12000 samples
Epoch 1/10
48000/48000 [==============================] – 6s 126us/sample – loss: 0.3670 – accuracy: 0.8960 – val_loss: 0.3364 – val_accuracy: 0.9043
Epoch 2/10
48000/48000 [==============================] – 6s 121us/sample – loss: 0.3568 – accuracy: 0.8989 – val_loss: 0.3275 – val_accuracy: 0.9081
Epoch 3/10
48000/48000 [==============================] – 5s 95us/sample – loss: 0.3479 – accuracy: 0.9012 – val_loss: 0.3203 – val_accuracy: 0.9082
Epoch 4/10
48000/48000 [==============================] – 5s 110us/sample – loss: 0.3399 – accuracy: 0.9028 – val_loss: 0.3138 – val_accuracy: 0.9118
Epoch 5/10
48000/48000 [==============================] – 5s 108us/sample – loss: 0.3326 – accuracy: 0.9051 – val_loss: 0.3090 – val_accuracy: 0.9114
Epoch 6/10
48000/48000 [==============================] – 5s 103us/sample – loss: 0.3262 – accuracy: 0.9062 – val_loss: 0.3026 – val_accuracy: 0.9138
Epoch 7/10
48000/48000 [==============================] – 5s 109us/sample – loss: 0.3201 – accuracy: 0.9082 – val_loss: 0.2973 – val_accuracy: 0.9142
Epoch 8/10
48000/48000 [==============================] – 7s 141us/sample – loss: 0.3146 – accuracy: 0.9097 – val_loss: 0.2928 – val_accuracy: 0.9162
Epoch 9/10
48000/48000 [==============================] – 6s 116us/sample – loss: 0.3094 – accuracy: 0.9108 – val_loss: 0.2875 – val_accuracy: 0.9178
Epoch 10/10
48000/48000 [==============================] – 6s 130us/sample – loss: 0.3042 – accuracy: 0.9129 – val_loss: 0.2836 – val_accuracy: 0.9189
In [34]:
model2.evaluate(X_test, y_test)

10000/10000 [==============================] – 1s 64us/sample – loss: 0.2904 – accuracy: 0.9160
Out[34]:
[0.2903906262457371, 0.916]

Q3 (Answer): The accuracy of the model on the training dataset is 91.29%; on the test data it is 91.60%.
In [35]:
plt.subplot(2,1,1)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()


In [88]:
model3 = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
In [92]:
model3.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
hist = model3.fit(X_train, y_train, validation_split=0.2, epochs=10)

Train on 48000 samples, validate on 12000 samples
Epoch 1/10
48000/48000 [==============================] – 6s 116us/sample – loss: 2.2731 – accuracy: 0.1567 – val_loss: 2.1196 – val_accuracy: 0.4916
Epoch 2/10
48000/48000 [==============================] – 5s 108us/sample – loss: 1.9763 – accuracy: 0.3650 – val_loss: 1.6527 – val_accuracy: 0.6860
Epoch 3/10
48000/48000 [==============================] – 5s 110us/sample – loss: 1.4431 – accuracy: 0.5723 – val_loss: 1.1113 – val_accuracy: 0.7566
Epoch 4/10
48000/48000 [==============================] – 5s 102us/sample – loss: 1.0545 – accuracy: 0.6840 – val_loss: 0.8303 – val_accuracy: 0.8020
Epoch 5/10
48000/48000 [==============================] – 5s 105us/sample – loss: 0.8496 – accuracy: 0.7449 – val_loss: 0.6779 – val_accuracy: 0.8352
Epoch 6/10
48000/48000 [==============================] – 5s 105us/sample – loss: 0.7277 – accuracy: 0.7823 – val_loss: 0.5857 – val_accuracy: 0.8536
Epoch 7/10
48000/48000 [==============================] – 5s 103us/sample – loss: 0.6451 – accuracy: 0.8082 – val_loss: 0.5230 – val_accuracy: 0.8619
Epoch 8/10
48000/48000 [==============================] – 5s 105us/sample – loss: 0.5897 – accuracy: 0.8260 – val_loss: 0.4785 – val_accuracy: 0.8702
Epoch 9/10
48000/48000 [==============================] – 5s 113us/sample – loss: 0.5461 – accuracy: 0.8394 – val_loss: 0.4471 – val_accuracy: 0.8775
Epoch 10/10
48000/48000 [==============================] – 5s 103us/sample – loss: 0.5098 – accuracy: 0.8494 – val_loss: 0.4202 – val_accuracy: 0.8861
In [93]:
plt.subplot(2,1,1)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()


In [94]:
model3.evaluate(X_test, y_test)

10000/10000 [==============================] – 1s 71us/sample – loss: 0.4241 – accuracy: 0.8806
Out[94]:
[0.4241497843027115, 0.8806]

Adding dropout has not really improved the accuracy of the model in this case (test accuracy is about 88.1%, compared with 91.6% without dropout). The accuracy-vs-epoch curve also shows the model getting close to its final accuracy within the first few epochs and then flattening out, so running additional epochs would add little further improvement.
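One detail worth keeping in mind when reading these curves: dropout is only active during training, so the reported training accuracy is computed with 20% of the units after the second hidden layer randomly disabled, while the validation and test numbers use the full network. A small illustrative sketch (optional, assuming model3 and the normalized X_test defined above):

# Dropout behaves differently at training time vs. inference time
sample = X_test[:1]
print(model3(sample, training=True).numpy())    # dropout active: varies from run to run
print(model3(sample, training=False).numpy())   # dropout disabled: deterministic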
In [95]:
model4 = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
In [102]:
model4.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
hist = model4.fit(X_train, y_train, validation_split=0.2, epochs=30, batch_size=200)

Train on 48000 samples, validate on 12000 samples
Epoch 1/30
48000/48000 [==============================] – 2s 45us/sample – loss: 0.8176 – accuracy: 0.7551 – val_loss: 0.7092 – val_accuracy: 0.8288
Epoch 2/30
48000/48000 [==============================] – 2s 34us/sample – loss: 0.7957 – accuracy: 0.7650 – val_loss: 0.6885 – val_accuracy: 0.8309
Epoch 3/30
48000/48000 [==============================] – 2s 34us/sample – loss: 0.7796 – accuracy: 0.7674 – val_loss: 0.6691 – val_accuracy: 0.8344
Epoch 4/30
48000/48000 [==============================] – 2s 33us/sample – loss: 0.7583 – accuracy: 0.7754 – val_loss: 0.6514 – val_accuracy: 0.8379
Epoch 5/30
48000/48000 [==============================] – 2s 34us/sample – loss: 0.7435 – accuracy: 0.7787 – val_loss: 0.6348 – val_accuracy: 0.8423
Epoch 6/30
48000/48000 [==============================] – 2s 34us/sample – loss: 0.7232 – accuracy: 0.7843 – val_loss: 0.6196 – val_accuracy: 0.8447
Epoch 7/30
48000/48000 [==============================] – 2s 33us/sample – loss: 0.7093 – accuracy: 0.7897 – val_loss: 0.6052 – val_accuracy: 0.8479
Epoch 8/30
48000/48000 [==============================] – 2s 33us/sample – loss: 0.6937 – accuracy: 0.7941 – val_loss: 0.5915 – val_accuracy: 0.8508
Epoch 9/30
48000/48000 [==============================] – 2s 38us/sample – loss: 0.6821 – accuracy: 0.7976 – val_loss: 0.5795 – val_accuracy: 0.8535
Epoch 10/30
48000/48000 [==============================] – 2s 40us/sample – loss: 0.6676 – accuracy: 0.8010 – val_loss: 0.5674 – val_accuracy: 0.8568
Epoch 11/30
48000/48000 [==============================] – 2s 41us/sample – loss: 0.6568 – accuracy: 0.8060 – val_loss: 0.5564 – val_accuracy: 0.8584
Epoch 12/30
48000/48000 [==============================] – 2s 39us/sample – loss: 0.6431 – accuracy: 0.8104 – val_loss: 0.5465 – val_accuracy: 0.8605
Epoch 13/30
48000/48000 [==============================] – 2s 36us/sample – loss: 0.6337 – accuracy: 0.8117 – val_loss: 0.5364 – val_accuracy: 0.8627
Epoch 14/30
48000/48000 [==============================] – 2s 35us/sample – loss: 0.6233 – accuracy: 0.8170 – val_loss: 0.5274 – val_accuracy: 0.8650
Epoch 15/30
48000/48000 [==============================] – 2s 38us/sample – loss: 0.6145 – accuracy: 0.8184 – val_loss: 0.5187 – val_accuracy: 0.8649
Epoch 16/30
48000/48000 [==============================] – 2s 38us/sample – loss: 0.6032 – accuracy: 0.8241 – val_loss: 0.5105 – val_accuracy: 0.8676
Epoch 17/30
48000/48000 [==============================] – 2s 39us/sample – loss: 0.5949 – accuracy: 0.8232 – val_loss: 0.5028 – val_accuracy: 0.8693
Epoch 18/30
48000/48000 [==============================] – 3s 55us/sample – loss: 0.5877 – accuracy: 0.8280 – val_loss: 0.4955 – val_accuracy: 0.8711
Epoch 19/30
48000/48000 [==============================] – 2s 37us/sample – loss: 0.5796 – accuracy: 0.8297 – val_loss: 0.4887 – val_accuracy: 0.8733
Epoch 20/30
48000/48000 [==============================] – 2s 37us/sample – loss: 0.5692 – accuracy: 0.8324 – val_loss: 0.4820 – val_accuracy: 0.8747
Epoch 21/30
48000/48000 [==============================] – 2s 35us/sample – loss: 0.5648 – accuracy: 0.8348 – val_loss: 0.4759 – val_accuracy: 0.8749
Epoch 22/30
48000/48000 [==============================] – 2s 36us/sample – loss: 0.5575 – accuracy: 0.8357 – val_loss: 0.4700 – val_accuracy: 0.8757
Epoch 23/30
48000/48000 [==============================] – 2s 36us/sample – loss: 0.5505 – accuracy: 0.8384 – val_loss: 0.4642 – val_accuracy: 0.8766
Epoch 24/30
48000/48000 [==============================] – 2s 38us/sample – loss: 0.5461 – accuracy: 0.8410 – val_loss: 0.4585 – val_accuracy: 0.8792
Epoch 25/30
48000/48000 [==============================] – 2s 36us/sample – loss: 0.5383 – accuracy: 0.8418 – val_loss: 0.4533 – val_accuracy: 0.8797
Epoch 26/30
48000/48000 [==============================] – 2s 36us/sample – loss: 0.5347 – accuracy: 0.8431 – val_loss: 0.4488 – val_accuracy: 0.8814
Epoch 27/30
48000/48000 [==============================] – 2s 42us/sample – loss: 0.5290 – accuracy: 0.8450 – val_loss: 0.4437 – val_accuracy: 0.8825
Epoch 28/30
48000/48000 [==============================] – 2s 40us/sample – loss: 0.5224 – accuracy: 0.8460 – val_loss: 0.4392 – val_accuracy: 0.8823
Epoch 29/30
48000/48000 [==============================] – 2s 46us/sample – loss: 0.5184 – accuracy: 0.8477 – val_loss: 0.4349 – val_accuracy: 0.8844
Epoch 30/30
48000/48000 [==============================] – 2s 35us/sample – loss: 0.5126 – accuracy: 0.8487 – val_loss: 0.4307 – val_accuracy: 0.8852
In [100]:
plt.subplot(2,1,1)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()


In [101]:
model4.evaluate(X_test, y_test)

10000/10000 [==============================] – 1s 72us/sample – loss: 0.7423 – accuracy: 0.8114
Out[101]:
[0.7422793316841125, 0.8114]

Training with a larger batch size (200 instead of the Keras default of 32) makes each epoch much faster, but each epoch performs far fewer weight updates, so many more epochs are needed to reach a comparable accuracy (see the short calculation below). In the run above, even 30 epochs were not enough: test accuracy only reaches about 81.1%, well below the earlier models.
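To make the trade-off concrete, here is a quick back-of-the-envelope calculation (an illustrative sketch, assuming the 48,000-sample training split used above):

# Weight updates per epoch for the Keras default batch size (32) vs. batch_size=200
train_samples = 48000
for batch_size in (32, 200):
    print(f'batch_size={batch_size}: {train_samples // batch_size} updates per epoch')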
In [ ]:

In [38]:
import pandas as pd
In [39]:
file = 'C:/Users/subha/Downloads/pima-indians-diabetes.csv'
dataframe = pd.read_csv(file)
dataframe.head()
Out[39]:

   ID  preg_no  plasma  diastolic  triceps  serum  mass  pedigree  age  diabete
0   1        6     148         72       35      0  33.6     0.627   50        1
1   2        1      85         66       29      0  26.6     0.351   31        0
2   3        8     183         64        0      0  23.3     0.672   32        1
3   4        1      89         66       23     94  28.1     0.167   21        0
4   5        0     137         40       35    168  43.1     2.288   33        1
In [42]:
dataframe.describe()
Out[42]:

               ID     preg_no      plasma   diastolic     triceps       serum        mass    pedigree         age     diabete
count  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000  768.000000
mean   384.500000    3.845052  120.894531   69.105469   20.536458   79.799479   31.992578    0.471876   33.240885    0.348958
std    221.846794    3.369578   31.972618   19.355807   15.952218  115.244002    7.884160    0.331329   11.760232    0.476951
min      1.000000    0.000000    0.000000    0.000000    0.000000    0.000000    0.000000    0.078000   21.000000    0.000000
25%    192.750000    1.000000   99.000000   62.000000    0.000000    0.000000   27.300000    0.243750   24.000000    0.000000
50%    384.500000    3.000000  117.000000   72.000000   23.000000   30.500000   32.000000    0.372500   29.000000    0.000000
75%    576.250000    6.000000  140.250000   80.000000   32.000000  127.250000   36.600000    0.626250   41.000000    1.000000
max    768.000000   17.000000  199.000000  122.000000   99.000000  846.000000   67.100000    2.420000   81.000000    1.000000
In [43]:
y = dataframe['diabete']
In [44]:
X = dataframe.drop(['ID', 'diabete'], axis=1)
In [45]:
from sklearn.model_selection import train_test_split
In [67]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1234)
In [68]:
# X_train = (X_train-X_train.mean())/X_train.std()
X_train = (X_train)/X_train.max()
In [69]:
X_train.describe()
Out[69]:

          preg_no      plasma   diastolic     triceps       serum        mass    pedigree         age
count  614.000000  614.000000  614.000000  614.000000  614.000000  614.000000  614.000000  614.000000
mean     0.260369    0.611424    0.566361    0.323044    0.095904    0.475910    0.190773    0.414043
std      0.228201    0.161012    0.160166    0.250125    0.137531    0.117327    0.131156    0.146527
min      0.000000    0.000000    0.000000    0.000000    0.000000    0.000000    0.032231    0.259259
25%      0.066667    0.500000    0.508197    0.000000    0.000000    0.403875    0.098347    0.296296
50%      0.200000    0.590909    0.590164    0.365079    0.042553    0.476900    0.150620    0.358025
75%      0.400000    0.712121    0.655738    0.507937    0.153664    0.543964    0.254029    0.506173
max      1.000000    1.000000    1.000000    1.000000    1.000000    1.000000    1.000000    1.000000
In [70]:
# X_test = (X_test-X_test.mean())/X_test.std()
X_test = (X_test)/X_test.max()
In [71]:
X_test.describe()
Out[71]:

          preg_no      plasma   diastolic     triceps       serum        mass    pedigree         age
count  154.000000  154.000000  154.000000  154.000000  154.000000  154.000000  154.000000  154.000000
mean     0.211994    0.604157    0.652291    0.214876    0.100099    0.562442    0.220079    0.457978
std      0.185092    0.162984    0.176076    0.169067    0.149095    0.138768    0.163284    0.161106
min      0.000000    0.000000    0.000000    0.000000    0.000000    0.000000    0.041219    0.300000
25%      0.058824    0.502513    0.603774    0.000000    0.000000    0.486038    0.109918    0.342857
50%      0.176471    0.587940    0.660377    0.232323    0.000000    0.567190    0.172177    0.400000
75%      0.294118    0.692211    0.754717    0.333333    0.161290    0.640052    0.277587    0.542857
max      1.000000    1.000000    1.000000    1.000000    1.000000    1.000000    1.000000    1.000000

Q5- Design a neural network with 2 hidden layers (hidden layer 1 with 12 nodes and hidden layer 2 with 8 nodes), report the accuracy, and draw the learning curves. Apply dropout to see whether you can get a better result. Use 'adam' as the optimizer.
In [75]:
model_struc = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(8, )),
    tf.keras.layers.Dense(12, activation='sigmoid'),
    tf.keras.layers.Dense(8, activation='sigmoid'),
    tf.keras.layers.Dense(2)
])
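Note that with a tabular input of shape (8,), the Flatten layer is effectively a no-op, and pairing a two-unit output with sparse categorical cross-entropy is one valid way to set up binary classification. An equivalent alternative (shown only as an illustrative sketch; it is not used in the rest of the notebook) is a single sigmoid output trained with binary cross-entropy:

# Alternative binary-classification setup (illustrative only; not used below)
alt_model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(12, activation='sigmoid', input_shape=(8,)),
    tf.keras.layers.Dense(8, activation='sigmoid'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
alt_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])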
In [76]:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
In [82]:
model_struc.compile(optimizer='Adam', loss=loss_fn, metrics=['accuracy'])
model_output = model_struc.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10)

Train on 614 samples, validate on 154 samples
Epoch 1/10
614/614 [==============================] – 1s 1ms/sample – loss: 0.6341 – accuracy: 0.6531 – val_loss: 0.6390 – val_accuracy: 0.6429
Epoch 2/10
614/614 [==============================] – 0s 83us/sample – loss: 0.6321 – accuracy: 0.6531 – val_loss: 0.6387 – val_accuracy: 0.6429
Epoch 3/10
614/614 [==============================] – 0s 86us/sample – loss: 0.6314 – accuracy: 0.6531 – val_loss: 0.6379 – val_accuracy: 0.6429
Epoch 4/10
614/614 [==============================] – 0s 127us/sample – loss: 0.6308 – accuracy: 0.6531 – val_loss: 0.6366 – val_accuracy: 0.6429
Epoch 5/10
614/614 [==============================] – 0s 161us/sample – loss: 0.6298 – accuracy: 0.6531 – val_loss: 0.6357 – val_accuracy: 0.6429
Epoch 6/10
614/614 [==============================] – 0s 153us/sample – loss: 0.6293 – accuracy: 0.6531 – val_loss: 0.6350 – val_accuracy: 0.6429
Epoch 7/10
614/614 [==============================] – 0s 114us/sample – loss: 0.6280 – accuracy: 0.6531 – val_loss: 0.6346 – val_accuracy: 0.6429
Epoch 8/10
614/614 [==============================] – 0s 112us/sample – loss: 0.6269 – accuracy: 0.6531 – val_loss: 0.6336 – val_accuracy: 0.6429
Epoch 9/10
614/614 [==============================] – 0s 143us/sample – loss: 0.6260 – accuracy: 0.6531 – val_loss: 0.6323 – val_accuracy: 0.6429
Epoch 10/10
614/614 [==============================] – 0s 107us/sample – loss: 0.6251 – accuracy: 0.6531 – val_loss: 0.6314 – val_accuracy: 0.6429
In [83]:
plt.subplot(2,1,1)
plt.plot(model_output.history['accuracy'])
plt.plot(model_output.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(model_output.history['loss'])
plt.plot(model_output.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()

The accuracy plot shows two essentially flat, parallel lines: neither training nor validation accuracy improves over the epochs, and both sit at roughly 65%, which is consistent with the network predicting the majority class for every sample (see the quick check below). This suggests the model design, as configured, is poor.
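A quick way to confirm this is to compare the accuracies above against the class distribution; if the network simply predicts the majority (non-diabetic) class for every sample, its accuracy will equal the proportion of that class. A diagnostic sketch (assuming the pandas Series produced by train_test_split above):

# Class proportions; compare against the flat training/validation accuracies above
print(y_train.value_counts(normalize=True))
print(y_test.value_counts(normalize=True))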
In [84]:
model_struc1 = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(8,)),
    tf.keras.layers.Dense(12, activation='sigmoid'),
    tf.keras.layers.Dense(8, activation='sigmoid'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2)
])
In [85]:
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
In [86]:
model_struc1.compile(optimizer='Adam', loss=loss_fn, metrics=['accuracy'])
model_output1 = model_struc1.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10)

Train on 614 samples, validate on 154 samples
Epoch 1/10
614/614 [==============================] – 1s 2ms/sample – loss: 0.8234 – accuracy: 0.3909 – val_loss: 0.7694 – val_accuracy: 0.3571
Epoch 2/10
614/614 [==============================] – 0s 71us/sample – loss: 0.7678 – accuracy: 0.4251 – val_loss: 0.7110 – val_accuracy: 0.3571
Epoch 3/10
614/614 [==============================] – 0s 96us/sample – loss: 0.7156 – accuracy: 0.4805 – val_loss: 0.6762 – val_accuracy: 0.6429
Epoch 4/10
614/614 [==============================] – 0s 159us/sample – loss: 0.6955 – accuracy: 0.5831 – val_loss: 0.6620 – val_accuracy: 0.6429
Epoch 5/10
614/614 [==============================] – 0s 158us/sample – loss: 0.6793 – accuracy: 0.5896 – val_loss: 0.6568 – val_accuracy: 0.6429
Epoch 6/10
614/614 [==============================] – 0s 109us/sample – loss: 0.6785 – accuracy: 0.5847 – val_loss: 0.6556 – val_accuracy: 0.6429
Epoch 7/10
614/614 [==============================] – 0s 99us/sample – loss: 0.6660 – accuracy: 0.6221 – val_loss: 0.6551 – val_accuracy: 0.6429
Epoch 8/10
614/614 [==============================] – 0s 117us/sample – loss: 0.6883 – accuracy: 0.5961 – val_loss: 0.6549 – val_accuracy: 0.6429
Epoch 9/10
614/614 [==============================] – 0s 154us/sample – loss: 0.6745 – accuracy: 0.6205 – val_loss: 0.6546 – val_accuracy: 0.6429
Epoch 10/10
614/614 [==============================] – 0s 130us/sample – loss: 0.6790 – accuracy: 0.6238 – val_loss: 0.6545 – val_accuracy: 0.6429
In [87]:
plt.subplot(2,1,1)
plt.plot(model_output1.history['accuracy'])
plt.plot(model_output1.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.subplot(2,1,2)
plt.plot(model_output1.history['loss'])
plt.plot(model_output1.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()

Adding dropout does not improve the accuracy in this case; validation accuracy remains stuck at 64.29%.
In [ ]: