CS计算机代考程序代写 python Keras ANLY535 HW4

ANLY535 HW4

In [49]:

import tensorflow as tf

from tensorflow import keras

In [50]:

import numpy as np

import matplotlib.pyplot as plt

In [51]:

fashion_mnist = keras.datasets.fashion_mnist

In [52]:

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

In [53]:

class_names = [‘T-shirt/top’, ‘Trouser’, ‘Pullover’, ‘Dress’, ‘Coat’, ‘Sandal’, ‘Shirt’, ‘Sneaker’, ‘Bag’, ‘Ankle boot’]

In [54]:

plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.gca().grid(False)
plt.show()

In [55]:

train_images=train_images/255.0
test_images=test_images/255.0

In [56]:

plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(‘off’)
plt.imshow(train_images[i],cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()

In [57]:

num_pixels = train_images.shape[1] * train_images.shape[2] #28*28 = 784

In [58]:

X_train = train_images.reshape(train_images.shape[0], num_pixels)
X_test = test_images.reshape(test_images.shape[0], num_pixels)

In [59]:

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
Y_test = test_labels

In [34]:

from keras.utils import np_utils

In [60]:

# one hot encode outputs
y_train = np_utils.to_categorical(train_labels )
y_test = np_utils.to_categorical(test_labels )

In [61]:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import optimizers

In [62]:

hidden_nodes = 128
num_classes = y_test.shape[1]

In [68]:

# create model
model = Sequential()
model.add(Dense(64, input_dim= 784, activation=’relu’))
model.add(Dense(64, activation=’relu’))
model.add(Dense(10, activation=’softmax’))
sgd = optimizers.SGD(lr=0.01)

In [69]:

# Compile model
model.compile(loss=’mean_squared_error’, optimizer=sgd, metrics=[‘accuracy’])

In [70]:

nn_simple = model.fit(X_train, y_train, validation_split=0.2, epochs=80, batch_size=200)

Train on 48000 samples, validate on 12000 samples
WARNING:tensorflow:From D:\Python\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/80
48000/48000 [==============================] – 2s 44us/sample – loss: 0.0900 – acc: 0.1237 – val_loss: 0.0900 – val_acc: 0.1442
Epoch 2/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.1582 – val_loss: 0.0900 – val_acc: 0.1781
Epoch 3/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.1971 – val_loss: 0.0900 – val_acc: 0.2002
Epoch 4/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2023 – val_loss: 0.0900 – val_acc: 0.1969
Epoch 5/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.1934 – val_loss: 0.0900 – val_acc: 0.1933
Epoch 6/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.1891 – val_loss: 0.0900 – val_acc: 0.1932
Epoch 7/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.1967 – val_loss: 0.0900 – val_acc: 0.1939
Epoch 8/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.1965 – val_loss: 0.0900 – val_acc: 0.1967
Epoch 9/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.1998 – val_loss: 0.0900 – val_acc: 0.2023
Epoch 10/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2084 – val_loss: 0.0900 – val_acc: 0.2063
Epoch 11/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2101 – val_loss: 0.0900 – val_acc: 0.2103
Epoch 12/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2120 – val_loss: 0.0900 – val_acc: 0.2130
Epoch 13/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2155 – val_loss: 0.0900 – val_acc: 0.2159
Epoch 14/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2187 – val_loss: 0.0900 – val_acc: 0.2177
Epoch 15/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2199 – val_loss: 0.0900 – val_acc: 0.2202
Epoch 16/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2191 – val_loss: 0.0900 – val_acc: 0.2211
Epoch 17/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2200 – val_loss: 0.0900 – val_acc: 0.2210
Epoch 18/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2209 – val_loss: 0.0900 – val_acc: 0.2218
Epoch 19/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2219 – val_loss: 0.0900 – val_acc: 0.2212
Epoch 20/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2214 – val_loss: 0.0900 – val_acc: 0.2207
Epoch 21/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2221 – val_loss: 0.0900 – val_acc: 0.2202
Epoch 22/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2204 – val_loss: 0.0900 – val_acc: 0.2206
Epoch 23/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2217 – val_loss: 0.0900 – val_acc: 0.2211
Epoch 24/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2231 – val_loss: 0.0900 – val_acc: 0.2206
Epoch 25/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2231 – val_loss: 0.0900 – val_acc: 0.2202
Epoch 26/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2239 – val_loss: 0.0900 – val_acc: 0.2200
Epoch 27/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2238 – val_loss: 0.0900 – val_acc: 0.2200
Epoch 28/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.0900 – acc: 0.2217 – val_loss: 0.0900 – val_acc: 0.2202
Epoch 29/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2231 – val_loss: 0.0900 – val_acc: 0.2201
Epoch 30/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.0900 – acc: 0.2235 – val_loss: 0.0900 – val_acc: 0.2196
Epoch 31/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.0900 – acc: 0.2226 – val_loss: 0.0900 – val_acc: 0.2191
Epoch 32/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2218 – val_loss: 0.0900 – val_acc: 0.2187
Epoch 33/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.0900 – acc: 0.2236 – val_loss: 0.0900 – val_acc: 0.2192
Epoch 34/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.0900 – acc: 0.2239 – val_loss: 0.0900 – val_acc: 0.2198
Epoch 35/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.0900 – acc: 0.2218 – val_loss: 0.0900 – val_acc: 0.2208
Epoch 36/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2238 – val_loss: 0.0900 – val_acc: 0.2202
Epoch 37/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2251 – val_loss: 0.0900 – val_acc: 0.2192
Epoch 38/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2208 – val_loss: 0.0900 – val_acc: 0.2188
Epoch 39/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2199 – val_loss: 0.0900 – val_acc: 0.2184
Epoch 40/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2207 – val_loss: 0.0900 – val_acc: 0.2183
Epoch 41/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2222 – val_loss: 0.0900 – val_acc: 0.2173
Epoch 42/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2200 – val_loss: 0.0900 – val_acc: 0.2170
Epoch 43/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2173 – val_loss: 0.0900 – val_acc: 0.2167
Epoch 44/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.0900 – acc: 0.2217 – val_loss: 0.0900 – val_acc: 0.2163
Epoch 45/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.0900 – acc: 0.2195 – val_loss: 0.0900 – val_acc: 0.2163
Epoch 46/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.0900 – acc: 0.2187 – val_loss: 0.0900 – val_acc: 0.2163
Epoch 47/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2188 – val_loss: 0.0900 – val_acc: 0.2170
Epoch 48/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2184 – val_loss: 0.0900 – val_acc: 0.2173
Epoch 49/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2178 – val_loss: 0.0900 – val_acc: 0.2181
Epoch 50/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2188 – val_loss: 0.0900 – val_acc: 0.2179
Epoch 51/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2194 – val_loss: 0.0900 – val_acc: 0.2175
Epoch 52/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2186 – val_loss: 0.0900 – val_acc: 0.2186
Epoch 53/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2214 – val_loss: 0.0900 – val_acc: 0.2186
Epoch 54/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2231 – val_loss: 0.0900 – val_acc: 0.2194
Epoch 55/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2226 – val_loss: 0.0900 – val_acc: 0.2185
Epoch 56/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2203 – val_loss: 0.0900 – val_acc: 0.2185
Epoch 57/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2224 – val_loss: 0.0900 – val_acc: 0.2188
Epoch 58/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2229 – val_loss: 0.0900 – val_acc: 0.2184
Epoch 59/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2183 – val_loss: 0.0900 – val_acc: 0.2184
Epoch 60/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2208 – val_loss: 0.0900 – val_acc: 0.2190
Epoch 61/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2208 – val_loss: 0.0900 – val_acc: 0.2194
Epoch 62/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2229 – val_loss: 0.0900 – val_acc: 0.2197
Epoch 63/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2226 – val_loss: 0.0900 – val_acc: 0.2198
Epoch 64/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2216 – val_loss: 0.0900 – val_acc: 0.2200
Epoch 65/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2223 – val_loss: 0.0900 – val_acc: 0.2200
Epoch 66/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2223 – val_loss: 0.0900 – val_acc: 0.2198
Epoch 67/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2213 – val_loss: 0.0900 – val_acc: 0.2198
Epoch 68/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2224 – val_loss: 0.0900 – val_acc: 0.2206
Epoch 69/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2211 – val_loss: 0.0900 – val_acc: 0.2203
Epoch 70/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2202 – val_loss: 0.0900 – val_acc: 0.2210
Epoch 71/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2226 – val_loss: 0.0900 – val_acc: 0.2203
Epoch 72/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2222 – val_loss: 0.0900 – val_acc: 0.2199
Epoch 73/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2217 – val_loss: 0.0900 – val_acc: 0.2198
Epoch 74/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2206 – val_loss: 0.0900 – val_acc: 0.2195
Epoch 75/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2209 – val_loss: 0.0900 – val_acc: 0.2193
Epoch 76/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2209 – val_loss: 0.0900 – val_acc: 0.2192
Epoch 77/80
48000/48000 [==============================] – 1s 15us/sample – loss: 0.0900 – acc: 0.2194 – val_loss: 0.0900 – val_acc: 0.2187
Epoch 78/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.0900 – acc: 0.2203 – val_loss: 0.0900 – val_acc: 0.2183
Epoch 79/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2212 – val_loss: 0.0900 – val_acc: 0.2178
Epoch 80/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.0900 – acc: 0.2181 – val_loss: 0.0900 – val_acc: 0.2177

In [71]:

scores = model.evaluate(X_test, y_test)

10000/10000 [==============================] – 0s 37us/sample – loss: 0.0900 – acc: 0.2195

In [72]:

print(“Accuracy: %.2f%%” % (scores[1]*100))

Accuracy: 21.95%

In [82]:

model.compile(loss=’categorical_crossentropy’, optimizer=’adam’, metrics=[‘accuracy’])

In [83]:

nn_simple = model.fit(X_train, y_train, validation_split=0.2, epochs=80, batch_size=200)

Train on 48000 samples, validate on 12000 samples
Epoch 1/80
48000/48000 [==============================] – 1s 22us/sample – loss: 0.3193 – acc: 0.8857 – val_loss: 0.3605 – val_acc: 0.8717
Epoch 2/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3193 – acc: 0.8854 – val_loss: 0.3612 – val_acc: 0.8722
Epoch 3/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3171 – acc: 0.8864 – val_loss: 0.3601 – val_acc: 0.8708
Epoch 4/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3160 – acc: 0.8858 – val_loss: 0.3580 – val_acc: 0.8716
Epoch 5/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.3143 – acc: 0.8871 – val_loss: 0.3655 – val_acc: 0.8695
Epoch 6/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3137 – acc: 0.8869 – val_loss: 0.3569 – val_acc: 0.8733
Epoch 7/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3126 – acc: 0.8872 – val_loss: 0.3593 – val_acc: 0.8743
Epoch 8/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3104 – acc: 0.8881 – val_loss: 0.3541 – val_acc: 0.8732
Epoch 9/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3108 – acc: 0.8887 – val_loss: 0.3543 – val_acc: 0.8734
Epoch 10/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3089 – acc: 0.8886 – val_loss: 0.3540 – val_acc: 0.8738
Epoch 11/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3081 – acc: 0.8881 – val_loss: 0.3534 – val_acc: 0.8737
Epoch 12/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3069 – acc: 0.8897 – val_loss: 0.3632 – val_acc: 0.8704
Epoch 13/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3066 – acc: 0.8891 – val_loss: 0.3534 – val_acc: 0.8744
Epoch 14/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3041 – acc: 0.8894 – val_loss: 0.3539 – val_acc: 0.8741
Epoch 15/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3035 – acc: 0.8900 – val_loss: 0.3521 – val_acc: 0.8750
Epoch 16/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3033 – acc: 0.8903 – val_loss: 0.3521 – val_acc: 0.8751
Epoch 17/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3022 – acc: 0.8908 – val_loss: 0.3511 – val_acc: 0.8758
Epoch 18/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3001 – acc: 0.8915 – val_loss: 0.3530 – val_acc: 0.8742
Epoch 19/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2991 – acc: 0.8912 – val_loss: 0.3511 – val_acc: 0.8759
Epoch 20/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2984 – acc: 0.8919 – val_loss: 0.3479 – val_acc: 0.8761
Epoch 21/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2974 – acc: 0.8921 – val_loss: 0.3518 – val_acc: 0.8755
Epoch 22/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2973 – acc: 0.8925 – val_loss: 0.3480 – val_acc: 0.8767
Epoch 23/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2968 – acc: 0.8917 – val_loss: 0.3508 – val_acc: 0.8745
Epoch 24/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2948 – acc: 0.8924 – val_loss: 0.3486 – val_acc: 0.8748
Epoch 25/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.2943 – acc: 0.8932 – val_loss: 0.3498 – val_acc: 0.8752
Epoch 26/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2923 – acc: 0.8940 – val_loss: 0.3498 – val_acc: 0.8758
Epoch 27/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2918 – acc: 0.8936 – val_loss: 0.3463 – val_acc: 0.8771
Epoch 28/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.2905 – acc: 0.8942 – val_loss: 0.3472 – val_acc: 0.8766
Epoch 29/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.2895 – acc: 0.8947 – val_loss: 0.3455 – val_acc: 0.8759
Epoch 30/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2892 – acc: 0.8948 – val_loss: 0.3537 – val_acc: 0.8733
Epoch 31/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.2884 – acc: 0.8948 – val_loss: 0.3481 – val_acc: 0.8742
Epoch 32/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2877 – acc: 0.8947 – val_loss: 0.3439 – val_acc: 0.8783
Epoch 33/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2861 – acc: 0.8950 – val_loss: 0.3434 – val_acc: 0.8778
Epoch 34/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2850 – acc: 0.8958 – val_loss: 0.3468 – val_acc: 0.8764
Epoch 35/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2841 – acc: 0.8965 – val_loss: 0.3495 – val_acc: 0.8737
Epoch 36/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2840 – acc: 0.8963 – val_loss: 0.3456 – val_acc: 0.8770
Epoch 37/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2842 – acc: 0.8959 – val_loss: 0.3419 – val_acc: 0.8776
Epoch 38/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2807 – acc: 0.8974 – val_loss: 0.3449 – val_acc: 0.8761
Epoch 39/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2814 – acc: 0.8976 – val_loss: 0.3449 – val_acc: 0.8770
Epoch 40/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2804 – acc: 0.8979 – val_loss: 0.3433 – val_acc: 0.8784
Epoch 41/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.2783 – acc: 0.8989 – val_loss: 0.3545 – val_acc: 0.8730
Epoch 42/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2784 – acc: 0.8980 – val_loss: 0.3432 – val_acc: 0.8790
Epoch 43/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2775 – acc: 0.8988 – val_loss: 0.3452 – val_acc: 0.8786
Epoch 44/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2774 – acc: 0.8985 – val_loss: 0.3467 – val_acc: 0.8770
Epoch 45/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2758 – acc: 0.8990 – val_loss: 0.3427 – val_acc: 0.8788
Epoch 46/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2754 – acc: 0.8995 – val_loss: 0.3432 – val_acc: 0.8767
Epoch 47/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2739 – acc: 0.9006 – val_loss: 0.3423 – val_acc: 0.8783
Epoch 48/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2735 – acc: 0.8993 – val_loss: 0.3431 – val_acc: 0.8783
Epoch 49/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2732 – acc: 0.9002 – val_loss: 0.3408 – val_acc: 0.8786
Epoch 50/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2715 – acc: 0.9008 – val_loss: 0.3429 – val_acc: 0.8770
Epoch 51/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2713 – acc: 0.9010 – val_loss: 0.3412 – val_acc: 0.8778
Epoch 52/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2701 – acc: 0.9009 – val_loss: 0.3467 – val_acc: 0.8772
Epoch 53/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.2699 – acc: 0.9010 – val_loss: 0.3389 – val_acc: 0.8802
Epoch 54/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.2679 – acc: 0.9024 – val_loss: 0.3424 – val_acc: 0.8766
Epoch 55/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2686 – acc: 0.9015 – val_loss: 0.3400 – val_acc: 0.8790
Epoch 56/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2669 – acc: 0.9016 – val_loss: 0.3395 – val_acc: 0.8803
Epoch 57/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2668 – acc: 0.9022 – val_loss: 0.3405 – val_acc: 0.8806
Epoch 58/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2664 – acc: 0.9018 – val_loss: 0.3416 – val_acc: 0.8794
Epoch 59/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.2654 – acc: 0.9034 – val_loss: 0.3419 – val_acc: 0.8760
Epoch 60/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.2637 – acc: 0.9034 – val_loss: 0.3422 – val_acc: 0.8790
Epoch 61/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2640 – acc: 0.9036 – val_loss: 0.3376 – val_acc: 0.8801
Epoch 62/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2632 – acc: 0.9039 – val_loss: 0.3435 – val_acc: 0.8777
Epoch 63/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2621 – acc: 0.9049 – val_loss: 0.3367 – val_acc: 0.8793
Epoch 64/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2609 – acc: 0.9044 – val_loss: 0.3418 – val_acc: 0.8790
Epoch 65/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2616 – acc: 0.9035 – val_loss: 0.3364 – val_acc: 0.8803
Epoch 66/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2605 – acc: 0.9048 – val_loss: 0.3397 – val_acc: 0.8800
Epoch 67/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2599 – acc: 0.9048 – val_loss: 0.3466 – val_acc: 0.8756
Epoch 68/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2593 – acc: 0.9046 – val_loss: 0.3362 – val_acc: 0.8801
Epoch 69/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.2584 – acc: 0.9054 – val_loss: 0.3379 – val_acc: 0.8797
Epoch 70/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.2582 – acc: 0.9055 – val_loss: 0.3413 – val_acc: 0.8771
Epoch 71/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2571 – acc: 0.9063 – val_loss: 0.3446 – val_acc: 0.8765
Epoch 72/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2563 – acc: 0.9061 – val_loss: 0.3350 – val_acc: 0.8817
Epoch 73/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.2548 – acc: 0.9071 – val_loss: 0.3410 – val_acc: 0.8778
Epoch 74/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.2557 – acc: 0.9068 – val_loss: 0.3360 – val_acc: 0.8812
Epoch 75/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2532 – acc: 0.9076 – val_loss: 0.3388 – val_acc: 0.8816
Epoch 76/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2536 – acc: 0.9071 – val_loss: 0.3412 – val_acc: 0.8783
Epoch 77/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.2537 – acc: 0.9070 – val_loss: 0.3365 – val_acc: 0.8817
Epoch 78/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.2508 – acc: 0.9083 – val_loss: 0.3374 – val_acc: 0.8815
Epoch 79/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.2506 – acc: 0.9080 – val_loss: 0.3413 – val_acc: 0.8775
Epoch 80/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.2510 – acc: 0.9082 – val_loss: 0.3366 – val_acc: 0.8798

In [84]:

scores = model.evaluate(X_test, y_test)

10000/10000 [==============================] – 0s 35us/sample – loss: 0.3635 – acc: 0.8742

In [85]:

print(“Accuracy: %.2f%%” % (scores[1]*100))

Accuracy: 87.42%

In [86]:

plt.subplot(2,1,1)
plt.plot(nn_simple.history[‘acc’])
plt.plot(nn_simple.history[‘val_acc’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(nn_simple.history[‘loss’])
plt.plot(nn_simple.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()

In [77]:

model.compile(loss=’categorical_crossentropy’, optimizer=sgd, metrics=[‘accuracy’])

In [78]:

nn_simple = model.fit(X_train, y_train, validation_split=0.2, epochs=80, batch_size=200)

Train on 48000 samples, validate on 12000 samples
Epoch 1/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.3196 – acc: 0.8848 – val_loss: 0.3601 – val_acc: 0.8720
Epoch 2/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3193 – acc: 0.8853 – val_loss: 0.3619 – val_acc: 0.8712
Epoch 3/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3192 – acc: 0.8849 – val_loss: 0.3636 – val_acc: 0.8705
Epoch 4/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3190 – acc: 0.8840 – val_loss: 0.3605 – val_acc: 0.8712
Epoch 5/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3191 – acc: 0.8855 – val_loss: 0.3609 – val_acc: 0.8708
Epoch 6/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3186 – acc: 0.8842 – val_loss: 0.3608 – val_acc: 0.8707
Epoch 7/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3192 – acc: 0.8851 – val_loss: 0.3598 – val_acc: 0.8714
Epoch 8/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3187 – acc: 0.8864 – val_loss: 0.3702 – val_acc: 0.8692
Epoch 9/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3186 – acc: 0.8852 – val_loss: 0.3606 – val_acc: 0.8715
Epoch 10/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3198 – acc: 0.8861 – val_loss: 0.3622 – val_acc: 0.8708
Epoch 11/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3188 – acc: 0.8850 – val_loss: 0.3606 – val_acc: 0.8718
Epoch 12/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3188 – acc: 0.8859 – val_loss: 0.3596 – val_acc: 0.8710
Epoch 13/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3200 – acc: 0.8849 – val_loss: 0.3599 – val_acc: 0.8715
Epoch 14/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3187 – acc: 0.8861 – val_loss: 0.3614 – val_acc: 0.8704
Epoch 15/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3186 – acc: 0.8851 – val_loss: 0.3600 – val_acc: 0.8706
Epoch 16/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3196 – acc: 0.8847 – val_loss: 0.3595 – val_acc: 0.8717
Epoch 17/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3188 – acc: 0.8845 – val_loss: 0.3596 – val_acc: 0.8716
Epoch 18/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3194 – acc: 0.8846 – val_loss: 0.3601 – val_acc: 0.8704
Epoch 19/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3187 – acc: 0.8860 – val_loss: 0.3609 – val_acc: 0.8709
Epoch 20/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3195 – acc: 0.8855 – val_loss: 0.3704 – val_acc: 0.8683
Epoch 21/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3193 – acc: 0.8842 – val_loss: 0.3611 – val_acc: 0.8714
Epoch 22/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3188 – acc: 0.8856 – val_loss: 0.3600 – val_acc: 0.8722
Epoch 23/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3195 – acc: 0.8853 – val_loss: 0.3606 – val_acc: 0.8713
Epoch 24/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3186 – acc: 0.8856 – val_loss: 0.3595 – val_acc: 0.8722
Epoch 25/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3187 – acc: 0.8854 – val_loss: 0.3588 – val_acc: 0.8710
Epoch 26/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3184 – acc: 0.8857 – val_loss: 0.3618 – val_acc: 0.8714
Epoch 27/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.3194 – acc: 0.8841 – val_loss: 0.3599 – val_acc: 0.8716
Epoch 28/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3184 – acc: 0.8853 – val_loss: 0.3596 – val_acc: 0.8719
Epoch 29/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3188 – acc: 0.8851 – val_loss: 0.3586 – val_acc: 0.8722
Epoch 30/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3188 – acc: 0.8855 – val_loss: 0.3616 – val_acc: 0.8705
Epoch 31/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3190 – acc: 0.8852 – val_loss: 0.3606 – val_acc: 0.8721
Epoch 32/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3182 – acc: 0.8849 – val_loss: 0.3687 – val_acc: 0.8683
Epoch 33/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3190 – acc: 0.8861 – val_loss: 0.3601 – val_acc: 0.8708
Epoch 34/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3190 – acc: 0.8852 – val_loss: 0.3597 – val_acc: 0.8716
Epoch 35/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.3191 – acc: 0.8855 – val_loss: 0.3603 – val_acc: 0.8715
Epoch 36/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3189 – acc: 0.8855 – val_loss: 0.3612 – val_acc: 0.8719
Epoch 37/80
48000/48000 [==============================] – 1s 22us/sample – loss: 0.3192 – acc: 0.8857 – val_loss: 0.3598 – val_acc: 0.8725
Epoch 38/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3188 – acc: 0.8854 – val_loss: 0.3591 – val_acc: 0.8720
Epoch 39/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.3190 – acc: 0.8857 – val_loss: 0.3642 – val_acc: 0.8698
Epoch 40/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.3192 – acc: 0.8856 – val_loss: 0.3603 – val_acc: 0.8705
Epoch 41/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3189 – acc: 0.8846 – val_loss: 0.3597 – val_acc: 0.8712
Epoch 42/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3186 – acc: 0.8859 – val_loss: 0.3606 – val_acc: 0.8705
Epoch 43/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3188 – acc: 0.8856 – val_loss: 0.3589 – val_acc: 0.8724
Epoch 44/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3184 – acc: 0.8858 – val_loss: 0.3627 – val_acc: 0.8714
Epoch 45/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3186 – acc: 0.8855 – val_loss: 0.3613 – val_acc: 0.8717
Epoch 46/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3189 – acc: 0.8845 – val_loss: 0.3593 – val_acc: 0.8717
Epoch 47/80
48000/48000 [==============================] – 1s 21us/sample – loss: 0.3183 – acc: 0.8857 – val_loss: 0.3615 – val_acc: 0.8708
Epoch 48/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.3185 – acc: 0.8860 – val_loss: 0.3633 – val_acc: 0.8692
Epoch 49/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3189 – acc: 0.8846 – val_loss: 0.3593 – val_acc: 0.8714
Epoch 50/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3194 – acc: 0.8852 – val_loss: 0.3586 – val_acc: 0.8725
Epoch 51/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3183 – acc: 0.8856 – val_loss: 0.3597 – val_acc: 0.8715
Epoch 52/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.3188 – acc: 0.8856 – val_loss: 0.3598 – val_acc: 0.8719
Epoch 53/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3187 – acc: 0.8866 – val_loss: 0.3595 – val_acc: 0.8716
Epoch 54/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3190 – acc: 0.8857 – val_loss: 0.3596 – val_acc: 0.8722
Epoch 55/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3187 – acc: 0.8853 – val_loss: 0.3610 – val_acc: 0.8714
Epoch 56/80
48000/48000 [==============================] – 1s 20us/sample – loss: 0.3187 – acc: 0.8858 – val_loss: 0.3621 – val_acc: 0.8700
Epoch 57/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3191 – acc: 0.8853 – val_loss: 0.3612 – val_acc: 0.8716
Epoch 58/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3177 – acc: 0.8867 – val_loss: 0.3611 – val_acc: 0.8711
Epoch 59/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3184 – acc: 0.8846 – val_loss: 0.3630 – val_acc: 0.8695
Epoch 60/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3186 – acc: 0.8854 – val_loss: 0.3600 – val_acc: 0.8717
Epoch 61/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3185 – acc: 0.8852 – val_loss: 0.3587 – val_acc: 0.8719
Epoch 62/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3186 – acc: 0.8853 – val_loss: 0.3608 – val_acc: 0.8708
Epoch 63/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3182 – acc: 0.8863 – val_loss: 0.3604 – val_acc: 0.8713
Epoch 64/80
48000/48000 [==============================] – 1s 18us/sample – loss: 0.3187 – acc: 0.8851 – val_loss: 0.3623 – val_acc: 0.8702
Epoch 65/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3186 – acc: 0.8853 – val_loss: 0.3626 – val_acc: 0.8712
Epoch 66/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3182 – acc: 0.8855 – val_loss: 0.3626 – val_acc: 0.8706
Epoch 67/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3182 – acc: 0.8856 – val_loss: 0.3650 – val_acc: 0.8688
Epoch 68/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3184 – acc: 0.8862 – val_loss: 0.3609 – val_acc: 0.8698
Epoch 69/80
48000/48000 [==============================] – 1s 19us/sample – loss: 0.3189 – acc: 0.8847 – val_loss: 0.3598 – val_acc: 0.8712
Epoch 70/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3175 – acc: 0.8857 – val_loss: 0.3598 – val_acc: 0.8722
Epoch 71/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3176 – acc: 0.8854 – val_loss: 0.3605 – val_acc: 0.8721
Epoch 72/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3181 – acc: 0.8852 – val_loss: 0.3598 – val_acc: 0.8705
Epoch 73/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3177 – acc: 0.8857 – val_loss: 0.3596 – val_acc: 0.8721
Epoch 74/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3179 – acc: 0.8854 – val_loss: 0.3600 – val_acc: 0.8717
Epoch 75/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3190 – acc: 0.8846 – val_loss: 0.3607 – val_acc: 0.8707
Epoch 76/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3184 – acc: 0.8859 – val_loss: 0.3625 – val_acc: 0.8709
Epoch 77/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3178 – acc: 0.8863 – val_loss: 0.3596 – val_acc: 0.8717
Epoch 78/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3177 – acc: 0.8856 – val_loss: 0.3615 – val_acc: 0.8708
Epoch 79/80
48000/48000 [==============================] – 1s 16us/sample – loss: 0.3185 – acc: 0.8855 – val_loss: 0.3600 – val_acc: 0.8699
Epoch 80/80
48000/48000 [==============================] – 1s 17us/sample – loss: 0.3180 – acc: 0.8853 – val_loss: 0.3598 – val_acc: 0.8725

In [79]:

scores = model.evaluate(X_test, y_test)

10000/10000 [==============================] – 0s 35us/sample – loss: 0.3858 – acc: 0.8626

In [80]:

print(“Accuracy: %.2f%%” % (scores[1]*100))

Accuracy: 86.26%

In [81]:

plt.subplot(2,1,1)
plt.plot(nn_simple.history[‘acc’])
plt.plot(nn_simple.history[‘val_acc’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(nn_simple.history[‘loss’])
plt.plot(nn_simple.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()

In [ ]: