In [1]:
# Simple K-Means clustering on MNIST
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
from tensorflow.keras.datasets import mnist
import numpy as np

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x = np.concatenate((x_train, x_test))
y = np.concatenate((y_train, y_test))
x = x.reshape((x.shape[0], -1))
x = np.divide(x, 255.)
# 10 clusters
n_clusters = 10
# Run K-Means with 20 random restarts, parallelized over 4 CPUs
# (note: the n_jobs parameter has been removed from KMeans in recent scikit-learn releases)
kmeans = KMeans(n_clusters=n_clusters, n_init=20, n_jobs=4)
# Train K-Means.
y_pred_kmeans = kmeans.fit_predict(x)
# Evaluate the K-Means clustering accuracy.
# Caveat: cluster IDs are arbitrary, so accuracy_score against the raw digit
# labels is only meaningful after matching each cluster to a label.
accuracy = accuracy_score(y, y_pred_kmeans)

ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.

KeyboardInterrupt
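Because K-Means assigns arbitrary cluster IDs, `accuracy_score` on the raw predictions is near chance even for a good clustering. A minimal NumPy sketch of one common fix, mapping each cluster to its most frequent true label (the toy arrays below are illustrative, not MNIST):

```python
import numpy as np

def cluster_accuracy(y_true, y_cluster):
    """Map each cluster ID to its most frequent true label, then score."""
    y_true = np.asarray(y_true)
    y_cluster = np.asarray(y_cluster)
    mapped = np.empty_like(y_true)
    for c in np.unique(y_cluster):
        mask = y_cluster == c
        # majority true label among points assigned to cluster c
        labels, counts = np.unique(y_true[mask], return_counts=True)
        mapped[mask] = labels[np.argmax(counts)]
    return np.mean(mapped == y_true)

# Toy example: cluster 0 mostly holds digit 7, cluster 1 mostly digit 3
y_true    = np.array([7, 7, 7, 3, 3, 3, 3, 7])
y_cluster = np.array([0, 0, 0, 1, 1, 1, 1, 1])
print(cluster_accuracy(y_true, y_cluster))  # 7 of 8 points match -> 0.875
```

A majority-vote mapping is a simple heuristic; the standard evaluation uses an optimal one-to-one assignment (Hungarian algorithm), but the idea is the same.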

In [2]:
# Design autoencoder and decoder
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

from tensorflow.keras.datasets import mnist
import numpy as np

# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats

# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)

# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
In [3]:
# Let's also create a separate encoder model:
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
In [4]:
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
In [5]:
# Compile the model
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
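With pixels scaled to [0, 1], binary_crossentropy treats each of the 784 outputs as an independent Bernoulli probability and averages the per-pixel loss. A quick NumPy illustration of what the loss computes (the target/prediction values below are made up):

```python
import numpy as np

def binary_crossentropy(targets, preds, eps=1e-7):
    """Mean of -[t*log(p) + (1-t)*log(1-p)] over all elements."""
    preds = np.clip(preds, eps, 1 - eps)  # avoid log(0)
    return -np.mean(targets * np.log(preds) + (1 - targets) * np.log(1 - preds))

targets = np.array([0.0, 1.0, 1.0, 0.0])  # "true" pixel intensities
preds   = np.array([0.1, 0.9, 0.8, 0.2])  # reconstructed intensities
loss = binary_crossentropy(targets, preds)
print(round(loss, 4))
```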
In [6]:
(x_train, _), (x_test, _) = mnist.load_data()
# We will normalize all values between 0 and 1 and flatten the 28×28 images into vectors of size 784.
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)

(60000, 784)
(10000, 784)
In [7]:
# Let's train our autoencoder for 50 epochs:
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True, validation_data=(x_test, x_test))

Train on 60000 samples, validate on 10000 samples
Epoch 1/50
60000/60000 [==============================] – 6s 100us/sample – loss: 0.2786 – val_loss: 0.1933
Epoch 2/50
60000/60000 [==============================] – 5s 81us/sample – loss: 0.1725 – val_loss: 0.1539
Epoch 3/50
60000/60000 [==============================] – 5s 82us/sample – loss: 0.1444 – val_loss: 0.1338
Epoch 4/50
60000/60000 [==============================] – 4s 73us/sample – loss: 0.1287 – val_loss: 0.1215
Epoch 5/50
60000/60000 [==============================] – 4s 63us/sample – loss: 0.1184 – val_loss: 0.1132
Epoch 6/50
60000/60000 [==============================] – 4s 66us/sample – loss: 0.1112 – val_loss: 0.1072
Epoch 7/50
60000/60000 [==============================] – 4s 68us/sample – loss: 0.1062 – val_loss: 0.1031
Epoch 8/50
60000/60000 [==============================] – 4s 70us/sample – loss: 0.1024 – val_loss: 0.0999
Epoch 9/50
60000/60000 [==============================] – 4s 67us/sample – loss: 0.0997 – val_loss: 0.0975
Epoch 10/50
60000/60000 [==============================] – 4s 71us/sample – loss: 0.0977 – val_loss: 0.0960
Epoch 11/50
60000/60000 [==============================] – 4s 67us/sample – loss: 0.0964 – val_loss: 0.0947
Epoch 12/50
60000/60000 [==============================] – 4s 68us/sample – loss: 0.0955 – val_loss: 0.0940
Epoch 13/50
60000/60000 [==============================] – 4s 72us/sample – loss: 0.0949 – val_loss: 0.0934
Epoch 14/50
60000/60000 [==============================] – 4s 72us/sample – loss: 0.0945 – val_loss: 0.0932
Epoch 15/50
60000/60000 [==============================] – 5s 80us/sample – loss: 0.0942 – val_loss: 0.0929
Epoch 16/50
60000/60000 [==============================] – 4s 69us/sample – loss: 0.0940 – val_loss: 0.0927
Epoch 17/50
60000/60000 [==============================] – 4s 71us/sample – loss: 0.0938 – val_loss: 0.0925
Epoch 18/50
60000/60000 [==============================] – 5s 77us/sample – loss: 0.0937 – val_loss: 0.0925
Epoch 19/50
60000/60000 [==============================] – 4s 66us/sample – loss: 0.0936 – val_loss: 0.0923
Epoch 20/50
60000/60000 [==============================] – 4s 68us/sample – loss: 0.0935 – val_loss: 0.0923
Epoch 21/50
60000/60000 [==============================] – 4s 72us/sample – loss: 0.0934 – val_loss: 0.0921
Epoch 22/50
60000/60000 [==============================] – 4s 70us/sample – loss: 0.0933 – val_loss: 0.0921
Epoch 23/50
60000/60000 [==============================] – 4s 62us/sample – loss: 0.0932 – val_loss: 0.0920
Epoch 24/50
60000/60000 [==============================] – 4s 63us/sample – loss: 0.0932 – val_loss: 0.0920
Epoch 25/50
60000/60000 [==============================] – 4s 62us/sample – loss: 0.0931 – val_loss: 0.0919
Epoch 26/50
60000/60000 [==============================] – 3s 58us/sample – loss: 0.0931 – val_loss: 0.0918
Epoch 27/50
60000/60000 [==============================] – 4s 70us/sample – loss: 0.0930 – val_loss: 0.0918
Epoch 28/50
60000/60000 [==============================] – 4s 70us/sample – loss: 0.0930 – val_loss: 0.0919
Epoch 29/50
60000/60000 [==============================] – 4s 73us/sample – loss: 0.0930 – val_loss: 0.0918
Epoch 30/50
60000/60000 [==============================] – 6s 98us/sample – loss: 0.0929 – val_loss: 0.0917
Epoch 31/50
60000/60000 [==============================] – 6s 100us/sample – loss: 0.0929 – val_loss: 0.0917
Epoch 32/50
60000/60000 [==============================] – 5s 78us/sample – loss: 0.0929 – val_loss: 0.0917
Epoch 33/50
60000/60000 [==============================] – 5s 77us/sample – loss: 0.0929 – val_loss: 0.0917
Epoch 34/50
60000/60000 [==============================] – 4s 73us/sample – loss: 0.0928 – val_loss: 0.0917
Epoch 35/50
60000/60000 [==============================] – 4s 67us/sample – loss: 0.0928 – val_loss: 0.0917
Epoch 36/50
60000/60000 [==============================] – 4s 70us/sample – loss: 0.0928 – val_loss: 0.0916
Epoch 37/50
60000/60000 [==============================] – 4s 72us/sample – loss: 0.0928 – val_loss: 0.0916
Epoch 38/50
60000/60000 [==============================] – 4s 69us/sample – loss: 0.0928 – val_loss: 0.0916
Epoch 39/50
60000/60000 [==============================] – 4s 70us/sample – loss: 0.0928 – val_loss: 0.0916
Epoch 40/50
60000/60000 [==============================] – 4s 60us/sample – loss: 0.0927 – val_loss: 0.0916
Epoch 41/50
60000/60000 [==============================] – 4s 73us/sample – loss: 0.0927 – val_loss: 0.0916
Epoch 42/50
60000/60000 [==============================] – 5s 76us/sample – loss: 0.0927 – val_loss: 0.0915
Epoch 43/50
60000/60000 [==============================] – 4s 73us/sample – loss: 0.0927 – val_loss: 0.0916
Epoch 44/50
60000/60000 [==============================] – 4s 68us/sample – loss: 0.0927 – val_loss: 0.0915
Epoch 45/50
60000/60000 [==============================] – 4s 62us/sample – loss: 0.0927 – val_loss: 0.0916
Epoch 46/50
60000/60000 [==============================] – 4s 64us/sample – loss: 0.0926 – val_loss: 0.0916
Epoch 47/50
60000/60000 [==============================] – 4s 63us/sample – loss: 0.0927 – val_loss: 0.0915
Epoch 48/50
60000/60000 [==============================] – 4s 62us/sample – loss: 0.0926 – val_loss: 0.0915
Epoch 49/50
60000/60000 [==============================] – 4s 60us/sample – loss: 0.0926 – val_loss: 0.0915
Epoch 50/50
60000/60000 [==============================] – 4s 66us/sample – loss: 0.0926 – val_loss: 0.0914
Out[7]:
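The `fit` call returns a History object whose `.history` dict holds the per-epoch losses, so the curves above can be plotted instead of read from the log. A sketch (the values below are the first five epochs copied from the log, standing in for `hist.history`):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# stand-in for hist.history as returned by autoencoder.fit(...)
history = {
    "loss":     [0.2786, 0.1725, 0.1444, 0.1287, 0.1184],
    "val_loss": [0.1933, 0.1539, 0.1338, 0.1215, 0.1132],
}

epochs = range(1, len(history["loss"]) + 1)
plt.plot(epochs, history["loss"], label="train loss")
plt.plot(epochs, history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.ylabel("binary cross-entropy")
plt.legend()
plt.savefig("loss_curve.png")
```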

In [8]:
# After 50 epochs, the autoencoder seems to reach a stable train/test loss value of about 0.09.
# We can try to visualize the reconstructed inputs and the encoded representations. We will use Matplotlib.
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
In [9]:
# use Matplotlib
import matplotlib.pyplot as plt

n = 10  # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()


In [10]:
# deeper model
input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)

decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
# The single-layer decoder trick no longer applies here: the deep decoder
# spans the last three Dense layers, not just autoencoder.layers[-1].
#decoder_layer = autoencoder.layers[-1]
#decoder = Model(encoded_input, decoder_layer(encoded_input))

autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True, validation_data=(x_test, x_test))

Train on 60000 samples, validate on 10000 samples
Epoch 1/50
60000/60000 [==============================] – 7s 116us/sample – loss: 0.2474 – val_loss: 0.1673
Epoch 2/50
60000/60000 [==============================] – 6s 106us/sample – loss: 0.1491 – val_loss: 0.1351
Epoch 3/50
60000/60000 [==============================] – 9s 148us/sample – loss: 0.1298 – val_loss: 0.1230
Epoch 4/50
60000/60000 [==============================] – 8s 132us/sample – loss: 0.1210 – val_loss: 0.1163
Epoch 5/50
60000/60000 [==============================] – 8s 134us/sample – loss: 0.1154 – val_loss: 0.1124
Epoch 6/50
60000/60000 [==============================] – 8s 133us/sample – loss: 0.1114 – val_loss: 0.1081
Epoch 7/50
60000/60000 [==============================] – 8s 128us/sample – loss: 0.1080 – val_loss: 0.1050
Epoch 8/50
60000/60000 [==============================] – 7s 117us/sample – loss: 0.1052 – val_loss: 0.1028
Epoch 9/50
60000/60000 [==============================] – 7s 121us/sample – loss: 0.1030 – val_loss: 0.1007
Epoch 10/50
60000/60000 [==============================] – 7s 113us/sample – loss: 0.1010 – val_loss: 0.0991
Epoch 11/50
60000/60000 [==============================] – 7s 123us/sample – loss: 0.0993 – val_loss: 0.0976
Epoch 12/50
60000/60000 [==============================] – 8s 126us/sample – loss: 0.0979 – val_loss: 0.0965
Epoch 13/50
60000/60000 [==============================] – 7s 123us/sample – loss: 0.0968 – val_loss: 0.0953
Epoch 14/50
60000/60000 [==============================] – 6s 98us/sample – loss: 0.0958 – val_loss: 0.0944
Epoch 15/50
60000/60000 [==============================] – 6s 97us/sample – loss: 0.0949 – val_loss: 0.0940
Epoch 16/50
60000/60000 [==============================] – 6s 103us/sample – loss: 0.0942 – val_loss: 0.0934
Epoch 17/50
60000/60000 [==============================] – 6s 93us/sample – loss: 0.0935 – val_loss: 0.0925
Epoch 18/50
60000/60000 [==============================] – 6s 97us/sample – loss: 0.0929 – val_loss: 0.0918
Epoch 19/50
60000/60000 [==============================] – 5s 91us/sample – loss: 0.0923 – val_loss: 0.0919
Epoch 20/50
60000/60000 [==============================] – 5s 91us/sample – loss: 0.0918 – val_loss: 0.0906
Epoch 21/50
60000/60000 [==============================] – 6s 93us/sample – loss: 0.0913 – val_loss: 0.0903
Epoch 22/50
60000/60000 [==============================] – 6s 95us/sample – loss: 0.0908 – val_loss: 0.0897
Epoch 23/50
60000/60000 [==============================] – 5s 91us/sample – loss: 0.0904 – val_loss: 0.0895
Epoch 24/50
60000/60000 [==============================] – 6s 99us/sample – loss: 0.0900 – val_loss: 0.0891
Epoch 25/50
60000/60000 [==============================] – 6s 97us/sample – loss: 0.0897 – val_loss: 0.0890
Epoch 26/50
60000/60000 [==============================] – 6s 93us/sample – loss: 0.0893 – val_loss: 0.0886
Epoch 27/50
60000/60000 [==============================] – 6s 95us/sample – loss: 0.0889 – val_loss: 0.0883
Epoch 28/50
60000/60000 [==============================] – 6s 95us/sample – loss: 0.0886 – val_loss: 0.0878
Epoch 29/50
60000/60000 [==============================] – 6s 92us/sample – loss: 0.0881 – val_loss: 0.0873
Epoch 30/50
60000/60000 [==============================] – 6s 94us/sample – loss: 0.0877 – val_loss: 0.0873
Epoch 31/50
60000/60000 [==============================] – 6s 92us/sample – loss: 0.0873 – val_loss: 0.0866
Epoch 32/50
60000/60000 [==============================] – 5s 92us/sample – loss: 0.0869 – val_loss: 0.0864
Epoch 33/50
60000/60000 [==============================] – 6s 96us/sample – loss: 0.0865 – val_loss: 0.0859
Epoch 34/50
60000/60000 [==============================] – 5s 91us/sample – loss: 0.0862 – val_loss: 0.0856
Epoch 35/50
60000/60000 [==============================] – 6s 93us/sample – loss: 0.0859 – val_loss: 0.0857
Epoch 36/50
60000/60000 [==============================] – 6s 94us/sample – loss: 0.0857 – val_loss: 0.0850
Epoch 37/50
60000/60000 [==============================] – 6s 96us/sample – loss: 0.0853 – val_loss: 0.0846
Epoch 38/50
60000/60000 [==============================] – 6s 93us/sample – loss: 0.0851 – val_loss: 0.0847
Epoch 39/50
60000/60000 [==============================] – 6s 98us/sample – loss: 0.0849 – val_loss: 0.0844
Epoch 40/50
60000/60000 [==============================] – 5s 84us/sample – loss: 0.0847 – val_loss: 0.0844
Epoch 41/50
60000/60000 [==============================] – 5s 84us/sample – loss: 0.0845 – val_loss: 0.0839
Epoch 42/50
60000/60000 [==============================] – 6s 98us/sample – loss: 0.0843 – val_loss: 0.0843
Epoch 43/50
60000/60000 [==============================] – 6s 99us/sample – loss: 0.0842 – val_loss: 0.0837
Epoch 44/50
60000/60000 [==============================] – 6s 93us/sample – loss: 0.0840 – val_loss: 0.0837
Epoch 45/50
60000/60000 [==============================] – 6s 96us/sample – loss: 0.0839 – val_loss: 0.0833
Epoch 46/50
60000/60000 [==============================] – 6s 92us/sample – loss: 0.0837 – val_loss: 0.0833
Epoch 47/50
60000/60000 [==============================] – 6s 98us/sample – loss: 0.0836 – val_loss: 0.0833
Epoch 48/50
60000/60000 [==============================] – 7s 111us/sample – loss: 0.0835 – val_loss: 0.0830
Epoch 49/50
60000/60000 [==============================] – 6s 94us/sample – loss: 0.0834 – val_loss: 0.0830
Epoch 50/50
60000/60000 [==============================] – 6s 97us/sample – loss: 0.0833 – val_loss: 0.0829
Out[10]:

In [11]:
encoded_imgs = encoder.predict(x_test)
# Note: `decoder` still belongs to the earlier shallow model, so feed the test
# images through the deep autoencoder end-to-end for reconstructions instead.
decoded_imgs = autoencoder.predict(x_test)

import matplotlib.pyplot as plt

n = 10  # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()


In [13]:
import pandas as pd

# Regression example
def get_data():
    # get train data
    train_data_path = 'C:/My Courses/Spring2020/ANLY535/Lecture13_Regression_Unsupervised/train.csv'
    train = pd.read_csv(train_data_path)

    # get test data
    test_data_path = 'C:/My Courses/Spring2020/ANLY535/Lecture13_Regression_Unsupervised/test.csv'
    test = pd.read_csv(test_data_path)

    return train, test

def get_combined_data():
    # reading train data
    train, test = get_data()
    target = train.SalePrice
    train.drop(['SalePrice'], axis=1, inplace=True)
    combined = pd.concat([train, test])  # DataFrame.append is deprecated in newer pandas
    combined.reset_index(inplace=True)
    combined.drop(['index', 'Id'], inplace=True, axis=1)
    return combined, target

# Load train and test data into pandas DataFrames
train_data, test_data = get_data()

# Combine train and test data to process them together
combined, target = get_combined_data()
In [14]:
combined.describe()
Out[14]:

               count      mean      std     min     25%     50%      75%       max
MSSubClass      2919     57.14    42.52    20.0    20.0    50.0     70.0     190.0
LotFrontage     2433     69.31    23.34    21.0    59.0    68.0     80.0     313.0
LotArea         2919  10168.11  7887.00  1300.0  7478.0  9453.0  11570.0  215245.0
OverallQual     2919      6.09     1.41     1.0     5.0     6.0      7.0      10.0
OverallCond     2919      5.56     1.11     1.0     5.0     5.0      6.0       9.0
YearBuilt       2919   1971.31    30.29  1872.0  1953.5  1973.0   2001.0    2010.0
YearRemodAdd    2919   1984.26    20.89  1950.0  1965.0  1993.0   2004.0    2010.0
MasVnrArea      2896    102.20   179.33     0.0     0.0     0.0    164.0    1600.0
BsmtFinSF1      2918    441.42   455.61     0.0     0.0   368.5    733.0    5644.0
BsmtFinSF2      2918     49.58   169.21     0.0     0.0     0.0      0.0    1526.0
...
GarageArea      2918    472.87   215.39     0.0   320.0   480.0    576.0    1488.0
WoodDeckSF      2919     93.71   126.53     0.0     0.0     0.0    168.0    1424.0
OpenPorchSF     2919     47.49    67.58     0.0     0.0    26.0     70.0     742.0
EnclosedPorch   2919     23.10    64.24     0.0     0.0     0.0      0.0    1012.0
3SsnPorch       2919      2.60    25.19     0.0     0.0     0.0      0.0     508.0
ScreenPorch     2919     16.06    56.18     0.0     0.0     0.0      0.0     576.0
PoolArea        2919      2.25    35.66     0.0     0.0     0.0      0.0     800.0
MiscVal         2919     50.83   567.40     0.0     0.0     0.0      0.0   17000.0
MoSold          2919      6.21     2.71     1.0     4.0     6.0      8.0      12.0
YrSold          2919   2007.79     1.31  2006.0  2007.0  2008.0   2009.0    2010.0

8 rows × 36 columns (20 shown, transposed; values rounded)
In [15]:
# Keep only columns with no missing values
def get_cols_with_no_nans(df, col_type):
    '''
    Arguments :
    df : the dataframe to process
    col_type :
        num : only numerical columns with no NaNs
        no_num : only non-numerical columns with no NaNs
        all : any columns with no NaNs
    '''
    if col_type == 'num':
        predictors = df.select_dtypes(exclude=['object'])
    elif col_type == 'no_num':
        predictors = df.select_dtypes(include=['object'])
    elif col_type == 'all':
        predictors = df
    else:
        print('Error : choose a type (num, no_num, all)')
        return 0
    cols_with_no_nans = []
    for col in predictors.columns:
        if not df[col].isnull().any():
            cols_with_no_nans.append(col)
    return cols_with_no_nans

# Call the function
num_cols = get_cols_with_no_nans(combined, 'num')
cat_cols = get_cols_with_no_nans(combined, 'no_num')
In [16]:
# How many columns we got
print('Number of numerical columns with no nan values :', len(num_cols))
print('Number of non-numerical columns with no nan values :', len(cat_cols))

Number of numerical columns with no nan values : 25
Number of non-numerical columns with no nan values : 20
In [17]:
import matplotlib.pyplot as plt
# Plot the variables
combined = combined[num_cols + cat_cols]
combined.hist(figsize = (12,10))
plt.show()


In [19]:
import seaborn as sb
# How many of features are correlated
train_data = train_data[num_cols + cat_cols]
train_data['Target'] = target

C_mat = train_data.corr()
fig = plt.figure(figsize = (15,15))

sb.heatmap(C_mat, vmax = .8, square = True)
plt.show()
# Roughly 15 features appear strongly correlated with the target


In [20]:
import numpy as np
# We will encode the categorical features using one hot encoding.
def oneHotEncode(df, colNames):
    for col in colNames:
        if df[col].dtype == np.dtype('object'):
            dummies = pd.get_dummies(df[col], prefix=col)
            df = pd.concat([df, dummies], axis=1)

            # drop the encoded column
            df.drop([col], axis=1, inplace=True)
    return df

print('There were {} columns before encoding categorical features'.format(combined.shape[1]))
combined = oneHotEncode(combined, cat_cols)
print('There are {} columns after encoding categorical features'.format(combined.shape[1]))

There were 45 columns before encoding categorical features
There are 149 columns after encoding categorical features
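Encoding train and test together, as above, guarantees both splits end up with identical dummy columns; encoding them separately can produce mismatched column sets when a category occurs in only one split. A small pandas illustration (toy data, not the housing columns):

```python
import pandas as pd

train = pd.DataFrame({"Street": ["Pave", "Pave", "Grvl"]})
test  = pd.DataFrame({"Street": ["Pave", "Pave"]})  # "Grvl" never occurs in test

# Encoded separately: the column sets differ
sep_train = pd.get_dummies(train["Street"], prefix="Street")
sep_test = pd.get_dummies(test["Street"], prefix="Street")
print(sorted(sep_train.columns))  # ['Street_Grvl', 'Street_Pave']
print(sorted(sep_test.columns))   # ['Street_Pave']

# Encoded on the combined frame, then split back: columns match
combined_toy = pd.concat([train, test], ignore_index=True)
enc = pd.get_dummies(combined_toy["Street"], prefix="Street")
enc_train, enc_test = enc[:len(train)], enc[len(train):]
assert list(enc_train.columns) == list(enc_test.columns)
```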
In [21]:
# Split data to train and test
def split_combined():
    global combined
    train = combined[:1460]
    test = combined[1460:]

    return train, test

train, test = split_combined()
In [22]:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras import Sequential

# Use 'relu' as the activation function for the hidden layers
# Use a 'normal' initializer as the kernel_initializer

# Define the output layer with only one node
# Use 'linear' as the activation function for the output layer
NN_model = Sequential()

# The Input Layer :
NN_model.add(Dense(128, kernel_initializer='normal', input_dim=train.shape[1], activation='relu'))

# The Hidden Layers :
NN_model.add(Dense(256, kernel_initializer='normal', activation='relu'))
NN_model.add(Dense(256, kernel_initializer='normal', activation='relu'))
NN_model.add(Dense(256, kernel_initializer='normal', activation='relu'))

# The Output Layer :
NN_model.add(Dense(1, kernel_initializer='normal', activation='linear'))

# Compile the network :
NN_model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['mean_absolute_error'])
NN_model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_8 (Dense) (None, 128) 19200
_________________________________________________________________
dense_9 (Dense) (None, 256) 33024
_________________________________________________________________
dense_10 (Dense) (None, 256) 65792
_________________________________________________________________
dense_11 (Dense) (None, 256) 65792
_________________________________________________________________
dense_12 (Dense) (None, 1) 257
=================================================================
Total params: 184,065
Trainable params: 184,065
Non-trainable params: 0
_________________________________________________________________
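The parameter counts in the summary follow directly from the Dense layer shapes: each layer has in_dim × out_dim weights plus out_dim biases, and the 19,200 figure for the first layer implies the one-hot-encoded input has 149 columns (matching the encoding output above). A quick arithmetic check:

```python
def dense_params(n_in, n_out):
    # weight matrix plus bias vector of a fully connected layer
    return n_in * n_out + n_out

# (input_dim, units) for each Dense layer in NN_model
layer_dims = [(149, 128), (128, 256), (256, 256), (256, 256), (256, 1)]
counts = [dense_params(i, o) for i, o in layer_dims]
print(counts)       # [19200, 33024, 65792, 65792, 257]
print(sum(counts))  # 184065
```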
In [25]:
from tensorflow.keras.callbacks import ModelCheckpoint
# Define a checkpoint to save the best weights
checkpoint_name = 'Models/Weights-{epoch:03d}--{val_loss:.5f}.hdf5'
checkpoint = ModelCheckpoint(checkpoint_name, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
callbacks_list = [checkpoint]
In [26]:
# Train the model
hist = NN_model.fit(train, target, epochs=500, batch_size=32, validation_split = 0.2, callbacks=callbacks_list)

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,

Train on 1168 samples, validate on 292 samples
Epoch 1/500
928/1168 [======================>…….] – ETA: 0s – loss: 146051.1598 – mean_absolute_error: 146051.1562
Epoch 00001: val_loss improved from inf to 54584.22432, saving model to Models/Weights-001–54584.22432.hdf5
1168/1168 [==============================] – 0s 344us/sample – loss: 129288.7532 – mean_absolute_error: 129288.7422 – val_loss: 54584.2243 – val_mean_absolute_error: 54584.2188
Epoch 2/500
960/1168 [=======================>……] – ETA: 0s – loss: 54859.5047 – mean_absolute_error: 54859.5039
Epoch 00002: val_loss improved from 54584.22432 to 51313.87826, saving model to Models/Weights-002–51313.87826.hdf5
1168/1168 [==============================] – 0s 147us/sample – loss: 54104.6077 – mean_absolute_error: 54104.6055 – val_loss: 51313.8783 – val_mean_absolute_error: 51313.8750
Epoch 3/500
960/1168 [=======================>……] – ETA: 0s – loss: 49570.9777 – mean_absolute_error: 49570.9727
Epoch 00003: val_loss improved from 51313.87826 to 44461.62751, saving model to Models/Weights-003–44461.62751.hdf5
1168/1168 [==============================] – 0s 161us/sample – loss: 47747.1279 – mean_absolute_error: 47747.1250 – val_loss: 44461.6275 – val_mean_absolute_error: 44461.6289
Epoch 4/500
864/1168 [=====================>……..] – ETA: 0s – loss: 42580.9243 – mean_absolute_error: 42580.9297
Epoch 00004: val_loss improved from 44461.62751 to 44229.79570, saving model to Models/Weights-004–44229.79570.hdf5
1168/1168 [==============================] – 0s 167us/sample – loss: 42607.3988 – mean_absolute_error: 42607.4023 – val_loss: 44229.7957 – val_mean_absolute_error: 44229.7969
Epoch 5/500
864/1168 [=====================>……..] – ETA: 0s – loss: 41783.7640 – mean_absolute_error: 41783.7578
Epoch 00005: val_loss improved from 44229.79570 to 40252.58353, saving model to Models/Weights-005–40252.58353.hdf5
1168/1168 [==============================] – 0s 162us/sample – loss: 40177.6901 – mean_absolute_error: 40177.6875 – val_loss: 40252.5835 – val_mean_absolute_error: 40252.5898
Epoch 6/500
832/1168 [====================>………] – ETA: 0s – loss: 36664.8377 – mean_absolute_error: 36664.8398
Epoch 00006: val_loss improved from 40252.58353 to 38839.46896, saving model to Models/Weights-006–38839.46896.hdf5
1168/1168 [==============================] – 0s 155us/sample – loss: 37185.8862 – mean_absolute_error: 37185.8828 – val_loss: 38839.4690 – val_mean_absolute_error: 38839.4688
Epoch 7/500
1088/1168 [==========================>…] – ETA: 0s – loss: 35671.6534 – mean_absolute_error: 35671.6562
Epoch 00007: val_loss did not improve from 38839.46896
1168/1168 [==============================] – 0s 174us/sample – loss: 35468.6949 – mean_absolute_error: 35468.6953 – val_loss: 41244.3096 – val_mean_absolute_error: 41244.3086
Epoch 8/500
960/1168 [=======================>……] – ETA: 0s – loss: 36223.2536 – mean_absolute_error: 36223.2539
Epoch 00008: val_loss did not improve from 38839.46896
1168/1168 [==============================] – 0s 128us/sample – loss: 35467.3525 – mean_absolute_error: 35467.3516 – val_loss: 39716.6052 – val_mean_absolute_error: 39716.6016
Epoch 9/500
864/1168 [=====================>……..] – ETA: 0s – loss: 33103.5897 – mean_absolute_error: 33103.5938
Epoch 00009: val_loss improved from 38839.46896 to 36230.32885, saving model to Models/Weights-009–36230.32885.hdf5
1168/1168 [==============================] – 0s 155us/sample – loss: 33412.8778 – mean_absolute_error: 33412.8750 – val_loss: 36230.3288 – val_mean_absolute_error: 36230.3281
Epoch 10/500
800/1168 [===================>……….] – ETA: 0s – loss: 33714.7844 – mean_absolute_error: 33714.7891
Epoch 00010: val_loss did not improve from 36230.32885
1168/1168 [==============================] – 0s 154us/sample – loss: 33791.7134 – mean_absolute_error: 33791.7227 – val_loss: 38403.6402 – val_mean_absolute_error: 38403.6406
Epoch 11/500
864/1168 [=====================>……..] – ETA: 0s – loss: 33022.4662 – mean_absolute_error: 33022.4688
Epoch 00011: val_loss improved from 36230.32885 to 35154.81170, saving model to Models/Weights-011–35154.81170.hdf5
1168/1168 [==============================] – 0s 162us/sample – loss: 32276.3838 – mean_absolute_error: 32276.3828 – val_loss: 35154.8117 – val_mean_absolute_error: 35154.8125
Epoch 12/500
832/1168 [====================>………] – ETA: 0s – loss: 32966.8400 – mean_absolute_error: 32966.8398
Epoch 00012: val_loss improved from 35154.81170 to 34969.50337, saving model to Models/Weights-012–34969.50337.hdf5
1168/1168 [==============================] – 0s 158us/sample – loss: 32284.1195 – mean_absolute_error: 32284.1191 – val_loss: 34969.5034 – val_mean_absolute_error: 34969.5078
Epoch 13/500
800/1168 [===================>……….] – ETA: 0s – loss: 32445.4270 – mean_absolute_error: 32445.4277
Epoch 00013: val_loss did not improve from 34969.50337
1168/1168 [==============================] – 0s 150us/sample – loss: 32502.4418 – mean_absolute_error: 32502.4414 – val_loss: 35306.6573 – val_mean_absolute_error: 35306.6562
Epoch 14/500
992/1168 [========================>…..] – ETA: 0s – loss: 32602.0493 – mean_absolute_error: 32602.0469
Epoch 00014: val_loss did not improve from 34969.50337
1168/1168 [==============================] – 0s 126us/sample – loss: 32089.2049 – mean_absolute_error: 32089.2051 – val_loss: 35491.9555 – val_mean_absolute_error: 35491.9570
Epoch 15/500
992/1168 [========================>…..] – ETA: 0s – loss: 33364.7096 – mean_absolute_error: 33364.7070
Epoch 00015: val_loss improved from 34969.50337 to 34942.64686, saving model to Models/Weights-015–34942.64686.hdf5
1168/1168 [==============================] – 0s 146us/sample – loss: 33336.3233 – mean_absolute_error: 33336.3203 – val_loss: 34942.6469 – val_mean_absolute_error: 34942.6484
Epoch 16/500
896/1168 [======================>…….] – ETA: 0s – loss: 32827.0546 – mean_absolute_error: 32827.0586
Epoch 00016: val_loss did not improve from 34942.64686
1168/1168 [==============================] – 0s 141us/sample – loss: 32958.2473 – mean_absolute_error: 32958.2539 – val_loss: 35692.7600 – val_mean_absolute_error: 35692.7578
Epoch 17/500
960/1168 [=======================>……] – ETA: 0s – loss: 32168.2706 – mean_absolute_error: 32168.2734
Epoch 00017: val_loss improved from 34942.64686 to 34637.96613, saving model to Models/Weights-017–34637.96613.hdf5
(Training log, epochs 18-117 of 500. Only epochs where val_loss improved, and thus a checkpoint was written, are shown; on all other epochs the log reports "val_loss did not improve".)

Epoch 18/500
Epoch 00018: val_loss improved from 34637.96613 to 34434.58695, saving model to Models/Weights-018--34434.58695.hdf5
1168/1168 [==============================] - 0s 148us/sample - loss: 31285.9358 - mean_absolute_error: 31285.9316 - val_loss: 34434.5870 - val_mean_absolute_error: 34434.5859
Epoch 00027: val_loss improved from 34434.58695 to 34221.07906, saving model to Models/Weights-027--34221.07906.hdf5
Epoch 00030: val_loss improved from 34221.07906 to 34174.85713, saving model to Models/Weights-030--34174.85713.hdf5
Epoch 00032: val_loss improved from 34174.85713 to 34168.87623, saving model to Models/Weights-032--34168.87623.hdf5
Epoch 00035: val_loss improved from 34168.87623 to 33926.79778, saving model to Models/Weights-035--33926.79778.hdf5
Epoch 00043: val_loss improved from 33926.79778 to 33620.62051, saving model to Models/Weights-043--33620.62051.hdf5
Epoch 00047: val_loss improved from 33620.62051 to 33556.97164, saving model to Models/Weights-047--33556.97164.hdf5
Epoch 00048: val_loss improved from 33556.97164 to 33212.72999, saving model to Models/Weights-048--33212.72999.hdf5
Epoch 00049: val_loss improved from 33212.72999 to 32958.47983, saving model to Models/Weights-049--32958.47983.hdf5
Epoch 00055: val_loss improved from 32958.47983 to 32506.29752, saving model to Models/Weights-055--32506.29752.hdf5
Epoch 00059: val_loss improved from 32506.29752 to 32277.23767, saving model to Models/Weights-059--32277.23767.hdf5
Epoch 00063: val_loss improved from 32277.23767 to 31938.56095, saving model to Models/Weights-063--31938.56095.hdf5
Epoch 00067: val_loss improved from 31938.56095 to 31095.47790, saving model to Models/Weights-067--31095.47790.hdf5
Epoch 00071: val_loss improved from 31095.47790 to 30943.30383, saving model to Models/Weights-071--30943.30383.hdf5
Epoch 00076: val_loss improved from 30943.30383 to 30483.19004, saving model to Models/Weights-076--30483.19004.hdf5
Epoch 00081: val_loss improved from 30483.19004 to 29034.66757, saving model to Models/Weights-081--29034.66757.hdf5
Epoch 00082: val_loss improved from 29034.66757 to 28778.58505, saving model to Models/Weights-082--28778.58505.hdf5
Epoch 00087: val_loss improved from 28778.58505 to 27940.99189, saving model to Models/Weights-087--27940.99189.hdf5
Epoch 00096: val_loss improved from 27940.99189 to 26100.68311, saving model to Models/Weights-096--26100.68311.hdf5
Epoch 00097: val_loss improved from 26100.68311 to 25760.82379, saving model to Models/Weights-097--25760.82379.hdf5
Epoch 00100: val_loss improved from 25760.82379 to 25423.93140, saving model to Models/Weights-100--25423.93140.hdf5
Epoch 00102: val_loss improved from 25423.93140 to 25299.91856, saving model to Models/Weights-102--25299.91856.hdf5
Epoch 00109: val_loss improved from 25299.91856 to 25195.78286, saving model to Models/Weights-109--25195.78286.hdf5
Epoch 00110: val_loss improved from 25195.78286 to 24048.99358, saving model to Models/Weights-110--24048.99358.hdf5
Epoch 00117: val_loss did not improve from 24048.99358
1168/1168 [==============================] - 0s 122us/sample - loss: 23875.6426 - mean_absolute_error: 23875.6406 - val_loss: 24907.4303 - val_mean_absolute_error: 24907.4297
Epoch 118/500
1088/1168 [==========================>…] – ETA: 0s – loss: 21419.9300 – mean_absolute_error: 21419.9297
Epoch 00118: val_loss improved from 24048.99358 to 23949.07853, saving model to Models/Weights-118–23949.07853.hdf5
1168/1168 [==============================] – 0s 135us/sample – loss: 21298.5195 – mean_absolute_error: 21298.5215 – val_loss: 23949.0785 – val_mean_absolute_error: 23949.0781
Epoch 119/500
928/1168 [======================>…….] – ETA: 0s – loss: 21830.8555 – mean_absolute_error: 21830.8574
Epoch 00119: val_loss improved from 23949.07853 to 23623.56138, saving model to Models/Weights-119–23623.56138.hdf5
1168/1168 [==============================] – 0s 150us/sample – loss: 22368.5957 – mean_absolute_error: 22368.5957 – val_loss: 23623.5614 – val_mean_absolute_error: 23623.5625
Epoch 120/500
992/1168 [========================>…..] – ETA: 0s – loss: 20149.5036 – mean_absolute_error: 20149.5039
Epoch 00120: val_loss did not improve from 23623.56138
1168/1168 [==============================] – 0s 123us/sample – loss: 20081.4499 – mean_absolute_error: 20081.4512 – val_loss: 25220.7721 – val_mean_absolute_error: 25220.7715
Epoch 121/500
1024/1168 [=========================>….] – ETA: 0s – loss: 20658.0835 – mean_absolute_error: 20658.0840
Epoch 00121: val_loss did not improve from 23623.56138
1168/1168 [==============================] – 0s 122us/sample – loss: 20536.2754 – mean_absolute_error: 20536.2773 – val_loss: 25703.7220 – val_mean_absolute_error: 25703.7227
Epoch 122/500
1024/1168 [=========================>….] – ETA: 0s – loss: 21215.4172 – mean_absolute_error: 21215.4199
Epoch 00122: val_loss improved from 23623.56138 to 23430.09346, saving model to Models/Weights-122–23430.09346.hdf5
1168/1168 [==============================] – 0s 141us/sample – loss: 21068.8157 – mean_absolute_error: 21068.8184 – val_loss: 23430.0935 – val_mean_absolute_error: 23430.0918
Epoch 123/500
1024/1168 [=========================>….] – ETA: 0s – loss: 21721.4243 – mean_absolute_error: 21721.4199
Epoch 00123: val_loss did not improve from 23430.09346
1168/1168 [==============================] – 0s 122us/sample – loss: 21798.4437 – mean_absolute_error: 21798.4375 – val_loss: 24153.1636 – val_mean_absolute_error: 24153.1621
Epoch 124/500
1024/1168 [=========================>….] – ETA: 0s – loss: 19182.4017 – mean_absolute_error: 19182.4023
Epoch 00124: val_loss did not improve from 23430.09346
1168/1168 [==============================] – 0s 124us/sample – loss: 19862.6203 – mean_absolute_error: 19862.6191 – val_loss: 24546.9250 – val_mean_absolute_error: 24546.9258
Epoch 125/500
1120/1168 [===========================>..] – ETA: 0s – loss: 19825.7501 – mean_absolute_error: 19825.7480
Epoch 00125: val_loss improved from 23430.09346 to 22990.28673, saving model to Models/Weights-125–22990.28673.hdf5
1168/1168 [==============================] – 0s 199us/sample – loss: 20590.0403 – mean_absolute_error: 20590.0371 – val_loss: 22990.2867 – val_mean_absolute_error: 22990.2852
Epoch 126/500
992/1168 [========================>…..] – ETA: 0s – loss: 24256.8672 – mean_absolute_error: 24256.8691
Epoch 00126: val_loss did not improve from 22990.28673
1168/1168 [==============================] – 0s 123us/sample – loss: 25031.4101 – mean_absolute_error: 25031.4102 – val_loss: 24527.6775 – val_mean_absolute_error: 24527.6777
Epoch 127/500
1024/1168 [=========================>….] – ETA: 0s – loss: 23066.7787 – mean_absolute_error: 23066.7773
Epoch 00127: val_loss did not improve from 22990.28673
1168/1168 [==============================] – 0s 122us/sample – loss: 23180.3557 – mean_absolute_error: 23180.3555 – val_loss: 23835.9401 – val_mean_absolute_error: 23835.9395
Epoch 128/500
896/1168 [======================>…….] – ETA: 0s – loss: 21316.4180 – mean_absolute_error: 21316.4180
Epoch 00128: val_loss did not improve from 22990.28673
1168/1168 [==============================] – 0s 133us/sample – loss: 22259.3757 – mean_absolute_error: 22259.3730 – val_loss: 28891.5201 – val_mean_absolute_error: 28891.5234
Epoch 129/500
1024/1168 [=========================>….] – ETA: 0s – loss: 21610.1424 – mean_absolute_error: 21610.1426
Epoch 00129: val_loss did not improve from 22990.28673
1168/1168 [==============================] – 0s 123us/sample – loss: 21752.3701 – mean_absolute_error: 21752.3711 – val_loss: 23449.6869 – val_mean_absolute_error: 23449.6875
Epoch 130/500
992/1168 [========================>…..] – ETA: 0s – loss: 19681.9045 – mean_absolute_error: 19681.9082
Epoch 00130: val_loss did not improve from 22990.28673
1168/1168 [==============================] – 0s 128us/sample – loss: 19822.5543 – mean_absolute_error: 19822.5566 – val_loss: 23540.5572 – val_mean_absolute_error: 23540.5586
Epoch 131/500
896/1168 [======================>…….] – ETA: 0s – loss: 19941.5317 – mean_absolute_error: 19941.5312
Epoch 00131: val_loss did not improve from 22990.28673
1168/1168 [==============================] – 0s 140us/sample – loss: 19459.6975 – mean_absolute_error: 19459.6953 – val_loss: 24059.8980 – val_mean_absolute_error: 24059.8965
Epoch 132/500
1024/1168 [=========================>….] – ETA: 0s – loss: 20500.9478 – mean_absolute_error: 20500.9492
Epoch 00132: val_loss did not improve from 22990.28673
1168/1168 [==============================] – 0s 120us/sample – loss: 20422.8080 – mean_absolute_error: 20422.8105 – val_loss: 24255.8713 – val_mean_absolute_error: 24255.8711
Epoch 133/500
928/1168 [======================>…….] – ETA: 0s – loss: 20286.0744 – mean_absolute_error: 20286.0742
Epoch 00133: val_loss improved from 22990.28673 to 22736.08631, saving model to Models/Weights-133–22736.08631.hdf5
1168/1168 [==============================] – 0s 160us/sample – loss: 19879.3674 – mean_absolute_error: 19879.3652 – val_loss: 22736.0863 – val_mean_absolute_error: 22736.0859
Epoch 134/500
960/1168 [=======================>……] – ETA: 0s – loss: 19584.4340 – mean_absolute_error: 19584.4355
Epoch 00134: val_loss did not improve from 22736.08631
1168/1168 [==============================] – 0s 129us/sample – loss: 19960.3777 – mean_absolute_error: 19960.3789 – val_loss: 22768.2735 – val_mean_absolute_error: 22768.2734
Epoch 135/500
1088/1168 [==========================>…] – ETA: 0s – loss: 19306.6121 – mean_absolute_error: 19306.6113
Epoch 00135: val_loss did not improve from 22736.08631
1168/1168 [==============================] – 0s 117us/sample – loss: 19856.8466 – mean_absolute_error: 19856.8477 – val_loss: 25743.5274 – val_mean_absolute_error: 25743.5273
Epoch 136/500
992/1168 [========================>…..] – ETA: 0s – loss: 22888.0952 – mean_absolute_error: 22888.0918
Epoch 00136: val_loss improved from 22736.08631 to 22509.13206, saving model to Models/Weights-136–22509.13206.hdf5
1168/1168 [==============================] – 0s 143us/sample – loss: 21882.5016 – mean_absolute_error: 21882.5000 – val_loss: 22509.1321 – val_mean_absolute_error: 22509.1309
Epoch 137/500
1024/1168 [=========================>….] – ETA: 0s – loss: 21698.5780 – mean_absolute_error: 21698.5762
Epoch 00137: val_loss improved from 22509.13206 to 22075.88057, saving model to Models/Weights-137–22075.88057.hdf5
1168/1168 [==============================] – 0s 143us/sample – loss: 21297.3330 – mean_absolute_error: 21297.3320 – val_loss: 22075.8806 – val_mean_absolute_error: 22075.8828
Epoch 138/500
832/1168 [====================>………] – ETA: 0s – loss: 20204.9289 – mean_absolute_error: 20204.9277
Epoch 00138: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 139us/sample – loss: 19574.7814 – mean_absolute_error: 19574.7793 – val_loss: 25334.9347 – val_mean_absolute_error: 25334.9355
Epoch 139/500
1024/1168 [=========================>….] – ETA: 0s – loss: 20024.7159 – mean_absolute_error: 20024.7168
Epoch 00139: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 120us/sample – loss: 19933.3670 – mean_absolute_error: 19933.3672 – val_loss: 24447.5856 – val_mean_absolute_error: 24447.5840
Epoch 140/500
992/1168 [========================>…..] – ETA: 0s – loss: 20165.7422 – mean_absolute_error: 20165.7422
Epoch 00140: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 126us/sample – loss: 19826.8654 – mean_absolute_error: 19826.8672 – val_loss: 24421.6281 – val_mean_absolute_error: 24421.6289
Epoch 141/500
1024/1168 [=========================>….] – ETA: 0s – loss: 20413.7510 – mean_absolute_error: 20413.7520
Epoch 00141: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 132us/sample – loss: 20341.0662 – mean_absolute_error: 20341.0684 – val_loss: 22116.8315 – val_mean_absolute_error: 22116.8320
Epoch 142/500
896/1168 [======================>…….] – ETA: 0s – loss: 18369.8790 – mean_absolute_error: 18369.8809
Epoch 00142: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 142us/sample – loss: 19120.9927 – mean_absolute_error: 19120.9922 – val_loss: 23239.4618 – val_mean_absolute_error: 23239.4629
Epoch 143/500
1056/1168 [==========================>…] – ETA: 0s – loss: 19653.2905 – mean_absolute_error: 19653.2891
Epoch 00143: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 178us/sample – loss: 19486.1513 – mean_absolute_error: 19486.1523 – val_loss: 22572.4516 – val_mean_absolute_error: 22572.4531
Epoch 144/500
864/1168 [=====================>……..] – ETA: 0s – loss: 18582.9372 – mean_absolute_error: 18582.9375
Epoch 00144: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 144us/sample – loss: 19279.1903 – mean_absolute_error: 19279.1895 – val_loss: 22416.9828 – val_mean_absolute_error: 22416.9824
Epoch 145/500
896/1168 [======================>…….] – ETA: 0s – loss: 19888.5622 – mean_absolute_error: 19888.5625
Epoch 00145: val_loss did not improve from 22075.88057
1168/1168 [==============================] – 0s 139us/sample – loss: 20096.3535 – mean_absolute_error: 20096.3555 – val_loss: 22156.8480 – val_mean_absolute_error: 22156.8457
Epoch 146/500
896/1168 [======================>…….] – ETA: 0s – loss: 18793.9495 – mean_absolute_error: 18793.9492
Epoch 00146: val_loss improved from 22075.88057 to 21666.45960, saving model to Models/Weights-146–21666.45960.hdf5
1168/1168 [==============================] – 0s 163us/sample – loss: 18675.3499 – mean_absolute_error: 18675.3516 – val_loss: 21666.4596 – val_mean_absolute_error: 21666.4590
Epoch 147/500
864/1168 [=====================>……..] – ETA: 0s – loss: 19813.6294 – mean_absolute_error: 19813.6289
Epoch 00147: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 143us/sample – loss: 20417.8488 – mean_absolute_error: 20417.8496 – val_loss: 24471.1900 – val_mean_absolute_error: 24471.1895
Epoch 148/500
800/1168 [===================>……….] – ETA: 0s – loss: 24095.1317 – mean_absolute_error: 24095.1328
Epoch 00148: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 152us/sample – loss: 23418.1695 – mean_absolute_error: 23418.1699 – val_loss: 23902.6045 – val_mean_absolute_error: 23902.6055
Epoch 149/500
832/1168 [====================>………] – ETA: 0s – loss: 21772.1105 – mean_absolute_error: 21772.1113
Epoch 00149: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 148us/sample – loss: 20991.2372 – mean_absolute_error: 20991.2363 – val_loss: 25115.5517 – val_mean_absolute_error: 25115.5508
Epoch 150/500
1056/1168 [==========================>…] – ETA: 0s – loss: 20880.6542 – mean_absolute_error: 20880.6562
Epoch 00150: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 189us/sample – loss: 20671.5915 – mean_absolute_error: 20671.5938 – val_loss: 24414.5071 – val_mean_absolute_error: 24414.5078
Epoch 151/500
960/1168 [=======================>……] – ETA: 0s – loss: 19232.8300 – mean_absolute_error: 19232.8320
Epoch 00151: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 198us/sample – loss: 19354.8126 – mean_absolute_error: 19354.8125 – val_loss: 22437.7816 – val_mean_absolute_error: 22437.7812
Epoch 152/500
1088/1168 [==========================>…] – ETA: 0s – loss: 19520.2866 – mean_absolute_error: 19520.2852
Epoch 00152: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 170us/sample – loss: 19612.1080 – mean_absolute_error: 19612.1055 – val_loss: 22419.4082 – val_mean_absolute_error: 22419.4082
Epoch 153/500
960/1168 [=======================>……] – ETA: 0s – loss: 19849.0144 – mean_absolute_error: 19849.0117
Epoch 00153: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 137us/sample – loss: 20214.8688 – mean_absolute_error: 20214.8672 – val_loss: 24436.8417 – val_mean_absolute_error: 24436.8398
Epoch 154/500
896/1168 [======================>…….] – ETA: 0s – loss: 19952.9760 – mean_absolute_error: 19952.9746
Epoch 00154: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 142us/sample – loss: 19737.4842 – mean_absolute_error: 19737.4824 – val_loss: 22191.6927 – val_mean_absolute_error: 22191.6914
Epoch 155/500
832/1168 [====================>………] – ETA: 0s – loss: 18376.6402 – mean_absolute_error: 18376.6387
Epoch 00155: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 153us/sample – loss: 18201.7378 – mean_absolute_error: 18201.7363 – val_loss: 22312.0291 – val_mean_absolute_error: 22312.0293
Epoch 156/500
928/1168 [======================>…….] – ETA: 0s – loss: 19806.6617 – mean_absolute_error: 19806.6602
Epoch 00156: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 139us/sample – loss: 19910.4188 – mean_absolute_error: 19910.4141 – val_loss: 25941.9602 – val_mean_absolute_error: 25941.9609
Epoch 157/500
928/1168 [======================>…….] – ETA: 0s – loss: 21851.5649 – mean_absolute_error: 21851.5684
Epoch 00157: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 140us/sample – loss: 20954.1882 – mean_absolute_error: 20954.1875 – val_loss: 22751.3978 – val_mean_absolute_error: 22751.3965
Epoch 158/500
928/1168 [======================>…….] – ETA: 0s – loss: 18354.8168 – mean_absolute_error: 18354.8145
Epoch 00158: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 128us/sample – loss: 18940.2233 – mean_absolute_error: 18940.2227 – val_loss: 21956.9349 – val_mean_absolute_error: 21956.9336
Epoch 159/500
928/1168 [======================>…….] – ETA: 0s – loss: 18146.0272 – mean_absolute_error: 18146.0254
Epoch 00159: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 141us/sample – loss: 18536.4695 – mean_absolute_error: 18536.4668 – val_loss: 22300.8713 – val_mean_absolute_error: 22300.8711
Epoch 160/500
960/1168 [=======================>……] – ETA: 0s – loss: 18357.5418 – mean_absolute_error: 18357.5410
Epoch 00160: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 130us/sample – loss: 18824.5435 – mean_absolute_error: 18824.5430 – val_loss: 22335.7567 – val_mean_absolute_error: 22335.7578
Epoch 161/500
1088/1168 [==========================>…] – ETA: 0s – loss: 17935.2874 – mean_absolute_error: 17935.2871
Epoch 00161: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 117us/sample – loss: 18230.0467 – mean_absolute_error: 18230.0488 – val_loss: 22687.2400 – val_mean_absolute_error: 22687.2402
Epoch 162/500
1024/1168 [=========================>….] – ETA: 0s – loss: 19314.6831 – mean_absolute_error: 19314.6816
Epoch 00162: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 124us/sample – loss: 19658.9870 – mean_absolute_error: 19658.9844 – val_loss: 23582.7912 – val_mean_absolute_error: 23582.7910
Epoch 163/500
1056/1168 [==========================>…] – ETA: 0s – loss: 19000.3126 – mean_absolute_error: 19000.3125
Epoch 00163: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 120us/sample – loss: 19194.7212 – mean_absolute_error: 19194.7207 – val_loss: 22008.8200 – val_mean_absolute_error: 22008.8184
Epoch 164/500
1024/1168 [=========================>….] – ETA: 0s – loss: 18659.6918 – mean_absolute_error: 18659.6895
Epoch 00164: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 123us/sample – loss: 18577.4239 – mean_absolute_error: 18577.4219 – val_loss: 22718.7215 – val_mean_absolute_error: 22718.7227
Epoch 165/500
960/1168 [=======================>……] – ETA: 0s – loss: 17977.9676 – mean_absolute_error: 17977.9688
Epoch 00165: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 135us/sample – loss: 18463.3787 – mean_absolute_error: 18463.3828 – val_loss: 37545.8113 – val_mean_absolute_error: 37545.8125
Epoch 166/500
960/1168 [=======================>……] – ETA: 0s – loss: 22661.5484 – mean_absolute_error: 22661.5449
Epoch 00166: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 138us/sample – loss: 22098.2380 – mean_absolute_error: 22098.2363 – val_loss: 21720.7644 – val_mean_absolute_error: 21720.7617
Epoch 167/500
800/1168 [===================>……….] – ETA: 0s – loss: 19648.3425 – mean_absolute_error: 19648.3418
Epoch 00167: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 153us/sample – loss: 19088.7970 – mean_absolute_error: 19088.7949 – val_loss: 22364.1132 – val_mean_absolute_error: 22364.1133
Epoch 168/500
1024/1168 [=========================>….] – ETA: 0s – loss: 19813.3383 – mean_absolute_error: 19813.3418
Epoch 00168: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 128us/sample – loss: 19368.5707 – mean_absolute_error: 19368.5742 – val_loss: 21680.4195 – val_mean_absolute_error: 21680.4219
Epoch 169/500
1056/1168 [==========================>…] – ETA: 0s – loss: 17879.1916 – mean_absolute_error: 17879.1895
Epoch 00169: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 120us/sample – loss: 17747.8121 – mean_absolute_error: 17747.8105 – val_loss: 21717.7318 – val_mean_absolute_error: 21717.7324
Epoch 170/500
992/1168 [========================>…..] – ETA: 0s – loss: 18064.1510 – mean_absolute_error: 18064.1523
Epoch 00170: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 126us/sample – loss: 18557.7776 – mean_absolute_error: 18557.7793 – val_loss: 22181.6759 – val_mean_absolute_error: 22181.6758
Epoch 171/500
1024/1168 [=========================>….] – ETA: 0s – loss: 19720.1451 – mean_absolute_error: 19720.1484
Epoch 00171: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 120us/sample – loss: 19503.2294 – mean_absolute_error: 19503.2344 – val_loss: 21962.8538 – val_mean_absolute_error: 21962.8535
Epoch 172/500
1024/1168 [=========================>….] – ETA: 0s – loss: 17606.8132 – mean_absolute_error: 17606.8145
Epoch 00172: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 121us/sample – loss: 17797.3198 – mean_absolute_error: 17797.3184 – val_loss: 22593.2917 – val_mean_absolute_error: 22593.2910
Epoch 173/500
1024/1168 [=========================>….] – ETA: 0s – loss: 17931.2068 – mean_absolute_error: 17931.2051
Epoch 00173: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 122us/sample – loss: 17831.9037 – mean_absolute_error: 17831.9043 – val_loss: 21900.8054 – val_mean_absolute_error: 21900.8066
Epoch 174/500
928/1168 [======================>…….] – ETA: 0s – loss: 17359.3067 – mean_absolute_error: 17359.3086
Epoch 00174: val_loss did not improve from 21666.45960
1168/1168 [==============================] – 0s 130us/sample – loss: 17763.5687 – mean_absolute_error: 17763.5703 – val_loss: 22182.1499 – val_mean_absolute_error: 22182.1484
Epoch 175/500
992/1168 [========================>…..] – ETA: 0s – loss: 18278.6717 – mean_absolute_error: 18278.6738
Epoch 00175: val_loss improved from 21666.45960 to 21445.55988, saving model to Models/Weights-175–21445.55988.hdf5
1168/1168 [==============================] – 0s 144us/sample – loss: 18457.5285 – mean_absolute_error: 18457.5312 – val_loss: 21445.5599 – val_mean_absolute_error: 21445.5625
Epoch 176/500
1056/1168 [==========================>…] – ETA: 0s – loss: 17808.7448 – mean_absolute_error: 17808.7441
Epoch 00176: val_loss did not improve from 21445.55988
1168/1168 [==============================] – 0s 122us/sample – loss: 18149.1385 – mean_absolute_error: 18149.1367 – val_loss: 21936.6283 – val_mean_absolute_error: 21936.6289
Epoch 177/500
768/1168 [==================>………..] – ETA: 0s – loss: 18784.7463 – mean_absolute_error: 18784.7480
Epoch 00177: val_loss did not improve from 21445.55988
1168/1168 [==============================] – 0s 147us/sample – loss: 17722.4225 – mean_absolute_error: 17722.4238 – val_loss: 22755.9389 – val_mean_absolute_error: 22755.9395
Epoch 178/500
768/1168 [==================>………..] – ETA: 0s – loss: 17963.9006 – mean_absolute_error: 17963.9023
Epoch 00178: val_loss improved from 21445.55988 to 20631.49443, saving model to Models/Weights-178–20631.49443.hdf5
1168/1168 [==============================] – 0s 190us/sample – loss: 17987.7644 – mean_absolute_error: 17987.7656 – val_loss: 20631.4944 – val_mean_absolute_error: 20631.4922
Epoch 179/500
1152/1168 [============================>.] – ETA: 0s – loss: 17388.9605 – mean_absolute_error: 17388.9609
Epoch 00179: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 164us/sample – loss: 17421.3617 – mean_absolute_error: 17421.3613 – val_loss: 22063.3021 – val_mean_absolute_error: 22063.3008
Epoch 180/500
800/1168 [===================>……….] – ETA: 0s – loss: 19839.0639 – mean_absolute_error: 19839.0664
Epoch 00180: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 145us/sample – loss: 19409.3890 – mean_absolute_error: 19409.3906 – val_loss: 22611.2677 – val_mean_absolute_error: 22611.2695
Epoch 181/500
800/1168 [===================>……….] – ETA: 0s – loss: 17706.6698 – mean_absolute_error: 17706.6699
Epoch 00181: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 156us/sample – loss: 17567.1855 – mean_absolute_error: 17567.1875 – val_loss: 21238.6232 – val_mean_absolute_error: 21238.6211
Epoch 182/500
960/1168 [=======================>……] – ETA: 0s – loss: 19183.4176 – mean_absolute_error: 19183.4160
Epoch 00182: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 136us/sample – loss: 19098.2191 – mean_absolute_error: 19098.2188 – val_loss: 21494.7002 – val_mean_absolute_error: 21494.7031
Epoch 183/500
928/1168 [======================>…….] – ETA: 0s – loss: 18146.1964 – mean_absolute_error: 18146.1953
Epoch 00183: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 127us/sample – loss: 17874.3248 – mean_absolute_error: 17874.3242 – val_loss: 20787.3895 – val_mean_absolute_error: 20787.3887
Epoch 184/500
960/1168 [=======================>……] – ETA: 0s – loss: 17375.9347 – mean_absolute_error: 17375.9336
Epoch 00184: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 141us/sample – loss: 17474.0987 – mean_absolute_error: 17474.0996 – val_loss: 22985.5092 – val_mean_absolute_error: 22985.5078
Epoch 185/500
960/1168 [=======================>……] – ETA: 0s – loss: 18187.7201 – mean_absolute_error: 18187.7188
Epoch 00185: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 132us/sample – loss: 18273.0552 – mean_absolute_error: 18273.0547 – val_loss: 21852.5378 – val_mean_absolute_error: 21852.5371
Epoch 186/500
1024/1168 [=========================>….] – ETA: 0s – loss: 18095.3817 – mean_absolute_error: 18095.3809
Epoch 00186: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 181us/sample – loss: 18186.4450 – mean_absolute_error: 18186.4434 – val_loss: 26895.1211 – val_mean_absolute_error: 26895.1191
Epoch 187/500
1152/1168 [============================>.] – ETA: 0s – loss: 20272.6115 – mean_absolute_error: 20272.6133
Epoch 00187: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 175us/sample – loss: 20425.7142 – mean_absolute_error: 20425.7148 – val_loss: 25391.6566 – val_mean_absolute_error: 25391.6582
Epoch 188/500
1088/1168 [==========================>…] – ETA: 0s – loss: 20374.7450 – mean_absolute_error: 20374.7441
Epoch 00188: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 173us/sample – loss: 20170.6711 – mean_absolute_error: 20170.6699 – val_loss: 27169.5036 – val_mean_absolute_error: 27169.5059
Epoch 189/500
1088/1168 [==========================>…] – ETA: 0s – loss: 18142.4016 – mean_absolute_error: 18142.4023
Epoch 00189: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 171us/sample – loss: 18290.7011 – mean_absolute_error: 18290.7012 – val_loss: 22035.4797 – val_mean_absolute_error: 22035.4785
Epoch 190/500
992/1168 [========================>…..] – ETA: 0s – loss: 16832.1843 – mean_absolute_error: 16832.1836
Epoch 00190: val_loss did not improve from 20631.49443
1168/1168 [==============================] – 0s 126us/sample – loss: 16827.6151 – mean_absolute_error: 16827.6152 – val_loss: 23532.1576 – val_mean_absolute_error: 23532.1582
Epoch 191/500
992/1168 [========================>…..] – ETA: 0s – loss: 17601.1799 – mean_absolute_error: 17601.1797
Epoch 00191: val_loss improved from 20631.49443 to 20155.00433, saving model to Models/Weights-191–20155.00433.hdf5
1168/1168 [==============================] – 0s 149us/sample – loss: 17674.6929 – mean_absolute_error: 17674.6934 – val_loss: 20155.0043 – val_mean_absolute_error: 20155.0059
Epoch 192/500
1024/1168 [=========================>….] – ETA: 0s – loss: 17979.1756 – mean_absolute_error: 17979.1758
Epoch 00192: val_loss did not improve from 20155.00433
1168/1168 [==============================] – 0s 122us/sample – loss: 17978.0637 – mean_absolute_error: 17978.0625 – val_loss: 22116.0785 – val_mean_absolute_error: 22116.0781
Epoch 193/500
1024/1168 [=========================>….] – ETA: 0s – loss: 17594.3775 – mean_absolute_error: 17594.3750
Epoch 00193: val_loss did not improve from 20155.00433
1168/1168 [==============================] – 0s 123us/sample – loss: 17470.7536 – mean_absolute_error: 17470.7520 – val_loss: 21925.2167 – val_mean_absolute_error: 21925.2168
Epoch 194/500
1056/1168 [==========================>…] – ETA: 0s – loss: 17813.3890 – mean_absolute_error: 17813.3906
Epoch 00194: val_loss did not improve from 20155.00433
1168/1168 [==============================] – 0s 122us/sample – loss: 17855.5724 – mean_absolute_error: 17855.5762 – val_loss: 21620.3046 – val_mean_absolute_error: 21620.3047
Epoch 195/500
1056/1168 [==========================>…] – ETA: 0s – loss: 17941.3406 – mean_absolute_error: 17941.3398
Epoch 00195: val_loss did not improve from 20155.00433
1168/1168 [==============================] – 0s 121us/sample – loss: 17745.1993 – mean_absolute_error: 17745.1992 – val_loss: 21745.9998 – val_mean_absolute_error: 21746.0000
Epoch 196/500
992/1168 [========================>…..] – ETA: 0s – loss: 17841.7636 – mean_absolute_error: 17841.7617
Epoch 00196: val_loss did not improve from 20155.00433
1168/1168 [==============================] - 0s 126us/sample - loss: 17530.8337 - mean_absolute_error: 17530.8320 - val_loss: 22355.5655 - val_mean_absolute_error: 22355.5645
... (epochs 197-232: val_loss did not improve from 20155.00433) ...
Epoch 233/500
Epoch 00233: val_loss improved from 20155.00433 to 20069.28385, saving model to Models/Weights-233-20069.28385.hdf5
1168/1168 [==============================] - 0s 137us/sample - loss: 16115.8319 - mean_absolute_error: 16115.8301 - val_loss: 20069.2838 - val_mean_absolute_error: 20069.2832
... (epochs 234-248: val_loss did not improve from 20069.28385) ...
Epoch 249/500
Epoch 00249: val_loss improved from 20069.28385 to 19552.26584, saving model to Models/Weights-249-19552.26584.hdf5
1168/1168 [==============================] - 0s 173us/sample - loss: 16466.6442 - mean_absolute_error: 16466.6445 - val_loss: 19552.2658 - val_mean_absolute_error: 19552.2676
... (epochs 250-267: val_loss did not improve from 19552.26584) ...
Epoch 268/500
Epoch 00268: val_loss improved from 19552.26584 to 19021.54955, saving model to Models/Weights-268-19021.54955.hdf5
1168/1168 [==============================] - 0s 190us/sample - loss: 16639.0665 - mean_absolute_error: 16639.0664 - val_loss: 19021.5496 - val_mean_absolute_error: 19021.5488
... (epochs 269-299: val_loss did not improve from 19021.54955) ...
Epoch 300/500
1120/1168 [===========================>..] - ETA: 0s - loss: 18564.7002 - mean_absolute_error: 18564.6973
Epoch 00300: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 168us/sample – loss: 18353.6405 – mean_absolute_error: 18353.6387 – val_loss: 23184.6914 – val_mean_absolute_error: 23184.6895
Epoch 301/500
1056/1168 [==========================>…] – ETA: 0s – loss: 16415.8812 – mean_absolute_error: 16415.8828
Epoch 00301: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 188us/sample – loss: 16182.4559 – mean_absolute_error: 16182.4570 – val_loss: 19928.5236 – val_mean_absolute_error: 19928.5215
Epoch 302/500
864/1168 [=====================>……..] – ETA: 0s – loss: 14938.4753 – mean_absolute_error: 14938.4756
Epoch 00302: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 217us/sample – loss: 14988.5488 – mean_absolute_error: 14988.5498 – val_loss: 22334.1454 – val_mean_absolute_error: 22334.1445
Epoch 303/500
832/1168 [====================>………] – ETA: 0s – loss: 17604.1705 – mean_absolute_error: 17604.1680
Epoch 00303: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 148us/sample – loss: 16694.4279 – mean_absolute_error: 16694.4238 – val_loss: 19945.4006 – val_mean_absolute_error: 19945.3984
Epoch 304/500
1120/1168 [===========================>..] – ETA: 0s – loss: 14578.6728 – mean_absolute_error: 14578.6729
Epoch 00304: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 170us/sample – loss: 14781.0041 – mean_absolute_error: 14781.0049 – val_loss: 21344.4728 – val_mean_absolute_error: 21344.4746
Epoch 305/500
800/1168 [===================>……….] – ETA: 0s – loss: 17946.0412 – mean_absolute_error: 17946.0391
Epoch 00305: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 155us/sample – loss: 17176.5767 – mean_absolute_error: 17176.5762 – val_loss: 19942.2400 – val_mean_absolute_error: 19942.2402
Epoch 306/500
960/1168 [=======================>……] – ETA: 0s – loss: 18263.3584 – mean_absolute_error: 18263.3633
Epoch 00306: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 195us/sample – loss: 17947.7809 – mean_absolute_error: 17947.7832 – val_loss: 22813.3359 – val_mean_absolute_error: 22813.3359
Epoch 307/500
1120/1168 [===========================>..] – ETA: 0s – loss: 18194.0981 – mean_absolute_error: 18194.0977
Epoch 00307: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 171us/sample – loss: 18188.1225 – mean_absolute_error: 18188.1230 – val_loss: 20307.9393 – val_mean_absolute_error: 20307.9375
Epoch 308/500
768/1168 [==================>………..] – ETA: 0s – loss: 16342.9684 – mean_absolute_error: 16342.9678
Epoch 00308: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 161us/sample – loss: 16511.4445 – mean_absolute_error: 16511.4414 – val_loss: 21438.8253 – val_mean_absolute_error: 21438.8262
Epoch 309/500
1088/1168 [==========================>…] – ETA: 0s – loss: 15429.0893 – mean_absolute_error: 15429.0879
Epoch 00309: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 175us/sample – loss: 15511.1876 – mean_absolute_error: 15511.1865 – val_loss: 19409.9737 – val_mean_absolute_error: 19409.9746
Epoch 310/500
800/1168 [===================>……….] – ETA: 0s – loss: 15644.7456 – mean_absolute_error: 15644.7471
Epoch 00310: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 160us/sample – loss: 16401.0288 – mean_absolute_error: 16401.0312 – val_loss: 22178.1154 – val_mean_absolute_error: 22178.1172
Epoch 311/500
800/1168 [===================>……….] – ETA: 0s – loss: 15468.2246 – mean_absolute_error: 15468.2236
Epoch 00311: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 155us/sample – loss: 15445.2091 – mean_absolute_error: 15445.2070 – val_loss: 19642.8854 – val_mean_absolute_error: 19642.8867
Epoch 312/500
896/1168 [======================>…….] – ETA: 0s – loss: 15158.9816 – mean_absolute_error: 15158.9824
Epoch 00312: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 156us/sample – loss: 15648.2909 – mean_absolute_error: 15648.2910 – val_loss: 21732.4877 – val_mean_absolute_error: 21732.4883
Epoch 313/500
960/1168 [=======================>……] – ETA: 0s – loss: 15815.3923 – mean_absolute_error: 15815.3916
Epoch 00313: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 243us/sample – loss: 15752.9024 – mean_absolute_error: 15752.9004 – val_loss: 20260.1375 – val_mean_absolute_error: 20260.1367
Epoch 314/500
1024/1168 [=========================>….] – ETA: 0s – loss: 14915.5204 – mean_absolute_error: 14915.5195
Epoch 00314: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 185us/sample – loss: 14860.5902 – mean_absolute_error: 14860.5889 – val_loss: 20059.6154 – val_mean_absolute_error: 20059.6172
Epoch 315/500
896/1168 [======================>…….] – ETA: 0s – loss: 15606.0799 – mean_absolute_error: 15606.0801
Epoch 00315: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 136us/sample – loss: 15546.2348 – mean_absolute_error: 15546.2363 – val_loss: 22224.4467 – val_mean_absolute_error: 22224.4473
Epoch 316/500
1088/1168 [==========================>…] – ETA: 0s – loss: 15926.3848 – mean_absolute_error: 15926.3838
Epoch 00316: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 120us/sample – loss: 15807.5814 – mean_absolute_error: 15807.5801 – val_loss: 22424.5847 – val_mean_absolute_error: 22424.5840
Epoch 317/500
960/1168 [=======================>……] – ETA: 0s – loss: 17969.3337 – mean_absolute_error: 17969.3340
Epoch 00317: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 132us/sample – loss: 17446.8622 – mean_absolute_error: 17446.8613 – val_loss: 19893.0884 – val_mean_absolute_error: 19893.0879
Epoch 318/500
992/1168 [========================>…..] – ETA: 0s – loss: 16374.7144 – mean_absolute_error: 16374.7129
Epoch 00318: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 131us/sample – loss: 16515.0883 – mean_absolute_error: 16515.0879 – val_loss: 20956.5375 – val_mean_absolute_error: 20956.5371
Epoch 319/500
1024/1168 [=========================>….] – ETA: 0s – loss: 16782.5903 – mean_absolute_error: 16782.5938
Epoch 00319: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 122us/sample – loss: 16235.5993 – mean_absolute_error: 16235.6025 – val_loss: 19953.0642 – val_mean_absolute_error: 19953.0625
Epoch 320/500
896/1168 [======================>…….] – ETA: 0s – loss: 15111.1277 – mean_absolute_error: 15111.1260
Epoch 00320: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 136us/sample – loss: 15035.4058 – mean_absolute_error: 15035.4072 – val_loss: 19988.2789 – val_mean_absolute_error: 19988.2773
Epoch 321/500
992/1168 [========================>…..] – ETA: 0s – loss: 15403.9074 – mean_absolute_error: 15403.9072
Epoch 00321: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 121us/sample – loss: 15615.6131 – mean_absolute_error: 15615.6133 – val_loss: 21855.8030 – val_mean_absolute_error: 21855.8027
Epoch 322/500
1056/1168 [==========================>…] – ETA: 0s – loss: 15255.2090 – mean_absolute_error: 15255.2090
Epoch 00322: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 120us/sample – loss: 15369.9876 – mean_absolute_error: 15369.9883 – val_loss: 21530.8035 – val_mean_absolute_error: 21530.8047
Epoch 323/500
1056/1168 [==========================>…] – ETA: 0s – loss: 14993.1332 – mean_absolute_error: 14993.1338
Epoch 00323: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 120us/sample – loss: 15098.2304 – mean_absolute_error: 15098.2314 – val_loss: 22872.3384 – val_mean_absolute_error: 22872.3379
Epoch 324/500
1024/1168 [=========================>….] – ETA: 0s – loss: 15530.0456 – mean_absolute_error: 15530.0469
Epoch 00324: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 123us/sample – loss: 15529.8799 – mean_absolute_error: 15529.8818 – val_loss: 19881.1419 – val_mean_absolute_error: 19881.1426
Epoch 325/500
1056/1168 [==========================>…] – ETA: 0s – loss: 17534.8034 – mean_absolute_error: 17534.8047
Epoch 00325: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 124us/sample – loss: 17775.5622 – mean_absolute_error: 17775.5625 – val_loss: 24894.0337 – val_mean_absolute_error: 24894.0332
Epoch 326/500
896/1168 [======================>…….] – ETA: 0s – loss: 18753.5224 – mean_absolute_error: 18753.5195
Epoch 00326: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 143us/sample – loss: 17768.1002 – mean_absolute_error: 17768.0977 – val_loss: 25733.6027 – val_mean_absolute_error: 25733.6035
Epoch 327/500
1152/1168 [============================>.] – ETA: 0s – loss: 15787.4358 – mean_absolute_error: 15787.4355
Epoch 00327: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 272us/sample – loss: 15856.7580 – mean_absolute_error: 15856.7588 – val_loss: 19551.8092 – val_mean_absolute_error: 19551.8105
Epoch 328/500
832/1168 [====================>………] – ETA: 0s – loss: 14932.0005 – mean_absolute_error: 14932.0000- ETA: 0s – loss: 14282.0821 – mean_absolute_error: 14282.082
Epoch 00328: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 315us/sample – loss: 14568.4967 – mean_absolute_error: 14568.4961 – val_loss: 21351.7682 – val_mean_absolute_error: 21351.7676
Epoch 329/500
1120/1168 [===========================>..] – ETA: 0s – loss: 14677.3046 – mean_absolute_error: 14677.3037
Epoch 00329: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 180us/sample – loss: 14716.9363 – mean_absolute_error: 14716.9346 – val_loss: 22458.2519 – val_mean_absolute_error: 22458.2520
Epoch 330/500
1120/1168 [===========================>..] – ETA: 0s – loss: 16471.9417 – mean_absolute_error: 16471.9434
Epoch 00330: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 181us/sample – loss: 16400.5851 – mean_absolute_error: 16400.5859 – val_loss: 23558.7308 – val_mean_absolute_error: 23558.7285
Epoch 331/500
1088/1168 [==========================>…] – ETA: 0s – loss: 16488.3830 – mean_absolute_error: 16488.3828
Epoch 00331: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 1s 576us/sample – loss: 16418.9693 – mean_absolute_error: 16418.9668 – val_loss: 23887.5446 – val_mean_absolute_error: 23887.5469
Epoch 332/500
1120/1168 [===========================>..] – ETA: 0s – loss: 17315.4756 – mean_absolute_error: 17315.4746
Epoch 00332: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 1s 634us/sample – loss: 17214.4504 – mean_absolute_error: 17214.4512 – val_loss: 22558.6069 – val_mean_absolute_error: 22558.6055
Epoch 333/500
992/1168 [========================>…..] – ETA: 0s – loss: 15654.2718 – mean_absolute_error: 15654.2734
Epoch 00333: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 309us/sample – loss: 16088.4289 – mean_absolute_error: 16088.4316 – val_loss: 23768.9476 – val_mean_absolute_error: 23768.9453
Epoch 334/500
1088/1168 [==========================>…] – ETA: 0s – loss: 16878.6407 – mean_absolute_error: 16878.6406
Epoch 00334: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 241us/sample – loss: 17006.0741 – mean_absolute_error: 17006.0762 – val_loss: 21168.0973 – val_mean_absolute_error: 21168.0996
Epoch 335/500
928/1168 [======================>…….] – ETA: 0s – loss: 16303.8836 – mean_absolute_error: 16303.883 – ETA: 0s – loss: 15356.5684 – mean_absolute_error: 15356.5684
Epoch 00335: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 205us/sample – loss: 15413.3890 – mean_absolute_error: 15413.3867 – val_loss: 23183.8555 – val_mean_absolute_error: 23183.8555
Epoch 336/500
992/1168 [========================>…..] – ETA: 0s – loss: 15133.5436 – mean_absolute_error: 15133.5430
Epoch 00336: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 197us/sample – loss: 14925.7137 – mean_absolute_error: 14925.7119 – val_loss: 21094.3756 – val_mean_absolute_error: 21094.3770
Epoch 337/500
832/1168 [====================>………] – ETA: 0s – loss: 16569.6904 – mean_absolute_error: 16569.6875
Epoch 00337: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 152us/sample – loss: 16936.7284 – mean_absolute_error: 16936.7266 – val_loss: 20448.4828 – val_mean_absolute_error: 20448.4824
Epoch 338/500
832/1168 [====================>………] – ETA: 0s – loss: 15928.8205 – mean_absolute_error: 15928.8193
Epoch 00338: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 155us/sample – loss: 15865.5481 – mean_absolute_error: 15865.5459 – val_loss: 23073.1254 – val_mean_absolute_error: 23073.1250
Epoch 339/500
832/1168 [====================>………] – ETA: 0s – loss: 16937.4749 – mean_absolute_error: 16937.4746
Epoch 00339: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 145us/sample – loss: 16739.6268 – mean_absolute_error: 16739.6270 – val_loss: 20120.0441 – val_mean_absolute_error: 20120.0469
Epoch 340/500
864/1168 [=====================>……..] – ETA: 0s – loss: 15475.4761 – mean_absolute_error: 15475.4746
Epoch 00340: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 154us/sample – loss: 15949.6367 – mean_absolute_error: 15949.6338 – val_loss: 26906.8640 – val_mean_absolute_error: 26906.8633
Epoch 341/500
864/1168 [=====================>……..] – ETA: 0s – loss: 19513.4959 – mean_absolute_error: 19513.4980
Epoch 00341: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 149us/sample – loss: 18729.9658 – mean_absolute_error: 18729.9688 – val_loss: 22793.9219 – val_mean_absolute_error: 22793.9219
Epoch 342/500
928/1168 [======================>…….] – ETA: 0s – loss: 17928.8845 – mean_absolute_error: 17928.8867
Epoch 00342: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 188us/sample – loss: 17457.6031 – mean_absolute_error: 17457.6055 – val_loss: 21034.4485 – val_mean_absolute_error: 21034.4512
Epoch 343/500
960/1168 [=======================>……] – ETA: 0s – loss: 17394.1881 – mean_absolute_error: 17394.1895
Epoch 00343: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 137us/sample – loss: 17011.6332 – mean_absolute_error: 17011.6328 – val_loss: 20211.6056 – val_mean_absolute_error: 20211.6035
Epoch 344/500
928/1168 [======================>…….] – ETA: 0s – loss: 14964.8713 – mean_absolute_error: 14964.8691
Epoch 00344: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 137us/sample – loss: 15106.4549 – mean_absolute_error: 15106.4541 – val_loss: 21448.1360 – val_mean_absolute_error: 21448.1348
Epoch 345/500
960/1168 [=======================>……] – ETA: 0s – loss: 15253.5210 – mean_absolute_error: 15253.5195
Epoch 00345: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 129us/sample – loss: 15161.0075 – mean_absolute_error: 15161.0068 – val_loss: 20558.7120 – val_mean_absolute_error: 20558.7129
Epoch 346/500
896/1168 [======================>…….] – ETA: 0s – loss: 14714.8508 – mean_absolute_error: 14714.8486
Epoch 00346: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 131us/sample – loss: 14758.7189 – mean_absolute_error: 14758.7178 – val_loss: 22725.4251 – val_mean_absolute_error: 22725.4258
Epoch 347/500
1024/1168 [=========================>….] – ETA: 0s – loss: 15348.0690 – mean_absolute_error: 15348.0674
Epoch 00347: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 121us/sample – loss: 15661.5295 – mean_absolute_error: 15661.5273 – val_loss: 24563.7273 – val_mean_absolute_error: 24563.7285
Epoch 348/500
992/1168 [========================>…..] – ETA: 0s – loss: 16913.2380 – mean_absolute_error: 16913.2383
Epoch 00348: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 125us/sample – loss: 16903.9252 – mean_absolute_error: 16903.9238 – val_loss: 20011.8382 – val_mean_absolute_error: 20011.8379
Epoch 349/500
864/1168 [=====================>……..] – ETA: 0s – loss: 14166.5973 – mean_absolute_error: 14166.5996
Epoch 00349: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 137us/sample – loss: 15007.3905 – mean_absolute_error: 15007.3926 – val_loss: 29576.9707 – val_mean_absolute_error: 29576.9688
Epoch 350/500
992/1168 [========================>…..] – ETA: 0s – loss: 16801.2579 – mean_absolute_error: 16801.2578
Epoch 00350: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 124us/sample – loss: 16801.5092 – mean_absolute_error: 16801.5078 – val_loss: 22118.7928 – val_mean_absolute_error: 22118.7930
Epoch 351/500
1024/1168 [=========================>….] – ETA: 0s – loss: 15724.9623 – mean_absolute_error: 15724.9629
Epoch 00351: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 132us/sample – loss: 15914.2795 – mean_absolute_error: 15914.2793 – val_loss: 23899.2353 – val_mean_absolute_error: 23899.2344
Epoch 352/500
896/1168 [======================>…….] – ETA: 0s – loss: 16033.9812 – mean_absolute_error: 16033.9785
Epoch 00352: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 132us/sample – loss: 15926.6021 – mean_absolute_error: 15926.5996 – val_loss: 19911.0517 – val_mean_absolute_error: 19911.0527
Epoch 353/500
832/1168 [====================>………] – ETA: 0s – loss: 15163.5993 – mean_absolute_error: 15163.5996
Epoch 00353: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 140us/sample – loss: 15146.3862 – mean_absolute_error: 15146.3887 – val_loss: 21056.5532 – val_mean_absolute_error: 21056.5527
Epoch 354/500
1152/1168 [============================>.] – ETA: 0s – loss: 14828.3260 – mean_absolute_error: 14828.3262
Epoch 00354: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 156us/sample – loss: 14749.9822 – mean_absolute_error: 14749.9824 – val_loss: 19807.2181 – val_mean_absolute_error: 19807.2188
Epoch 355/500
992/1168 [========================>…..] – ETA: 0s – loss: 14456.4809 – mean_absolute_error: 14456.4805
Epoch 00355: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 153us/sample – loss: 14302.0524 – mean_absolute_error: 14302.0518 – val_loss: 20872.8785 – val_mean_absolute_error: 20872.8809
Epoch 356/500
1024/1168 [=========================>….] – ETA: 0s – loss: 15089.3184 – mean_absolute_error: 15089.3193
Epoch 00356: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 126us/sample – loss: 15210.0742 – mean_absolute_error: 15210.0771 – val_loss: 21819.4739 – val_mean_absolute_error: 21819.4746
Epoch 357/500
960/1168 [=======================>……] – ETA: 0s – loss: 15713.8789 – mean_absolute_error: 15713.8799
Epoch 00357: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 131us/sample – loss: 16825.7533 – mean_absolute_error: 16825.7578 – val_loss: 20172.3342 – val_mean_absolute_error: 20172.3359
Epoch 358/500
1056/1168 [==========================>…] – ETA: 0s – loss: 17026.9816 – mean_absolute_error: 17026.9805
Epoch 00358: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 120us/sample – loss: 16556.3900 – mean_absolute_error: 16556.3906 – val_loss: 19810.5738 – val_mean_absolute_error: 19810.5762
Epoch 359/500
960/1168 [=======================>……] – ETA: 0s – loss: 14319.6381 – mean_absolute_error: 14319.6396
Epoch 00359: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 134us/sample – loss: 14379.2859 – mean_absolute_error: 14379.2881 – val_loss: 22097.9127 – val_mean_absolute_error: 22097.9121
Epoch 360/500
1024/1168 [=========================>….] – ETA: 0s – loss: 15120.0550 – mean_absolute_error: 15120.0557
Epoch 00360: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 124us/sample – loss: 14925.5874 – mean_absolute_error: 14925.5869 – val_loss: 20549.7819 – val_mean_absolute_error: 20549.7852
Epoch 361/500
1088/1168 [==========================>…] – ETA: 0s – loss: 15417.0240 – mean_absolute_error: 15417.0254
Epoch 00361: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 119us/sample – loss: 15751.6813 – mean_absolute_error: 15751.6816 – val_loss: 21726.1744 – val_mean_absolute_error: 21726.1738
Epoch 362/500
960/1168 [=======================>……] – ETA: 0s – loss: 15348.6249 – mean_absolute_error: 15348.6240
Epoch 00362: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 128us/sample – loss: 15134.3548 – mean_absolute_error: 15134.3545 – val_loss: 20437.9143 – val_mean_absolute_error: 20437.9160
Epoch 363/500
992/1168 [========================>…..] – ETA: 0s – loss: 14299.0334 – mean_absolute_error: 14299.0332
Epoch 00363: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 124us/sample – loss: 14921.6041 – mean_absolute_error: 14921.6045 – val_loss: 20761.9821 – val_mean_absolute_error: 20761.9824
Epoch 364/500
992/1168 [========================>…..] – ETA: 0s – loss: 16006.2771 – mean_absolute_error: 16006.2783
Epoch 00364: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 126us/sample – loss: 15768.9410 – mean_absolute_error: 15768.9404 – val_loss: 20392.5676 – val_mean_absolute_error: 20392.5664
Epoch 365/500
960/1168 [=======================>……] – ETA: 0s – loss: 14123.5484 – mean_absolute_error: 14123.5508
Epoch 00365: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 132us/sample – loss: 14802.2682 – mean_absolute_error: 14802.2705 – val_loss: 20529.9502 – val_mean_absolute_error: 20529.9512
Epoch 366/500
992/1168 [========================>…..] – ETA: 0s – loss: 15388.4183 – mean_absolute_error: 15388.4199
Epoch 00366: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 128us/sample – loss: 15316.6379 – mean_absolute_error: 15316.6367 – val_loss: 20741.1024 – val_mean_absolute_error: 20741.1035
Epoch 367/500
1024/1168 [=========================>….] – ETA: 0s – loss: 14909.6963 – mean_absolute_error: 14909.6963
Epoch 00367: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 123us/sample – loss: 14780.0674 – mean_absolute_error: 14780.0654 – val_loss: 20313.8469 – val_mean_absolute_error: 20313.8477
Epoch 368/500
960/1168 [=======================>……] – ETA: 0s – loss: 14881.7781 – mean_absolute_error: 14881.7793
Epoch 00368: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 128us/sample – loss: 14830.6807 – mean_absolute_error: 14830.6797 – val_loss: 21033.6005 – val_mean_absolute_error: 21033.6016
Epoch 369/500
960/1168 [=======================>……] – ETA: 0s – loss: 17600.7660 – mean_absolute_error: 17600.7676
Epoch 00369: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 129us/sample – loss: 17795.1072 – mean_absolute_error: 17795.1074 – val_loss: 21423.5371 – val_mean_absolute_error: 21423.5371
Epoch 370/500
992/1168 [========================>…..] – ETA: 0s – loss: 14250.9241 – mean_absolute_error: 14250.9238
Epoch 00370: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 126us/sample – loss: 14599.1867 – mean_absolute_error: 14599.1846 – val_loss: 19927.1830 – val_mean_absolute_error: 19927.1836
Epoch 371/500
896/1168 [======================>…….] – ETA: 0s – loss: 15492.9236 – mean_absolute_error: 15492.9219
Epoch 00371: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 131us/sample – loss: 16391.2059 – mean_absolute_error: 16391.2051 – val_loss: 22520.9067 – val_mean_absolute_error: 22520.9043
Epoch 372/500
864/1168 [=====================>……..] – ETA: 0s – loss: 17264.2965 – mean_absolute_error: 17264.2949
Epoch 00372: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 154us/sample – loss: 17104.2285 – mean_absolute_error: 17104.2285 – val_loss: 21141.4331 – val_mean_absolute_error: 21141.4336
Epoch 373/500
992/1168 [========================>…..] – ETA: 0s – loss: 18492.4579 – mean_absolute_error: 18492.4609
Epoch 00373: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 124us/sample – loss: 18108.3498 – mean_absolute_error: 18108.3535 – val_loss: 21293.6901 – val_mean_absolute_error: 21293.6875
Epoch 374/500
1056/1168 [==========================>…] – ETA: 0s – loss: 17528.1077 – mean_absolute_error: 17528.1035
Epoch 00374: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 120us/sample – loss: 17517.1538 – mean_absolute_error: 17517.1523 – val_loss: 21316.8897 – val_mean_absolute_error: 21316.8887
Epoch 375/500
1056/1168 [==========================>…] – ETA: 0s – loss: 16033.0815 – mean_absolute_error: 16033.0811
Epoch 00375: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 120us/sample – loss: 16054.4688 – mean_absolute_error: 16054.4678 – val_loss: 21928.1924 – val_mean_absolute_error: 21928.1914
Epoch 376/500
1024/1168 [=========================>….] – ETA: 0s – loss: 15087.9535 – mean_absolute_error: 15087.9531
Epoch 00376: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 126us/sample – loss: 14854.2329 – mean_absolute_error: 14854.2324 – val_loss: 20041.3304 – val_mean_absolute_error: 20041.3301
Epoch 377/500
960/1168 [=======================>……] – ETA: 0s – loss: 14841.4031 – mean_absolute_error: 14841.4043
Epoch 00377: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 135us/sample – loss: 15246.6847 – mean_absolute_error: 15246.6846 – val_loss: 21578.8753 – val_mean_absolute_error: 21578.8750
Epoch 378/500
1024/1168 [=========================>….] – ETA: 0s – loss: 17025.7240 – mean_absolute_error: 17025.7246
Epoch 00378: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 120us/sample – loss: 16824.6958 – mean_absolute_error: 16824.6953 – val_loss: 21896.2646 – val_mean_absolute_error: 21896.2637
Epoch 379/500
928/1168 [======================>…….] – ETA: 0s – loss: 15030.1840 – mean_absolute_error: 15030.1836
Epoch 00379: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 131us/sample – loss: 14902.8983 – mean_absolute_error: 14902.8975 – val_loss: 21383.7381 – val_mean_absolute_error: 21383.7363
Epoch 380/500
1056/1168 [==========================>…] – ETA: 0s – loss: 14267.6779 – mean_absolute_error: 14267.6787
Epoch 00380: val_loss did not improve from 19021.54955
1168/1168 [==============================] - 0s 121us/sample - loss: 14164.1220 - mean_absolute_error: 14164.1221 - val_loss: 20567.6058 - val_mean_absolute_error: 20567.6035
Epoch 381/500
1024/1168 [=========================>....] - ETA: 0s - loss: 14422.5734 - mean_absolute_error: 14422.5732
Epoch 00381: val_loss did not improve from 19021.54955
1168/1168 [==============================] - 0s 121us/sample - loss: 14835.5396 - mean_absolute_error: 14835.5391 - val_loss: 22170.4826 - val_mean_absolute_error: 22170.4824

[Epochs 382-484: identical pattern, omitted. val_loss never improved on the best value of 19021.54955; training loss fluctuated between roughly 13400 and 17700 while val_loss stayed in the 19300-25900 range, at about 0s (120-380us/sample) per epoch.]
Epoch 485/500
896/1168 [======================>…….] – ETA: 0s – loss: 13969.0258 – mean_absolute_error: 13969.0244
Epoch 00485: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 138us/sample – loss: 13965.8748 – mean_absolute_error: 13965.8740 – val_loss: 19908.1933 – val_mean_absolute_error: 19908.1934
Epoch 486/500
928/1168 [======================>…….] – ETA: 0s – loss: 14490.5191 – mean_absolute_error: 14490.5186
Epoch 00486: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 138us/sample – loss: 14479.1797 – mean_absolute_error: 14479.1797 – val_loss: 20195.7300 – val_mean_absolute_error: 20195.7324
Epoch 487/500
864/1168 [=====================>……..] – ETA: 0s – loss: 14005.1832 – mean_absolute_error: 14005.1826
Epoch 00487: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 143us/sample – loss: 15465.2619 – mean_absolute_error: 15465.2607 – val_loss: 29933.1949 – val_mean_absolute_error: 29933.1914
Epoch 488/500
928/1168 [======================>…….] – ETA: 0s – loss: 17983.1120 – mean_absolute_error: 17983.1113
Epoch 00488: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 137us/sample – loss: 17134.4067 – mean_absolute_error: 17134.4062 – val_loss: 20474.9724 – val_mean_absolute_error: 20474.9727
Epoch 489/500
896/1168 [======================>…….] – ETA: 0s – loss: 13382.3732 – mean_absolute_error: 13382.3721
Epoch 00489: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 143us/sample – loss: 13967.5620 – mean_absolute_error: 13967.5605 – val_loss: 23867.4266 – val_mean_absolute_error: 23867.4258
Epoch 490/500
832/1168 [====================>………] – ETA: 0s – loss: 17868.7001 – mean_absolute_error: 17868.6992
Epoch 00490: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 146us/sample – loss: 17102.0193 – mean_absolute_error: 17102.0176 – val_loss: 22157.6575 – val_mean_absolute_error: 22157.6582
Epoch 491/500
800/1168 [===================>……….] – ETA: 0s – loss: 14781.0398 – mean_absolute_error: 14781.0400
Epoch 00491: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 149us/sample – loss: 15119.1851 – mean_absolute_error: 15119.1846 – val_loss: 22109.9049 – val_mean_absolute_error: 22109.9043
Epoch 492/500
864/1168 [=====================>……..] – ETA: 0s – loss: 16073.1453 – mean_absolute_error: 16073.1455
Epoch 00492: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 139us/sample – loss: 16336.1436 – mean_absolute_error: 16336.1455 – val_loss: 21006.9140 – val_mean_absolute_error: 21006.9141
Epoch 493/500
928/1168 [======================>…….] – ETA: 0s – loss: 13231.8747 – mean_absolute_error: 13231.8750
Epoch 00493: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 138us/sample – loss: 13619.7374 – mean_absolute_error: 13619.7373 – val_loss: 26109.9220 – val_mean_absolute_error: 26109.9238
Epoch 494/500
896/1168 [======================>…….] – ETA: 0s – loss: 16279.9845 – mean_absolute_error: 16279.9844
Epoch 00494: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 143us/sample – loss: 16067.6761 – mean_absolute_error: 16067.6748 – val_loss: 23794.1211 – val_mean_absolute_error: 23794.1211
Epoch 495/500
928/1168 [======================>…….] – ETA: 0s – loss: 14325.2337 – mean_absolute_error: 14325.2324
Epoch 00495: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 135us/sample – loss: 14258.5056 – mean_absolute_error: 14258.5049 – val_loss: 20929.3663 – val_mean_absolute_error: 20929.3672
Epoch 496/500
800/1168 [===================>……….] – ETA: 0s – loss: 13857.6488 – mean_absolute_error: 13857.6475
Epoch 00496: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 146us/sample – loss: 14124.6974 – mean_absolute_error: 14124.6973 – val_loss: 20404.9172 – val_mean_absolute_error: 20404.9160
Epoch 497/500
864/1168 [=====================>……..] – ETA: 0s – loss: 14293.5812 – mean_absolute_error: 14293.5811
Epoch 00497: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 143us/sample – loss: 14158.6481 – mean_absolute_error: 14158.6494 – val_loss: 20707.6622 – val_mean_absolute_error: 20707.6602
Epoch 498/500
928/1168 [======================>…….] – ETA: 0s – loss: 13413.4861 – mean_absolute_error: 13413.485 – ETA: 0s – loss: 14169.1065 – mean_absolute_error: 14169.1064
Epoch 00498: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 138us/sample – loss: 14648.7456 – mean_absolute_error: 14648.7451 – val_loss: 21075.6366 – val_mean_absolute_error: 21075.6348
Epoch 499/500
928/1168 [======================>…….] – ETA: 0s – loss: 15687.6749 – mean_absolute_error: 15687.6768
Epoch 00499: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 127us/sample – loss: 15305.0852 – mean_absolute_error: 15305.0859 – val_loss: 22625.7373 – val_mean_absolute_error: 22625.7383
Epoch 500/500
1024/1168 [=========================>….] – ETA: 0s – loss: 14597.8644 – mean_absolute_error: 14597.8652
Epoch 00500: val_loss did not improve from 19021.54955
1168/1168 [==============================] – 0s 127us/sample – loss: 14994.0517 – mean_absolute_error: 14994.0527 – val_loss: 25678.1949 – val_mean_absolute_error: 25678.1953
In [27]:
# Load the weights file of the best model:
weights_file = 'Models/Weights-268--19021.54955.hdf5'  # choose the best checkpoint - yours will have a different number
NN_model.load_weights(weights_file)  # load it
NN_model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['mean_absolute_error'])
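Rather than typing the checkpoint name by hand, the best file can be picked programmatically by parsing the val_loss embedded in each filename. A minimal sketch; the candidate filenames below are illustrative, and the regex assumes the `'Weights-{epoch:03d}--{val_loss:.5f}.hdf5'` pattern used by the ModelCheckpoint callback in this notebook:

```python
import re

def best_checkpoint(paths):
    """Return the checkpoint path with the lowest val_loss encoded in its name."""
    def val_loss(path):
        # Pull the trailing '--<val_loss>.hdf5' number out of the filename.
        return float(re.search(r'--(\d+\.\d+)\.hdf5$', path).group(1))
    return min(paths, key=val_loss)

# Hypothetical filenames for illustration (use e.g. glob.glob('Models/*.hdf5') on disk):
candidates = [
    'Models/Weights-100--21412.60483.hdf5',
    'Models/Weights-268--19021.54955.hdf5',
    'Models/Weights-300--20397.62830.hdf5',
]
print(best_checkpoint(candidates))  # Models/Weights-268--19021.54955.hdf5
```

The returned path can then be passed straight to `NN_model.load_weights(...)`.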
In [28]:
# Make predictions
predictions = NN_model.predict(test)

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,
In [29]:
predictions
Out[29]:
array([[123776.84],
       [235035.56],
       [180865.61],
       ...,
       [172107.81],
       [145937.22],
       [214936.33]], dtype=float32)
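To turn this raw prediction array into a submission-style file, each prediction can be paired with its test-set row id and written out as CSV. A small stdlib sketch; the `Id`/`SalePrice` column names and the id values are assumptions, so adapt them to your actual test data:

```python
import csv
import io

# Illustrative values standing in for predictions.flatten() and the test ids.
preds = [123776.84, 235035.56, 180865.61]
ids = [1461, 1462, 1463]

# Write an in-memory CSV; swap io.StringIO for open('submission.csv', 'w', newline='').
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['Id', 'SalePrice'])
writer.writerows(zip(ids, preds))
print(buf.getvalue())
```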
In [30]:
# plt.style.use('ggplot')
import matplotlib.pyplot as plt

def plot_history(history):
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    x = range(1, len(loss) + 1)

    plt.figure(figsize=(12, 5))
    plt.plot(x, loss, 'b', label='Training loss')
    plt.plot(x, val_loss, 'r', label='Validation loss')
    plt.title('Training and validation loss')
    plt.legend()
    plt.show()

plot_history(hist)


In [32]:
import tensorflow
from tensorflow.keras.callbacks import ModelCheckpoint

early_stop = tensorflow.keras.callbacks.EarlyStopping(monitor='val_loss', patience=30)
checkpoint = ModelCheckpoint('Models/Weights-{epoch:03d}--{val_loss:.5f}.hdf5', monitor='val_loss', save_best_only=True)
callbacks = [early_stop, checkpoint]
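The patience rule behind EarlyStopping is simple to state: stop once `val_loss` has gone `patience` consecutive epochs without improving on its best value so far. A plain-Python sketch of that rule (not the Keras implementation, just the logic):

```python
def stopped_epoch(val_losses, patience):
    """Return the 1-based epoch at which training halts under the patience rule."""
    best = float('inf')
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)  # ran to completion without triggering

# Best loss at epoch 2, then no improvement: stops patience epochs later.
print(stopped_epoch([3.0, 2.0] + [2.5] * 100, patience=30))  # 32
```

This is why the run below halts at epoch 52: the best val_loss lands at epoch 22, and 30 epochs of patience later training stops.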
In [33]:
hist = NN_model.fit(train, target, epochs=500, batch_size=32, validation_split = 0.2, callbacks=callbacks)

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,

Train on 1168 samples, validate on 292 samples
Epoch 1/500
1168/1168 [==============================] - 0s 265us/sample - loss: 19515.0980 - mean_absolute_error: 19515.0977 - val_loss: 21412.6048 - val_mean_absolute_error: 21412.6035
Epoch 2/500
1168/1168 [==============================] - 0s 152us/sample - loss: 16076.9050 - mean_absolute_error: 16076.9062 - val_loss: 21335.5540 - val_mean_absolute_error: 21335.5547
[... epochs 3-21 omitted ...]
Epoch 22/500
1168/1168 [==============================] - 0s 172us/sample - loss: 15532.7071 - mean_absolute_error: 15532.7070 - val_loss: 19213.1375 - val_mean_absolute_error: 19213.1367
[... epochs 23-51 omitted; val_loss never beat the epoch-22 best of 19213.1375, so EarlyStopping (patience=30) halts the run ...]
Epoch 52/500
1168/1168 [==============================] - 0s 126us/sample - loss: 16944.9618 - mean_absolute_error: 16944.9609 - val_loss: 20995.3527 - val_mean_absolute_error: 20995.3535
In [34]:
plot_history(hist)


In [35]:
# Assignment 7 solution
# Make predictions
predictions2 = NN_model.predict(train)

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,
In [ ]: