[Figure: training output with Learn = 0.001 (not 1)]
keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
| Learning rate | Test loss | Test accuracy |
|---|---|---|
| 3e-05 | 0.0690871921693 | 0.9801 |
| 0.0001 | 0.0280504023324 | 0.9915 |
| 0.0003 | 0.0197399869517 | 0.9936 |
| 0.001 | 0.0218230394037 | 0.9942 |
| 0.003 | 0.0266497306848 | 0.9933 |
| 0.01 | 7.57344366913 | 0.5143 |
With the default 0.001 we seem to be close to the optimum.
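For reference, the table above can be reproduced with a simple sweep over the learning rate. A minimal sketch: `build_model()` is a hypothetical helper standing in for the (unspecified) MNIST convnet used here, the preprocessed MNIST arrays are assumed to be loaded already, and the batch size and epoch count are assumptions.

```python
from keras.optimizers import Adam

# build_model() and the x_train/y_train/x_test/y_test arrays are assumed;
# only the learning rate of Adam is varied between runs.
for lr in [3e-05, 0.0001, 0.0003, 0.001, 0.003, 0.01]:
    model = build_model()
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(lr=lr),
                  metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=128, epochs=12, verbose=0,
              validation_data=(x_test, y_test))
    score = model.evaluate(x_test, y_test, verbose=0)
    print('Learn =', lr, 'Test loss:', score[0], 'Test accuracy:', score[1])
```

Zooming in around 0.001: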
| Learning rate | Test loss | Test accuracy |
|---|---|---|
| 0.0008 | 0.0185411315141 | 0.9943 |
| 0.001 | 0.0218230394037 | 0.9942 |
| 0.002 | 0.0207599511368 | 0.9944 |
| 0.003 | 0.0214822151026 | 0.994 |
There seems to be little difference in accuracy around 0.001. However, the loss is a little lower at a learning rate of 0.0008. Let's see if I can repeat that. I have now seeded the pseudo-random number generators, so the results should be repeatable.
| Learning rate | Test loss | Test accuracy |
|---|---|---|
| 0.0008 | 0.0182288939081 | 0.9944 |
| 0.0008 | 0.0185519629256 | 0.9945 |
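The repeatability is just a matter of fixing the seeds before the model is built. A sketch, assuming a TensorFlow 1.x backend (the seed values are arbitrary, and GPU kernels may still introduce small differences):

```python
import random
import numpy as np
import tensorflow as tf

# Fix the pseudo-random generators so that weight initialisation and
# data shuffling are the same on every run.
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)  # TF 1.x API, matching the old keras.optimizers.Adam(lr=...) style
```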
A BatchNormalization layer seems to add little. It was placed after the Flatten layer, before the first Dense layer. Placing it after the first conv layer also makes the performance worse.
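For illustration, the two placements tried would look roughly like this, assuming a model along the lines of the standard Keras MNIST convnet (the post does not show its exact architecture, so the layer sizes are assumptions):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, BatchNormalization

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
# Placement 2: BatchNormalization directly after the first conv layer
# model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
# Placement 1: BatchNormalization after Flatten, before the first Dense layer
model.add(BatchNormalization())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
```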