Questions & Answers

Why does training-set performance deteriorate dramatically?

I am training a binary classifier that distinguishes disease from non-disease.

During training, the loss decreased and AUC and accuracy increased.

But after a certain epoch, the training loss increased and AUC and accuracy decreased.

I don't understand why training performance degrades after a certain epoch.

I used a standard 1D CNN model and training setup; details below:

[Image: model summary table]

[Image: training process output]

I have already tried:

  1. shuffling batches
  2. introducing class weights
  3. changing the loss (binary_crossentropy → BinaryFocalLoss)
  4. changing the learning rate
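For reference, the class weights in step 2 can be computed by hand from the label frequencies. Below is a minimal sketch, assuming the Keras-style convention that weights average to 1 over the samples; the function name is illustrative, and the 87.25%/12.75% split is taken from the accuracy figure mentioned later in the comments:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights, normalized so the average
    weight over all samples equals 1 (Keras-style convention)."""
    counts = Counter(labels)
    n = len(labels)
    k = len(counts)
    return {cls: n / (k * cnt) for cls, cnt in counts.items()}

# Illustrative split: 87.25% negatives vs 12.75% positives.
labels = [0] * 8725 + [1] * 1275
weights = balanced_class_weights(labels)
# The minority class receives the larger weight: weights[1] > weights[0].
```

A dictionary in this shape can be passed to `Model.fit(..., class_weight=weights)` in Keras.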
Answers (2):

A few questions for you going forward.

  1. Do the training and validation accuracy keep dropping if you just let it run for, say, 100 epochs? That is definitely something I would try.
  2. Which optimizer are you using? SGD? Adam?
  3. How large is your dropout? Maybe the value is too large. Try without it and check whether the behavior is still the same.

It is probably the optimizer.

Since you do not seem to augment your data (which could be an issue if augmentation accidentally breaks some label associations), each epoch should see similar gradients. My guess is that, at this point in the optimization process, the learning rate (and thus the update step) is not adjusted properly: instead of progressing further into the local optimum, the optimizer oversteps the minimum, decreasing both training and validation performance.

This is an intuitive explanation and the next things I would try are:

  • Scheduling the learning rate
  • Using a more sophisticated optimizer (starting with Adam if you are not already using it)
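The learning-rate scheduling suggested above can be as simple as a step decay. Here is a minimal sketch (the function name and the halving-every-10-epochs parameters are illustrative assumptions, not from the thread):

```python
def step_decay(epoch, base_lr=1e-3, drop=0.5, epochs_per_drop=10):
    """Multiply the learning rate by `drop` every `epochs_per_drop`
    epochs, starting from `base_lr`."""
    return base_lr * (drop ** (epoch // epochs_per_drop))

# Epochs 0-9 use 1e-3, epochs 10-19 use 5e-4, and so on.
```

In Keras, a function with this signature can be wrapped in `keras.callbacks.LearningRateScheduler` and passed to `Model.fit`.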
2023-01-11 09:10:51
  1. There are two cases: (1) constant training accuracy and recall (recall = 0, accuracy = 0.8725, i.e. the proportion of one class in the data); (2) as you said, training accuracy and recall dropped after a certain epoch (20-30).
  2. I used Adam (lr = 0.001). I already tried changing the learning rate: (1) decreasing the learning rate; (2) scheduling the learning rate (it increases again after a certain epoch).
2023-01-11 09:10:51
How large is your dropout? Maybe the value is too large. Try without dropout first and check whether the behavior is still the same.

Your model is overfitting. This is why your accuracy increases and then begins decreasing. You need to implement early stopping so that training stops at the epoch with the best results. You should also add dropout layers.
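The early-stopping logic suggested here amounts to tracking the best validation loss and halting once it stops improving. A minimal framework-free sketch (class and parameter names are illustrative assumptions):

```python
class EarlyStopper:
    """Stop training when the monitored loss has not improved for
    `patience` consecutive epochs; remember the best epoch seen."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.best_epoch = -1
        self.wait = 0

    def update(self, epoch, val_loss):
        """Record this epoch's loss; return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.best_epoch = epoch
            self.wait = 0
            return False
        self.wait += 1
        return self.wait >= self.patience
```

In Keras the equivalent is `keras.callbacks.EarlyStopping(patience=..., restore_best_weights=True)`, which also rolls the weights back to the best epoch.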

2023-01-11 09:10:51
Apart from overfitting, I couldn't understand why training performance decreases at a certain epoch. Might it be a problem with the model? I know that overfitting only affects validation and test performance. I don't think this is a common situation; please only provide information about the change in training performance.
2023-01-11 09:10:51
Hmm, so you are saying that training accuracy is decreasing after a certain epoch, as well as validation and test accuracy?