Loss: nan #9
Comments
Hi @uiloatoat,
Maybe you can change `mixed_precision: true` to `mixed_precision: false` in the `config.yaml` file.
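For reference, the change would look something like this; the exact position of the key inside `config.yaml` is an assumption about this repo's config layout:

```yaml
# config.yaml (sketch; other keys omitted)
mixed_precision: false  # was: true — disable AMP to avoid the NaN loss
```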
Thanks to @ziwei-cui's advice, the model can now converge. My GPU is a 3090, and after turning off mixed precision I had to reduce the batch size to 16. However, the trained model only achieved 0.6499 bPQ and 0.4835 mPQ, which differs from the paper. I only changed the …
Hi @xiazhi1,
I didn't change any settings, but after training for about 10 epochs the loss becomes NaN.
Have you encountered this situation during training? Can you suggest a solution? Thank you.
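One general mitigation (not specific to this repo) is to guard the training loop so that a NaN loss stops the run immediately instead of corrupting later epochs and checkpoints. A minimal sketch in plain Python, where `losses` is a hypothetical stand-in for the per-epoch values a real training loop would compute:

```python
import math

def train_with_nan_guard(losses):
    """Iterate over per-epoch losses and stop as soon as one is NaN.

    `losses` stands in for the values a real training loop would produce;
    in practice each loss would come from a forward pass on a batch.
    """
    completed = []
    for epoch, loss in enumerate(losses):
        if math.isnan(loss):
            # Stop before the NaN poisons checkpoints and running averages.
            print(f"Loss became NaN at epoch {epoch}; stopping early.")
            break
        completed.append(loss)
    return completed

# Example: the loss diverges and becomes NaN at epoch 3.
history = train_with_nan_guard([0.9, 0.5, 0.3, float("nan"), 0.2])
```

In a PyTorch loop the same check is usually written as `if torch.isnan(loss): break` right after the loss is computed, before `loss.backward()`.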