
Multiple GPUs training problem #12

Open
JohanCao opened this issue Aug 6, 2019 · 0 comments
Labels
help wanted Extra attention is needed

Comments


JohanCao commented Aug 6, 2019

Hi all,
The training code works fine when I use just one GPU. However, when I change "num_gpus" to 2 and keep the rest of the parameters unchanged, training does not converge at all (none of the losses drop and the IoU does not increase). The two GPUs are the same model, and I am using PyTorch 1.0.1. Has anyone else encountered this problem, and how can I fix it? Many thanks!

Johan
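For context, here is a minimal sketch of how multi-GPU training is commonly wired up in PyTorch 1.x with `nn.DataParallel`. This is not the repo's actual training code; the model, data, and the way `num_gpus` is consumed are all hypothetical stand-ins. One relevant detail it illustrates: `DataParallel` splits each batch across GPUs, so the effective per-GPU batch size shrinks and BatchNorm statistics are computed per GPU, which is a common reason a multi-GPU run behaves differently from a single-GPU run.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the repo's model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

num_gpus = torch.cuda.device_count()
if num_gpus > 1:
    # With 2 GPUs, each replica sees half the batch; BatchNorm layers
    # (if any) then normalize with per-GPU statistics.
    model = nn.DataParallel(model, device_ids=list(range(num_gpus)))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy regression data in place of the real dataset.
torch.manual_seed(0)
x = torch.randn(32, 8, device=device)
y = x.sum(dim=1, keepdim=True)

first_loss = None
for step in range(50):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    if first_loss is None:
        first_loss = loss.item()
    loss.backward()
    optimizer.step()
```

If the smaller per-GPU batch is the culprit, commonly suggested remedies (again, untested against this repo) include raising the total batch size or learning rate accordingly, or using synchronized batch normalization so statistics are shared across GPUs.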

@bertjiazheng bertjiazheng added the help wanted Extra attention is needed label Aug 13, 2019