Train with the whole dataset #44
Hello! I have modified the order in which X_U selects samples in
Thank you for the update. However, the problem occurred again:

```
2021-10-13 05:19:28,508 - mmdet - INFO - Epoch [1][600/1110] lr: 1.000e-03, eta: 0:02:12, time: 0.137, data_time: 0.005, memory: 2144, l_det_cls: 0.2635, l_det_loc: 0.1598, l_wave_dis: 0.0000, l_imgcls: 0.1265, L_wave_min: 0.5498
```

Besides, I noticed that `l_wave_dis` became zero. Is that OK?
It shouldn't be zero. Which dataset and how many GPUs did you use?
I trained it with a private dataset. This dataset might be fairly hard for the detection task; in my previous work on another model, the model sometimes couldn't detect any positives during training. I was wondering whether `l_wave_dis` becomes zero if all detections are negative (below the threshold).
It is possible, but I have never seen such extreme results before.
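For intuition, here is a minimal sketch (hypothetical function and names, not the repository's actual implementation) of how a discrepancy-style loss averaged only over positive detections degenerates to exactly zero when no detection clears the threshold:

```python
def wave_discrepancy_loss(scores_a, scores_b, pos_mask):
    """Hypothetical sketch: average the score discrepancy between two
    classifier heads over positive (above-threshold) detections only.
    With no positives, there is nothing to compare and the loss is 0."""
    pairs = [(a, b) for a, b, keep in zip(scores_a, scores_b, pos_mask) if keep]
    if not pairs:
        # All detections were negative -> loss collapses to zero,
        # matching an l_wave_dis of 0.0000 in the training log.
        return 0.0
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# No positive detections at all: loss is exactly 0.0
print(wave_discrepancy_loss([0.9, 0.1], [0.2, 0.3], [False, False]))  # 0.0
# One positive detection: loss is the score gap on that detection
print(wave_discrepancy_loss([0.9, 0.1], [0.4, 0.3], [True, False]))   # 0.5
```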
Hello!
I tried to run experiments that gradually use the whole dataset.
But when the proportion of labeled data in use reached 1100/1659, I ran into a StopIteration error.
I was wondering how to set the config so that the whole labeled dataset is in use by the time all the active-learning cycles finish.
Thanks in advance.
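One plausible cause (an assumption, since the config in question isn't shown) is that the final cycle tries to draw more samples from the unlabeled pool than remain in it. A minimal sketch of one selection cycle, with the budget clamped so the last cycle can exhaust the pool cleanly instead of failing:

```python
import random

def run_cycle(labeled, unlabeled, budget):
    """Hypothetical sketch of one active-learning cycle: move up to
    `budget` samples from the unlabeled pool into the labeled set.
    Clamping the budget prevents over-drawing an almost-empty pool."""
    take = min(budget, len(unlabeled))  # guard against pool exhaustion
    picked = random.sample(unlabeled, take)
    picked_set = set(picked)
    remaining = [x for x in unlabeled if x not in picked_set]
    return labeled + picked, remaining

# Final cycle: budget (500) exceeds the remaining pool (1659 - 1100 = 559
# minus earlier draws); the clamp lets the run finish with everything labeled.
labeled, unlabeled = run_cycle(list(range(1100)), list(range(1100, 1659)), 600)
print(len(labeled), len(unlabeled))  # 1659 0
```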