Fine tuning my custom dataset on already trained yolo #2498
Comments
@usamatariq70 to start training from a previous model simply pass it to the --weights argument:
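The command that followed appears to have been lost in extraction; a typical invocation would look something like this (the checkpoint and dataset paths are placeholders for your own files):
python train.py --weights path/to/last.pt --data your_data.yaml  # start training from an existing checkpoint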
@usamatariq70 for freezing layers see the Transfer Learning with Frozen Layers Tutorial in the YOLOv5 Tutorials.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Just this command? Is there no need to modify the learning rate or other parameters? Thanks!!!! Looking forward to your reply.
@SISTMrL 👋 Hello! Thanks for asking about resuming training. YOLOv5 🚀 Learning Rate (LR) schedulers follow predefined LR curves for the fixed number of --epochs set at training start, so a run cannot simply be restarted with different settings (see the sketch after this comment). If your training was interrupted for any reason, you may continue where you left off using the --resume argument; if it fully completed, start a new training from the trained weights with the --weights argument.

Resume Single-GPU
You may not change settings when resuming, and no additional arguments other than --resume are allowed:
python train.py --resume # automatically find latest checkpoint (searches yolov5/ directory)
python train.py --resume path/to/last.pt # specify resume checkpoint

Resume Multi-GPU
Multi-GPU DDP trainings must be resumed with the same GPUs and DDP command, i.e. assuming 8 GPUs:
python -m torch.distributed.launch --nproc_per_node 8 train.py --resume # resume latest checkpoint
python -m torch.distributed.launch --nproc_per_node 8 train.py --resume path/to/last.pt # specify resume checkpoint

Start from Pretrained
If you would like to start training from a fully trained model, use the --weights argument:
python train.py --weights path/to/best.pt # start from pretrained model

Good luck and let us know if you have any other questions!
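For intuition about why the schedule is tied to the planned epoch count, here is a minimal sketch of a cosine ("one-cycle") LR curve of the kind mentioned above; the names lr0 and lrf and the exact formula are assumptions for illustration, not quoted from this thread:

```python
import math

# Hypothetical values standing in for YOLOv5-style hyperparameters.
lr0, lrf, epochs = 0.01, 0.1, 300   # initial LR, final LR fraction, planned --epochs

def lr_at(epoch):
    # Cosine decay from lr0 at epoch 0 down to lr0 * lrf at the final epoch.
    scale = ((1 - math.cos(epoch * math.pi / epochs)) / 2) * (lrf - 1) + 1
    return lr0 * scale

print(lr_at(0), lr_at(150), lr_at(299))  # the whole curve depends on the fixed epoch count
```

Because the curve is defined over the full planned --epochs, resuming with a different epoch count would change every remaining LR value, which is why --resume does not allow new settings.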
@glenn-jocher I've tried fine-tuning my custom model using this command, but instead of a fine-tuned model I get a newly trained model. Scenario: I trained a model on 3 classes, with 2 classes containing around 2500 images each and the 3rd class only 500 images due to lack of data. Later I gathered 1700 more images for class 3 and fine-tuned my model on images of the 3rd class only. When evaluating the fine-tuned model, it only predicts the 3rd class, plus some false positives where the other two classes are predicted as the 3rd class. Can you please tell me what went wrong?
@Dhruv312 everything is correct. You must train on all classes you expect your model to predict in deployment.
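In other words, the fine-tuning dataset (and its data YAML) should still contain labeled images for every class, not only the newly collected one. A hedged sketch with made-up paths and class names:

```yaml
# custom_data.yaml (hypothetical): even when adding data for one class,
# keep training images and labels for ALL classes the model must predict.
train: datasets/custom/images/train
val: datasets/custom/images/val
nc: 3
names: ['class1', 'class2', 'class3']
```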
@glenn-jocher so in fine-tuning I also need to include all the classes, right?
@Dhruv312 yes, during training the model is updated on the training data and nothing else. Whatever it has learned in the past is not actively retained; in fact, quite the opposite: weight decay by itself will bring all weights to zero even with zero losses.
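A toy illustration of that last point (plain PyTorch with made-up numbers, not YOLOv5 code): with weight decay enabled, parameters shrink toward zero even when the loss contributes no gradient at all.

```python
import torch

# A parameter whose task loss is always exactly zero.
w = torch.nn.Parameter(torch.ones(10))
opt = torch.optim.SGD([w], lr=0.1, weight_decay=0.05)

for step in range(100):
    opt.zero_grad()
    loss = 0.0 * w.sum()   # "zero loss": contributes no gradient signal
    loss.backward()
    opt.step()             # only the weight-decay term updates w

print(w.norm())  # norm has shrunk from ~3.16 and keeps decaying toward 0
```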
Can I check: when we train on the pretrained model, are all the layers frozen? Checking the training code, it seems to me that the layers are not unfrozen; please correct me if I'm wrong. Thank you.
@lchunleo no layers are frozen by default; they are all trainable. You can use --freeze when training to freeze part of the model (a rough sketch follows below). See the Transfer Learning with Frozen Layers Tutorial for details.
Good luck 🍀 and let us know if you have any other questions!
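As a rough sketch of what freezing amounts to internally (the layer count of 10, the hub download, and the module-name pattern are assumptions for illustration, not the exact YOLOv5 source):

```python
import torch

# One hedged way to obtain a YOLOv5 model for illustration (downloads via torch.hub).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Assumed naming convention: parameters of the first 10 top-level modules
# contain 'model.0.' ... 'model.9.' in their names.
freeze = [f'model.{i}.' for i in range(10)]

for name, param in model.named_parameters():
    param.requires_grad = True                  # default: everything trainable
    if any(layer in name for layer in freeze):
        param.requires_grad = False             # frozen: excluded from gradient updates
```

On the command line this would typically look like python train.py --freeze 10 ..., but check your YOLOv5 version for the exact flag semantics.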
I have a big doubt here: if we take a pre-trained model and train it on a custom dataset, that means we are fine-tuning the model, not training it from scratch, right?
@Nitya476 In fine-tuning we are adjusting the pre-trained model's weights on our custom dataset; we are not training from scratch.
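For concreteness, the two starting points typically differ only in the weights you pass (file names here are placeholders, and the empty --weights '' plus --cfg convention for scratch training is an assumption based on common YOLOv5 usage):
python train.py --weights '' --cfg yolov5s.yaml --data custom.yaml  # from scratch: random initialization
python train.py --weights yolov5s.pt --data custom.yaml             # fine-tuning: start from pretrained weights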
❔Question
I trained my YOLOv5 model on 7000 images with 33000 annotations. Now I want to fine-tune that model, and I have almost 5000 new images and annotations.
Can you please guide me on which layers I should freeze to fine-tune it, and how?