
YOLOv5 release v6.0 #5141

Merged 27 commits from v6.0 into master on Oct 12, 2021

Conversation

@glenn-jocher (Member) commented Oct 12, 2021

YOLOv5 release v6.0 PR - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export, OpenCV DNN support

This release incorporates many new features and bug fixes (465 PRs from 73 contributors) since our last release v5.0 in April, brings architecture tweaks, and also introduces new P5 and P6 'Nano' models: YOLOv5n and YOLOv5n6. Nano models maintain the YOLOv5s depth multiple of 0.33 but reduce the YOLOv5s width multiple from 0.50 to 0.25, resulting in ~75% fewer parameters, from 7.5M to 1.9M, ideal for mobile and CPU solutions.
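Since convolution parameter counts scale roughly with the square of the width multiple, halving it predicts the ~75% reduction quoted above. A quick back-of-the-envelope sketch in Python (the 7.2M YOLOv5s figure is taken from the checkpoint table below; this estimate is illustrative, not from the release notes):

# Rough scaling check: conv parameters grow ~ (width multiple)^2, so halving
# the YOLOv5s width multiple (0.50 -> 0.25) should cut parameters ~4x.
s_width, n_width = 0.50, 0.25
yolov5s_params = 7.2e6  # YOLOv5s params from the checkpoint table below
estimate = yolov5s_params * (n_width / s_width) ** 2
print(f"estimated YOLOv5n params: {estimate / 1e6:.1f}M")  # ~1.8M vs 1.9M actual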

Example usage:

python detect.py --weights yolov5n.pt --img 640    # Nano P5 model trained at --img 640 (28.4 mAP@0.5:0.95)
python detect.py --weights yolov5n6.pt --img 1280  # Nano P6 model trained at --img 1280 (34.0 mAP@0.5:0.95)
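The new checkpoints can also be loaded through PyTorch Hub (the hubconf.py update noted in the summary below enables this). A minimal sketch, where 'img.jpg' is a hypothetical local image path:

import torch

# Load the new Nano checkpoint via PyTorch Hub (downloads yolov5n.pt on first use)
model = torch.hub.load('ultralytics/yolov5', 'yolov5n')

results = model('img.jpg')  # 'img.jpg' is a hypothetical local image path
results.print()             # print a summary of the detections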

Important Updates

New Results

YOLOv5-P5 640 Figure

Figure Notes
  • COCO AP val denotes the mAP@0.5:0.95 metric measured on the 5000-image COCO val2017 dataset over various inference sizes from 256 to 1536.
  • GPU Speed measures average inference time per image on the COCO val2017 dataset using an AWS p3.2xlarge V100 instance at batch-size 32.
  • EfficientDet data from google/automl at batch size 8.
  • Reproduce by python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt

mAP improves by +0.3% to +1.1% across all models, and a ~5% FLOPs reduction produces slight speed improvements and a reduced CUDA memory footprint. Example YOLOv5l before-and-after metrics:

| YOLOv5l (Large) | size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
|---|---|---|---|---|---|---|---|---|
| v5.0 (previous) | 640 | 48.2 | 66.9 | 457.9 | 11.6 | 2.8 | 47.0 | 115.4 |
| v6.0 (this release) | 640 | 48.8 | 67.2 | 424.5 | 10.9 | 2.7 | 46.5 | 109.1 |

Pretrained Checkpoints

| Model | size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
|---|---|---|---|---|---|---|---|---|
| YOLOv5n | 640 | 28.4 | 46.0 | 45 | 6.3 | 0.6 | 1.9 | 4.5 |
| YOLOv5s | 640 | 37.2 | 56.0 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| YOLOv5m | 640 | 45.2 | 63.9 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| YOLOv5l | 640 | 48.8 | 67.2 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| YOLOv5n6 | 1280 | 34.0 | 50.7 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| YOLOv5s6 | 1280 | 44.5 | 63.0 | 385 | 8.2 | 3.6 | 16.8 | 12.6 |
| YOLOv5m6 | 1280 | 51.0 | 69.0 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| YOLOv5l6 | 1280 | 53.6 | 71.6 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| YOLOv5x6 | 1280 | 54.7 | 72.4 | 3136 | 26.2 | 19.4 | 140.7 | 209.8 |
| YOLOv5x6 + TTA | 1536 | 55.4 | 72.3 | - | - | - | - | - |
Table Notes
  • All checkpoints are trained to 300 epochs with default settings. Nano models use hyp.scratch-low.yaml hyperparameters; all others use hyp.scratch-high.yaml.
  • mAP val values are for single-model single-scale on the COCO val2017 dataset.
    Reproduce by python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
  • Speed averaged over COCO val images using an AWS p3.2xlarge instance. NMS times (~1 ms/img) not included.
    Reproduce by python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45
  • TTA (Test Time Augmentation) includes reflection and scale augmentations.
    Reproduce by python val.py --data coco.yaml --img 1536 --iou 0.7 --augment
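As a hedged illustration of the hyperparameter split in the first note above, a from-scratch Nano training run might be invoked as follows (the data/hyps/ path and flag combination are assumptions based on the repository layout, not commands from the release notes):

# Train YOLOv5n from scratch with the low-augmentation hyperparameters (sketch)
python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --img 640 --epochs 300 --hyp data/hyps/hyp.scratch-low.yaml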

Changelog

Changes between previous release and this release: v5.0...v6.0
Changes since this release: v6.0...HEAD

New Features and Bug Fixes (465)
New Contributors (73)
  • @robmarkcole made their first contribution in https://github.com/ultralytics/yolov5/pull/2732
  • @timstokman made their first contribution in https://github.com/ultralytics/yolov5/pull/2856
  • @Ab-Abdurrahman made their first contribution in https://github.com/ultralytics/yolov5/pull/2827
  • @JoshSong made their first contribution in https://github.com/ultralytics/yolov5/pull/2871
  • @MichHeilig made their first contribution in https://github.com/ultralytics/yolov5/pull/2883
  • @r-blmnr made their first contribution in https://github.com/ultralytics/yolov5/pull/2890
  • @fcakyon made their first contribution in https://github.com/ultralytics/yolov5/pull/2817
  • @Ashafix made their first contribution in https://github.com/ultralytics/yolov5/pull/2658
  • @albinxavi made their first contribution in https://github.com/ultralytics/yolov5/pull/2923
  • @BZFYS made their first contribution in https://github.com/ultralytics/yolov5/pull/2934
  • @ferdinandl007 made their first contribution in https://github.com/ultralytics/yolov5/pull/2932
  • @jluntamazon made their first contribution in https://github.com/ultralytics/yolov5/pull/2953
  • @hodovo made their first contribution in https://github.com/ultralytics/yolov5/pull/3010
  • @jylink made their first contribution in https://github.com/ultralytics/yolov5/pull/2982
  • @kepler62f made their first contribution in https://github.com/ultralytics/yolov5/pull/3058
  • @KC-Zhang made their first contribution in https://github.com/ultralytics/yolov5/pull/3127
  • @CristiFati made their first contribution in https://github.com/ultralytics/yolov5/pull/3137
  • @cgerum made their first contribution in https://github.com/ultralytics/yolov5/pull/3104
  • @adrianholovaty made their first contribution in https://github.com/ultralytics/yolov5/pull/3215
  • @yeric1789 made their first contribution in https://github.com/ultralytics/yolov5/pull/3240
  • @charlesfrye made their first contribution in https://github.com/ultralytics/yolov5/pull/3264
  • @ChaofWang made their first contribution in https://github.com/ultralytics/yolov5/pull/3362
  • @pizzaz93 made their first contribution in https://github.com/ultralytics/yolov5/pull/3368
  • @tudoulei made their first contribution in https://github.com/ultralytics/yolov5/pull/3379
  • @chocosaj made their first contribution in https://github.com/ultralytics/yolov5/pull/3422
  • @SamSamhuns made their first contribution in https://github.com/ultralytics/yolov5/pull/3456
  • @edificewang made their first contribution in https://github.com/ultralytics/yolov5/pull/3423
  • @deanmark made their first contribution in https://github.com/ultralytics/yolov5/pull/3505
  • @dependabot made their first contribution in https://github.com/ultralytics/yolov5/pull/3561
  • @kalenmike made their first contribution in https://github.com/ultralytics/yolov5/pull/3530
  • @masoodazhar made their first contribution in https://github.com/ultralytics/yolov5/pull/3591
  • @wq9 made their first contribution in https://github.com/ultralytics/yolov5/pull/3612
  • @xiaowk5516 made their first contribution in https://github.com/ultralytics/yolov5/pull/3638
  • @thanhminhmr made their first contribution in https://github.com/ultralytics/yolov5/pull/3646
  • @SpongeBab made their first contribution in https://github.com/ultralytics/yolov5/pull/3650
  • @ZouJiu1 made their first contribution in https://github.com/ultralytics/yolov5/pull/3681
  • @lb-desupervised made their first contribution in https://github.com/ultralytics/yolov5/pull/3687
  • @batrlatom made their first contribution in https://github.com/ultralytics/yolov5/pull/3799
  • @yellowdolphin made their first contribution in https://github.com/ultralytics/yolov5/pull/3722
  • @Zigars made their first contribution in https://github.com/ultralytics/yolov5/pull/3804
  • @feras-oughali made their first contribution in https://github.com/ultralytics/yolov5/pull/3833
  • @vaaliferov made their first contribution in https://github.com/ultralytics/yolov5/pull/3852
  • @san-soucie made their first contribution in https://github.com/ultralytics/yolov5/pull/3863
  • @ketan-b made their first contribution in https://github.com/ultralytics/yolov5/pull/3864
  • @johnohagan made their first contribution in https://github.com/ultralytics/yolov5/pull/3904
  • @jmiranda-laplateforme made their first contribution in https://github.com/ultralytics/yolov5/pull/3923
  • @eldarkurtic made their first contribution in https://github.com/ultralytics/yolov5/pull/3934
  • @seven320 made their first contribution in https://github.com/ultralytics/yolov5/pull/3973
  • @imyhxy made their first contribution in https://github.com/ultralytics/yolov5/pull/4126
  • @IneovaAI made their first contribution in https://github.com/ultralytics/yolov5/pull/4238
  • @junjihashimoto made their first contribution in https://github.com/ultralytics/yolov5/pull/4049
  • @Justsubh01 made their first contribution in https://github.com/ultralytics/yolov5/pull/4309
  • @orangeccc made their first contribution in https://github.com/ultralytics/yolov5/pull/4379
  • @ahmadmustafaanis made their first contribution in https://github.com/ultralytics/yolov5/pull/4376
  • @OmidSa75 made their first contribution in https://github.com/ultralytics/yolov5/pull/4428
  • @huuquan1994 made their first contribution in https://github.com/ultralytics/yolov5/pull/4455
  • @karasawatakumi made their first contribution in https://github.com/ultralytics/yolov5/pull/4588
  • @YukunXia made their first contribution in https://github.com/ultralytics/yolov5/pull/4608
  • @zhiqwang made their first contribution in https://github.com/ultralytics/yolov5/pull/4701
  • @ELHoussineT made their first contribution in https://github.com/ultralytics/yolov5/pull/4676
  • @joaodiogocosta made their first contribution in https://github.com/ultralytics/yolov5/pull/4727
  • @jeanbmar made their first contribution in https://github.com/ultralytics/yolov5/pull/4728
  • @Zegorax made their first contribution in https://github.com/ultralytics/yolov5/pull/4730
  • @jveitchmichaelis made their first contribution in https://github.com/ultralytics/yolov5/pull/4742
  • @kimnamu made their first contribution in https://github.com/ultralytics/yolov5/pull/4787
  • @NauchtanRobotics made their first contribution in https://github.com/ultralytics/yolov5/pull/4893
  • @SamFC10 made their first contribution in https://github.com/ultralytics/yolov5/pull/4914
  • @d57montes made their first contribution in https://github.com/ultralytics/yolov5/pull/4958
  • @EgOrlukha made their first contribution in https://github.com/ultralytics/yolov5/pull/5074
  • @sandstorm12 made their first contribution in https://github.com/ultralytics/yolov5/pull/5112
  • @qiningonline made their first contribution in https://github.com/ultralytics/yolov5/pull/5114
  • @maltelorbach made their first contribution in https://github.com/ultralytics/yolov5/pull/5129
  • @andreiionutdamian made their first contribution in https://github.com/ultralytics/yolov5/pull/5109

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Update improves YOLOv5 model benchmarks, introduces new model variants, and refines training hyperparameters.

📊 Key Changes

  • Updated the model benchmark images and statistics in the README, offering clearer performance insights.
  • Renamed a hyperparameter file for clarity, and added new hyperparameter files for different augmentation strategies.
  • Introduced new smaller model variants, YOLOv5n and YOLOv5n6, optimized for speed.
  • Modified and extended existing model configurations to improve performance and accommodate the P6 models, which are larger-scale variants.
  • Adjusted default hyperparameters for training, providing better baselines for different augmentation levels.
  • Incorporated changes into the hubconf.py script to allow easy loading of the new models.

🎯 Purpose & Impact

  • Clarity: The README update helps users understand the model's performance more easily.
  • Variety: The new models (YOLOv5n, YOLOv5n6) offer additional options for users who need faster inference times, which can be especially beneficial for edge computing or devices with limited computational power.
  • Optimization: Updating the hyperparameters and model configurations can lead to better training outcomes without the need for manual adjustment.
  • Ease of Use: Changes to hubconf.py streamline the process for users to use the pretrained models in their applications.

These changes can potentially lead to wider adoption and more effective implementation of YOLOv5 models in various real-world applications. ⚙️🚀

@glenn-jocher glenn-jocher added the enhancement New feature or request label Oct 12, 2021
@glenn-jocher glenn-jocher merged commit 956be8e into master Oct 12, 2021
@glenn-jocher glenn-jocher deleted the v6.0 branch October 12, 2021 06:47
@SpongeBab (Contributor)

Ohhhh, finally! Great job!

@AyushExel (Contributor)

Amazing work as always @glenn-jocher

@glenn-jocher (Member, Author)

About Reported Speeds

mAP values are reproducible across any hardware, but speeds vary significantly among V100 instances and seem to depend heavily on the CUDA, cuDNN and PyTorch installations used. The numbers reported above were produced on GCP N1-standard-8 Skylake V100 instances running the v6.0 Docker image with:

  • NVIDIA Driver Version: 460.73.01
  • CUDA Version: 11.2
# Pull and Run v6.0 image
t=ultralytics/yolov5:v6.0 && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all $t


Our speed command is:

# Reproduce YOLOv5s batch-1 speeds in table
python val.py --data coco.yaml --img 640 --task speed --batch 1


We tried several options, including AWS P3 instances, more recent base images (FROM nvcr.io/nvidia/pytorch:21.09-py3), and different PyTorch install methods, and many returned slower speeds than the values reported in our table. Batch-32 speeds were found to vary less across options; the analogous batch-32 command is sketched below.
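The assumed batch-32 analogue, mirroring the batch-1 flags above (this exact invocation is not from the release notes):

# Reproduce YOLOv5s batch-32 speeds (assumed analogue of the batch-1 command)
python val.py --data coco.yaml --img 640 --task speed --batch 32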

@PussyCat0700 commented Nov 2, 2021

> YOLOv5 release v6.0 PR - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export, OpenCV DNN support [...]
So Focus() has been removed because its performance is almost equivalent to a simple Conv() layer in most cases?

@zldrobit (Contributor)

zldrobit commented Nov 2, 2021

@PussyCat0700 There is no Focus() layer in the models/*.yaml files after the v6.0 release.

@myasser63 commented Nov 15, 2021

@glenn-jocher What is the difference between yolov5s and yolov5s6? Both have the v6.0 backbone and head; is only the input size different?

@glenn-jocher (Member, Author)

@myasser63 yolov5s is a P5 model trained at --img 640, while yolov5s6 is a P6 model trained at --img 1280. See the release v5.0 notes for details on these two model types:
https://github.com/ultralytics/yolov5/releases/tag/v5.0

@myasser63

Do P5 models have a Focus layer?

@glenn-jocher (Member, Author)

@myasser63 no. See #4825

@glenn-jocher glenn-jocher mentioned this pull request Jan 6, 2022
@Adeelbek

@glenn-jocher thanks for sharing your great work on the YOLOv5 series.
I am actively using it in my projects. However, could you kindly explain why you use a 6x6 kernel, and only in the first layer of the backbone? Is this borrowed from CSP (although I haven't seen it in the CSP paper), or do you have empirical proof that 6x6 gives better performance? If one changes 6x6 => 3x3, does it give similar performance?
Implementing YOLOv5 on an NPU may become a bit complex just due to the 6x6 kernel in the first Conv layer of the backbone.

@glenn-jocher (Member, Author)

glenn-jocher commented Feb 25, 2022

@Adeelbek the 6x6, stride-2, padding-2 convolution in the input layer is a replacement for the Focus layer used in earlier versions of YOLOv5. The Focus layer rearranges spatial data into the channel dimension. The replacement was made in YOLOv5 v6.0.

The PR explaining the update is #4825
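For illustration, a minimal PyTorch sketch of the two stems (BatchNorm and activation omitted; the 64 output channels are assumed for a large model), showing that both map a 640x640 input to the same output shape:

import torch
import torch.nn as nn

class Focus(nn.Module):
    # Pre-v6.0 stem: rearrange each 2x2 spatial block into channels, then convolve.
    def __init__(self, c1=3, c2=64):
        super().__init__()
        self.conv = nn.Conv2d(c1 * 4, c2, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                    x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))

# v6.0 stem: a single 6x6, stride-2, padding-2 convolution.
stem = nn.Conv2d(3, 64, kernel_size=6, stride=2, padding=2)

x = torch.randn(1, 3, 640, 640)
print(Focus()(x).shape)  # torch.Size([1, 64, 320, 320])
print(stem(x).shape)     # torch.Size([1, 64, 320, 320])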

BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022
* Update P5 models

* Update P6 models

* Update with GFLOPs and Params

* Update with GFLOPs and Params

* Update README

* Update

* Update README

* Update

* Update

* Add times

* Update README

* Update results

* Update results

* Update results

* Update hyps

* Update plots

* Update plots

* Update README.md

* Add nano models to hubconf.py

Successfully merging this pull request may close these issues:
  • Question about calculating mAP at test time
  • DDP wandb utils not running check_file() on --data