
Cannot reproduce the 64.1 mAP on COCO dataset by yolov5m #10905

Closed · 1 task done
SikaAntler opened this issue Feb 5, 2023 · 18 comments
Labels: question (Further information is requested), Stale (stale and scheduled for closing soon)

Comments

SikaAntler commented Feb 5, 2023

Search before asking

Question

I was trying to reproduce the 64.1 mAP(0.5) of yolov5m on the COCO dataset.

My environment is:
Python-3.9.13 torch-1.9.1+cu111

My training shell script is:
python -m torch.distributed.run --nproc_per_node 4 train.py --weights '' --cfg yolov5m.yaml --data data/coco.yaml --hyp data/hyps/hyp.scratch-high.yaml --epochs 300 --batch-size 128 --imgsz 640 --device 0,1,2,3 --sync-bn

However, I could only get a best mAP(0.5) of 0.587.

I was told in another issue that the reported mAP is computed by pycocotools, not by the built-in metrics. So I downloaded the officially released yolov5m checkpoint and evaluated it with the built-in metrics, which gave an mAP(0.5) of 0.635. There is still a big gap between 0.587 and 0.635.
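For reference, the pycocotools score can be computed directly from the detections JSON that val.py writes with --save-json; a minimal sketch (the file paths below are placeholders, not the exact ones from my run):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations and the detections JSON from val.py --save-json (placeholder paths)
gt = COCO("annotations/instances_val2017.json")
dt = gt.loadRes("runs/val/exp/predictions.json")

ev = COCOeval(gt, dt, "bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints AP@[.50:.95], AP@.50, etc.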

So I suspect the problem lies in the training process. Can anybody help me solve this? Thank you very much!

Additional

No response

SikaAntler added the question label Feb 5, 2023
github-actions bot commented Feb 5, 2023

👋 Hello @SikaAntler, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email [email protected].

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@marigoold

I encountered the same problem, but with YOLOv5n: I can't reproduce the 45.7 mAP :( training early-stopped at ~36 mAP.

@SikaAntler (Author)


OK, then I will also try to reproduce yolov5n, and I will tell you my result tomorrow.

marigoold commented Feb 5, 2023


Thanks! BTW, my training command is python3 -m torch.distributed.launch --nproc_per_node 8 train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 512 --sync-bn, and I used the main branch, FYI.

@marigoold

I also disabled AMP for the loss computation, and the mAP seems to be normal (I didn't finish the training, I only checked the increasing trend of the mAP). I tried different training combinations; the results are:

  • DDP and AMP enabled: abnormal mAP
  • DDP enabled, AMP disabled (comment out the AMP-related code in train.py; see the sketch after this list): normal mAP
  • DDP disabled, AMP enabled (single-GPU training): normal mAP
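For reference, a standalone sketch (not YOLOv5's actual train.py) of how such an amp flag gates the GradScaler and the autocast context; in train.py the equivalent override is forcing the flag produced by amp = check_amp(model) to False:

import torch
import torch.nn.functional as F

amp = False  # force full precision, instead of auto-detecting AMP support

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# With enabled=False both the scaler and autocast become no-ops, i.e. plain FP32 training
scaler = torch.cuda.amp.GradScaler(enabled=amp and device == "cuda")

x, y = torch.randn(4, 8, device=device), torch.randn(4, 1, device=device)
with torch.cuda.amp.autocast(enabled=amp and device == "cuda"):
    loss = F.mse_loss(model(x), y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()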

@SikaAntler (Author)


Well, I could only get 0.32 mAP(0.5), with no early stopping, and I don't know why...

My training script is:
python -m torch.distributed.run --nproc_per_node 4 train.py --weights '' --cfg yolov5n.yaml --data coco.yaml --hyp hyp.scratch-low.yaml --epochs 300 --batch-size 512 --imgsz 640 --device 0,1,2,3 --sync-bn

@SikaAntler (Author)


Thank you, it seems the problem is caused by DDP combined with AMP. I am going to try these settings.

@marigoold


Unfortunately, the DDP-disabled, AMP-enabled setup (single-GPU training) seems to give abnormal mAP too.
I just trained YOLOv5n on a single GPU, and the mAP(0.5) got stuck at ~0.354.
Maybe the issue is caused by AMP and has nothing to do with DDP.

@YoungjaeDev

@marigoold @SikaAntler
When I switch to DDP, for example:

 python -m torch.distributed.run --nproc_per_node 4 train.py --batch 256 --cfg models/yolov5s.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --project Train_COCO --name coco-scratch-v5s640 --device 0,1,2,3 --hyp data/hyps/hyp.scratch-low.yaml

Is the --sync-bn option required, or is it optional?

@marigoold


Hi @youngjae-avikus, you can read this discussion. Briefly, --sync-bn will improve the accuracy of the BatchNorm statistics, but it slows down training.
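For context, a rough sketch of what --sync-bn corresponds to in PyTorch (the exact wiring inside train.py may differ): SyncBatchNorm makes BatchNorm compute its statistics over the batches of all DDP processes instead of per GPU.

import torch

def maybe_convert_sync_bn(model: torch.nn.Module, sync_bn: bool, device: torch.device) -> torch.nn.Module:
    # Sharing BN statistics across processes helps when the per-GPU batch is small,
    # at the cost of extra cross-GPU communication on every step.
    if sync_bn and torch.distributed.is_available() and torch.distributed.is_initialized():
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
    return model.to(device)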

@SikaAntler (Author)


In my experience, sync-bn is not necessary; you can drop it when your per-GPU batch size is big enough.

@SikaAntler (Author)


I have tried yolov5m with AMP disabled; however, the best mAP(0.5) only reached 0.58484.

My way of turning off AMP was to change amp = check_amp(model) so that amp is always False. Is that correct?

@glenn-jocher (Member)

Use batch size 128 to reproduce official trainings
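For clarity, YOLOv5's multi-GPU tutorial treats --batch-size as the total batch, split evenly across the DDP processes, so (assuming that convention) batch 128 on 4 GPUs works out as follows:

total_batch = 128   # --batch-size recommended above
num_gpus = 4        # --nproc_per_node used in the commands in this thread
per_gpu = total_batch // num_gpus
print(per_gpu)      # 32 images per GPU per optimizer step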

marigoold commented Feb 8, 2023


@glenn-jocher Hi, how can I reproduce the official mAP on multiple GPUs? The scripts at https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training always seem to give a low mAP :(

@SikaAntler (Author)


Yes, I set the batch size to 128 and turned on sync-bn; however, I could only get a best mAP(0.5) of 0.58691.

As suggested by marigoold, I also turned off AMP, but the best mAP(0.5) was only 0.58484.

Is there any key point I have missed? Thank you.

github-actions bot commented Mar 11, 2023

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label Mar 11, 2023
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 21, 2023
eypros commented Apr 25, 2023

Hello, a bit late, but a key factor you are changing is the image size: you are using 512 for some reason. The reported results use 640 for yolov5n.pt and 1280x1280 for yolov5n6.pt, as I understand it. You should verify with this image size to draw clear conclusions.

@glenn-jocher (Member)

Yes, @eypros is correct. The original YOLOv5 research and the YOLOv5 GitHub repository both use a 640x640 image size for the COCO dataset.

If you use image sizes different from the original training setup, the model's performance will differ. Hence, I suggest using the original image size for comparable results.
