
FEATURE REQUEST - GRAD CAM FOR YOLO #7306

Open
holger-prause opened this issue Jan 28, 2021 · 18 comments
Labels
Feature-request Any feature-request

Comments

@holger-prause

Hello,

For debugging a model it is very useful to show a heatmap of the class activations.
This is described in the paper https://arxiv.org/pdf/1610.02391v1.pdf

This would be very useful, imho.
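For reference, the heatmap in that paper (Grad-CAM) is essentially a ReLU of a gradient-weighted sum of feature maps. Below is a minimal NumPy sketch of that weighting, with toy arrays standing in for what a real framework would provide; this illustrates the paper's formula only, not darknet's implementation:

```python
# Grad-CAM weighting per arXiv:1610.02391: the per-channel weight alpha_k is
# the global-average-pooled gradient of the chosen class score w.r.t. feature
# map k, and the heatmap is ReLU(sum_k alpha_k * A_k).
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """feature_maps, gradients: (channels, H, W) arrays for ONE chosen class.

    Returns an (H, W) heatmap. Selecting the class happens upstream, by
    backpropagating only that class's score to obtain `gradients`.
    """
    # alpha_k: global average pool of the gradients over the spatial dims
    alphas = gradients.mean(axis=(1, 2))              # shape (channels,)
    # weighted combination of the forward activation maps
    cam = np.tensordot(alphas, feature_maps, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the class
    return np.maximum(cam, 0.0)

# Toy example: 2 channels of 2x2 feature maps
A = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 2.0], [2.0, 0.0]]])
dY_dA = np.array([[[1.0, 1.0], [1.0, 1.0]],        # alpha_0 = 1
                  [[-1.0, -1.0], [-1.0, -1.0]]])   # alpha_1 = -1
heatmap = grad_cam(A, dY_dA)
# cam = 1*A0 + (-1)*A1 = [[1,-2],[-2,1]] -> ReLU -> [[1,0],[0,1]]
```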

@holger-prause holger-prause added the Feature-request Any feature-request label Jan 28, 2021
@AlexeyAB
Owner

#5117
Set in your cfg-file and run training:

[net] 
adversarial_lr=0.05 
attention=1
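For orientation, these two lines simply get added to the existing [net] section at the top of the cfg file; a sketch, where the surrounding values are the usual yolov4.cfg defaults and may differ in your file:

```
[net]
batch=64
subdivisions=16
width=608
height=608
channels=3
...
adversarial_lr=0.05
attention=1
```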

@holger-prause
Author

holger-prause commented Jan 28, 2021

Thank you very much :-)
Do I need to specify -show_imgs, and will this produce a heatmap?
Feel free to decline this feature request if it's already there :-)
Probably think about putting this into the documentation.

@holger-prause
Author

holger-prause commented Jan 28, 2021

Oh, I am stupid - I just have to read the linked issue #5117 - it's all there!
Thank you very much again - yolo is awesome!

@holger-prause
Author

@AlexeyAB
My goal is to visualize the class activation for a certain class - even the class activation map for all classes would be fine.
Before testing my own model, I want to try it out on the sample image dog.jpg with the coco weights (yolov4.weights) and try to get the attention visualization.

I created a small dataset containing the dog.jpg sample image. **I did not create any annotations for this sample.**

After this I set the mentioned config parameters in the [net] section:

    [net] 
    adversarial_lr=0.05 
    attention=1

After this I started training with:
darknet_no_gpu.exe detector train C:\development\datasets\gradcamtest\train.data C:\development\datasets\gradcamtest\yolov4.cfg C:\development\datasets\gradcamtest\yolov4.weights -show_imgs

After this I see a bunch of augmented images - but none of them contains any kind of heatmap or attention visualization.
What am I doing wrong (should I pick yolov4.conv.137? do I need to pick annotated images?)?

Thank you very much for your patience - I did some trainings before, but for me this is not really straightforward and any help is highly welcome.

@AlexeyAB
Owner

@holger-prause
Author

holger-prause commented Jan 29, 2021

@AlexeyAB
Ok, I did the suggested changes, removed -show_imgs, and added the burn in - the console output is
Saving weights to C:\development\datasets\gradcamtest\backup/yolov4_final.weights

But I still don't see any attention image generated (I checked the dataset, backup, and darknet folders). Note that I use the no-GPU version. I also compiled yolo with opencv support. Am I missing something very obvious - do I need to annotate my sample image (I want to get the attention visualization for the sample image dog.jpg contained in the yolo folder)?

@AlexeyAB
Owner

It is supported on GPU. It isn't implemented for CPU.

@holger-prause
Author

holger-prause commented Jan 29, 2021

@AlexeyAB
Ah ok - I was assuming something like this - fair enough, I got a new PC at work and it has no CUDA GPU...
I train in the cloud.

Can I please summarize (my understanding of) what I need to do to get the attention image:
First of all: Make sure you use the GPU (CUDA) variant of the yolo framework

  • Step 1 - Create a yolo dataset ready to train with the sample image(s) on which you want to do the prediction
  • Step 2 - Adapt your model config to include (as mentioned above)
    [net]
    adversarial_lr=0.05
    attention=1

AND also

    burn_in=0
  • Step 3 - Run training (don't use -show_imgs!)

Now I can only guess:

  • Step 4?
    Use the generated weights file and the adapted config to do a prediction (inference)

Is this correct? I am not sure about step 4 - it does not make much sense to me, as Grad-CAM uses guided backpropagation (I guess this is why we need to train).
Thank you again very much for your time and patience.

@AlexeyAB
Owner

AlexeyAB commented Jan 29, 2021

  1. Train the model or use already trained cfg/weights files
  2. Change in cfg [net]: adversarial_lr=0.05 attention=1 burn_in=0
  3. Run training on GPU with these cfg/weights files (you can tune adversarial_lr=0.05) and with the flag -clear at the end of the training command
    • just an additional note: it will generate new weights in the /backup/ directory by using SAT (self-adversarial training); it can be better if the model (yolov4) has higher capacity than the dataset (ua-detrac), and can be worse if the model (yolov4) has lower capacity than the dataset (mscoco)
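Put together, the recipe above might look like this on a Linux build; the paths, filenames, and data file below are placeholders, not taken from this thread:

```
# yolov4.cfg, [net] section - add:
#   adversarial_lr=0.05
#   attention=1
#   burn_in=0
./darknet detector train data/obj.data cfg/yolov4.cfg yolov4.weights -clear
```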

@holger-prause
Author

holger-prause commented Jan 29, 2021

@AlexeyAB
OMG, it's finally working - I tried it out on another computer (which has the GPU version of yolo), followed all the steps, and I had to use -clear for some reason. I made sure it trained for at least some batches. I also made sure my object is annotated so the weights do not change too much.

The next thing would be to get the attention for only one specific class.

@bjajoh

bjajoh commented May 8, 2021

@holger-prause I'm not sure if I'm understanding it correctly, but is there a working Grad Cam implementation to give me the heat map?

@jojo0513

jojo0513 commented Mar 8, 2022

  1. Train the model or use already trained cfg/weights files

  2. Change in cfg [net]: adversarial_lr=0.05 attention=1 burn_in=0

  3. Run training on GPU with these cfg/weights files (you can tune adversarial_lr=0.05) and with the flag -clear at the end of the training command

    • just an additional note: it will generate new weights in the /backup/ directory by using SAT (self-adversarial training); it can be better if the model (yolov4) has higher capacity than the dataset (ua-detrac), and can be worse if the model (yolov4) has lower capacity than the dataset (mscoco)

Hi Alexey, I did what you said, and without the flag the following error was raised.

"[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
Total BFLOPS 59.614
avg_outputs = 490698
Allocate additional workspace_size = 111.05 MB
Loading weights from /mydrive/yolov4/backup/2021_12_21_weight_with_8_species.weights...
seen 64, trained: 819 K-images (12 Kilo-batches_64)
Done! Loaded 162 layers from weights-file
Learning Rate: 0.001, Momentum: 0.949, Decay: 0.0005
Detection layer: 139 - type = 28
Detection layer: 150 - type = 28
Detection layer: 161 - type = 28
If error occurs - run training with flag: -dont_show
Unable to init server: Could not connect: Connection refused

(chart_yolov4-obj.png:2416): Gtk-WARNING **: 09:45:31.916: cannot open display: "

Do you have any idea what happen?

@stephanecharette
Collaborator

Did you try to search for that error?

https://github.com/AlexeyAB/darknet/issues?q=cannot+open+display

@jojo0513

jojo0513 commented Mar 9, 2022

Did you try to search for that error?

https://github.com/AlexeyAB/darknet/issues?q=cannot+open+display

I did, then someone suggested that I conclude the code with "-dont show." However, after adding it, the code continues to produce the same error and no CAM plot appears...

@stephanecharette
Collaborator

You have to spell it correctly: -dont_show.

@stephanecharette
Collaborator

Or you have to use X-forwarding when you SSH into the device. Or you have to run it on a device that has X installed.

@jojo0513

jojo0513 commented Mar 9, 2022

Yes, in my code, it is -dont show. I made a mistake in my last response. Additionally, I use Google Colab; do I still need to install X?

@stephanecharette
Collaborator

You still typed it wrong in your last comment, so I'm not convinced you're using it correctly. It should be -dont_show. No, you don't have to install X if you use -dont_show.
