How to implement GRADCAM for object detection model #2

Open
Monk5088 opened this issue Oct 27, 2022 · 1 comment
Hey author,
Can you explain how we can implement Grad-CAM for an object detection model?
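For a detector, the main change from classification Grad-CAM is the choice of scalar to backpropagate: instead of a whole-image class score, you start the backward pass from one predicted box's class logit. A minimal sketch of that idea, using a hypothetical `GradCAM` class and a toy backbone (this is not the repo's gradCAM.py API, just an illustration of the hook-based technique):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradCAM:
    """Hook-based Grad-CAM sketch. `score_fn` picks the scalar to explain,
    e.g. the classification logit of one detected box."""
    def __init__(self, model, target_layer):
        self.model = model
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_act)
        target_layer.register_full_backward_hook(self._save_grad)

    def _save_act(self, module, inp, out):
        self.activations = out.detach()

    def _save_grad(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def __call__(self, x, score_fn):
        self.model.zero_grad()
        out = self.model(x)
        score = score_fn(out)          # scalar, e.g. one box's class logit
        score.backward()
        # Global-average-pool the gradients to get per-channel weights
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * self.activations).sum(dim=1))  # (N, H, W)
        return cam / (cam.max() + 1e-8)

# Toy conv stack standing in for the RetinaNet encoder
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 4, 3, padding=1))
cam = GradCAM(backbone, backbone[2])
heatmap = cam(torch.randn(1, 3, 32, 32), score_fn=lambda o: o[0, 0].max())
print(heatmap.shape)  # torch.Size([1, 32, 32])
```

In a real detector you would hook a backbone or FPN layer and pass a `score_fn` that indexes into the model's classification head output for the box you care about.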


Monk5088 commented Oct 27, 2022

It gives the following error:

att_shot = gc(data_test.valid_ds[0][0], index=10)

ERROR:

TypeError                                 Traceback (most recent call last)
[<ipython-input-77-0a01d7c8cfc1>](https://localhost:8080/#) in <module>
      1 #All inputs for model and index of chosen class
      2 #If index is None -> will be provided attention to the most chosen class
----> 3 att_shot = gc(data_test.valid_ds[0][0], index=10)

7 frames
[/content/pytorchGradCAM/gradCAM.py](https://localhost:8080/#) in __call__(self, index, *input)
     29         self.output_gradients : List[torch.Tensor] = []
     30 
---> 31         output = self.model(*input)
     32 
     33         if index == None:

[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

[/usr/local/lib/python3.7/dist-packages/object_detection_fastai/models/RetinaNet.py](https://localhost:8080/#) in forward(self, x)
     70 
     71     def forward(self, x):
---> 72         c5 = self.encoder(x)
     73         p_states = [self.c5top5(c5.clone()), self.c5top6(c5)]
     74         p_states.append(self.p6top7(p_states[-1]))

[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
   1126             input = bw_hook.setup_input_hook(input)
   1127 
-> 1128         result = forward_call(*input, **kwargs)
   1129         if _global_forward_hooks or self._forward_hooks:
   1130             for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()):

[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py](https://localhost:8080/#) in forward(self, input)
    139     def forward(self, input):
    140         for module in self:
--> 141             input = module(input)
    142         return input
    143 

[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
   1126             input = bw_hook.setup_input_hook(input)
   1127 
-> 1128         result = forward_call(*input, **kwargs)
   1129         if _global_forward_hooks or self._forward_hooks:
   1130             for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()):

[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py](https://localhost:8080/#) in forward(self, input)
    445 
    446     def forward(self, input: Tensor) -> Tensor:
--> 447         return self._conv_forward(input, self.weight, self.bias)
    448 
    449 class Conv3d(_ConvNd):

[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py](https://localhost:8080/#) in _conv_forward(self, input, weight, bias)
    442                             _pair(0), self.dilation, self.groups)
    443         return F.conv2d(input, weight, bias, self.stride,
--> 444                         self.padding, self.dilation, self.groups)
    445 
    446     def forward(self, input: Tensor) -> Tensor:

TypeError: conv2d() received an invalid combination of arguments - got (Image, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (!Image!, !Parameter!, !NoneType!, !tuple!, !tuple!, !tuple!, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (!Image!, !Parameter!, !NoneType!, !tuple!, !tuple!, !tuple!, int)
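The traceback shows the root cause: `conv2d()` received a fastai `Image` object, not a `torch.Tensor`, because `data_test.valid_ds[0][0]` was passed straight into the model. In fastai v1 an `Image` wraps its tensor in `.data`, so unwrapping it and adding a batch dimension should resolve this particular error. A self-contained sketch reproducing the failure mode and the fix (`FakeImage` is a hypothetical stand-in for fastai's `Image`; the real call would be `gc(img.data.unsqueeze(0), index=10)`, possibly after the learner's normalization):

```python
import torch
import torch.nn as nn

class FakeImage:                      # stands in for fastai.vision.Image
    def __init__(self, tensor):
        self.data = tensor            # fastai v1 keeps the tensor in .data

conv = nn.Conv2d(3, 1, 3)
img = FakeImage(torch.randn(3, 8, 8))

try:
    conv(img)                         # same mistake as gc(data_test.valid_ds[0][0], ...)
except TypeError:
    print("TypeError: conv2d needs a Tensor, not an Image")

out = conv(img.data.unsqueeze(0))     # fix: unwrap the tensor, add a batch dim
print(out.shape)                      # torch.Size([1, 1, 6, 6])
```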
