Hi,
I wanted to ask how the attention masks shown in Fig. 1 of the paper are visualized. Is Grad-CAM involved, or are the figures the raw outputs of the masks? Also, given that the masks come at different spatial resolutions and have many channels (usually 256, 512, or 1024), how do the authors reduce them to a single 2D visualization?
A code snippet would be very helpful.
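For reference while waiting on the authors: a common (not necessarily the paper's) way to visualize a multi-channel mask is to average over the channel dimension, min-max normalize, and upsample to the input resolution before overlaying it on the image. A minimal NumPy sketch of that idea, where `attention_heatmap` is a hypothetical helper name:

```python
import numpy as np

def attention_heatmap(feat, out_size):
    """Collapse a (C, H, W) attention/activation tensor to a single
    (out_h, out_w) heatmap in [0, 1]: channel-average, min-max
    normalize, then nearest-neighbour upsample.

    This is a common visualization recipe, not confirmed to be the
    authors' exact method.
    """
    heat = feat.mean(axis=0)           # (H, W): average over channels
    heat = heat - heat.min()
    rng = heat.max()
    if rng > 0:
        heat = heat / rng              # scale to [0, 1]
    out_h, out_w = out_size
    rows = np.arange(out_h) * heat.shape[0] // out_h
    cols = np.arange(out_w) * heat.shape[1] // out_w
    return heat[rows][:, cols]         # nearest-neighbour resize

# Example: a 512-channel 14x14 mask upsampled to 224x224
feat = np.random.rand(512, 14, 14)
heat = attention_heatmap(feat, (224, 224))
```

The result can then be overlaid on the input image with matplotlib, e.g. `plt.imshow(img); plt.imshow(heat, cmap="jet", alpha=0.5)`. Taking the per-pixel maximum over channels instead of the mean is another option, depending on what the figure is meant to show.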