Hi, thanks for your great work.
In your code, I see that the input to EfficientAdditiveAttention is reshaped from (B, C, H, W) to (B, H*W, C):
self.attn(x.permute(0, 2, 3, 1).reshape(B, H * W, C))
My understanding is that num_tokens is H*W and dim_token is C. Is that right? Does this give better results, and is it more efficient?
What is the difference between this and splitting the feature map into patches, like below?
# from (B, C, H, W) --> (B, N, D), N: num_tokens, D: dim_token
rearrange(feature_map, 'b c (w s1) (h s2) -> b (w h) (c s1 s2)', s1=patch_size, s2=patch_size)
Looking forward to your reply! :)
Yes, in our code the number of tokens is H*W and the token dimension is C.
Splitting the features into patches is another common approach; it increases the token dimension (from C to C * patch_size^2) and reduces the number of tokens (from H*W to H*W / patch_size^2). We haven't tested dividing the feature maps into patches, but it would be interesting to compare the two in terms of complexity, inference speed, and accuracy.
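For reference, a minimal sketch comparing the two tokenizations on the same feature map (the sizes B=1, C=64, H=W=16 and patch_size=2 are hypothetical, chosen only for illustration):

import torch
from einops import rearrange

# Hypothetical sizes for illustration only.
B, C, H, W = 1, 64, 16, 16
patch_size = 2
feature_map = torch.randn(B, C, H, W)

# Per-pixel tokens, as in the code above: every spatial location becomes a token.
# (B, C, H, W) -> (B, H*W, C): N = 256 tokens, D = 64 dims.
pixel_tokens = feature_map.permute(0, 2, 3, 1).reshape(B, H * W, C)
print(pixel_tokens.shape)  # torch.Size([1, 256, 64])

# Patch tokens, as in the rearrange from the question: each patch_size x patch_size
# block of pixels is flattened into one token.
# (B, C, H, W) -> (B, (H/p)*(W/p), C*p*p): N = 64 tokens, D = 256 dims.
patch_tokens = rearrange(
    feature_map, 'b c (w s1) (h s2) -> b (w h) (c s1 s2)',
    s1=patch_size, s2=patch_size
)
print(patch_tokens.shape)  # torch.Size([1, 64, 256])

With patch_size=2 the patch version has 4x fewer tokens but 4x larger tokens, so the net effect on complexity, speed, and accuracy is something to measure rather than assume.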