SpViT on Swin Transformer #4
Yes, theoretically this workflow can also be used for the Swin Transformer. At the moment we do not have enough time to work on the related implementation; we will release it in the future if we can find the time. Thanks for your support and understanding.
Thank you for the quick response! Do you plan on releasing the code used to generate the results in Table 3? I am currently implementing your method for my own Swin code and would be grateful for your help with the following comments. In terms of the implementation, the ViT code, for instance, has:
When you used Swin, did you generate a new policy for each layer, since the number of patches changes at each successive stage? Also, it is stated that the Token Selector is applied after
Lastly, just to confirm my understanding of the code: the code snippet below corresponds to the token packaging, whilst the policy tensor is used in the attention module to ensure that the softmax is only computed over tokens that are kept / repackaged? Thanks for your guidance!
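For reference, below is a minimal sketch of what a policy-masked softmax and a token-packaging step might look like in PyTorch. This is not the repository's code: the function and tensor names (`softmax_with_policy`, `package_tokens`, `policy`) are assumptions made purely for illustration. Since the number of patches in Swin shrinks after each patch-merging stage, a separate policy of the matching length would presumably be produced per stage.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch

def softmax_with_policy(attn_logits, policy, eps=1e-6):
    # attn_logits: (B, H, N, N) raw attention scores
    # policy:      (B, N, 1) keep mask, 1.0 = kept token, 0.0 = pruned token
    keep = policy.transpose(1, 2).unsqueeze(1)            # (B, 1, 1, N): mask over the key axis
    max_logits = attn_logits.max(dim=-1, keepdim=True).values
    exp = torch.exp(attn_logits - max_logits) * keep      # zero out pruned keys
    return exp / (exp.sum(dim=-1, keepdim=True) + eps)    # renormalise over kept keys only

def package_tokens(x, policy):
    # x:      (B, N, C) token features
    # policy: (B, N, 1) keep mask
    drop = 1.0 - policy
    # average the pruned tokens into a single "package" token (eps avoids division by zero)
    package = (x * drop).sum(dim=1, keepdim=True) / (drop.sum(dim=1, keepdim=True) + 1e-6)
    return torch.cat([x * policy, package], dim=1)        # kept tokens + one package token

if __name__ == "__main__":
    # Hypothetical shapes for one Swin stage; N would change per stage after patch merging.
    B, H, N, C = 2, 4, 49, 96
    x = torch.randn(B, N, C)
    policy = (torch.rand(B, N, 1) > 0.5).float()          # one keep decision per token
    attn = torch.randn(B, H, N, N)
    print(softmax_with_policy(attn, policy).shape)        # torch.Size([2, 4, 49, 49])
    print(package_tokens(x, policy).shape)                # torch.Size([2, 50, 96])
```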
Hi,
Thanks a lot for your contribution!
I was wondering if you could release the SpViT implementation for the Swin Transformer?
Many thanks!