
Add NanoDet-t with Transformer Attention Net #183

Merged: RangiLyu merged 6 commits into main on Mar 12, 2021

Conversation

RangiLyu (Owner)

Recently I did some experiments with Transformers. I added a Transformer encoder to the PAN in NanoDet-m to get spatial self-attention. After adding this module, the score on COCO increased from 20.6 to 21.7.

[Diagram: a Transformer encoder inserted into the PAN of NanoDet-m]

(Sorry for the ugly painting.)
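
Not the actual code from this PR, but a minimal sketch of the idea, assuming PyTorch's built-in nn.TransformerEncoder and a learnable positional embedding; the class name and hyperparameters below are placeholders, not necessarily the ones used in NanoDet-t:

```python
import torch
import torch.nn as nn


class SpatialSelfAttention(nn.Module):
    """Illustrative spatial self-attention block applied to one PAN level.

    Flattens the (H, W) grid into a token sequence, runs a standard
    Transformer encoder over it, and folds the result back into a
    feature map. Hyperparameters here are placeholders.
    """

    def __init__(self, channels=96, nhead=8, num_layers=1, max_hw=400):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=nhead, dim_feedforward=channels * 4
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Learnable positional embedding, one vector per spatial location.
        self.pos_embed = nn.Parameter(torch.zeros(max_hw, 1, channels))

    def forward(self, x):
        n, c, h, w = x.shape
        seq = x.flatten(2).permute(2, 0, 1)      # (H*W, N, C)
        seq = seq + self.pos_embed[: h * w]      # add positional information
        seq = self.encoder(seq)                  # (H*W, N, C)
        return seq.permute(1, 2, 0).reshape(n, c, h, w)
```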

RangiLyu merged commit b8763e7 into main on Mar 12, 2021
@zimenglan-sysu-512

It seems that the linear layer is not used.

@zimenglan-sysu-512

And why does x = x.flatten(2).permute(2, 0, 1) put the H*W dimension first?
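
For context, a hedged sketch of the shape bookkeeping that line performs; the tensor sizes are made up for illustration:

```python
import torch

# Hypothetical sizes, for illustration only.
N, C, H, W = 2, 96, 10, 10
x = torch.randn(N, C, H, W)           # (batch, channels, height, width)

# flatten(2) merges the spatial dims:       (N, C, H*W)
# permute(2, 0, 1) moves H*W to the front:  (H*W, N, C)
seq = x.flatten(2).permute(2, 0, 1)
print(seq.shape)                      # torch.Size([100, 2, 96])

# nn.TransformerEncoder expects input shaped (sequence_length, batch,
# embed_dim) by default, so each of the H*W spatial positions becomes
# one token in the sequence and attention is computed across locations.

# After the encoder, the inverse permutation restores the feature map.
x_restored = seq.permute(1, 2, 0).reshape(N, C, H, W)
```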

@zimenglan-sysu-512

Does it require torch>=1.6?

RangiLyu deleted the transformer branch on April 17, 2021 at 12:41.