# dot-product attention

A collection of self-attention modules and pre-trained backbones.

| Architecture | Top-1 Acc. (%) | Top-5 Acc. (%) | Download |
| :----------- | :------------: | :------------: | :------: |
| ResNet-50 | 76.74 | 93.47 | model \| log |
| NL-ResNet-50 | 76.55 | 92.99 | model \| log |
| A^2-ResNet-50 | 77.24 | 93.66 | model \| log |
| GloRe-ResNet-50 | 77.81 | 93.99 | model \| log |
| AA-ResNet-50 | 77.57 | 93.73 | model \| log |
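
All of the attention variants above (NL, A^2, GloRe, AA) build on the same dot-product self-attention primitive: softmax-normalized dot products between spatial positions are used as affinities to aggregate features. The sketch below shows this pattern in PyTorch in the style of a non-local block; the class name, projection widths, and layer layout are illustrative and not the repository's actual API.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Minimal dot-product self-attention over spatial positions (NL-style sketch)."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        # 1x1 convs project the feature map into query/key/value embeddings
        self.query = nn.Conv2d(channels, inter, kernel_size=1)
        self.key = nn.Conv2d(channels, inter, kernel_size=1)
        self.value = nn.Conv2d(channels, inter, kernel_size=1)
        # project the aggregated values back to the input width
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(x).flatten(2)                    # (B, C', HW)
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        # pairwise dot-product affinities between all spatial positions
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection

block = NonLocalBlock(channels=1024)
x = torch.randn(2, 1024, 14, 14)
print(block(x).shape)  # torch.Size([2, 1024, 14, 14])
```

A block like this is typically inserted after a residual stage of ResNet-50, e.g. on the 14x14 feature maps of res4, which is where the quadratic HWxHW affinity matrix is still affordable.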

Models are trained on 32 GPUs with a mini-batch size of 32 per GPU for 100 epochs. Training uses SGD with an initial learning rate of 0.4, momentum of 0.9, and weight decay of 0.0001. The learning rate is annealed following a cosine schedule, with linear warmup over the first 5 epochs and a warmup ratio of 0.25.

† initial learning rate 0.1 w/o warmup for 130 epochs, w/ label smoothing, mini-batch size 32 per GPU on 8 GPUs
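
For reference, the main schedule above can be expressed as a per-epoch learning-rate function driving a plain SGD optimizer. This is a minimal sketch, assuming "warmup ratio 0.25" means the rate ramps linearly from 0.25x the base value up to the base value over the first 5 epochs; the one-layer `model` is a stand-in for the actual backbone and data pipeline.

```python
import math
import torch

# Recipe from the paragraph above: SGD, lr 0.4, momentum 0.9, wd 1e-4,
# cosine annealing with 5 linear-warmup epochs starting at 0.25x the base lr.
model = torch.nn.Conv2d(3, 64, kernel_size=7)  # placeholder for the real backbone
epochs, warmup_epochs, warmup_ratio, base_lr = 100, 5, 0.25, 0.4

optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=1e-4)

def lr_at(epoch: int) -> float:
    """Per-epoch learning rate: linear warmup, then cosine decay to 0."""
    if epoch < warmup_epochs:
        return base_lr * (warmup_ratio + (1 - warmup_ratio) * epoch / warmup_epochs)
    progress = (epoch - warmup_epochs) / (epochs - warmup_epochs)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

for epoch in range(epochs):
    for group in optimizer.param_groups:
        group['lr'] = lr_at(epoch)
    # ... run one epoch of training here ...
```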
